Model name to use.
param model_load_fn: Callable = <function load_embedding_model>
    Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']
    Requirements to install on the hardware to run inference with the model.
param pipeline_ref: Any = None
param tags: Optional[List[str]] = None
    Tags to add to the run trace.
param verbose: bool [Optional]
    Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str
    Check the cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]
    Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO-bound runnables. Subclasses should override this method if they can batch more efficiently, e.g. if the underlying runnable uses an API which supports a batch mode.
async aembed_documents(texts: List[str]) → List[List[float]]
    Asynchronously embed search docs.
async aembed_query(text: str) → List[float]
    Asynchronously embed query text.
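The parameters above describe a model that is loaded and served on remote hardware. As a rough usage sketch (hedged: it assumes the runhouse package and access to a GPU cluster; the cluster name, instance type, and model id below are illustrative, not required values):

    import runhouse as rh
    from langchain.embeddings import SelfHostedHuggingFaceEmbeddings

    # Allocate (or attach to) remote GPU hardware via runhouse.
    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

    # The default model_load_fn (load_embedding_model) loads the model on the server.
    embeddings = SelfHostedHuggingFaceEmbeddings(
        model_name="sentence-transformers/all-mpnet-base-v2",
        hardware=gpu,
    )
    vectors = embeddings.embed_documents(["Hello, world!"])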
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult
    Run the LLM on the given prompts and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult
    Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
    Parameters
        prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
        stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
        callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
        **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
    Returns
        An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str
    Default implementation of ainvoke, which calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
    Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed.
    Parameters
        text – String input to pass to the model.
        stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
        **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
    Returns
        Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
    Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed.
    Parameters
        messages – A sequence of chat messages corresponding to a single model input.
        stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
        **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
    Returns
        Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]
    Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]
    Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]
    Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]
    Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO-bound runnables. Subclasses should override this method if they can batch more efficiently, e.g. if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]
    Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]
    The type of config this runnable accepts, specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
    Parameters
        include – A list of fields to include in the config schema.
    Returns
        A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]]
    Compute doc embeddings using a HuggingFace transformer model.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
embed_query(text: str) → List[float]
    Compute query embeddings using a HuggingFace transformer model.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
classmethod from_orm(obj: Any) → Model
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM
    Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult
    Run the LLM on the given prompts and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult
    Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, or are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
    Parameters
        prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).
        stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
        callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
        **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
    Returns
        An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]
    Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows getting an input schema for a specific configuration.
    Parameters
        config – A config to use when generating the schema.
    Returns
        A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]
    Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].
get_num_tokens(text: str) → int
    Get the number of tokens present in the text. Useful for checking if an input will fit in a model's context window.
    Parameters
        text – The string input to tokenize.
    Returns
        The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int
    Get the number of tokens in the messages. Useful for checking if an input will fit in a model's context window.
    Parameters
        messages – The message inputs to tokenize.
    Returns
        The sum of the number of tokens across the messages.
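For instance, a quick context-window check might look like the following sketch (hedged: FakeListLLM is used purely as a stand-in for any LLM subclass, and the default token counter requires the transformers package):

    from langchain.llms.fake import FakeListLLM  # stand-in for any LLM subclass

    llm = FakeListLLM(responses=["ok"])
    # get_num_tokens falls back to a GPT-2 tokenizer by default.
    n_tokens = llm.get_num_tokens("How many tokens is this sentence?")
    print(n_tokens)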
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]
    Get a pydantic model that can be used to validate output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration.
    Parameters
        config – A config to use when generating the schema.
    Returns
        A pydantic model that can be used to validate output.
get_token_ids(text: str) → List[int]
    Return the ordered ids of the tokens in a text.
    Parameters
        text – The string input to tokenize.
    Returns
        A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str
    Transform a single input into an output. Override to implement.
    Parameters
        input – The input to the runnable.
        config – A config to use when invoking the runnable. The config supports standard keys like 'tags' and 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to RunnableConfig for more details.
    Returns
        The output of the runnable.
classmethod is_lc_serializable() → bool
    Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod lc_id() → List[str]
    A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
map() → Runnable[List[Input], List[Output]]
    Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str
    Pass a single string input to the model and return a string prediction.
    Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
    Parameters
        text – String input to pass to the model.
        stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
        **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
    Returns
        Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage
    Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict.
    Parameters
        messages – A sequence of chat messages corresponding to a single model input.
        stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
        **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
    Returns
        Top model prediction as a message.
save(file_path: Union[Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to the file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]
    Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() → SerializedNotImplemented
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]
    Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]
    Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]
    Add fallbacks to a runnable, returning a new Runnable.
    Parameters
        fallbacks – A sequence of runnables to try if the original runnable fails.
        exceptions_to_handle – A tuple of exception types to handle.
    Returns
        A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]
    Bind lifecycle listeners to a Runnable, returning a new Runnable.
    on_start: Called before the runnable starts running, with the Run object.
    on_end: Called after the runnable finishes running, with the Run object.
    on_error: Called if the runnable throws an error, with the Run object.
    The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
with_retry(*, retry_if_exception_type: Tuple[Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]
    Create a new Runnable that retries the original runnable on exceptions.
    Parameters
        retry_if_exception_type – A tuple of exception types to retry on
        wait_exponential_jitter – Whether to add jitter to the wait time between retries
        stop_after_attempt – The maximum number of attempts to make before giving up
    Returns
        A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]
    Bind input and output types to a Runnable, returning a new Runnable.
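As an illustration of composing these combinators, here is a hedged sketch (FakeListLLM is used only as a stand-in model; any LLM or Runnable would work the same way):

    from langchain.llms.fake import FakeListLLM

    primary = FakeListLLM(responses=["primary answer"])
    backup = FakeListLLM(responses=["backup answer"])

    # Retry the primary model up to 3 attempts, then fall back to the backup.
    resilient = primary.with_retry(stop_after_attempt=3).with_fallbacks([backup])
    print(resilient.invoke("What is 2 + 2?"))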
property InputType: TypeAlias
    Get the input type for this runnable.
property OutputType: Type[str]
    Get the output type for this runnable.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]
    List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]
    The type of input this runnable accepts, specified as a pydantic model.
property lc_attributes: Dict
    List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]
    A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"}.
property output_schema: Type[pydantic.main.BaseModel]
    The type of output this runnable produces, specified as a pydantic model.
Examples using SelfHostedHuggingFaceEmbeddings
    Self Hosted
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
langchain.embeddings.gpt4all.GPT4AllEmbeddings
class langchain.embeddings.gpt4all.GPT4AllEmbeddings[source]
Bases: BaseModel, Embeddings
GPT4All embedding models. To use, you should have the gpt4all python package installed.
Example
    from langchain.embeddings import GPT4AllEmbeddings
    embeddings = GPT4AllEmbeddings()
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
async aembed_documents(texts: List[str]) → List[List[float]]
    Asynchronously embed search docs.
async aembed_query(text: str) → List[float]
    Asynchronously embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]
    Embed a list of documents using GPT4All.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
embed_query(text: str) → List[float][source]
    Embed a query using GPT4All.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
classmethod from_orm(obj: Any) → Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
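A short usage sketch for the embedding methods above (hedged: the gpt4all package is assumed, and the first call may download a default model if none is cached locally):

    from langchain.embeddings import GPT4AllEmbeddings

    embeddings = GPT4AllEmbeddings()
    doc_vectors = embeddings.embed_documents(["GPT4All runs locally."])
    query_vector = embeddings.embed_query("Where does GPT4All run?")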
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model
Examples using GPT4AllEmbeddings
    GPT4All
    Ollama
    Use local LLMs
    WebResearchRetriever
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.gpt4all.GPT4AllEmbeddings.html
langchain.embeddings.ollama.OllamaEmbeddings
class langchain.embeddings.ollama.OllamaEmbeddings[source]
Bases: BaseModel, Embeddings
Ollama locally runs large language models. To use, follow the instructions at https://ollama.ai/.
Example
    from langchain.embeddings import OllamaEmbeddings
    ollama_emb = OllamaEmbeddings(
        model="llama:7b",
    )
    r1 = ollama_emb.embed_documents(
        [
            "Alpha is the first letter of Greek alphabet",
            "Beta is the second letter of Greek alphabet",
        ]
    )
    r2 = ollama_emb.embed_query(
        "What is the second letter of Greek alphabet"
    )
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param base_url: str = 'http://localhost:11434'
    Base URL the model is hosted under.
param embed_instruction: str = 'passage: '
    Instruction used to embed documents.
param mirostat: Optional[int] = None
    Enable Mirostat sampling for controlling perplexity. (Default: 0; 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
param mirostat_eta: Optional[float] = None
    Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)
param mirostat_tau: Optional[float] = None
    Controls the balance between coherence and diversity of the output.
    A lower value will result in more focused and coherent text. (Default: 5.0)
param model: str = 'llama2'
    Model name to use.
param model_kwargs: Optional[dict] = None
    Other model keyword args.
param num_ctx: Optional[int] = None
    Sets the size of the context window used to generate the next token. (Default: 2048)
param num_gpu: Optional[int] = None
    The number of GPUs to use. On macOS it defaults to 1 to enable Metal support, 0 to disable.
param num_thread: Optional[int] = None
    Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores).
param query_instruction: str = 'query: '
    Instruction used to embed the query.
param repeat_last_n: Optional[int] = None
    Sets how far back the model looks to prevent repetition. (Default: 64; 0 = disabled, -1 = num_ctx)
param repeat_penalty: Optional[float] = None
    Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)
param stop: Optional[List[str]] = None
    Sets the stop tokens to use.
param temperature: Optional[float] = None
    The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)
param tfs_z: Optional[float] = None
    Tail-free sampling is used to reduce the impact of less probable tokens on the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1)
param top_k: Optional[int] = None
    Reduces the probability of generating nonsense. A higher value (e.g., 100) will give more diverse answers, while a lower value (e.g., 10) will be more conservative. (Default: 40)
param top_p: Optional[int] = None
    Works together with top_k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
async aembed_documents(texts: List[str]) → List[List[float]]
    Asynchronously embed search docs.
async aembed_query(text: str) → List[float]
    Asynchronously embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]
    Embed documents using an Ollama-deployed embedding model.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
embed_query(text: str) → List[float][source]
    Embed a query using an Ollama-deployed embedding model.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
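A hedged sketch combining a few of the generation parameters above with the embedding calls (a local Ollama server on the default port is assumed; the parameter values are illustrative, not recommendations):

    from langchain.embeddings import OllamaEmbeddings

    ollama_emb = OllamaEmbeddings(
        model="llama2",
        num_ctx=4096,    # larger context window than the 2048 default
        temperature=0.0,
    )
    vector = ollama_emb.embed_query("What is the second letter of Greek alphabet")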
classmethod from_orm(obj: Any) → Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.ollama.OllamaEmbeddings.html
langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
class langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding[source]
Bases: BaseModel, Embeddings
Aleph Alpha's asymmetric semantic embedding. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of a document and of a query for that document as similar as possible. To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example (see the usage sketch after embed_query below)
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None
    API key for the Aleph Alpha API.
param compress_to_size: Optional[int] = None
    Whether the returned embeddings should come back as the original 5120-dim vector or be compressed to 128-dim.
param contextual_control_threshold: Optional[int] = None
    Attention control parameters only apply to those tokens that have explicitly been set in the request.
param control_log_additive: bool = True
    Apply controls on prompt items by adding the log(control_factor) to attention scores.
param host: str = 'https://api.aleph-alpha.com'
    The hostname of the API host. The default one is "https://api.aleph-alpha.com".
param hosting: Optional[str] = None
    Determines in which datacenters the request may be processed. You can either set the parameter to "aleph-alpha" or omit it (defaulting to None). Not setting this value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers.
    Choose this option for maximal availability. Setting it to "aleph-alpha" allows us to only process the request in our own datacenters. Choose this option for maximal data privacy.
param model: str = 'luminous-base'
    Model name to use.
param nice: bool = False
    Setting this to True will signal to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones.
param normalize: Optional[bool] = None
    Whether returned embeddings should be normalized.
param request_timeout_seconds: int = 305
    Client timeout that will be set for HTTP requests in the requests library's API calls. The server will close all requests after 300 seconds with an internal server error.
param total_retries: int = 8
    The number of retries made in case requests fail with certain retryable status codes. If the last retry fails, a corresponding exception is raised. Note that between retries an exponential backoff is applied, starting with 0.5 s after the first retry and doubling for each retry made. So with the default setting of 8 retries, a total wait time of 63.5 s is added between the retries.
async aembed_documents(texts: List[str]) → List[List[float]]
    Asynchronously embed search docs.
async aembed_query(text: str) → List[float]
    Asynchronously embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
    Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]
    Call out to Aleph Alpha's asymmetric Document endpoint.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
embed_query(text: str) → List[float][source]
    Call out to Aleph Alpha's asymmetric Query embedding endpoint.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
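A hedged usage sketch for the two endpoints above (it assumes the aleph-alpha-client package is installed and a valid API key; the key value is elided here):

    from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding

    embeddings = AlephAlphaAsymmetricSemanticEmbedding(
        aleph_alpha_api_key="...",  # your key; left elided
    )
    doc_vectors = embeddings.embed_documents(["This is a content of the document"])
    query_vector = embeddings.embed_query("What is the content of the document?")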
classmethod from_orm(obj: Any) → Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model
Examples using AlephAlphaAsymmetricSemanticEmbedding
    Aleph Alpha
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
langchain.embeddings.self_hosted_hugging_face.load_embedding_model
langchain.embeddings.self_hosted_hugging_face.load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) → Any[source]
    Load the embedding model.
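A minimal sketch of calling this helper directly (hedged: it assumes the sentence_transformers package is installed, and the model id is illustrative):

    from langchain.embeddings.self_hosted_hugging_face import load_embedding_model

    # Loads a sentence-transformers model onto GPU device 0;
    # instruct=True would load an InstructorEmbedding model instead.
    model = load_embedding_model(
        model_id="sentence-transformers/all-mpnet-base-v2",
        instruct=False,
        device=0,
    )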
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.load_embedding_model.html
langchain.embeddings.bedrock.BedrockEmbeddings
class langchain.embeddings.bedrock.BedrockEmbeddings[source]
Bases: BaseModel, Embeddings
Bedrock embedding models.
To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Bedrock service.
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param client: Any = None
    Bedrock client.
param credentials_profile_name: Optional[str] = None
    The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param endpoint_url: Optional[str] = None
    Needed if you don't want to default to the us-east-1 endpoint.
param model_id: str = 'amazon.titan-embed-text-v1'
    Id of the model to call, e.g., amazon.titan-embed-text-v1; this is equivalent to the modelId property in the list-foundation-models API.
param model_kwargs: Optional[Dict] = None
    Keyword arguments to pass to the model.
param region_name: Optional[str] = None
    The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or region specified in ~/.aws/config if not provided here.
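A hedged instantiation sketch using the parameters above (boto3 and valid Bedrock access are assumed; the profile name is illustrative):

    from langchain.embeddings import BedrockEmbeddings

    embeddings = BedrockEmbeddings(
        credentials_profile_name="bedrock-admin",  # hypothetical profile
        region_name="us-east-1",
        model_id="amazon.titan-embed-text-v1",
    )
    vector = embeddings.embed_query("This is a content of the document")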
async aembed_documents(texts: List[str]) → List[List[float]][source]
    Asynchronously compute doc embeddings using a Bedrock model.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
async aembed_query(text: str) → List[float][source]
    Asynchronously compute query embeddings using a Bedrock model.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]
    Compute doc embeddings using a Bedrock model.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
embed_query(text: str) → List[float][source]
    Compute query embeddings using a Bedrock model.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
classmethod from_orm(obj: Any) → Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model
Examples using BedrockEmbeddings
    Bedrock
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html
langchain.embeddings.llamacpp.LlamaCppEmbeddings
class langchain.embeddings.llamacpp.LlamaCppEmbeddings[source]
Bases: BaseModel, Embeddings
llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: https://github.com/abetlen/llama-cpp-python
Example
    from langchain.embeddings import LlamaCppEmbeddings
    llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param f16_kv: bool = False
    Use half-precision for the key/value cache.
param logits_all: bool = False
    Return logits for all tokens, not just the last token.
param model_path: str [Required]
param n_batch: Optional[int] = 8
    Number of tokens to process in parallel. Should be a number between 1 and n_ctx.
param n_ctx: int = 512
    Token context window.
param n_gpu_layers: Optional[int] = None
    Number of layers to be loaded into GPU memory. Default None.
param n_parts: int = -1
    Number of parts to split the model into. If -1, the number of parts is automatically determined.
param n_threads: Optional[int] = None
    Number of threads to use. If None, the number of threads is automatically determined.
param seed: int = -1
    Seed. If -1, a random seed is used.
param use_mlock: bool = False
    Force the system to keep the model in RAM.
param verbose: bool = True
    Print verbose output to stderr.
param vocab_only: bool = False
    Only load the vocabulary, not the weights.
async aembed_documents(texts: List[str]) → List[List[float]]
    Asynchronously embed search docs.
async aembed_query(text: str) → List[float]
    Asynchronously embed query text.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choosing which fields to include, exclude and change.
    Parameters
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
embed_documents(texts: List[str]) → List[List[float]][source]
    Embed a list of documents using the Llama model.
    Parameters
        texts – The list of texts to embed.
    Returns
        List of embeddings, one for each text.
embed_query(text: str) → List[float][source]
    Embed a query using the Llama model.
    Parameters
        text – The text to embed.
    Returns
        Embeddings for the text.
classmethod from_orm(obj: Any) → Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
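A hedged usage sketch for the embedding methods above (the model path is illustrative; a local model file compatible with llama-cpp-python is assumed):

    from langchain.embeddings import LlamaCppEmbeddings

    llama = LlamaCppEmbeddings(model_path="/path/to/model.bin", n_ctx=512)
    doc_vectors = llama.embed_documents(["llama.cpp runs models locally."])
    query_vector = llama.embed_query("Where does llama.cpp run models?")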
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod parse_obj(obj: Any) → Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode
classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model
Examples using LlamaCppEmbeddings
    Llama-cpp
    Llama.cpp
Source: lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.llamacpp.LlamaCppEmbeddings.html
langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings
class langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings[source]
Bases: BaseModel, Embeddings
Custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed.
To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model.
param client: Any = None
param content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]
    The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint.
param credentials_profile_name: Optional[str] = None
    The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param endpoint_kwargs: Optional[Dict] = None
    Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
param endpoint_name: str = ''
    The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.
param model_kwargs: Optional[Dict] = None
    Keyword arguments to pass to the model.
param region_name: str = ''
    The AWS region where the Sagemaker model is deployed, e.g., us-west-2.
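Putting the required content_handler together with the endpoint parameters, a hedged sketch (the endpoint name and JSON request/response keys are illustrative and depend on how your model was deployed):

    import json
    from langchain.embeddings.sagemaker_endpoint import (
        EmbeddingsContentHandler,
        SagemakerEndpointEmbeddings,
    )

    class ContentHandler(EmbeddingsContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, inputs, model_kwargs):
            # Serialize the texts into the request format the endpoint expects.
            return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")

        def transform_output(self, output):
            # Parse the endpoint response back into a list of embeddings.
            response_json = json.loads(output.read().decode("utf-8"))
            return response_json["embedding"]

    embeddings = SagemakerEndpointEmbeddings(
        endpoint_name="my-embeddings-endpoint",  # hypothetical endpoint name
        region_name="us-west-2",
        content_handler=ContentHandler(),
    )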
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html
87ef538c66dc-2
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]¶ Compute doc embeddings using a SageMaker Inference Endpoint. Parameters texts – The list of texts to embed. chunk_size – The chunk size defines how many input texts will be grouped together as a request. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Compute query embeddings using a SageMaker inference endpoint. Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html
87ef538c66dc-3
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using SagemakerEndpointEmbeddings¶ SageMaker SageMaker Endpoint
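A minimal, hedged sketch of wiring up a content handler for this class. The endpoint name and the JSON keys "text_inputs" and "embedding" are assumptions that depend on how the model was deployed; adjust them to your endpoint's request/response contract:

import json
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the batch of texts into the JSON body the endpoint expects.
        return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint response back into a list of embedding vectors.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embedding"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embedding-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    content_handler=ContentHandler(),
)
vectors = embeddings.embed_documents(["doc one", "doc two"], chunk_size=64)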
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html
dfc3780b5b63-0
langchain.embeddings.embaas.EmbaasEmbeddings¶ class langchain.embeddings.embaas.EmbaasEmbeddings[source]¶ Bases: BaseModel, Embeddings Embaas’s embedding service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Initialise with default model and instruction from langchain.embeddings import EmbaasEmbeddings emb = EmbaasEmbeddings() # Initialise with custom model and instruction from langchain.embeddings import EmbaasEmbeddings emb_model = "instructor-large" emb_inst = "Represent the Wikipedia document for retrieval" emb = EmbaasEmbeddings( model=emb_model, instruction=emb_inst ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_url: str = 'https://api.embaas.io/v1/embeddings/'¶ The URL for the embaas embeddings API. param embaas_api_key: Optional[str] = None¶ param instruction: Optional[str] = None¶ Instruction used for domain-specific embeddings. param model: str = 'e5-large-v2'¶ The model used for embeddings. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html
dfc3780b5b63-1
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Get embeddings for a list of texts. Parameters texts – The list of texts to get embeddings for. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Get embeddings for a single text. Parameters text – The text to get embeddings for. Returns List of embeddings. classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html
dfc3780b5b63-2
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using EmbaasEmbeddings¶ Embaas
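Building on the constructor examples above, a short usage sketch (assumes EMBAAS_API_KEY is set in the environment; the texts are placeholders):

from langchain.embeddings import EmbaasEmbeddings

emb = EmbaasEmbeddings(model="e5-large-v2")
doc_vectors = emb.embed_documents(["First document.", "Second document."])
query_vector = emb.embed_query("What does the first document say?")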
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html
607b00b8f2dd-0
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings¶ class langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings[source]¶ Bases: SelfHostedHuggingFaceEmbeddings HuggingFace InstructEmbedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings import runhouse as rh model_name = "hkunlp/instructor-large" gpu = rh.cluster(name='rh-a10x', instance_type='A100:1') hf = SelfHostedHuggingFaceInstructEmbeddings( model_name=model_name, hardware=gpu) Initialize the remote inference function. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param embed_instruction: str = 'Represent the document for retrieval: '¶ Instruction to use for embedding documents. param hardware: Any = None¶ Remote hardware to send the inference function to. param inference_fn: Callable = <function _embed_documents>¶ Inference function to extract the embeddings. param inference_kwargs: Any = None¶ Any kwargs to pass to the model’s inference function. param load_fn_kwargs: Optional[dict] = None¶ Keyword arguments to pass to the model load function. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-1
Metadata to add to the run trace. param model_id: str = 'hkunlp/instructor-large'¶ Model name to use. param model_load_fn: Callable = <function load_embedding_model>¶ Function to load the model remotely on the server. param model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']¶ Requirements to install on hardware to inference the model. param pipeline_ref: Any = None¶ param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶ Instruction to use for embedding query. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-2
Asynchronous Embed query text. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-3
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-5
input is still being generated. batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-6
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a HuggingFace instruct model. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Compute query embeddings using a HuggingFace instruct model. Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶ Init the SelfHostedPipeline from a pipeline object or string.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-7
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-8
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows getting an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-9
Get a pydantic model that can be used to validate output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable?
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-10
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-11
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: llm.save(file_path=”path/llm.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-12
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-13
fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
607b00b8f2dd-14
List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using SelfHostedHuggingFaceInstructEmbeddings¶ Self Hosted
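Continuing the docstring example above, a hedged sketch of how the two instruction fields are applied (assumes the hf instance and runhouse cluster from that example):

docs = ["LangChain integrates many embedding providers."]
doc_vectors = hf.embed_documents(docs)  # prepends embed_instruction to each text
query_vector = hf.embed_query("Which providers are integrated?")  # prepends query_instruction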
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings.html
bd1a769637f0-0
langchain.embeddings.modelscope_hub.ModelScopeEmbeddings¶ class langchain.embeddings.modelscope_hub.ModelScopeEmbeddings[source]¶ Bases: BaseModel, Embeddings ModelScopeHub embedding models. To use, you should have the modelscope python package installed. Example from langchain.embeddings import ModelScopeEmbeddings model_id = "damo/nlp_corom_sentence-embedding_english-base" embed = ModelScopeEmbeddings(model_id=model_id, model_revision="v1.0.0") Initialize the modelscope param embed: Any = None¶ param model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'¶ Model name to use. param model_revision: Optional[str] = None¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.modelscope_hub.ModelScopeEmbeddings.html
bd1a769637f0-1
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a modelscope embedding model. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Compute query embeddings using a modelscope embedding model. Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.modelscope_hub.ModelScopeEmbeddings.html
bd1a769637f0-2
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using ModelScopeEmbeddings¶ ModelScope
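A minimal usage sketch following the docstring example (assumes the modelscope package is installed and the model weights can be downloaded):

from langchain.embeddings import ModelScopeEmbeddings

embed = ModelScopeEmbeddings(
    model_id="damo/nlp_corom_sentence-embedding_english-base",
    model_revision="v1.0.0",
)
doc_vectors = embed.embed_documents(["hello world", "goodbye world"])
query_vector = embed.embed_query("hello")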
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.modelscope_hub.ModelScopeEmbeddings.html
712622edf2c4-0
langchain.embeddings.azure_openai.AzureOpenAIEmbeddings¶ class langchain.embeddings.azure_openai.AzureOpenAIEmbeddings[source]¶ Bases: OpenAIEmbeddings Azure OpenAI Embeddings API. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_special: Union[Literal['all'], Set[str]] = {}¶ param azure_ad_token: Union[str, None] = None¶ Your Azure Active Directory token. Automatically inferred from env var AZURE_OPENAI_AD_TOKEN if not provided. For more: https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id. param azure_ad_token_provider: Union[str, None] = None¶ A function that returns an Azure Active Directory token. Will be invoked on every request. param azure_endpoint: Union[str, None] = None¶ Your Azure endpoint, including the resource. Automatically inferred from env var AZURE_OPENAI_ENDPOINT if not provided. Example: https://example-resource.azure.openai.com/ param chunk_size: int = 1000¶ Maximum number of texts to embed in each batch param default_headers: Union[Mapping[str, str], None] = None¶ param default_query: Union[Mapping[str, object], None] = None¶ param deployment: Optional[str] = None (alias 'azure_deployment')¶ A model deployment. If given sets the base client URL to include /deployments/{azure_deployment}. Note: this means you won’t be able to use non-deployment endpoints. param disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all'¶ param embedding_ctx_length: int = 8191¶ The maximum number of tokens to embed at once.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.azure_openai.AzureOpenAIEmbeddings.html
712622edf2c4-1
param headers: Any = None¶ param http_client: Union[Any, None] = None¶ Optional httpx.Client. param max_retries: int = 2¶ Maximum number of retries to make when generating. param model: str = 'text-embedding-ada-002'¶ param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param openai_api_base: Optional[str] = None (alias 'base_url')¶ Base URL path for API requests, leave blank if not using a proxy or service emulator. param openai_api_key: Union[str, None] = None (alias 'api_key')¶ Automatically inferred from env var AZURE_OPENAI_API_KEY if not provided. param openai_api_type: Optional[str] = None¶ param openai_api_version: Optional[str] = None (alias 'api_version')¶ Automatically inferred from env var OPENAI_API_VERSION if not provided. param openai_organization: Optional[str] = None (alias 'organization')¶ Automatically inferred from env var OPENAI_ORG_ID if not provided. param openai_proxy: Optional[str] = None¶ param request_timeout: Optional[Union[float, Tuple[float, float], Any]] = None (alias 'timeout')¶ Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or None. param show_progress_bar: bool = False¶ Whether to show a progress bar when embedding. param skip_empty: bool = False¶ Whether to skip empty strings when embedding or raise an error. Defaults to not skipping. param tiktoken_model_name: Optional[str] = None¶ The model name to pass to tiktoken when using this class.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.azure_openai.AzureOpenAIEmbeddings.html
712622edf2c4-2
Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. param validate_base_url: bool = True¶ async aembed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]]¶ Call out to OpenAI’s embedding endpoint async for embedding search docs. Parameters texts – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. async aembed_query(text: str) → List[float]¶ Call out to OpenAI’s embedding endpoint async for embedding query text. Parameters text – The text to embed. Returns Embedding for the text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.azure_openai.AzureOpenAIEmbeddings.html
712622edf2c4-3
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]]¶ Call out to OpenAI’s embedding endpoint for embedding search docs. Parameters texts – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. embed_query(text: str) → List[float]¶ Call out to OpenAI’s embedding endpoint for embedding query text. Parameters
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.azure_openai.AzureOpenAIEmbeddings.html
712622edf2c4-4
text – The text to embed. Returns Embedding for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
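Since this class has no usage example in its docstring, here is a hedged construction sketch. The deployment name and API version are assumptions; substitute the values from your Azure OpenAI resource:

import os
from langchain.embeddings import AzureOpenAIEmbeddings

os.environ["AZURE_OPENAI_API_KEY"] = "..."  # or pass api_key= directly
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://example-resource.azure.openai.com/"

embeddings = AzureOpenAIEmbeddings(
    azure_deployment="my-ada-002-deployment",  # hypothetical deployment name
    openai_api_version="2023-05-15",           # assumption: pick your resource's version
)
vectors = embeddings.embed_documents(["hello"])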
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.azure_openai.AzureOpenAIEmbeddings.html
1cc508a06087-0
langchain.embeddings.nlpcloud.NLPCloudEmbeddings¶ class langchain.embeddings.nlpcloud.NLPCloudEmbeddings[source]¶ Bases: BaseModel, Embeddings NLP Cloud embedding models. To use, you should have the nlpcloud python package installed Example from langchain.embeddings import NLPCloudEmbeddings embeddings = NLPCloudEmbeddings() Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param gpu: bool [Required]¶ param model_name: str [Required]¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.nlpcloud.NLPCloudEmbeddings.html
1cc508a06087-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of documents using NLP Cloud. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Embed a query using NLP Cloud. Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.nlpcloud.NLPCloudEmbeddings.html
1cc508a06087-2
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using NLPCloudEmbeddings¶ NLP Cloud NLPCloud
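Expanding the docstring example slightly (assumes NLPCLOUD_API_KEY is set in the environment; the model name is one of NLP Cloud's sentence-embedding models and is an assumption):

import os
from langchain.embeddings import NLPCloudEmbeddings

os.environ["NLPCLOUD_API_KEY"] = "your-api-key"
embeddings = NLPCloudEmbeddings(model_name="paraphrase-multilingual-mpnet-base-v2", gpu=False)
doc_vectors = embeddings.embed_documents(["hello"])
query_vector = embeddings.embed_query("hello")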
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.nlpcloud.NLPCloudEmbeddings.html
e907bdf892e9-0
langchain.embeddings.vertexai.VertexAIEmbeddings¶ class langchain.embeddings.vertexai.VertexAIEmbeddings[source]¶ Bases: _VertexAICommon, Embeddings Google Cloud VertexAI embedding models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param credentials: Any = None¶ The default custom credentials (google.auth.credentials.Credentials) to use param location: str = 'us-central1'¶ The default location to use when making API calls. param max_output_tokens: int = 128¶ Token limit determines the maximum amount of text output from one prompt. param max_retries: int = 6¶ The maximum number of retries to make when generating. param model_name: str = 'textembedding-gecko'¶ Underlying model name. param n: int = 1¶ How many completions to generate for each prompt. param project: Optional[str] = None¶ The default GCP project to use when making Vertex API calls. param request_parallelism: int = 5¶ The amount of parallelism allowed for requests issued to VertexAI models. param stop: Optional[List[str]] = None¶ Optional list of stop words to use when generating. param streaming: bool = False¶ Whether to stream the results or not. param temperature: float = 0.0¶ Sampling temperature, it controls the degree of randomness in token selection. param top_k: int = 40¶ How the model selects tokens for output: the next token is selected from among the top-k most probable tokens. param top_p: float = 0.95¶ Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html
e907bdf892e9-1
async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str], batch_size: int = 5) → List[List[float]][source]¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html
e907bdf892e9-2
Embed a list of strings. Vertex AI currently sets a max batch size of 5 strings. Parameters texts – List[str] The list of strings to embed. batch_size – [int] The batch size of embeddings to send to the model Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Embed a text. Parameters text – The text to embed. Returns Embedding for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html
e907bdf892e9-3
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property is_codey_model: bool¶ task_executor: ClassVar[Optional[Executor]] = FieldInfo(exclude=True, extra={})¶ Examples using VertexAIEmbeddings¶ Google Vertex AI PaLM
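A hedged usage sketch (assumes Google Cloud credentials are already configured, e.g. via gcloud auth application-default login; the project id is a placeholder):

from langchain.embeddings import VertexAIEmbeddings

embeddings = VertexAIEmbeddings(model_name="textembedding-gecko", project="my-gcp-project")
# Vertex AI caps a single request at 5 texts, so keep batch_size <= 5.
vectors = embeddings.embed_documents(["a", "b", "c"], batch_size=5)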
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.vertexai.VertexAIEmbeddings.html
771ad3121ec3-0
langchain.embeddings.openai.async_embed_with_retry¶ async langchain.embeddings.openai.async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) → Any[source]¶ Use tenacity to retry the embedding call.
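To illustrate the pattern (a simplified sketch, not the library's exact implementation; the backoff bounds and the call argument are assumptions):

from tenacity import AsyncRetrying, stop_after_attempt, wait_exponential

async def embed_with_retries(call, max_retries: int = 6):
    # `call` is any zero-argument coroutine factory performing the embedding request.
    async for attempt in AsyncRetrying(
        stop=stop_after_attempt(max_retries),
        wait=wait_exponential(multiplier=1, min=4, max=10),
        reraise=True,
    ):
        with attempt:
            return await call()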
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.async_embed_with_retry.html
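A rough sketch of calling this retry helper directly; the keyword arguments are forwarded to the underlying OpenAI embeddings client, and the keyword name used below (input) is an assumption rather than part of the documented signature:

import asyncio
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings.openai import async_embed_with_retry

async def main() -> None:
    embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
    # kwargs are passed through to the embeddings client; names are assumed.
    response = await async_embed_with_retry(embeddings, input=["Hello, world!"])
    print(response)

asyncio.run(main())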
7676447b2c68-0
langchain.embeddings.dashscope.DashScopeEmbeddings¶ class langchain.embeddings.dashscope.DashScopeEmbeddings[source]¶ Bases: BaseModel, Embeddings DashScope embedding models. To use, you should have the dashscope python package installed, and the environment variable DASHSCOPE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import DashScopeEmbeddings embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key") Example import os os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" from langchain.embeddings.dashscope import DashScopeEmbeddings embeddings = DashScopeEmbeddings( model="text-embedding-v1", ) text = "This is a test query." query_result = embeddings.embed_query(text) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param client: Any = None¶ The DashScope client. param dashscope_api_key: Optional[str] = None¶ param max_retries: int = 5¶ Maximum number of retries to make when generating. param model: str = 'text-embedding-v1'¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
7676447b2c68-1
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Call out to DashScope’s embedding endpoint for embedding search docs. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Call out to DashScope’s embedding endpoint for embedding query text.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
7676447b2c68-2
Call out to DashScope’s embedding endpoint for embedding query text. Parameters text – The text to embed. Returns Embedding for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
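A short sketch tying together the embed_documents and embed_query calls documented above, assuming the dashscope package is installed and DASHSCOPE_API_KEY is set in the environment:

from langchain.embeddings import DashScopeEmbeddings

embeddings = DashScopeEmbeddings(model="text-embedding-v1")
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])  # one vector per text
query_vector = embeddings.embed_query("a question about doc one")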
7676447b2c68-3
classmethod validate(value: Any) → Model¶ Examples using DashScopeEmbeddings¶ DashScope DashVector
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
bbe277246433-0
langchain.embeddings.fake.DeterministicFakeEmbedding¶ class langchain.embeddings.fake.DeterministicFakeEmbedding[source]¶ Bases: Embeddings, BaseModel Fake embedding model that always returns the same embedding vector for the same text. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param size: int [Required]¶ The size of the embedding vector. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.fake.DeterministicFakeEmbedding.html
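Because the vector depends only on the input text, this class is handy in tests; a minimal sketch:

from langchain.embeddings import DeterministicFakeEmbedding

embedder = DeterministicFakeEmbedding(size=8)  # size is the only required field
first = embedder.embed_query("hello")
second = embedder.embed_query("hello")
assert first == second  # same text, same vector, no network calls
assert len(first) == 8  # vector length matches the size field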
bbe277246433-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed search docs. embed_query(text: str) → List[float][source]¶ Embed query text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.fake.DeterministicFakeEmbedding.html
bbe277246433-2
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.fake.DeterministicFakeEmbedding.html
b06c4a9f6146-0
langchain.embeddings.octoai_embeddings.OctoAIEmbeddings¶ class langchain.embeddings.octoai_embeddings.OctoAIEmbeddings[source]¶ Bases: BaseModel, Embeddings OctoAI Compute Service embedding models. The environment variable OCTOAI_API_TOKEN should be set with your API token, or it can be passed as a named parameter to the constructor. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param embed_instruction: str = 'Represent this input: '¶ Instruction to use for embedding documents. param endpoint_url: Optional[str] = None¶ Endpoint URL to use. param model_kwargs: Optional[dict] = None¶ Keyword arguments to pass to the model. param octoai_api_token: Optional[str] = None¶ OCTOAI API Token param query_instruction: str = 'Represent the question for retrieving similar documents: '¶ Instruction to use for embedding query. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.octoai_embeddings.OctoAIEmbeddings.html
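A minimal construction sketch; the endpoint URL below is a placeholder, and the token may instead come from the OCTOAI_API_TOKEN environment variable:

from langchain.embeddings.octoai_embeddings import OctoAIEmbeddings

embeddings = OctoAIEmbeddings(
    endpoint_url="https://instructor-large.example.octoai.run/predict",  # placeholder URL
    octoai_api_token="my-octoai-token",  # placeholder; or set OCTOAI_API_TOKEN
)
query_vector = embeddings.embed_query("What is an embedding?")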
b06c4a9f6146-1
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute document embeddings using an OctoAI instruct model. embed_query(text: str) → List[float][source]¶ Compute query embedding using an OctoAI instruct model. classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.octoai_embeddings.OctoAIEmbeddings.html
b06c4a9f6146-2
classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.octoai_embeddings.OctoAIEmbeddings.html
101e3a23f95d-0
langchain.embeddings.ernie.ErnieEmbeddings¶ class langchain.embeddings.ernie.ErnieEmbeddings[source]¶ Bases: BaseModel, Embeddings Ernie Embeddings V1 embedding models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param access_token: Optional[str] = None¶ param chunk_size: int = 16¶ param ernie_api_base: Optional[str] = None¶ param ernie_client_id: Optional[str] = None¶ param ernie_client_secret: Optional[str] = None¶ async aembed_documents(texts: List[str]) → List[List[float]][source]¶ Asynchronous Embed search docs. Parameters texts – The list of texts to embed Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text: str) → List[float][source]¶ Asynchronous Embed query text. Parameters text – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.ernie.ErnieEmbeddings.html
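A minimal sketch of constructing the model from explicit credentials; the id and secret values below are placeholders:

from langchain.embeddings.ernie import ErnieEmbeddings

embeddings = ErnieEmbeddings(
    ernie_client_id="my-client-id",          # placeholder
    ernie_client_secret="my-client-secret",  # placeholder
    chunk_size=16,  # texts are sent to the API in chunks of this size
)
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])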
101e3a23f95d-1
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed search docs. Parameters texts – The list of texts to embed Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text: str) → List[float][source]¶ Embed query text. Parameters text – The text to embed. Returns Embeddings for the text. Return type List[float] classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.ernie.ErnieEmbeddings.html
101e3a23f95d-2
Return type List[float] classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.ernie.ErnieEmbeddings.html
f581cbbcf1a6-0
langchain.embeddings.xinference.XinferenceEmbeddings¶ class langchain.embeddings.xinference.XinferenceEmbeddings(server_url: Optional[str] = None, model_uid: Optional[str] = None)[source]¶ Xinference embedding models. To use, you should have the xinference library installed: pip install xinference Check out: https://github.com/xorbitsai/inference To run, you need to start a Xinference supervisor on one server and Xinference workers on the other servers. Example To start a local instance of Xinference, run $ xinference You can also deploy Xinference in a distributed cluster. Here are the steps: Starting the supervisor: $ xinference-supervisor Starting the worker: $ xinference-worker Then, launch a model using the command line interface (CLI). Example: $ xinference launch -n orca -s 3 -q q4_0 It will return a model UID. Then you can use Xinference embeddings with LangChain. Example: from langchain.embeddings import XinferenceEmbeddings xinference = XinferenceEmbeddings( server_url="http://0.0.0.0:9997", model_uid = {model_uid} # replace model_uid with the model UID returned from launching the model ) Attributes client server_url URL of the xinference server model_uid UID of the launched model Methods __init__([server_url, model_uid]) aembed_documents(texts) Asynchronous Embed search docs. aembed_query(text) Asynchronous Embed query text. embed_documents(texts) Embed a list of documents using Xinference. embed_query(text) Embed a query using Xinference.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.xinference.XinferenceEmbeddings.html
f581cbbcf1a6-1
embed_query(text) Embed a query using Xinference. __init__(server_url: Optional[str] = None, model_uid: Optional[str] = None)[source]¶ async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of documents using Xinference. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Embed a query using Xinference. Parameters text – The text to embed. Returns Embeddings for the text. Examples using XinferenceEmbeddings¶ Xorbits inference (Xinference)
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.xinference.XinferenceEmbeddings.html
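Continuing the launch example above, a minimal embedding sketch; the server URL and model UID below are placeholders for whatever `xinference launch` returned:

from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid="my-model-uid",  # placeholder: use the UID printed at launch
)
doc_vectors = xinference.embed_documents(["doc one", "doc two"])
query_vector = xinference.embed_query("a question")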
0a012726cba9-0
langchain.embeddings.cache.CacheBackedEmbeddings¶ class langchain.embeddings.cache.CacheBackedEmbeddings(underlying_embeddings: Embeddings, document_embedding_store: BaseStore[str, List[float]])[source]¶ Interface for caching results from embedding models. The interface works with any store that implements the abstract store interface, accepting keys of type str and values of type List[float]. If need be, the interface can be extended to accept other implementations of the value serializer and deserializer, as well as the key encoder. Examples Initialize the embedder. Parameters underlying_embeddings – the embedder to use for computing embeddings. document_embedding_store – The store to use for caching document embeddings. Methods __init__(underlying_embeddings, ...) Initialize the embedder. aembed_documents(texts) Asynchronous Embed search docs. aembed_query(text) Asynchronous Embed query text. embed_documents(texts) Embed a list of texts. embed_query(text) Embed query text. from_bytes_store(underlying_embeddings, ...) On-ramp that adds the necessary serialization and encoding to the store. __init__(underlying_embeddings: Embeddings, document_embedding_store: BaseStore[str, List[float]]) → None[source]¶ Initialize the embedder. Parameters underlying_embeddings – the embedder to use for computing embeddings. document_embedding_store – The store to use for caching document embeddings. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed a list of texts. The method first checks the cache for the embeddings.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html
0a012726cba9-1
Embed a list of texts. The method first checks the cache for the embeddings. If the embeddings are not found, the method uses the underlying embedder to embed the documents and stores the results in the cache. Parameters texts – A list of texts to embed. Returns A list of embeddings for the given texts. embed_query(text: str) → List[float][source]¶ Embed query text. This method does not support caching at the moment. Support for caching queries is easy to implement, but it might make sense to hold off until the most common usage patterns emerge. If the cache has an eviction policy, we may need to be a bit more careful about sharing the cache between documents and queries. Generally, one is OK evicting query caches, but document caches should be kept. Parameters text – The text to embed. Returns The embedding for the given text. classmethod from_bytes_store(underlying_embeddings: Embeddings, document_embedding_cache: BaseStore[str, bytes], *, namespace: str = '') → CacheBackedEmbeddings[source]¶ On-ramp that adds the necessary serialization and encoding to the store. Parameters underlying_embeddings – The embedder to use for embedding. document_embedding_cache – The cache to use for storing document embeddings. namespace – The namespace to use for the document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used. Examples using CacheBackedEmbeddings¶ Caching
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cache.CacheBackedEmbeddings.html
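A minimal sketch of the from_bytes_store on-ramp, assuming a local file store and an OpenAI embedder; namespacing by model name avoids cache collisions, as recommended above:

from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore

underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding_cache/")
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)
# The first call computes and caches; repeating it is served from the store.
vectors = cached_embedder.embed_documents(["doc one", "doc two"])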
6c9169a12d15-0
langchain.embeddings.huggingface.HuggingFaceInferenceAPIEmbeddings¶ class langchain.embeddings.huggingface.HuggingFaceInferenceAPIEmbeddings[source]¶ Bases: BaseModel, Embeddings Embed texts using the HuggingFace API. Requires a HuggingFace Inference API key and a model name. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_key: str [Required]¶ Your API key for the HuggingFace Inference API. param model_name: str = 'sentence-transformers/all-MiniLM-L6-v2'¶ The name of the model to use for text embeddings. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceInferenceAPIEmbeddings.html
6c9169a12d15-1
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Get the embeddings for a list of texts. Parameters texts (Documents) – A list of texts to get embeddings for. Returns Embedded texts as List[List[float]], where each inner List[float] corresponds to a single input text. Example from langchain.embeddings import HuggingFaceInferenceAPIEmbeddings hf_embeddings = HuggingFaceInferenceAPIEmbeddings( api_key="your_api_key", model_name="sentence-transformers/all-MiniLM-L6-v2" ) texts = ["Hello, world!", "How are you?"] hf_embeddings.embed_documents(texts) embed_query(text: str) → List[float][source]¶ Compute query embeddings using a HuggingFace transformer model. Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceInferenceAPIEmbeddings.html
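Complementing the embed_documents example above, a minimal embed_query sketch; the API key value is a placeholder:

from langchain.embeddings import HuggingFaceInferenceAPIEmbeddings

hf_embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key="your_api_key",  # placeholder
    model_name="sentence-transformers/all-MiniLM-L6-v2",
)
query_vector = hf_embeddings.embed_query("How are you?")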
6c9169a12d15-2
Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using HuggingFaceInferenceAPIEmbeddings¶ Hugging Face
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceInferenceAPIEmbeddings.html
0c98078e7e0b-0
langchain.embeddings.minimax.MiniMaxEmbeddings¶ class langchain.embeddings.minimax.MiniMaxEmbeddings[source]¶ Bases: BaseModel, Embeddings MiniMax’s embedding service. To use, you should have the environment variables MINIMAX_GROUP_ID and MINIMAX_API_KEY set with your credentials, or pass them as named parameters to the constructor. Example from langchain.embeddings import MiniMaxEmbeddings embeddings = MiniMaxEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param embed_type_db: str = 'db'¶ Embed type used for embed_documents. param embed_type_query: str = 'query'¶ Embed type used for embed_query. param endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'¶ Endpoint URL to use. param minimax_api_key: Optional[str] = None¶ API key for the MiniMax API. param minimax_group_id: Optional[str] = None¶ Group ID for the MiniMax API. param model: str = 'embo-01'¶ Embeddings model name to use. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html
0c98078e7e0b-1
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed(texts: List[str], embed_type: str) → List[List[float]][source]¶ embed_documents(texts: List[str]) → List[List[float]][source]¶ Embed documents using a MiniMax embedding endpoint. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Embed a query using a MiniMax embedding endpoint. Parameters text – The text to embed.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html
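A minimal sketch of the two call paths above: embed_documents sends the 'db' embed type and embed_query the 'query' type, per the embed_type_db and embed_type_query fields; MINIMAX_GROUP_ID and MINIMAX_API_KEY are assumed to be set in the environment:

from langchain.embeddings import MiniMaxEmbeddings

embeddings = MiniMaxEmbeddings()
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])   # embed_type 'db'
query_vector = embeddings.embed_query("a question about doc one")  # embed_type 'query'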
0c98078e7e0b-2
Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using MiniMaxEmbeddings¶ MiniMax Minimax
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html
70b1faedc4f3-0
langchain.embeddings.minimax.embed_with_retry¶ langchain.embeddings.minimax.embed_with_retry(embeddings: MiniMaxEmbeddings, *args: Any, **kwargs: Any) → Any[source]¶ Use tenacity to retry the embedding call.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.embed_with_retry.html
850daaa2347c-0
langchain.embeddings.huggingface.HuggingFaceBgeEmbeddings¶ class langchain.embeddings.huggingface.HuggingFaceBgeEmbeddings[source]¶ Bases: BaseModel, Embeddings HuggingFace BGE sentence_transformers embedding models. To use, you should have the sentence_transformers python package installed. Example from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Initialize the sentence_transformer. param cache_folder: Optional[str] = None¶ Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable. param encode_kwargs: Dict[str, Any] [Optional]¶ Keyword arguments to pass when calling the encode method of the model. param model_kwargs: Dict[str, Any] [Optional]¶ Keyword arguments to pass to the model. param model_name: str = 'BAAI/bge-large-en'¶ Model name to use. param query_instruction: str = 'Represent this question for searching relevant passages: '¶ Instruction to use for embedding query. async aembed_documents(texts: List[str]) → List[List[float]]¶ Asynchronous Embed search docs. async aembed_query(text: str) → List[float]¶ Asynchronous Embed query text. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
850daaa2347c-1
Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. embed_documents(texts: List[str]) → List[List[float]][source]¶ Compute doc embeddings using a HuggingFace transformer model. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]¶ Compute query embeddings using a HuggingFace transformer model. Parameters text – The text to embed. Returns Embeddings for the text.
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceBgeEmbeddings.html
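A minimal end-to-end sketch restating the constructor example above together with the documented embed calls; normalize_embeddings=True is the common choice when the vectors feed cosine-similarity search:

from langchain.embeddings import HuggingFaceBgeEmbeddings

hf = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-large-en",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
)
doc_vectors = hf.embed_documents(["doc one", "doc two"])
query_vector = hf.embed_query("a question about doc one")  # query_instruction is prepended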
850daaa2347c-2
Parameters text – The text to embed. Returns Embeddings for the text. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using HuggingFaceBgeEmbeddings¶ BGE on Hugging Face
lang/api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceBgeEmbeddings.html