0d6e7b01bb24-1 | load_schema()
Load the graph schema information.
query(query)
Query the graph.
update(query)
Update the graph.
__init__(source_file: Optional[str] = None, serialization: Optional[str] = 'ttl', query_endpoint: Optional[str] = None, update_endpoint: Optional[str] = None, standard: Optional[str] = 'rdf', local_copy: Optional[str] = None) → None[source]¶
Set up the RDFlib graph
Parameters
source_file – either a path for a local file or a URL
serialization – serialization of the input
query_endpoint – SPARQL endpoint for queries, read access
update_endpoint – SPARQL endpoint for UPDATE queries, write access
standard – RDF, RDFS, or OWL
local_copy – new local copy for storing changes
load_schema() → None[source]¶
Load the graph schema information.
query(query: str) → List[rdflib.query.ResultRow][source]¶
Query the graph.
update(query: str) → None[source]¶
Update the graph.
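A minimal usage sketch, assuming a local Turtle file named example.ttl (the file name is a placeholder); the same wrapper also accepts SPARQL query/update endpoints instead of a local file:
from langchain.graphs import RdfGraph
graph = RdfGraph(source_file="example.ttl", serialization="ttl", standard="rdf")
graph.load_schema()
# Run a read-only SPARQL query; each result is an rdflib ResultRow
for row in graph.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"):
    print(row)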
Examples using RdfGraph¶
GraphSparqlQAChain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.rdf_graph.RdfGraph.html |
6eef03a5a44e-0 | langchain.graphs.graph_document.GraphDocument¶
class langchain.graphs.graph_document.GraphDocument[source]¶
Bases: Serializable
Represents a graph document consisting of nodes and relationships.
nodes¶
A list of nodes in the graph.
Type
List[Node]
relationships¶
A list of relationships in the graph.
Type
List[Relationship]
source¶
The document from which the graph information is derived.
Type
Document
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param nodes: List[langchain.graphs.graph_document.Node] [Required]¶
param relationships: List[langchain.graphs.graph_document.Relationship] [Required]¶
param source: langchain.schema.document.Document [Required]¶
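An illustrative construction, assuming the Node and Relationship classes documented in this module and the standard langchain.schema Document; node ids and property values here are made up:
from langchain.graphs.graph_document import GraphDocument, Node, Relationship
from langchain.schema import Document
alice = Node(id="alice", type="Person")
acme = Node(id="acme", type="Company", properties={"industry": "software"})
graph_doc = GraphDocument(
    nodes=[alice, acme],
    relationships=[Relationship(source=alice, target=acme, type="WORKS_AT")],
    source=Document(page_content="Alice works at Acme."),
)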
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.GraphDocument.html |
6eef03a5a44e-1 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.GraphDocument.html |
6eef03a5a44e-2 | A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”} | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.GraphDocument.html |
8513d83af50e-0 | langchain.graphs.falkordb_graph.FalkorDBGraph¶
class langchain.graphs.falkordb_graph.FalkorDBGraph(database: str, host: str = 'localhost', port: int = 6379, username: Optional[str] = None, password: Optional[str] = None, ssl: bool = False)[source]¶
FalkorDB wrapper for graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new FalkorDB graph wrapper instance.
Attributes
get_schema
Returns the schema of the FalkorDB database
get_structured_schema
Returns the structured schema of the Graph
Methods
__init__(database[, host, port, username, ...])
Create a new FalkorDB graph wrapper instance.
add_graph_documents(graph_documents[, ...])
Take GraphDocument as input and use it to construct a graph.
query(query[, params])
Query FalkorDB database.
refresh_schema()
Refreshes the schema of the FalkorDB database
__init__(database: str, host: str = 'localhost', port: int = 6379, username: Optional[str] = None, password: Optional[str] = None, ssl: bool = False) → None[source]¶
Create a new FalkorDB graph wrapper instance. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.falkordb_graph.FalkorDBGraph.html |
8513d83af50e-1 | Create a new FalkorDB graph wrapper instance.
add_graph_documents(graph_documents: List[GraphDocument], include_source: bool = False) → None[source]¶
Take GraphDocument as input and use it to construct a graph.
query(query: str, params: dict = {}) → List[Dict[str, Any]][source]¶
Query FalkorDB database.
refresh_schema() → None[source]¶
Refreshes the schema of the FalkorDB database
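A brief sketch, assuming a FalkorDB server reachable on localhost:6379 and a graph named "movies" (both placeholders):
from langchain.graphs import FalkorDBGraph
graph = FalkorDBGraph(database="movies")
graph.refresh_schema()
print(graph.get_schema)
rows = graph.query("MATCH (n) RETURN n LIMIT 5")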
Examples using FalkorDBGraph¶
FalkorDBQAChain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.falkordb_graph.FalkorDBGraph.html |
f1caadfcdd6b-0 | langchain.graphs.neo4j_graph.Neo4jGraph¶
class langchain.graphs.neo4j_graph.Neo4jGraph(url: Optional[str] = None, username: Optional[str] = None, password: Optional[str] = None, database: str = 'neo4j')[source]¶
Neo4j wrapper for graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new Neo4j graph wrapper instance.
Attributes
get_schema
Returns the schema of the Graph
get_structured_schema
Returns the structured schema of the Graph
Methods
__init__([url, username, password, database])
Create a new Neo4j graph wrapper instance.
add_graph_documents(graph_documents[, ...])
Take GraphDocument as input and use it to construct a graph.
query(query[, params])
Query Neo4j database.
refresh_schema()
Refreshes the Neo4j graph schema information.
__init__(url: Optional[str] = None, username: Optional[str] = None, password: Optional[str] = None, database: str = 'neo4j') → None[source]¶
Create a new Neo4j graph wrapper instance.
add_graph_documents(graph_documents: List[GraphDocument], include_source: bool = False) → None[source]¶
Take GraphDocument as input and use it to construct a graph. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.neo4j_graph.Neo4jGraph.html |
f1caadfcdd6b-1 | Take GraphDocument as input and use it to construct a graph.
query(query: str, params: dict = {}) → List[Dict[str, Any]][source]¶
Query Neo4j database.
refresh_schema() → None[source]¶
Refreshes the Neo4j graph schema information.
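A minimal sketch, assuming a local Neo4j instance; the URL and credentials are placeholders and should be narrowly scoped as the security note above advises:
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
graph.refresh_schema()
result = graph.query("MATCH (n) RETURN count(n) AS node_count")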
Examples using Neo4jGraph¶
Neo4j
Diffbot Graph Transformer
Neo4j DB QA chain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.neo4j_graph.Neo4jGraph.html |
cc77fedc2afc-0 | langchain.graphs.networkx_graph.NetworkxEntityGraph¶
class langchain.graphs.networkx_graph.NetworkxEntityGraph(graph: Optional[Any] = None)[source]¶
Networkx wrapper for entity graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new graph.
Methods
__init__([graph])
Create a new graph.
add_triple(knowledge_triple)
Add a triple to the graph.
clear()
Clear the graph.
delete_triple(knowledge_triple)
Delete a triple from the graph.
draw_graphviz(**kwargs)
Provides better drawing
from_gml(gml_path)
get_entity_knowledge(entity[, depth])
Get information about an entity.
get_topological_sort()
Get a list of entity names in the graph sorted by causal dependence.
get_triples()
Get all triples in the graph.
write_to_gml(path)
__init__(graph: Optional[Any] = None) → None[source]¶
Create a new graph.
add_triple(knowledge_triple: KnowledgeTriple) → None[source]¶
Add a triple to the graph.
clear() → None[source]¶
Clear the graph.
delete_triple(knowledge_triple: KnowledgeTriple) → None[source]¶
Delete a triple from the graph.
draw_graphviz(**kwargs: Any) → None[source]¶ | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.NetworkxEntityGraph.html |
cc77fedc2afc-1 | draw_graphviz(**kwargs: Any) → None[source]¶
Provides better drawing
Usage in a jupyter notebook:
>>> from IPython.display import SVG
>>> self.draw_graphviz(layout="dot", filename="web.svg")
>>> SVG('web.svg')
classmethod from_gml(gml_path: str) → NetworkxEntityGraph[source]¶
get_entity_knowledge(entity: str, depth: int = 1) → List[str][source]¶
Get information about an entity.
get_topological_sort() → List[str][source]¶
Get a list of entity names in the graph sorted by causal dependence.
get_triples() → List[Tuple[str, str, str]][source]¶
Get all triples in the graph.
write_to_gml(path: str) → None[source]¶
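A small in-memory sketch using KnowledgeTriple (documented later in this module); the triples are invented:
from langchain.graphs import NetworkxEntityGraph
from langchain.graphs.networkx_graph import KnowledgeTriple
graph = NetworkxEntityGraph()
graph.add_triple(KnowledgeTriple("Alice", "works at", "Acme"))
graph.add_triple(KnowledgeTriple("Acme", "is located in", "Boise"))
print(graph.get_triples())
print(graph.get_entity_knowledge("Alice", depth=2))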
Examples using NetworkxEntityGraph¶
Graph QA | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.NetworkxEntityGraph.html |
77fe050102bc-0 | langchain.graphs.memgraph_graph.MemgraphGraph¶
class langchain.graphs.memgraph_graph.MemgraphGraph(url: str, username: str, password: str, *, database: str = 'memgraph')[source]¶
Memgraph wrapper for graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new Memgraph graph wrapper instance.
Attributes
get_schema
Returns the schema of the Graph
get_structured_schema
Returns the structured schema of the Graph
Methods
__init__(url, username, password, *[, database])
Create a new Memgraph graph wrapper instance.
add_graph_documents(graph_documents[, ...])
Take GraphDocument as input and use it to construct a graph.
query(query[, params])
Query Neo4j database.
refresh_schema()
Refreshes the Memgraph graph schema information.
__init__(url: str, username: str, password: str, *, database: str = 'memgraph') → None[source]¶
Create a new Memgraph graph wrapper instance.
add_graph_documents(graph_documents: List[GraphDocument], include_source: bool = False) → None¶
Take GraphDocument as input and use it to construct a graph.
query(query: str, params: dict = {}) → List[Dict[str, Any]]¶
Query Neo4j database. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.memgraph_graph.MemgraphGraph.html |
77fe050102bc-1 | Query Neo4j database.
refresh_schema() → None[source]¶
Refreshes the Memgraph graph schema information.
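A minimal sketch, assuming a local Memgraph instance exposed over the Bolt protocol; the connection details are placeholders:
from langchain.graphs import MemgraphGraph
graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")
graph.refresh_schema()
rows = graph.query("MATCH (n) RETURN n LIMIT 5")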
Examples using MemgraphGraph¶
Memgraph QA chain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.memgraph_graph.MemgraphGraph.html |
6767b6c855b8-0 | langchain.graphs.graph_document.Relationship¶
class langchain.graphs.graph_document.Relationship[source]¶
Bases: Serializable
Represents a directed relationship between two nodes in a graph.
source¶
The source node of the relationship.
Type
Node
target¶
The target node of the relationship.
Type
Node
type¶
The type of the relationship.
Type
str
properties¶
Additional properties associated with the relationship.
Type
dict
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param properties: dict [Optional]¶
param source: langchain.graphs.graph_document.Node [Required]¶
param target: langchain.graphs.graph_document.Node [Required]¶
param type: str [Required]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.Relationship.html |
6767b6c855b8-1 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.Relationship.html |
6767b6c855b8-2 | A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”} | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.Relationship.html |
4db7acbbd147-0 | langchain.graphs.neptune_graph.NeptuneQueryException¶
class langchain.graphs.neptune_graph.NeptuneQueryException(exception: Union[str, Dict])[source]¶
A class to handle queries that fail to execute | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.neptune_graph.NeptuneQueryException.html |
25ef9fd27952-0 | langchain.graphs.graph_store.GraphStore¶
class langchain.graphs.graph_store.GraphStore[source]¶
An abstract class wrapper for graph operations.
Attributes
get_schema
Returns the schema of the Graph database
get_structured_schema
Returns the schema of the Graph database
Methods
__init__()
add_graph_documents(graph_documents[, ...])
Take GraphDocument as input and use it to construct a graph.
query(query[, params])
Query the graph.
refresh_schema()
Refreshes the graph schema information.
__init__()¶
abstract add_graph_documents(graph_documents: List[GraphDocument], include_source: bool = False) → None[source]¶
Take GraphDocument as input and use it to construct a graph.
abstract query(query: str, params: dict = {}) → List[Dict[str, Any]][source]¶
Query the graph.
abstract refresh_schema() → None[source]¶
Refreshes the graph schema information. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_store.GraphStore.html |
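Since GraphStore is abstract, a concrete backend must implement the abstract methods above. A toy in-memory subclass, for illustration only (real implementations wrap an actual graph database):
from typing import Any, Dict, List
from langchain.graphs.graph_store import GraphStore
from langchain.graphs.graph_document import GraphDocument

class InMemoryGraphStore(GraphStore):
    def __init__(self) -> None:
        self._docs: List[GraphDocument] = []

    @property
    def get_schema(self) -> str:
        return "schema derived from %d graph documents" % len(self._docs)

    @property
    def get_structured_schema(self) -> Dict[str, Any]:
        return {"documents": len(self._docs)}

    def add_graph_documents(self, graph_documents: List[GraphDocument], include_source: bool = False) -> None:
        # Store the documents instead of writing to a real database
        self._docs.extend(graph_documents)

    def query(self, query: str, params: dict = {}) -> List[Dict[str, Any]]:
        # Ignore the query text and report simple counts
        return [{"nodes": len(d.nodes), "relationships": len(d.relationships)} for d in self._docs]

    def refresh_schema(self) -> None:
        pass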
2220aa1b9347-0 | langchain.graphs.graph_document.Node¶
class langchain.graphs.graph_document.Node[source]¶
Bases: Serializable
Represents a node in a graph with associated properties.
id¶
A unique identifier for the node.
Type
Union[str, int]
type¶
The type or label of the node, default is “Node”.
Type
str
properties¶
Additional properties and metadata associated with the node.
Type
dict
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param id: Union[str, int] [Required]¶
param properties: dict [Optional]¶
param type: str = 'Node'¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.Node.html |
2220aa1b9347-1 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.Node.html |
2220aa1b9347-2 | The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”} | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.graph_document.Node.html |
964dfea21381-0 | langchain.graphs.networkx_graph.get_entities¶
langchain.graphs.networkx_graph.get_entities(entity_str: str) → List[str][source]¶
Extract entities from entity string. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.get_entities.html |
be0ddd3d8afd-0 | langchain.graphs.networkx_graph.parse_triples¶
langchain.graphs.networkx_graph.parse_triples(knowledge_str: str) → List[KnowledgeTriple][source]¶
Parse knowledge triples from the knowledge string. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.parse_triples.html |
0fa31c184a0f-0 | langchain.graphs.arangodb_graph.get_arangodb_client¶
langchain.graphs.arangodb_graph.get_arangodb_client(url: Optional[str] = None, dbname: Optional[str] = None, username: Optional[str] = None, password: Optional[str] = None) → Any[source]¶
Get the Arango DB client from credentials.
Parameters
url – Arango DB url. Can be passed in as named arg or set as environment
var ARANGODB_URL. Defaults to “http://localhost:8529”.
dbname – Arango DB name. Can be passed in as named arg or set as
environment var ARANGODB_DBNAME. Defaults to “_system”.
username – Can be passed in as named arg or set as environment var
ARANGODB_USERNAME. Defaults to “root”.
password – Can be passed in as named arg or set as environment var
ARANGODB_PASSWORD. Defaults to “”.
Returns
An arango.database.StandardDatabase. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.arangodb_graph.get_arangodb_client.html |
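A short sketch; the URL and credentials below are placeholders and could equally be supplied through the ARANGODB_* environment variables:
from langchain.graphs.arangodb_graph import get_arangodb_client
db = get_arangodb_client(
    url="http://localhost:8529", dbname="_system", username="root", password=""
)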
b7684ba0a1a4-0 | langchain.graphs.nebula_graph.NebulaGraph¶
class langchain.graphs.nebula_graph.NebulaGraph(space: str, username: str = 'root', password: str = 'nebula', address: str = '127.0.0.1', port: int = 9669, session_pool_size: int = 30)[source]¶
NebulaGraph wrapper for graph operations.
NebulaGraph inherits methods from Neo4jGraph to bring ease to the user space.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new NebulaGraph wrapper instance.
Attributes
get_schema
Returns the schema of the NebulaGraph database
Methods
__init__(space[, username, password, ...])
Create a new NebulaGraph wrapper instance.
execute(query[, params, retry])
Query NebulaGraph database.
query(query[, retry])
refresh_schema()
Refreshes the NebulaGraph schema information.
__init__(space: str, username: str = 'root', password: str = 'nebula', address: str = '127.0.0.1', port: int = 9669, session_pool_size: int = 30) → None[source]¶
Create a new NebulaGraph wrapper instance. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.nebula_graph.NebulaGraph.html |
b7684ba0a1a4-1 | Create a new NebulaGraph wrapper instance.
execute(query: str, params: Optional[dict] = None, retry: int = 0) → Any[source]¶
Query NebulaGraph database.
query(query: str, retry: int = 0) → Dict[str, Any][source]¶
refresh_schema() → None[source]¶
Refreshes the NebulaGraph schema information.
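A minimal sketch, assuming a running NebulaGraph service with an existing space named "langchain" (a placeholder); the nGQL statement is only an example:
from langchain.graphs import NebulaGraph
graph = NebulaGraph(space="langchain", address="127.0.0.1", port=9669)
graph.refresh_schema()
print(graph.get_schema)
result = graph.query("SHOW TAGS;")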
Examples using NebulaGraph¶
NebulaGraphQAChain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.nebula_graph.NebulaGraph.html |
42a6bca9ef73-0 | langchain.graphs.kuzu_graph.KuzuGraph¶
class langchain.graphs.kuzu_graph.KuzuGraph(db: Any, database: str = 'kuzu')[source]¶
Kùzu wrapper for graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Attributes
get_schema
Returns the schema of the Kùzu database
Methods
__init__(db[, database])
query(query[, params])
Query Kùzu database
refresh_schema()
Refreshes the Kùzu graph schema information
__init__(db: Any, database: str = 'kuzu') → None[source]¶
query(query: str, params: dict = {}) → List[Dict[str, Any]][source]¶
Query Kùzu database
refresh_schema() → None[source]¶
Refreshes the Kùzu graph schema information
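A minimal sketch, assuming the kuzu Python package is installed; "test_db" is a placeholder on-disk database directory:
import kuzu
from langchain.graphs import KuzuGraph
db = kuzu.Database("test_db")
graph = KuzuGraph(db)
graph.refresh_schema()
print(graph.get_schema)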
Examples using KuzuGraph¶
KuzuQAChain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.kuzu_graph.KuzuGraph.html |
ff765c3a274d-0 | langchain.graphs.hugegraph.HugeGraph¶
class langchain.graphs.hugegraph.HugeGraph(username: str = 'default', password: str = 'default', address: str = '127.0.0.1', port: int = 8081, graph: str = 'hugegraph')[source]¶
HugeGraph wrapper for graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new HugeGraph wrapper instance.
Attributes
get_schema
Returns the schema of the HugeGraph database
Methods
__init__([username, password, address, ...])
Create a new HugeGraph wrapper instance.
query(query)
refresh_schema()
Refreshes the HugeGraph schema information.
__init__(username: str = 'default', password: str = 'default', address: str = '127.0.0.1', port: int = 8081, graph: str = 'hugegraph') → None[source]¶
Create a new HugeGraph wrapper instance.
query(query: str) → List[Dict[str, Any]][source]¶
refresh_schema() → None[source]¶
Refreshes the HugeGraph schema information.
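A minimal sketch, assuming a HugeGraph server running locally with the default graph; the Gremlin traversal is only an example:
from langchain.graphs import HugeGraph
graph = HugeGraph(username="default", password="default", address="127.0.0.1", port=8081)
graph.refresh_schema()
rows = graph.query("g.V().limit(5)")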
Examples using HugeGraph¶
HugeGraph QA Chain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.hugegraph.HugeGraph.html |
60d9c90a9acb-0 | langchain.graphs.arangodb_graph.ArangoGraph¶
class langchain.graphs.arangodb_graph.ArangoGraph(db: Any)[source]¶
ArangoDB wrapper for graph operations.
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new ArangoDB graph wrapper instance.
Attributes
db
schema
Methods
__init__(db)
Create a new ArangoDB graph wrapper instance.
from_db_credentials([url, dbname, username, ...])
Convenience constructor that builds Arango DB from credentials.
generate_schema([sample_ratio])
Generates the schema of the ArangoDB Database and returns it. User can specify a sample_ratio (0 to 1) to determine the ratio of documents/edges used (in relation to the Collection size) to render each Collection Schema.
query(query[, top_k])
Query the ArangoDB database.
set_db(db)
set_schema([schema])
Set the schema of the ArangoDB Database.
__init__(db: Any) → None[source]¶
Create a new ArangoDB graph wrapper instance.
classmethod from_db_credentials(url: Optional[str] = None, dbname: Optional[str] = None, username: Optional[str] = None, password: Optional[str] = None) → Any[source]¶
Convenience constructor that builds Arango DB from credentials.
Parameters | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.arangodb_graph.ArangoGraph.html |
60d9c90a9acb-1 | Convenience constructor that builds Arango DB from credentials.
Parameters
url – Arango DB url. Can be passed in as named arg or set as environment
var ARANGODB_URL. Defaults to “http://localhost:8529”.
dbname – Arango DB name. Can be passed in as named arg or set as
environment var ARANGODB_DBNAME. Defaults to “_system”.
username – Can be passed in as named arg or set as environment var
ARANGODB_USERNAME. Defaults to “root”.
password – Can be passed in as named arg or set as environment var
ARANGODB_PASSWORD. Defaults to “”.
Returns
An arango.database.StandardDatabase.
generate_schema(sample_ratio: float = 0) → Dict[str, List[Dict[str, Any]]][source]¶
Generates the schema of the ArangoDB Database and returns it.
User can specify a sample_ratio (0 to 1) to determine the
ratio of documents/edges used (in relation to the Collection size)
to render each Collection Schema.
query(query: str, top_k: Optional[int] = None, **kwargs: Any) → List[Dict[str, Any]][source]¶
Query the ArangoDB database.
set_db(db: Any) → None[source]¶
set_schema(schema: Optional[Dict[str, Any]] = None) → None[source]¶
Set the schema of the ArangoDB Database.
Auto-generates Schema if schema is None.
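A brief sketch combining get_arangodb_client (documented above) with this wrapper; the connection details and the collection name in the AQL query are placeholders:
from langchain.graphs import ArangoGraph
from langchain.graphs.arangodb_graph import get_arangodb_client
db = get_arangodb_client(url="http://localhost:8529", dbname="_system", username="root", password="")
graph = ArangoGraph(db)
rows = graph.query("FOR doc IN my_collection LIMIT 5 RETURN doc", top_k=5)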
Examples using ArangoGraph¶
ArangoDB QA chain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.arangodb_graph.ArangoGraph.html |
1a9522a35e73-0 | langchain.graphs.neptune_graph.NeptuneGraph¶
class langchain.graphs.neptune_graph.NeptuneGraph(host: str, port: int = 8182, use_https: bool = True, client: Any = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, service: str = 'neptunedata', sign: bool = True)[source]¶
Neptune wrapper for graph operations.
Parameters
host – endpoint for the database instance
port – port number for the database instance, default is 8182
use_https – whether to use secure connection, default is True
client – optional boto3 Neptune client
credentials_profile_name – optional AWS profile name
region_name – optional AWS region, e.g., us-west-2
service – optional service name, default is neptunedata
sign – optional, whether to sign the request payload, default is True
Example
graph = NeptuneGraph(host='<my-cluster>', port=8182)
Security note: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions.
Failure to do so may result in data corruption or loss, since the calling
code may attempt commands that would result in deletion, mutation
of data if appropriately prompted or reading sensitive data if such
data is present in the database.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this tool.
See https://python.langchain.com/docs/security for more information.
Create a new Neptune graph wrapper instance.
Attributes
get_schema
Returns the schema of the Neptune database
Methods
__init__(host[, port, use_https, client, ...])
Create a new Neptune graph wrapper instance.
query(query[, params])
Query Neptune database. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.neptune_graph.NeptuneGraph.html |
1a9522a35e73-1 | query(query[, params])
Query Neptune database.
__init__(host: str, port: int = 8182, use_https: bool = True, client: Any = None, credentials_profile_name: Optional[str] = None, region_name: Optional[str] = None, service: str = 'neptunedata', sign: bool = True) → None[source]¶
Create a new Neptune graph wrapper instance.
query(query: str, params: dict = {}) → Dict[str, Any][source]¶
Query Neptune database.
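A brief sketch; the cluster endpoint is a placeholder, and the instance must be reachable from the calling environment with appropriate credentials or request signing configured:
from langchain.graphs import NeptuneGraph
graph = NeptuneGraph(host="<my-cluster>.cluster-xxxx.us-west-2.neptune.amazonaws.com", port=8182)
result = graph.query("MATCH (n) RETURN n LIMIT 5")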
Examples using NeptuneGraph¶
Neptune Open Cypher QA Chain | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.neptune_graph.NeptuneGraph.html |
84654b042bf6-0 | langchain.graphs.networkx_graph.KnowledgeTriple¶
class langchain.graphs.networkx_graph.KnowledgeTriple(subject: str, predicate: str, object_: str)[source]¶
A triple in the graph.
Create new instance of KnowledgeTriple(subject, predicate, object_)
Attributes
object_
Alias for field number 2
predicate
Alias for field number 1
subject
Alias for field number 0
Methods
__init__()
count(value, /)
Return number of occurrences of value.
from_string(triple_string)
Create a KnowledgeTriple from a string.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
classmethod from_string(triple_string: str) → KnowledgeTriple[source]¶
Create a KnowledgeTriple from a string.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present. | lang/api.python.langchain.com/en/latest/graphs/langchain.graphs.networkx_graph.KnowledgeTriple.html |
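A short sketch; from_string expects the parenthesized "(subject, predicate, object)" form produced by the knowledge-triple extraction prompts:
from langchain.graphs.networkx_graph import KnowledgeTriple
triple = KnowledgeTriple(subject="Alice", predicate="works at", object_="Acme")
parsed = KnowledgeTriple.from_string("(Alice, works at, Acme)")
print(parsed.subject, parsed.predicate, parsed.object_)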
c70b98bdb4cd-0 | langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain¶
class langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain[source]¶
Bases: Chain
Chain that interprets a prompt and executes python code to do symbolic math.
Example
from langchain.chains import LLMSymbolicMathChain
from langchain.llms import OpenAI
llm_symbolic_math = LLMSymbolicMathChain.from_llm(OpenAI())
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-1 | and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the global verbose value,
accessible via langchain.globals.get_verbose().
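A usage sketch following the class example above; the OpenAI LLM and the printed answer are illustrative, and an OPENAI_API_KEY (or another LLM) is assumed:
from langchain.llms import OpenAI
from langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChain
llm_symbolic_math = LLMSymbolicMathChain.from_llm(OpenAI(temperature=0))
llm_symbolic_math.run("What is the derivative of sin(x) with respect to x?")
# -> "Answer: cos(x)"  (illustrative output)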
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-2 | tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified inChain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-3 | chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified inChain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-4 | with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-5 | Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-6 | e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-7 | exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...} | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-8 | # -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], template='Translate a math problem into a expression that can be executed using Python\'s SymPy library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line sympy expression that solves the problem}}\n```\n...sympy.sympify(text, evaluate=True)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is the limit of sin(x) / x as x goes to 0\n```text\nlimit(sin(x)/x, x, 0)\n```\n...sympy.sympify("limit(sin(x)/x, x, 0)")...\n```output\n1\n```\nAnswer: 1\n\nQuestion: What is the integral of e^-x from 0 to infinity\n```text\nintegrate(exp(-x), (x, 0, oo))\n```\n...sympy.sympify("integrate(exp(-x), (x, 0, oo))")...\n```output\n1\n```\n\nQuestion: What are the solutions to this equation x**2 - x?\n```text\nsolveset(x**2 - x, x)\n```\n...sympy.sympify("solveset(x**2 - x, x)")...\n```output\n[0, 1]\n```\nQuestion: {question}\n'), **kwargs: Any) → LLMSymbolicMathChain[source]¶
classmethod from_orm(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-9 | classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-10 | purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
c70b98bdb4cd-11 | prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor. | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
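As a usage illustration, a minimal sketch (assumes langchain_experimental is installed and an OpenAI API key is configured; the model settings and the "question" input key passed to invoke are illustrative assumptions, not documented defaults):
from langchain.llms import OpenAI
from langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChain

llm = OpenAI(temperature=0)
chain = LLMSymbolicMathChain.from_llm(llm)

# The chain asks the LLM for a SymPy expression, evaluates it, and returns the answer.
print(chain.run("What is the derivative of sin(x) with respect to x?"))

# Like any Runnable, the chain can also be wrapped with retries on transient failures.
resilient = chain.with_retry(stop_after_attempt=3)
print(resilient.invoke({"question": "What is the limit of sin(x)/x as x goes to 0?"}))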
Examples using LLMSymbolicMathChain¶
LLM Symbolic Math | lang/api.python.langchain.com/en/latest/llm_symbolic_math/langchain_experimental.llm_symbolic_math.base.LLMSymbolicMathChain.html |
ddd8366112fb-0 | langchain.cache.FullLLMCache¶
class langchain.cache.FullLLMCache(**kwargs)[source]¶
SQLite table for full LLM Cache (all generations).
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Attributes
idx
llm
metadata
prompt
registry
response
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
__init__(**kwargs)¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.FullLLMCache.html |
41363e5a12be-0 | langchain.cache.FullMd5LLMCache¶
class langchain.cache.FullMd5LLMCache(**kwargs)[source]¶
SQLite table for full LLM Cache (all generations).
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships.
Attributes
id
idx
llm
metadata
prompt
prompt_md5
registry
response
Methods
__init__(**kwargs)
A simple constructor that allows initialization from kwargs.
__init__(**kwargs)¶
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and
values in kwargs.
Only keys that are present as
attributes of the instance’s class are allowed. These could be,
for example, any mapped columns or relationships. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.FullMd5LLMCache.html |
454bc31da1b8-0 | langchain.cache.RedisSemanticCache¶
class langchain.cache.RedisSemanticCache(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶
Cache that uses Redis as a vector-store backend.
Initialize the semantic cache with a Redis URL and an embedding provider.
Parameters
redis_url (str) – URL to connect to Redis.
embedding (Embedding) – Embedding provider for semantic encoding and search.
score_threshold (float, default 0.2) – similarity score threshold used when deciding whether a cached entry matches the prompt.
Example:
from langchain.globals import set_llm_cache
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
set_llm_cache(RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
))
Attributes
DEFAULT_SCHEMA
Methods
__init__(redis_url, embedding[, score_threshold])
Initialize the semantic cache with a Redis URL and an embedding provider.
clear(**kwargs)
Clear semantic cache for a given llm_string.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶
Initialize the semantic cache with a Redis URL and an embedding provider.
Parameters
redis_url (str) – URL to connect to Redis.
embedding (Embedding) – Embedding provider for semantic encoding and search.
score_threshold (float, default 0.2) – similarity score threshold used when deciding whether a cached entry matches the prompt.
Example:
from langchain.globals import set_llm_cache
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
set_llm_cache(RedisSemanticCache( | lang/api.python.langchain.com/en/latest/cache/langchain.cache.RedisSemanticCache.html |
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
))
clear(**kwargs: Any) → None[source]¶
Clear semantic cache for a given llm_string.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string.
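To illustrate the effect once the cache is set, a hedged sketch (the Redis URL and model are assumptions; a semantically similar prompt is intended to be served from the cache rather than the provider):
from langchain.globals import set_llm_cache
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

set_llm_cache(RedisSemanticCache(
    redis_url="redis://localhost:6379",
    embedding=OpenAIEmbeddings(),
))

llm = OpenAI()
llm("Tell me a joke")           # first call: computed by the model, then cached
llm("Please tell me one joke")  # a semantically similar prompt can be answered from the cache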
Examples using RedisSemanticCache¶
Redis
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.RedisSemanticCache.html |
6b64fd0beea2-0 | langchain.cache.CassandraCache¶
class langchain.cache.CassandraCache(session: Optional[CassandraSession] = None, keyspace: Optional[str] = None, table_name: str = 'langchain_llm_cache', ttl_seconds: Optional[int] = None, skip_provisioning: bool = False)[source]¶
Cache that uses Cassandra / Astra DB as a backend.
It uses a single Cassandra table.
The lookup keys (which get to form the primary key) are:
prompt, a string
llm_string, a deterministic str representation of the model parameters.
(needed to prevent same-prompt-different-model collisions)
Initialize with a ready session and a keyspace name.
:param session: an open Cassandra session
:type session: cassandra.cluster.Session
:param keyspace: the keyspace to use for storing the cache
:type keyspace: str
:param table_name: name of the Cassandra table to use as cache
:type table_name: str
:param ttl_seconds: time-to-live for cache entries
(default: None, i.e. forever)
Methods
__init__([session, keyspace, table_name, ...])
Initialize with a ready session and a keyspace name. :param session: an open Cassandra session :type session: cassandra.cluster.Session :param keyspace: the keyspace to use for storing the cache :type keyspace: str :param table_name: name of the Cassandra table to use as cache :type table_name: str :param ttl_seconds: time-to-live for cache entries (default: None, i.e. forever) :type ttl_seconds: optional int.
clear(**kwargs)
Clear cache.
delete(prompt, llm_string)
Evict from cache if there's an entry.
delete_through_llm(prompt, llm[, stop])
A wrapper around delete with the LLM being passed. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.CassandraCache.html |
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__(session: Optional[CassandraSession] = None, keyspace: Optional[str] = None, table_name: str = 'langchain_llm_cache', ttl_seconds: Optional[int] = None, skip_provisioning: bool = False)[source]¶
Initialize with a ready session and a keyspace name.
:param session: an open Cassandra session
:type session: cassandra.cluster.Session
:param keyspace: the keyspace to use for storing the cache
:type keyspace: str
:param table_name: name of the Cassandra table to use as cache
:type table_name: str
:param ttl_seconds: time-to-live for cache entries
(default: None, i.e. forever)
clear(**kwargs: Any) → None[source]¶
Clear cache. This is for all LLMs at once.
delete(prompt: str, llm_string: str) → None[source]¶
Evict from cache if there’s an entry.
delete_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) → None[source]¶
A wrapper around delete with the LLM being passed.
In case the llm(prompt) calls have a stop param, you should pass it here
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.CassandraCache.html |
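A minimal setup sketch, assuming a locally reachable Cassandra cluster and an existing keyspace (all names are illustrative):
from cassandra.cluster import Cluster
from langchain.globals import set_llm_cache
from langchain.cache import CassandraCache

# Open a session; the keyspace must already exist.
session = Cluster(["127.0.0.1"]).connect()
set_llm_cache(CassandraCache(
    session=session,
    keyspace="langchain_cache",
    ttl_seconds=3600,  # optional: expire entries after one hour
))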
602ded01a747-0 | langchain.cache.SQLAlchemyMd5Cache¶
class langchain.cache.SQLAlchemyMd5Cache(engine: ~sqlalchemy.engine.base.Engine, cache_schema: ~typing.Type[~langchain.cache.FullMd5LLMCache] = <class 'langchain.cache.FullMd5LLMCache'>)[source]¶
Cache that uses SQLAlchemy as a backend.
Initialize by creating all tables.
Methods
__init__(engine[, cache_schema])
Initialize by creating all tables.
clear(**kwargs)
Clear cache.
get_md5(input_string)
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update based on prompt and llm_string.
__init__(engine: ~sqlalchemy.engine.base.Engine, cache_schema: ~typing.Type[~langchain.cache.FullMd5LLMCache] = <class 'langchain.cache.FullMd5LLMCache'>)[source]¶
Initialize by creating all tables.
clear(**kwargs: Any) → None[source]¶
Clear cache.
static get_md5(input_string: str) → str[source]¶
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update based on prompt and llm_string. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.SQLAlchemyMd5Cache.html |
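A minimal setup sketch, assuming a SQLAlchemy engine for an existing database (the connection string is illustrative):
from sqlalchemy import create_engine
from langchain.globals import set_llm_cache
from langchain.cache import SQLAlchemyMd5Cache

# Prompts are keyed by an MD5 hash (see FullMd5LLMCache above),
# which keeps lookups cheap for long prompts.
engine = create_engine("postgresql://user:pass@localhost/llm_cache")
set_llm_cache(SQLAlchemyMd5Cache(engine))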
25ae951a9068-0 | langchain.cache.MomentoCache¶
class langchain.cache.MomentoCache(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Cache that uses Momento as a backend. See https://gomomento.com/
Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache,
you must have a Momento account. See https://gomomento.com/.
Parameters
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the data.
ttl (Optional[timedelta], optional) – The time to live for the cache items.
Defaults to None, ie use the client default TTL.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t
exist. Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClientObject
ValueError – ttl is non-null and negative
Methods
__init__(cache_client, cache_name, *[, ttl, ...])
Instantiate a prompt cache using Momento as a backend.
clear(**kwargs)
Clear the cache.
from_client_params(cache_name, ttl, *[, ...])
Construct cache from CacheClient parameters.
lookup(prompt, llm_string)
Lookup llm generations in cache by prompt and associated model and settings.
update(prompt, llm_string, return_val)
Store llm generations in cache.
__init__(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶ | lang/api.python.langchain.com/en/latest/cache/langchain.cache.MomentoCache.html |
25ae951a9068-1 | Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache,
you must have a Momento account. See https://gomomento.com/.
Parameters
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the data.
ttl (Optional[timedelta], optional) – The time to live for the cache items.
Defaults to None, ie use the client default TTL.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t
exist. Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClientObject
ValueError – ttl is non-null and negative
clear(**kwargs: Any) → None[source]¶
Clear the cache.
Raises
SdkException – Momento service or network error
classmethod from_client_params(cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, api_key: Optional[str] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoCache[source]¶
Construct cache from CacheClient parameters.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Lookup llm generations in cache by prompt and associated model and settings.
Parameters
prompt (str) – The prompt run through the language model.
llm_string (str) – The language model version and settings.
Raises
SdkException – Momento service or network error
Returns
A list of language model generations.
Return type
Optional[RETURN_VAL_TYPE]
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Store llm generations in cache.
Parameters | lang/api.python.langchain.com/en/latest/cache/langchain.cache.MomentoCache.html |
prompt (str) – The prompt run through the language model.
llm_string (str) – The language model string.
return_val (RETURN_VAL_TYPE) – A list of language model generations.
Raises
SdkException – Momento service or network error
Exception – Unexpected response
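A minimal setup sketch using from_client_params, assuming a Momento API key in the environment (the cache name, TTL, and variable name are illustrative):
import os
from datetime import timedelta
from langchain.globals import set_llm_cache
from langchain.cache import MomentoCache

set_llm_cache(MomentoCache.from_client_params(
    cache_name="langchain",
    ttl=timedelta(days=1),
    api_key=os.environ["MOMENTO_API_KEY"],
))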
Examples using MomentoCache¶
Momento
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.MomentoCache.html |
887cf4117a62-0 | langchain.cache.GPTCache¶
class langchain.cache.GPTCache(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶
Cache that uses GPTCache as a backend.
Initialize by passing in init function (default: None).
Parameters
init_func (Optional[Callable[[Any], None]]) – init GPTCache function
(default – None)
Example:
.. code-block:: python
# Initialize GPTCache with a custom init function
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import manager_factory
from langchain.cache import GPTCache
from langchain.globals import set_llm_cache

# Avoid multiple caches using the same file,
# causing different llm model caches to affect each other
def init_gptcache(cache_obj: gptcache.Cache, llm: str):
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(
            manager="map",
            data_dir=f"map_cache_{llm}",
        ),
    )

set_llm_cache(GPTCache(init_gptcache))
Methods
__init__([init_func])
Initialize by passing in init function (default: None).
clear(**kwargs)
Clear cache.
lookup(prompt, llm_string)
Look up the cache data.
update(prompt, llm_string, return_val)
Update cache.
__init__(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶
Initialize by passing in init function (default: None).
Parameters
init_func (Optional[Callable[[Any], None]]) – init GPTCache function
(default – None)
Example:
.. code-block:: python
# Initialize GPTCache with a custom init function
import gptcache | lang/api.python.langchain.com/en/latest/cache/langchain.cache.GPTCache.html |
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import manager_factory
from langchain.cache import GPTCache
from langchain.globals import set_llm_cache

# Avoid multiple caches using the same file,
# causing different llm model caches to affect each other
def init_gptcache(cache_obj: gptcache.Cache, llm: str):
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(
            manager="map",
            data_dir=f"map_cache_{llm}",
        ),
    )

set_llm_cache(GPTCache(init_gptcache))
clear(**kwargs: Any) → None[source]¶
Clear cache.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up the cache data.
First, retrieve the corresponding cache object using the llm_string parameter,
and then retrieve the data from the cache based on the prompt.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache.
First, retrieve the corresponding cache object using the llm_string parameter,
and then store the prompt and return_val in the cache object.
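For comparison, a minimal sketch that relies on the wrapper's default initialization when init_func is left as None (whether those defaults suit your workload is an assumption to verify):
from langchain.globals import set_llm_cache
from langchain.cache import GPTCache

# With no init_func, the wrapper initializes GPTCache with its defaults.
set_llm_cache(GPTCache())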
Examples using GPTCache¶
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.GPTCache.html |
f9be7f5666ca-0 | langchain.cache.InMemoryCache¶
class langchain.cache.InMemoryCache[source]¶
Cache that stores things in memory.
Initialize with empty cache.
Methods
__init__()
Initialize with empty cache.
clear(**kwargs)
Clear cache.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__() → None[source]¶
Initialize with empty cache.
clear(**kwargs: Any) → None[source]¶
Clear cache.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string.
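A minimal sketch of the effect (the model is illustrative; the cache lives only for the lifetime of the process):
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

set_llm_cache(InMemoryCache())
llm = OpenAI()
llm("Tell me a joke")  # computed by the model and stored in memory
llm("Tell me a joke")  # identical prompt and model settings: returned from the cache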
Examples using InMemoryCache¶
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.InMemoryCache.html |
d303a8a6e5c3-0 | langchain.cache.SQLiteCache¶
class langchain.cache.SQLiteCache(database_path: str = '.langchain.db')[source]¶
Cache that uses SQLite as a backend.
Initialize by creating the engine and all tables.
Methods
__init__([database_path])
Initialize by creating the engine and all tables.
clear(**kwargs)
Clear cache.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update based on prompt and llm_string.
__init__(database_path: str = '.langchain.db')[source]¶
Initialize by creating the engine and all tables.
clear(**kwargs: Any) → None¶
Clear cache.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶
Update based on prompt and llm_string.
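A minimal setup sketch; the database file is created on first use (the path shown is the documented default):
from langchain.globals import set_llm_cache
from langchain.cache import SQLiteCache

# Persist the cache across processes in a local SQLite file.
set_llm_cache(SQLiteCache(database_path=".langchain.db"))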
Examples using SQLiteCache¶
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.SQLiteCache.html |
0042dc44a83b-0 | langchain.cache.RedisCache¶
class langchain.cache.RedisCache(redis_: Any, *, ttl: Optional[int] = None)[source]¶
Cache that uses Redis as a backend.
Initialize an instance of RedisCache.
This method initializes an object with Redis caching capabilities.
It takes a redis_ parameter, which should be an instance of a Redis
client class, allowing the object to interact with a Redis
server for caching purposes.
Parameters
redis (Any) – An instance of a Redis client class
(e.g., redis.Redis) used for caching.
This allows the object to communicate with a
Redis server for caching operations.
ttl (int, optional) – Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
Methods
__init__(redis_, *[, ttl])
Initialize an instance of RedisCache.
clear(**kwargs)
Clear cache.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__(redis_: Any, *, ttl: Optional[int] = None)[source]¶
Initialize an instance of RedisCache.
This method initializes an object with Redis caching capabilities.
It takes a redis_ parameter, which should be an instance of a Redis
client class, allowing the object to interact with a Redis
server for caching purposes.
Parameters
redis (Any) – An instance of a Redis client class
(e.g., redis.Redis) used for caching.
This allows the object to communicate with a
Redis server for caching operations.
ttl (int, optional) – Time-to-live (TTL) for cached items in seconds. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.RedisCache.html |
0042dc44a83b-1 | If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
clear(**kwargs: Any) → None[source]¶
Clear cache. If asynchronous is True, flush asynchronously.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string.
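A minimal setup sketch, assuming a Redis server on localhost (host, port, and TTL are illustrative):
import redis
from langchain.globals import set_llm_cache
from langchain.cache import RedisCache

client = redis.Redis(host="localhost", port=6379)
# Entries expire after one hour; omit ttl for no automatic expiration.
set_llm_cache(RedisCache(redis_=client, ttl=3600))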
Examples using RedisCache¶
Redis
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.RedisCache.html |
a84e5779f7af-0 | langchain.cache.UpstashRedisCache¶
class langchain.cache.UpstashRedisCache(redis_: Any, *, ttl: Optional[int] = None)[source]¶
Cache that uses Upstash Redis as a backend.
Initialize an instance of UpstashRedisCache.
This method initializes an object with Upstash Redis caching capabilities.
It takes a redis_ parameter, which should be an instance of an Upstash Redis
client class, allowing the object to interact with Upstash Redis
server for caching purposes.
Parameters
redis – An instance of Upstash Redis client class
(e.g., Redis) used for caching.
This allows the object to communicate with the
Redis server for caching operations.
ttl (int, optional) – Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
Methods
__init__(redis_, *[, ttl])
Initialize an instance of UpstashRedisCache.
clear(**kwargs)
Clear cache.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__(redis_: Any, *, ttl: Optional[int] = None)[source]¶
Initialize an instance of UpstashRedisCache.
This method initializes an object with Upstash Redis caching capabilities.
It takes a redis_ parameter, which should be an instance of an Upstash Redis
client class, allowing the object to interact with Upstash Redis
server for caching purposes.
Parameters
redis – An instance of Upstash Redis client class
(e.g., Redis) used for caching.
This allows the object to communicate with the
Redis server for caching operations. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.UpstashRedisCache.html |
ttl (int, optional) – Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
clear(**kwargs: Any) → None[source]¶
Clear cache. If asynchronous is True, flush asynchronously.
This flushes the whole db.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.UpstashRedisCache.html |
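A minimal setup sketch, assuming the upstash-redis package and Upstash REST credentials in the environment (the variable names and TTL are illustrative):
import os
from upstash_redis import Redis
from langchain.globals import set_llm_cache
from langchain.cache import UpstashRedisCache

client = Redis(
    url=os.environ["UPSTASH_REDIS_REST_URL"],
    token=os.environ["UPSTASH_REDIS_REST_TOKEN"],
)
set_llm_cache(UpstashRedisCache(redis_=client, ttl=3600))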
95b051bd9b2e-0 | langchain.cache.SQLAlchemyCache¶
class langchain.cache.SQLAlchemyCache(engine: ~sqlalchemy.engine.base.Engine, cache_schema: ~typing.Type[~langchain.cache.FullLLMCache] = <class 'langchain.cache.FullLLMCache'>)[source]¶
Cache that uses SQLAlchemy as a backend.
Initialize by creating all tables.
Methods
__init__(engine[, cache_schema])
Initialize by creating all tables.
clear(**kwargs)
Clear cache.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update based on prompt and llm_string.
__init__(engine: ~sqlalchemy.engine.base.Engine, cache_schema: ~typing.Type[~langchain.cache.FullLLMCache] = <class 'langchain.cache.FullLLMCache'>)[source]¶
Initialize by creating all tables.
clear(**kwargs: Any) → None[source]¶
Clear cache.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update based on prompt and llm_string.
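A minimal setup sketch with an engine for any SQLAlchemy-supported database (the connection string is illustrative):
from sqlalchemy import create_engine
from langchain.globals import set_llm_cache
from langchain.cache import SQLAlchemyCache

engine = create_engine("postgresql://user:pass@localhost/llm_cache")
set_llm_cache(SQLAlchemyCache(engine))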
Examples using SQLAlchemyCache¶
LLM Caching integrations | lang/api.python.langchain.com/en/latest/cache/langchain.cache.SQLAlchemyCache.html |
86502af505b3-0 | langchain.cache.CassandraSemanticCache¶
class langchain.cache.CassandraSemanticCache(session: Optional[CassandraSession], keyspace: Optional[str], embedding: Embeddings, table_name: str = 'langchain_llm_semantic_cache', distance_metric: str = 'dot', score_threshold: float = 0.85, ttl_seconds: Optional[int] = None, skip_provisioning: bool = False)[source]¶
Cache that uses Cassandra as a vector-store backend for semantic
(i.e. similarity-based) lookup.
It uses a single (vector) Cassandra table and stores, in principle,
cached values from several LLMs, so the LLM’s llm_string is part
of the rows’ primary keys.
The similarity is based on one of several distance metrics (default: “dot”).
If you choose another metric, re-tune the default threshold accordingly.
Initialize the cache with all relevant parameters.
:param session: an open Cassandra session
:type session: cassandra.cluster.Session
:param keyspace: the keyspace to use for storing the cache
:type keyspace: str
:param embedding: Embedding provider for semantic
encoding and search.
Parameters
table_name (str) – name of the Cassandra (vector) table
to use as cache
distance_metric (str, 'dot') – which measure to adopt for
similarity searches
score_threshold (optional float) – numeric value to use as
cutoff for the similarity searches
ttl_seconds (optional int) – time-to-live for cache entries
(default: None, i.e. forever)
The default score threshold is tuned to the default metric.
Tune it carefully yourself if switching to another distance metric.
Methods
__init__(session, keyspace, embedding[, ...]) | lang/api.python.langchain.com/en/latest/cache/langchain.cache.CassandraSemanticCache.html |
Initialize the cache with all relevant parameters. :param session: an open Cassandra session :type session: cassandra.cluster.Session :param keyspace: the keyspace to use for storing the cache :type keyspace: str :param embedding: Embedding provider for semantic encoding and search. :type embedding: Embedding :param table_name: name of the Cassandra (vector) table to use as cache :type table_name: str :param distance_metric: which measure to adopt for similarity searches :type distance_metric: str, 'dot' :param score_threshold: numeric value to use as cutoff for the similarity searches :type score_threshold: optional float :param ttl_seconds: time-to-live for cache entries (default: None, i.e. forever) :type ttl_seconds: optional int.
clear(**kwargs)
Clear the whole semantic cache.
delete_by_document_id(document_id)
Given this is a "similarity search" cache, an invalidation pattern that makes sense is first a lookup to get an ID, and then deleting with that ID.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
lookup_with_id(prompt, llm_string)
Look up based on prompt and llm_string.
lookup_with_id_through_llm(prompt, llm[, stop])
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__(session: Optional[CassandraSession], keyspace: Optional[str], embedding: Embeddings, table_name: str = 'langchain_llm_semantic_cache', distance_metric: str = 'dot', score_threshold: float = 0.85, ttl_seconds: Optional[int] = None, skip_provisioning: bool = False)[source]¶ | lang/api.python.langchain.com/en/latest/cache/langchain.cache.CassandraSemanticCache.html |
86502af505b3-2 | Initialize the cache with all relevant parameters.
:param session: an open Cassandra session
:type session: cassandra.cluster.Session
:param keyspace: the keyspace to use for storing the cache
:type keyspace: str
:param embedding: Embedding provider for semantic
encoding and search.
Parameters
table_name (str) – name of the Cassandra (vector) table
to use as cache
distance_metric (str, 'dot') – which measure to adopt for
similarity searches
score_threshold (optional float) – numeric value to use as
cutoff for the similarity searches
ttl_seconds (optional int) – time-to-live for cache entries
(default: None, i.e. forever)
The default score threshold is tuned to the default metric.
Tune it carefully yourself if switching to another distance metric.
clear(**kwargs: Any) → None[source]¶
Clear the whole semantic cache.
delete_by_document_id(document_id: str) → None[source]¶
Given this is a “similarity search” cache, an invalidation pattern
that makes sense is first a lookup to get an ID, and then deleting
with that ID. This is for the second step.
lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
lookup_with_id(prompt: str, llm_string: str) → Optional[Tuple[str, Sequence[Generation]]][source]¶
Look up based on prompt and llm_string.
If there are hits, return (document_id, cached_entry)
lookup_with_id_through_llm(prompt: str, llm: LLM, stop: Optional[List[str]] = None) → Optional[Tuple[str, Sequence[Generation]]][source]¶
update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ | lang/api.python.langchain.com/en/latest/cache/langchain.cache.CassandraSemanticCache.html |
86502af505b3-3 | Update cache based on prompt and llm_string. | lang/api.python.langchain.com/en/latest/cache/langchain.cache.CassandraSemanticCache.html |
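A minimal setup sketch, assuming an open Cassandra session, an existing keyspace, and an embedding provider (all names are illustrative):
from cassandra.cluster import Cluster
from langchain.globals import set_llm_cache
from langchain.cache import CassandraSemanticCache
from langchain.embeddings import OpenAIEmbeddings

session = Cluster(["127.0.0.1"]).connect()
set_llm_cache(CassandraSemanticCache(
    session=session,
    keyspace="langchain_cache",
    embedding=OpenAIEmbeddings(),
    table_name="semantic_cache",
))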
ee177465870a-0 | langchain.llms.base.get_prompts¶
langchain.llms.base.get_prompts(params: Dict[str, Any], prompts: List[str]) → Tuple[Dict[int, List], str, List[int], List[str]][source]¶
Get prompts that are already cached. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.base.get_prompts.html |
1f2fea10ccf9-0 | langchain.llms.beam.Beam¶
class langchain.llms.beam.Beam[source]¶
Bases: LLM
Beam API for gpt2 large language model.
To use, you should have the beam-sdk python package installed,
and the environment variable BEAM_CLIENT_ID set with your client id
and BEAM_CLIENT_SECRET set with your client secret. Information on how
to get this is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be called.
Example
llm = Beam(model_name="gpt2",
name="langchain-gpt2",
cpu=8,
memory="32Gi",
gpu="A10G",
python_version="python3.8",
python_packages=[
"diffusers[torch]>=0.10",
"transformers",
"torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length=50)
llm._deploy()
call_result = llm._call(input)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param app_id: Optional[str] = None¶
param beam_client_id: str = ''¶
param beam_client_secret: str = ''¶
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param cpu: str = ''¶
param gpu: str = ''¶
param max_length: str = ''¶ | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
param memory: str = ''¶
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not
explicitly specified.
param model_name: str = ''¶
param name: str = ''¶
param python_packages: List[str] = []¶
param python_version: str = ''¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param url: str = ''¶
model endpoint to use
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
app_creation() → None[source]¶
Creates a Python file which will contain your Beam app definition.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
1f2fea10ccf9-6 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶ | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings. | lang/api.python.langchain.com/en/latest/llms/langchain.llms.beam.Beam.html |
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
run_creation() → None[source]¶
Creates a Python file which will be deployed on beam.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
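Example (a hedged sketch; without a streaming override the whole invoke() output arrives as one chunk, assuming the same llm instance):
.. code-block:: python
for chunk in llm.stream("Write a haiku about the sea."):
    # May be a single chunk if the subclass does not stream natively.
    print(chunk, end="", flush=True)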
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
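Example (a hedged sketch; backup_llm is a hypothetical second configured LLM instance used only for illustration):
.. code-block:: python
resilient = llm.with_fallbacks([backup_llm])
# If llm raises, the same input is retried against backup_llm.
print(resilient.invoke("Tell me a joke"))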
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
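Example (a hedged sketch; the exception type shown is a placeholder for whatever transient failure you expect):
.. code-block:: python
retrying = llm.with_retry(
    retry_if_exception_type=(TimeoutError,),  # placeholder transient error type
    stop_after_attempt=3,
)
print(retrying.invoke("Tell me a joke"))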
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Type[str]¶
Get the output type for this runnable.
property authorization: str¶
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using Beam¶
Beam
langchain.llms.cohere.completion_with_retry¶
langchain.llms.cohere.completion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
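Example (a hedged sketch; it assumes the keyword arguments are simply forwarded to the underlying Cohere client call, and the API key and model name are placeholders):
.. code-block:: python
from langchain.llms import Cohere
from langchain.llms.cohere import completion_with_retry

llm = Cohere(cohere_api_key="...")  # placeholder key
# Retries the wrapped client call with tenacity before raising.
response = completion_with_retry(llm, model="command", prompt="Hello")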
langchain.llms.amazon_api_gateway.AmazonAPIGateway¶
class langchain.llms.amazon_api_gateway.AmazonAPIGateway[source]¶
Bases: LLM
Amazon API Gateway to access LLM models hosted on AWS.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str [Required]¶
API Gateway URL
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param content_handler: langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway = <langchain.llms.amazon_api_gateway.ContentHandlerAmazonAPIGateway object>¶
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
param headers: Optional[Dict] = None¶
API Gateway HTTP Headers to send, e.g. for authentication
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_kwargs: Optional[Dict] = None¶
Keyword arguments to pass to the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
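Example (a hedged construction sketch; the endpoint URL, header, and model_kwargs values are placeholders for your own API Gateway deployment):
.. code-block:: python
from langchain.llms.amazon_api_gateway import AmazonAPIGateway

llm = AmazonAPIGateway(
    api_url="https://<api-id>.execute-api.<region>.amazonaws.com/prod/llm",  # placeholder
    headers={"Authorization": "Bearer <token>"},  # optional auth header
    model_kwargs={"max_new_tokens": 100, "temperature": 0.7},
)
print(llm("Tell me a joke"))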
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
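Example (a hedged sketch of the asynchronous batched call; assumes the llm instance constructed above):
.. code-block:: python
import asyncio

async def main() -> None:
    # One list of candidate Generations comes back per input prompt.
    result = await llm.agenerate(["Tell me a joke", "Tell me a fact"])
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())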
async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
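Example (a hedged sketch; assumes the same llm instance, with max_concurrency capping the parallel invoke() calls):
.. code-block:: python
outputs = llm.batch(
    ["First prompt", "Second prompt"],
    config={"max_concurrency": 2},  # cap parallel invoke() calls
)
for text in outputs:
    print(text)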
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
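Example (a minimal sketch that binds a default stop sequence; assumes the same llm instance):
.. code-block:: python
bound = llm.bind(stop=["\n"])  # every later call passes stop=["\n"]
print(bound.invoke("Finish this sentence: The sky is"))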
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
classmethod from_orm(obj: Any) → Model¶
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
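Example (a hedged sketch of inspecting the LLMResult from the plain-string generate variant; assumes the same llm instance):
.. code-block:: python
result = llm.generate(["Tell me a joke", "Tell me a fact"])
for generations in result.generations:
    print(generations[0].text)  # top candidate per prompt
print(result.llm_output)  # provider-specific metadata, may be None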
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
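Example (a quick pre-flight check; the count is approximate and depends on the default tokenizer, assuming the same llm instance):
.. code-block:: python
prompt = "How many tokens does this prompt use?"
print(llm.get_num_tokens(prompt))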
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
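Example (a hedged sketch using a single human message; assumes the same llm instance):
.. code-block:: python
from langchain.schema import HumanMessage

reply = llm.predict_messages(
    [HumanMessage(content="What is the capital of France?")],
    stop=["\n"],
)
print(reply.content)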
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: TypeAlias¶
Get the input type for this runnable.
property OutputType: Type[str]¶
Get the output type for this runnable.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶