langchain.text_splitter.LatexTextSplitter¶
classmethod from_language(language: Language, **kwargs: Any) → RecursiveCharacterTextSplitter¶
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
static get_separators_for_language(language: Language) → List[str]¶
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
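A minimal usage sketch for LatexTextSplitter, whose inherited methods are listed above (the sample LaTeX string and chunk size are illustrative assumptions, not values from the reference):
from langchain.text_splitter import LatexTextSplitter

latex_text = "\\section{Introduction}\nLarge language models are trained on massive corpora.\n\\section{Method}\nWe split documents into chunks before embedding them."
splitter = LatexTextSplitter(chunk_size=120, chunk_overlap=0)
docs = splitter.create_documents([latex_text])  # List[Document], split on LaTeX-aware separators
chunks = splitter.split_text(latex_text)        # List[str]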
langchain.text_splitter.Tokenizer¶
class langchain.text_splitter.Tokenizer(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[List[int]], str], encode: Callable[[str], List[int]])[source]¶
Tokenizer data class.
Attributes
chunk_overlap
Overlap in tokens between chunks
tokens_per_chunk
Maximum number of tokens per chunk
decode
Function to decode a list of token ids to a string
encode
Function to encode a string to a list of token ids
Methods
__init__(chunk_overlap, tokens_per_chunk, ...)
__init__(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[List[int]], str], encode: Callable[[str], List[int]]) → None¶
langchain.text_splitter.TokenTextSplitter¶
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]¶
Splitting text into tokens using the model tokenizer.
Create a new TextSplitter.
Methods
__init__([encoding_name, model_name, ...])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split text into multiple components.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → None[source]¶
Create a new TextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split text into multiple components.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using TokenTextSplitter¶
StarRocks
Split by tokens
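A minimal usage sketch for TokenTextSplitter (the sample text and chunk sizes are illustrative; the default "gpt2" encoding requires the tiktoken package):
from langchain.text_splitter import TokenTextSplitter

text = "LangChain provides token-aware splitters so that each chunk fits a model's context window."
splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=10, chunk_overlap=2)
chunks = splitter.split_text(text)        # List[str], each chunk at most 10 tokens long
docs = splitter.create_documents([text])  # the same splits wrapped as Document objects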
langchain.text_splitter.ElementType¶
class langchain.text_splitter.ElementType[source]¶
Element type as typed dict.
url: str¶
xpath: str¶
content: str¶
metadata: Dict[str, str]¶
langchain.text_splitter.HeaderType¶
class langchain.text_splitter.HeaderType[source]¶
Header type as typed dict.
level: int¶
name: str¶
data: str¶
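An illustrative sketch of the two TypedDicts above; the concrete values are made up, and any dict with these keys conforms:
from langchain.text_splitter import ElementType, HeaderType

element: ElementType = {
    "url": "https://example.com/docs",
    "xpath": "//section/h1",
    "content": "Introduction",
    "metadata": {"chapter": "1"},
}
header: HeaderType = {"level": 1, "name": "Header 1", "data": "Introduction"}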
langchain.text_splitter.split_text_on_tokens¶
langchain.text_splitter.split_text_on_tokens(*, text: str, tokenizer: Tokenizer) → List[str][source]¶
Split incoming text and return chunks using tokenizer.
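A minimal sketch pairing split_text_on_tokens with the Tokenizer data class documented above (assumes the tiktoken package; the sample text and sizes are illustrative):
import tiktoken
from langchain.text_splitter import Tokenizer, split_text_on_tokens

enc = tiktoken.get_encoding("gpt2")
tokenizer = Tokenizer(
    chunk_overlap=5,       # tokens shared between consecutive chunks
    tokens_per_chunk=50,   # maximum tokens per chunk
    decode=enc.decode,     # List[int] -> str
    encode=lambda t: enc.encode(t, disallowed_special=()),  # str -> List[int]
)
chunks = split_text_on_tokens(text="LangChain splits long documents into token-sized chunks.", tokenizer=tokenizer)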
langchain.text_splitter.CharacterTextSplitter¶
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', is_separator_regex: bool = False, **kwargs: Any)[source]¶
Splitting text by looking at characters.
Create a new TextSplitter.
Methods
__init__([separator, is_separator_regex])
Create a new TextSplitter.
atransform_documents(documents, **kwargs)
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts[, metadatas])
Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
Text splitter that uses HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
Text splitter that uses tiktoken encoder to count length.
split_documents(documents)
Split documents.
split_text(text)
Split incoming text and return chunks.
transform_documents(documents, **kwargs)
Transform sequence of documents by splitting them.
__init__(separator: str = '\n\n', is_separator_regex: bool = False, **kwargs: Any) → None[source]¶
Create a new TextSplitter.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[Document]¶
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter¶
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → TS¶
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[Document]) → List[Document]¶
Split documents.
split_text(text: str) → List[str][source]¶
Split incoming text and return chunks.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Transform sequence of documents by splitting them.
Examples using CharacterTextSplitter¶
Confident
Hugging Face
OpenAI
Elasticsearch
Vectara Text Generation
Document Comparison
Vectorstore
LanceDB
sqlite-vss
Weaviate
DashVector
ScaNN
Xata
Vectara
PGVector
Rockset
DingoDB
Zilliz
SingleStoreDB
Annoy
Typesense
Activeloop Deep Lake
Neo4j Vector Index
Tair
Chroma
Alibaba Cloud OpenSearch
Baidu Cloud VectorSearch
StarRocks
scikit-learn
Tencent Cloud VectorDB
DocArray HnswSearch
MyScale
ClickHouse
Qdrant
Tigris
AwaDB
Supabase (Postgres)
OpenSearch
Pinecone
BagelDB
Azure Cognitive Search
Cassandra
USearch
Milvus
Marqo
DocArray InMemorySearch
Postgres Embedding
Faiss
Epsilla
AnalyticDB
Hologres
MongoDB Atlas
Meilisearch
Figma
Psychic
Manifest
LLM Caching integrations
Set env var OPENAI_API_KEY or load from a .env file
Conversational Retrieval Agent
Retrieve from vector stores directly
Improve document indexing with HyDE
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Activeloop’s Deep Lake
Use LangChain, GPT and Activeloop’s Deep Lake to work with code base
Structure answers with OpenAI functions
QA using Activeloop’s DeepLake
SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base
Indexing
Caching
Split by tokens
Memory in the Multi-Input Chain
Combine agents and vector stores
Loading from LangChainHub
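A minimal usage sketch for CharacterTextSplitter (the separator, chunk sizes, text, and metadata are illustrative):
from langchain.text_splitter import CharacterTextSplitter

raw_text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=40, chunk_overlap=0)
chunks = splitter.split_text(raw_text)  # List[str] split on the blank-line separator
docs = splitter.create_documents([raw_text], metadatas=[{"source": "example.txt"}])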
langchain.hub.pull¶
langchain.hub.pull(owner_repo_commit: str, *, api_url: Optional[str] = None, api_key: Optional[str] = None) → Any[source]¶
Pulls an object from the hub and returns it as a LangChain object.
Parameters
owner_repo_commit – The full name of the repo to pull from in the format of
owner/repo:commit_hash.
api_url – The URL of the LangChain Hub API. Defaults to the hosted API service
if you have an api key set, or a localhost instance if not.
api_key – The API key to use to authenticate with the LangChain Hub API.
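A minimal usage sketch for hub.pull (the repo handle is a placeholder; api_key and api_url are only needed when not relying on the defaults described above):
from langchain import hub

# Pull the latest commit of a repo; append ":commit_hash" to pin a specific commit.
prompt = hub.pull("owner/repo")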
langchain.hub.push¶
langchain.hub.push(repo_full_name: str, object: Any, *, api_url: Optional[str] = None, api_key: Optional[str] = None, parent_commit_hash: Optional[str] = 'latest', new_repo_is_public: bool = True, new_repo_description: str = '') → str[source]¶
Pushes an object to the hub and returns the URL it can be viewed at in a browser.
Parameters
repo_full_name – The full name of the repo to push to in the format of
owner/repo.
object – The LangChain object to serialize and push to the hub.
api_url – The URL of the LangChain Hub API. Defaults to the hosted API service
if you have an api key set, or a localhost instance if not.
api_key – The API key to use to authenticate with the LangChain Hub API.
parent_commit_hash – The commit hash of the parent commit to push to. Defaults
to the latest commit automatically.
new_repo_is_public – Whether the repo should be public. Defaults to
True (Public by default).
new_repo_description – The description of the repo. Defaults to an empty
string.
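A minimal usage sketch for hub.push (the handle and prompt are placeholders; it assumes your LangChain Hub API key is configured as described above):
from langchain import hub
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Summarize the following text:\n\n{text}")
url = hub.push("owner/repo", prompt, new_repo_is_public=False, new_repo_description="Summarization prompt")
print(url)  # URL where the pushed object can be viewed in a browser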
langchain.smith.evaluation.runner_utils.TestResult¶
class langchain.smith.evaluation.runner_utils.TestResult[source]¶
A dictionary of the results of a single test run.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
get_aggregate_feedback([quantiles])
Return quantiles for the feedback scores.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
to_dataframe()
Convert the results to a dataframe.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
get_aggregate_feedback(quantiles: Optional[Sequence[float]] = None) → pd.DataFrame[source]¶
Return quantiles for the feedback scores.
This method calculates and prints the quantiles for the feedback scores
across all feedback keys.
Returns
A DataFrame containing the quantiles for each feedback key.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
to_dataframe() → pd.DataFrame[source]¶
Convert the results to a dataframe.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶
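A hedged sketch of working with a TestResult; it assumes test_result is the object returned by a dataset test run (e.g. run_on_dataset) and that pandas is installed:
# One row per dataset example, with feedback scores as columns.
df = test_result.to_dataframe()
# Quantiles of each feedback key across the run.
summary = test_result.get_aggregate_feedback()
print(summary)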
langchain.smith.evaluation.config.EvalConfig¶
class langchain.smith.evaluation.config.EvalConfig[source]¶
Bases: BaseModel
Configuration for a given run evaluator.
Parameters
evaluator_type (EvaluatorType) – The type of evaluator to use.
get_kwargs()[source]¶
Get the keyword arguments for the evaluator configuration.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType [Required]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any][source]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
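A hedged sketch of a custom EvalConfig subclass; the criteria field and its value are illustrative, and per the docstring above get_kwargs() is expected to surface such extra fields as keyword arguments for the load_evaluator call:
from langchain.evaluation.schema import EvaluatorType
from langchain.smith.evaluation.config import EvalConfig

class ConcisenessCriteria(EvalConfig):
    evaluator_type: EvaluatorType = EvaluatorType.CRITERIA
    criteria: str = "conciseness"  # illustrative extra field forwarded to load_evaluator

kwargs = ConcisenessCriteria().get_kwargs()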
langchain.smith.evaluation.string_run_evaluator.StringRunEvaluatorChain¶
class langchain.smith.evaluation.string_run_evaluator.StringRunEvaluatorChain[source]¶
Bases: Chain, RunEvaluator
Evaluate Run and optional examples.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param example_mapper: Optional[StringExampleMapper] = None¶
Maps the Example (dataset row) to a dictionary
with a ‘reference’ string.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param name: str [Required]¶
The name of the evaluation metric.
param run_mapper: StringRunMapper [Required]¶
Maps the Run to a dictionary with ‘input’ and ‘prediction’ strings.
param string_evaluator: StringEvaluator [Required]¶
The evaluation chain.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the global verbose value,
accessible via langchain.globals.get_verbose().
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aevaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
Evaluate an example.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
evaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
Evaluate an example.
classmethod from_orm(obj: Any) → Model¶
classmethod from_run_and_data_type(evaluator: StringEvaluator, run_type: str, data_type: DataType, input_key: Optional[str] = None, prediction_key: Optional[str] = None, reference_key: Optional[str] = None, tags: Optional[List[str]] = None) → StringRunEvaluatorChain[source]¶
Create a StringRunEvaluatorChain from an evaluator and the run and dataset types.
This method provides an easy way to instantiate a StringRunEvaluatorChain, by
taking an evaluator and information about the type of run and the data.
The method supports LLM and chain runs.
Parameters
evaluator (StringEvaluator) – The string evaluator to use.
run_type (str) – The type of run being evaluated.
Supported types are LLM and Chain.
data_type (DataType) – The type of dataset used in the run.
input_key (str, optional) – The key used to map the input from the run.
prediction_key (str, optional) – The key used to map the prediction from the run.
reference_key (str, optional) – The key used to map the reference from the dataset.
tags (List[str], optional) – List of tags to attach to the evaluation chain.
Returns
The instantiated evaluation chain.
Return type
StringRunEvaluatorChain
Raises
ValueError – If the run type is not supported, or if the evaluator requires a
reference from the dataset but the reference key is not provided.
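A hedged sketch of from_run_and_data_type; the evaluator, run type, data type, and key names below are assumptions chosen for illustration, not prescribed values:
from langchain.evaluation import load_evaluator
from langchain.smith.evaluation.string_run_evaluator import StringRunEvaluatorChain
from langsmith.schemas import DataType

string_evaluator = load_evaluator("qa")  # any StringEvaluator should work here
run_evaluator = StringRunEvaluatorChain.from_run_and_data_type(
    evaluator=string_evaluator,
    run_type="chain",
    data_type=DataType.kv,
    input_key="question",
    prediction_key="answer",
    reference_key="answer",
)
# result = run_evaluator.evaluate_run(run, example)  # given a traced Run and dataset Example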
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_keys: List[str]¶
Keys expected to be in the chain input.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_keys: List[str]¶
Keys expected to be in the chain output.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.ChainStringRunMapper[source]¶
Bases: StringRunMapper
Extract items to evaluate from the run object of a chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_key: Optional[str] = None¶
The key from the model Run’s inputs to use as the eval input.
If not provided, will use the only input key or raise an
error if there are multiple.
param prediction_key: Optional[str] = None¶
The key from the model Run’s outputs to use as the eval prediction.
If not provided, will use the only output key or raise an error
if there are multiple.
__call__(run: Run) → Dict[str, str]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_keys: List[str]¶
The keys to extract from the run.
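A hedged sketch of ChainStringRunMapper; the key names are illustrative and the commented call assumes a traced LangSmith Run whose inputs and outputs contain them:
from langchain.smith.evaluation.string_run_evaluator import ChainStringRunMapper

run_mapper = ChainStringRunMapper(input_key="question", prediction_key="answer")
# run_mapper(run)  # -> {"input": "<question text>", "prediction": "<answer text>"}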
langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.LLMStringRunMapper[source]¶
Bases: StringRunMapper
Extract items to evaluate from the run object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
__call__(run: Run) → Dict[str, str]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
serialize_chat_messages(messages: List[Dict]) → str[source]¶
Extract the input messages from the run.
serialize_inputs(inputs: Dict) → str[source]¶
serialize_outputs(outputs: Dict) → str[source]¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_keys: List[str]¶
The keys to extract from the run.
langchain.smith.evaluation.config.RunEvalConfig¶
class langchain.smith.evaluation.config.RunEvalConfig[source]¶
Bases: BaseModel
Configuration for a run evaluation.
Parameters
evaluators (List[Union[EvaluatorType, EvalConfig]]) – Configurations for which evaluators to apply to the dataset run.
Each can be the string of an EvaluatorType, such
as EvaluatorType.QA, the evaluator type string (“qa”), or a configuration for a
given evaluator (e.g., RunEvalConfig.QA).
custom_evaluators (Optional[List[Union[RunEvaluator, StringEvaluator]]]) – Custom evaluators to apply to the dataset run.
reference_key (Optional[str]) – The key in the dataset run to use as the reference string.
If not provided, it will be inferred automatically.
prediction_key (Optional[str]) – The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
input_key (Optional[str]) – The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
eval_llm (Optional[BaseLanguageModel]) – The language model to pass to any evaluators that use a language model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param custom_evaluators: Optional[List[Union[langsmith.evaluation.evaluator.RunEvaluator, langchain.evaluation.schema.StringEvaluator]]] = None¶
Custom evaluators to apply to the dataset run.
param eval_llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
3ccdfd4e760f-1 | The language model to pass to any evaluators that require one.
param evaluators: List[Union[langchain.evaluation.schema.EvaluatorType, str, langchain.smith.evaluation.config.EvalConfig]] [Optional]¶
Configurations for which evaluators to apply to the dataset run.
Each can be an EvaluatorType member such as EvaluatorType.QA, the corresponding
evaluator type string ("qa"), or a configuration for a given evaluator
(e.g., RunEvalConfig.QA).
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
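Example (an illustrative sketch, not part of the generated reference; the evaluator choices and the reference_key value "answer" are assumptions about your dataset):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        "qa",  # shorthand for EvaluatorType.QA
        RunEvalConfig.Criteria(criteria="conciseness"),
    ],
    reference_key="answer",  # assumed dataset column; usually inferred automatically
)
A config like this is typically passed as the evaluation argument of langchain.smith.run_on_dataset.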
class CoTQA[source]¶
Bases: SingleKeyEvalConfig
Configuration for a chain-of-thought (CoT) QA evaluator.
Parameters
prompt (Optional[BasePromptTemplate]) – The prompt template to use for generating the question.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.CONTEXT_QA¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
3ccdfd4e760f-2 | input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
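Example (an illustrative sketch; the GPT-4 grader passed via eval_llm is an assumption and may be omitted to use the default):
from langchain.chat_models import ChatOpenAI
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.CoTQA()],
    eval_llm=ChatOpenAI(model="gpt-4", temperature=0),  # assumed grading model
)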
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
3ccdfd4e760f-3 | deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
3ccdfd4e760f-4 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class ContextQA[source]¶
Bases: SingleKeyEvalConfig
Configuration for a context-based QA evaluator.
Parameters
prompt (Optional[BasePromptTemplate]) – The prompt template to use for generating the question.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.CONTEXT_QA¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
3ccdfd4e760f-5 | param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
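Example (an illustrative sketch; the reference_key value "context" is an assumption about which dataset column holds the grading context):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.ContextQA()],
    reference_key="context",
)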
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
3ccdfd4e760f-6 | Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
3ccdfd4e760f-7 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class Criteria[source]¶
Bases: SingleKeyEvalConfig
Configuration for a reference-free criteria evaluator.
Parameters
criteria (Optional[CRITERIA_TYPE]) – The criteria to evaluate.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param criteria: Optional[Union[Mapping[str, str], langchain.evaluation.criteria.eval_chain.Criteria, langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.CRITERIA¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
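Example (an illustrative sketch; the built-in criterion name and the custom criterion text are assumptions):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.Criteria(criteria="conciseness"),
        RunEvalConfig.Criteria(
            criteria={"politeness": "Is the response polite and respectful?"}
        ),
    ]
)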
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
3ccdfd4e760f-8 | Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
3ccdfd4e760f-9 | The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class EmbeddingDistance[source]¶
3ccdfd4e760f-10 | Bases: SingleKeyEvalConfig
Configuration for an embedding distance evaluator.
Parameters
embeddings (Optional[Embeddings]) – The embeddings to use for computing the distance.
distance_metric (Optional[EmbeddingDistanceEnum]) – The distance metric to use for computing the distance.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param distance_metric: Optional[langchain.evaluation.embedding_distance.base.EmbeddingDistance] = None¶
param embeddings: Optional[langchain.schema.embeddings.Embeddings] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.EMBEDDING_DISTANCE¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
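Example (an illustrative sketch; the OpenAI embeddings and cosine metric are assumptions, and both fields may be omitted to use the defaults):
from langchain.embeddings import OpenAIEmbeddings
from langchain.evaluation.embedding_distance.base import EmbeddingDistance
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.EmbeddingDistance(
            embeddings=OpenAIEmbeddings(),
            distance_metric=EmbeddingDistance.COSINE,
        )
    ]
)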
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
3ccdfd4e760f-11 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
3ccdfd4e760f-12 | The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class ExactMatch[source]¶
3ccdfd4e760f-13 | Bases: SingleKeyEvalConfig
Configuration for an exact match string evaluator.
Parameters
ignore_case (bool) – Whether to ignore case when comparing strings.
ignore_punctuation (bool) – Whether to ignore punctuation when comparing strings.
ignore_numbers (bool) – Whether to ignore numbers when comparing strings.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.STRING_DISTANCE¶
param ignore_case: bool = False¶
param ignore_numbers: bool = False¶
param ignore_punctuation: bool = False¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
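Example (an illustrative sketch; the flag choices below are assumptions):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.ExactMatch(ignore_case=True, ignore_punctuation=True)
    ]
)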
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
3ccdfd4e760f-14 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
3ccdfd4e760f-15 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class JsonEqualityEvaluator[source]¶
Bases: EvalConfig
Configuration for a json equality evaluator.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.JSON_EQUALITY¶
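Example (an illustrative sketch; it assumes the run output and the dataset reference are both meant to be parsed and compared as JSON):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.JsonEqualityEvaluator()]
)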
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
3ccdfd4e760f-16 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
3ccdfd4e760f-17 | The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class JsonValidity[source]¶
3ccdfd4e760f-18 | Bases: SingleKeyEvalConfig
Configuration for a json validity evaluator.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.JSON_VALIDITY¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
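Example (an illustrative sketch; the evaluator checks only that the prediction parses as valid JSON, so no reference-related options are set):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.JsonValidity()]
)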
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
3ccdfd4e760f-19 | the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
3ccdfd4e760f-20 | classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class LabeledCriteria[source]¶
Bases: SingleKeyEvalConfig
Configuration for a labeled (with references) criteria evaluator.
Parameters
criteria (Optional[CRITERIA_TYPE]) – The criteria to evaluate.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param criteria: Optional[Union[Mapping[str, str], langchain.evaluation.criteria.eval_chain.Criteria, langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.LABELED_CRITERIA¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
3ccdfd4e760f-21 | param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
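Example (an illustrative sketch; the criterion and the reference_key value are assumptions about your dataset):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.LabeledCriteria(criteria="correctness")],
    reference_key="answer",  # assumed column holding the reference label
)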
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
3ccdfd4e760f-22 | Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
3ccdfd4e760f-23 | classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class LabeledScoreString[source]¶
Bases: ScoreString
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param criteria: Optional[Union[Mapping[str, str], langchain.evaluation.criteria.eval_chain.Criteria, langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.LABELED_SCORE_STRING¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param normalize_by: Optional[float] = None¶
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
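Example (an illustrative sketch; the criterion and the normalization choice are assumptions):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.LabeledScoreString(
            criteria="correctness",
            normalize_by=10.0,  # divide the 1-10 grade so scores land in [0, 1]
        )
    ]
)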
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
3ccdfd4e760f-24 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
3ccdfd4e760f-25 | The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class QA[source]¶
3ccdfd4e760f-26 | Bases: SingleKeyEvalConfig
Configuration for a QA evaluator.
Parameters
prompt (Optional[BasePromptTemplate]) – The prompt template to use for generating the question.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.QA¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
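Example (an illustrative sketch; with no overrides this is roughly equivalent to passing the "qa" evaluator string, and prompt or llm can be set to customize it):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(evaluators=[RunEvalConfig.QA()])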
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
3ccdfd4e760f-27 | Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
3ccdfd4e760f-28 | The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class RegexMatch[source]¶
3ccdfd4e760f-29 | Bases: SingleKeyEvalConfig
Configuration for a regex match string evaluator.
Parameters
flags (int) – The flags to pass to the regex. Example: re.IGNORECASE.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.REGEX_MATCH¶
param flags: int = 0¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
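Example (an illustrative sketch; it assumes the dataset reference holds the regex pattern to match against, and the IGNORECASE flag is an arbitrary choice):
import re

from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[RunEvalConfig.RegexMatch(flags=re.IGNORECASE)]
)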
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
3ccdfd4e760f-30 | Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
3ccdfd4e760f-31 | classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class ScoreString[source]¶
Bases: SingleKeyEvalConfig
Configuration for a score string evaluator.
This is like the criteria evaluator, but it is configured by
default to return a score on a scale from 1 to 10.
It is recommended to normalize these scores
by setting normalize_by to 10.
Parameters
criteria (Optional[CRITERIA_TYPE]) – The criteria to evaluate.
llm (Optional[BaseLanguageModel]) – The language model to use for the evaluation chain.
normalize_by (Optional[float]) – The denominator to use if you want to normalize the score.
If not provided, the score will be between 1 and 10 by default.
prompt (Optional[BasePromptTemplate]) –
Create a new model by parsing and validating input data from keyword arguments.
3ccdfd4e760f-32 | Raises ValidationError if the input data cannot be parsed to form a valid model.
param criteria: Optional[Union[Mapping[str, str], langchain.evaluation.criteria.eval_chain.Criteria, langchain.chains.constitutional_ai.models.ConstitutionalPrinciple]] = None¶
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.SCORE_STRING¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
param normalize_by: Optional[float] = None¶
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param prompt: Optional[langchain.schema.prompt_template.BasePromptTemplate] = None¶
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
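Example (an illustrative sketch; the criterion name is an assumption):
from langchain.smith import RunEvalConfig

eval_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.ScoreString(
            criteria="helpfulness",
            normalize_by=10.0,  # rescale the default 1-10 grade to 0-1
        )
    ]
)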
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
3ccdfd4e760f-33 | Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
3ccdfd4e760f-34 | Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
class StringDistance[source]¶
Bases: SingleKeyEvalConfig
Configuration for a string distance evaluator.
Parameters
distance (Optional[StringDistanceEnum]) – The string distance metric to use.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param distance: Optional[langchain.evaluation.string_distance.base.StringDistance] = None¶
The string distance metric to use.
damerau_levenshtein: The Damerau-Levenshtein distance.
levenshtein: The Levenshtein distance.
jaro: The Jaro distance.
jaro_winkler: The Jaro-Winkler distance. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
param evaluator_type: langchain.evaluation.schema.EvaluatorType = EvaluatorType.STRING_DISTANCE¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param normalize_score: bool = True¶
Whether to normalize the distance to between 0 and 1.
Applies only to the Levenshtein and Damerau-Levenshtein distances.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
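A minimal usage sketch (not part of the original reference): configuring a string distance evaluator with the fields above, importing the distance enum from the module path documented for the distance field and leaving the run/dataset keys to be inferred.
from langchain.evaluation.string_distance.base import StringDistance as StringDistanceEnum
from langchain.smith import RunEvalConfig

# Sketch: score predictions against the reference answer with Levenshtein
# distance, normalized to [0, 1] as described above.
evaluation_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.StringDistance(
            distance=StringDistanceEnum.LEVENSHTEIN,
            normalize_score=True,
        )
    ]
)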
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_kwargs() → Dict[str, Any]¶
Get the keyword arguments for the load_evaluator call.
Returns
The keyword arguments for the load_evaluator call.
Return type
Dict[str, Any]
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using RunEvalConfig¶
LangSmith Walkthrough | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.RunEvalConfig.html |
langchain.smith.evaluation.name_generation.random_name¶
langchain.smith.evaluation.name_generation.random_name(prefix: str = 'test') → str[source]¶
Generate a random name. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.name_generation.random_name.html |
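A small, hedged usage sketch; the exact format of the generated string is not specified here.
from langchain.smith.evaluation.name_generation import random_name

# Produces a random, prefix-tagged name, e.g. for ad-hoc project names.
project_name = random_name(prefix="my-eval")
print(project_name)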
langchain.smith.evaluation.runner_utils.InputFormatError¶
class langchain.smith.evaluation.runner_utils.InputFormatError[source]¶
Raised when the input format is invalid. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.runner_utils.InputFormatError.html |
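An illustrative, hypothetical sketch of where this exception fits: a helper that validates a run's output mapping might raise it when an expected key is missing (the helper itself is not part of the library).
from langchain.smith.evaluation.runner_utils import InputFormatError

def get_prediction(outputs: dict, prediction_key: str) -> str:
    # Hypothetical helper: signal a malformed example/output mapping.
    if prediction_key not in outputs:
        raise InputFormatError(
            f"Missing prediction key {prediction_key!r} in outputs: {list(outputs)}"
        )
    return outputs[prediction_key]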
langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper[source]¶
Bases: StringRunMapper
Map an input to the tool.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
__call__(run: Run) → Dict[str, str]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper.html |
map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_keys: List[str]¶
The keys to extract from the run. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.ToolStringRunMapper.html |
langchain.smith.evaluation.runner_utils.run_on_dataset¶
langchain.smith.evaluation.runner_utils.run_on_dataset(client: Optional[Client], dataset_name: str, llm_or_chain_factory: Union[Callable[[], Union[Chain, Runnable]], BaseLanguageModel, Callable[[dict], Any], Runnable, Chain], *, evaluation: Optional[RunEvalConfig] = None, concurrency_level: int = 5, project_name: Optional[str] = None, project_metadata: Optional[Dict[str, Any]] = None, verbose: bool = False, tags: Optional[List[str]] = None, **kwargs: Any) → Dict[str, Any][source]¶
Run the Chain or language model on a dataset and store traces
to the specified project name.
Parameters
dataset_name – Name of the dataset to run the chain on.
llm_or_chain_factory – Language model or Chain constructor to run
over the dataset. The Chain constructor is used to permit
independent calls on each example without carrying over state.
evaluation – Configuration for evaluators to run on the
results of the chain
concurrency_level – The number of async tasks to run concurrently.
project_name – Name of the project to store the traces in.
Defaults to {dataset_name}-{chain class name}-{datetime}.
project_metadata – Optional metadata to add to the project.
Useful for storing information about the test variant
(prompt version, model version, etc.)
client – LangSmith client to use to access the dataset and to
log feedback and run traces.
verbose – Whether to print progress.
tags – Tags to add to each run in the project.
Returns
A dictionary containing the run’s project name and the resulting model outputs.
For the (usually faster) async version of this function, see arun_on_dataset().
Examples
from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset

# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(
        llm,
        "What's the answer to {your_input_key}"
    )
    return chain

# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)

client = Client()
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):

    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}

evaluation_config = RunEvalConfig(
    custom_evaluators=[MyStringEvaluator()],
)
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
Examples using run_on_dataset¶
LangSmith Walkthrough | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.runner_utils.run_on_dataset.html |
langchain.smith.evaluation.runner_utils.arun_on_dataset¶
async langchain.smith.evaluation.runner_utils.arun_on_dataset(client: Optional[Client], dataset_name: str, llm_or_chain_factory: Union[Callable[[], Union[Chain, Runnable]], BaseLanguageModel, Callable[[dict], Any], Runnable, Chain], *, evaluation: Optional[RunEvalConfig] = None, concurrency_level: int = 5, project_name: Optional[str] = None, project_metadata: Optional[Dict[str, Any]] = None, verbose: bool = False, tags: Optional[List[str]] = None, **kwargs: Any) → Dict[str, Any][source]¶
Run the Chain or language model on a dataset and store traces
to the specified project name.
Parameters
dataset_name – Name of the dataset to run the chain on.
llm_or_chain_factory – Language model or Chain constructor to run
over the dataset. The Chain constructor is used to permit
independent calls on each example without carrying over state.
evaluation – Configuration for evaluators to run on the
results of the chain
concurrency_level – The number of async tasks to run concurrently.
project_name – Name of the project to store the traces in.
Defaults to {dataset_name}-{chain class name}-{datetime}.
project_metadata – Optional metadata to add to the project.
Useful for storing information about the test variant
(prompt version, model version, etc.)
client – LangSmith client to use to access the dataset and to
log feedback and run traces.
verbose – Whether to print progress.
tags – Tags to add to each run in the project.
Returns
A dictionary containing the run’s project name and the resulting model outputs.
For the synchronous version of this function, see run_on_dataset().
Examples
from langsmith import Client
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, arun_on_dataset

# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(
        llm,
        "What's the answer to {your_input_key}"
    )
    return chain

# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)

client = Client()
await arun_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):

    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}

evaluation_config = RunEvalConfig(
    custom_evaluators=[MyStringEvaluator()],
)
await arun_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
Examples using arun_on_dataset¶
LangSmith Walkthrough | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.runner_utils.arun_on_dataset.html |
langchain.smith.evaluation.string_run_evaluator.StringRunMapper¶
class langchain.smith.evaluation.string_run_evaluator.StringRunMapper[source]¶
Bases: Serializable
Extract items to evaluate from the run object.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
__call__(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringRunMapper.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringRunMapper.html |
abstract map(run: Run) → Dict[str, str][source]¶
Maps the Run to a dictionary.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_keys: List[str]¶
The keys to extract from the run. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringRunMapper.html |
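Since map() is abstract, concrete mappers subclass StringRunMapper. A minimal, hypothetical sketch follows (the class name and key choice are illustrative, and Run is assumed importable from langsmith.schemas).
from typing import Dict, List

from langsmith.schemas import Run

from langchain.smith.evaluation.string_run_evaluator import StringRunMapper

class FirstOutputRunMapper(StringRunMapper):
    """Expose the first value of a run's outputs as the prediction string."""

    @property
    def output_keys(self) -> List[str]:
        return ["prediction"]

    def map(self, run: Run) -> Dict[str, str]:
        if not run.outputs:
            raise ValueError(f"Run {run.id} has no outputs to evaluate.")
        first_value = next(iter(run.outputs.values()))
        return {"prediction": str(first_value)}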
langchain.smith.evaluation.progress.ProgressBarCallback¶
class langchain.smith.evaluation.progress.ProgressBarCallback(total: int, ncols: int = 50, **kwargs: Any)[source]¶
A simple progress bar for the console.
Initialize the progress bar.
Parameters
total – int, the total number of items to be processed.
ncols – int, the character width of the progress bar.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(total[, ncols])
Initialize the progress bar.
increment()
Increment the counter and update the progress bar.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, parent_run_id])
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...]) | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.progress.ProgressBarCallback.html |
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__(total: int, ncols: int = 50, **kwargs: Any)[source]¶
Initialize the progress bar.
Parameters
total – int, the total number of items to be processed.
ncols – int, the character width of the progress bar.
increment() → None[source]¶
Increment the counter and update the progress bar.
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.progress.ProgressBarCallback.html |
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when LLM errors.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.progress.ProgressBarCallback.html |
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk,
containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.progress.ProgressBarCallback.html |
on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running. | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.progress.ProgressBarCallback.html |
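A hedged usage sketch (assumes an OpenAI API key is configured; reuses the LLMChain example from the run_on_dataset docs above). Each completed chain call triggers on_chain_end, advancing the bar.
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.smith.evaluation.progress import ProgressBarCallback

llm = ChatOpenAI(temperature=0)
chain = LLMChain.from_string(llm, "What's the answer to {your_input_key}")

questions = ["1 + 1?", "the capital of France?", "the color of the sky?"]
progress = ProgressBarCallback(total=len(questions), ncols=40)

for question in questions:
    # Attach the progress bar to each call via the callbacks config.
    chain.invoke({"your_input_key": question}, config={"callbacks": [progress]})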
langchain.smith.evaluation.config.SingleKeyEvalConfig¶
class langchain.smith.evaluation.config.SingleKeyEvalConfig[source]¶
Bases: EvalConfig
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param evaluator_type: langchain.evaluation.schema.EvaluatorType [Required]¶
param input_key: Optional[str] = None¶
The key from the traced run’s inputs dictionary to use to represent the
input. If not provided, it will be inferred automatically.
param prediction_key: Optional[str] = None¶
The key from the traced run’s outputs dictionary to use to
represent the prediction. If not provided, it will be inferred
automatically.
param reference_key: Optional[str] = None¶
The key in the dataset run to use as the reference string.
If not provided, we will attempt to infer automatically.
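A hedged sketch of pinning these keys explicitly on a single-key evaluator config, using the QA config documented elsewhere in this reference; the key names are placeholders for whatever the dataset and chain actually use.
from langchain.smith import RunEvalConfig

evaluation_config = RunEvalConfig(
    evaluators=[
        RunEvalConfig.QA(
            input_key="question",             # key in the traced run's inputs
            prediction_key="answer",          # key in the traced run's outputs
            reference_key="expected_answer",  # key in the dataset example
        )
    ]
)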
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.SingleKeyEvalConfig.html |