status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | chunk_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
stringclasses (1 value) | stringclasses (31 values) | stringclasses (31 values) | int64 (1–104k) | stringlengths (4–233) | stringlengths (0–186k) | stringlengths (38–56) | stringlengths (37–54) | stringlengths (40) | stringlengths (40) | unknown | stringclasses (5 values) | unknown | stringlengths (7–188) | stringlengths (1–1.03M)
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,720 | AttributeError: 'GPT4All' object has no attribute 'model_type' (langchain 0.0.190)

### System Info
Hi, this is related to #5651, but (on my machine ;)) the issue is still there.
## Versions
* Intel Mac with latest OSX
* Python 3.11.2
* langchain 0.0.190, includes fix for #5651
* ggml-mpt-7b-instruct.bin, downloaded on June 5th from https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
### Who can help?
@pakcheera @bwv988 First of all: thanks for the report and the fix :). Did this issue disappear on your machines?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Error message
```shell
Traceback (most recent call last)

/Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/chat.py:30 in <module>

   27       model_name="all-mpnet-base-v2")
   28
   29   # see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
 ❱ 30   llm = GPT4All(
   31       model="./ggml-mpt-7b-instruct.bin",
   32       #backend='gptj',
   33       top_p=0.5,

in pydantic.main.BaseModel.__init__:339
in pydantic.main.validate_model:1102

/Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/venv/lib/python3.11/site-packages/langchain/llms/gpt4all.py:156 in validate_environment

   153           if values["n_threads"] is not None:
   154               # set n_threads
   155               values["client"].model.set_thread_count(values["n_threads"])
 ❱ 156           values["backend"] = values["client"].model_type
   157
   158           return values
   159

AttributeError: 'GPT4All' object has no attribute 'model_type'
```
As you can see, _gpt4all.py:156_ contains the change from the fix of #5651.
## Code
```python
from langchain.llms import GPT4All
# see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
llm = GPT4All(
    model="./ggml-mpt-7b-instruct.bin",
    # backend='gptj',
    top_p=0.5,
    top_k=0,
    temp=0.1,
    repeat_penalty=0.8,
    n_threads=12,
    n_batch=16,
    n_ctx=2048)
```
FYI I am following [this example in a blog post](https://dev.to/akshayballal/beyond-openai-harnessing-open-source-models-to-create-your-personalized-ai-companion-1npb).
### Expected behavior
I expect an instance of _GPT4All_ instead of a stacktrace.

https://github.com/langchain-ai/langchain/issues/5720 | https://github.com/langchain-ai/langchain/pull/5743 | d0d89d39efb5f292f72e70973f3b70c4ca095047 | 74f8e603d942ca22ed07bf0ea23a57ed67b36b2c | "2023-06-05T09:44:08Z" | python | "2023-06-05T19:45:29Z" | langchain/llms/gpt4all.py

    class Config:  # (class header restored; this chunk begins inside class GPT4All)
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @staticmethod
    def _model_param_names() -> Set[str]:
        return {
            "n_ctx",
            "n_predict",
            "top_k",
            "top_p",
            "temp",
            "n_batch",
            "repeat_penalty",
            "repeat_last_n",
            "context_erase",
        }

    def _default_params(self) -> Dict[str, Any]:
        return {
            "n_ctx": self.n_ctx,
            "n_predict": self.n_predict,
            "top_k": self.top_k,
            "top_p": self.top_p,
            "temp": self.temp,
            "n_batch": self.n_batch,
            "repeat_penalty": self.repeat_penalty,
            "repeat_last_n": self.repeat_last_n,
            "context_erase": self.context_erase,
        }

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in the environment."""
        try:
            from gpt4all import GPT4All as GPT4AllModel
        except ImportError:
            raise ImportError(
                "Could not import gpt4all python package. "
                "Please install it with `pip install gpt4all`."
            )
        full_path = values["model"]
        model_path, delimiter, model_name = full_path.rpartition("/")
        model_path += delimiter
        values["client"] = GPT4AllModel(
            model_name,
            model_path=model_path or None,
            model_type=values["backend"],
            allow_download=values["allow_download"],
        )
        if values["n_threads"] is not None:
            # set n_threads
            values["client"].model.set_thread_count(values["n_threads"])
        values["backend"] = values["client"].model_type
        return values
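    # Hedged sketch (not necessarily the patch shipped in PR 5743): the
    # AttributeError from the issue is raised by the last assignment above when
    # the installed gpt4all wrapper no longer exposes `model_type`. Treating the
    # attribute as optional avoids the crash while keeping the old behavior:
    #
    #     values["backend"] = getattr(values["client"], "model_type", values["backend"])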
    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {
            "model": self.model,
            **self._default_params(),
            **{
                k: v for k, v in self.__dict__.items() if k in self._model_param_names()
            },
        }

    @property
    def _llm_type(self) -> str:
        """Return the type of llm."""
        return "gpt4all"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        r"""Call out to GPT4All's generate method.

        Args:
            prompt: The prompt to pass into the model.
            stop: A list of strings to stop generation when encountered.

        Returns:
            The string generated by the model.

        Example:
            .. code-block:: python

                prompt = "Once upon a time, "
                response = model(prompt, n_predict=55)
        """
        text_callback = None
        if run_manager:
            text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)
        text = ""
        for token in self.client.generate(prompt, **self._default_params()):
            if text_callback:
                text_callback(token)
            text += token
        if stop is not None:
            text = enforce_stop_tokens(text, stop)
        return text
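    # Usage sketch, mirroring the reproduction from the issue (the model path is
    # the reporter's local file and purely illustrative):
    #
    #     llm = GPT4All(model="./ggml-mpt-7b-instruct.bin", n_threads=12)
    #     print(llm("Once upon a time, "))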
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,699 | Sitemap filters not working due to lack of stripping whitespace and newlines

https://github.com/hwchase17/langchain/blob/8d9e9e013ccfe72d839dcfa37a3f17c340a47a88/langchain/document_loaders/sitemap.py#L83
If the sitemap contains an entry like
```
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
<url>
<loc>
https://tatum.com/
</loc>
<xhtml:link rel="alternate" hreflang="x-default" href="https://tatum.com/"/>
</url>
```
then in `re.match(r, loc.text) for r in self.filter_urls` the filter comparison runs against a value that still includes the surrounding whitespace and newlines, so anchored patterns never match.
What worked for me:

```python
    def parse_sitemap(self, soup: Any) -> List[dict]:
        """Parse sitemap xml and load into a list of dicts."""
        els = []
        for url in soup.find_all("url"):
            loc = url.find("loc")
            if not loc:
                continue
            loc_text = loc.text.strip()
            if self.filter_urls and not any(
                re.match(r, loc_text) for r in self.filter_urls
            ):
                continue
```
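To see why the strip matters, here is a minimal sketch using the `<loc>` value from the sitemap above (`re.match` anchors at the start of the string, so the leading newline and indentation make every filter fail):

```python
import re

# loc.text as parsed from the indented <loc> element above
loc_text = "\n    https://tatum.com/\n"
print(re.match(r"https://tatum\.com/", loc_text))          # None -> URL is filtered out
print(re.match(r"https://tatum\.com/", loc_text.strip()))  # <re.Match ...> -> URL is kept
```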
https://github.com/langchain-ai/langchain/issues/5699 | https://github.com/langchain-ai/langchain/pull/5728 | 98dd6d068a67c2ac1c14785ea189c2e4c8882bf5 | 2dcda8a8aca4c427ff5716e6ac37ab0c24a7f2e5 | "2023-06-04T22:49:54Z" | python | "2023-06-05T23:33:55Z" | langchain/document_loaders/sitemap.py

"""Loader that fetches a sitemap and loads those URLs."""
import itertools
import re
from typing import Any, Callable, Generator, Iterable, List, Optional

from langchain.document_loaders.web_base import WebBaseLoader
from langchain.schema import Document


def _default_parsing_function(content: Any) -> str:
    return str(content.get_text())


def _default_meta_function(meta: dict, _content: Any) -> dict:
    return {"source": meta["loc"], **meta}


def _batch_block(iterable: Iterable, size: int) -> Generator[List[dict], None, None]:
    it = iter(iterable)
    while item := list(itertools.islice(it, size)):
        yield item


class SitemapLoader(WebBaseLoader):
    """Loader that fetches a sitemap and loads those URLs."""

    def __init__(
        self,
        web_path: str,
        filter_urls: Optional[List[str]] = None,
        parsing_function: Optional[Callable] = None,
        blocksize: Optional[int] = None,
        blocknum: int = 0,
        meta_function: Optional[Callable] = None,
        is_local: bool = False,
    ):
        """Initialize with webpage path and optional filter URLs.
        Args:
            web_path: url of the sitemap. can also be a local path
            filter_urls: list of strings or regexes that will be applied to filter the
                urls that are parsed and loaded
            parsing_function: Function to parse bs4.Soup output
            blocksize: number of sitemap locations per block
            blocknum: the number of the block that should be loaded - zero indexed
            meta_function: Function to parse bs4.Soup output for metadata
                remember when setting this method to also copy metadata["loc"]
                to metadata["source"] if you are using this field
            is_local: whether the sitemap is a local file
        """
        if blocksize is not None and blocksize < 1:
            raise ValueError("Sitemap blocksize should be at least 1")

        if blocknum < 0:
            raise ValueError("Sitemap blocknum cannot be lower than 0")

        try:
            import lxml  # only used to verify the dependency is installed
        except ImportError:
            raise ImportError(
                "lxml package not found, please install it with `pip install lxml`"
            )

        super().__init__(web_path)

        self.filter_urls = filter_urls
        self.parsing_function = parsing_function or _default_parsing_function
        self.meta_function = meta_function or _default_meta_function
        self.blocksize = blocksize
        self.blocknum = blocknum
        self.is_local = is_local

    def parse_sitemap(self, soup: Any) -> List[dict]:
        """Parse sitemap xml and load into a list of dicts."""
        els = []
        for url in soup.find_all("url"):
            loc = url.find("loc")
            if not loc:
                continue
            if self.filter_urls and not any(
                re.match(r, loc.text) for r in self.filter_urls
            ):
                continue
            els.append(
                {
                    tag: prop.text
                    for tag in ["loc", "lastmod", "changefreq", "priority"]
                    if (prop := url.find(tag))
                }
            )
        for sitemap in soup.find_all("sitemap"):
            loc = sitemap.find("loc")
            if not loc:
                continue
            soup_child = self.scrape_all([loc.text], "xml")[0]
            els.extend(self.parse_sitemap(soup_child))
        return els

    def load(self) -> List[Document]:
        """Load sitemap."""
        if self.is_local:
            try:
                import bs4
            except ImportError:
                raise ImportError(
                    "beautifulsoup4 package not found, please install it"
                    " with `pip install beautifulsoup4`"
                )
            fp = open(self.web_path)
            soup = bs4.BeautifulSoup(fp, "xml")
        else:
            soup = self.scrape("xml")
        els = self.parse_sitemap(soup)
        if self.blocksize is not None:
            elblocks = list(_batch_block(els, self.blocksize))
            blockcount = len(elblocks)
            if blockcount - 1 < self.blocknum:
                raise ValueError(
                    "Selected sitemap does not contain enough blocks for given blocknum"
                )
            else:
                els = elblocks[self.blocknum]
        results = self.scrape_all([el["loc"].strip() for el in els if "loc" in el])
        return [
            Document(
                page_content=self.parsing_function(results[i]),
                metadata=self.meta_function(els[i], results[i]),
            )
            for i in range(len(results))
        ]
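# Usage sketch tying the loader to the issue (the sitemap URL is illustrative;
# the filter is the kind of regex the reporter passed via filter_urls):
#
#     loader = SitemapLoader(
#         "https://tatum.com/sitemap.xml",
#         filter_urls=[r"https://tatum\.com/"],
#     )
#     docs = loader.load()  # with unstripped <loc> text, the filter matches nothing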
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2)

### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
  File "t.py", line 9, in <module>
    splitted = markdown_splitter.split_documents([doc])
  File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
    return self.create_documents(texts, metadatas=metadatas)
  File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
    for chunk in self.split_text(text):
  File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
    return self._split_text(text, self._separators)
  File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
    splits = _split_text(text, separator, self._keep_separator)
  File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
    _splits = re.split(f"({separator})", text)
  File "/usr/lib/python3.8/re.py", line 231, in split
    return _compile(pattern, flags).split(string, maxsplit)
  File "/usr/lib/python3.8/re.py", line 304, in _compile
    p = sre_compile.compile(pattern, flags)
  File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
    p = sre_parse.parse(p, flags)
  File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
    p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
  File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
    itemsappend(_parse(source, state, verbose, nested + 1,
  File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
    p = _parse_sub(source, state, sub_verbose, nested + 1)
  File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
    itemsappend(_parse(source, state, verbose, nested + 1,
  File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
    raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
### Expected behavior
`splitted` contains the split markdown and no errors occur.

https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py

"""Functionality for splitting text."""
from __future__ import annotations

import copy
import logging
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import (
    AbstractSet,
    Any,
    Callable,
    Collection,
    Iterable,
    List,
    Literal,
    Optional,
    Sequence,
    Type,
    TypeVar,
    Union,
)

from langchain.docstore.document import Document
from langchain.schema import BaseDocumentTransformer

logger = logging.getLogger(__name__)

TS = TypeVar("TS", bound="TextSplitter")


def _split_text(text: str, separator: str, keep_separator: bool) -> List[str]:
    if separator:
        if keep_separator:
            # The parentheses in the pattern keep the delimiters in the result.
            _splits = re.split(f"({separator})", text)
            splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
            if len(_splits) % 2 == 0:
                splits += _splits[-1:]
            splits = [_splits[0]] + splits
        else:
            splits = text.split(separator)
    else:
        splits = list(text)
    return [s for s in splits if s != ""]
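# Hedged sketch related to the issue above (not necessarily the patch shipped
# in PR 5625): `separator` is interpolated into a regex by re.split, so a
# Markdown separator such as "\n\n***\n\n" parses as nested repeats and raises
# "multiple repeat at position 4". Escaping the separator first avoids that:
#
#     _splits = re.split(f"({re.escape(separator)})", text)
#
# e.g. re.split(f"({re.escape('***')})", "a***b") == ["a", "***", "b"]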


class TextSplitter(BaseDocumentTransformer, ABC):
    """Interface for splitting text into chunks."""

    def __init__(
        self,
        chunk_size: int = 4000,
        chunk_overlap: int = 200,
        length_function: Callable[[str], int] = len,
        keep_separator: bool = False,
    ):
        """Create a new TextSplitter.

        Args:
            chunk_size: Maximum size of chunks to return
            chunk_overlap: Overlap in characters between chunks
            length_function: Function that measures the length of given chunks
            keep_separator: Whether or not to keep the separator in the chunks
        """
        if chunk_overlap > chunk_size:
            raise ValueError(
                f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
                f"({chunk_size}), should be smaller."
            )
        self._chunk_size = chunk_size
        self._chunk_overlap = chunk_overlap
        self._length_function = length_function
        self._keep_separator = keep_separator

    @abstractmethod
    def split_text(self, text: str) -> List[str]:
        """Split text into multiple components."""

    def create_documents(
        self, texts: List[str], metadatas: Optional[List[dict]] = None
    ) -> List[Document]:
        """Create documents from a list of texts."""
        _metadatas = metadatas or [{}] * len(texts)
        documents = []
        for i, text in enumerate(texts):
            for chunk in self.split_text(text):
                new_doc = Document(
                    page_content=chunk, metadata=copy.deepcopy(_metadatas[i])
                )
                documents.append(new_doc)
        return documents

    def split_documents(self, documents: Iterable[Document]) -> List[Document]:
        """Split documents."""
        texts, metadatas = [], []
        for doc in documents:
            texts.append(doc.page_content)
            metadatas.append(doc.metadata)
        return self.create_documents(texts, metadatas=metadatas)

    def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
        text = separator.join(docs)
        text = text.strip()
        if text == "":
            return None
        else:
            return text

    def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
        separator_len = self._length_function(separator)
        docs = []
        current_doc: List[str] = []
        total = 0
        for d in splits:
            _len = self._length_function(d)
            if (
                total + _len + (separator_len if len(current_doc) > 0 else 0)
                > self._chunk_size
            ):
                if total > self._chunk_size:
                    logger.warning(
                        f"Created a chunk of size {total}, "
                        f"which is longer than the specified {self._chunk_size}"
                    )
                if len(current_doc) > 0:
                    doc = self._join_docs(current_doc, separator)
                    if doc is not None:
                        docs.append(doc)
                    # Pop from the running chunk until it fits within the
                    # overlap budget again.
                    while total > self._chunk_overlap or (
                        total + _len + (separator_len if len(current_doc) > 0 else 0)
                        > self._chunk_size
                        and total > 0
                    ):
                        total -= self._length_function(current_doc[0]) + (
                            separator_len if len(current_doc) > 1 else 0
                        )
                        current_doc = current_doc[1:]
            current_doc.append(d)
            total += _len + (separator_len if len(current_doc) > 1 else 0)
        doc = self._join_docs(current_doc, separator)
        if doc is not None:
            docs.append(doc)
        return docs
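    # Behavior sketch for _merge_splits (illustrative numbers): with
    # chunk_size=10, chunk_overlap=0 and separator " ", the splits
    # ["foo", "bar", "baz", "qux"] merge into ["foo bar", "baz qux"];
    # adding "baz" would push the first chunk past 10 characters, so the
    # chunk is flushed and "baz" starts a new one.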
    @classmethod
    def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
        """Text splitter that uses HuggingFace tokenizer to count length."""
        try:
            from transformers import PreTrainedTokenizerBase

            if not isinstance(tokenizer, PreTrainedTokenizerBase):
                raise ValueError(
                    "Tokenizer received was not an instance of PreTrainedTokenizerBase"
                )

            def _huggingface_tokenizer_length(text: str) -> int:
                return len(tokenizer.encode(text))

        except ImportError:
            raise ValueError(
                "Could not import transformers python package. "
                "Please install it with `pip install transformers`."
            )
        return cls(length_function=_huggingface_tokenizer_length, **kwargs)

    @classmethod
    def from_tiktoken_encoder(
        cls: Type[TS],
        encoding_name: str = "gpt2",
        model_name: Optional[str] = None,
        allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
        disallowed_special: Union[Literal["all"], Collection[str]] = "all",
        **kwargs: Any,
    ) -> TS:
        """Text splitter that uses tiktoken encoder to count length."""
        try:
            import tiktoken
        except ImportError:
            raise ImportError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate max_tokens_for_prompt. "
                "Please install it with `pip install tiktoken`."
            )

        if model_name is not None:
            enc = tiktoken.encoding_for_model(model_name)
        else:
            enc = tiktoken.get_encoding(encoding_name)

        def _tiktoken_encoder(text: str) -> int:
            return len(
                enc.encode(
                    text,
                    allowed_special=allowed_special,
                    disallowed_special=disallowed_special,
                )
            )

        if issubclass(cls, TokenTextSplitter):
            extra_kwargs = {
                "encoding_name": encoding_name,
                "model_name": model_name,
                "allowed_special": allowed_special,
                "disallowed_special": disallowed_special,
            }
            kwargs = {**kwargs, **extra_kwargs}

        return cls(length_function=_tiktoken_encoder, **kwargs)

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        """Transform sequence of documents by splitting them."""
        return self.split_documents(list(documents))

    async def atransform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        """Asynchronously transform a sequence of documents by splitting them."""
        raise NotImplementedError


class CharacterTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at characters."""

    def __init__(self, separator: str = "\n\n", **kwargs: Any):
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py | """Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
splits = _split_text(text, self._separator, self._keep_separator)
_separator = "" if self._keep_separator else self._separator
return self._merge_splits(splits, _separator)
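A minimal usage sketch of the character splitter above (separator and sizes are illustrative):

```
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(separator="\n\n", chunk_size=20, chunk_overlap=5)
text = "first paragraph\n\nsecond paragraph\n\nthird paragraph"
# The text is split on "\n\n", then pieces are merged back up to chunk_size.
for chunk in splitter.split_text(text):
    print(repr(chunk))
```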
@dataclass(frozen=True)
class Tokenizer:
chunk_overlap: int
tokens_per_chunk: int
decode: Callable[[list[int]], str]
encode: Callable[[str], List[int]]
def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:
"""Split incoming text and return chunks."""
splits = []
input_ids = tokenizer.encode(text)
start_idx = 0
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
while start_idx < len(input_ids):
splits.append(tokenizer.decode(chunk_ids))
start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
return splits
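The helper above slides a fixed window over the token ids, advancing by `tokens_per_chunk - chunk_overlap` each step. A toy sketch using character codes as stand-in token ids (purely illustrative, not a real encoding):

```
toy = Tokenizer(
    chunk_overlap=2,
    tokens_per_chunk=5,
    decode=lambda ids: "".join(chr(i) for i in ids),
    encode=lambda text: [ord(c) for c in text],
)
print(split_text_on_tokens(text="abcdefghij", tokenizer=toy))
# Windows of 5 ids advancing by 5 - 2 = 3: ['abcde', 'defgh', 'ghij', 'j']
```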
class TokenTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at tokens."""
def __init__(
self,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
):
"""Create a new TextSplitter."""
super().__init__(**kwargs)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for TokenTextSplitter. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
self._tokenizer = enc
self._allowed_special = allowed_special
self._disallowed_special = disallowed_special
    def split_text(self, text: str) -> List[str]:
        def _encode(_text: str) -> List[int]:
return self._tokenizer.encode(
_text,
allowed_special=self._allowed_special,
disallowed_special=self._disallowed_special,
)
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self._chunk_size,
decode=self._tokenizer.decode,
encode=_encode,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
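A short usage sketch of `TokenTextSplitter` (requires `tiktoken`; sizes are illustrative):

```
from langchain.text_splitter import TokenTextSplitter

splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=10, chunk_overlap=2)
# Chunk boundaries fall on GPT-2 token boundaries, so words can be cut mid-way.
for chunk in splitter.split_text("Tokens, not characters, decide where this text gets cut."):
    print(repr(chunk))
```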
class SentenceTransformersTokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
chunk_overlap: int = 50,
model_name: str = "sentence-transformers/all-mpnet-base-v2",
tokens_per_chunk: Optional[int] = None,
**kwargs: Any,
):
"""Create a new TextSplitter."""
super().__init__(**kwargs, chunk_overlap=chunk_overlap)
from transformers import AutoTokenizer
self.model_name = model_name
self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)
    def _initialize_chunk_configuration(
        self, *, tokens_per_chunk: Optional[int]
) -> None:
self.maximum_tokens_per_chunk = self.tokenizer.max_len_single_sentence
if tokens_per_chunk is None:
self.tokens_per_chunk = self.maximum_tokens_per_chunk
else:
self.tokens_per_chunk = tokens_per_chunk
if self.tokens_per_chunk > self.maximum_tokens_per_chunk:
raise ValueError(
f"The token limit of the models '{self.model_name}'"
f" is: {self.maximum_tokens_per_chunk}."
f" Argument tokens_per_chunk={self.tokens_per_chunk}"
f" > maximum token limit."
)
def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1]
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
    def count_tokens(self, *, text: str) -> int:
        return len(self._encode(text))
_max_length_equal_32_bit_integer = 2**32
def _encode(self, text: str) -> List[int]:
token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
text,
max_length=self._max_length_equal_32_bit_integer,
truncation="do_not_truncate",
)
return token_ids_with_start_and_end_token_ids
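A usage sketch of the sentence-transformers splitter above — the model is fetched from the Hugging Face hub on first use, and `tokens_per_chunk` defaults to the model maximum:

```
from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=10)
text = "Lorem ipsum dolor sit amet. " * 40
print(splitter.count_tokens(text=text))  # length as counted by the HF tokenizer
print(len(splitter.split_text(text)))    # number of token-window chunks
```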
class Language(str, Enum):
CPP = "cpp"
GO = "go"
JAVA = "java"
JS = "js"
PHP = "php"
PROTO = "proto"
PYTHON = "python"
RST = "rst"
RUBY = "ruby"
RUST = "rust"
SCALA = "scala"
SWIFT = "swift"
MARKDOWN = "markdown"
LATEX = "latex"
HTML = "html"
class RecursiveCharacterTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
"""
def __init__(
self,
separators: Optional[List[str]] = None,
keep_separator: bool = True,
**kwargs: Any,
):
"""Create a new TextSplitter."""
super().__init__(keep_separator=keep_separator, **kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
def _split_text(self, text: str, separators: List[str]) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
separator = separators[-1]
new_separators = None
for i, _s in enumerate(separators):
if _s == "":
separator = _s
break
if _s in text:
separator = _s
new_separators = separators[i + 1 :]
break
        splits = _split_text(text, separator, self._keep_separator)
        _good_splits = []
_separator = "" if self._keep_separator else separator
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
_good_splits = []
if new_separators is None:
final_chunks.append(s)
else:
other_info = self._split_text(s, new_separators)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
return final_chunks
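To make the recursion above concrete: the method picks the first listed separator that occurs in the text, splits on it, and re-splits any piece still longer than `chunk_size` with the remaining separators. A small sketch (sizes are illustrative):

```
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=12, chunk_overlap=0)
text = "short para\n\na much longer paragraph that must recurse"
# "\n\n" splits the paragraphs; the long one is then re-split on "\n",
# then on " ", until every piece fits within 12 characters.
for chunk in splitter.split_text(text):
    print(repr(chunk))
```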
def split_text(self, text: str) -> List[str]:
return self._split_text(text, self._separators)
@classmethod
def from_language(
cls, language: Language, **kwargs: Any
) -> RecursiveCharacterTextSplitter:
separators = cls.get_separators_for_language(language)
return cls(separators=separators, **kwargs)
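And a sketch of the language-aware factory above — note that the Markdown separator set returned below includes the literal `"\n\n***\n\n"` from the bug report, which is exactly what reaches `re.split` unescaped:

```
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

md_splitter = RecursiveCharacterTextSplitter.from_language(
    Language.MARKDOWN, chunk_size=1000, chunk_overlap=0
)
# Before the fix in the linked PR, this is the minimal crash from issue #5614:
md_splitter.split_text("\n\n***\n\n")  # raises re.error: multiple repeat
```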
@staticmethod
    def get_separators_for_language(language: Language) -> List[str]:
        if language == Language.CPP:
return [
"\nclass ",
"\nvoid ",
"\nint ",
"\nfloat ",
"\ndouble ",
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.GO:
return [
"\nfunc ",
"\nvar ",
"\nconst ",
"\ntype ",
"\nif ", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | ### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
File "t.py", line 9, in <module>
splitted = markdown_splitter.split_documents([doc])
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
for chunk in self.split_text(text):
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
return self._split_text(text, self._separators)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
splits = _split_text(text, separator, self._keep_separator)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
_splits = re.split(f"({separator})", text)
File "/usr/lib/python3.8/re.py", line 231, in split
return _compile(pattern, flags).split(string, maxsplit)
File "/usr/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
### Expected behavior
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py | "\nfor ",
"\nswitch ",
"\ncase ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JAVA:
return [
"\nclass ",
"\npublic ",
"\nprotected ",
"\nprivate ",
"\nstatic ",
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\n\n",
"\n",
" ",
"",
            ]
        elif language == Language.JS:
return [
"\nfunction ",
"\nconst ",
"\nlet ",
"\nvar ",
"\nclass ",
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\ndefault ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PHP:
return [
"\nfunction ",
"\nclass ",
"\nif ",
"\nforeach ", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | ### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
File "t.py", line 9, in <module>
splitted = markdown_splitter.split_documents([doc])
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
for chunk in self.split_text(text):
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
return self._split_text(text, self._separators)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
splits = _split_text(text, separator, self._keep_separator)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
_splits = re.split(f"({separator})", text)
File "/usr/lib/python3.8/re.py", line 231, in split
return _compile(pattern, flags).split(string, maxsplit)
File "/usr/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
### Expected behavior
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py | "\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PROTO:
return [
"\nmessage ",
"\nservice ",
"\nenum ",
"\noption ",
"\nimport ",
"\nsyntax ",
"\n\n",
"\n",
" ",
"",
            ]
        elif language == Language.PYTHON:
return [
"\nclass ",
"\ndef ",
"\n\tdef ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RST:
return [
"\n===\n",
"\n---\n",
"\n***\n",
"\n.. ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUBY:
return [
"\ndef ", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | ### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
File "t.py", line 9, in <module>
splitted = markdown_splitter.split_documents([doc])
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
for chunk in self.split_text(text):
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
return self._split_text(text, self._separators)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
splits = _split_text(text, separator, self._keep_separator)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
_splits = re.split(f"({separator})", text)
File "/usr/lib/python3.8/re.py", line 231, in split
return _compile(pattern, flags).split(string, maxsplit)
File "/usr/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
### Expected behavior
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py | "\nclass ",
"\nif ",
"\nunless ",
"\nwhile ",
"\nfor ",
"\ndo ",
"\nbegin ",
"\nrescue ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUST:
return [
"\nfn ",
"\nconst ",
"\nlet ",
"\nif ",
"\nwhile ",
"\nfor ",
"\nloop ",
"\nmatch ",
"\nconst ",
"\n\n", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | ### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
File "t.py", line 9, in <module>
splitted = markdown_splitter.split_documents([doc])
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
for chunk in self.split_text(text):
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
return self._split_text(text, self._separators)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
splits = _split_text(text, separator, self._keep_separator)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
_splits = re.split(f"({separator})", text)
File "/usr/lib/python3.8/re.py", line 231, in split
return _compile(pattern, flags).split(string, maxsplit)
File "/usr/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
### Expected behavior
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py | "\n",
" ",
"",
]
elif language == Language.SCALA:
return [
"\nclass ",
"\nobject ",
"\ndef ",
"\nval ",
"\nvar ",
"\nif ",
"\nfor ",
"\nwhile ",
"\nmatch ",
"\ncase ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SWIFT:
return [
"\nfunc ", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | ### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
File "t.py", line 9, in <module>
splitted = markdown_splitter.split_documents([doc])
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
for chunk in self.split_text(text):
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
return self._split_text(text, self._separators)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
splits = _split_text(text, separator, self._keep_separator)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
_splits = re.split(f"({separator})", text)
File "/usr/lib/python3.8/re.py", line 231, in split
return _compile(pattern, flags).split(string, maxsplit)
File "/usr/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
### Expected behavior
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | langchain/text_splitter.py | "\nclass ",
"\nstruct ",
"\nenum ",
"\nif ",
"\nfor ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.MARKDOWN:
return [
"\n## ",
"\n### ",
"\n#### ",
"\n##### ",
"\n###### ",
"```\n\n", |
"\n\n***\n\n",
"\n\n---\n\n",
"\n\n___\n\n",
"\n\n",
"\n",
" ",
"",
]
elif language == Language.LATEX:
return [
"\n\\chapter{",
"\n\\section{",
"\n\\subsection{",
"\n\\subsubsection{",
"\n\\begin{enumerate}",
"\n\\begin{itemize}",
"\n\\begin{description}",
"\n\\begin{list}",
"\n\\begin{quote}",
"\n\\begin{quotation}",
"\n\\begin{verse}",
"\n\\begin{verbatim}",
"\n\\begin{align}",
"$$",
"$", |
" ",
"",
]
elif language == Language.HTML:
return [
"<body>",
"<div>",
"<p>",
"<br>",
"<li>",
"<h1>",
"<h2>",
"<h3>",
"<h4>",
"<h5>",
"<h6>",
"<span>",
"<table>",
"<tr>",
"<td>",
"<th>",
"<ul>",
"<ol>",
"<header>",
"<footer>",
"<nav>",
"<head>", |
"<style>",
"<script>",
"<meta>",
"<title>",
"",
]
else:
raise ValueError(
f"Language {language} is not supported! "
f"Please choose from {list(Language)}"
)
class NLTKTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at sentences using NLTK."""
def __init__(self, separator: str = "\n\n", **kwargs: Any):
"""Initialize the NLTK splitter."""
super().__init__(**kwargs)
try:
from nltk.tokenize import sent_tokenize
self._tokenizer = sent_tokenize
except ImportError:
raise ImportError(
"NLTK is not installed, please install it with `pip install nltk`."
)
self._separator = separator
def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
splits = self._tokenizer(text)
return self._merge_splits(splits, self._separator)
class SpacyTextSplitter(TextSplitter): |
"""Implementation of splitting text that looks at sentences using Spacy."""
def __init__(
self, separator: str = "\n\n", pipeline: str = "en_core_web_sm", **kwargs: Any
):
"""Initialize the spacy text splitter."""
super().__init__(**kwargs)
try:
import spacy
except ImportError:
raise ImportError(
"Spacy is not installed, please install it with `pip install spacy`."
)
self._tokenizer = spacy.load(pipeline)
self._separator = separator
def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
splits = (str(s) for s in self._tokenizer(text).sents)
return self._merge_splits(splits, self._separator)
class PythonCodeTextSplitter(RecursiveCharacterTextSplitter): |
"""Attempts to split the text along Python syntax."""
def __init__(self, **kwargs: Any):
"""Initialize a PythonCodeTextSplitter."""
separators = self.get_separators_for_language(Language.PYTHON)
super().__init__(separators=separators, **kwargs)
class MarkdownTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Markdown-formatted headings."""
def __init__(self, **kwargs: Any):
"""Initialize a MarkdownTextSplitter."""
separators = self.get_separators_for_language(Language.MARKDOWN)
super().__init__(separators=separators, **kwargs)
class LatexTextSplitter(RecursiveCharacterTextSplitter):
"""Attempts to split the text along Latex-formatted layout elements."""
def __init__(self, **kwargs: Any):
"""Initialize a LatexTextSplitter."""
separators = self.get_separators_for_language(Language.LATEX)
super().__init__(separators=separators, **kwargs) |
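The three subclasses above are thin wrappers: each just feeds a language-specific separator list from `get_separators_for_language` into `RecursiveCharacterTextSplitter`. A short usage sketch under that assumption (and assuming `get_separators_for_language` is callable on the class, as the `self.get_separators_for_language(...)` calls above suggest):
```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

# The Markdown hierarchy includes literal horizontal rules such as "\n\n***\n\n" --
# the exact string the bug report shows reaching re.split unescaped.
markdown_separators = RecursiveCharacterTextSplitter.get_separators_for_language(
    Language.MARKDOWN
)

# Equivalent to MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
splitter = RecursiveCharacterTextSplitter.from_language(
    Language.MARKDOWN, chunk_size=100, chunk_overlap=0
)
chunks = splitter.split_text("# Title\n\nIntro paragraph.\n\n## Section\n\nBody text.")
```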
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | "2023-06-02T12:20:41Z" | python | "2023-06-05T23:40:26Z" | tests/unit_tests/test_text_splitter.py |
"""Test text splitting functionality."""
import pytest
from langchain.docstore.document import Document
from langchain.text_splitter import (
CharacterTextSplitter,
Language,
PythonCodeTextSplitter,
RecursiveCharacterTextSplitter,
)
FAKE_PYTHON_TEXT = """
class Foo:
def bar():
def foo():
def testing_func():
def bar():
"""
def test_character_text_splitter() -> None:
"""Test splitting by character count."""
text = "foo bar baz 123"
splitter = CharacterTextSplitter(separator=" ", chunk_size=7, chunk_overlap=3)
output = splitter.split_text(text)
expected_output = ["foo bar", "bar baz", "baz 123"]
assert output == expected_output
def test_character_text_splitter_empty_doc() -> None: |
"""Test splitting by character count doesn't create empty documents."""
text = "foo bar"
splitter = CharacterTextSplitter(separator=" ", chunk_size=2, chunk_overlap=0)
output = splitter.split_text(text)
expected_output = ["foo", "bar"]
assert output == expected_output
def test_character_text_splitter_separator_empty_doc() -> None:
"""Test edge cases are separators."""
text = "f b"
splitter = CharacterTextSplitter(separator=" ", chunk_size=2, chunk_overlap=0)
output = splitter.split_text(text)
expected_output = ["f", "b"]
assert output == expected_output
def test_character_text_splitter_long() -> None:
"""Test splitting by character count on long words."""
text = "foo bar baz a a"
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = ["foo", "bar", "baz", "a a"]
assert output == expected_output
def test_character_text_splitter_short_words_first() -> None:
"""Test splitting by character count when shorter words are first."""
text = "a a foo bar baz"
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = ["a a", "foo", "bar", "baz"]
assert output == expected_output
def test_character_text_splitter_longer_words() -> None: |
"""Test splitting by characters when splits not found easily."""
text = "foo bar baz 123"
splitter = CharacterTextSplitter(separator=" ", chunk_size=1, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = ["foo", "bar", "baz", "123"]
assert output == expected_output
def test_character_text_splitting_args() -> None:
"""Test invalid arguments."""
with pytest.raises(ValueError):
CharacterTextSplitter(chunk_size=2, chunk_overlap=4)
def test_merge_splits() -> None:
"""Test merging splits with a given separator."""
splitter = CharacterTextSplitter(separator=" ", chunk_size=9, chunk_overlap=2)
splits = ["foo", "bar", "baz"]
expected_output = ["foo bar", "baz"]
output = splitter._merge_splits(splits, separator=" ")
assert output == expected_output
def test_create_documents() -> None:
"""Test create documents method."""
texts = ["foo bar", "baz"]
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=0)
docs = splitter.create_documents(texts)
expected_docs = [
Document(page_content="foo"),
Document(page_content="bar"),
Document(page_content="baz"),
]
assert docs == expected_docs
def test_create_documents_with_metadata() -> None: |
"""Test create documents with metadata method."""
texts = ["foo bar", "baz"]
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=0)
docs = splitter.create_documents(texts, [{"source": "1"}, {"source": "2"}])
expected_docs = [
Document(page_content="foo", metadata={"source": "1"}),
Document(page_content="bar", metadata={"source": "1"}),
Document(page_content="baz", metadata={"source": "2"}),
]
assert docs == expected_docs
def test_metadata_not_shallow() -> None:
"""Test that metadatas are not shallow."""
texts = ["foo bar"]
splitter = CharacterTextSplitter(separator=" ", chunk_size=3, chunk_overlap=0)
docs = splitter.create_documents(texts, [{"source": "1"}])
expected_docs = [
Document(page_content="foo", metadata={"source": "1"}),
Document(page_content="bar", metadata={"source": "1"}),
]
assert docs == expected_docs
docs[0].metadata["foo"] = 1
assert docs[0].metadata == {"source": "1", "foo": 1}
assert docs[1].metadata == {"source": "1"}
def test_iterative_text_splitter() -> None: |
"""Test iterative text splitter."""
text = """Hi.\n\nI'm Harrison.\n\nHow? Are? You?\nOkay then f f f f.
This is a weird text to write, but gotta test the splittingggg some how.
Bye!\n\n-H."""
splitter = RecursiveCharacterTextSplitter(chunk_size=10, chunk_overlap=1)
output = splitter.split_text(text)
expected_output = [
"Hi.",
"I'm",
"Harrison.",
"How? Are?",
"You?",
"Okay then",
"f f f f.",
"This is a",
"weird",
"text to",
"write,",
"but gotta",
"test the",
"splitting",
"gggg",
"some how.",
"Bye!",
"-H.",
]
assert output == expected_output
def test_split_documents() -> None: |
"""Test split_documents."""
splitter = CharacterTextSplitter(separator="", chunk_size=1, chunk_overlap=0)
docs = [
Document(page_content="foo", metadata={"source": "1"}),
Document(page_content="bar", metadata={"source": "2"}),
Document(page_content="baz", metadata={"source": "1"}),
]
expected_output = [
Document(page_content="f", metadata={"source": "1"}),
Document(page_content="o", metadata={"source": "1"}),
Document(page_content="o", metadata={"source": "1"}),
Document(page_content="b", metadata={"source": "2"}),
Document(page_content="a", metadata={"source": "2"}),
Document(page_content="r", metadata={"source": "2"}),
Document(page_content="b", metadata={"source": "1"}),
Document(page_content="a", metadata={"source": "1"}),
Document(page_content="z", metadata={"source": "1"}),
]
assert splitter.split_documents(docs) == expected_output
def test_python_text_splitter() -> None: |
splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
splits = splitter.split_text(FAKE_PYTHON_TEXT)
split_0 = """class Foo:\n\n def bar():"""
split_1 = """def foo():"""
split_2 = """def testing_func():"""
split_3 = """def bar():"""
expected_splits = [split_0, split_1, split_2, split_3]
assert splits == expected_splits
CHUNK_SIZE = 16
def test_python_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.PYTHON, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
def hello_world():
print("Hello, World!")
# Call the function
hello_world()
"""
chunks = splitter.split_text(code)
assert chunks == [
"def",
"hello_world():",
'print("Hello,',
'World!")',
"# Call the",
"function",
"hello_world()",
]
def test_golang_code_splitter() -> None: |
splitter = RecursiveCharacterTextSplitter.from_language(
Language.GO, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
package main
import "fmt"
func helloWorld() {
fmt.Println("Hello, World!")
}
func main() {
helloWorld()
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"package main",
'import "fmt"',
"func",
"helloWorld() {",
'fmt.Println("He',
"llo,",
'World!")',
"}",
"func main() {",
"helloWorld()",
"}",
]
def test_rst_code_splitter() -> None: |
splitter = RecursiveCharacterTextSplitter.from_language(
Language.RST, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
Sample Document
===============
Section
-------
This is the content of the section.
Lists
-----
- Item 1
- Item 2
- Item 3
"""
chunks = splitter.split_text(code)
assert chunks == [
"Sample Document",
"===============",
"Section",
"-------",
"This is the",
"content of the",
"section.",
"Lists\n-----",
"- Item 1",
"- Item 2",
"- Item 3",
]
def test_proto_file_splitter() -> None: |
splitter = RecursiveCharacterTextSplitter.from_language(
Language.PROTO, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
syntax = "proto3";
package example;
message Person {
string name = 1;
int32 age = 2;
repeated string hobbies = 3;
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"syntax =",
'"proto3";',
"package",
"example;",
"message Person",
"{",
"string name",
"= 1;",
"int32 age =",
"2;",
"repeated",
"string hobbies",
"= 3;",
"}",
]
def test_javascript_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.JS, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
function helloWorld() {
console.log("Hello, World!");
}
// Call the function
helloWorld();
"""
chunks = splitter.split_text(code)
assert chunks == [
"function",
"helloWorld() {",
'console.log("He',
"llo,",
'World!");',
"}",
"// Call the",
"function",
"helloWorld();",
]
def test_java_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.JAVA, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World!");
}
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"public class",
"HelloWorld {",
"public",
"static void",
"main(String[]",
"args) {",
"System.out.prin",
'tln("Hello,',
'World!");',
"}\n}",
]
def test_cpp_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.CPP, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
#include <iostream>
int main() {
std::cout << "Hello, World!" << std::endl;
return 0;
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"#include",
"<iostream>",
"int main() {",
"std::cout",
'<< "Hello,',
'World!" <<',
"std::endl;",
"return 0;\n}",
]
def test_scala_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.SCALA, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
object HelloWorld {
def main(args: Array[String]): Unit = {
println("Hello, World!")
}
}
"""
chunks = splitter.split_text(code)
assert chunks == [
"object",
"HelloWorld {",
"def",
"main(args:",
"Array[String]):",
"Unit = {",
'println("Hello,',
'World!")',
"}\n}",
]
def test_ruby_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.RUBY, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
def hello_world
puts "Hello, World!"
end
hello_world
"""
chunks = splitter.split_text(code)
assert chunks == [
"def hello_world",
'puts "Hello,',
'World!"',
"end",
"hello_world",
]
def test_php_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.PHP, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
<?php
function hello_world() {
echo "Hello, World!";
}
hello_world();
?>
"""
chunks = splitter.split_text(code)
assert chunks == [
"<?php",
"function",
"hello_world() {",
"echo",
'"Hello,',
'World!";',
"}",
"hello_world();",
"?>",
]
def test_swift_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.SWIFT, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
func helloWorld() {
print("Hello, World!")
}
helloWorld()
"""
chunks = splitter.split_text(code)
assert chunks == [
"func",
"helloWorld() {",
'print("Hello,',
'World!")',
"}",
"helloWorld()",
]
def test_rust_code_splitter() -> None:
splitter = RecursiveCharacterTextSplitter.from_language(
Language.RUST, chunk_size=CHUNK_SIZE, chunk_overlap=0
)
code = """
fn main() {
println!("Hello, World!");
}
"""
chunks = splitter.split_text(code)
assert chunks == ["fn main() {", 'println!("Hello', ",", 'World!");', "}"]
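For the `multiple repeat` traceback in the issue above: `_split_text` interpolates the raw separator into `re.split(f"({separator})", text)`, so a literal Markdown rule such as `***` is parsed as a regex, in which `*` is a repeat operator. Below is a minimal sketch of the escaping fix; the helper name is hypothetical, and PR #5625 has the merged change:
```python
import re
from typing import List

def split_text_escaped(text: str, separator: str, keep_separator: bool) -> List[str]:
    # re.escape turns "***" into r"\*\*\*", so the separator is matched
    # literally instead of being parsed as a (broken) regex.
    pattern = re.escape(separator)
    if keep_separator:
        # The capture group makes re.split return the separators too;
        # re-attach each separator to the chunk that follows it.
        pieces = re.split(f"({pattern})", text)
        chunks = [pieces[0]] + [
            pieces[i] + pieces[i + 1] for i in range(1, len(pieces) - 1, 2)
        ]
        return [c for c in chunks if c]
    return [c for c in re.split(pattern, text) if c]

# The failing input from the report no longer raises re.error:
print(split_text_escaped("\n\n***\n\n", "***", keep_separator=True))
```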
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,535 | Add Tigris vectorstore for vector search | ### Feature request
Support Tigris as a vector search backend
### Motivation
Tigris is a Serverless NoSQL Database and Search Platform that has its own [vector search](https://www.tigrisdata.com/docs/concepts/vector-search/python/) product. It would be a great option for users who want an integrated database and search product.
### Your contribution
I can submit a PR | https://github.com/langchain-ai/langchain/issues/5535 | https://github.com/langchain-ai/langchain/pull/5703 | 38dabdbb3a900ae60e4b503cd48c26903b2d4673 | 233b52735e77121849b0fc9f8eaf6170222f0ac7 | "2023-06-01T03:18:00Z" | python | "2023-06-06T03:39:16Z" | langchain/vectorstores/__init__.py | """Wrappers on top of vector stores."""
from langchain.vectorstores.analyticdb import AnalyticDB
from langchain.vectorstores.annoy import Annoy
from langchain.vectorstores.atlas import AtlasDB
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.chroma import Chroma
from langchain.vectorstores.clickhouse import Clickhouse, ClickhouseSettings
from langchain.vectorstores.deeplake import DeepLake
from langchain.vectorstores.docarray import DocArrayHnswSearch, DocArrayInMemorySearch
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores.faiss import FAISS
from langchain.vectorstores.lancedb import LanceDB
from langchain.vectorstores.milvus import Milvus
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
from langchain.vectorstores.myscale import MyScale, MyScaleSettings
from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch
from langchain.vectorstores.pinecone import Pinecone
from langchain.vectorstores.qdrant import Qdrant
from langchain.vectorstores.redis import Redis
from langchain.vectorstores.sklearn import SKLearnVectorStore
from langchain.vectorstores.supabase import SupabaseVectorStore
from langchain.vectorstores.tair import Tair
from langchain.vectorstores.typesense import Typesense
from langchain.vectorstores.vectara import Vectara
from langchain.vectorstores.weaviate import Weaviate
from langchain.vectorstores.zilliz import Zilliz
__all__ = [
"Redis",
"ElasticVectorSearch",
"FAISS",
"VectorStore",
"Pinecone",
"Weaviate",
"Qdrant",
"Milvus",
"Zilliz",
"Chroma",
"OpenSearchVectorSearch",
"AtlasDB",
"DeepLake",
"Annoy",
"MongoDBAtlasVectorSearch",
"MyScale",
"MyScaleSettings",
"SKLearnVectorStore",
"SupabaseVectorStore",
"AnalyticDB",
"Vectara",
"Tair",
"LanceDB",
"DocArrayHnswSearch",
"DocArrayInMemorySearch",
"Typesense",
"Clickhouse",
"ClickhouseSettings",
]
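A minimal sketch of how the requested backend would slot into this registry, following the import and `__all__` pattern above. The module path `langchain.vectorstores.tigris` and the `Tigris` class name are assumptions based on the naming convention of the other entries; PR #5703 contains the actual implementation:
```python
# Hypothetical registration, mirroring the existing entries in this module.
from langchain.vectorstores.tigris import Tigris  # assumed module path

__all__ = [
    # ... existing entries ...
    "Tigris",
]

# Usage would then follow the shared VectorStore interface, e.g. (assumed):
# store = Tigris.from_texts(texts, embeddings, client=client, index_name="idx")
```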
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,822 | skip openai params when embedding | ### System Info
[email protected]
I upgraded my langchain lib by executing `pip install -U langchain`, and the version is now 0.0.192. But I found that openai.api_base is not working. I use the Azure OpenAI service as the openai backend, so openai.api_base is very important for me. I have compared tag/0.0.192 and tag/0.0.191, and figured out that:

openai params are moved inside the `_invocation_params` function, and used in some openai invocations:


but some cases are still not covered, like:

### Who can help?
@hwchase17 I have debugged langchain and made a PR; please review it: https://github.com/hwchase17/langchain/pull/5821, thanks
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install -U langchain
2. execute the following code:
```python
from langchain.embeddings import OpenAIEmbeddings
def main():
embeddings = OpenAIEmbeddings(
openai_api_key="OPENAI_API_KEY", openai_api_base="OPENAI_API_BASE",
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
print(query_result)
if __name__ == "__main__":
main()
```
### Expected behavior
same effect as [email protected]. | https://github.com/langchain-ai/langchain/issues/5822 | https://github.com/langchain-ai/langchain/pull/5821 | b3ae6bcd3f42ec85ee65eb29c922ab22a17a0210 | 5a207cce8f026e32c93bf271f80b73570d4b2844 | "2023-06-07T08:36:23Z" | python | "2023-06-07T14:32:57Z" | langchain/embeddings/openai.py | """Wrapper around OpenAI embedding models."""
from __future__ import annotations
import logging
from typing import (
Any,
Callable,
Dict,
List,
Literal,
Optional,
Sequence,
Set,
Tuple,
Union,
)
import numpy as np
from pydantic import BaseModel, Extra, root_validator
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
def _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:
import openai
min_seconds = 4
max_seconds = 10
return retry(
reraise=True,
stop=stop_after_attempt(embeddings.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
"""Use tenacity to retry the embedding call."""
retry_decorator = _create_retry_decorator(embeddings)
@retry_decorator
def _embed_with_retry(**kwargs: Any) -> Any:
return embeddings.client.create(**kwargs)
return _embed_with_retry(**kwargs)
class OpenAIEmbeddings(BaseModel, Embeddings):
"""Wrapper around OpenAI embedding models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key or pass it
as a named parameter to the constructor.
Example:
.. code-block:: python
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to 'azure' and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example:
.. code-block:: python
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,822 | skip openai params when embedding | ### System Info
[email protected]
I upgrade my langchain lib by execute pip install -U langchain, and the verion is 0.0.192ใBut i found that openai.api_base not working. I use azure openai service as openai backend, the openai.api_base is very import for me. I hava compared tag/0.0.192 and tag/0.0.191, and figure out that:

openai params is moved inside _invocation_params function๏ผand used in some openai invoke:


but still some case not covered like:

### Who can help?
@hwchase17 i have debug langchain and make a pr, plz review the pr:https://github.com/hwchase17/langchain/pull/5821, thanks
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install -U langchain
2. exeucte code next:
```python
from langchain.embeddings import OpenAIEmbeddings
def main():
embeddings = OpenAIEmbeddings(
openai_api_key="OPENAI_API_KEY", openai_api_base="OPENAI_API_BASE",
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
print(query_result)
if __name__ == "__main__":
main()
```
### Expected behavior
same effect as [email protected]๏ผ | https://github.com/langchain-ai/langchain/issues/5822 | https://github.com/langchain-ai/langchain/pull/5821 | b3ae6bcd3f42ec85ee65eb29c922ab22a17a0210 | 5a207cce8f026e32c93bf271f80b73570d4b2844 | "2023-06-07T08:36:23Z" | python | "2023-06-07T14:32:57Z" | langchain/embeddings/openai.py | model="your-embeddings-model-name",
api_base="https://your-endpoint.openai.azure.com/",
api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
"""
client: Any
model: str = "text-embedding-ada-002"
deployment: str = model
openai_api_version: Optional[str] = None
openai_api_base: Optional[str] = None
openai_api_type: Optional[str] = None
openai_proxy: Optional[str] = None
embedding_ctx_length: int = 8191
openai_api_key: Optional[str] = None
openai_organization: Optional[str] = None
allowed_special: Union[Literal["all"], Set[str]] = set()
disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
chunk_size: int = 1000
"""Maximum number of texts to embed in each batch"""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout in seconds for the OpenAPI request."""
headers: Any = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
values["openai_api_type"] = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,822 | skip openai params when embedding | ### System Info
[email protected]
I upgrade my langchain lib by execute pip install -U langchain, and the verion is 0.0.192ใBut i found that openai.api_base not working. I use azure openai service as openai backend, the openai.api_base is very import for me. I hava compared tag/0.0.192 and tag/0.0.191, and figure out that:

openai params is moved inside _invocation_params function๏ผand used in some openai invoke:


but still some case not covered like:

### Who can help?
@hwchase17 i have debug langchain and make a pr, plz review the pr:https://github.com/hwchase17/langchain/pull/5821, thanks
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install -U langchain
2. exeucte code next:
```python
from langchain.embeddings import OpenAIEmbeddings
def main():
embeddings = OpenAIEmbeddings(
openai_api_key="OPENAI_API_KEY", openai_api_base="OPENAI_API_BASE",
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
print(query_result)
if __name__ == "__main__":
main()
```
### Expected behavior
same effect as [email protected]๏ผ | https://github.com/langchain-ai/langchain/issues/5822 | https://github.com/langchain-ai/langchain/pull/5821 | b3ae6bcd3f42ec85ee65eb29c922ab22a17a0210 | 5a207cce8f026e32c93bf271f80b73570d4b2844 | "2023-06-07T08:36:23Z" | python | "2023-06-07T14:32:57Z" | langchain/embeddings/openai.py | "OPENAI_PROXY",
default="",
)
if values["openai_api_type"] in ("azure", "azure_ad", "azuread"):
default_api_version = "2022-12-01"
else:
default_api_version = ""
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
default=default_api_version,
)
values["openai_organization"] = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai
values["client"] = openai.Embedding
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
return values
@property
def _invocation_params(self) -> Dict:
openai_args = {
"engine": self.deployment,
"request_timeout": self.request_timeout,
"headers": self.headers,
"api_key": self.openai_api_key,
"organization": self.openai_organization,
"api_base": self.openai_api_base,
"api_type": self.openai_api_type,
"api_version": self.openai_api_version,
}
if self.openai_proxy:
openai_args["proxy"] = {
"http": self.openai_proxy,
"https": self.openai_proxy,
}
return openai_args
def _get_len_safe_embeddings(
self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
embeddings: List[List[float]] = [[] for _ in range(len(texts))]
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for OpenAIEmbeddings. "
"Please install it with `pip install tiktoken`."
)
tokens = []
indices = []
encoding = tiktoken.model.encoding_for_model(self.model)
for i, text in enumerate(texts):
if self.model.endswith("001"):
text = text.replace("\n", " ")
token = encoding.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
for j in range(0, len(token), self.embedding_ctx_length):
tokens += [token[j : j + self.embedding_ctx_length]]
indices += [i]
batched_embeddings = []
_chunk_size = chunk_size or self.chunk_size
for i in range(0, len(tokens), _chunk_size):
response = embed_with_retry(
self,
input=tokens[i : i + _chunk_size],
**self._invocation_params,
)
batched_embeddings += [r["embedding"] for r in response["data"]]
results: List[List[List[float]]] = [[] for _ in range(len(texts))]
num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
results[indices[i]].append(batched_embeddings[i])
num_tokens_in_batch[indices[i]].append(len(tokens[i]))
for i in range(len(texts)):
_result = results[i]
if len(_result) == 0:
average = embed_with_retry(
self,
input="",
engine=self.deployment,
request_timeout=self.request_timeout,
headers=self.headers,
)["data"][0]["embedding"]
else:
average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
embeddings[i] = (average / np.linalg.norm(average)).tolist()
return embeddings
def _embedding_func(self, text: str, *, engine: str) -> List[float]:
"""Call out to OpenAI's embedding endpoint."""
if len(text) > self.embedding_ctx_length:
return self._get_len_safe_embeddings([text], engine=engine)[0]
else:
if self.model.endswith("001"):
text = text.replace("\n", " ")
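# same gap as above: explicit kwargs rather than **self._invocation_params,
# so a custom openai_api_base is not forwarded on this path either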
return embed_with_retry(
self,
input=[text],
engine=engine,
request_timeout=self.request_timeout,
headers=self.headers,
)["data"][0]["embedding"]
def embed_documents(
self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
"""Call out to OpenAI's embedding endpoint for embedding search docs.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns:
List of embeddings, one for each text.
"""
return self._get_len_safe_embeddings(texts, engine=self.deployment)
def embed_query(self, text: str) -> List[float]:
"""Call out to OpenAI's embedding endpoint for embedding query text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
embedding = self._embedding_func(text, engine=self.deployment)
return embedding |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,983 | RetrievalQA & RetrievalQAWithSourcesChain chains cannot be serialized/saved or loaded | `VectorDBQA` is being deprecated in favour of `RetrievalQA` & similarly, `VectorDBQAWithSourcesChain` is being deprecated for `RetrievalQAWithSourcesChain`.
Currently, `VectorDBQA` & `VectorDBQAWithSourcesChain` chains can be serialized using `vec_chain.save(...)` because they have a `_chain_type` property - https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/qa_with_sources/vector_db.py#L67
However, `RetrievalQA` & `RetrievalQAWithSourcesChain` do not have that property and raise the following error when trying to save with `ret_chain.save("ret_chain.yaml")`:
```
File ~/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py:45, in Chain._chain_type(self)
     43 @property
     44 def _chain_type(self) -> str:
---> 45     raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.
```
There aren't any functions to support loading RetrievalQA either, unlike the VectorDBQA counterparts: https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/loading.py#L313-L356
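A minimal sketch of the serialization half of a fix, assuming these chains adopt the same pattern `VectorDBQA` uses (the loading side would additionally need `retrieval_qa` loaders registered in `langchain/chains/loading.py`):
```python
# hypothetical addition inside RetrievalQA (and its WithSources sibling)
@property
def _chain_type(self) -> str:
    return "retrieval_qa"  # assumed tag; it must match the loader registry key
```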
| https://github.com/langchain-ai/langchain/issues/3983 | https://github.com/langchain-ai/langchain/pull/5818 | b93638ef1ef683dfbb46e8e7654e96325324a98c | 5518f24ec38654510bada81025fe4e96c26556a7 | "2023-05-02T16:17:48Z" | python | "2023-06-08T04:07:13Z" | langchain/chains/loading.py | """Functionality for loading chains."""
import json
from pathlib import Path
from typing import Any, Union
import yaml
from langchain.chains.api.base import APIChain
from langchain.chains.base import Chain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain
from langchain.chains.combine_documents.refine import RefineDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.hyde.base import HypotheticalDocumentEmbedder
from langchain.chains.llm import LLMChain
from langchain.chains.llm_bash.base import LLMBashChain
from langchain.chains.llm_checker.base import LLMCheckerChain
from langchain.chains.llm_math.base import LLMMathChain
from langchain.chains.llm_requests import LLMRequestsChain
from langchain.chains.pal.base import PALChain
from langchain.chains.qa_with_sources.base import QAWithSourcesChain
from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain
from langchain.chains.retrieval_qa.base import VectorDBQA
from langchain.chains.sql_database.base import SQLDatabaseChain
from langchain.llms.loading import load_llm, load_llm_from_config
from langchain.prompts.loading import load_prompt, load_prompt_from_config
from langchain.utilities.loading import try_load_from_hub
URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/"
def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain:
"""Load LLM chain from config dict."""
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
else:
raise ValueError("One of `prompt` or `prompt_path` must be present.")
return LLMChain(llm=llm, prompt=prompt, **config)
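# Every _load_* helper below follows the same convention as _load_llm_chain:
# each nested component may be supplied inline ("llm", "prompt", ...) or as a
# path to a separately serialized file ("llm_path", "prompt_path", ...).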
def _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder:
"""Load hypothetical document embedder chain from config dict."""
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "embeddings" in kwargs:
embeddings = kwargs.pop("embeddings")
else:
raise ValueError("`embeddings` must be present.")
return HypotheticalDocumentEmbedder(
llm_chain=llm_chain, base_embeddings=embeddings, **config
)
def _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain:
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_config` must be present.")
if not isinstance(llm_chain, LLMChain):
raise ValueError(f"Expected LLMChain, got {llm_chain}")
if "document_prompt" in config:
prompt_config = config.pop("document_prompt")
document_prompt = load_prompt_from_config(prompt_config)
elif "document_prompt_path" in config:
document_prompt = load_prompt(config.pop("document_prompt_path"))
else:
raise ValueError(
"One of `document_prompt` or `document_prompt_path` must be present."
)
return StuffDocumentsChain(
llm_chain=llm_chain, document_prompt=document_prompt, **config
)
def _load_map_reduce_documents_chain(
config: dict, **kwargs: Any
) -> MapReduceDocumentsChain:
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_config` must be present.")
if not isinstance(llm_chain, LLMChain):
raise ValueError(f"Expected LLMChain, got {llm_chain}")
if "combine_document_chain" in config:
combine_document_chain_config = config.pop("combine_document_chain")
combine_document_chain = load_chain_from_config(combine_document_chain_config)
elif "combine_document_chain_path" in config:
combine_document_chain = load_chain(config.pop("combine_document_chain_path"))
else:
raise ValueError(
"One of `combine_document_chain` or "
"`combine_document_chain_path` must be present."
)
if "collapse_document_chain" in config:
collapse_document_chain_config = config.pop("collapse_document_chain")
if collapse_document_chain_config is None:
collapse_document_chain = None
else:
collapse_document_chain = load_chain_from_config(
collapse_document_chain_config
)
elif "collapse_document_chain_path" in config:
collapse_document_chain = load_chain(config.pop("collapse_document_chain_path"))
return MapReduceDocumentsChain(
llm_chain=llm_chain,
combine_document_chain=combine_document_chain,
collapse_document_chain=collapse_document_chain,
**config,
)
def _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:
llm_chain = None
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
elif "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
if llm_chain:
return LLMBashChain(llm_chain=llm_chain, prompt=prompt, **config)
else:
return LLMBashChain(llm=llm, prompt=prompt, **config)
def _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "create_draft_answer_prompt" in config:
create_draft_answer_prompt_config = config.pop("create_draft_answer_prompt")
create_draft_answer_prompt = load_prompt_from_config(
create_draft_answer_prompt_config
)
elif "create_draft_answer_prompt_path" in config:
create_draft_answer_prompt = load_prompt(
config.pop("create_draft_answer_prompt_path")
)
if "list_assertions_prompt" in config:
list_assertions_prompt_config = config.pop("list_assertions_prompt")
list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config)
elif "list_assertions_prompt_path" in config:
list_assertions_prompt = load_prompt(config.pop("list_assertions_prompt_path"))
if "check_assertions_prompt" in config:
check_assertions_prompt_config = config.pop("check_assertions_prompt")
check_assertions_prompt = load_prompt_from_config(
check_assertions_prompt_config
)
elif "check_assertions_prompt_path" in config:
check_assertions_prompt = load_prompt(
config.pop("check_assertions_prompt_path")
)
if "revised_answer_prompt" in config:
revised_answer_prompt_config = config.pop("revised_answer_prompt")
revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config)
elif "revised_answer_prompt_path" in config:
revised_answer_prompt = load_prompt(config.pop("revised_answer_prompt_path"))
return LLMCheckerChain(
llm=llm,
create_draft_answer_prompt=create_draft_answer_prompt,
list_assertions_prompt=list_assertions_prompt,
check_assertions_prompt=check_assertions_prompt,
revised_answer_prompt=revised_answer_prompt,
**config,
)
def _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain:
llm_chain = None
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
elif "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
if llm_chain:
return LLMMathChain(llm_chain=llm_chain, prompt=prompt, **config)
else:
return LLMMathChain(llm=llm, prompt=prompt, **config)
def _load_map_rerank_documents_chain(
config: dict, **kwargs: Any
) -> MapRerankDocumentsChain:
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_config` must be present.")
return MapRerankDocumentsChain(llm_chain=llm_chain, **config)
def _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:
llm_chain = None
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
elif "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
elif "prompt_path" in config:
prompt = load_prompt(config.pop("prompt_path"))
else:
raise ValueError("One of `prompt` or `prompt_path` must be present.")
if llm_chain:
return PALChain(llm_chain=llm_chain, prompt=prompt, **config)
else:
return PALChain(llm=llm, prompt=prompt, **config)
def _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain:
if "initial_llm_chain" in config:
initial_llm_chain_config = config.pop("initial_llm_chain")
initial_llm_chain = load_chain_from_config(initial_llm_chain_config)
elif "initial_llm_chain_path" in config:
initial_llm_chain = load_chain(config.pop("initial_llm_chain_path"))
else:
raise ValueError(
"One of `initial_llm_chain` or `initial_llm_chain_config` must be present."
)
if "refine_llm_chain" in config:
refine_llm_chain_config = config.pop("refine_llm_chain")
refine_llm_chain = load_chain_from_config(refine_llm_chain_config)
elif "refine_llm_chain_path" in config:
refine_llm_chain = load_chain(config.pop("refine_llm_chain_path"))
else:
raise ValueError(
"One of `refine_llm_chain` or `refine_llm_chain_config` must be present."
)
if "document_prompt" in config:
prompt_config = config.pop("document_prompt")
document_prompt = load_prompt_from_config(prompt_config)
elif "document_prompt_path" in config:
document_prompt = load_prompt(config.pop("document_prompt_path"))
return RefineDocumentsChain(
initial_llm_chain=initial_llm_chain,
refine_llm_chain=refine_llm_chain,
document_prompt=document_prompt,
**config,
)
def _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:
if "combine_documents_chain" in config:
combine_documents_chain_config = config.pop("combine_documents_chain")
combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
elif "combine_documents_chain_path" in config:
combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
else:
raise ValueError(
"One of `combine_documents_chain` or "
"`combine_documents_chain_path` must be present."
)
return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)
def _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:
if "database" in kwargs:
database = kwargs.pop("database")
else:
raise ValueError("`database` must be present.")
if "llm" in config:
llm_config = config.pop("llm")
llm = load_llm_from_config(llm_config)
elif "llm_path" in config:
llm = load_llm(config.pop("llm_path"))
else:
raise ValueError("One of `llm` or `llm_path` must be present.")
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
else:
prompt = None
return SQLDatabaseChain.from_llm(llm, database, prompt=prompt, **config)
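# Non-serializable dependencies (the database above, and vectorstore,
# embeddings, requests_wrapper elsewhere) are injected at load time via
# **kwargs rather than being read from the serialized config.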
def _load_vector_db_qa_with_sources_chain(
config: dict, **kwargs: Any
) -> VectorDBQAWithSourcesChain:
if "vectorstore" in kwargs:
vectorstore = kwargs.pop("vectorstore")
else:
raise ValueError("`vectorstore` must be present.")
if "combine_documents_chain" in config:
combine_documents_chain_config = config.pop("combine_documents_chain")
combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
elif "combine_documents_chain_path" in config:
combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
else:
raise ValueError(
"One of `combine_documents_chain` or "
"`combine_documents_chain_path` must be present."
)
return VectorDBQAWithSourcesChain(
combine_documents_chain=combine_documents_chain,
vectorstore=vectorstore,
**config,
)
def _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA:
if "vectorstore" in kwargs:
vectorstore = kwargs.pop("vectorstore")
else:
raise ValueError("`vectorstore` must be present.")
if "combine_documents_chain" in config:
combine_documents_chain_config = config.pop("combine_documents_chain")
combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
elif "combine_documents_chain_path" in config:
combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
else:
raise ValueError(
"One of `combine_documents_chain` or "
"`combine_documents_chain_path` must be present."
)
return VectorDBQA(
combine_documents_chain=combine_documents_chain,
vectorstore=vectorstore,
**config,
)
def _load_api_chain(config: dict, **kwargs: Any) -> APIChain:
if "api_request_chain" in config:
api_request_chain_config = config.pop("api_request_chain")
api_request_chain = load_chain_from_config(api_request_chain_config)
elif "api_request_chain_path" in config:
api_request_chain = load_chain(config.pop("api_request_chain_path"))
else:
raise ValueError(
"One of `api_request_chain` or `api_request_chain_path` must be present."
)
if "api_answer_chain" in config:
api_answer_chain_config = config.pop("api_answer_chain")
api_answer_chain = load_chain_from_config(api_answer_chain_config)
elif "api_answer_chain_path" in config:
api_answer_chain = load_chain(config.pop("api_answer_chain_path"))
else:
raise ValueError(
"One of `api_answer_chain` or `api_answer_chain_path` must be present."
)
if "requests_wrapper" in kwargs:
requests_wrapper = kwargs.pop("requests_wrapper")
else:
raise ValueError("`requests_wrapper` must be present.")
return APIChain(
api_request_chain=api_request_chain,
api_answer_chain=api_answer_chain,
requests_wrapper=requests_wrapper,
**config,
)
def _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain:
if "llm_chain" in config:
llm_chain_config = config.pop("llm_chain")
llm_chain = load_chain_from_config(llm_chain_config)
elif "llm_chain_path" in config:
llm_chain = load_chain(config.pop("llm_chain_path"))
else:
raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
if "requests_wrapper" in kwargs:
requests_wrapper = kwargs.pop("requests_wrapper")
return LLMRequestsChain(
llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config
)
else:
return LLMRequestsChain(llm_chain=llm_chain, **config)
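# --- sketch, not part of this snapshot: issue 3983 asks for loaders for the
# Retrieval* chains. This assumes such a loader would mirror _load_vector_db_qa
# but take a `retriever` via kwargs (RetrievalQA would need importing from
# langchain.chains.retrieval_qa.base, and type_to_loader_dict below would need
# a "retrieval_qa": _load_retrieval_qa entry) ---
def _load_retrieval_qa(config: dict, **kwargs: Any) -> RetrievalQA:
    if "retriever" in kwargs:
        retriever = kwargs.pop("retriever")
    else:
        raise ValueError("`retriever` must be present.")
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
    elif "combine_documents_chain_path" in config:
        combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
    else:
        raise ValueError(
            "One of `combine_documents_chain` or "
            "`combine_documents_chain_path` must be present."
        )
    return RetrievalQA(
        combine_documents_chain=combine_documents_chain,
        retriever=retriever,
        **config,
    )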
type_to_loader_dict = {
"api_chain": _load_api_chain,
"hyde_chain": _load_hyde_chain,
"llm_chain": _load_llm_chain,
"llm_bash_chain": _load_llm_bash_chain,
"llm_checker_chain": _load_llm_checker_chain, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,983 | RetrievalQA & RetrievalQAWithSourcesChain chains cannot be serialized/saved or loaded | `VectorDBQA` is being deprecated in favour of `RetrievalQA` & similarly, `VectorDBQAWithSourcesChain` is being deprecated for `RetrievalQAWithSourcesChain`.
Currently, `VectorDBQA` & `VectorDBQAWithSourcesChain` chains can be serialized using `vec_chain.save(...)` because they have `_chain_type` property - https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/qa_with_sources/vector_db.py#L67
However, `RetrievalQA` & `RetrievalQAWithSourcesChain` do not have that property and raise the following error when trying to save with `ret_chain.save("ret_chain.yaml")`:
```
File [~/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py:45](https://file+.vscode-resource.vscode-cdn.net/Users/smohammed/Development/hackweek-internalqa/~/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py:45), in Chain._chain_type(self)
[43](file:///Users/smohammed/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py?line=42) @property
[44](file:///Users/smohammed/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py?line=43) def _chain_type(self) -> str:
---> [45](file:///Users/smohammed/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py?line=44) raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.
```
There isn't any functions to support loading RetrievalQA either, unlike the VectorQA counterparts: https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/loading.py#L313-L356
| https://github.com/langchain-ai/langchain/issues/3983 | https://github.com/langchain-ai/langchain/pull/5818 | b93638ef1ef683dfbb46e8e7654e96325324a98c | 5518f24ec38654510bada81025fe4e96c26556a7 | "2023-05-02T16:17:48Z" | python | "2023-06-08T04:07:13Z" | langchain/chains/loading.py | "llm_math_chain": _load_llm_math_chain,
"llm_requests_chain": _load_llm_requests_chain,
"pal_chain": _load_pal_chain,
"qa_with_sources_chain": _load_qa_with_sources_chain,
"stuff_documents_chain": _load_stuff_documents_chain,
"map_reduce_documents_chain": _load_map_reduce_documents_chain,
"map_rerank_documents_chain": _load_map_rerank_documents_chain,
"refine_documents_chain": _load_refine_documents_chain,
"sql_database_chain": _load_sql_database_chain,
"vector_db_qa_with_sources_chain": _load_vector_db_qa_with_sources_chain,
"vector_db_qa": _load_vector_db_qa,
}
def load_chain_from_config(config: dict, **kwargs: Any) -> Chain:
"""Load chain from Config Dict."""
if "_type" not in config:
raise ValueError("Must specify a chain Type in config")
config_type = config.pop("_type")
if config_type not in type_to_loader_dict:
raise ValueError(f"Loading {config_type} chain not supported")
chain_loader = type_to_loader_dict[config_type]
return chain_loader(config, **kwargs)
def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain:
"""Unified method for loading a chain from LangChainHub or local fs."""
if hub_result := try_load_from_hub(
path, _load_chain_from_file, "chains", {"json", "yaml"}, **kwargs
):
return hub_result
else:
return _load_chain_from_file(path, **kwargs)
def _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain:
    """Load chain from file."""
if isinstance(file, str):
file_path = Path(file)
else:
file_path = file
if file_path.suffix == ".json":
with open(file_path) as f:
config = json.load(f)
elif file_path.suffix == ".yaml":
with open(file_path, "r") as f:
config = yaml.safe_load(f)
else:
raise ValueError("File type must be json or yaml")
if "verbose" in kwargs:
config["verbose"] = kwargs.pop("verbose")
if "memory" in kwargs:
config["memory"] = kwargs.pop("memory")
return load_chain_from_config(config, **kwargs) |
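By analogy with `_load_vector_db_qa` above, a loader for `RetrievalQA` added to this module could look like the sketch below; since a `BaseRetriever` cannot be serialized, it would have to arrive through `kwargs`. The function name and the `"retrieval_qa"` registry key are assumptions, not the merged implementation:
```
from langchain.chains.retrieval_qa.base import RetrievalQA

def _load_retrieval_qa(config: dict, **kwargs: Any) -> RetrievalQA:
    if "retriever" in kwargs:
        retriever = kwargs.pop("retriever")
    else:
        raise ValueError("`retriever` must be present.")
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
    elif "combine_documents_chain_path" in config:
        combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
    else:
        raise ValueError(
            "One of `combine_documents_chain` or "
            "`combine_documents_chain_path` must be present."
        )
    return RetrievalQA(
        combine_documents_chain=combine_documents_chain,
        retriever=retriever,
        **config,
    )

# Registration would then be one more entry in the loader table:
# type_to_loader_dict["retrieval_qa"] = _load_retrieval_qa
```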
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,983 | RetrievalQA & RetrievalQAWithSourcesChain chains cannot be serialized/saved or loaded | https://github.com/langchain-ai/langchain/issues/3983 | https://github.com/langchain-ai/langchain/pull/5818 | b93638ef1ef683dfbb46e8e7654e96325324a98c | 5518f24ec38654510bada81025fe4e96c26556a7 | "2023-05-02T16:17:48Z" | python | "2023-06-08T04:07:13Z" | langchain/chains/retrieval_qa/base.py | """Chain for question-answering against a vector database."""
from __future__ import annotations
import warnings
from abc import abstractmethod
from typing import Any, Dict, List, Optional
from pydantic import Extra, Field, root_validator
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.manager import (
AsyncCallbackManagerForChainRun,
CallbackManagerForChainRun,
)
from langchain.chains.base import Chain
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.question_answering.stuff_prompt import PROMPT_SELECTOR
from langchain.prompts import PromptTemplate
from langchain.schema import BaseRetriever, Document
from langchain.vectorstores.base import VectorStore
class BaseRetrievalQA(Chain):
    combine_documents_chain: BaseCombineDocumentsChain
"""Chain to use to combine the documents."""
input_key: str = "query"
output_key: str = "result"
return_source_documents: bool = False
"""Return the source documents."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
allow_population_by_field_name = True
@property
def input_keys(self) -> List[str]:
"""Return the input keys.
:meta private:
"""
return [self.input_key]
@property
    def output_keys(self) -> List[str]:
        """Return the output keys.
:meta private:
"""
_output_keys = [self.output_key]
if self.return_source_documents:
_output_keys = _output_keys + ["source_documents"]
return _output_keys
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
prompt: Optional[PromptTemplate] = None,
**kwargs: Any,
) -> BaseRetrievalQA:
"""Initialize from LLM."""
_prompt = prompt or PROMPT_SELECTOR.get_prompt(llm)
llm_chain = LLMChain(llm=llm, prompt=_prompt)
document_prompt = PromptTemplate(
input_variables=["page_content"], template="Context:\n{page_content}"
)
combine_documents_chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_variable_name="context",
document_prompt=document_prompt,
)
return cls(combine_documents_chain=combine_documents_chain, **kwargs)
@classmethod
    def from_chain_type(
        cls,
llm: BaseLanguageModel,
chain_type: str = "stuff",
chain_type_kwargs: Optional[dict] = None,
**kwargs: Any,
) -> BaseRetrievalQA:
"""Load chain from chain type."""
_chain_type_kwargs = chain_type_kwargs or {}
combine_documents_chain = load_qa_chain(
llm, chain_type=chain_type, **_chain_type_kwargs
)
return cls(combine_documents_chain=combine_documents_chain, **kwargs)
@abstractmethod
    def _get_docs(self, question: str) -> List[Document]:
        """Get documents to do question answering over."""
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
"""Run get_relevant_text and llm on input query.
If chain has 'return_source_documents' as 'True', returns
the retrieved documents as well under the key 'source_documents'.
Example:
.. code-block:: python
res = indexqa({'query': 'This is my query'})
answer, docs = res['result'], res['source_documents']
"""
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
question = inputs[self.input_key]
docs = self._get_docs(question)
answer = self.combine_documents_chain.run(
input_documents=docs, question=question, callbacks=_run_manager.get_child()
)
if self.return_source_documents:
return {self.output_key: answer, "source_documents": docs}
else:
return {self.output_key: answer}
@abstractmethod
    async def _aget_docs(self, question: str) -> List[Document]:
        """Get documents to do question answering over."""
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
"""Run get_relevant_text and llm on input query.
If chain has 'return_source_documents' as 'True', returns
the retrieved documents as well under the key 'source_documents'.
Example:
.. code-block:: python
res = indexqa({'query': 'This is my query'})
answer, docs = res['result'], res['source_documents']
"""
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
question = inputs[self.input_key]
docs = await self._aget_docs(question)
answer = await self.combine_documents_chain.arun(
input_documents=docs, question=question, callbacks=_run_manager.get_child()
)
if self.return_source_documents:
return {self.output_key: answer, "source_documents": docs}
else:
return {self.output_key: answer}
class RetrievalQA(BaseRetrievalQA):
    """Chain for question-answering against an index.
Example:
.. code-block:: python
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
            from langchain.vectorstores import FAISS
from langchain.vectorstores.base import VectorStoreRetriever
retriever = VectorStoreRetriever(vectorstore=FAISS(...))
retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)
"""
retriever: BaseRetriever = Field(exclude=True)
def _get_docs(self, question: str) -> List[Document]:
return self.retriever.get_relevant_documents(question)
async def _aget_docs(self, question: str) -> List[Document]:
return await self.retriever.aget_relevant_documents(question)
class VectorDBQA(BaseRetrievalQA):
    """Chain for question-answering against a vector database."""
vectorstore: VectorStore = Field(exclude=True, alias="vectorstore")
"""Vector Database to connect to."""
k: int = 4
"""Number of documents to query for."""
search_type: str = "similarity"
"""Search type to use over vectorstore. `similarity` or `mmr`."""
search_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Extra search args."""
@root_validator()
def raise_deprecation(cls, values: Dict) -> Dict:
warnings.warn(
"`VectorDBQA` is deprecated - "
"please use `from langchain.chains import RetrievalQA`"
)
return values
@root_validator()
    def validate_search_type(cls, values: Dict) -> Dict:
        """Validate search type."""
if "search_type" in values:
search_type = values["search_type"]
if search_type not in ("similarity", "mmr"):
raise ValueError(f"search_type of {search_type} not allowed.")
return values
def _get_docs(self, question: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(
question, k=self.k, **self.search_kwargs
)
elif self.search_type == "mmr":
docs = self.vectorstore.max_marginal_relevance_search(
question, k=self.k, **self.search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def _aget_docs(self, question: str) -> List[Document]:
raise NotImplementedError("VectorDBQA does not support async")
@property
def _chain_type(self) -> str:
"""Return the chain type."""
return "vector_db_qa" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,713 | Inference parameters for Bedrock titan models not working | ### System Info
LangChain version 0.0.190
Python 3.9
### Who can help?
@seanpmorgan @3coins
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Tried the following to provide the `temperature` and `maxTokenCount` parameters when using the `Bedrock` class for the `amazon.titan-tg1-large` model.
```
import boto3
import botocore
from langchain.chains import LLMChain
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
from langchain.embeddings import BedrockEmbeddings
prompt = PromptTemplate(
input_variables=["text"],
template="{text}",
)
llm = Bedrock(model_id="amazon.titan-tg1-large")
llmchain = LLMChain(llm=llm, prompt=prompt)
llm.model_kwargs = {'temperature': 0.3, "maxTokenCount": 512}
text = "Write a blog explaining Generative AI in ELI5 style."
response = llmchain.run(text=text)
print(f"prompt={text}\n\nresponse={response}")
```
This results in the following exception
```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid
```
This happens because https://github.com/hwchase17/langchain/blob/d0d89d39efb5f292f72e70973f3b70c4ca095047/langchain/llms/bedrock.py#L20 passes these params as top-level key-value pairs rather than nesting them in the `textGenerationConfig` structure that the Titan model expects.
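Concretely, the request body the adapter builds today versus what the Titan endpoint accepts — the exact Titan schema here is my reading of the Bedrock API, so treat it as an assumption:
```
# Built by the current adapter (rejected with ValidationException):
{"inputText": "...", "temperature": 0.3, "maxTokenCount": 512}

# Shape the Titan models expect, with generation params nested:
{
    "inputText": "...",
    "textGenerationConfig": {"temperature": 0.3, "maxTokenCount": 512},
}
```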
The proposed fix is as follows:
```
def prepare_input(
cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
input_body = {**model_kwargs}
if provider == "anthropic" or provider == "ai21":
input_body["prompt"] = prompt
elif provider == "amazon":
input_body = dict()
input_body["inputText"] = prompt
input_body["textGenerationConfig] = {**model_kwargs}
else:
input_body["inputText"] = prompt
if provider == "anthropic" and "max_tokens_to_sample" not in input_body:
input_body["max_tokens_to_sample"] = 50
return input_body
```
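Assuming the classmethod above is patched onto `LLMInputOutputAdapter`, a quick sanity check that Titan inputs now nest correctly:
```
from langchain.llms.bedrock import LLMInputOutputAdapter

body = LLMInputOutputAdapter.prepare_input(
    "amazon", "Write a haiku.", {"temperature": 0.3, "maxTokenCount": 512}
)
assert body == {
    "inputText": "Write a haiku.",
    "textGenerationConfig": {"temperature": 0.3, "maxTokenCount": 512},
}
```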
### Expected behavior
Support the inference config parameters. | https://github.com/langchain-ai/langchain/issues/5713 | https://github.com/langchain-ai/langchain/pull/5896 | 767fa91eae3455050d85a594fededddff3311dbe | a6ebffb69504576a805f3b9f09732ad344751b89 | "2023-06-05T06:48:57Z" | python | "2023-06-08T21:16:01Z" | langchain/llms/bedrock.py | import json
from typing import Any, Dict, List, Mapping, Optional
from pydantic import Extra, root_validator
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
class LLMInputOutputAdapter:
"""Adapter class to prepare the inputs from Langchain to a format
that LLM model expects. Also, provides helper function to extract
the generated text from the model response."""
@classmethod
def prepare_input(
cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
input_body = {**model_kwargs}
if provider == "anthropic" or provider == "ai21":
input_body["prompt"] = prompt
else:
input_body["inputText"] = prompt
if provider == "anthropic" and "max_tokens_to_sample" not in input_body:
input_body["max_tokens_to_sample"] = 50
return input_body
@classmethod
    def prepare_output(cls, provider: str, response: Any) -> str:
        if provider == "anthropic":
response_body = json.loads(response.get("body").read().decode())
return response_body.get("completion")
else:
response_body = json.loads(response.get("body").read())
if provider == "ai21":
return response_body.get("completions")[0].get("data").get("text")
else:
return response_body.get("results")[0].get("outputText")
class Bedrock(LLM):
"""LLM provider to invoke Bedrock models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
"""
""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,713 | Inference parameters for Bedrock titan models not working | ### System Info
LangChain version 0.0.190
Python 3.9
### Who can help?
@seanpmorgan @3coins
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Tried the following to provide the `temperature` and `maxTokenCount` parameters when using the `Bedrock` class for the `amazon.titan-tg1-large` model.
```
import boto3
import botocore
from langchain.chains import LLMChain
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
from langchain.embeddings import BedrockEmbeddings
prompt = PromptTemplate(
input_variables=["text"],
template="{text}",
)
llm = Bedrock(model_id="amazon.titan-tg1-large")
llmchain = LLMChain(llm=llm, prompt=prompt)
llm.model_kwargs = {'temperature': 0.3, "maxTokenCount": 512}
text = "Write a blog explaining Generative AI in ELI5 style."
response = llmchain.run(text=text)
print(f"prompt={text}\n\nresponse={response}")
```
This results in the following exception
```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid
```
This happens because https://github.com/hwchase17/langchain/blob/d0d89d39efb5f292f72e70973f3b70c4ca095047/langchain/llms/bedrock.py#L20 passes these params as key value pairs rather than putting them in the `textgenerationConfig` structure as the Titan model expects them to be,
The proposed fix is as follows:
```
def prepare_input(
cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
input_body = {**model_kwargs}
if provider == "anthropic" or provider == "ai21":
input_body["prompt"] = prompt
elif provider == "amazon":
input_body = dict()
input_body["inputText"] = prompt
input_body["textGenerationConfig] = {**model_kwargs}
else:
input_body["inputText"] = prompt
if provider == "anthropic" and "max_tokens_to_sample" not in input_body:
input_body["max_tokens_to_sample"] = 50
return input_body
```
```
### Expected behavior
Support the inference config parameters. | https://github.com/langchain-ai/langchain/issues/5713 | https://github.com/langchain-ai/langchain/pull/5896 | 767fa91eae3455050d85a594fededddff3311dbe | a6ebffb69504576a805f3b9f09732ad344751b89 | "2023-06-05T06:48:57Z" | python | "2023-06-08T21:16:01Z" | langchain/llms/bedrock.py | Example:
.. code-block:: python
            from langchain.llms.bedrock import Bedrock

            llm = Bedrock(
credentials_profile_name="default",
model_id="amazon.titan-tg1-large"
)
"""
client: Any
region_name: Optional[str] = None
"""The aws region e.g., `us-west-2`. Fallsback to AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config in case it is not provided here.
"""
credentials_profile_name: Optional[str] = None
"""The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
"""
model_id: str
"""Id of the model to call, e.g., amazon.titan-tg1-large, this is
equivalent to the modelId property in the list-foundation-models api"""
model_kwargs: Optional[Dict] = None
"""Key word arguments to pass to the model."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that AWS credentials to and python package exists in environment."""
if values["client"] is not None:
return values
try:
import boto3
if values["credentials_profile_name"] is not None:
session = boto3.Session(profile_name=values["credentials_profile_name"])
else:
session = boto3.Session()
client_params = {}
if values["region_name"]:
client_params["region_name"] = values["region_name"]
values["client"] = session.client("bedrock", **client_params)
except ImportError:
raise ModuleNotFoundError(
"Could not import boto3 python package. "
"Please install it with `pip install boto3`."
)
except Exception as e:
raise ValueError(
"Could not load credentials to authenticate with AWS client. "
"Please check that credentials in the specified "
"profile name are valid."
) from e
return values
@property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
_model_kwargs = self.model_kwargs or {}
return {
**{"model_kwargs": _model_kwargs},
}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "amazon_bedrock"
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
"""Call out to Bedrock service model.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The string generated by the model.
Example:
.. code-block:: python
                response = llm("Tell me a joke.")
"""
_model_kwargs = self.model_kwargs or {}
provider = self.model_id.split(".")[0]
input_body = LLMInputOutputAdapter.prepare_input(
provider, prompt, _model_kwargs
)
body = json.dumps(input_body)
accept = "application/json"
contentType = "application/json"
try:
response = self.client.invoke_model(
body=body, modelId=self.model_id, accept=accept, contentType=contentType
)
text = LLMInputOutputAdapter.prepare_output(provider, response)
except Exception as e:
raise ValueError(f"Error raised by bedrock service: {e}")
if stop is not None:
text = enforce_stop_tokens(text, stop)
        return text
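With the Titan parameters routed correctly, the original repro from the issue should run unchanged; a compact version, passing the kwargs at construction time (AWS credentials are assumed to be configured):
```
from langchain.llms.bedrock import Bedrock

llm = Bedrock(
    model_id="amazon.titan-tg1-large",
    model_kwargs={"temperature": 0.3, "maxTokenCount": 512},
)
print(llm("Write a blog explaining Generative AI in ELI5 style."))
```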