issue_owner_repo (listlengths 2-2) | issue_body (stringlengths 0-261k, ⌀) | issue_title (stringlengths 1-925) | issue_comments_url (stringlengths 56-81) | issue_comments_count (int64 0-2.5k) | issue_created_at (stringlengths 20-20) | issue_updated_at (stringlengths 20-20) | issue_html_url (stringlengths 37-62) | issue_github_id (int64 387k-2.46B) | issue_number (int64 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
python==3.10.13
langchain==0.0.336
pydantic==1.10.13
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Noticed an error with Pydantic validation when the schema contains optional lists. Here are the steps to reproduce the issue and the error that I am getting.
1. A basic extraction scheme is defined using Pydantic.
```python
from typing import List, Optional

from pydantic import BaseModel, Field

class Properties(BaseModel):
    person_names: Optional[List[str]] = Field([], description="The names of the people")
    person_heights: Optional[List[int]] = Field([], description="The heights of the people")
    person_hair_colors: Optional[List[str]] = Field([], description="The hair colors of the people")
```
2. Extraction chain is created to extract the defined fields from a document.
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import create_extraction_chain_pydantic

llm = ChatOpenAI(
    temperature=0, model="gpt-3.5-turbo", request_timeout=20, max_retries=1
)
chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)
```
3. When we run the extraction on a document, sometimes the OpenAI function call does not return the `info` field as a `list` but instead as a `dict`. That creates a validation error with Pydantic, even if the extracted fields are perfectly given in the returned dictionary.
```python
inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""
response = chain.run(inp)
```
4. The error and the traceback are as follows
```python
File "pydantic/main.py", line 549, in pydantic.main.BaseModel.parse_raw
File "pydantic/main.py", line 526, in pydantic.main.BaseModel.parse_obj
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for PydanticSchema
info
value is not a valid list (type=type_error.list)
```
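For illustration only (this mirrors the `info` wrapper that `create_extraction_chain_pydantic` builds internally, it is not the chain's actual code), the casting suggested under "Expected behavior" below could be expressed as a Pydantic v1 `pre` validator:
```python
from typing import List

from pydantic import BaseModel, validator

class PydanticSchemaLike(BaseModel):
    # Stand-in for the internal wrapper holding `info: List[Properties]`.
    info: List[dict]

    @validator("info", pre=True)
    def coerce_info_to_list(cls, value):
        # If the function call returned a single dict instead of a list, wrap it.
        if isinstance(value, dict):
            return [value]
        return value
```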
### Expected behavior
We can make the Pydantic validation pass by maybe simply casting the `info` field into a list if it is somehow returned as a dictionary by the OpenAI function call. | OpenAI Functions Extraction Chain not returning a list | https://api.github.com/repos/langchain-ai/langchain/issues/13533/comments | 6 | 2023-11-17T20:18:52Z | 2024-04-15T16:42:30Z | https://github.com/langchain-ai/langchain/issues/13533 | 1,999,863,087 | 13,533 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Python 3.11.5
- google-ai-generativelanguage==0.3.3
- langchain==0.0.336
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the issue:
1. Create a simple GooglePalm llm
2. Run a simple chain with a medical-related prompt, e.g. `Tell me reasons why I am having symptoms of cold` (a minimal sketch of both steps is shown below)
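A minimal sketch of those two steps (the API key handling and exact chain setup are assumptions of mine):
```python
from langchain.chains import LLMChain
from langchain.llms import GooglePalm
from langchain.prompts import PromptTemplate

llm = GooglePalm(google_api_key="...", temperature=0.1)
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("{question}"))

# The safety filter blocks the completion, `candidates` comes back empty,
# and the output parser then indexes result[0] -> IndexError.
chain.run(question="Tell me reasons why I am having symptoms of cold")
```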
Error:
```
return self.parse(result[0].text)
~~~~~~^^^
IndexError: list index out of range
```
### Expected behavior
There is a proper error thrown by GooglePalm in completion object
```
Completion(candidates=[],
result=None,
filters=[{'reason': <BlockedReason.SAFETY: 1>}],
safety_feedback=[{'rating': {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>,
'probability': <HarmProbability.HIGH: 4>},
'setting': {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>,
'threshold': <HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE: 2>}}])
```
If same is relayed to user it will be more useful. | Output parser fails with index out of range error but doesn't give actual fail reason in case of GooglePalm | https://api.github.com/repos/langchain-ai/langchain/issues/13532/comments | 3 | 2023-11-17T20:02:52Z | 2024-02-23T16:05:32Z | https://github.com/langchain-ai/langchain/issues/13532 | 1,999,837,560 | 13,532 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be very nice to be able to build something similar to RAG, but for checking the quality of an assumption against a knowledge base.
### Motivation
Currently, RAG allows a semantic search but does not help when it comes to evaluating the quality of a user input, based on your vector db.
The point is not to fact-check the news in a newspaper (must be very hard actually...), but to evaluate truth in a random prompt.
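For illustration, a rough sketch of what this could look like today with existing building blocks (the prompt wording and helper function are my own, not a proposed API):
```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Context:\n{context}\n\n"
    "Claim: {claim}\n"
    "Based only on the context, is the claim supported, contradicted, or unverifiable? "
    "Answer with one word and a short justification."
)
chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)

def check_claim(claim: str, retriever) -> str:
    # Retrieve evidence from the vector store, then grade the claim against it.
    docs = retriever.get_relevant_documents(claim)
    context = "\n\n".join(doc.page_content for doc in docs)
    return chain.run(context=context, claim=claim)
```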
### Your contribution
- prompting
- PR
- doc
- discussions
| RAG but for fact checking | https://api.github.com/repos/langchain-ai/langchain/issues/13526/comments | 5 | 2023-11-17T17:31:41Z | 2024-02-24T16:05:37Z | https://github.com/langchain-ai/langchain/issues/13526 | 1,999,617,934 | 13,526 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain == 0.0.336
python == 3.9.6
### Who can help?
@hwchase17 @eyu
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```python
video_id = YoutubeLoader.extract_video_id("https://www.youtube.com/watch?v=DWUzWvv3xz4")
loader = YoutubeLoader(video_id, add_video_info=True,
                       language=["en", "hi", "ru"],
                       translation="en")
loader.load()
```
Output:
[Document(page_content='में रुके शोध सजावट हाउ टू लेट सफल टीम मेंबर्स नो दैट वास नॉट फीयर कॉन्टेक्ट्स रिस्पोंड फॉर एग्जांपल एक्टिव एंड स्ट्रैंथ एंड assistant 3000 1000 कॉन्टैक्ट एंड वे नेवर रिग्रेट थे रिस्पांस यू वांट यू ऑटोमेटेकली गेट नोटिफिकेशन टुडे ओके खुफिया इज द नोटिफिकेशन पाइथन इन नोटिफिकेशन सीनेटर नोटिफिकेशन प्यार सेलिब्रेट विन रिस्पोंस वर्षीय बेटे ईमेल एस वेल एजुकेटेड व्हाट्सएप नोटिफिकेशन इफ यू वांट एनी फीयर अदर टीम मेंबर टू ऑल्सो गेट नोटिफिकेशन व्हेन यू व्हेन यू रिसीवर रिस्पांस फ्रॉम अननोन फीयर कांटेक्ट सुधर रिस्पांस सिस्टम लिफ्ट से ज़ू और यह टीम मेंबर कैन रिस्पोंड इम्युनिटी द न्यू कैंट व ईमेल ऐड्रेस आफ थे पर्सन वरीय स्वीडन से लेफ्ट से अब दूर एक पाठ 7 टारगेट्स ऑयल सुबह रायपुर दिस ईमेल एड्रेस नो अब्दुल विल गेट एनी नोटिफिकेशन फ्रॉम assistant साक्षी व्हेनेवर एनी बडी दिस पॉइंट स्ट्रेन ईमेल अब्दुल विल अलसो गेट डर्टी में अगर सब्जेक्ट लाइन रिस्पांस रिसीवड ए [संगीत]', metadata={'source': 'DWUzWvv3xz4', 'title': 'How to Notify your team member Immediately when a Lead Responds', 'description': 'Unknown', 'view_count': 56, 'thumbnail_url': 'https://i.ytimg.com/vi/DWUzWvv3xz4/hq720.jpg', 'publish_date': '2021-10-04 00:00:00', 'length': 87, 'author': '7Targets AI Sales Assistant'})]
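As a side check (my own diagnostic, not part of the original report), the `youtube-transcript-api` package that `YoutubeLoader` relies on can be queried directly to see whether the Hindi transcript offers an English translation:
```python
from youtube_transcript_api import YouTubeTranscriptApi

transcripts = YouTubeTranscriptApi.list_transcripts("DWUzWvv3xz4")
transcript = transcripts.find_transcript(["en", "hi", "ru"])
print(transcript.language_code, transcript.is_translatable)
# Fetch the first few translated snippets, if translation is available.
print(transcript.translate("en").fetch()[:3])
```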
### Expected behavior
The output video transcript should be in English. | YoutubeLoader translation not working | https://api.github.com/repos/langchain-ai/langchain/issues/13523/comments | 6 | 2023-11-17T16:32:37Z | 2024-02-19T18:30:42Z | https://github.com/langchain-ai/langchain/issues/13523 | 1,999,507,481 | 13,523 |
[
"hwchase17",
"langchain"
]
| Hello,
### System Info
Langchain Version: 0.0.336
OS: Windows
### Who can help?
No response
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [x] Vector Stores / Retrievers
- [x] Chains
- [x] SQL Database
### Reproduction
Related to structured data.
I have predefined SQL statements and their variable information. The SQL is quite complicated, joining multiple tables with abbreviated column names, which is a common practical situation. Is there a way to output the SQL text and run it, with the constraint that the user must fully fill in the variable values?
Example SQL standard information:
No | Schema | SQL Text | Intent | Condition | Example
-- | -- | -- | -- | -- | --
1 | m_data | Select sum(s.qty) from shipment_info s, product_info p where 1=1 and s.prod_id = p.prod_id and p.prod_type = #prod_type and p.prod_name = #prod_name | Total quantity of goods exported during the day | #prod_type, #prod_name | I want to calculate the total number of model AAA(#prod_name) phones(#prod_type) shipped during the day
100 | … | … | … | … | …
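For illustration only (the connection string, extraction prompt behavior, and quoting are assumptions of mine; the table and variables come from the example row above), one way to enforce "fill every variable, then execute" could look like this:
```python
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///m_data.db")  # assumed connection string
llm = ChatOpenAI(temperature=0)

template = (
    "Select sum(s.qty) from shipment_info s, product_info p "
    "where 1=1 and s.prod_id = p.prod_id "
    "and p.prod_type = #prod_type and p.prod_name = #prod_name"
)
schema = {"properties": {"prod_type": {"type": "string"}, "prod_name": {"type": "string"}}}

def run_predefined_sql(question: str) -> str:
    extracted = create_extraction_chain(schema, llm).run(question)
    values = extracted[0] if extracted else {}
    missing = [k for k in ("prod_type", "prod_name") if not values.get(k)]
    if missing:
        # Constraint from the issue: only execute once every variable is supplied.
        return f"Please provide a value for: {', '.join(missing)}"
    sql = template.replace("#prod_type", repr(values["prod_type"]))
    sql = sql.replace("#prod_name", repr(values["prod_name"]))
    return db.run(sql)  # crude quoting via repr(), for the sketch only
```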
### Expected behavior
I expect the SQL text to be produced once all variables are supplied, and then executed against the database.
| Extract SQL information and execute | https://api.github.com/repos/langchain-ai/langchain/issues/13519/comments | 7 | 2023-11-17T15:27:24Z | 2024-02-24T16:05:42Z | https://github.com/langchain-ai/langchain/issues/13519 | 1,999,384,822 | 13,519 |
[
"hwchase17",
"langchain"
]
| ### System Info
aiohttp==3.8.4
aiosignal==1.3.1
altair==5.0.1
annotated-types==0.6.0
anyio==3.7.1
asgiref==3.7.2
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
blinker==1.6.2
cachetools==5.3.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.2.0
click==8.1.5
clickhouse-connect==0.6.6
coloredlogs==15.0.1
cryptography==41.0.2
dataclasses-json==0.5.9
decorator==5.1.1
deprecation==2.1.0
dnspython==2.4.0
duckdb==0.8.1
ecdsa==0.18.0
et-xmlfile==1.1.0
fastapi==0.104.1
fastapi-pagination==0.12.12
filetype==1.2.0
flatbuffers==23.5.26
frozenlist==1.4.0
gitdb==4.0.10
GitPython==3.1.32
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httpcore==0.17.3
httptools==0.6.0
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
install==1.3.5
itsdangerous==2.1.2
Jinja2==3.1.2
joblib==1.3.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.18.3
jsonschema-specifications==2023.6.1
langchain==0.0.335
langsmith==0.0.64
loguru==0.7.0
lxml==4.9.3
lz4==4.3.2
Markdown==3.4.3
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.1
olefile==0.46
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
overrides==7.3.1
packaging==23.1
pandas==2.0.3
pdf2image==1.16.3
pdfminer.six==20221105
Pillow==9.5.0
pinecone-client==2.2.4
posthog==3.0.1
protobuf==4.23.4
psycopg2-binary==2.9.7
pulsar-client==3.2.0
py-automapper==1.2.3
pyarrow==12.0.1
pyasn1==0.5.0
pycparser==2.21
pycryptodome==3.18.0
pydantic==2.5.0
pydantic-settings==2.1.0
pydantic_core==2.14.1
pydeck==0.8.1b0
Pygments==2.15.1
pymongo==4.6.0
Pympler==1.0.1
pypandoc==1.11
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-jose==3.3.0
python-keycloak==2.16.6
python-magic==0.4.27
python-pptx==0.6.21
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
PyYAML==6.0
referencing==0.29.1
regex==2023.6.3
requests==2.31.0
requests-toolbelt==1.0.0
rich==13.4.2
rpds-py==0.8.11
rsa==4.9
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
SQLAlchemy==2.0.19
starlette==0.27.0
streamlit==1.24.1
sympy==1.12
tabulate==0.9.0
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
tornado==6.3.2
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==4.3.1
unstructured==0.8.1
urllib3==2.0.3
uvicorn==0.23.0
uvloop==0.17.0
validators==0.20.0
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
xlrd==2.0.1
XlsxWriter==3.1.2
yarl==1.9.2
zipp==3.16.2
zstandard==0.21.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Omitted LLM and store retriever code
memory = VectorStoreRetrieverMemory(
    retriever=retriever,
    return_messages=True,
)
tool = create_retriever_tool(
    retriever,
    "search_egypt_mythology",
    "Searches and returns documents about egypt mythology",
)
tools = [tool]
system_message = SystemMessage(
    content=(
        "Do your best to answer the questions. "
        "Feel free to use any tools available to look up "
        "relevant information, only if necessary"
    )
)
prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[
        MessagesPlaceholder(variable_name="history")
    ],
)
agent = OpenAIFunctionsAgent(llm=__llm, tools=tools, prompt=prompt)
chat = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    return_intermediate_steps=True,
)
result = chat({"input": "my question"})
answer = result['output']
```
**The error is:
ValueError: variable history should be a list of base messages, got**
The agent works with the mongodb as chat history though, so it should have worked with the vector memory retriever:
```python
mongo_history = MongoDBChatMessageHistory(
    connection_string=settings.mongo_connection_str,
    session_id=__get_chat_id(user_uuid),
    database_name=settings.mongo_db_name,
    collection_name='chat_history',
)
memory = ConversationBufferMemory(
    chat_memory=mongo_history,
    memory_key='history',
    input_key="input",
    output_key="output",
    return_messages=True,
)
```
### Expected behavior
Vector store retriever memory should work with the agent in the same way the MongoDBChatMessageHistory-backed memory does.
The documentation does not mention this limitation.
[
"hwchase17",
"langchain"
]
| ### Feature request
I see the current SQLiteCache is storing the entire prompt message in the SQLite db.
It would be more efficient to just hash the prompt and store this as the key for cache lookup.
### Motivation
My prompt messages are often lengthy and I want to optimize the storage requirements of the Cache.
Using an MD5 hash as the lookup key would also make lookups faster than matching on the full prompt string.
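A sketch of the idea (my own code, not an existing LangChain class) that wraps the current cache and stores only the digest of the prompt:
```python
import hashlib

from langchain.cache import SQLiteCache

class HashedSQLiteCache(SQLiteCache):
    """Stores md5(prompt) instead of the full prompt as the lookup key."""

    @staticmethod
    def _hash(prompt: str) -> str:
        return hashlib.md5(prompt.encode("utf-8")).hexdigest()

    def lookup(self, prompt, llm_string):
        return super().lookup(self._hash(prompt), llm_string)

    def update(self, prompt, llm_string, return_val):
        return super().update(self._hash(prompt), llm_string, return_val)
```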
### Your contribution
If this feature makes sense, I can work on this and raise a PR. Let me know. | SQLiteCache - store only the hash of the prompt as key instead of the entire prompt | https://api.github.com/repos/langchain-ai/langchain/issues/13513/comments | 3 | 2023-11-17T12:40:31Z | 2024-02-23T16:05:52Z | https://github.com/langchain-ai/langchain/issues/13513 | 1,999,036,428 | 13,513 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am facing an issue using the `MarkdownHeaderTextSplitter` class and, after looking at the code, I noticed that the problem might be present in several text splitters; I did not find any existing issue on this, so I am creating a new one.
I am trying to use `MarkdownHeaderTextSplitter` through the `TextSplitter` interface by calling the `transform_documents` method.
However, the [`MarkdownHeaderTextSplitter` does not inherit from `TextSplitter`](https://github.com/langchain-ai/langchain/blob/35e04f204ba3e69356a4f8f557ea88f46d2fa389/libs/langchain/langchain/text_splitter.py#L331) and I wondered if it was a justified implementation or just an oversight.
It seems that the [`HTMLHeaderTextSplitter` is in that case too](https://github.com/langchain-ai/langchain/blob/35e04f204ba3e69356a4f8f557ea88f46d2fa389/libs/langchain/langchain/text_splitter.py#L496).
Can you give me some insight on how to use these classes if this behavior is intended?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This code is a good way to reproduce what I am trying to do
```python
from langchain.text_splitter import MarkdownHeaderTextSplitter
from langchain.document_loaders import TextLoader

loader = TextLoader("test.md")
document = loader.load()
transformer = MarkdownHeaderTextSplitter([
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
])
tr_documents = transformer.transform_documents(document)
```
### Expected behavior
I want this to return a list of documents (`langchain.docstore.document.Document`) split in the same way `MarkdownHeaderTextSplitter.split_text` does on the content of a markdown document [as presented in the documentation](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/markdown_header_metadata).
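In the meantime, a manual workaround sketch (my own code, not an official API) that produces that list of `Document`s:
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import MarkdownHeaderTextSplitter

splitter = MarkdownHeaderTextSplitter([
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
])

split_docs = []
for doc in TextLoader("test.md").load():
    # split_text already returns Documents carrying the header metadata.
    for chunk in splitter.split_text(doc.page_content):
        chunk.metadata.update(doc.metadata)
        split_docs.append(chunk)
```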
| Text splitters inhéritance | https://api.github.com/repos/langchain-ai/langchain/issues/13510/comments | 3 | 2023-11-17T10:20:27Z | 2024-03-18T16:06:44Z | https://github.com/langchain-ai/langchain/issues/13510 | 1,998,763,281 | 13,510 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I am trying the claude-2 model using the ChatAnthropic library, iterating over my data and calling the model for predictions. Since the input is a chain of documents, I am using StuffDocumentsChain. I am currently facing two issues.
1. CLOSE_WAIT connections: after a connection to Anthropic's CloudFront endpoint is established, a new connection is created for each iteration over the data, and it moves into the CLOSE_WAIT state some time after that iteration completes.
2. When too many connections sit in the CLOSE_WAIT state, the application raises a "too many open files" error because the file descriptors are exhausted by those lingering connections.
## Solutions Tried
1. Setting `default_request_timeout` to handle the CLOSE_WAIT error.
2. Wrapping the call in a function with try/except/finally to close any objects within the block.
## Code
```
import warnings
warnings.filterwarnings("ignore")
import jaconv
import numpy as np
import pandas as pd
from langchain import PromptTemplate
from fuzzysearch import find_near_matches
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatAnthropic
from langchain.document_loaders import PDFPlumberLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import HumanMessage, SystemMessage
from langchain.schema.output_parser import OutputParserException
from langchain.chains import LLMChain, StuffDocumentsChain
from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
import time
from json.decoder import JSONDecodeError
from loguru import logger
output_parser = StructuredOutputParser.from_response_schemas([
ResponseSchema(
name="answer",
description="answer to the user's question",
type="string"
),
])
format_instructions = output_parser.get_format_instructions()
def load_data_(file_path):
return PDFPlumberLoader(file_path).load_and_split()
def retriever_(docs):
return EnsembleRetriever(
retrievers=[
BM25Retriever.from_documents(docs, k=4),
Chroma.from_documents(
docs,
OpenAIEmbeddings(
chunk_size=200,
max_retries=60,
show_progress_bar=True
)
).as_retriever(search_kwargs={"k": 4})
],
weights=[0.5, 0.5]
)
def qa_chain():
start_qa = time.time()
qa_chain_output = StuffDocumentsChain(
llm_chain=LLMChain(
llm=ChatAnthropic(
model_name="claude-2",
temperature=0,
top_p=1,
max_tokens_to_sample=500000,
default_request_timeout=30
),
prompt=ChatPromptTemplate(
messages=[
SystemMessage(content='You are a world class & knowledgeable product catalog assistance.'),
HumanMessagePromptTemplate.from_template('Context:\n{context}'),
HumanMessagePromptTemplate.from_template('{format_instructions}'),
HumanMessagePromptTemplate.from_template('Questions:\n{question}'),
HumanMessage(content='Tips: Make sure to answer in the correct format.'),
],
partial_variables={"format_instructions": format_instructions}
)
),
document_variable_name='context',
document_prompt=PromptTemplate(
template='Content: {page_content}',
input_variables=['page_content'],
)
)
logger.info("Finished QA Chain in {}",time.time()-start_qa)
return qa_chain_output
def generate(input_data, retriever):
start_generate = time.time()
doc_query = jaconv.z2h(
jaconv.normalize(
"コード: {}\n製品の種類: {}\nメーカー品名: {}".format(
str(input_data["Product Code"]),
str(input_data["Product Type"]),
str(input_data["Product Name"])
),
"NFKC"
),
kana=False,
digit=True,
ascii=True
)
docs = RecursiveCharacterTextSplitter(
chunk_size=512,
chunk_overlap=128
).split_documents(
retriever.get_relevant_documents(doc_query)
)
docs = [
doc
for doc in docs
if len(
find_near_matches(
str(input_data["Product Code"]),
str(doc.page_content),
max_l_dist=1
)
) > 0
]
pages = list(set([str(doc.metadata["page"]) for doc in docs]))
question = (
"Analyze the provided context and understand to extract the following attributes as precise as possible for {}"
"Attributes:\n"
"{}\n"
"Return it in this JSON format: {}. "
"if no information found or not sure return \"None\"."
)
generate_output = output_parser.parse(
qa_chain().run({
"input_documents": docs,
"question": question.format(
str(input_data["Product Code"]),
'\n'.join([
f" {i + 1}. What is the value of \"{attrib}\"?"
for i, attrib in enumerate(input_data["Attributes"])
]),
str({"answer": {attrib: f"value of {attrib}" for attrib in input_data["Attributes"]}}),
)
})
)["answer"], ';'.join(pages)
logger.info("Finished QA Chain in {}",time.time()-start_generate)
return generate_output
def predict(pdf_file_path:str, csv_file_path:str,output_csv_file_path:str):
# load PDF data
start_load = time.time()
logger.info("Started loading the pdf")
documents = load_data_(pdf_file_path)
logger.info("Finished loading the pdf in {}",time.time()-start_load)
try:
start_retriever = time.time()
logger.info("Started retriever the pdf")
retriever = retriever_(documents)
logger.info("Finished retriever the pdf in {}",time.time()-start_retriever)
except Exception as e:
logger.error(f"Error in Retriever: {e}")
# Load CSV
start_load_csv = time.time()
logger.info("Started Reading Csv")
df = pd.read_csv(
csv_file_path,
low_memory=False,
dtype=object
).replace(np.nan, 'None')
logger.info("Finished Reading Csv in {}",time.time()-start_load_csv)
# Inference
index = 0
result_df = list()
start_generate = time.time()
logger.info("Started Generate Function")
for code, dfx in df.groupby('Product Code'):
start_generate_itr = time.time()
try:
logger.info("Reached index {}",index)
logger.info("Reached index code {}",code)
index = index + 1
result, page = generate(
{
'Product Code': code,
'Product Type': dfx['Product Type'].tolist()[0],
'Product Name': dfx['Product Name'].tolist()[0],
'Attributes': dfx['Attributes'].tolist()
},
retriever
)
dfx['Value'] = dfx['Attributes'].apply(lambda attrib: result.get(attrib, 'None'))
dfx['Page'] = page
logger.info("Generate Function 1 iterations {}",time.time()-start_generate_itr)
except OutputParserException as e:
dfx['Value'] = "None"
dfx['Page'] = "None"
logger.info("Generate Function 1 iterations {}",time.time()-start_generate_itr)
logger.info("JSONDecodeError Exception Occurred in {}",e)
result_df.append(dfx)
logger.info("Finished Generate Function in {}",time.time()-start_generate)
try:
result_df = pd.concat(result_df)
result_df.to_csv(output_csv_file_path)
except FileNotFoundError as e:
logger.error("FileNotFoundError Exception Occurred in {}",e)
return result_df
# df = predict(f"{PDF_FILE}", f"{CSV_FILE}","output.csv")
```
### Suggestion:
I would like to know how to handle the connection CLOSE_WAIT error since I am handling a big amount of data to be processed through Anthropic claude-2 | Issue: Getting CLOSE_WAIT and too many files open error using ChatAnthropic and StuffDocumentsChain | https://api.github.com/repos/langchain-ai/langchain/issues/13509/comments | 12 | 2023-11-17T10:05:35Z | 2024-06-24T16:07:27Z | https://github.com/langchain-ai/langchain/issues/13509 | 1,998,738,328 | 13,509 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.9
langchain 0.0.336
openai 1.3.2
pandas 2.1.3
### Who can help?
@EYU
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
First of all, thank you for this great library !
Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally started with the following command:
```
python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta
```
On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes).
```python
from langchain.llms import VLLMOpenAI

llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="zephyr-7b-beta",
)
print(llm("Rome is"))
```
And I got the following error:
```text
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[19], line 6
1 llm = VLLMOpenAI(
2 openai_api_key="EMPTY",
3 openai_api_base="http://localhost:8000/v1",
4 model_name="zephyr-7b-beta",
5 )
----> 6 llm("Rome is")
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
869 if not isinstance(prompt, str):
870 raise ValueError(
871 "Argument `prompt` is expected to be a string. Instead found "
872 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
873 "`generate` instead."
874 )
875 return (
--> 876 self.generate(
877 [prompt],
878 stop=stop,
879 callbacks=callbacks,
880 tags=tags,
881 metadata=metadata,
882 **kwargs,
883 )
884 .generations[0][0]
885 .text
886 )
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
641 raise ValueError(
642 "Asked to cache, but no cache found at `langchain.cache`."
643 )
644 run_managers = [
645 callback_manager.on_llm_start(
646 dumpd(self),
(...)
654 )
655 ]
--> 656 output = self._generate_helper(
657 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
658 )
659 return output
660 if len(missing_prompts) > 0:
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
542 for run_manager in run_managers:
543 run_manager.on_llm_error(e)
--> 544 raise e
545 flattened_outputs = output.flatten()
546 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
521 def _generate_helper(
522 self,
523 prompts: List[str],
(...)
527 **kwargs: Any,
528 ) -> LLMResult:
529 try:
530 output = (
--> 531 self._generate(
532 prompts,
533 stop=stop,
534 # TODO: support multiple run managers
535 run_manager=run_managers[0] if run_managers else None,
536 **kwargs,
537 )
538 if new_arg_supported
539 else self._generate(prompts, stop=stop)
540 )
541 except BaseException as e:
542 for run_manager in run_managers:
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
442 choices.append(
443 {
444 "text": generation.text,
(...)
451 }
452 )
453 else:
--> 454 response = completion_with_retry(
455 self, prompt=_prompts, run_manager=run_manager, **params
456 )
457 if not isinstance(response, dict):
458 # V1 client returns the response in an PyDantic object instead of
459 # dict. For the transition period, we deep convert it to dict.
460 response = response.dict()
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs)
112 """Use tenacity to retry the completion call."""
113 if is_openai_v1():
--> 114 return llm.client.create(**kwargs)
116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
118 @retry_decorator
119 def _completion_with_retry(**kwargs: Any) -> Any:
File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
297 msg = f"Missing required argument: {quote(missing[0])}"
298 raise TypeError(msg)
--> 299 return func(*args, **kwargs)
TypeError: create() got an unexpected keyword argument 'api_key'
```
It seems that if I remove line 158 from `langchain/llms/vllm.py`, the code works.
### Expected behavior
I expect a completion with no error. | VLLMOpenAI -- create() got an unexpected keyword argument 'api_key' | https://api.github.com/repos/langchain-ai/langchain/issues/13507/comments | 3 | 2023-11-17T08:56:07Z | 2023-11-20T01:49:57Z | https://github.com/langchain-ai/langchain/issues/13507 | 1,998,591,711 | 13,507 |
[
"hwchase17",
"langchain"
]
| ### System Info
I was using 0.0.182-rc.1 and I tried upgrading to the latest, 0.0.192
But I'm still getting the error.
Please note: everything was working fine before, I have made no changes.
Did OpenAI change something?
Not sure what is going on here, but it looks like it's from the OpenAI side. If so, how do I fix it? Do I wait for a LangChain update?
Error:
```
/node_modules/openai/src/error.ts:66
    return new BadRequestError(status, error, message, headers);
           ^
Error: 400 '$.input' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.
    at Function.generate (/home/hedgehog/Europ/profiling-github/profiling/server/node_modules/openai/src/error.ts:66:14)
    at OpenAI.makeStatusError (/home/hedgehog/Europ/profiling-github/profiling/server/node_modules/openai/src/core.ts:358:21)
    at OpenAI.makeRequest (/home/hedgehog/Europ/profiling-github/profiling/server/node_modules/openai/src/core.ts:416:24)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at /home/hedgehog/Europ/profiling-github/profiling/server/node_modules/langchain/dist/embeddings/openai.cjs:223:29
    at RetryOperation._fn (/home/hedgehog/Europ/profiling-github/profiling/server/node_modules/p-retry/index.js:50:12)
```
@agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Not sure, it broke suddenly without any changes
### Expected behavior
It would initialize properly and execute all the onModuleLoad functions in the Nest js application with the embeddings ready for use. | Sudden failure to initialize | https://api.github.com/repos/langchain-ai/langchain/issues/13505/comments | 3 | 2023-11-17T07:42:15Z | 2024-02-23T16:05:57Z | https://github.com/langchain-ai/langchain/issues/13505 | 1,998,469,952 | 13,505 |
[
"hwchase17",
"langchain"
]
| ### System Info
Location: langchain/lllms/base.py
I think there's a hidden error in a 'generate' method. (line 554)
Inside of the second 'if' statement, it calls the index of callbacks that may cause the potential error.
`Callbacks` is an object of `CallbackManager`, which is not iterable, so it can not be called as `callbacks[0]`. (see the source code below)
Thanks for @Seuleeee
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
langchain/lllms/base.py
```
if (
isinstance(callbacks, list)
and callbacks
and (
isinstance(callbacks[0], (list, BaseCallbackManager))
or callbacks[0] is None
)
):
```
Error message
```
Traceback (most recent call last):
File "/tmp/ipykernel_713/4197562787.py", line 10, in test_rag
result=conv_chain.run(query)
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/conversational_retrieval/base.py", line 159, in _call
answer = self.combine_docs_chain.run(
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 510, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/usr/local/lib/python3.8/dist-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
...
File "/usr/local/lib/python3.8/dist-packages/langchain/llms/base.py", line 617, in generate
isinstance(callbacks[0], (list, BaseCallbackManager)
TypeError: 'CallbackManager' object is not subscriptable
```
### Expected behavior
Revise the code that has a potential error.
Tell me how can I contribute to fix this code. (test code ...etc) | Found Potential Bug! (langchain > llm > base.py) | https://api.github.com/repos/langchain-ai/langchain/issues/13504/comments | 4 | 2023-11-17T07:05:15Z | 2024-02-26T16:06:48Z | https://github.com/langchain-ai/langchain/issues/13504 | 1,998,415,070 | 13,504 |
[
"hwchase17",
"langchain"
]
| ### Feature request
can you add memory for RetrievalQA.from_chain_type(). I haven't seen any implementation of memory for this kind of RAG chain. It would be nice to have memory and ask questions in context.
### Motivation
I just can't get any memory to work with RetrievalQA.from_chain_type().
### Your contribution
Not right now... I don't have all required knowledge about LLMs | implement memory for RetrievalQA.from_chain_type() | https://api.github.com/repos/langchain-ai/langchain/issues/13503/comments | 3 | 2023-11-17T06:52:41Z | 2024-02-23T16:06:07Z | https://github.com/langchain-ai/langchain/issues/13503 | 1,998,400,060 | 13,503 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
**Description:**
The Redistext search feature in Redis Vector Store functions well with a small number of indexed documents (20-30). However, when the quantity of indexed documents exceeds 400-500, the performance degrades noticeably. Some keys are missed during the Redistext search, and Redis Similarity search retrieves incorrect keys.
**Steps to Reproduce:**
1. Store 400-500 documents in an index of the Redis vector store database.
2. Run a Redistext search and observe that it is unable to find some of the stored keys.
3. Run a Redis similarity search and notice that incorrect keys are retrieved for some keys that are already stored in the Redis vector store database (a minimal sketch of steps 1 and 3 is shown below).
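A minimal sketch of the indexing and similarity-search steps with LangChain's Redis vector store (the index name, Redis URL, and texts are assumptions of mine):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

# Index a few hundred documents, mirroring the dataset size where the problem shows up.
texts = [f"document number {i}" for i in range(500)]
rds = Redis.from_texts(
    texts,
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="repro_index",
)

# With the larger index, similarity search starts returning documents for the wrong keys.
print(rds.similarity_search("document number 42", k=3))
```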
**Additional Details:**
The issue is not present with smaller datasets (20-30 indexed documents).
All problematic keys are confirmed to be stored in the Redis Vector Store database with double-checked hash values.
**Expected Behavior:**
Redistext search and Redis Similarity search should provide accurate results even with larger datasets.
**Environment:**
- Official Redis stack docker image (redis/redis-stack-server:latest)
- Docker version 24.0.2
**Any guidance or solutions to improve search performance would be highly appreciated.**
Thank you.
### Suggestion:
**Optimize Redistext Search and Redis Similarity Search for Larger Datasets**
**Proposed Solution:**
Given the observed performance degradation with Redistext search and Redis Similarity search when handling larger datasets (400-500 indexed documents), I suggest reassessing the Redis index similarity search and Redistext search functions. The objective is to ensure accurate results even when the dataset is extensive.
| Why does Redis vector store misses some keys in redistext search. although key and related details exists in index.Issue | https://api.github.com/repos/langchain-ai/langchain/issues/13500/comments | 3 | 2023-11-17T05:05:26Z | 2024-02-23T16:06:12Z | https://github.com/langchain-ai/langchain/issues/13500 | 1,998,286,493 | 13,500 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I created a tool in an agent to output some data. The data contains ADMET properties and other properties. Although I have made it very clear that all properties should be kept in the tool function as well as in the output parser, I still cannot get the non-ADMET properties in the final output. Here is my code:
```python
def Func_Tool_XYZ(parameters):
    print("\n", parameters)
    print("触发Func_Tool_ADMET插件")
    print("........正在解析ADMET属性..........")
    data = {
        "SMILES": "C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1",
        "humanIntestinalAbsorption": "HIA+|0.73",
        "caco2Permeability": "None",
        "caco2PermeabilityIi": "Caco2+|0.70",
        "savePath": "/profile/chemicalAppsResult/admetResult/2023/11/10/2567",
        "status": "1",
        "favoriteFlag": "0"
    }
    prompt = f"Properties: {data['SMILES']} {data['humanIntestinalAbsorption']} {data['caco2Permeability']} {data['caco2PermeabilityIi']} {data['savePath']} {data['status']} {data['favoriteFlag']}"
    return {
        'output': data,
        'prompt': prompt
    }

tools = [
    Tool(
        name="Tool XYZ",
        func=Func_Tool_XYZ,
        description="""
        useful when you want to obtain the XYZ data for a molecule.
        like: get the XYZ data for molecule X
        The input to this tool should be a string, representing the smiles_id.
        """
    )
]

class CustomOutputParser(JSONAgentOutputParser):
    def parse(self, llm_output: str):
        print('jdlfjdlfjdlfjsdlfdjld')
        print(llm_output)
        smiles = self.data["SMILES"]
        return f"SMILES: {smiles}, {llm_output}"

output_parser = CustomOutputParser()
llm = OpenAI(temperature=0, max_tokens=2048)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
memory.clear()
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                         verbose=True,
                         memory=memory,
                         return_intermediate_steps=True,
                         output_parser=output_parser
                         )
pdf_id = 1111
Human_prompt = f'provide a document. PDF ID is {pdf_id}. This information helps you to understand this document, but you should use tools to finish following tasks, rather than directly imagine from my description. If you understand, say \"Received\".'
AI_prompt = "Received. "
memory.save_context({"input": Human_prompt}, {"output": AI_prompt})
answer = agent({"input": "Get the ADMET data for molecule X."})
print(answer["output"])
```
And here are my output messages:
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Tool XYZ
Action Input: molecule X
molecule X
触发Func_Tool_ADMET插件
........正在解析ADMET属性..........
Observation: {'output': {'SMILES': 'C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1', 'humanIntestinalAbsorption': 'HIA+|0.73', 'caco2Permeability': 'None', 'caco2PermeabilityIi': 'Caco2+|0.70', 'savePath': '/profile/chemicalAppsResult/admetResult/2023/11/10/2567', 'status': '1', 'favoriteFlag': '0'}, 'prompt': 'Properties: C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1 HIA+|0.73 None Caco2+|0.70 /profile/chemicalAppsResult/admetResult/2023/11/10/2567 1 0'}
Thought: Do I need to use a tool? No
AI: The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.
> Finished chain.
The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.
It seems like the non-ADMET properties have been lost.
It may be caused by the output parser not working, because if it worked, I should see something like "jdlfjdlfjdlfjsdlfdjld", which is printed in the output parser.
So what should I do next?
### Suggestion:
_No response_ | Output Parser not Work in an Agent Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13498/comments | 3 | 2023-11-17T03:53:58Z | 2024-02-23T16:06:17Z | https://github.com/langchain-ai/langchain/issues/13498 | 1,998,227,516 | 13,498 |
[
"hwchase17",
"langchain"
]
| ### System Info
When using fail, similarity_search_with_score function, the parameters filter and score_threshold together, the results will be problematic
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1.use similarity_search_with_score
2.parameter:filter and score_threshold
### Expected behavior
result is error | faiss中的错误 | https://api.github.com/repos/langchain-ai/langchain/issues/13497/comments | 3 | 2023-11-17T03:29:19Z | 2024-02-23T16:06:22Z | https://github.com/langchain-ai/langchain/issues/13497 | 1,998,208,034 | 13,497 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to create an ADMET properties prediction tool in an agent. But the result is not so good. Here is my code:
```python
def Func_Tool_XYZ(parameters):
    print("\n", parameters)
    print("触发Func_Tool_ADMET插件")
    print("........正在解析ADMET属性..........")
    data = {
        "SMILES": "C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1",
        "humanIntestinalAbsorption": "HIA+|0.73",
        "caco2Permeability": "None",
        "caco2PermeabilityIi": "Caco2+|0.70",
        "savePath": "/profile/chemicalAppsResult/admetResult/2023/11/10/2567",
        "status": "1",
        "favoriteFlag": "0"
    }
    prompt = f"Properties: {data['SMILES']} {data['humanIntestinalAbsorption']} {data['caco2Permeability']} {data['caco2PermeabilityIi']} {data['savePath']} {data['status']} {data['favoriteFlag']}"
    return {
        'output': data,
        'prompt': prompt
    }

tools = [
    Tool(
        name="Tool XYZ",
        func=Func_Tool_XYZ,
        description="""
        useful when you want to obtain the XYZ data for a molecule.
        like: get the XYZ data for molecule X
        The input to this tool should be a string, representing the smiles_id.
        """
    )
]

class CustomOutputParser(JSONAgentOutputParser):
    def parse(self, llm_output: str):
        print('jdlfjdlfjdlfjsdlfdjld')
        print(llm_output)
        return llm_output

output_parser = CustomOutputParser()
llm = OpenAI(temperature=0, max_tokens=2048)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
memory.clear()
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                         verbose=True,
                         memory=memory,
                         return_intermediate_steps=True,
                         output_parser=output_parser
                         )
pdf_id = 1111
Human_prompt = f'provide a document. PDF ID is {pdf_id}. This information helps you to understand this document, but you should use tools to finish following tasks, rather than directly imagine from my description. If you understand, say \"Received\".'
AI_prompt = "Received. "
memory.save_context({"input": Human_prompt}, {"output": AI_prompt})
answer = agent({"input": "Get the ADMET data for molecule X."})
print(answer["output"])
```
I got the output like this:
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Tool XYZ
Action Input: molecule X
molecule X
触发Func_Tool_ADMET插件
........正在解析ADMET属性..........
Observation: {'output': {'SMILES': 'C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1', 'humanIntestinalAbsorption': 'HIA+|0.73', 'caco2Permeability': 'None', 'caco2PermeabilityIi': 'Caco2+|0.70', 'savePath': '/profile/chemicalAppsResult/admetResult/2023/11/10/2567', 'status': '1', 'favoriteFlag': '0'}, 'prompt': 'Properties: C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1 HIA+|0.73 None Caco2+|0.70 /profile/chemicalAppsResult/admetResult/2023/11/10/2567 1 0'}
Thought: Do I need to use a tool? No
AI: The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.
> Finished chain.
The ADMET data for molecule X is as follows: Human Intestinal Absorption: HIA+|0.73, Caco2 Permeability: None, Caco2 Permeability II: Caco2+|0.70, Save Path: /profile/chemicalAppsResult/admetResult/2023/11/10/2567, Status: 1, Favorite Flag: 0.
As you can see, the "SMILES" property has been lost, although I have made it clear that the properties returned by the tool should be kept as they are.
So what happened? How should I revise the code to make it work?
### Suggestion:
_No response_ | Property Lost in an Agent Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13495/comments | 4 | 2023-11-17T03:09:10Z | 2024-02-23T16:06:27Z | https://github.com/langchain-ai/langchain/issues/13495 | 1,998,187,189 | 13,495 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The `make api_docs_build` command, which builds the API Reference documentation, is very slow. The command is defined in the `Makefile`.
### Suggestion:
_No response_ | very slow make command | https://api.github.com/repos/langchain-ai/langchain/issues/13494/comments | 7 | 2023-11-17T02:58:21Z | 2024-02-09T16:47:54Z | https://github.com/langchain-ai/langchain/issues/13494 | 1,998,176,960 | 13,494 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.320
MacOS 13.14.1
Python 3.9.18
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm using `DynamoDBChatMessageHistory` for storing chat messages. However, I do not use `Human` or `AI` as my chat prefixes. Using pre-defined methods `add_user_message` and `add_ai_message` won't work for me. I extended the `BaseMessage` class to create a new message type. I get this error when trying to read messages from history.
```
raise ValueError(f"Got unexpected message type: {_type}")
ValueError: Got unexpected message type: User
```
Here's my code:
```python
import uuid

from typing_extensions import Literal

from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
from langchain.schema.messages import BaseMessage

class UserMessage(BaseMessage):
    type: Literal["User"] = "User"

class BotMessage(BaseMessage):
    type: Literal["Bot"] = "Bot"

session_id = str(uuid.uuid4())
history = DynamoDBChatMessageHistory(
    table_name="tableName",
    session_id=session_id,
    primary_key_name='sessionId'
)

# history.add_user_message("hi!") # works fine
# history.add_ai_message("whats up?") # works fine
history.add_message(UserMessage(content="Hello, I'm the user!")) # saves the message to the table with the correct message type
history.add_message(BotMessage(content="Hello, I'm the bot!!")) # doesn't run due to the above-encountered error
print(history.messages) # throws a ValueError
```
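A possible workaround (my suggestion, not from the original report) is to use the built-in `ChatMessage` type, which carries an arbitrary role and serializes as type "chat", so it round-trips through the message (de)serialization the DynamoDB history uses:
```python
from langchain.schema.messages import ChatMessage

# ChatMessage serializes as type "chat", which messages_from_dict understands,
# so custom role names survive the round trip through DynamoDB.
history.add_message(ChatMessage(role="User", content="Hello, I'm the user!"))
history.add_message(ChatMessage(role="Bot", content="Hello, I'm the bot!"))
print(history.messages)
```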
### Expected behavior
The error should not be thrown so that other execution steps are completed. | Adding messages to history doesn't work with custom message types | https://api.github.com/repos/langchain-ai/langchain/issues/13493/comments | 4 | 2023-11-17T02:23:36Z | 2024-02-23T16:06:32Z | https://github.com/langchain-ai/langchain/issues/13493 | 1,998,146,316 | 13,493 |
[
"hwchase17",
"langchain"
]
| ### System Info
google-cloud-aiplatform = "^1.36.4"
langchain = "0.0.336"
python 3.11
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for VertexAI
__root__
Unknown model publishers/google/models/chat-bison-32k; {'gs://google-cloud-aiplatform/schema/predict/instance/text_generation_1.0.0.yaml': <class 'vertexai.preview.language_models._PreviewTextGenerationModel'>} (type=value_error)
```
```
(Pdb) model_id
'publishers/google/models/codechat-bison-32k'
(Pdb) _publisher_models._PublisherModel(resource_name=model_id)
<google.cloud.aiplatform._publisher_models._PublisherModel object at 0xffff752ba110>
resource name: publishers/google/models/codechat-bison-32k
```
Public Preview: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/chat-bison?hl=en
|name|released|status|
| -------------- | ------------ | -------------|
|chat-bison-32k | 2023-08-29 | Public Preview|
### Who can help?
@ey
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = VertexAI(
    model_name="chat-bison-32k",
    max_output_tokens=8192,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
    # streaming=True,
)
```
### Expected behavior
chat-bison-32k works
There may be some other models released recently that may also need similar update. | VertexAI chat-bison-32k support | https://api.github.com/repos/langchain-ai/langchain/issues/13478/comments | 3 | 2023-11-16T20:58:24Z | 2023-11-16T22:28:16Z | https://github.com/langchain-ai/langchain/issues/13478 | 1,997,770,078 | 13,478 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The MLflow AI Gateway class names should consistently use `MLflow` (uppercase ML, lowercase f), as the chat model already does:
`from langchain.chat_models import ChatMLflowAIGateway`
For embeddings and completions, current module names are:
```
from langchain.llms import MlflowAIGateway
from langchain.embeddings import MlflowAIGatewayEmbeddings
```
should be:
```
from langchain.llms import MLflowAIGateway
from langchain.embeddings import MLflowAIGatewayEmbeddings
```
### Suggestion:
_No response_ | Issue: minor naming discrepency of MLflowAIGateway | https://api.github.com/repos/langchain-ai/langchain/issues/13475/comments | 3 | 2023-11-16T19:00:55Z | 2024-02-22T16:06:03Z | https://github.com/langchain-ai/langchain/issues/13475 | 1,997,547,851 | 13,475 |
[
"hwchase17",
"langchain"
]
| ### System Info
databricks Machine Learning Runtime: 13.3
langchain==0.0.319
python == 3.9
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. create a route following [this instruction](https://mlflow.org/docs/latest/python_api/mlflow.gateway.html)
```
gateway.create_route(
name="chat",
route_type="llm/v1/chat",
model={
"name": "llama2-70b-chat",
"provider": "mosaicml",
"mosaicml_config": {
"mosaicml_api_key": <key>
}
}
)
```
3. use the template code from the documentation page [here](https://python.langchain.com/docs/integrations/providers/mlflow_ai_gateway#chat-example)
```
from langchain.chat_models import ChatMLflowAIGateway
from langchain.schema import HumanMessage, SystemMessage
chat = ChatMLflowAIGateway(
gateway_uri="databricks",
route="chat",
params={
"temperature": 0.1
}
)
messages = [
SystemMessage(
content="You are a helpful assistant that translates English to French."
),
HumanMessage(
content="Translate this sentence from English to French: I love programming."
),
]
print(chat(messages))
```
This complains that the parameter `max_tokens` is not provided. Similarly, if we update the model:
```
chat = ChatMLflowAIGateway(
gateway_uri="databricks",
route="chat",
params={
"temperature": 0.1,
"max_tokens": 200
}
)
```
it complains that the parameter `stop` is not provided.
However, both parameters are supposed to be optional based on MLflow's doc [here](https://mlflow.org/docs/latest/llms/gateway/index.html#chat).
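For completeness, the workaround implied above would presumably be to supply both supposedly-optional parameters explicitly (untested sketch of mine):
```python
chat = ChatMLflowAIGateway(
    gateway_uri="databricks",
    route="chat",
    params={
        "temperature": 0.1,
        "max_tokens": 200,
        "stop": [],
    },
)
```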
### Expected behavior
The example code below should execute successfully:
```
from langchain.chat_models import ChatMLflowAIGateway
from langchain.schema import HumanMessage, SystemMessage
chat = ChatMLflowAIGateway(
gateway_uri="databricks",
route="chat",
params={
"temperature": 0.1
}
)
messages = [
SystemMessage(
content="You are a helpful assistant that translates English to French."
),
HumanMessage(
content="Translate this sentence from English to French: I love programming."
),
]
print(chat(messages))
```
| inconsistency parameter requirements for chat_models.ChatMLflowAIGateway | https://api.github.com/repos/langchain-ai/langchain/issues/13474/comments | 3 | 2023-11-16T18:51:10Z | 2024-02-22T16:06:09Z | https://github.com/langchain-ai/langchain/issues/13474 | 1,997,529,011 | 13,474 |
[
"hwchase17",
"langchain"
]
| ### System Info
Version: langchain 0.0.336
Version: sqlalchemy 2.0.1
File "/Users/anonymous/code/anon/anon/utils/cache.py", line 40, in set_cache
from langchain.cache import SQLiteCache
File "/Users/anonymous/dotfiles/virtualenvs/aiproject/lib/python3.11/site-packages/langchain/cache.py", line 45, in <module>
from sqlalchemy import Column, Integer, Row, String, create_engine, select
ImportError: cannot import name 'Row' from 'sqlalchemy' (/Users/anonymous/dotfiles/virtualenvs/aiproject/lib/python3.11/site-packages/sqlalchemy/__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.cache import SQLiteCache
```
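A quick sanity check worth running first (an editor's suggestion, not part of the original report): this ImportError is typical of an older SQLAlchemy being picked up at runtime, so it may help to confirm which installation the failing interpreter actually imports:
```python
import sqlalchemy

print(sqlalchemy.__version__)  # should print 2.0.1 if the expected install is used
print(sqlalchemy.__file__)     # shows which site-packages the module is loaded from
```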
### Expected behavior
Doesn't throw exception | ImportError: cannot import name 'Row' from 'sqlalchemy' | https://api.github.com/repos/langchain-ai/langchain/issues/13464/comments | 6 | 2023-11-16T15:08:51Z | 2024-03-18T16:06:39Z | https://github.com/langchain-ai/langchain/issues/13464 | 1,997,084,178 | 13,464 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am facing an issue while testing my chatbot. Below I have attached a screenshot that shows the exact problem.

But when I hit the API again with a lowercase 'what' instead of 'What', it returns the correct response.

Could someone please help me identify the exact issue? thanks
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
{
"question": "what are the reactions associated with hydromorphone use in animals?"
}
### Expected behavior

| ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/13461/comments | 6 | 2023-11-16T13:42:31Z | 2024-02-22T16:06:13Z | https://github.com/langchain-ai/langchain/issues/13461 | 1,996,884,066 | 13,461 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I wrote the following code in Google Colab, but it is unable to fetch the data. Kindly help.
```python
from langchain.document_loaders.sitemap import SitemapLoader

sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()
print(docs)
```

### Suggestion:
_No response_ | Issue: Sitemap Loader not fetching | https://api.github.com/repos/langchain-ai/langchain/issues/13460/comments | 7 | 2023-11-16T12:56:45Z | 2024-05-03T06:29:38Z | https://github.com/langchain-ai/langchain/issues/13460 | 1,996,800,557 | 13,460 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
One more question about my code below.
This is my code:
```python
from langchain.document_loaders import PyPDFLoader
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

loader = PyPDFLoader(file_name)  # file_name is the path to the PDF, set elsewhere
documents = loader.load()

llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo',
                 callbacks=[StreamingStdOutCallbackHandler()], streaming=True)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=50,
)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
persist_directory = r"C:\Users\Asus\OneDrive\Documents\Vendolista"

knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
# save to disk
knowledge_base.persist()

# To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory=persist_directory, embedding_function=embeddings)

knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
knowledge_base.persist()

prompt_template = """
Text: {context}
Question: {question}
Answer :
"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)

conversation = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
    chain_type="stuff",
    verbose=False,
    combine_docs_chain_kwargs={"prompt": PROMPT},
)

def main():
    chat_history = []
    while True:
        query = input("Ask me anything about the files (type 'exit' to quit): ")
        if query.lower() in ["exit"] and len(query) == 4:
            end_chat = "Thank you for visiting us! Have a nice day"
            print_letter_by_letter(end_chat)  # helper that types the text out, defined elsewhere
            break
        if query != "":
            # with get_openai_callback() as cb:
            llm_response = conversation({"question": query})

if __name__ == "__main__":
    main()
```
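An observation from an editor, not part of the original message: with `ConversationalRetrievalChain`, the question-condensing LLM call also streams through `StreamingStdOutCallbackHandler`, which is a common reason the rephrased question appears in the terminal right before the answer (as in the transcript below). A hedged sketch of one way to avoid that, assuming `condense_question_llm` is accepted by this version of `from_llm`:
```python
# a separate, non-streaming model used only for rewriting the follow-up question
question_llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')

conversation = ConversationalRetrievalChain.from_llm(
    llm=llm,                          # streaming model used for the final answer
    condense_question_llm=question_llm,
    memory=memory,
    retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
    chain_type="stuff",
    verbose=False,
    combine_docs_chain_kwargs={"prompt": PROMPT},
)
```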
Below is an example of my terminal in vs code when I ask my AI model a question.
Ask me anything about the files (type 'exit' to quit): How do I delete a staff account
How can I delete a staff account?To delete a staff account, you need to have administrative privileges. As an admin, you have the ability to delete staff accounts when necessary.
Ask me anything about the files (type 'exit' to quit): | Output being rephrased | https://api.github.com/repos/langchain-ai/langchain/issues/13458/comments | 3 | 2023-11-16T12:34:29Z | 2024-02-22T16:06:23Z | https://github.com/langchain-ai/langchain/issues/13458 | 1,996,761,493 | 13,458 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
Below is my code. In the line `llm_response = conversation({"query": question})`, what would happen if I used `conversation.run`, `conversation.apply`, `conversation.batch`, or `conversation.invoke`? When should I use each of those?
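For reference, an editor's illustration of the four call styles being asked about (a hedged sketch; `questions` is a hypothetical list of strings, and the original script follows below):
```python
out = conversation({"query": question})           # __call__: dict in, dict out (result + source_documents)
out = conversation.invoke({"query": question})    # invoke: the newer equivalent of calling the chain
outs = conversation.apply([{"query": q} for q in questions])   # apply: run the chain over a list of inputs
outs = conversation.batch([{"query": q} for q in questions])   # batch: runnable-style list API

# conversation.run(question) expects exactly one output key, so it likely will not work here,
# because return_source_documents=True makes this chain return two keys (worth verifying).
```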
```python
import time
from dotenv import load_dotenv
from langchain.document_loaders import PyPDFLoader
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import RetrievalQA

load_dotenv()

file_name = "Admin 2.0.pdf"

def print_letter_by_letter(text):
    for char in text:
        print(char, end='', flush=True)
        time.sleep(0.02)

loader = PyPDFLoader(file_name)
documents = loader.load()

llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo',
                 callbacks=[StreamingStdOutCallbackHandler()], streaming=True)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=50,
)
chunks = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"

knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
# save to disk
knowledge_base.persist()

# To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory=persist_directory, embedding_function=embeddings)

knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory=persist_directory)
knowledge_base.persist()

prompt_template = """
You must only follow the instructions in list below:
1) You are a friendly and conversational assistant named RAAFYA.
2) Answer the questions based on the document or if the user asked something
3) Never mention the name of the file to anyone to prevent any potential security risk
Text: {context}
Question: {question}
Answer :
"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

memory = ConversationBufferMemory()

# conversation = ConversationalRetrievalChain.from_llm(
#     llm=llm,
#     retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
#     chain_type="stuff",
#     verbose=False,
#     combine_docs_chain_kwargs={"prompt": PROMPT}
# )

conversation = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=knowledge_base.as_retriever(search_type="similarity", search_kwargs={"k": 3}),
    chain_type="stuff",
    return_source_documents=True,
    verbose=False,
    chain_type_kwargs={"prompt": PROMPT,
                       "memory": memory}
)

def process_source(llm_response):
    print(llm_response['result'])
    print('\n\nSources:')
    for source in llm_response['source_documents']:
        print(source.metadata['source'])

def main():
    chat_history = []
    while True:
        question = input("Ask me anything about the files (type 'exit' to quit): ")
        if question.lower() in ["exit"] and len(question) == 4:
            end_chat = "Thank you for visiting us! Have a nice day"
            print_letter_by_letter(end_chat)
            break
        if question != "":
            # with get_openai_callback() as cb:
            llm_response = conversation({"query": question})
            process_source(llm_response)
            # chat_history.append((question, llm_response["result"]))
            # print(result["answer"])
            print()
            # print(cb)
            # print()

if __name__ == "__main__":
    main()
``` | Difference between run, apply, invoke, batch | https://api.github.com/repos/langchain-ai/langchain/issues/13457/comments | 7 | 2023-11-16T11:42:35Z | 2024-04-09T16:15:50Z | https://github.com/langchain-ai/langchain/issues/13457 | 1,996,672,590 | 13,457
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.336
OS: Windows
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using "SQLDatabase.from_uri" I am not able to access table information from private schema of PostgreSQL database. But using the same I am able to access table information from public schema of PostgreSQL DB. How can I access table information from all the schema's ? Please find the code below. Can someone help me?
```python
db = SQLDatabase.from_uri(
    f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}",
    sample_rows_in_table_info=5,
)
print(db.get_table_names())
```
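An editor's sketch of one thing to try, assuming the `schema` keyword is supported by `SQLDatabase` in this version (the schema name below is hypothetical). A single `SQLDatabase` instance reflects one schema, so covering several schemas would mean creating one instance per schema:
```python
db_private = SQLDatabase.from_uri(
    f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}",
    schema="my_private_schema",        # hypothetical name of the non-public schema
    sample_rows_in_table_info=5,
)
print(db_private.get_table_names())
```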
### Expected behavior
I expected it to give information about tables from all the schemas. | Langchain SQLDatabase is not able to access table information correctly from private schema of PostgreSQL database | https://api.github.com/repos/langchain-ai/langchain/issues/13455/comments | 3 | 2023-11-16T11:12:24Z | 2024-03-17T16:05:51Z | https://github.com/langchain-ai/langchain/issues/13455 | 1,996,620,397 | 13,455
[
"hwchase17",
"langchain"
]
| ### System Info
**System Information:**
- Python: 3.10.13
- Conda: 23.3.1
- Openai: 0.28.1
- LangChain: 0.0.330
- System: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
- Platform: AWS SageMaker Notebook
**Issue:**
- A few hours ago everything was working fine, but now OpenAI suddenly isn't generating JSON-formatted results
- I can't upgrade OpenAI or LangChain, because the upgrade triggers another ChatCompletion issue caused by OpenAI's recent update that LangChain hasn't resolved yet
- A screenshot of the prompt and the output is below
**Screenshot:**

### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Just ask OpenAI to generate JSON-formatted output:**
```python
prompt = """
You're analyzing an agent's calls performance with his customers.
Please generate an expressive agent analysis report by using the executive summary, metadata provided to you and by
comparing the agent emotions against the top agent's emotions. Don't make the report about numbers only.
Make it look like an expressive and qualitative summary but keep it to ten lines only. Also generate only
five training guidelines for the agent in concise bullet points.
The output should be in the following JSON format:
{
    "Agent Analysis Report" : "...",
    "Training Guidelines" : "..."
}
"""
```
### Expected behavior
**Output should be like this:**
```python
Output = {
    "Agent Performance Report": "Agent did not perform well....",
    "Training Guidelines": "Agent should work on empathy..."
}
``` | LangChain(OpenAI) not returning JSON format output | https://api.github.com/repos/langchain-ai/langchain/issues/13454/comments | 3 | 2023-11-16T10:42:38Z | 2024-02-18T13:34:12Z | https://github.com/langchain-ai/langchain/issues/13454 | 1,996,569,188 | 13,454
[
"hwchase17",
"langchain"
]
| ### Feature request
Can you allow the use of an existing OpenAI assistant, instead of creating a new one every time, when using the OpenAI assistant runnable?
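An editor's sketch of the requested usage; whether the existing `assistant_id` parameter already covers this is an assumption to verify against the current API, and the ID below is a placeholder:
```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# reuse an assistant that already exists in the OpenAI account instead of creating a new one
assistant = OpenAIAssistantRunnable(assistant_id="asst_existing123", as_agent=True)
```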
### Motivation
I don't want to clutter my assistant list with a bunch of clones.
### Your contribution
The developer should only need to provide the assistant ID in the constructor for the OpenAI assistant runnable. | OpenAI Assistant | https://api.github.com/repos/langchain-ai/langchain/issues/13453/comments | 10 | 2023-11-16T10:40:22Z | 2024-03-10T06:08:22Z | https://github.com/langchain-ai/langchain/issues/13453 | 1,996,565,178 | 13,453
[
"hwchase17",
"langchain"
]
| ### System Info
Error info:
<CT_SectPr '<w:sectPr>' at 0x2c9fa6c50> is not in list
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a new Word (.docx) file.
2. Write some content inside, insert a directory (table of contents) somewhere, and save the file.
3. Upload the .docx file.
4. Click the 'add file to Knowledge base' button.
5. Check the added file.
6. You will see "DocumentLoader, spliter is None" and the document count is 0.
### Expected behavior
The .docx file should load successfully. | Can't load docx file with directory in content. | https://api.github.com/repos/langchain-ai/langchain/issues/13452/comments | 4 | 2023-11-16T10:17:19Z | 2024-02-22T16:06:28Z | https://github.com/langchain-ai/langchain/issues/13452 | 1,996,521,784 | 13,452
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Friends, how can I return the retrieved texts (the context)? My script is as follows:
```python
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.schema import Document
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
pdf_path = '/home/data/pdf'
pdf_loader = PyPDFDirectoryLoader(pdf_path)
docs = pdf_loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=256,chunk_overlap=0)
split_docs = splitter.split_documents(docs)
docsearch = FAISS.from_documents(split_docs,OpenAIEmbeddings())
retriever = docsearch.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI(temperature=0.2)
# RAG pipeline
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
print(chain.invoke("How to maintain the seat belts in the Jingke model?"))
```
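An editor's sketch, not from the original script, that reuses the same pieces but keeps the retrieved documents alongside the answer:
```python
def rag_with_sources(question: str):
    docs = retriever.get_relevant_documents(question)   # the retrieved texts (context)
    answer = (prompt | model | StrOutputParser()).invoke(
        {"context": docs, "question": question}
    )
    return {"answer": answer, "context": docs}

result = rag_with_sources("How to maintain the seat belts in the Jingke model?")
print(result["context"])   # the retrieved Document objects
print(result["answer"])
```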
### Suggestion:
_No response_ | Issue: How to return retrieval texts? | https://api.github.com/repos/langchain-ai/langchain/issues/13446/comments | 5 | 2023-11-16T08:05:09Z | 2024-02-22T16:06:33Z | https://github.com/langchain-ai/langchain/issues/13446 | 1,996,290,256 | 13,446 |
[
"hwchase17",
"langchain"
]
| ### System Info
python
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain_experimental.agents.agent_toolkits import create_csv_agent
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType

model = ChatOpenAI(model="gpt-4")  # gpt-3.5-turbo, gpt-4

agent = create_csv_agent(model, "APT.csv", verbose=True)

agent.run("how many rows are there?")
```
### Expected behavior
```
UnicodeDecodeError Traceback (most recent call last)
[<ipython-input-25-0a5f8e8f5933>](https://localhost:8080/#) in <cell line: 11>()
      9 model = ChatOpenAI(model="gpt-4") # gpt-3.5-turbo, gpt-4
     10
---> 11 agent = create_csv_agent(model,"APT.csv",verbose=True)
     12
     13 agent.run("how many rows are there?")

11 frames
/usr/local/lib/python3.10/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb1 in position 0: invalid start byte
``` | create_csv_agent UnicodeDecodeError('utf-8' ) | https://api.github.com/repos/langchain-ai/langchain/issues/13444/comments | 4 | 2023-11-16T06:55:28Z | 2024-02-22T16:06:38Z | https://github.com/langchain-ai/langchain/issues/13444 | 1,996,188,135 | 13,444
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: latest (0.0.336)
### Who can help?
@hwchase17 (from git blame and from #13110)
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce (based on [code from docs](https://python.langchain.com/docs/modules/agents/agent_types/openai_tools))
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import BearlyInterpreterTool, DuckDuckGoSearchRun
from langchain.tools.render import format_tool_to_openai_tool
lc_tools = [DuckDuckGoSearchRun()]
oai_tools = [format_tool_to_openai_tool(tool) for tool in lc_tools]
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-1106",
streaming=True)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
}
| prompt
| llm.bind(tools=oai_tools)
| OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=lc_tools, verbose=True)
agent_executor.invoke(
{"input": "What's the average of the temperatures in LA, NYC, and SF today?"}
)
```
Logs:
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "./test-functions.py", line 42, in <module>
agent_executor.invoke(
File "./venv/lib/python3.11/site-packages/langchain/chains/base.py", line 87, in invoke
return self(
^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "./venv/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "./venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 461, in plan
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 1427, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 2787, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 142, in invoke
self.generate_prompt(
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 349, in generate
raise e
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 339, in generate
self._generate_with_cache(
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 422, in _generate
return _generate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 61, in _generate_from_stream
generation += chunk
File "./venv/lib/python3.11/site-packages/langchain/schema/output.py", line 94, in __add__
message=self.message + other.message,
~~~~~~~~~~~~~^~~~~~~~~~~~~~~
File "./venv/lib/python3.11/site-packages/langchain/schema/messages.py", line 225, in __add__
additional_kwargs=self._merge_kwargs_dict(
^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/langchain/schema/messages.py", line 138, in _merge_kwargs_dict
raise ValueError(
ValueError: Additional kwargs key tool_calls already exists in this message.
```
`left` and `right` from inside`_merge_kwargs_dict`:
```python
left = {'tool_calls': [{'index': 0, 'id': 'call_xhpbRSsUKkzsvtFgnkTXEFtAtHc', 'function': {'arguments': '', 'name': 'duckduckgo_search'}, 'type': 'function'}]}
right = {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': '{"qu', 'name': None}, 'type': None}]}
```
### Expected behavior
No errors and same result as without `streaming=True`. | openai tools don't work with streaming=True | https://api.github.com/repos/langchain-ai/langchain/issues/13442/comments | 6 | 2023-11-16T05:02:17Z | 2023-12-16T07:55:17Z | https://github.com/langchain-ai/langchain/issues/13442 | 1,996,065,154 | 13,442 |
[
"hwchase17",
"langchain"
]
| ### System Info
This was working fine with my previous configuration:
langchain v0.0.225
chromadb v0.4.7
But now neither that combination nor the latest version of both works:
langchain v0.0.336
chromadb v0.4.17
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have the packages installed
Running these pieces of code
```
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
loader = TextLoader(file_path)
index = VectorstoreIndexCreator().from_loaders([loader]) # this is where I am getting the error
```
OR
```
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
text_splitter = RecursiveCharacterTextSplitter()
splits = text_splitter.split_documents(docs)
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents( # this is where I am getting the error
documents=splits,
embedding=embedding,
)
```
Here is the error
```
Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
```
### Expected behavior
Earlier a chromadb instance would be created, and I would be able to query it with my prompts. That is the expected behaviour. | langchain.vectorstores.Chroma support for EmbeddingFunction.__call__ update of ChromaDB | https://api.github.com/repos/langchain-ai/langchain/issues/13441/comments | 6 | 2023-11-16T04:32:31Z | 2024-03-18T16:06:34Z | https://github.com/langchain-ai/langchain/issues/13441 | 1,996,037,409 | 13,441 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi Guys,
Here I made a new fork to [https://github.com/MetaSLAM/CyberChain](https://github.com/MetaSLAM/CyberChain), where I would like to combine the powerful Langchain ability and GPT4 into the real-world robotic challenges.
My question raised here is mainly about:
1. How can I setup the Chat GPT 4 version into Langchain, where I would like to levevarge the powerful visual inference of GPT4;
2. I there any suggestion for the memory system, because the robot may travel through a large-scale environment for lifelong navigation, I would like to construct a memory system within langchain to enhance its behaviour in the long-term operation.
Many thanks for your hard work, and Langchain is difinitely an impressive work.
Max
### Motivation
Combine real-world robotic application with the LangChain framework.
### Your contribution
I will provide my research outcomes under our MetaSLAM organization (https://github.com/MetaSLAM), hope this can benefit both robotics and AI communitry, targeting for the general AI system. | Request new feature for Robotic Application | https://api.github.com/repos/langchain-ai/langchain/issues/13440/comments | 5 | 2023-11-16T04:11:33Z | 2024-04-18T16:35:43Z | https://github.com/langchain-ai/langchain/issues/13440 | 1,996,017,598 | 13,440 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.336
mac
python3.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
import langchain
from langchain.chat_models.minimax import MiniMaxChat
print(langchain.__version__)
chat = MiniMaxChat(minimax_api_host="test_host", minimax_api_key="test_api_key", minimax_group_id="test_group_id")
assert chat._client
assert chat._client.host == "test_host"
assert chat._client.group_id == "test_group_id"
assert chat._client.api_key == "test_api_key"
```
output:
```sh
0.0.336
Traceback (most recent call last):
File "/Users/hulk/code/py/workhome/test_pydantic/test_langchain.py", line 4, in <module>
chat = MiniMaxChat(minimax_api_host="test_host", minimax_api_key="test_api_key", minimax_group_id="test_group_id")
File "/Users/hulk/miniforge3/envs/py38/lib/python3.8/site-packages/langchain/llms/minimax.py", line 121, in __init__
self._client = _MinimaxEndpointClient(
File "pydantic/main.py", line 357, in pydantic.main.BaseModel.__setattr__
ValueError: "MiniMaxChat" object has no field "_client"
```
### Expected behavior
Instantiation succeeds. | model init ValueError | https://api.github.com/repos/langchain-ai/langchain/issues/13438/comments | 3 | 2023-11-16T03:23:29Z | 2024-02-21T09:50:32Z | https://github.com/langchain-ai/langchain/issues/13438 | 1,995,978,394 | 13,438
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.266
### Who can help?
@hwchase17
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Behavior
When I call the `vector_store.similarity_search_with_score` function:
- Expected: the returned scores are proportional to the similarity, i.e. the higher the score, the higher the similarity.
- Actual: the scores are proportional to the distance.
### Problem
- When I call the `as_retriever` function with a `score_threshold`, the behavior is wrong. When `score_threshold` is declared, it keeps only documents whose score is greater than or equal to the threshold, so the top documents found by pgvector get filtered out even though they are in fact the most similar (see the sketch below).
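An editor's sketch of the two score conventions involved; `vector_store` and `query` are hypothetical placeholders, and whether `similarity_search_with_relevance_scores` and the `similarity_score_threshold` search type already give the desired higher-is-better semantics in this version is an assumption to verify:
```python
# distance-style scores (current PGVector behavior): lower usually means closer
docs_and_distances = vector_store.similarity_search_with_score(query, k=4)

# normalized relevance scores: higher should mean more similar
docs_and_relevance = vector_store.similarity_search_with_relevance_scores(query, k=4)

retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8, "k": 4},
)
```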
### Expected behavior
The returned scores from PGVector queries are proportional to the similarity.
In other words, the higher the score, the higher the similarity. | [PGVector] The scores returned by 'similarity_search_with_score' are NOT proportional to the similarity | https://api.github.com/repos/langchain-ai/langchain/issues/13437/comments | 5 | 2023-11-16T03:10:54Z | 2024-02-22T16:06:43Z | https://github.com/langchain-ai/langchain/issues/13437 | 1,995,965,954 | 13,437
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I'm trying to make a chatbot using conversationchain. The prompt takes three variables, “history” from memory, user input: "input", and another variable say variable3. But it has an error “Got unexpected prompt input variables. The prompt expects ['history', 'input', 'variable3'], but got ['history'] as inputs from memory, and input as the normal input key. ”
This error doesn't occur if I'm using llmchain. So how can I prevent it if I want to use conversationchain?
Thanks
### Suggestion:
_No response_ | Issue: problem with conversationchain take multiple inputs | https://api.github.com/repos/langchain-ai/langchain/issues/13433/comments | 4 | 2023-11-16T00:54:37Z | 2024-07-19T13:57:00Z | https://github.com/langchain-ai/langchain/issues/13433 | 1,995,838,321 | 13,433 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Although there are many output parsers in LangChain, there is currently no clear way to customize the output of an individual tool inside an agent chain, yet this can sometimes be necessary. Say an agent chain x has the tools Tool1, Tool2 and Tool3, and:
- the output of Tool2 should be customized and should not be processed by GPT again,
- the outputs of Tool1 and Tool3 should stay normal and be processed by GPT again.
In this case no solution could be found, because the output parser is attached to the agent rather than to any specific tool.
Should this feature be supported?
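An editor's note, not part of the original request: LangChain tools already expose a `return_direct` flag, which may cover the "skip the final LLM pass" part. A hedged sketch, where the tool functions and the formatter are hypothetical:
```python
from langchain.agents import Tool

tools = [
    Tool(name="Tool1", func=tool1_fn, description="normal tool, output goes back to the LLM"),
    Tool(
        name="Tool2",
        func=lambda q: my_custom_formatter(tool2_fn(q)),  # customize the output here
        description="custom-output tool",
        return_direct=True,   # hand this result straight back to the caller, no further LLM step
    ),
    Tool(name="Tool3", func=tool3_fn, description="normal tool, output goes back to the LLM"),
]
```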
### Motivation
When there are multiple tools in an agent chain and some of them need customized output.
### Your contribution
no | Custom Tool Output in a Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13432/comments | 2 | 2023-11-16T00:46:31Z | 2024-02-22T16:06:48Z | https://github.com/langchain-ai/langchain/issues/13432 | 1,995,831,335 | 13,432 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
langchain==0.0.335
python==3.10
```
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
From the LCEL interface [docs](https://python.langchain.com/docs/expression_language/interface) we have the following snippet:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model
for s in chain.stream({"topic": "bears"}):
print(s.content, end="", flush=True)
```
From the token usage tracking [docs](https://python.langchain.com/docs/modules/model_io/chat/token_usage_tracking) we have the snippet
```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4")
with get_openai_callback() as cb:
result = llm.invoke("Tell me a joke")
print(cb)
```
that yields the following output:
```
Tokens Used: 24
Prompt Tokens: 11
Completion Tokens: 13
Successful Requests: 1
Total Cost (USD): $0.0011099999999999999
```
I am trying to combine the two concepts in the following snippet
```python
from langchain.prompts import ChatPromptTemplate
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | llm
with get_openai_callback() as cb:
for s in chain.stream({"topic": "bears"}):
print(s.content, end="", flush=True)
print(cb)
```
but get the following result:
```
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0
```
Is token counting (and pricing) while streaming not supported at the moment?
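An editor's workaround sketch, not from the original report: if the API does not report usage while streaming, an approximate client-side count can be kept with tiktoken (assuming it is installed and that `encoding_for_model` knows the model name):
```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
completion_tokens = 0
for s in chain.stream({"topic": "bears"}):
    completion_tokens += len(enc.encode(s.content))
    print(s.content, end="", flush=True)
print(f"\napprox. completion tokens: {completion_tokens}")
```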
### Expected behavior
The following but with the actual values for tokens and cost.
```
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0
``` | `get_openai_callback()` does not count tokens when LCEL chain used with `.stream()` method | https://api.github.com/repos/langchain-ai/langchain/issues/13430/comments | 13 | 2023-11-16T00:05:51Z | 2024-07-24T13:34:05Z | https://github.com/langchain-ai/langchain/issues/13430 | 1,995,786,367 | 13,430 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.335
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When the status code is not 200, [TextGen](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/textgen.py#L223) only prints the code and returns an empty string.
This is a problem if I want to use TextGen with `with_retry()` and retry non-200 responses such as 5xx.
- If I edit the textgen.py source code myself and raise an Exception [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/textgen.py#L223), then my `with_retry()` works as desired.
I tried to handle this by raising an empty string error in my Output Parser, but the TextGen `with_retry()` is not triggering.
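For reference, an editor's sketch of the kind of change being described; the surrounding code in textgen.py is assumed rather than quoted:
```python
# inside the TextGen call that performs the HTTP request (context assumed)
if response.status_code != 200:
    response.raise_for_status()   # raises requests.HTTPError, which with_retry() can then retry
```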
### Expected behavior
I'd like TextGen to raise an Exception when the status code is not 200.
Perhaps consider using `HTTPError` from the `requests` package. | TextGen is not raising Exception when response status code is not 200 | https://api.github.com/repos/langchain-ai/langchain/issues/13416/comments | 4 | 2023-11-15T21:01:18Z | 2024-06-01T00:07:34Z | https://github.com/langchain-ai/langchain/issues/13416 | 1,995,562,229 | 13,416
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi,
I have been learning LangChain for the last month and I have been struggling in the last week to "_guarantee_" `ConversationalRetrievalChain` only answers based on the knowledge added on embeddings. I don't know if I am missing some LangChain configuration or if it is just a matter of tuning my prompt. I will add my code here (simplified, not the actual one, but I will try to preserve everything important).
```
chat = AzureChatOpenAI(
deployment_name="chat",
model_name="gpt-3.5-turbo",
openai_api_version=os.getenv('OPENAI_API_VERSION'),
openai_api_key=os.getenv('OPENAI_API_KEY'),
openai_api_base=os.getenv('OPENAI_API_BASE'),
openai_api_type="azure",
temperature=0
)
embeddings = OpenAIEmbeddings(deployment_id="text-embedding-ada-002", chunk_size=1)
acs = AzureSearch(azure_search_endpoint=os.getenv('AZURE_COGNITIVE_SEARCH_SERVICE_NAME'),
azure_search_key=os.getenv('AZURE_COGNITIVE_SEARCH_API_KEY'),
index_name=os.getenv('AZURE_COGNITIVE_SEARCH_INDEX_NAME'),
embedding_function=embeddings.embed_query)
custom_template = """You work for CompanyX which sells things located in United States.
If you don't know the answer, just say that you don't. Don't try to make up an answer.
Base your questions only on the knowledge provided here. Do not use any outside knowledge.
Given the following chat history and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:
"""
CUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(custom_template)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="question", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
llm=chat,
retriever=acs.as_retriever(),
condense_question_prompt=CUSTOM_QUESTION_PROMPT,
memory=memory
)
```
When I ask it something like `qa({"question": "What is an elephant?"})` it still answers it, although it is totally unrelated to the knowledge base added to the AzureSearch via embeddings.
I tried different `condense_question_prompt`, with different results, but nothing near _good_. I've been reading the documentation and API for the last 3 weeks, but nothing else seems to help in this case. I'd appreciate any suggestions.
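One thing worth checking (an editor's note, not from the original post): in `ConversationalRetrievalChain.from_llm`, `condense_question_prompt` only rewrites the follow-up question; the prompt that actually answers is supplied through `combine_docs_chain_kwargs`. A hedged sketch, assuming that keyword is available in this version:
```python
from langchain.prompts import PromptTemplate

qa_template = """You work for CompanyX. Answer only from the context below.
If the answer is not in the context, say you don't know. Do not use outside knowledge.

{context}

Question: {question}
Answer:"""

qa = ConversationalRetrievalChain.from_llm(
    llm=chat,
    retriever=acs.as_retriever(),
    condense_question_prompt=CUSTOM_QUESTION_PROMPT,
    combine_docs_chain_kwargs={"prompt": PromptTemplate.from_template(qa_template)},
    memory=memory,
)
```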
### Suggestion:
_No response_ | Issue: Making sure `ConversationalRetrievalChain` only answer based on the retriever information | https://api.github.com/repos/langchain-ai/langchain/issues/13414/comments | 9 | 2023-11-15T19:19:26Z | 2024-05-30T06:05:59Z | https://github.com/langchain-ai/langchain/issues/13414 | 1,995,385,588 | 13,414 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add IDE auto-complete to `langchain.llm` module
Currently, IntelliJ-based IDEs (PyCharm) interpret an LLM model to be `typing.Any`. Below is an example for the `GPT4All` package

### Motivation
Not having auto-complete on a `LLM` class can be a bit frustrating as a Python developer who works on an IntelliJ product. I've been performing a more direct import of the LLM models to handle this instead:
```python
from langchain.llms.gpt4all import GPT4All
```
Adding support for the `langchain.llms` API would improve the developer experience with the top level `langchain.llms` API
### Your contribution
The existing implementation uses a lazy-loading technique implemented in #11237 to speed up imports. Maintaining this performance is important for whatever solution is implemented. I believe this can be achieved with some imports behind an `if TYPE_CHECKING` block. If the below `Proposed Implementation` is acceptable I'd be happy to open a PR to add this functionality.
<details><summary>Current Implementation</summary>
<p>
`langchain.llms.__init__.py` (abbreviated)
```python
from typing import Any, Callable, Dict, Type
from langchain.llms.base import BaseLLM
def _import_anthropic() -> Any:
from langchain.llms.anthropic import Anthropic
return Anthropic
def _import_gpt4all() -> Any:
from langchain.llms.gpt4all import GPT4All
return GPT4All
def __getattr__(name: str) -> Any:
if name == "Anthropic":
return _import_anthropic()
elif name == "GPT4All":
return _import_gpt4all()
else:
raise AttributeError(f"Could not find: {name}")
__all__ = [
"Anthropic",
"GPT4All",
]
```
</p>
</details>
<details><summary>Proposed Implementation</summary>
<p>
`langchain.llms.__init__.py` (abbreviated)
```python
from typing import Any, Callable, Dict, Type, TYPE_CHECKING
from langchain.llms.base import BaseLLM
if TYPE_CHECKING:
from langchain.llms.anthropic import Anthropic
from langchain.llms.gpt4all import GPT4All
def _import_anthropic() -> "Anthropic":
from langchain.llms.anthropic import Anthropic
return Anthropic
def _import_gpt4all() -> "GPT4All":
from langchain.llms.gpt4all import GPT4All
return GPT4All
def __getattr__(name: str) -> "BaseLLM":
if name == "Anthropic":
return _import_anthropic()
elif name == "GPT4All":
return _import_gpt4all()
else:
raise AttributeError(f"Could not find: {name}")
__all__ = [
"Anthropic",
"GPT4All",
]
```
</p>
</details>
<details><summary>IntelliJ Screenshot</summary>
<p>
Here's a screenshot after implementing the above `Proposed Implementation`
<img width="588" alt="image" src="https://github.com/langchain-ai/langchain/assets/49741340/1647f3b5-238e-4137-8fa4-f30a2234fdb0">
</p>
</details> | IDE Support - Python Package - langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/13411/comments | 1 | 2023-11-15T18:14:01Z | 2024-02-21T16:06:19Z | https://github.com/langchain-ai/langchain/issues/13411 | 1,995,289,924 | 13,411 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = 0.0.335
openai = 1.2.4
python = 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Modified example code (https://python.langchain.com/docs/integrations/llms/azure_openai) from langchain to access AzureOpenAI inferencing endpoint
```
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(
deployment_name="td2",
model_name="text-davinci-002",
)
# Run the LLM
llm("Tell me a joke")
```
I get the following error:
TypeError: Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') arguments to be given
If I modify the last line as follows:
`llm("Tell me a joke", model="text-davinci-002") `
i get a different error:
Completions.create() got an unexpected keyword argument 'engine'
It appears to be passing all keyword arguments to the create method, the first of which is 'engine'; that and other kwargs appear to be added by the LangChain code.
### Expected behavior
I expect the model to return a response, such as is shown in the example. | Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') | https://api.github.com/repos/langchain-ai/langchain/issues/13410/comments | 14 | 2023-11-15T18:00:46Z | 2024-05-10T16:08:15Z | https://github.com/langchain-ai/langchain/issues/13410 | 1,995,270,190 | 13,410 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.327
Name: chromadb
Version: 0.4.8
Summary: Chroma.
### Who can help?
@hwchase17 I believe chromadb doesn't close the file handle during persistence, which makes it difficult to use on cloud services like Modal Labs. What about adding a close method or similar to make sure this doesn't happen?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using ModalLabs with a simple example:
```
@stub.function(volumes={CHROMA_DIR: stub.volume})
def test_chroma():
import chromadb
from importlib.metadata import version
print("ChromaDB: %s" % version('chromadb'))
# Initialize ChromaDB client
client = chromadb.PersistentClient(path=SENTENCE_DIR.as_posix())
# Create the collection
neo_collection = client.create_collection(name="neo")
# Adding raw documents
neo_collection.add(
documents=["I know kung fu.", "There is no spoon."], ids=["quote_1", "quote_2"]
)
# Counting items in a collection
item_count = neo_collection.count()
print(f"Count of items in collection: {item_count}")
stub.volume.commit()
```
Error:
`grpclib.exceptions.GRPCError: (<Status.FAILED_PRECONDITION: 9>, 'there are open files preventing the operation', None)
`
### Expected behavior
There shouldn't be any open file handles.
| Persistent client open file handles | https://api.github.com/repos/langchain-ai/langchain/issues/13409/comments | 3 | 2023-11-15T17:59:14Z | 2024-02-21T16:06:24Z | https://github.com/langchain-ai/langchain/issues/13409 | 1,995,268,049 | 13,409 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL routing can cause chains to be silently run synchronously even though the user calls ainvoke.
When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well;
however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
| RunnableLambda: returned runnable called synchronously when using ainvoke | https://api.github.com/repos/langchain-ai/langchain/issues/13407/comments | 2 | 2023-11-15T17:27:49Z | 2023-11-28T11:18:27Z | https://github.com/langchain-ai/langchain/issues/13407 | 1,995,223,902 | 13,407 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi there,
I am doing research on creating a PDF reader AI which can answer users' questions based on the uploaded PDF and the prompt the user entered. I got this far using the OpenAI package, but now I want to make it more advanced by using ChatOpenAI with the LangChain schema package (SystemMessage, HumanMessage, and AIMessage). I am a bit lost on where I should start and what adjustments to make. Could you help me with that?
Below is my code so far:
```python
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
from streamlit_extras.add_vertical_space import add_vertical_space
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)

os.environ['OPENAI_API_KEY'] = apikey

## User Interface
# Side Bar
with st.sidebar:
    st.title('🚀 Zi-GPT Version 2.0')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - [Streamlit](https://streamlit.io/)
    - [LangChain](https://python.langchain.com/)
    - [OpenAI](https://platform.openai.com/docs/models) LLM model
    ''')
    add_vertical_space(5)
    st.write('Made with ❤️ by Zi')

# Main Page
def main():
    st.header("Zi's PDF Helper: Chat with PDF")

    # upload a PDF file
    pdf = st.file_uploader("Please upload your PDF here", type='pdf')
    # st.write(pdf)

    # read PDF
    if pdf is not None:
        pdf_reader = PdfReader(pdf)

        # split document into chunks
        # also can use text split: good for PDFs that do not contain charts and visuals
        sections = []
        for page in pdf_reader.pages:
            # Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
            page_sections = page.extract_text().split('\n\n')
            sections.extend(page_sections)

        chunks = sections
        # st.write(chunks)

        # embeddings
        file_name = pdf.name[:-4]

        # convert into pickle file
        # wb: open in binary mode
        # rb: read the file
        # Note: only create new vectors for new files updated
        if os.path.exists(f"{file_name}.pkl"):
            with open(f"{file_name}.pkl", "rb") as f:
                VectorStore = pickle.load(f)
            st.write('Embeddings Loaded from the Disk')
        else:
            embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
            VectorStore = FAISS.from_texts(chunks, embedding=embeddings)
            with open(f"{file_name}.pkl", "wb") as f:
                pickle.dump(VectorStore, f)
            st.write('Embeddings Computation Completed')

        # Create chat history
        if pdf:
            # generate chat history
            chat_history_file = f"{pdf.name}_chat_history.pkl"

            # load history if exist
            if os.path.exists(chat_history_file):
                with open(chat_history_file, "rb") as f:
                    chat_history = pickle.load(f)
            else:
                chat_history = []

        # Initialize chat_history in session_state if not present
        if 'chat_history' not in st.session_state:
            st.session_state.chat_history = []

        # Check if 'prompt' is in session state
        if 'last_input' not in st.session_state:
            st.session_state.last_input = ''

        # User Input
        current_prompt = st.session_state.get('user_input', '')
        prompt_placeholder = st.empty()
        prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
        submit_button = st.button("Submit")

        if submit_button and prompt:
            # Update the last input in session state
            st.session_state.last_input = prompt

            docs = VectorStore.similarity_search(query=prompt, k=3)

            # llm = OpenAI(temperature=0.9, model_name='gpt-3.5-turbo')
            chat = ChatOpenAI(model='gpt-3.5-turbo', temperature=0.7)
            chain = load_qa_chain(llm=chat, chain_type="stuff")

            with get_openai_callback() as cb:
                response = chain.run(input_documents=docs, question=prompt)
                print(cb)
            # st.write(response)
            # st.write(docs)

            # Add to chat history
            st.session_state.chat_history.append((prompt, response))

            # Save chat history
            with open(chat_history_file, "wb") as f:
                pickle.dump(st.session_state.chat_history, f)

            # Clear the input after processing
            prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")

        # Display the entire chat
        chat_content = ""
        for user_msg, bot_resp in st.session_state.chat_history:
            chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
            chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"

        st.markdown(chat_content, unsafe_allow_html=True)

if __name__ == '__main__':
    main()
```
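An editor's sketch of the kind of change being asked about, not a definitive answer: `load_qa_chain` could be replaced by calling `ChatOpenAI` directly with schema messages. The system text is a placeholder, and `docs` and `prompt` are the variables from the script above:
```python
context = "\n\n".join(d.page_content for d in docs)

messages = [
    SystemMessage(content="You are Zi-GPT, a helpful assistant that answers questions about the uploaded PDF."),
    HumanMessage(content=f"Context:\n{context}\n\nQuestion: {prompt}"),
]

chat = ChatOpenAI(model='gpt-3.5-turbo', temperature=0.7)
response = chat(messages).content   # returns an AIMessage; .content is the reply text
```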
### Suggestion:
_No response_ | Issue: Need Help - Implement ChatOpenAI into my LangChain Research | https://api.github.com/repos/langchain-ai/langchain/issues/13406/comments | 3 | 2023-11-15T17:12:08Z | 2023-11-28T21:44:27Z | https://github.com/langchain-ai/langchain/issues/13406 | 1,995,196,688 | 13,406 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently no support for multi-modal embeddings from VertexAI exists. However, I did stumble upon this experimental implementation of [GoogleVertexAIMultimodalEmbeddings](https://js.langchain.com/docs/modules/data_connection/experimental/multimodal_embeddings/google_vertex_ai) in LangChain for Javascript. Hence, I think this would also be a very nice feature to implement in the Python version of LangChain.
### Motivation
Using multi-modal embeddings could positively affect applications that rely on information of different modalities. One example could be product search in a web catalogue. Since more cloud providers are making [endpoints for multi-modal embeddings](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-multimodal-embeddings) available, it makes sense to incorporate these into LangChain as well. The embeddings of these endpoints could be stored in vector stores and hence be used in downstream applications that are built using LangChain.
### Your contribution
I can contribute to this feature. | Add support for multimodal embeddings from Google Vertex AI | https://api.github.com/repos/langchain-ai/langchain/issues/13400/comments | 4 | 2023-11-15T15:02:35Z | 2024-02-23T16:06:37Z | https://github.com/langchain-ai/langchain/issues/13400 | 1,994,956,885 | 13,400 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Elastic natively supports having multiple dense vectors in a document, which can be selected at query/search time. The LangChain vector search interface allows passing additional keyword args, but at the moment the `_search` implementation for Elasticsearch does not consider a `vector_query_field` passed through the kwargs. Furthermore, there should be a way to let a document have multiple text fields that get passed as a queryable vector, not just the standard text field.
### Motivation
If you have multiple vector fields in one index, this feature would simplify querying the right one, as Elastic natively allows. In the current implementation one would need to add additional vector fields to the metadata and change the `vector_query_field` of the whole ElasticsearchStore class every time before calling search. This is not clean, and I would vote for a more generic solution.
### Your contribution
I could help by implementing this issue, although I need to state that I am not an expert in Elastic. I saw this issue when we tried to use an existing index in Elastic to add and retrieve Documents within the Langchain Framework. | ElasticSearch allow for multiple vector_query_fields than default text & make it a kwarg in search functions | https://api.github.com/repos/langchain-ai/langchain/issues/13398/comments | 1 | 2023-11-15T14:42:10Z | 2024-03-13T19:57:51Z | https://github.com/langchain-ai/langchain/issues/13398 | 1,994,918,113 | 13,398 |
[
"hwchase17",
"langchain"
]
| ### System Info
Not relevant.
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use any LLM relying on stop words with special regex characters.
For example, instantiate a `HuggingFacePipeline` LLM with the [openchat](https://huggingface.co/openchat/openchat_3.5) model. This model uses the `<|end_of_turn|>` stop word.
Since the `llms.utils.enforce_stop_tokens()` function doesn't escape the provided stop word strings, the `|` characters are interpreted as part of the regex instead of as literal text. So in this case any single `<` character in the output would trigger the split.
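An editor's sketch of the proposed fix; the current body of the function is assumed to be a plain `re.split` over the joined stop list:
```python
import re
from typing import List

def enforce_stop_tokens(text: str, stop: List[str]) -> str:
    # escape each stop word so regex metacharacters like | are treated literally
    return re.split("|".join(re.escape(s) for s in stop), text)[0]
```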
### Expected behavior
Stop words should be escaped with `re.escape()` so the split only happens on the complete words. | Missing escape in `llms.utils.enforce_stop_tokens()` | https://api.github.com/repos/langchain-ai/langchain/issues/13397/comments | 3 | 2023-11-15T14:36:39Z | 2023-11-17T22:09:17Z | https://github.com/langchain-ai/langchain/issues/13397 | 1,994,907,050 | 13,397 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Make `Message` and/or `Memory` to support `Timestamp`
### Motivation
To provide context and clarity regarding the timing of conversation. This can be helpful for reference and coordination, especially when discussing time-sensitive topics.
I noticed that [one agent in opengpts](https://opengpts-example-vz4y4ooboq-uc.a.run.app/) has supported this feature.
### Your contribution
I have not made a clear outline of adding the `Timestamps` feature.
The following is some possible ways to support it for discussion:
Proposal 1: Add `Timestamps` to `Message Schema`. This way every `Memory Entity` should support `Timestamps`.
Proposal 2: Create a new `TimestampedMemory`. This way has a better backward compatibility. | [Enhancement] Timestamp supported Message and/or Memory | https://api.github.com/repos/langchain-ai/langchain/issues/13393/comments | 2 | 2023-11-15T13:21:24Z | 2023-11-15T13:40:39Z | https://github.com/langchain-ai/langchain/issues/13393 | 1,994,769,118 | 13,393 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows 11, Langchain 0.0327, Python 3.10.
The Docx2txtLoader does not work for web paths, as a PermissionError occurs when self.temp_file attempts to write content to the temporary file:
```python
if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):
    r = requests.get(self.file_path)
    if r.status_code != 200:
        raise ValueError(
            "Check the url of your file; returned status code %s"
            % r.status_code
        )
    self.web_path = self.file_path
    self.temp_file = tempfile.NamedTemporaryFile()
    self.temp_file.write(r.content)  # <-- the file is already open and will be deleted on close
    self.file_path = self.temp_file.name
elif not os.path.isfile(self.file_path):
    raise ValueError("File path %s is not a valid file or url" % self.file_path)
```
It produces a Permission Error: _[Errno 13] Permission denied:_ as the file is already open, and will be deleted on close.
I can work around this by replacing this section of the code with the following:
```python
if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):
    self.temp_dir = tempfile.TemporaryDirectory()
    _, suffix = os.path.splitext(self.file_path)
    temp_pdf = os.path.join(self.temp_dir.name, f"tmp{suffix}")
    self.web_path = self.file_path
    if not self._is_s3_url(self.file_path):
        r = requests.get(self.file_path, headers=self.headers)
        if r.status_code != 200:
            raise ValueError(
                "Check the url of your file; returned status code %s"
                % r.status_code
            )
        with open(temp_pdf, mode="wb") as f:
            f.write(r.content)
    self.file_path = str(temp_pdf)
elif not os.path.isfile(self.file_path):
    raise ValueError("File path %s is not a valid file or url" % self.file_path)
```
This is the method that works for the PDF loader.
The workaround is fine for now but will cause a problem if I need to update the langchain version any time in the future.
### Who can help?
@hwchase17 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import Docx2txtLoader

loader = Docx2txtLoader("https://file-examples.com/wp-content/storage/2017/02/file-sample_100kB.docx")
doc = loader.load()[0]
```
```
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.2\plugins\python-ce\helpers\pydev\pydevconsole.py", line 364, in runcode
    coro = func()
  File "<input>", line 3, in <module>
  File "C:\Users\crawleyb\PycharmProjects\SharepointGPT\venv\lib\site-packages\langchain\document_loaders\word_document.py", line 55, in load
    return [
  File "C:\Users\crawleyb\PycharmProjects\SharepointGPT\venv\lib\site-packages\docx2txt\docx2txt.py", line 76, in process
    zipf = zipfile.ZipFile(docx)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\zipfile.py", line 1251, in __init__
    self.fp = io.open(file, filemode)
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\
```
### Expected behavior
The DocumentLoader should be able to get the contents of the docx file, loader.load()[0] should return a Document object. | Doc2txtLoader not working for web paths | https://api.github.com/repos/langchain-ai/langchain/issues/13391/comments | 3 | 2023-11-15T10:15:47Z | 2024-02-21T16:06:34Z | https://github.com/langchain-ai/langchain/issues/13391 | 1,994,466,938 | 13,391 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I'm trying to work through the streaming parameters around run_manager and callbacks.
Here's a minimal setup of what I'm trying to establish
```
class MyTool(BaseTool):
    name: str = "my_extra_tool"

    async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        nested_manager = run_manager.get_child()  # run_manager doesn't have an `on_llm_start` method, only supports `on_tool_end` / `on_tool_error` and `on_text` (which has no callback in LangchainTracer
        llm_run_manager = await nested_manager.on_llm_start({"llm": self.llm, "name": self.name + "_substep"}, prompts=[query])  # need to give
        # do_stuff, results in main_response
        main_response = "<gets created in the tool, might be part of streaming output in the future>"
        await llm_run_manager[0].on_llm_new_token(main_response)  # can't use llm_run_manager directly as it's a list
        await llm_run_manager[0].on_llm_end(response=main_response)
```
I'm seeing that on_llm_new_token callback is being called in my custom callback handler, but I don't see the response in Langsmith.
The docs aren't fully clear on how to make sure these run_ids should be propagated.
### Idea or request for content:
It would be fantastic to have a detailed example of how to correctly nest runs with arbitrary tools. | DOC: Clarify how to handle runs and linked calls with run_managers | https://api.github.com/repos/langchain-ai/langchain/issues/13390/comments | 5 | 2023-11-15T09:59:55Z | 2024-02-21T16:06:39Z | https://github.com/langchain-ai/langchain/issues/13390 | 1,994,439,783 | 13,390 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.321
Python: 3.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using Langchain to generate and execute SQL queries for MySql database.
The SQL Query generated is enclosed in single quotes
Generate SQL Query: **'**"SELECT * FROM EMPLOYEE WHERE ID = 123"**'**
Expected SQL Query: "SELECT * FROM EMPLOYEE WHERE ID = 123"
Hence, though the query is correct, SQLAlchemy is unable to execute it and gives a programming error:
(pymysql.err.ProgrammingError) (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use
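A post-processing step along these lines (illustrative only — not something the chain does today) could strip the enclosing quotes before execution:

```python
def strip_enclosing_quotes(sql_cmd: str) -> str:
    """Drop one pair of enclosing quotes the LLM sometimes adds around the SQL."""
    sql_cmd = sql_cmd.strip()
    if len(sql_cmd) >= 2 and sql_cmd[0] == sql_cmd[-1] and sql_cmd[0] in ("'", '"'):
        sql_cmd = sql_cmd[1:-1]
    return sql_cmd

print(strip_enclosing_quotes('\'"SELECT * FROM EMPLOYEE WHERE ID = 123"\''))
# -> "SELECT * FROM EMPLOYEE WHERE ID = 123"
```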
### Expected behavior
Expected SQL Query should be without enclosing single quotes.
I did some debugging as looks like we get single quotes while invoking predict method of LLM - https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/sql/base.py#L156 | MySQL : SQL Query generated contains enclosing single quotes leading to SQL Alchemy giving Programming Error, 1064 | https://api.github.com/repos/langchain-ai/langchain/issues/13387/comments | 4 | 2023-11-15T09:09:20Z | 2023-11-15T09:51:03Z | https://github.com/langchain-ai/langchain/issues/13387 | 1,994,353,668 | 13,387 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
openai.proxy = {
"http": "http://127.0.0.1:7890",
"https": "http://127.0.0.1:7890"
}
callback = AsyncIteratorCallbackHandler()
llm = OpenAI(
openai_api_key= os.environ["OPENAI_API_KEY"],
temperature=0,
streaming=True,
callbacks=[callback]
)
embeddings = OpenAIEmbeddings()
# faq
loader = TextLoader("static/faq/ecommerce_faq.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
docsearch = Chroma.from_documents(texts, embeddings)
faq_chain = RetrievalQA.from_chain_type(
llm,
chain_type="stuff",
retriever=docsearch.as_retriever(),
)
@tool("FAQ")
def faq(input) -> str:
""""useful for when you need to answer questions about shopping policies, like return policy, shipping policy, etc."""
print('faq input', input)
return faq_chain.acall(input)
tools = [faq]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation_agent = initialize_agent(
tools,
llm,
agent="conversational-react-description",
memory=memory,
verbose=True,
)
async def wait_done(fn, event):
try:
await fn
except Exception as e:
print('error', e)
# event.set()
finally:
event.set()
async def call_openai(question):
# chain = faq(question)
chain = conversation_agent.acall(question)
coroutine = wait_done(chain, callback.done)
task = asyncio.create_task(coroutine)
async for token in callback.aiter():
# print('token', token)
yield f"{token}"
await task
app = FastAPI()
@app.get("/")
async def homepage():
return FileResponse('static/index.html')
@app.post("/ask")
def ask(body: dict):
return StreamingResponse(call_openai(body['question']), media_type="text/event-stream")
if __name__ == "__main__":
uvicorn.run(host="127.0.0.1", port=8888, app=app)
```
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Thought: Do I need to use a tool? No
AI: 你好!很高兴认识你!
> Finished chain.
INFO: 127.0.0.1:65193 - "POST /ask HTTP/1.1" 200 OK
conversation_agent <coroutine object Chain.acall at 0x1326b78a0>
coroutine <async_generator object AsyncIteratorCallbackHandler.aiter at 0x133b49340>
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: FAQ
Action Input: 如何更改帐户信息error faq() takes 1 positional argument but 2 were given
### Suggestion:
_No response_ | Issue: <error faq() takes 1 positional argument but 2 were given> | https://api.github.com/repos/langchain-ai/langchain/issues/13383/comments | 8 | 2023-11-15T04:05:00Z | 2024-02-21T16:06:44Z | https://github.com/langchain-ai/langchain/issues/13383 | 1,993,997,381 | 13,383 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi guys,
Here is the situation: I have 3 tools to use one by one, but their outputs are not of the same type. Say:
Tool 1: outputs normally, and the output can be further processed by GPT in the chain.
Tool 2: outputs a special format, meaning the output should not be processed by GPT again, because then the output format and data would no longer be correct.
Tool 3: outputs normally, like Tool 1.
Given this, I find it hard to get the correct answer. If all the tools are set to `return_direct=False`, the answer from Tool 2 will not be right, and if I set `return_direct=True`, the chain of Tools 1->2->3 is lost...
What should I do?
Any help will be highly appreciated.
Best
### Suggestion:
_No response_ | Custom Tool Output | https://api.github.com/repos/langchain-ai/langchain/issues/13382/comments | 4 | 2023-11-15T03:27:22Z | 2024-02-21T16:06:49Z | https://github.com/langchain-ai/langchain/issues/13382 | 1,993,967,484 | 13,382 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version:
Platform: WSL for Windows (Linux 983G3J3 5.15.90.1-microsoft-standard-WSL2)
Python version: 3.10.6
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1a. Create a project in GCP and deploy an open source model from Vertex AI Model Garden (follow the provided Colab notebook to deploy to an endpoint)
1b. Instantiate a VertexAIModelGarden object
```python
llm = VertexAIModelGarden(project=PROJECT_ID, endpoint_id=ENDPOINT_ID)
```
2. Create a prompt string
```python
prompt = "This is an example prompt"
```
3. Call the generate method on the VertexAIModelGarden object
```python
llm.generate([prompt])
```
4. The following error will be produced:
```python
../python3.10/site-packages/langchain/llms/vertexai.py", line 452, in <listcomp>
[Generation(text=prediction[self.result_arg]) for prediction in result]
TypeError: string indices must be integers
```
### Expected behavior
Expecting the generate method to return an LLMResult object that contains the model's response in the 'generations' property
In order to align with the Vertex AI API, the `_generate` method should iterate through `response.predictions` and set the `text` property of each `Generation` object to the iterator variable, since `response.predictions` is a list that contains the output strings.
```python
for result in response.predictions:
generations.append(
[Generation(text=result)]
)
``` | VertexAIModelGarden _generate method not in sync with VertexAI API | https://api.github.com/repos/langchain-ai/langchain/issues/13370/comments | 8 | 2023-11-14T21:55:57Z | 2024-03-18T16:06:29Z | https://github.com/langchain-ai/langchain/issues/13370 | 1,993,638,093 | 13,370 |
[
"hwchase17",
"langchain"
]
| ### System Info
openai==1.2.4
langchain==0.0.325
llama_index==0.8.69
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
/usr/local/lib/python3.10/site-packages/llama_index/indices/base.py:102: in from_documents
return cls(
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:49: in __init__
super().__init__(
/usr/local/lib/python3.10/site-packages/llama_index/indices/base.py:71: in __init__
index_struct = self.build_index_from_nodes(nodes)
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:254: in build_index_from_nodes
return self._build_index_from_nodes(nodes, **insert_kwargs)
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:235: in _build_index_from_nodes
self._add_nodes_to_index(
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:188: in _add_nodes_to_index
nodes = self._get_node_with_embedding(nodes, show_progress)
/usr/local/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py:100: in _get_node_with_embedding
id_to_embed_map = embed_nodes(
/usr/local/lib/python3.10/site-packages/llama_index/indices/utils.py:137: in embed_nodes
new_embeddings = embed_model.get_text_embedding_batch(
/usr/local/lib/python3.10/site-packages/llama_index/embeddings/base.py:250: in get_text_embedding_batch
embeddings = self._get_text_embeddings(cur_batch)
/usr/local/lib/python3.10/site-packages/llama_index/embeddings/langchain.py:82: in _get_text_embeddings
return self._langchain_embedding.embed_documents(texts)
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:490: in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:374: in _get_len_safe_embeddings
response = embed_with_retry(
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:100: in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py:47: in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
E AttributeError: module 'openai' has no attribute 'error'
```
### Expected behavior
I suppose it should run, I'll provide some reproducible code here in a minute. | module 'openai' has no attribute 'error' | https://api.github.com/repos/langchain-ai/langchain/issues/13368/comments | 15 | 2023-11-14T20:37:27Z | 2024-05-15T21:00:26Z | https://github.com/langchain-ai/langchain/issues/13368 | 1,993,528,820 | 13,368 |
[
"hwchase17",
"langchain"
]
| ### System Info
After reaching a million embedding records in Postgres, everything became ridiculously slow and Postgres CPU usage went to 100%.
The fix was simple:
```
CREATE INDEX CONCURRENTLY langchain_pg_embedding_collection_id ON langchain_pg_embedding(collection_id);
CREATE INDEX CONCURRENTLY langchain_pg_collection_name ON langchain_pg_collection(name);
```
I think it's important to include index creation in basic setup.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run PGVector, and it will create tables without indices.
### Expected behavior
Should create indices | PGVector don't have indices what kills postgres in production | https://api.github.com/repos/langchain-ai/langchain/issues/13365/comments | 2 | 2023-11-14T19:38:34Z | 2024-02-20T16:05:56Z | https://github.com/langchain-ai/langchain/issues/13365 | 1,993,436,169 | 13,365 |
[
"hwchase17",
"langchain"
]
| ### System Info
CosmosDB chat message history is wiping out the session on each run.
The library needs to not recreate the item for the same session; otherwise this cannot be used for a chatbot, which is stateless.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from time import perf_counter
from langchain.chat_models import AzureChatOpenAI
from langchain.memory.chat_message_histories import CosmosDBChatMessageHistory
from langchain.schema import (
SystemMessage,
HumanMessage
)
from logger_setup import logger
llm = AzureChatOpenAI(
deployment_name=os.getenv('OPENAI_GPT4_DEPLOYMENT_NAME'),
model=os.getenv('OPENAI_GPT4_MODEL_NAME')
)
cosmos_history = CosmosDBChatMessageHistory(
cosmos_endpoint=os.getenv('AZ_COSMOS_ENDPOINT'),
cosmos_database=os.getenv('AZ_COSMOS_DATABASE'),
cosmos_container=os.getenv('AZ_COSMOS_CONTAINER'),
session_id='1234',
user_id='user001',
connection_string=os.getenv('AZ_COSMOS_CS')
)
# cosmos_history.prepare_cosmos()
sys_msg = SystemMessage(content='You are a helpful bot that can run various SQL queries')
# cosmos_history.add_message(sys_msg)
human_msg = HumanMessage(content='Can you tell me how things are done in database')
cosmos_history.add_message(human_msg)
messages = []
messages.append(sys_msg)
messages.append(human_msg)
start_time = perf_counter()
response = llm.predict_messages(messages=messages)
end_time = perf_counter()
logger.info('Total time taken %d s', (end_time - start_time))
print(response)
messages.append(response)
cosmos_history.add_message(response)
```
### Expected behavior
This needs to be saved across subsequent (stateless) web service calls with the same session id.
Also, this code does not retrieve data:
```python
try:
    logger.info(f"Reading item with session_id: {self.session_id}, user_id: {self.user_id}")
    item = self._container.read_item(
        item=self.session_id,
        partition_key=self.user_id
    )
except CosmosHttpResponseError as ex:
    logger.error(f"Error reading item from CosmosDB: {ex}")
    return
```
But using SQL does:
```python
query = f"SELECT * FROM c WHERE c.id = '{self.session_id}' AND c.user_id = '{self.user_id}'"
items = list(self._container.query_items(query, enable_cross_partition_query=True))
if items:
    item = items[0]
    # Continue with reading the item
else:
    logger.info("Item not found in CosmosDB")
    return
```
| CosmosDBHistoryMessage | https://api.github.com/repos/langchain-ai/langchain/issues/13361/comments | 6 | 2023-11-14T18:58:13Z | 2024-06-08T16:07:30Z | https://github.com/langchain-ai/langchain/issues/13361 | 1,993,370,326 | 13,361 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
notion page properties
https://developers.notion.com/reference/page-property-values
The current version of the Notion DB loader doesn't support the following properties for metadata:
- `checkbox`
- `email`
- `number`
- `select`
### Suggestion:
I would like to make a PR to fix this issue if it's okay. | Issue: Notion DB loader for doesn't supports some properties | https://api.github.com/repos/langchain-ai/langchain/issues/13356/comments | 0 | 2023-11-14T17:20:22Z | 2023-11-15T04:31:13Z | https://github.com/langchain-ai/langchain/issues/13356 | 1,993,198,924 | 13,356 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Supplying a non-default parameter to a model is not something that should always trigger a warning. It should be easy to suppress that specific warning.
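Until a dedicated option exists, a possible workaround (assuming the message is emitted through Python's `warnings` module) would be a targeted filter:

```python
import warnings

# silence only the "is not default parameter" warnings, keep everything else
warnings.filterwarnings("ignore", message=".*is not default parameter.*")

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, top_p=0.9, frequency_penalty=0.1)
```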
### Motivation
The logging gets full of those warnings just because you are supplying not default parameters such as "top_p", "frequency_penalty" or "presence_penalty". | Ability to suppress warning when supplying a "not default parameter" to OpenAI Chat Models | https://api.github.com/repos/langchain-ai/langchain/issues/13351/comments | 3 | 2023-11-14T15:43:35Z | 2024-02-20T16:06:00Z | https://github.com/langchain-ai/langchain/issues/13351 | 1,993,012,205 | 13,351 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have my code, and for the first section of it I am inputting a memory key based on the session_id as a UUID.
I am receiving the following error — any idea what the issue is?
> Traceback (most recent call last):
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/llm.py", line 62, in <module>
> print(chat_csv(session_id,None,'Hi how are you?'))
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/llm.py", line 37, in chat_csv
> response = conversation({"question": question})
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 286, in __call__
> inputs = self.prep_inputs(inputs)
> ^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 443, in prep_inputs
> self._validate_inputs(inputs)
> File "/Users/habuhassan004/Desktop/VIK/10x-csv-chat/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 195, in _validate_inputs
> raise ValueError(f"Missing some input keys: {missing_keys}")
> ValueError: Missing some input keys: {'session_id'}
here is the input:
```python
session_id = uuid.uuid4()
print(chat_csv(session_id, None, 'Hi how are you?'))
```
Here is the code:
```
def chat_csv(
session_id: UUID = None,
file_path: str = None,
question: str = None
):
# session_id = str(session_id)
memory = ConversationBufferMemory(
memory_key=str(session_id),
return_messages=True
)
if file_path == None:
template = """You are a nice chatbot having a conversation with a human.
Previous conversation:
{session_id}
New human question: {question}
Response:"""
prompt = PromptTemplate.from_template(template)
conversation = LLMChain(
llm=OpenAI(temperature=0),
prompt=prompt,
verbose=False,
memory=memory
)
response = conversation({"question": question})
```
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13349/comments | 7 | 2023-11-14T13:53:39Z | 2024-02-20T16:06:06Z | https://github.com/langchain-ai/langchain/issues/13349 | 1,992,791,286 | 13,349 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
llm = ChatOpenAI(
openai_api_key= os.environ["OPENAI_API_KEY"],
temperature=0,
streaming=True,
callbacks=[AsyncIteratorCallbackHandler()]
)
embeddings = OpenAIEmbeddings()
loader = TextLoader("static/faq/ecommerce_faq.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
docsearch = Chroma.from_documents(texts, embeddings)
faq_chain = RetrievalQA.from_chain_type(
llm,
chain_type="stuff",
retriever=docsearch.as_retriever(),
)
async def call_openai(question):
callback = AsyncIteratorCallbackHandler()
# coroutine = wait_done(model.agenerate(messages=[[HumanMessage(content=question)]]), callback.done)
coroutine = wait_done(faq_chain.arun(question), callback.done)
task = asyncio.create_task(coroutine)
async for token in callback.aiter():
print('token', token)
yield f"data: {token}\n\n"
await task
app = FastAPI()
@app.get("/")
async def homepage():
return FileResponse('static/index.html')
@app.post("/ask")
def ask(body: dict):
return StreamingResponse(call_openai(body['question']), media_type="text/event-stream")
if __name__ == "__main__":
uvicorn.run(host="127.0.0.1", port=8888, app=app)
```
I can run it normally using `model.agenerate`, but it cannot run after switching to `faq_chain.arun`. What is the reason for this?
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13344/comments | 4 | 2023-11-14T11:02:21Z | 2024-02-20T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13344 | 1,992,514,531 | 13,344 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is there a way to get AutoAWQ support for vLLM? I'm setting quantization to 'awq' but it's not working.
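For reference, this is roughly what I am trying (the model name is only an example, and whether the `quantization` option is forwarded depends on the installed langchain/vllm versions):

```python
from langchain.llms import VLLM

llm = VLLM(
    model="TheBloke/Llama-2-7B-Chat-AWQ",    # example AWQ-quantized model
    trust_remote_code=True,
    max_new_tokens=256,
    vllm_kwargs={"quantization": "awq"},     # extra kwargs meant to reach vllm.LLM
)

print(llm("What is the capital of France?"))
```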
### Motivation
faster inference
### Your contribution
N/A | autoawq for vllm | https://api.github.com/repos/langchain-ai/langchain/issues/13343/comments | 3 | 2023-11-14T10:47:09Z | 2024-03-13T19:57:18Z | https://github.com/langchain-ai/langchain/issues/13343 | 1,992,482,519 | 13,343 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.335
Platform: Windows 10
Python Version = 3.11.3
IDE: VS Code
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
MWE:
A PowerPoint presentation named "test.pptx" in the same folder as the script. Content is a title slide with sample text.
```python
from langchain.document_loaders import UnstructuredPowerPointLoader
def ingest_docs():
loader= UnstructuredPowerPointLoader('test.pptx')
docs = loader.load()
return docs
```
I get a problem in 2/3 tested environments:
1. Running the above MWE with `ingest_docs()` in a simple python script will yield no problem. The content of the PowerPoint (text on the title slide) is displayed.
2. Running the above MWE in a Jupyter Notebook with `ingest_docs()` will cause the cell to run indefinetely. Trying to interrupt the kernel results in: `Interrupting the kernel timed out`. A fix is to restart the kernel.
3. Running the MWE in Streamlit (see code below) will result the spawned server to die immediately. (The cmd Window simply closes)
```python
import streamlit as st
from langchain.document_loaders import UnstructuredPowerPointLoader
def ingest_docs():
[as above]
st.write(ingest_docs())
```
### Expected behavior
I expect the MWE to work the same in the Notebook and Streamlit environments just as in the simple Python script. | PowerPoint loader crashing | https://api.github.com/repos/langchain-ai/langchain/issues/13342/comments | 9 | 2023-11-14T10:24:09Z | 2024-02-25T16:05:42Z | https://github.com/langchain-ai/langchain/issues/13342 | 1,992,439,429 | 13,342 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi all. Can anyone explain why there are no explicit init functions (constructors) for the classes in LangChain, while Pydantic is used everywhere? It makes it hard for IDEs to jump to the related code. Is that a kind of anti-pattern in Python programming?
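For context, most LangChain classes are Pydantic models, so the constructor is generated from the declared fields rather than written by hand; a minimal illustration of that pattern (not actual LangChain code):

```python
from pydantic import BaseModel  # LangChain 0.0.x builds on pydantic v1

class MyComponent(BaseModel):
    temperature: float = 0.0
    verbose: bool = False

# pydantic synthesises __init__ from the declared fields,
# which is why no explicit "def __init__" appears in the source
component = MyComponent(temperature=0.7, verbose=True)
print(component.temperature)
```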
### Suggestion:
_No response_ | Why no constructor (init function) in Langchain while pydantics are everywhere? | https://api.github.com/repos/langchain-ai/langchain/issues/13340/comments | 3 | 2023-11-14T09:14:09Z | 2024-02-14T00:35:21Z | https://github.com/langchain-ai/langchain/issues/13340 | 1,992,311,627 | 13,340 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hi all, on the main website [langchain.com](https://www.langchain.com/), above the fold, to the right of the hero section, both boxes (Python and JS) point to the [Python docs](https://python.langchain.com/docs/get_started/introduction).
### Idea or request for content:
It's better and clearer if each block points to the related lang. Of course, it's not urgent or critical :) | DOC: Wrong documentation link | https://api.github.com/repos/langchain-ai/langchain/issues/13336/comments | 2 | 2023-11-14T08:10:40Z | 2024-02-20T16:06:20Z | https://github.com/langchain-ai/langchain/issues/13336 | 1,992,202,637 | 13,336 |
[
"hwchase17",
"langchain"
]
| ### Feature request
LangChain supports GET functions, but there is no support for POST functions. This feature request proposes the addition of POST API functionality to enhance the capabilities of LangChain.
### Motivation
The motivation behind this feature request is to extend the capabilities of LangChain to handle not only GET requests but also POST requests.
### Your contribution
I am willing to contribute to the development of this feature. I will carefully follow the contributing guidelines and provide a pull request to implement the POST API functionality. | Add POST API Functionality to LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/13334/comments | 5 | 2023-11-14T07:49:34Z | 2024-05-03T18:24:54Z | https://github.com/langchain-ai/langchain/issues/13334 | 1,992,174,212 | 13,334 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I couldn't find any documentation on this — please help.
How can I stream the response from Ollama?
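For reference, here is a minimal sketch of what I am trying (assuming a local Ollama server with the `llama2` model pulled; I am not sure whether the callback-based approach below is the intended way):

```python
from langchain.llms import Ollama
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = Ollama(
    model="llama2",
    callbacks=[StreamingStdOutCallbackHandler()],  # prints tokens as they arrive
)
llm("Tell me a short joke")

# alternatively, iterate over chunks via the runnable interface
for chunk in llm.stream("Tell me another short joke"):
    print(chunk, end="", flush=True)
```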
### Suggestion:
_No response_ | Issue: <How to do streaming response from Ollama??> | https://api.github.com/repos/langchain-ai/langchain/issues/13333/comments | 6 | 2023-11-14T06:21:20Z | 2024-06-14T21:05:29Z | https://github.com/langchain-ai/langchain/issues/13333 | 1,992,069,370 | 13,333 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to replicate [this example](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch#basic-example) of langchain. I'm using ElasticSearch as the database to store the embedding. In the given example I have replaced `embeddings = OpenAIEmbeddings()` with `embeddings = OllamaEmbeddings(model="llama2")` which one can import `from langchain.embeddings import OllamaEmbeddings`. I'm running `Ollama` locally. But, I'm running into below error:
```
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
elasticsearch.exceptions.RequestError: RequestError(400, 'mapper_parsing_exception', 'The number of dimensions for field [vector] should be in the range [1, 2048] but was [4096]')
```
The Ollama model always creates embeddings of size `4096`, even when I set a chunk size of `500`. Is there any way to reduce the embedding size, or is there any way to store larger embeddings in Elasticsearch?
### Suggestion:
_No response_ | Reduce embeddings size | https://api.github.com/repos/langchain-ai/langchain/issues/13332/comments | 4 | 2023-11-14T05:19:21Z | 2024-02-26T01:10:56Z | https://github.com/langchain-ai/langchain/issues/13332 | 1,992,011,242 | 13,332 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Implement extraction of message content whenever iMessageChatLoader skips messages with null `text` field even though the content is present in other fields.
### Motivation
Due to iMessage's database schema change in the new macOS Ventura, newer messages now have their content encoded in the `attributedBody` field, with the `text` field being null.
This issue was originally raised by @idvorkin in Issue #10680.
### Your contribution
We intend to submit a pull request for this issue close to the end of November. | Enhancing iMessageChatLoader to prevent skipping messages with extractable content | https://api.github.com/repos/langchain-ai/langchain/issues/13326/comments | 0 | 2023-11-14T04:08:46Z | 2023-11-28T20:45:45Z | https://github.com/langchain-ai/langchain/issues/13326 | 1,991,952,054 | 13,326 |
[
"hwchase17",
"langchain"
]
I have a web app that uses SQLAlchemy 1.x. When I use LangChain, which needs SQLAlchemy 2.x, it's difficult for me to update SQLAlchemy to 2.x. How can I fix this problem?
[
"hwchase17",
"langchain"
]
| ### System Info
- Langchain version: langchain==0.0.335
- OpenAI version: openai==1.2.3
- Platform: Darwin MacBook Pro 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103 arm64
- Python version: Python 3.9.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Example script:
```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI()
user_input = input("Input prompt: ")
response = llm.invoke(user_input)
```
```python
return self._generate(
File "...lib/python3.9/site-packages/langchain/chat_models/openai.py", line 360, in _generate
response = self.completion_with_retry(
File "...lib/python3.9/site-packages/langchain/chat_models/openai.py", line 293, in completion_with_retry
retry_decorator = _create_retry_decorator(self, run_manager=run_manager)
File "...lib/python3.9/site-packages/langchain/chat_models/openai.py", line 73, in _create_retry_decorator
openai.error.Timeout,
Exception module 'openai' has no attribute 'error'
```
Cause appears to be in `langchain.chat_models.openai`: https://github.com/langchain-ai/langchain/blob/5a920e14c06735441a9ea28c1313f8bd433dc721/libs/langchain/langchain/chat_models/openai.py#L82-L88
Modifying above to this appears to resolve the problem:
```python
errors = [
openai.Timeout,
openai.APIError,
openai.APIConnectionError,
openai.RateLimitError,
openai.ServiceUnavailableError,
]
```
### Expected behavior
Chat model invocation should return a BaseMessage. | OpenAI exception classes have different import path in openai 1.2.3, causing breaking change in ChatOpenAI - Simple fix | https://api.github.com/repos/langchain-ai/langchain/issues/13323/comments | 2 | 2023-11-14T03:41:42Z | 2024-02-14T04:22:33Z | https://github.com/langchain-ai/langchain/issues/13323 | 1,991,927,961 | 13,323 |
[
"hwchase17",
"langchain"
]
| ### System Info
I get the following error just by adding model parameters to existing code that works with other models:
"Malformed input request: 2 schema violations found, please reformat your input and try again."
```
model_name = "meta.llama2-13b-chat-v1"
model_kwargs = {
"max_gen_len": 512,
"temperature": 0.2,
"top_p": 0.9
}
bedrock_boto = boto3.client("bedrock-runtime", "us-east-1")
bedrock_llm = Bedrock(model_id=model_name, client=bedrock_boto,model_kwargs=model_kwargs)
bedrock_llm("Hello!")
```
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Attempt to call the new llama2 bedrock model like so:
```
model_name = "meta.llama2-13b-chat-v1"
model_kwargs = {
"max_gen_len": 512,
"temperature": 0.2,
"top_p": 0.9
}
bedrock_boto = boto3.client("bedrock-runtime", "us-east-1")
bedrock_llm = Bedrock(model_id=model_name, client=bedrock_boto,model_kwargs=model_kwargs)
bedrock_llm("Hello!")
```
### Expected behavior
The Bedrock class would work successfully as it does for other BedRock models | Add support for Bedrock Llama 2 13b model (meta.llama2-13b-chat-v1) | https://api.github.com/repos/langchain-ai/langchain/issues/13316/comments | 4 | 2023-11-14T00:34:45Z | 2024-02-22T16:07:04Z | https://github.com/langchain-ai/langchain/issues/13316 | 1,991,719,732 | 13,316 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to use the Pinecone SelfQueryRetriever with a metadata field in datetime format. When I try to ask anything with a date, it shows a Bad Request error.
I already changed my PineconeTranslator class with the code below (following the same pattern as Scale), since Pinecone only accepts numeric values and asks you to use the epoch format.
langchain\retrievers\self_query\pinecone.py
Class PineconeTranslator
```python
def visit_comparison(self, comparison: Comparison) -> Dict:
    value = comparison.value
    # convert timestamp to float as epoch format.
    if type(value) is date:
        value = time.mktime(value.timetuple())
    return {
        comparison.attribute: {
            self._format_func(comparison.comparator): value
        }
    }
```
### Motivation
I want to use selfretriever with pinecone and langchain the error limit the use of langchain with pinecone.
### Your contribution
Yes, I already made the changes in my machine like the code in the description | Pinecone SelfQueryRetriever with datetime filter | https://api.github.com/repos/langchain-ai/langchain/issues/13309/comments | 3 | 2023-11-13T22:25:18Z | 2024-06-01T00:07:33Z | https://github.com/langchain-ai/langchain/issues/13309 | 1,991,588,047 | 13,309 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Implement Async methods in ollama llm and chat model classes.
### Motivation
The Ollama implementation doesn't include the async methods `_astream` and `_agenerate`, so I cannot create an async agent...
### Your contribution
This is my first issue; I can try, but I am working on 3 different projects right now...
[
"hwchase17",
"langchain"
]
| ### Feature request
We have several RAG templates in [langchain templates](https://github.com/langchain-ai/langchain/tree/master/templates) using other vector DBs, but not OpenSearch.
We need to add a similar template for OpenSearch, as OpenSearch supports the same feature.
As a first step we will start with OpenAI, and then we can expand this to other LLMs including Bedrock.
### Motivation
Opensearch supports this feature and it will be super helpful for opensearch users.
### Your contribution
I will take the initiative to raise PR on this issue. | Langchain Template for RAG using Opensearch | https://api.github.com/repos/langchain-ai/langchain/issues/13295/comments | 3 | 2023-11-13T16:58:01Z | 2024-02-12T16:51:20Z | https://github.com/langchain-ai/langchain/issues/13295 | 1,991,066,462 | 13,295 |
[
"hwchase17",
"langchain"
]
| ### System Info
Linux and langchain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am currently implementing a RAG chatbot using `ConversationBufferMemory` and `ConversationalRetrievalChain` as follows:
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    output_key="answer",
    return_messages=True,
)

chain = ConversationalRetrievalChain.from_llm(
    llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 1}),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": PROMPT},
    return_source_documents=True,
    verbose=True,
)
```
Where PROMPT is a prompt template that inputs chat_history, context and question.
### Expected behavior
I would like the retrieval chain and memory to keep track of the context, i.e. instead of replacing the context with newly retrieved documents every time the user inputs a new question, I want to keep track of the accumulated context.
My end goal is to have a function that takes into the old context, validate if retrieval of new documents is necessary, and if those documents can replace the new context or they need to be append to the old context. Is there a way to do this? | Context retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/13291/comments | 3 | 2023-11-13T13:48:33Z | 2024-02-19T16:06:10Z | https://github.com/langchain-ai/langchain/issues/13291 | 1,990,699,489 | 13,291 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I currently have an OLLAMA model running locally. I aim to develop a CustomLLM model that supports asynchronous streaming to interface with this model. However, the OLLAMA adapter provided by LangChain lacks support for these specific features. How can I accomplish this integration?
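What I have in mind is something along these lines — a sketch only, where `stream_from_ollama` is a placeholder for whatever async client I end up writing, and the exact override signatures may differ between LangChain versions:

```python
from typing import Any, AsyncIterator, List, Optional

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain.llms.base import LLM
from langchain.schema.output import GenerationChunk


async def stream_from_ollama(prompt: str) -> AsyncIterator[str]:
    """Placeholder async client; imagine it calls the local Ollama HTTP API."""
    for token in ["Hello", ", ", "world"]:
        yield token


class MyStreamingLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "my-streaming-llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # trivial non-streaming fallback just for the sketch
        return "Hello, world"

    async def _astream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[GenerationChunk]:
        async for token in stream_from_ollama(prompt):
            chunk = GenerationChunk(text=token)
            if run_manager:
                await run_manager.on_llm_new_token(token)
            yield chunk
```

Is that roughly the recommended pattern for async streaming, or is there a better-supported way?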
### Suggestion:
_No response_ | Issue: Create async streaming custom LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13289/comments | 6 | 2023-11-13T11:20:02Z | 2024-02-19T16:06:15Z | https://github.com/langchain-ai/langchain/issues/13289 | 1,990,442,921 | 13,289 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am working with an Elasticsearch index running locally via a Docker container. Now I want to incorporate memory so that an Elasticsearch query can be handled just like a chat. How can I do that, so that each question I ask takes the previous answers into account? The ES query should be prepared accordingly.
Here is the sample code:
```
from elasticsearch import Elasticsearch
from langchain.chat_models import ChatOpenAI
from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain
import os
import openai
from dotenv import load_dotenv
import json
from langchain.prompts import HumanMessagePromptTemplate, AIMessagePromptTemplate
from langchain.schema import HumanMessage, AIMessage
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv('OPENAI_API_KEY')
ELASTIC_SEARCH_SERVER = "http://localhost:9200"
db = Elasticsearch(ELASTIC_SEARCH_SERVER)
memory = ConversationBufferMemory(memory_key="chat_history")
PROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.
Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.
Use the following format:
Question: Question here
ESQuery: Elasticsearch Query formatted as json
"""
PROMPT_SUFFIX = """Only use the following Elasticsearch indices:
{indices_info}
Question: {input}
ESQuery:"""
query_Prompt=PROMPT_TEMPLATE + PROMPT_SUFFIX
PROMPT = PromptTemplate.from_template(
query_Prompt,
)
DEFAULT_ANSWER_TEMPLATE = """Given an input question and relevant data from a database, answer the user question.
Only provide answer to me.
Use the following format:
Question: {input}
Data: {data}
Answer:"""
ANSWER_PROMPT = PromptTemplate.from_template(DEFAULT_ANSWER_TEMPLATE)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT, answer_prompt=ANSWER_PROMPT ,verbose=True, memory=memory)
while True:
# Get user input
user_input = input("Ask a question (or type 'exit' to end): ")
if user_input.lower() == 'exit':
break
# Invoke the chat model with the user's question
chain_answer = chain.invoke(user_input)
```
What could be the solution? Currently, it does not work when a follow-up question is related to the previous answer.
### Suggestion:
_No response_ | Incorporate Memory inside ElasticsearchDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/13288/comments | 6 | 2023-11-13T11:02:36Z | 2024-02-20T16:06:40Z | https://github.com/langchain-ai/langchain/issues/13288 | 1,990,415,478 | 13,288 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Similar to Text Generation Inference (TGI) for LLMs, HuggingFace created an inference server for text embeddings models called Text Embedding Inference (TEI).
See: https://github.com/huggingface/text-embeddings-inference
Could you integrate TEI into the supported LangChain text embedding models or do you guys already plan to do this?
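Until there is an official integration, a custom `Embeddings` wrapper around TEI's REST API could look roughly like this (assuming the server exposes the default `/embed` route; the endpoint URL and payload shape should be double-checked against the TEI docs):

```python
from typing import List

import requests
from langchain.embeddings.base import Embeddings


class TEIEmbeddings(Embeddings):
    """Hypothetical wrapper around a text-embeddings-inference server."""

    def __init__(self, base_url: str = "http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        response = requests.post(f"{self.base_url}/embed", json={"inputs": texts})
        response.raise_for_status()
        return response.json()

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]
```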
### Motivation
We currently develop a rag based chat app and plan to deploy the components as microservices (LLM, DB, Embedding Model). Currently the only other suitable solution for us would be to use SagemakerEndpointEmbeddings. However being able to use TEI would be a great benefit.
### Your contribution
I work as an ML Engineer and could probably assist in some way if necessary. | Support for Text Embedding Inference (TEI) from HuggingFace | https://api.github.com/repos/langchain-ai/langchain/issues/13286/comments | 3 | 2023-11-13T08:52:10Z | 2024-04-29T16:11:21Z | https://github.com/langchain-ai/langchain/issues/13286 | 1,990,163,579 | 13,286 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
from langchain.chat_models import AzureChatOpenAI

llm_chat = AzureChatOpenAI(deployment_name="gpt-4_32k", model_name = 'gpt-4-32k', openai_api_version=openai.api_version, temperature=0)

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables = ["text"],
    template="{text}"
)
llmchain = LLMChain(llm=llm_chat, prompt=prompt)
llmchain.run(text)
```
```
NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
openai==1.2.3 and langchain==0.0.335
```python
from langchain.chat_models import AzureChatOpenAI

text = "Where is Germany located?"
llm_chat = AzureChatOpenAI(deployment_name="gpt-4_32k", model_name = 'gpt-4-32k', openai_api_version=openai.api_version, temperature=0)

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables = ["text"],
    template="{text}"
)
llmchain = LLMChain(llm=llm_chat, prompt=prompt)
llmchain.run(text)
```
### Expected behavior
In europe | NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} when using openai==1.2.3 and langchain==0.0.335 | https://api.github.com/repos/langchain-ai/langchain/issues/13284/comments | 24 | 2023-11-13T08:30:39Z | 2024-07-24T06:53:59Z | https://github.com/langchain-ai/langchain/issues/13284 | 1,990,129,379 | 13,284 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
**my code:**
````python
from langchain.document_loaders import PyPDFLoader, TextLoader, PyMuPDFLoader
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
from langchain.prompts.prompt import PromptTemplate

loader_pdf = PyMuPDFLoader("/Users/11.pdf")
doc_pdf = loader_pdf.load()
llm = ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo-16k")

templ = """You are a smart assistant designed to help high school teachers come up with reading comprehension questions.
Given a piece of text, you must come up with a question and answer pair that can be used to test a student's reading comprehension abilities.
When coming up with this question/answer pair, you must respond in the following format in Chinese:
```
{{
    "question": "$YOUR_QUESTION_HERE",
    "answer": "$THE_ANSWER_HERE"
}}
```
Everything between the ``` must be valid json.
Please come up with a question/answer pair, in the specified JSON format, for the following text:
----------------
{text}"""

prompt = PromptTemplate.from_template(templ)
chain = QAGenerationChain.from_llm(llm=llm, prompt=prompt)

doc_ = ''
for i in doc_pdf:
    doc_ += i.page_content
print(doc_)
qa_pdf = chain.run(doc_)
````
This code runs correctly and produces results.
But with `model_name='gpt-4'`, an error is reported.
What is going on?
**the error:**
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
Cell In[22], line 1
----> 1 qa_pdf = chain.run(doc_)
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/base.py:505, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
503 if len(args) != 1:
504 raise ValueError("`run` supports only one positional argument.")
--> 505 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
506 _output_key
507 ]
509 if kwargs and not args:
510 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
511 _output_key
512 ]
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
313 inputs, outputs, return_only_outputs
314 )
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
297 run_manager = callback_manager.on_chain_start(
298 dumpd(self),
299 inputs,
300 name=run_name,
301 )
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
307 )
308 except BaseException as e:
309 run_manager.on_chain_error(e)
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/qa_generation/base.py:75, in QAGenerationChain._call(self, inputs, run_manager)
71 docs = self.text_splitter.create_documents([inputs[self.input_key]])
72 results = self.llm_chain.generate(
73 [{"text": d.page_content} for d in docs], run_manager=run_manager
74 )
---> 75 print(results)
76 qa = [json.loads(res[0].text) for res in results.generations]
77 return {self.output_key: qa}
File ~/anaconda3/envs/langchain-py311/lib/python3.11/site-packages/langchain/chains/qa_generation/base.py:75, in <listcomp>(.0)
71 docs = self.text_splitter.create_documents([inputs[self.input_key]])
72 results = self.llm_chain.generate(
73 [{"text": d.page_content} for d in docs], run_manager=run_manager
74 )
---> 75 print(results)
76 qa = [json.loads(res[0].text) for res in results.generations]
77 return {self.output_key: qa}
File ~/anaconda3/envs/langchain-py311/lib/python3.11/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
341 s = s.decode(detect_encoding(s), 'surrogatepass')
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
348 cls = JSONDecoder
File ~/anaconda3/envs/langchain-py311/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
332 def decode(self, s, _w=WHITESPACE.match):
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
File ~/anaconda3/envs/langchain-py311/lib/python3.11/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
### Suggestion:
_No response_ | Replacing the correct code with GPT-4 will result in an error | https://api.github.com/repos/langchain-ai/langchain/issues/13283/comments | 3 | 2023-11-13T08:16:48Z | 2024-02-19T16:06:25Z | https://github.com/langchain-ai/langchain/issues/13283 | 1,990,101,660 | 13,283 |
[
"hwchase17",
"langchain"
]
| ### System Info
Platform:Windows
LangChain:0.0.335
python:3.9.12
### Who can help?
@eyurtsev @hwchase17 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code:
```python
from langchain.memory import ConversationBufferMemory
import langchain.load

ChatMemory = ConversationBufferMemory()
ChatMemory.chat_memory.add_ai_message("Hello")
ChatMemory.chat_memory.add_user_message("HI")

tojson = ChatMemory.json()
with open('E:\\memory.json', 'w') as file:
    file.write(tojson)
    file.close()

ParserChatMemory = ConversationBufferMemory.parse_file('E:\\memory.json')
```
### Expected behavior
Regarding the `parse_file` method of `ConversationBufferMemory`, there is not much introduction in the documentation, and I have not found a similar problem description. However, saving Memory to a file (or recreating it from a file) is a normal requirement, but when I execute `parse_file` the method throws a ValidationError. The complete description is as follows:
ValidationError: 1 validation error for ConversationBufferMemory
chat_memory
instance of BaseChatMessageHistory expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseChatMessageHistory)
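A possible workaround sketch is serializing only the messages with `messages_to_dict` / `messages_from_dict` instead of calling `parse_file` on the whole memory object:
```python
import json

from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

memory = ConversationBufferMemory()
memory.chat_memory.add_ai_message("Hello")
memory.chat_memory.add_user_message("HI")

# save only the messages
with open('E:\\memory.json', 'w') as f:
    json.dump(messages_to_dict(memory.chat_memory.messages), f)

# restore them into a fresh memory object
restored = ConversationBufferMemory()
with open('E:\\memory.json') as f:
    restored.chat_memory.messages = messages_from_dict(json.load(f))
```
Still, it would be nice if `parse_file` worked directly.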
If there is any violation of my operation, please tell me, thank you very much | Use "parse_file" in class "ConversationBufferMemory" raise a ValidationError | https://api.github.com/repos/langchain-ai/langchain/issues/13282/comments | 7 | 2023-11-13T07:47:40Z | 2024-03-13T18:59:57Z | https://github.com/langchain-ai/langchain/issues/13282 | 1,990,059,585 | 13,282 |
[
"hwchase17",
"langchain"
]
|
I am using PGVector as the retriever in my ConversationalRetrieval chain.
In my documents there is some meta information saved, like topic, keywords, etc.
How can I use this meta information during retrieval instead of relying only on the content?
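One direction I am considering (a sketch only — it assumes metadata keys such as `topic` were stored with the documents, and that `store` is the existing PGVector instance; the exact filter syntax may differ by version) is passing a metadata filter through the retriever's `search_kwargs`:

```python
retriever = store.as_retriever(
    search_kwargs={
        "k": 4,
        "filter": {"topic": "returns"},   # match on stored metadata, not only on content
    }
)
docs = retriever.get_relevant_documents("What is the return policy?")
```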
Would appreciate any solution for this. | Utilize metadata during retrieval in ConversationalRetrieval and PGvector | https://api.github.com/repos/langchain-ai/langchain/issues/13281/comments | 4 | 2023-11-13T07:43:07Z | 2024-02-21T16:07:04Z | https://github.com/langchain-ai/langchain/issues/13281 | 1,990,054,092 | 13,281 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I need to do QA over a document using retrieval. I should not use any large language model; I can only use an embedding model. Here is my code:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("state_of_the_union.txt")
    documents = loader.load()
    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()
    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)
    # query it
    query = "What did the president say about Ketanji Brown Jackson"
    docs = db.similarity_search(query)
    # print results
    print(docs[0].page_content)
    # note: "docsearch" was undefined in the original snippet; the vector store here is "db"
    qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=db.as_retriever())
    query = "What did the president say about Ketanji Brown Jackson"
    qa.run(query)
```
How can I change this code so that it does not use any LLM in `qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=db.as_retriever())`?
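For reference, the closest retrieval-only pattern I can think of (a minimal sketch: with no LLM there is nothing to synthesize an answer, so it simply returns the raw matching chunks) is:
```python
# Return the top matching chunks directly from the vector store; no language
# model is involved, so the "answer" is just the retrieved text itself.
retriever = db.as_retriever(search_kwargs={"k": 3})
relevant_docs = retriever.get_relevant_documents(query)
for doc in relevant_docs:
    print(doc.page_content)
```
Is there a better way to do extractive QA with embeddings only?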
### Suggestion:
_No response_ | qa retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/13279/comments | 5 | 2023-11-13T07:15:53Z | 2024-02-19T16:06:36Z | https://github.com/langchain-ai/langchain/issues/13279 | 1,990,021,102 | 13,279 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I fixed the code as follows:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("paul_graham/paul_graham_essay.txt")
    documents = loader.load()
    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()
    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)
    # query it
    query = "What were the two main things the author worked on before college?"
    docs = db.similarity_search(query)
    # print results
    print(docs[0].page_content)
```
I get the following output:
"I was nervous about money, because I could sense that Interleaf was on the way down. Freelance Lisp hacking work was very rare, and I didn't want to have to program in another language, which in those days would have meant C++ if I was lucky. So with my unerring nose for financial opportunity, I decided to write another book on Lisp. This would be a popular book, the sort of book that could be used as a textbook. I imagined myself living frugally off the royalties and spending all my time painting. (The painting on the cover of this book, ANSI Common Lisp, is one that I painted around this time.)
The best thing about New York for me was the presence of Idelle and Julian Weber. Idelle Weber was a painter, one of the early photorealists, and I'd taken her painting class at Harvard. I've never known a teacher more beloved by her students. Large numbers of former students kept in touch with her, including me. After I moved to New York I became her de facto studio assistant.
She liked to paint on big, square canvases, 4 to 5 feet on a side. One day in late 1994 as I was stretching one of these monsters there was something on the radio about a famous fund manager. He wasn't that much older than me, and was super rich. The thought suddenly occurred to me: why don't I become rich? Then I'll be able to work on whatever I want.
Meanwhile I'd been hearing more and more about this new thing called the World Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where he was now in grad school at Harvard. It seemed to me that the web would be a big deal. I'd seen what graphical user interfaces had done for the popularity of microcomputers. It seemed like the web would do the same for the internet."
But I should get "Before college the two main things I worked on, outside of school, were writing and programming."
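One diagnostic that might help (a small sketch; the k value is just an example) is to print several retrieved chunks instead of only the first one, to see where the relevant passage actually ranks:
```python
# Inspect the top few hits rather than only docs[0]; the chunk that mentions
# writing and programming before college may simply be ranked lower.
docs = db.similarity_search(query, k=4)
for i, doc in enumerate(docs):
    print(f"--- result {i} ---")
    print(doc.page_content[:300])
```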
### Suggestion:
_No response_ | retrieval embedding | https://api.github.com/repos/langchain-ai/langchain/issues/13278/comments | 4 | 2023-11-13T06:49:13Z | 2024-03-17T16:05:37Z | https://github.com/langchain-ai/langchain/issues/13278 | 1,989,993,127 | 13,278 |
[
"hwchase17",
"langchain"
]
| how can I use SQLDatabaseSequentialChain to reduce the number of tokens for llms,can you give me some example or demo,I dont find useful info from the docs | how to use SQLDatabaseSequentialChain | https://api.github.com/repos/langchain-ai/langchain/issues/13277/comments | 3 | 2023-11-13T06:38:26Z | 2024-02-19T16:06:40Z | https://github.com/langchain-ai/langchain/issues/13277 | 1,989,982,161 | 13,277 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have the following code for a Q&A system with a retrieval mechanism:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
    # load the document and split it into chunks
    loader = TextLoader("paul_graham/paul_graham_essay.txt")
    documents = loader.load()
    # split it into chunks
    text_splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    # create the open-source embedding function
    embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # hfemb = HuggingFaceEmbeddings()
    # load it into Chroma
    db = Chroma.from_documents(docs, embedding_function)
    # query it
    query = "What were the two main things the author worked on before college?"
    docs = db.similarity_search(query)
    # print results
    print(docs[0].page_content)
```
I need to do this retrieval on a Turkish dataset, so I should use Turkish embeddings. How can I do that in my code?
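One option I am considering (a rough sketch; the model name below is just an example of a multilingual sentence-transformers checkpoint that covers Turkish, not something recommended by the docs) is to swap the embedding model for a multilingual one:
```python
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings

# Example multilingual model that includes Turkish (quality not verified here);
# "docs" is the chunked Turkish corpus produced by the splitter above.
embedding_function = SentenceTransformerEmbeddings(
    model_name="paraphrase-multilingual-MiniLM-L12-v2"
)
db = Chroma.from_documents(docs, embedding_function)
```
Is that the right way to plug a Turkish-capable embedding model into this pipeline?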
### Suggestion:
_No response_ | turkish embedding | https://api.github.com/repos/langchain-ai/langchain/issues/13276/comments | 6 | 2023-11-13T06:15:51Z | 2024-03-17T16:05:31Z | https://github.com/langchain-ai/langchain/issues/13276 | 1,989,960,245 | 13,276 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
cat /proc/version
=> Linux version 5.15.107+
cat /etc/debian_version
=> 10.13
import langchain
langchain.__version__
=> '0.0.334'
```
I spent some time debugging why the signature of the `search` function differs between Linux and macOS (M1).
I found that on macOS, which is an ARM architecture, the downloaded wheel does not contain a swigfaiss_avx2.py file.
On Linux, which is an amd64 architecture that supports AVX2, swigfaiss_avx2.py is present.
Here's the amd64 example:
If I make a FAISS vector store using `from_documents`, like:
```py
from langchain.embeddings import OpenAIEmbeddings
docs = [] # some documents
vs = FAISS.from_documents(docs, OpenAIEmbeddings())
vs.index
# <faiss.swigfaiss_avx2.IndexFlatL2; proxy of <Swig Object of type 'faiss::IndexFlatL2 *' at> >
vs.index.search
<bound method handle_Index.<locals>.replacement_search of <faiss.swigfaiss_avx2.IndexFlatL2; proxy of <Swig Object of type 'faiss::IndexFlatL2 *' at> >>
```
It is derived from swigfaiss_avx2's IndexFlatL2, and its search method is intentionally replaced by `replacement_search` in faiss/__init__.py.
But when we save this locally and load it back like:
```py
vs.save_local("faiss", "abcd.index")
vs2 = FAISS.load_local("faiss", OpenAIEmbeddings(), "abcd.index")
vs2.index
<faiss.swigfaiss.IndexFlat; proxy of <Swig Object of type 'faiss::IndexFlat *' at> >
vs2.index.search
<bound method IndexFlat.search of <faiss.swigfaiss.IndexFlat; proxy of <Swig Object of type 'faiss::IndexFlat *' at> >>
```
We can see that the vector store vs2 holds the IndexFlat from swigfaiss.py, not from swigfaiss_avx2.py.
I don't know whether the problem is in the save process or the load process, but I think it is quite a big bug, because the two `search` signatures are totally different.
```py
import inspect
inspect.signature(vs.index.search)
# <Signature (x, k, *, params=None, D=None, I=None)>
inspect.signature(vs2.index.search)
# <Signature (n, x, k, distances, labels, params=None)>
```
Besides, I found that the `FAISS_NO_AVX2` flag does not work either, because when the flag is used,
`replacement_search` does not wrap the original `search` method at all,
as explained in https://github.com/langchain-ai/langchain/issues/8857
So, how can I use the local save/load mechanism on the amd64 arch?
(ARM doesn't have this problem.)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can refer to my issue above.
### Expected behavior
Can use save/load faiss vectorstore machanism with avx2 IndexFlat in amd64 so that method `search` can wrapped properly by `replacement_search` | FAISS save_local / load_local don't be aware of avx2 | https://api.github.com/repos/langchain-ai/langchain/issues/13275/comments | 5 | 2023-11-13T06:05:42Z | 2024-02-19T16:06:45Z | https://github.com/langchain-ai/langchain/issues/13275 | 1,989,950,234 | 13,275 |
[
"hwchase17",
"langchain"
]
| ### System Info
Thank you for your great help. I have an issue with setting the relevance score function (I am using LangChain with AWS Bedrock). What I have is:
```python
from langchain.vectorstores import FAISS

loader = CSVLoader("./rag_data/a.csv")
documents_aws = loader.load()
docs = CharacterTextSplitter(chunk_size=2000, chunk_overlap=400, separator=",").split_documents(documents_aws)

def custom_score(i):
    # return 1 - 1 / (1 + np.exp(i))
    return 1

vectorstore_faiss_aws = FAISS.from_documents(documents=docs, embedding=br_embeddings)
vectorstore_faiss_aws.relevance_score_function = custom_score
```
This made no difference in the scores (and did not give any errors either): I am still getting large negative numbers, and vectorstore_faiss_aws.similarity_search_with_relevance_scores is indifferent to the score_threshold value.
Then I tried:
```python
vectorstore_faiss_aws = FAISS(relevance_score_fn=custom_score).from_documents(documents=docs, embedding=br_embeddings)
```
and it gave the following error:
FAISS.__init__() missing 4 required positional arguments: 'embedding_function', 'index', 'docstore', and 'index_to_docstore_id'
Your advice is highly appreciated
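For what it's worth, one variant I am planning to try (a sketch based on my reading of the FAISS vector store around langchain 0.0.334, so treat the keyword passthrough as an assumption) is to hand the score function to from_documents directly instead of assigning it afterwards:
```python
# Assumption: from_documents forwards extra keyword arguments to the FAISS
# constructor, which accepts relevance_score_fn; not yet verified on my setup.
vectorstore_faiss_aws = FAISS.from_documents(
    documents=docs,
    embedding=br_embeddings,
    relevance_score_fn=custom_score,
)
docs_and_scores = vectorstore_faiss_aws.similarity_search_with_relevance_scores(
    "my question", score_threshold=0.5
)
```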
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code is included:
```python
from langchain.vectorstores import FAISS

loader = CSVLoader("./rag_data/a.csv")
documents_aws = loader.load()
docs = CharacterTextSplitter(chunk_size=2000, chunk_overlap=400, separator=",").split_documents(documents_aws)

def custom_score(i):
    # return 1 - 1 / (1 + np.exp(i))
    return 1

vectorstore_faiss_aws = FAISS.from_documents(documents=docs, embedding=br_embeddings)
vectorstore_faiss_aws.relevance_score_function = custom_score
```
### Expected behavior
The score to be in the range of 0 and 1 and the vectorstore_faiss_aws.similarity_search_with_relevance_scores to react to score_threshold | AWS LangChain using bedrock: Setting Relevance Score Function | https://api.github.com/repos/langchain-ai/langchain/issues/13273/comments | 18 | 2023-11-13T04:15:41Z | 2024-03-18T16:06:24Z | https://github.com/langchain-ai/langchain/issues/13273 | 1,989,857,834 | 13,273 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I am trying to use a SageMaker endpoint for the embedding model, but I am confused about using EmbeddingsContentHandler. After some experimenting I figured out how to define the transform_input function, but transform_output did not work as expected; it gives an error like
**embeddings = response_json[0]["embedding"]
TypeError: list indices must be integers or slices, not str**. I would be really grateful if someone knows how to solve it.
My ContentHandler function is:
```python
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        embeddings = response_json[0]["embedding"]
        return embeddings
```
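Based on the TypeError, the endpoint seems to return a top-level JSON list rather than a dict, so there is no "embedding" key to index into. A debugging variant I am trying (just a sketch; the exact response shape depends on the deployed model, so the branches below are assumptions) is:
```python
def transform_output(self, output: bytes):
    response_json = json.loads(output.read().decode("utf-8"))
    print(type(response_json), str(response_json)[:200])  # inspect the real shape first
    # Assumption: some embedding containers return a plain nested list of floats,
    # in which case it can be returned as-is instead of indexing with a string key.
    if isinstance(response_json, list):
        return response_json
    return response_json["embedding"]
```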
### Idea or request for content:
_No response_ | https://python.langchain.com/docs/modules/model_io/models/llms/integrations/sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/13271/comments | 5 | 2023-11-13T02:59:08Z | 2024-02-19T16:06:50Z | https://github.com/langchain-ai/langchain/issues/13271 | 1,989,795,517 | 13,271 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The LangChain modules can be naturally used to build something like the OpenAI GPTs builder.
My understanding is that LangChain needs to add descriptions and description_embeddings to all integrations/chains/agents (not only to the tools). That would allow building the Super Agent, aka the Agent Builder.
### Motivation
LangChain's space (in terms of integrations/chains/agents) is bigger than OpenAI's. Let's use this.
### Your contribution
I can help with documentations, examples,ut-s | Chain/Agent Builder | https://api.github.com/repos/langchain-ai/langchain/issues/13270/comments | 2 | 2023-11-13T01:44:32Z | 2024-02-12T16:56:52Z | https://github.com/langchain-ai/langchain/issues/13270 | 1,989,733,972 | 13,270 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
- Langchain version: 0.0.335
- Affected LLM: chatgpt 3.5 turbo 1106 (NOT affecting: gpt 3.5 turbo 16k, gpt4, gpt4 turbo preview)
- Code template, similar to:
```python
llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', openai_api_key=openai_key)
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory)
conversation.predict(input='my instructions')
```
**The problem**
When identical instructions and parameters are used through the API and in the OpenAI playground, the model responds radically differently. The OpenAI playground responds as expected, producing answers, while the API through langchain always returns 'I cannot fulfil your request'.
**suspected causes**
OpenAI upgraded its API to 1.x, and as far as I know langchain currently only works with openai 0.28.1. Could that be the reason?
Thanks
**UPDATE 13 Nov:**
I further tested my prompts with OpenAI api 1.2.3 alone, without langchain and everything works fine:
- First I put my prompts into the OpenAI playground, which produced responses
- Second I 'view source' > 'copy', put the python code into a .py file, edited out the last AI response and ran the code = I got the response I wanted.
- Third I tested the same prompt with langchain 0.0.335 (openai 0.28.1) this time, and I always get 'I cannot fulfil this request'.
I know that langchain adds a little bit of context to every request and my prompts did consist of two rounds of chats and thus my code uses memory. I don't know which part of this is problematic.
Again, this problem does not occur with gpt-3.5-turbo, or gpt-4, or gpt-4-turbo, only **'gpt-3.5-turbo-1106'**
### Suggestion:
_No response_ | Issue: gpt3.5-turbo-1106 responds differently from the OpenAI playground | https://api.github.com/repos/langchain-ai/langchain/issues/13268/comments | 5 | 2023-11-12T23:20:38Z | 2024-05-01T16:05:23Z | https://github.com/langchain-ai/langchain/issues/13268 | 1,989,630,478 | 13,268 |
[
"hwchase17",
"langchain"
]
| I am testing a simple RAG implementation with Azure Cognitive Search. I am seeing a "cannot import name 'Vector' from azure.search.documents.models" error when I invoke my chain. The origin of my error is line 434 in langchain/vectorstores/azuresearch.py (from azure.search.documents.models import Vector).
this is the relevant code snippet, I get the import error when I execute rag_chain.invoke(question)
```python
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models.azure_openai import AzureChatOpenAI

question = "my question.."

# -- vector_store is initialized using AzureSearch(), not including that snippet here --
retriever = vector_store.as_retriever()

template = '''
Answer the question based on the following context:
{context}
Question: {question}
'''
prompt = ChatPromptTemplate.from_template(template=template)

llm = AzureChatOpenAI(
    deployment_name='MY_DEPLOYMENT_NAME',
    model_name='MY_MODEL',
    openai_api_base=MY_AZURE_OPENAI_ENDPOINT,
    openai_api_key=MY_AZURE_OPENAI_KEY,
    openai_api_version='2023-05-15',
    openai_api_type='azure',
)

rag_chain = {'context': retriever, 'question': RunnablePassthrough} | prompt | llm
rag_chain.invoke(question)
```
--------------
my package versions
langchain==0.0.331
azure-search-documents==11.4.0b11
azure-core==1.29.5
openai==0.28.1
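One thing I plan to check (a hedged guess, not a confirmed fix): newer azure-search-documents betas dropped the `Vector` class that this langchain version imports, so a quick probe like the following should show whether the installed SDK still exposes it; the suggested pin is an assumption on my part:
```python
# Check whether the installed azure-search-documents still exposes the class
# that langchain 0.0.331's azuresearch module tries to import.
try:
    from azure.search.documents.models import Vector  # noqa: F401
    print("Vector is available; the import inside langchain should work")
except ImportError:
    print("Vector was removed in this SDK build; pinning an older beta "
          "(e.g. azure-search-documents==11.4.0b8) may help (assumption)")
```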
_Originally posted by @yallapragada in https://github.com/langchain-ai/langchain/discussions/13245_ | Import error in lanchain/vectorstores/azuresearch.py | https://api.github.com/repos/langchain-ai/langchain/issues/13263/comments | 3 | 2023-11-12T20:01:20Z | 2024-02-18T16:05:11Z | https://github.com/langchain-ai/langchain/issues/13263 | 1,989,546,510 | 13,263 |
[
"hwchase17",
"langchain"
]
| ### System Info
I cannot import BabyAGI:
from langchain_experimental import BabyAGI
The langchain version is:
pip show langchain
Name: langchain
Version: 0.0.334
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: c:\users\test\anaconda3\envs\nenv\lib\site-packages
Requires: aiohttp, anyio, async-timeout, dataclasses-json, jsonpatch, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-experimental
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import dotenv
from dotenv import load_dotenv
load_dotenv()
import os
import langchain
from langchain.embeddings import OpenAIEmbeddings
import faiss
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore

# Define the embedding model
embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002")

# Initialize the vectorstore
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model, index, InMemoryDocstore({}), {})

from langchain.llms import OpenAI
from langchain_experimental import BabyAGI
```
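For reference, in the langchain-experimental versions I have looked at, BabyAGI does not appear to be re-exported from the package root; the cookbook examples import it from the autonomous_agents submodule instead (worth double-checking against the installed version):
```python
# BabyAGI lives in the autonomous_agents submodule rather than the package root.
from langchain_experimental.autonomous_agents import BabyAGI

baby_agi = BabyAGI.from_llm(llm=OpenAI(temperature=0), vectorstore=vectorstore, verbose=False)
```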
### Expected behavior
I want to be able to execute:
# Define the embedding model
embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002") | ImportError: cannot import name 'BabyAGI' from 'langchain_experimental' | https://api.github.com/repos/langchain-ai/langchain/issues/13256/comments | 3 | 2023-11-12T17:16:05Z | 2024-02-18T16:05:16Z | https://github.com/langchain-ai/langchain/issues/13256 | 1,989,494,128 | 13,256 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I hope this message finds you well. I noticed that OpenAI has recently unveiled their Assistant API, and I observed that the Langchain framework in JavaScript has already implemented an agent utilizing this new API. Could you kindly provide information on when this integration will be extended to Python?
For your reference, here is the documentation for the JavaScript implementation: [JavaScript Doc Reference](https://js.langchain.com/docs/modules/agents/agent_types/openai_assistant).
Thank you for your assistance.
### Motivation
This is a new feature from OpenAI and will be really useful for building LLM-based apps that need to utilize the code generation feature.
### Your contribution
N/A | OpenAI Assistant in Python | https://api.github.com/repos/langchain-ai/langchain/issues/13255/comments | 1 | 2023-11-12T14:58:55Z | 2023-11-26T14:48:34Z | https://github.com/langchain-ai/langchain/issues/13255 | 1,989,440,076 | 13,255 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.334
python 3.11.5
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
hubspot_chain = get_openapi_chain(
    "https://api.hubspot.com/api-catalog-public/v1/apis/crm/v3/objects/companies",
    headers={"Authorization": "Bearer <My Hubspot API token>"},
)
hubspot_chain("Fetch all contacts in my CRM, please")
```
I get the error :
BadRequestError: Error code: 400 - {'error': {'message': "'get__crm_v3_objects_companies_{companyId}_getById' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.1.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
This happens with other APIs too, I've tried the following :
1. OpenAPI
2. Asana
3. Hubspot
4. Slack
None work.
### Expected behavior
The API calls should work, I should get response to my natural language requests, which should get internally mapped to actions, as specified in the OpenAPI specs I've supplied. | Any time I try to use a openAPI API that requires auth, the openapi_chain call fails. | https://api.github.com/repos/langchain-ai/langchain/issues/13251/comments | 11 | 2023-11-12T06:43:23Z | 2024-02-21T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13251 | 1,989,262,646 | 13,251 |