issue_owner_repo
listlengths 2
2
| issue_body
stringlengths 0
261k
⌀ | issue_title
stringlengths 1
925
| issue_comments_url
stringlengths 56
81
| issue_comments_count
int64 0
2.5k
| issue_created_at
stringlengths 20
20
| issue_updated_at
stringlengths 20
20
| issue_html_url
stringlengths 37
62
| issue_github_id
int64 387k
2.46B
| issue_number
int64 1
127k
|
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
Thanks for this framework. It's making everyone's work simpler!
I'm using an LLM model to infer data from Portuguese websites, and expecting answers in Portuguese. But some of Langchain features, namely the "Get format instructions" for Output Parses come written in English. I'm not sure what the best approach would be here.
Should I do all my prompting in English, and just add "Answer in Portuguese" at the end?
Should this be a feature in Langchain, and if so, how would it work?
I'm not sure asking the community to translate it would be the right approach, because this is not just about translation, but making sure the prompts work correctly. I would be fine with some api call to replace the English text with my translation, but that doesn't seem to be part of the public API at the moment.
Thanks,
### Suggestion:
_No response_ | Issue: How to use other natural languages besides English? | https://api.github.com/repos/langchain-ai/langchain/issues/13250/comments | 7 | 2023-11-12T06:28:49Z | 2024-07-10T13:09:05Z | https://github.com/langchain-ai/langchain/issues/13250 | 1,989,259,330 | 13,250 |
[
"hwchase17",
"langchain"
]
| async def query(update: Update, context: CallbackContext):
global chain, metadatas, texts
if chain is None:
await context.bot.send_message(
chat_id=update.effective_chat.id,
text="Please load the chain first using /load")
return
user_query = update.message.text
cb = AsyncFinalIteratorCallbackHandler()
cb.stream_final_answer = True
cb.answer_prefix_tokens = ["FINAL", "ANSWER"]
cb.answer_reached = True
res = await chain.acall(user_query, callbacks=[cb])
answer = res["answer"]
sources = res.get("source_documents", [])
context.user_data['sources'] = sources
await context.bot.send_message(chat_id=update.effective_chat.id, text=answer)
for idx, source in enumerate(sources, start=1):
source_name = source.metadata.get("source", f"Unknown Source {idx}").replace(".", "")
keyboard = [[InlineKeyboardButton("Show Hadith", callback_data=str(idx))]]
await context.bot.send_message(chat_id=update.effective_chat.id,
text=f"{idx}. {source_name}",
reply_markup=InlineKeyboardMarkup(keyboard))
### Idea or request for content:
please help me, if the system cannot find the user's answer based on the existing context then the show hadith will not be displayed either the source name or keyboard
| why when the system doesn't find the answer to the user's question, show hadith still appears? | https://api.github.com/repos/langchain-ai/langchain/issues/13249/comments | 22 | 2023-11-12T05:49:18Z | 2024-02-18T16:05:31Z | https://github.com/langchain-ai/langchain/issues/13249 | 1,989,251,097 | 13,249 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
import uvicorn
import os
from typing import AsyncIterable, Awaitable
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.responses import FileResponse, StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, ChatMessage
import asyncio
async def wait_done(fn, event):
try:
await fn
except Exception as e:
print('error', e)
event.set()
finally:
event.set()
async def call_openai(question):
callback = AsyncIteratorCallbackHandler()
model = ChatOpenAI(
openai_api_key= os.environ["OPENAI_API_KEY"],
streaming=True,
callbacks=[callback]
)
print('question', question)
coroutine = wait_done(model.agenerate(messages=[[HumanMessage(content=question)]]), callback.done)
task = asyncio.create_task(coroutine)
print('task', task)
print('coroutine', callback.aiter())
async for token in callback.aiter():
yield f"{token}"
await task
app = FastAPI()
@app.get("/")
async def homepage():
return FileResponse('static/index.html')
@app.post("/ask")
def ask(body: dict):
print('body', body)
# return call_openai(body['question'])
return StreamingResponse(call_openai(body['question']), media_type="text/event-stream")
if __name__ == "__main__":
uvicorn.run(host="127.0.0.1", port=8888, app=app)
```
run it:
(venv) (base) zhanglei@zhangleideMacBook-Pro chatbot % python server.py
INFO: Started server process [46402]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8888 (Press CTRL+C to quit)
INFO: 127.0.0.1:53273 - "GET / HTTP/1.1" 200 OK
body {'question': '你好'}
INFO: 127.0.0.1:53273 - "POST /ask HTTP/1.1" 200 OK
question 你好
question 你好
task <Task pending name='Task-6' coro=<wait_done() running at /Users/zhanglei/Desktop/github/chatbot/server.py:21>>
coroutine <async_generator object AsyncIteratorCallbackHandler.aiter at 0x117158e40>
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
### Suggestion:
_No response_ | Issue: <openai APIConnectionError> | https://api.github.com/repos/langchain-ai/langchain/issues/13247/comments | 4 | 2023-11-12T04:30:51Z | 2024-02-18T16:05:36Z | https://github.com/langchain-ai/langchain/issues/13247 | 1,989,233,891 | 13,247 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
when chain_type='stuff', normal
when
chain_type = 'map_reduce', error:
1 validation error for RetrievalQA
question_prompt
extra fields not permitted (type=value_error.extra)
### Suggestion:
_No response_ | when i use map_reduce type, error appear | https://api.github.com/repos/langchain-ai/langchain/issues/13246/comments | 3 | 2023-11-12T02:00:07Z | 2024-02-18T16:05:41Z | https://github.com/langchain-ai/langchain/issues/13246 | 1,989,197,947 | 13,246 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/12799
<div type='discussions-op-text'>
<sup>Originally posted by **younes-io** November 2, 2023</sup>
I have an `NotImplementedError: ` error when I run this code:
```python
embeddings = OpenAIEmbeddings(deployment=embedding_model, chunk_size=1)
docsearch = OpenSearchVectorSearch(index_name=index_company_docs, embedding_function=embeddings,opensearch_url=opensearch_url, http_auth=('user', auth))
doc_retriever = docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
qa = RetrievalQAWithSourcesChain.from_chain_type(
memory=memory,
llm=llm,
chain_type="stuff", # See other types of chains here
retriever=doc_retriever,
return_source_documents=True,
verbose=True,
chain_type_kwargs=chain_type_kwargs,
)
response = qa({"question": "When was the company founded?"})
```</div> | I get a `NotImplementedError` when I use `docsearch.as_retriever` with `similarity_score_threshold` | https://api.github.com/repos/langchain-ai/langchain/issues/13242/comments | 6 | 2023-11-11T17:04:13Z | 2024-02-19T16:07:05Z | https://github.com/langchain-ai/langchain/issues/13242 | 1,989,042,216 | 13,242 |
[
"hwchase17",
"langchain"
]
| ### System Info
Macos: 13.4.1 (apple silicon M1)
python version 3.10.13
relevant packages:
langchain 0.0.334
pydantic 1.10.13
pydantic_core 2.10.1
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/18d2UIFd3LHwkD3ml6_OydetvytKNnoBW?usp=sharing
1. `from langchain.tools import tool`
2. define a simple structured function
3. Use `StructuredTool.from_function(simple_function)`
### Expected behavior
I expected that it would either work, or tell me that the @tool decorator is not meant to be used with StructuredTool. This is not specified anywhere in the docs or code. | @tool decorator for StructuredTool.from_function doesn't fill in the `__name__` attribute correctly | https://api.github.com/repos/langchain-ai/langchain/issues/13241/comments | 3 | 2023-11-11T11:37:37Z | 2023-11-11T12:14:23Z | https://github.com/langchain-ai/langchain/issues/13241 | 1,988,907,431 | 13,241 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.334
python: 3.11.6
weaviate-client: 3.25.3
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to implement the WeaviateHybridSearchRetriever to retrieve documents from Weaviate. My schema indicates the document ID is stored in the _id field based on the shardingConfig.
When setting up the retriever, I included _id in the attributes list:
````
hybrid_retriever = WeaviateHybridSearchRetriever(
attributes=["_id", "aliases", "categoryid", "name", "page_content", "ticker"]
)
````
However, when I try to access _id on the returned Document objects, I get an error that _id is not found.
For example:
````
results = hybrid_retriever.get_relevant_documents(query="some query")
print(results[0]._id) # Error!_id not found
````
I have tried variations like id, document_id instead of _id but still cannot seem to access the document ID field.
Any suggestions on what I am missing or doing wrong when trying to retrieve the document ID from the Weaviate results using the _id field specified in the schema?
Let me know if any other details would be helpful in troubleshooting this issue!
**Schema Details**
````
{
"classes":[
{
"class":"Category_taxonomy",
"invertedIndexConfig":{
"bm25":{
"b":0.75,
"k1":1.2
},
"cleanupIntervalSeconds":60,
"stopwords":{
"additions":"None",
"preset":"en",
"removals":"None"
}
},
"moduleConfig":{
"text2vec-openai":{
"baseURL":"https://api.openai.com",
"model":"ada",
"modelVersion":"002",
"type":"text",
"vectorizeClassName":true
}
},
"multiTenancyConfig":{
"enabled":false
},
"properties":[
{
"dataType":[
"text"
],
"description":"Content of the page",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"page_content",
"tokenization":"word"
},
{
"dataType":[
"number"
],
"description":"Identifier for the category",
"indexFilterable":true,
"indexSearchable":false,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"categoryid"
},
{
"dataType":[
"text"
],
"description":"Ticker symbol",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"ticker",
"tokenization":"word"
},
{
"dataType":[
"text"
],
"description":"Name of the entity",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"name",
"tokenization":"word"
},
{
"dataType":[
"text"
],
"description":"Aliases for the entity",
"indexFilterable":true,
"indexSearchable":true,
"moduleConfig":{
"text2vec-openai":{
"skip":false,
"vectorizePropertyName":false
}
},
"name":"aliases",
"tokenization":"word"
}
],
"replicationConfig":{
"factor":1
},
"shardingConfig":{
"virtualPerPhysical":128,
"desiredCount":1,
"actualCount":1,
"desiredVirtualCount":128,
"actualVirtualCount":128,
"key":"_id",
"strategy":"hash",
"function":"murmur3"
},
"vectorIndexConfig":{
"skip":false,
"cleanupIntervalSeconds":300,
"maxConnections":64,
"efConstruction":128,
"ef":-1,
"dynamicEfMin":100,
"dynamicEfMax":500,
"dynamicEfFactor":8,
"vectorCacheMaxObjects":1000000000000,
"flatSearchCutoff":40000,
"distance":"cosine",
"pq":{
"enabled":false,
"bitCompression":false,
"segments":0,
"centroids":256,
"trainingLimit":100000,
"encoder":{
"type":"kmeans",
"distribution":"log-normal"
}
}
},
"vectorIndexType":"hnsw",
"vectorizer":"text2vec-openai"
}
]
}
````
**Example Document**
````
{
"class": "Category_taxonomy",
"creationTimeUnix": 1699553747601,
"id": "ad092eb1-e4a6-4d93-a7d2-c507c33c3837",
"lastUpdateTimeUnix": 1699553747601,
"properties": {
"aliases": "Binance Coin, Binance Smart Chain",
"categoryid": 569,
"name": "BNB",
"page_content": "ticker: bnb\nname: BNB\naliases: Binance Coin, Binance Smart Chain",
"ticker": "bnb"
},
"vectorWeights": null
}
````
Example Search Result
````
{
"status":"success",
"results":[
{
"page_content":"ticker: bnb\nname: BNB\naliases: Binance Coin, Binance Smart Chain",
"metadata":{
"_additional":{
"explainScore":"(vector) [-0.0067740963 -0.03091735 0.00511335 0.0016186031 -0.016120477 0.017543973 -0.0072548385 -0.023063144 0.015246399 -0.0020884196]... \n(hybrid) Document ad092eb1-e4a6-4d93-a7d2-c507c33c3837 contributed 0.00819672131147541 to the score",
"score":"0.008196721"
},
"aliases":"Binance Coin, Binance Smart Chain",
"categoryid":569,
"name":"BNB",
"ticker":"bnb"
},
"type":"Document"
}
]
}
````
App Code
````
# Prepare global variables
WEAVIATE_URL = os.getenv('WEAVIATE_URL')
WEAVIATE_API_KEY = os.getenv('WEAVIATE_API_KEY')
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
INDEX_NAME = "Category_taxonomy"
TEXT_KEY = "page_content"
# Dependency provider function for Weaviate client
def get_weaviate_vectorstore():
# Initialize the Weaviate client with API key authentication
client = weaviate.Client(
url=WEAVIATE_URL,
auth_client_secret=weaviate.AuthApiKey(WEAVIATE_API_KEY),
additional_headers={
"X-Openai-Api-Key": OPENAI_API_KEY,
}
)
# Initialize embeddings with a specified model
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model='text-embedding-ada-002')
# Initialize vector store with attributes and schema
vectorstore = Weaviate(
client=client,
index_name=INDEX_NAME,
text_key=TEXT_KEY,
embedding=embeddings,
attributes=["aliases", "categoryid", "name", "page_content", "ticker"],
by_text=False
)
return client, vectorstore
def get_weaviate_hybrid_retriever(k: int = 5):
# Directly call the function to get the client and vectorstore
client, vectorstore = get_weaviate_vectorstore()
# Instantiate the retriever with the settings from the vectorstore
hybrid_retriever = WeaviateHybridSearchRetriever(
client=client,
index_name=INDEX_NAME,
text_key=TEXT_KEY,
attributes=["aliases", "categoryid", "name", "page_content", "ticker"],
k=k,
create_schema_if_missing=True
)
return hybrid_retriever
async def parse_query_params(request: Request) -> Dict[str, List[Any]]:
parsed_values = defaultdict(list)
for key, value in request.query_params.multi_items():
# Append the value for any key directly
parsed_values[key].append(value)
return parsed_values
@router.get("/hybrid_search_category_taxonomy/")
async def hybrid_search_category_taxonomy(parsed_values: Dict[str, List[Any]] = Depends(parse_query_params), query: Optional[str] = None, k: int = 5):
categoryids = parsed_values.get('categoryid', [])
tickers = parsed_values.get('ticker', [])
names = parsed_values.get('name', [])
aliasess = parsed_values.get('aliases', [])
# Use a partial function to pass 'k' to 'get_weaviate_hybrid_retriever'
retriever = get_weaviate_hybrid_retriever(k=k)
# Initialize the where_filter with an 'And' operator if there are any filters provided
logging.info(
f"query: {query}, "
f"categoryID: {categoryids}, "
f"ticker: {tickers}, "
f"name: {names}, "
f"aliases: {aliasess}, "
f"k: {k}"
)
# Adjustments to reference parameters from 'parse_query_params'
where_filter = {"operator": "And", "operands": []} if any([categoryids, tickers, names, aliasess]) else None
# Add filters for categoryid and ticker with the 'Equal' operator
if categoryids:
category_operands = [{"path": ["categoryid"], "operator": "Equal", "valueNumber": cid} for cid in categoryids]
if category_operands:
where_filter["operands"].append({"operator": "Or", "operands": category_operands})
if tickers:
ticker_operands = [{"path": ["ticker"], "operator": "Equal", "valueText": ticker} for ticker in tickers]
if ticker_operands:
where_filter["operands"].append({"operator": "Or", "operands": ticker_operands})
if names:
name_operands = [{"path": ["name"], "operator": "Equal", "valueText": name} for name in names]
if name_operands:
where_filter["operands"].append({"operator": "Or", "operands": name_operands})
if aliasess:
aliases_operands = [{"path": ["aliases"], "operator": "Equal", "valueText": aliases} for aliases in aliasess]
if aliases_operands:
where_filter["operands"].append({"operator": "Or", "operands": aliases_operands})
try:
# Format the results for the response
effective_query = " " if not query or not query.strip() else query
# Log the where_filter before fetching documents
logging.info(f"where_filter being used: {where_filter}")
# Fetch the relevant documents using the hybrid retriever instance
results = retriever.get_relevant_documents(effective_query, where_filter=where_filter, score=True)
# Format the results for the response
response_data = [vars(doc) for doc in results]
return {"status": "success", "results": response_data}
except Exception as e:
logger.error(f"Error while processing request: {str(e)}", exc_info=True)
raise HTTPException(detail=str(e), status_code=500)
````
### Expected behavior
**Expected Behavior**
When using the WeaviateHybridSearchRetriever for document retrieval, I expect that including the _id attribute in the attributes list will allow me to access the document ID of each retrieved document without any issues. Specifically, after setting up the WeaviateHybridSearchRetriever like so:
````
hybrid_retriever = WeaviateHybridSearchRetriever(
attributes=["_id", "aliases", "categoryid", "name", "page_content", "ticker"]
)
````
I anticipate that executing a query and attempting to print the _id of the first result should successfully return the unique identifier of the document, as per the below code snippet:
````
results = hybrid_retriever.get_relevant_documents(query="some query")
print(results[0]._id) # Expecting this to print the _id of the first result
````
In this scenario, my expectation is that the _id field, being specified in the attributes parameter, should be readily accessible in each Document object returned by the get_relevant_documents method. This behavior is crucial for my application as it relies on the unique document IDs for further processing and analysis of the retrieved data.
| Trouble Accessing Document ID in WeaviateHybridSearchRetriever Results | https://api.github.com/repos/langchain-ai/langchain/issues/13238/comments | 5 | 2023-11-11T04:39:30Z | 2024-05-15T16:07:19Z | https://github.com/langchain-ai/langchain/issues/13238 | 1,988,722,501 | 13,238 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm using colab
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
key_selector = SemanticSimilarityExampleSelector(vectorstore=few_shots,k=2)
few_shots_selector = SemanticSimilarityExampleSelector(vectorstore=few_shots,k=2)
key_selector.select_examples({"All": "who is in Bengaluru?"})
# It's return from few_shots_selector
### Expected behavior
key_selector.select_examples({"All": "who is in Bengaluru?"})
it's need to return from key_selector | If i assign two SemanticSimilarityExampleSelector with different data in different variable but it combines | https://api.github.com/repos/langchain-ai/langchain/issues/13234/comments | 4 | 2023-11-11T02:23:06Z | 2023-11-11T05:35:01Z | https://github.com/langchain-ai/langchain/issues/13234 | 1,988,660,318 | 13,234 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Why did I follow the tutorial below to generate vector library data, but I wanted to use ConversationalRetrievalChain.from_llm to answer my question, but couldn't answer the question? Or can I only answer with chain?
https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb?ref=blog.langchain.dev
code show as below:
def read_item(query=Body(..., embed=True)):
question=query
print(question)
embeddings = OpenAIEmbeddings()
vector_store_path = r"/mnt/PD/VS"
docsearch = Chroma(persist_directory=vector_store_path, embedding_function=embeddings)
# Build prompt
template = """Use the following pieces of context to answer the question at the end. \
If you don't know the answer, just say that you don't know, don't try to make up an answer. \
Use three sentences maximum. Keep the answer as concise as possible. Always say "thanks for asking!" \
at the end of the answer.
{context}
{chat_history}
Question: {question}
Helpful Answer:"""
# Build prompt
#QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
prompt = PromptTemplate(
input_variables=["chat_history", "context", "question"],
template=template,
)
store = InMemoryStore()
id_key = "doc_id"
# The retriever (empty to start)
retriever = MultiVectorRetriever(
vectorstore=docsearch,
docstore=store,
id_key=id_key,
)
llm = OpenAI(
temperature=0,max_tokens=1024,
model_name="gpt-4-1106-preview"
)
memory = ConversationKGMemory(llm=llm,memory_key='chat_history',return_messages=True,output_key='answer')
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
chain_type="stuff",
retriever=retriever,
memory=memory,
return_source_documents=True,
return_generated_question=True,
combine_docs_chain_kwargs={'prompt': prompt}
)
# 进行问答
result = qa({"question": question})
#print(qa.combine_documents_chain.memory)
return result
### Suggestion:
_No response_ | Issue: <ConversationalRetrievalChain.from_llm and partition_pdf > | https://api.github.com/repos/langchain-ai/langchain/issues/13233/comments | 3 | 2023-11-11T02:10:50Z | 2024-02-17T16:05:28Z | https://github.com/langchain-ai/langchain/issues/13233 | 1,988,652,580 | 13,233 |
[
"hwchase17",
"langchain"
]
| ### System Info
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain
from fastapi.encoders import jsonable_encoder
from langchain.chains.mapreduce import MapReduceChain
from time import monotonic
gpt_4_8k_max_tokens = 8000 #https://platform.openai.com/docs/models/gpt-4
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(model_name=model_name, chunk_size=gpt_4_8k_max_tokens, chunk_overlap=0)
verbose = False
# Initialize output dataframe with all the columns in the patient history class
column_names = list(PatientHistory.model_fields.keys())
df_AOAI_extracted_text = pd.DataFrame(columns=column_names)
# Create documents from the input text
texts = text_splitter.split_text(test_text)
docs = [Document(page_content=t) for t in texts]
print(f"Number of Documents {len(docs)}")
# Count the number of tokens in the document
num_tokens = num_tokens_from_string(test_text, model_name)
print(f"Number of Tokens {num_tokens}")
# call langchain summarizer to get the output for the given prompt
summaries = []
if num_tokens < gpt_4_8k_max_tokens:
#Stuffing is the simplest method, whereby you simply stuff all the related data into the prompt as context to pass to the language model. This is implemented in LangChain as the StuffDocumentsChain.
#This method is sutable for smaller piece of data.
chain = load_summarize_chain(llm, chain_type="stuff", prompt=TABLE_PROMPT, verbose=verbose)
else:
#MapReduceDocumentsChain is an advanced document processing technique that extends the capabilities of the conventional MapReduce framework.
#It goes beyond the typical MapReduce approach by executing a distinct prompt to consolidate the initial outputs.
#This method is designed to generate a thorough and cohesive summary or response that encompasses the entire document.
print('mapreduce')
chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=TABLE_PROMPT, combine_prompt=TABLE_PROMPT, verbose=verbose, return_intermediate_steps=False)
start_time = monotonic()
summary = chain.run(docs)
print(summary)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain
from fastapi.encoders import jsonable_encoder
from langchain.chains.mapreduce import MapReduceChain
from time import monotonic
gpt_4_8k_max_tokens = 8000 #https://platform.openai.com/docs/models/gpt-4
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(model_name=model_name, chunk_size=gpt_4_8k_max_tokens, chunk_overlap=0)
verbose = False
# Initialize output dataframe with all the columns in the patient history class
column_names = list(PatientHistory.model_fields.keys())
df_AOAI_extracted_text = pd.DataFrame(columns=column_names)
# Create documents from the input text
texts = text_splitter.split_text(test_text)
docs = [Document(page_content=t) for t in texts]
print(f"Number of Documents {len(docs)}")
# Count the number of tokens in the document
num_tokens = num_tokens_from_string(test_text, model_name)
print(f"Number of Tokens {num_tokens}")
# call langchain summarizer to get the output for the given prompt
summaries = []
if num_tokens < gpt_4_8k_max_tokens:
#Stuffing is the simplest method, whereby you simply stuff all the related data into the prompt as context to pass to the language model. This is implemented in LangChain as the StuffDocumentsChain.
#This method is sutable for smaller piece of data.
chain = load_summarize_chain(llm, chain_type="stuff", prompt=TABLE_PROMPT, verbose=verbose)
else:
#MapReduceDocumentsChain is an advanced document processing technique that extends the capabilities of the conventional MapReduce framework.
#It goes beyond the typical MapReduce approach by executing a distinct prompt to consolidate the initial outputs.
#This method is designed to generate a thorough and cohesive summary or response that encompasses the entire document.
print('mapreduce')
chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=TABLE_PROMPT, combine_prompt=TABLE_PROMPT, verbose=verbose, return_intermediate_steps=False)
start_time = monotonic()
summary = chain.run(docs)
print(summary)
### Expected behavior
Should go through all docs and provide the summary | load_summarize_chain with map_reduce error : InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 13516 tokens. Please reduce the length of the messages. | https://api.github.com/repos/langchain-ai/langchain/issues/13230/comments | 3 | 2023-11-10T23:05:46Z | 2024-02-16T16:05:46Z | https://github.com/langchain-ai/langchain/issues/13230 | 1,988,539,254 | 13,230 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi: I have tried several strategies to implement map reduce summarization using Azure OpenAi and Langchain . My model is "gpt-35-turbo-16k".
I have tried several experiments but always get to the same warning:
from langchain.schema.document import Document
from langchain.chains.mapreduce import MapReduceChain
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import TextLoader
llm_summary = AzureChatOpenAI(
openai_api_base= azure_api_base,
openai_api_version=azure_openai_api_version,
deployment_name=azure_deployment_name,
openai_api_key=azure_openai_api_key,
openai_api_type= azure_api_type,
model_name=azure_model_name,
temperature=azure_model_temperature
)
text="""The ReduceDocumentsChain handles taking the document mapping results and reducing them into a single output.\
It wraps a generic CombineDocumentsChain (like StuffDocumentsChain) but adds the ability to collapse documents before passing it
to the CombineDocumentsChain if their cumulative size exceeds token_max. In this example, we can actually re-use our chain for combining
our docs to also collapse our docs."""
text1=""" You can continue with your English studies and never use Inversion in sentences. That’s perfectly okay. However, if you are preparing for a Cambridge or IELTS exam or other exams or situations where you need to demonstrate an extensive use of English, you will be expected to know about Inversion.
Let’s start with why and when. After all, if you don’t know why we use Inversion, you won’t know when to use it.
WHY & WHEN do we use INVERSION?
Inversion is mainly used for EMPHASIS. The expressions used (never, rarely, no sooner, only then, etc.) have much more impact when used at the beginning of a sentence than the more common pronoun subject, especially as most of them are negative.
Negatives are more dramatic. Consider negative contractions: don’t, won’t, can’t, haven’t, etc. They usually have strong stress in English whilst positive contractions: I’m, he’ll, she’s, we’ve, I’d, etc. usually have weak stress.
"""
doc= [Document(page_content=text1)]
chain = load_summarize_chain(llm_summary, chain_type="map_reduce") #chain_type="map_reduce"
chain.run(doc)`
and Strategy 2 with text_splitter:
`from langchain import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=5000, chunk_overlap=50)
chunks = text_splitter.create_documents([text1])
chain = load_summarize_chain(
llm_summary,
chain_type='map_reduce',
verbose=False
)
summary = chain.run(chunks)
summary
I always get the same output:
<img width="1473" alt="image" src="https://github.com/langchain-ai/langchain/assets/7675634/fc871a72-4f5f-43ff-9725-52a718ebaeac">
I have some questions:
1) How to fix this warning?
2) Can I trust in the output when the model is not found?
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run this chunks of code in any notebook
### Expected behavior
I want to fix this warning helping to langchain to find the model. | Warning: model not found. Using cl100k_base encoding. with Azure Openai and load_summarize_chain when I am trying to implement map_reduce | https://api.github.com/repos/langchain-ai/langchain/issues/13224/comments | 9 | 2023-11-10T20:00:32Z | 2024-05-31T17:36:59Z | https://github.com/langchain-ai/langchain/issues/13224 | 1,988,305,077 | 13,224 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.332
Python 3.10.12
Platform: GCP
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce the problem:
```
from google.cloud import aiplatform
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores.matching_engine import MatchingEngine
embeddings = VertexAIEmbeddings()
embeddings
vector_store = MatchingEngine.from_components(
index_id=INDEX,
region=REGION,
embedding=embeddings,
project_id=PROJECT_ID,
endpoint_id=ENDPOINT,
gcs_bucket_name=DOCS_EMBEDDING)
vector_store.similarity_search('hello world', k=8)
```
Traceback:
```Traceback (most recent call last):
File "/opt/conda/envs/classifier/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3526, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-c6115207e7f5>", line 1, in <module>
relevant_documentation = vector_store.similarity_search('hello world', k=8)
File "/opt/conda/envs/classifier/lib/python3.10/site-packages/langchain/vectorstores/matching_engine.py", line 291, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/opt/conda/envs/classifier/lib/python3.10/site-packages/langchain/vectorstores/matching_engine.py", line 202, in similarity_search_with_score
return self.similarity_search_by_vector_with_score(
File "/opt/conda/envs/classifier/lib/python3.10/site-packages/langchain/vectorstores/matching_engine.py", line 234, in similarity_search_by_vector_with_score
if self.endpoint._public_match_client:
AttributeError: 'MatchingEngineIndexEndpoint' object has no attribute '_public_match_client'
```
**Expected Behavior**:
No Error
**Analysis**:
The newest changes in https://github.com/langchain-ai/langchain/pull/10056 added the following to matching engine. [Source](https://github.com/langchain-ai/langchain/blob/869df62736f9084864ab907e7ec5736dd19f05d4/libs/langchain/langchain/vectorstores/matching_engine.py#L234)
`if self.endpoint._public_match_client:`
However, in GCP's MatchingEngineIndexEndpoint Class, object _public_match_client does not get instantiated until the following check passes. [Source](https://github.com/googleapis/python-aiplatform/blob/fcf05cb6da15c83e91e6ce5f20ab3e6649983685/google/cloud/aiplatform/matching_engine/matching_engine_index_endpoint.py#L132-L133)
```
if self.public_endpoint_domain_name:
self._public_match_client = self._instantiate_public_match_client()
```
Therefore I think the if self.endpoint._public_match_client check may be preventing all Private Network users from using vector search with matching Engine. | Vector Search on GCP Private Network gives AttributeError: 'MatchingEngineIndexEndpoint' object has no attribute '_public_match_client' | https://api.github.com/repos/langchain-ai/langchain/issues/13218/comments | 3 | 2023-11-10T19:10:23Z | 2024-02-16T16:05:51Z | https://github.com/langchain-ai/langchain/issues/13218 | 1,988,218,361 | 13,218 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello All,
I have just installed the helm chart with some small additions to the basic values.yaml and the "langchain-langsmith-backend" contain keeps breaking with the following error, Has anyone had this before?
INFO: Started server process [1]
--
Fri, Nov 10 2023 2:41:24 pm | INFO: Waiting for application startup.
Fri, Nov 10 2023 2:41:24 pm | INFO: Application startup complete.
Fri, Nov 10 2023 2:41:24 pm | INFO: Uvicorn running on http://0.0.0.0:1984 (Press CTRL+C to quit)
Fri, Nov 10 2023 2:42:18 pm | INFO: Shutting down
Fri, Nov 10 2023 2:42:18 pm | INFO: Waiting for application shutdown.
Fri, Nov 10 2023 2:42:19 pm | ERROR:root:Error closing httpx client for clickhouse name '_httpx_client' is not defined
Fri, Nov 10 2023 2:42:19 pm | Traceback (most recent call last):
Fri, Nov 10 2023 2:42:19 pm | File "/code/smith-backend/app/main.py", line 140, in shutdown_event
Fri, Nov 10 2023 2:42:19 pm | File "/code/lc_database/lc_database/clickhouse.py", line 36, in close_clickhouse_client
Fri, Nov 10 2023 2:42:19 pm | await _httpx_client.aclose()
Fri, Nov 10 2023 2:42:19 pm | ^^^^^^^^^^^^^
Fri, Nov 10 2023 2:42:19 pm | NameError: name '_httpx_client' is not defined
Fri, Nov 10 2023 2:42:19 pm | INFO: Application shutdown complete.
Fri, Nov 10 2023 2:42:19 pm | INFO: Finished server process [1]
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
helm repo add langchain https://langchain-ai.github.io/helm/
helm install langchain/langsmith . --values values.yaml --namespace it-dev-langchain
Running this on an on prem Rancher cluster.
### Expected behavior
The contain runs normally. | NameError: name '_httpx_client' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/13204/comments | 4 | 2023-11-10T14:58:08Z | 2024-02-16T16:05:56Z | https://github.com/langchain-ai/langchain/issues/13204 | 1,987,788,713 | 13,204 |
[
"hwchase17",
"langchain"
]
| ### System Info
I tried to use `ChatVertexAI` as a replacement for `ChatOpenAI` as the latter is quite slow these days.
I have this code for Chat OpenAI
```
template_string = """ # some explanation
give me suggestions in JSON format where the suggestions are a list of dictionaries with the following keys:
- before:
- after:
- reason:
"""
chat = ChatOpenAI(
temperature=0.0,
openai_api_key=self.session_info.openai_api_key,
model=llm_model,
)
prompt_template = ChatPromptTemplate.from_template(template_string)
service_messages = prompt_template.format_messages(
text=self.text
)
response = chat(service_messages)
info = json.loads(response.content)
```
Then I print `response.content`, here is the format as expected:
```
[{"before": ..., "after": ...}, ...]
```
Then I use `ChatVertexAI` with the same `template_string`
```
chat = ChatVertexAI(
temperature=0.0,
google_api_key=google_api_key,
model="codechat-bison",
max_output_tokens=2048,
)
prompt_template = ChatPromptTemplate.from_template(template_string)
service_messages = prompt_template.format_messages(
text=self.text
)
response = chat(service_messages)
```
I print the `response.content` coming from Vertex AI and the output format is encapsulated by
```(3 backticks)JSON [{...}] (3 backticks)```. To make `response.content` the same, I used the following code
```
json_match = re.search(r'```JSON(.*?)\n```', response.content, re.DOTALL)
if json_match:
json_data = json_match.group(1).strip()
info = json.loads(json_data)
else:
info = {}
print("No JSON data found in the input string.")
```
I am not sure of the potential failures of this solution, but unless this is intentional, it might be better to make it consistent.
I will try to use the other LLMs and see how others work.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I shared my codes above, but the steps are:
1. setting up openai_api_key
2. setting up gcp environment
3. running ChatOpenAI
4. running ChatVertexAI
5. comparing the responses (i.e. `response.content`)
### Expected behavior
My expectation is to obtain the outputs of different models the same way. i.e., `json.loads` should output a list of dictionary in each case. Users should not have to deal with `regex` to obtain the data in the expected format. | Inconsistent output format of ChatVertexAI compared to ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/13202/comments | 2 | 2023-11-10T14:42:40Z | 2024-02-09T18:37:05Z | https://github.com/langchain-ai/langchain/issues/13202 | 1,987,762,764 | 13,202 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.333
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
The current mypy crash with `make lint` and propose to install the last master version.
So, after a `git clone` and `poetry install --with dev,test,lint`, run
```bash
pip install git+https://github.com/python/mypy.git
make lint
```
**Mypy found 834 errors in 238 files**
### Expected behavior
0 errors | More than 800 errors detected with the latest version of mypy | https://api.github.com/repos/langchain-ai/langchain/issues/13199/comments | 4 | 2023-11-10T14:26:34Z | 2024-02-17T16:05:33Z | https://github.com/langchain-ai/langchain/issues/13199 | 1,987,735,492 | 13,199 |
[
"hwchase17",
"langchain"
]
| ### System Info
The text-embedding-ada-002 OpenAI embedding model on Azure OpenAI has a maximum batch size of 16. MlflowAIGatewayEmbeddings has a hard-coded batch size of 20 which results in it being unusable with Azure OpenAI's text-embedding-ada-002.
The best fix would be to allow a configurable batchsize as an argument to MlflowAIGatewayEmbeddings,
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a gateway route to the text-embedding-ada-002 on azure openai
```
ROUTE_NAME = "azure-ada-002"
# Try to delete the route if it already exists
try:
delete_route(ROUTE_NAME)
print("Route deleted")
except:
print("Route does not exist, creating..")
create_route(
name=ROUTE_NAME,
route_type="llm/v1/embeddings",
model = {
"name" : "text-embedding-ada-002",
"provider" : "openai",
"openai_config" : {
"openai_api_type" : azure_openai_type,
"openai_api_key": azure_openai_key,
"openai_deployment_name": "ada-embed-v1",
"openai_api_base": azure_openai_base,
"openai_api_version": "2023-05-15"
}
}
)
```
2. Initialize the `MlflowAIGatewayEmbeddings` and try to embed more than 16 documents
```
from langchain.embeddings import MlflowAIGatewayEmbeddings
import pandas as pd
azure_ada = MlflowAIGatewayEmbeddings(route="azure-ada-002")
test_strings = [
"aaaa" for i in range(20)
]
azure_ada.embed_documents(test_strings)
```
3. Observe the error
```
HTTPError: 400 Client Error: Bad Request for url: https://ohio.cloud.databricks.com/gateway/azure-ada-002/invocations. Response text: {
"error": {
"message": "Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
```
### Expected behavior
I should be able to use the `MlflowAIGatewayEmbeddings` class with Azure OpenAI. | MlflowAIGatewayEmbeddings : Default Batch size incompatible with Azure OpenAI text-embedding-ada-002 | https://api.github.com/repos/langchain-ai/langchain/issues/13197/comments | 8 | 2023-11-10T13:52:34Z | 2024-05-15T16:01:41Z | https://github.com/langchain-ai/langchain/issues/13197 | 1,987,672,041 | 13,197 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
i have the following code:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
# load the document and split it into chunks
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# hfemb = HuggingFaceEmbeddings()
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
```
i must use retrieval mechanism. and i am using Turkish language data. please change my code
### Suggestion:
_No response_ | langchain Retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/13196/comments | 4 | 2023-11-10T12:47:23Z | 2024-02-22T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13196 | 1,987,564,233 | 13,196 |
[
"hwchase17",
"langchain"
]
| ### System Info
Latest langchain, Mac
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi Community,
I'm trying to build a chain with Chroma database as context, AzureOpenAI embeddings and AzureOpenAI GPT model.
The following libs work fine for me and doing their work
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
But I can't figure out how to build the chain itself and what kind of chain I should use.
Could you kindly provide any suggestions?
Thanks, Artem.
### Expected behavior
Desired behaviour:
- user asks question
- code searches it in Chroma using AzureOpenAI embeddings to transform text
- In prompt for AzureOpenAI GPT there is context with info retrieved from Chroma and current chat history with Human/AI questions and answers
- Also there should be a custom prompt template used
- Also I want to wrap it in Flask server, so users chat history could be stored in session with username as key until it's forcibly cleared | Example for chat chain with documents retrieval and history capability | https://api.github.com/repos/langchain-ai/langchain/issues/13195/comments | 3 | 2023-11-10T12:23:44Z | 2024-02-16T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13195 | 1,987,527,709 | 13,195 |
[
"hwchase17",
"langchain"
]
| ### System Info
I just updated langchain to newest version and my Agent is not working anymore.
Tool structure :
```
class Data_Retriever(BaseModel):
db : Any
class Config:
extra = Extra.forbid
def run(self,request) -> str:
data = self.db.get(request)
return data
```
When Agent uses the above tool, it will always stop at "Action Input" step :
```
> Entering new AgentExecutor chain...
AI: Thought: Do I need to use a tool? Yes
Action: Data_Retriever
Action Input: DATA
> Finished chain.
```
Does anyone know how to fix this ? I'm using langchain==0.0.332
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```from langchain.tools import BaseTool, StructuredTool, Tool, tool
from langchain.agents import AgentType, initialize_agent,load_tools
tools = [
Tool(
name="Music Search",
func=lambda x: "'All I Want For Christmas Is You' by Mariah Carey.", # Mock Function
description="A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'",
),
]
agent = initialize_agent(
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
tools=tools,
llm=llm,
verbose=True)
agent.run(input = "Random song")
```
### Expected behavior
The output should be All I Want For Christmas Is You' by Mariah Carey.
My agent stopped once it hit the action input step.
> Entering new AgentExecutor chain...
AI: Thought: Do I need to use a tool? Yes
Action: Music Search
Action Input: "Random song"
> Finished chain. | The Agent is not using Custom Tools. | https://api.github.com/repos/langchain-ai/langchain/issues/13194/comments | 17 | 2023-11-10T12:23:33Z | 2024-07-25T19:04:06Z | https://github.com/langchain-ai/langchain/issues/13194 | 1,987,527,490 | 13,194 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
my code:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
# load the document and split it into chunks
loader = TextLoader("nazim.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# hfemb = HuggingFaceEmbeddings()
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "When was Nâzım Hikmet entered the Naval School?"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
```
my nazim.txt:
```
Nâzım Hikmet was born on 21 November 1901 in Salonica, but his birth was certified as 15 January 1902 in order to prevent his age appearing older by a year older on account of 40 days. He died on 3 June 1963 in Moscow.
His paternal grandfather Nâzım Pasha the governor was a liberal and a poet. He belonged to Mevlevi Mysticism. He was a close friend to Mithat Pasha. His father, Hikmet was graduated from Mekteb-i Sultani (later the Galatasaray Lycée). He firstly dealt with trade but when he had been unsuccessful in that area, he became a civil servant at the Ministry of Foreign Affairs (Kalem-i Ecnebiye).
His mother was the daughter of Enver Pasha who was a linguist and an educator. Celile Hanım spoke French, played the piano and painted pictures as well as an artist.
His family environment, with its progressive thoughts, had tremendous effect on the education of Nâzım Hikmet. He was first trained at a school where the language of instruction was French and later attended the Numune Mektep (Taş Mektep) in Göztepe in Istanbul. After graduating primary school, he attended the prep class of the Mekteb-i Sultani with his friend, Vâlâ Nurettin. The year after, because of the financial strait in which his family found themselves, he changed his school to the Nişantaşı Junior High School.
In this period, with the influence of his grandfather, he started to write poetry. During a family meeting, Cemal Pasha, the Minister of the Navy, evinced to very much moved when Nâzım Hikmet read a heroic poem he had written about sailors. Cemal Pasha offered to send him to the Heybeliada Naval School and after the acceptance of the offer by the family, he helped Nâzım enter this school.
Nâzım Hikmet entered the Naval School in 1917 and graduated thence in 1919 and started to work as intern deck officer on the Hamidiye Cruiser. But in winter of the same year his pleurisy illness repeated. After a medical treatment period of nearly two months, during which he was under control of the head doctor of the Navy Hospital, Hakkı Şinasi Pasha, he was given permission to go home for two months. But he did not recover enough to return to work as a navy officer. By a Health Council Report he was discharged as unfit for duty in May 1920.
At this time, he was going to be known among Syllabist Poets as a young voice. Nâzım Hikmet admired Yahya Kemal who was his history and literature teacher and also his family friend, and showed him his poems for his critique. In 1920, the Alemdar Newspaper organised a contest where the jury consisted of famous poets. Nâzım was elected recipient of the award. Young masters, such as Faruk Nafiz, Yusuf Ziya, Orhan Seyfi talked about him with great admiration.
Istanbul had been under occupation and Nâzım Hikmet was writing resistance poems reflecting the ebullient patriotism. In the last days of 1920 the poem "Gençlik" (Youth) called the young generation to fight for the liberation of the country.
On 1 January 1921, with the help of illegal organisation which provided weapons to Mustafa Kemal, four poets (Faruk Nafiz,Yusuf Ziya, Nâzım Hikmet and Vâlâ Nurettin) secretly got on the Yeni Dünya (New World) Ship in Sirkeci. In order to gain passage to Ankara, one had to wait nearly 5 or 6 days at Inebolu. The passage was granted only to Nâzım Hikmet and Vâlâ Nurettin by Ankara.
During the days in Inebolu, they met young students coming from Germany who were waiting for permission to go to Ankara, like them. Among them there were Sadık Ahi ( later Mehmet Eti- Parliamentary of CHP),Vehbi(Prof. Vehbi Sarıdal) Nafi Atuf(Kansu-General Secretary of CHP). These were called as Spartans and they defended socialism and praised SSCB that was the first country which accepted Turkey's National Pact (Misak-ı Milli) Borders. These ideas were new for Nâzım and his friend Vâlâ Nureddin.
When they reached Ankara the first duty given to them was to write a poem to summon the youth of Istanbul to the national Struggle. They finished this three-page poem within three days. It was published by the Matbuat Müdürlüğü (Directory of Press) in four pages in 11.5 x 18 cm. format and served out in ten-thousand copies in March 1921. The impact of the poem was so great that the members of the National Assembly started to argue how to resolve the enthusiastic upheaval which was caused by the poem. Muhittin Birgen, the Director of Press, was criticised in the negative because of publishing and serving out of the poem.
It would have been a great problem to find jobs if the youth of Istanbul had come to Ankara. Discomforted by having been compelled to report to the National Assembly on the matter, Muhittin Birgen decided to transfer Nâzım Hikmet and Vâlâ Nureddin to the auspices of the Ministry of Education.
At this time, İsmail Fazıl Pasha, one of the relatives of Celile Hanım, summoned to the Assembly these two talented poets whose poem had caused such a stir, and introduced them to Mustafa Kemal Pasha.
```
QUESTİON:
My output should be one line and correct answer according to question "Nâzım Hikmet entered the Naval School in 1917". What should i change in my code?
| Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13192/comments | 4 | 2023-11-10T11:17:39Z | 2024-02-21T16:07:19Z | https://github.com/langchain-ai/langchain/issues/13192 | 1,987,429,747 | 13,192 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
from silly import no_ssl_verification
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
with no_ssl_verification():
# load the document and split it into chunks
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# hfemb = HuggingFaceEmbeddings()
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
```
i use this code but i get the following error:
ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
can you give me fixed code and explain?
| langchain & chroma - Basic Example | https://api.github.com/repos/langchain-ai/langchain/issues/13191/comments | 3 | 2023-11-10T10:46:38Z | 2024-02-16T16:06:21Z | https://github.com/langchain-ai/langchain/issues/13191 | 1,987,380,190 | 13,191 |
[
"hwchase17",
"langchain"
]
| ### System Info
newest version
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When loading a local model it keeps hallucinating a conversation, how to make it stop at "Human:" ? In the create_llm function you see two ways I tried, giving kwars and binding, but both did not work. Is the bind specifically made for LCEL?
Is there some other way to make the stopping work?
```
from langchain.llms import CTransformers
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
def create_llm(model_path='./models/mistral-7b-instruct-v0.1.Q4_K_M.gguf', model_type="mistral"):
llm = CTransformers(model=model_path, model_type=model_type,kwargs={'stop': ['Human:']})
llm.bind(stop=["Human:"])
return llm
def create_memory():
memory = ConversationBufferMemory(memory_key="history")
return memory
def create_memory_prompt():
template = """You are an AI chatbot having a conversation with a human. Answer his questions.
Previous conversation: {history}
Human: {human_input}
AI: """
prompt = PromptTemplate(input_variables=["history", "human_input"], template=template)
return prompt
def create_normal_chain(llm, prompt, memory):
llm_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
return llm_chain
chain = create_normal_chain(create_llm(), create_memory_prompt(), create_memory())
out = chain.invoke({"human_input" : "Hello"})
print(out)
```
### Expected behavior
Generation stops at "Human:". | Binding stop to a local llm does not work? | https://api.github.com/repos/langchain-ai/langchain/issues/13188/comments | 4 | 2023-11-10T10:31:00Z | 2024-02-09T16:04:41Z | https://github.com/langchain-ai/langchain/issues/13188 | 1,987,355,999 | 13,188 |
[
"hwchase17",
"langchain"
]
| ### System Info
cenos7
python3.9
langchai 0.0.333
Reading error PDF file here:
[llm.pdf](https://github.com/langchain-ai/langchain/files/13318006/llm.pdf)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from lxml import html
from pydantic import BaseModel
from typing import Any, Optional
from unstructured.partition.pdf import partition_pdf
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
import uuid,os
from langchain.vectorstores import Chroma
from langchain.storage import InMemoryStore
from langchain.schema.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
os.environ["OPENAI_API_KEY"] = ''
path=rf"/mnt/PD/PDF/"
# Get elements
raw_pdf_elements = partition_pdf(
filename=path + "llm.pdf",
# Unstructured first finds embedded image blocks
extract_images_in_pdf=False,
# Use layout model (YOLOX) to get bounding boxes (for tables) and find titles
# Titles are any sub-section of the document
infer_table_structure=True,
# Post processing to aggregate text once we have the title
chunking_strategy="by_title",
# Chunking params to aggregate text blocks
# Attempt to create a new chunk 3800 chars
# Attempt to keep chunks > 2000 chars
max_characters=4000,
new_after_n_chars=3800,
combine_text_under_n_chars=2000,
image_output_dir_path=path,
)
### Expected behavior
https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb?ref=blog.langchain.dev
When testing langchain's Semi_Structured_RAG.ipynb, I tested the PDF files used. Some PDFs were read normally, but some PDFs reported errors. The error log is as follows:
File "/mnt/PD/test.py", line 20, in <module>
raw_pdf_elements = partition_pdf(
File "/usr/local/lib/python3.9/site-packages/unstructured/documents/elements.py", line 372, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/file_utils/filetype.py", line 591, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/file_utils/filetype.py", line 546, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/chunking/title.py", line 297, in wrapper
elements = func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 182, in partition_pdf
return partition_pdf_or_image(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 312, in partition_pdf_or_image
_layout_elements = _partition_pdf_or_image_local(
File "/usr/local/lib/python3.9/site-packages/unstructured/utils.py", line 179, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 413, in _partition_pdf_or_image_local
final_layout = process_file_with_ocr(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 170, in process_file_with_ocr
raise e
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 159, in process_file_with_ocr
merged_page_layout = supplement_page_layout_with_ocr(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 238, in supplement_page_layout_with_ocr
page_layout.elements[:] = supplement_element_with_table_extraction(
File "/usr/local/lib/python3.9/site-packages/unstructured/partition/ocr.py", line 274, in supplement_element_with_table_extraction
element.text_as_html = table_agent.predict(cropped_image, ocr_tokens=table_tokens)
File "/usr/local/lib/python3.9/site-packages/unstructured_inference/models/tables.py", line 54, in predict
return self.run_prediction(x, ocr_tokens=ocr_tokens)
File "/usr/local/lib/python3.9/site-packages/unstructured_inference/models/tables.py", line 191, in run_prediction
prediction = recognize(outputs_structure, x, tokens=ocr_tokens)[0]
IndexError: list index out of range | cookbook/Semi_Structured_RAG.ipynb ERROR:IndexError: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/13187/comments | 4 | 2023-11-10T09:59:01Z | 2024-02-21T16:07:24Z | https://github.com/langchain-ai/langchain/issues/13187 | 1,987,302,580 | 13,187 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
**My code:**
loader_pdf = PyMuPDFLoader("/Users/python/test_pdf.pdf")
doc_pdf = loader_pdf.load()
llm = ChatOpenAI(temperature=0)
chain = QAGenerationChain.from_llm(llm=llm)
print("pdf:\n",doc_pdf[3].page_content)
qa_pdf = chain.run(doc_pdf[3].page_content)

The PDF I am using is a Chinese data file,but the output is in English.
There are no relevant records in the document. I would like to know how to solve this problem or can I set a Prompt to solve it?
### Suggestion:
_No response_ | QAGenerationChain output in different languages | https://api.github.com/repos/langchain-ai/langchain/issues/13186/comments | 3 | 2023-11-10T09:53:25Z | 2024-02-16T16:06:31Z | https://github.com/langchain-ai/langchain/issues/13186 | 1,987,293,567 | 13,186 |
[
"hwchase17",
"langchain"
]
| ### System Info
<pre>
platform = macOS-10.16-x86_64-i386-64bit
version = 3.10.13 (main, Sep 11 2023, 08:39:02) [Clang 14.0.6 ]
langchain.version = 0.0.332
pydantic.version = 1.10.13
openai.version = 1.2.0
</pre>
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to initialize an instance of AzureChatOpenAI with a custom http_client. Following is the code I'm using -
<pre>
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
import httpx
messages = [
SystemMessage(content="You are a helpful assistant that translates English to Spanish"),
HumanMessage(content="Good morning Vietnam!")
]
client = httpx.Client()
chat = AzureChatOpenAI(
openai_api_key="my-secret-key",
openai_api_version="2023-07-01-preview",
model="gpt-4",
deployment_name="my-deployment-name",
azure_endpoint="https://my-instance.openai.azure.com",
http_client=client
)
# AzureChatOpenAI.update_forward_refs()
chat(messages)
</pre>
This fails with the following pydantic config error:
<pre>
Traceback (most recent call last):
File "azure-custom.py", line 24, in <module>
chat = AzureChatOpenAI(
File "langchain-env/lib/python3.10/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1076, in pydantic.main.validate_model
File "pydantic/fields.py", line 860, in pydantic.fields.ModelField.validate
pydantic.errors.ConfigError: field "http_client" not yet prepared so type is still a ForwardRef, you might need to call AzureChatOpenAI.update_forward_refs().
</pre>
### Expected behavior
As per the [ChatOpenAI API Doc](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html), `http_client` seems to be a valid parameter to set in order to customize the httpx client in the background. But I'm unable to do that customization.
Adding `AzureChatOpenAI.update_forward_refs()` also does not solve this issue - the code raises the error even before hitting that line. | Setting a custom http_client fails with pydantic ConfigError | https://api.github.com/repos/langchain-ai/langchain/issues/13185/comments | 5 | 2023-11-10T09:36:13Z | 2023-11-27T18:50:16Z | https://github.com/langchain-ai/langchain/issues/13185 | 1,987,257,994 | 13,185 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hello guys,
I have to use a proxy to access Azure OpenAi because I'm using a VPN for my company.
However when I try to use the [](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research) , I block at fetching the pages because the code inside the langchain library does not use the proxy.
So I managed to make it work for GoogleSearchAPIWrapper by setting it in the os.environ but it does not work for the inside request and html_load call in the langchain library.

### Idea or request for content:
So I would like to have a proxy parameter when I set up a web_research_retriever = WebResearchRetriever.from_llm(
vectorstore=vectorstore, llm=llm, search=search
)
Or any other tips that can help me to dodge this proxy issue.
Thanks in advance | DOC: Setting proxy for the whole langchain library, especially web_research_retriever | https://api.github.com/repos/langchain-ai/langchain/issues/13180/comments | 4 | 2023-11-10T08:54:16Z | 2024-02-16T16:06:36Z | https://github.com/langchain-ai/langchain/issues/13180 | 1,987,191,884 | 13,180 |
[
"hwchase17",
"langchain"
]
| ### Feature request
As the title suggests, there is currently no built-in method to retrieve chunks linked through edges in graph-based structures.
This is especially relevant in cases where documents are not self-contained and chunks reference other chunks, either within the same document or in other documents. These "children" chunks are often necessary to properly answer user queries.
Specifically, I would like to have the ability to perform the following within the same chain:
```
1. Get the user query
2. Embed it
3. Perform approximate nearest neighbor (ANN) lookup and retrieve the most relevant chunk
4. Given the ID of the chunk, expand the set of matches by navigating the graph
5. Retrieve all of these to provide to the Large Language Model (LLM)
6. Send everything to the LLM to answer the user query
```
To further contextualize my suggestion, I have been considering the following approach to begin with:
```python
import networkx as nx
GraphType = Union[nx.DiGraph, nx.Graph]
class NXInMemoryGraphRetriever(BaseRetriever):
def __init__(
self,
bootstrap_retriever: BaseRetriever,
graph: GraphType,
navigate_graph_function: Callable[[GraphType, str], List[Document]],
):
self._graph = graph
self._navigate_graph_fn = navigate_graph_function
...
def get_relevant_documents(
self,
query: str,
*,
callbacks: Optional[List[Callable]] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
run_name: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
base_documents = self.bootstrap_retriever.get_relevant_documents(...)
expanded_set_of_docs = set(base_documents)
for document in base_documents:
expanded_set_of_docs.update(
self.navigate_graph_function(self.graph, document.metadata["id"])
)
...
return list(expanded_set_of_documents)
# Example of callable function to pass
def get_everything_one_hop_away(graph: GraphType, source_doc_id: str) -> List[Documents]:
related_docs = []
for source_doc, related_doc in nx.dfs_edges(
graph, source=source_doc_id, depth_limit=1
):
related_docs.add(Document(...)) # From the related docs id.
return related_docs
```
Additionally, expanding this to support Neo4J or similar should be rather straightforward. Don't pass the graph but the DB driver and the `navigate_graph_function` with a cypher query string/literal.
### Motivation
Use langchain to navigate graph related datasets easily.
### Your contribution
Happy to send a PR your way :) | Document retriever from Knowledge-graph based source | https://api.github.com/repos/langchain-ai/langchain/issues/13179/comments | 1 | 2023-11-10T08:44:39Z | 2023-11-13T09:08:19Z | https://github.com/langchain-ai/langchain/issues/13179 | 1,987,177,608 | 13,179 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Support OpenAI seed and fingerprint parameters to get more consistent outputs for the same inputs and model version.
https://cookbook.openai.com/examples/deterministic_outputs_with_the_seed_parameter#seed
https://platform.openai.com/docs/api-reference/chat/create#chat-create-seed
### Motivation
Running unit tests that are dependent on LLM responses often leads to random flakiness.
### Your contribution
PR | Support OpenAI seed for deterministic outputs | https://api.github.com/repos/langchain-ai/langchain/issues/13177/comments | 6 | 2023-11-10T08:29:51Z | 2024-08-08T16:06:29Z | https://github.com/langchain-ai/langchain/issues/13177 | 1,987,156,699 | 13,177 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
`
# Import necessary libraries
from llama_index import (
LangchainEmbedding,
)
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.vector_stores import ChromaVectorStore
from llama_index.storage.storage_context import StorageContext
import chromadb
# Create client and a new collection
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")
hfemb = HuggingFaceEmbeddings()
embed_model = LangchainEmbedding(hfemb)
documents = SimpleDirectoryReader("./docs/examples/data/paul_graham").load_data()
# Set up ChromaVectorStore and load in data
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(llm=None, embed_model=embed_model)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context, service_context=service_context, show_progress=True
)
# Query Data
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
`
I used this code with llama-index retrieval mechanism for question&answer with documents. How can i write this code's LangChain form? I should'nt use model. I retrieve answer from documents. And i should use HuggingFaceEmbeddings for embedding.
### Suggestion:
_No response_ | Simple Retrieval QA Example | https://api.github.com/repos/langchain-ai/langchain/issues/13176/comments | 5 | 2023-11-10T08:20:00Z | 2024-02-16T16:06:41Z | https://github.com/langchain-ai/langchain/issues/13176 | 1,987,140,842 | 13,176 |
[
"hwchase17",
"langchain"
]
| ### System Info
python version: 3.11
I'm trying to run the sampe code "LangChain: Q&A over Documents", but when I run the below cell, it reports below error
```
pip install --upgrade langchain
from llm_commons.langchain.btp_llm import ChatBTPOpenAI
from llm_commons.langchain.btp_llm import BTPOpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from IPython.display import display, Markdown
file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file, encoding='utf-8')
from langchain.indexes import VectorstoreIndexCreator
```
ImportError Traceback (most recent call last)
Cell In[4], line 1
----> 1 from langchain.indexes import VectorstoreIndexCreator
File /opt/conda/lib/python3.11/site-packages/langchain/indexes/__init__.py:17
1 """Code to support various indexing workflows.
2
3 Provides code to:
(...)
14 documents that were derived from parent documents by chunking.)
15 """
16 from langchain.indexes._api import IndexingResult, aindex, index
---> 17 from langchain.indexes._sql_record_manager import SQLRecordManager
18 from langchain.indexes.graph import GraphIndexCreator
19 from langchain.indexes.vectorstore import VectorstoreIndexCreator
File /opt/conda/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py:21
18 import uuid
19 from typing import Any, AsyncGenerator, Dict, Generator, List, Optional, Sequence, Union
---> 21 from sqlalchemy import (
22 URL,
23 Column,
24 Engine,
25 Float,
26 Index,
27 String,
28 UniqueConstraint,
29 and_,
30 create_engine,
31 delete,
32 select,
33 text,
34 )
35 from sqlalchemy.ext.asyncio import (
36 AsyncEngine,
37 AsyncSession,
38 async_sessionmaker,
39 create_async_engine,
40 )
41 from sqlalchemy.ext.declarative import declarative_base
ImportError: cannot import name 'URL' from 'sqlalchemy' (/opt/conda/lib/python3.11/site-packages/sqlalchemy/__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
python version: 3.11
I'm trying to run the sampe code "LangChain: Q&A over Documents", but when I run the below cell, it reports below error
```
pip install --upgrade langchain
from llm_commons.langchain.btp_llm import ChatBTPOpenAI
from llm_commons.langchain.btp_llm import BTPOpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from IPython.display import display, Markdown
file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file, encoding='utf-8')
from langchain.indexes import VectorstoreIndexCreator
```
ImportError Traceback (most recent call last)
Cell In[4], line 1
----> 1 from langchain.indexes import VectorstoreIndexCreator
File /opt/conda/lib/python3.11/site-packages/langchain/indexes/__init__.py:17
1 """Code to support various indexing workflows.
2
3 Provides code to:
(...)
14 documents that were derived from parent documents by chunking.)
15 """
16 from langchain.indexes._api import IndexingResult, aindex, index
---> 17 from langchain.indexes._sql_record_manager import SQLRecordManager
18 from langchain.indexes.graph import GraphIndexCreator
19 from langchain.indexes.vectorstore import VectorstoreIndexCreator
File /opt/conda/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py:21
18 import uuid
19 from typing import Any, AsyncGenerator, Dict, Generator, List, Optional, Sequence, Union
---> 21 from sqlalchemy import (
22 URL,
23 Column,
24 Engine,
25 Float,
26 Index,
27 String,
28 UniqueConstraint,
29 and_,
30 create_engine,
31 delete,
32 select,
33 text,
34 )
35 from sqlalchemy.ext.asyncio import (
36 AsyncEngine,
37 AsyncSession,
38 async_sessionmaker,
39 create_async_engine,
40 )
41 from sqlalchemy.ext.declarative import declarative_base
ImportError: cannot import name 'URL' from 'sqlalchemy' (/opt/conda/lib/python3.11/site-packages/sqlalchemy/__init__.py)
### Expected behavior
the import should work as expected

| from langchain.indexes import VectorstoreIndexCreator report errors "" | https://api.github.com/repos/langchain-ai/langchain/issues/13172/comments | 3 | 2023-11-10T06:47:43Z | 2024-02-16T16:06:46Z | https://github.com/langchain-ai/langchain/issues/13172 | 1,987,020,797 | 13,172 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.333
openai == 1.2.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Make a simple OpenAI call with the latest OpenAI version
### Expected behavior
OpenAI in there recent changes have updated their error API so no longer it has to be referenced with OpenAI.error.
The error is in this file.
langchain/libs/langchain/langchain/chat_models/openai.py
So the error list that is
errors = [
openai.error.Timeout,
openai.error.APIError,
openai.error.APIConnectionError,
openai.error.RateLimitError,
openai.error.ServiceUnavailableError,
]
should be updated to
errors = [
openai.Timeout,
openai.APIError,
openai.APIConnectionError,
openai.RateLimitError,
openai.ServiceUnavailableError,
]
| Problem With OpenAI Error update | https://api.github.com/repos/langchain-ai/langchain/issues/13171/comments | 2 | 2023-11-10T06:37:20Z | 2024-02-17T16:05:52Z | https://github.com/langchain-ai/langchain/issues/13171 | 1,987,009,878 | 13,171 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
in https://python.langchain.com/docs/get_started/quickstart
in the section LLM / Chat Model
with the code
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
# I recevce the error
File [~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:40](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/home/nghia_1660s/nghia_1660s_workspace/gates-llm-chatbots/learn_langchain/~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:40)
[29](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/home/nghia_1660s/nghia_1660s_workspace/gates-llm-chatbots/learn_langchain/~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:29) import yaml
[30](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/home/nghia_1660s/nghia_1660s_workspace/gates-llm-chatbots/learn_langchain/~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:30) from tenacity import (
[31](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/home/nghia_1660s/nghia_1660s_workspace/gates-llm-chatbots/learn_langchain/~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:31) RetryCallState,
[32](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/home/nghia_1660s/nghia_1660s_workspace/gates-llm-chatbots/learn_langchain/~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:32) before_sleep_log,
ref='~/nghia_1660s_workspace/gates-llm-chatbots/.venv/lib/python3.10/site-packages/langchain/llms/base.py:0'>0</a>;32m (...)
...
--> [106](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/usr/lib/python3.10/abc.py:106) cls = super().__new__(mcls, name, bases, namespace, **kwargs)
[107](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/usr/lib/python3.10/abc.py:107) _abc_init(cls)
[108](https://vscode-remote+ssh-002dremote-002bnghia-002d1660s.vscode-resource.vscode-cdn.net/usr/lib/python3.10/abc.py:108) return cls
**TypeError: multiple bases have instance lay-out conflict**
### Idea or request for content:
The TypeError: multiple bases have instance lay-out conflict is the error for
Here is an example illustrating the instance layout conflict in Python:
Let's say we have two extension types defined in C:
class Base1(object):
__slots__ = ()
_fields_ = ["field1"]
class Base2(object):
__slots__ = ()
_fields_ = ["field2"]
Now let's attempt multiple inheritance:
class Derived(Base1, Base2):
__slots__ = ()
Here you will get TypeError: multiple bases have instance layout conflict.
This error is somewhat specific to extension types and it's due to violating multiple inheritance compatibility in Python under the hood when working with extension types. In simpler terms, Python is complaining about a conflict that occurs due to Base1 and Base2 having a different memory layout. Python doesn't know how to resolve this ambiguity or conflict, so it throws a TypeError.
I use python the python I use is 3.10.12 | TypeError: multiple bases have instance layout conflict. DOC: <Quick start code dont work with version langchain ==0.333> | https://api.github.com/repos/langchain-ai/langchain/issues/13168/comments | 3 | 2023-11-10T04:51:26Z | 2023-11-11T07:50:51Z | https://github.com/langchain-ai/langchain/issues/13168 | 1,986,880,174 | 13,168 |
[
"hwchase17",
"langchain"
]
| ### System Info
Getting the below error when try to use load_qa_chain
openai.lib._old_api.APIRemovedInV1:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run load_qa_chain
### Expected behavior
it should not error | getiing an error with openai v1 | https://api.github.com/repos/langchain-ai/langchain/issues/13162/comments | 6 | 2023-11-10T02:41:34Z | 2024-03-13T20:02:47Z | https://github.com/langchain-ai/langchain/issues/13162 | 1,986,745,002 | 13,162 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm using llama2 with langchain. To use gpu, I re-installed the llama-cpp-python with cublas options, but after that, I can not get response from langchain.LLMChain.predict() anymore.
The result is like:
```
User:hi
> Entering new LLMChain chain...
Prompt after formatting:
History:
Question: hi
Answer:
llama_print_timings: load time = 883.81 ms
llama_print_timings: sample time = 0.47 ms / 2 runs ( 0.24 ms per token, 4255.32 tokens per second)
llama_print_timings: prompt eval time = 883.77 ms / 14 tokens ( 63.13 ms per token, 15.84 tokens per second)
llama_print_timings: eval time = 228.71 ms / 1 runs ( 228.71 ms per token, 4.37 tokens per second)
llama_print_timings: total time = 1117.20 ms
> Finished chain.
```
The following is how I define the llm and the chain:
```
llm = LlamaCpp(
model_path = model_path,
n_ctx=4096,
top_k=10,
top_p=0.9,
temperature=0.7, repeat_penalty=1.1,
verbose=True,
callback_manager=callback_manager,
n_gpu_layers=35,
n_batch=100,
stop=["Question:","Human:"]
)
memory = ConversationBufferMemory(memory_key='chat_history')
llm_chain = LLMChain(
prompt=prompt,
llm=llm,
memory=memory,
)
```
Actually when I used local llama.cpp and normally installed llama-cpp-python, everything went on well. The only change I made is the llama-cpp-python and all the code just remain the same. I wonder why will this happen.
### Suggestion:
_No response_ | Issue: Can't get response from chain.predict() | https://api.github.com/repos/langchain-ai/langchain/issues/13161/comments | 10 | 2023-11-10T02:09:17Z | 2023-11-13T03:13:33Z | https://github.com/langchain-ai/langchain/issues/13161 | 1,986,715,808 | 13,161 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi @baskaryan! Thanks for open-sourcing the [notebook](https://github.com/langchain-ai/langchain/blob/master/cookbook/openai_v1_cookbook.ipynb) related to GPT-4-Vision.
Is there a way, we can estimate the cost for the API calls similar to using `get_openai_callback`?
### Suggestion:
We could get the cost using callbacks as:
```
from langchain.callbacks import get_openai_callback
from langchain.prompts import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate.from_template(prompt)
chat_prompt = ChatPromptTemplate.from_messages(
[system_message_prompt, human_message_prompt]
)
with get_openai_callback() as cb:
openai_chain = LLMChain(prompt=chat_prompt, llm=model)
response = openai_chain.run({})
n_tokens = cb.total_tokens
total_cost = cb.total_cost
```
How do I use `HumanMessage` with the callback? | Issue: Get cost estimates of GPT-4-Vision | https://api.github.com/repos/langchain-ai/langchain/issues/13159/comments | 2 | 2023-11-10T00:53:36Z | 2024-05-20T16:07:24Z | https://github.com/langchain-ai/langchain/issues/13159 | 1,986,646,789 | 13,159 |
[
"hwchase17",
"langchain"
]
| ### System Info
Notebook with latest langchain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Trying in notebook the HTMLHeaderTextSplitter
```jupyter-notebook
from langchain.text_splitter import RecursiveCharacterTextSplitter
headers_to_split_on = [
("h1", "Header 1"),
("h2", "Header 2"),
("h3", "Header 3"),
("h4", "Header 4"),
]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
html_header_splits = html_splitter.split_text_from_file("/content/X.html")
chunk_size = 500
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
Split
splits = text_splitter.split_documents(html_header_splits)
splits[80:85]`
```
```jupyter-notebook
---------------------------------------------------------------------------
XSLTApplyError Traceback (most recent call last)
[<ipython-input-54-bd3edea942d1>](https://localhost:8080/#) in <cell line: 12>()
10 html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
11
---> 12 html_header_splits = html_splitter.split_text_from_file("/content/X.html")
13
14 chunk_size = 500
[/usr/local/lib/python3.10/dist-packages/langchain/text_splitter.py](https://localhost:8080/#) in split_text_from_file(self, file)
586 xslt_tree = etree.parse(xslt_path)
587 transform = etree.XSLT(xslt_tree)
--> 588 result = transform(tree)
589 result_dom = etree.fromstring(str(result))
590
src/lxml/xslt.pxi in lxml.etree.XSLT.__call__()
XSLTApplyError: maxHead```
Is the HTML just to large to be handled by the textsplitter?
### Expected behavior
Load the html.. | HTMLHeaderTextSplitter won't run (maxHead) | https://api.github.com/repos/langchain-ai/langchain/issues/13149/comments | 8 | 2023-11-09T20:48:09Z | 2024-06-18T19:23:47Z | https://github.com/langchain-ai/langchain/issues/13149 | 1,986,380,076 | 13,149 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have 100 docs , from which i am trying to retrieve top 10 docs which are most relevant to my query using parent document retriever. How can this be achieved.
### Suggestion:
_No response_ | How to retrieve custom number of docs from parent retriver document , I have 100 docs and i don't want the reteiver to just retrieve only 4 docs | https://api.github.com/repos/langchain-ai/langchain/issues/13145/comments | 11 | 2023-11-09T19:56:05Z | 2024-04-02T09:18:17Z | https://github.com/langchain-ai/langchain/issues/13145 | 1,986,306,840 | 13,145 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Need the ability to pass arguments to `browser.chromium.launch()` in the `create_sync_playwright_browser` and `create_async_playwright_browser` functions located in [`langchain.tools.playwright.utils.py`](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/tools/playwright/utils.py).
### Motivation
I created an agent that uses `PlayWrightBrowserToolkit`. All local testing worked, but I encounter errors when deploying to an AWS Lambda. I encountered two errors when testing within the AWS Lambda UI: `Target page, context or browser has been closed` and `Timeout 30000ms exceeded`. I did not experience these issues locally. I identified the issue occurred when the agent used the `navigate_browser` tool. I needed to launch the browser with `browser.chromium.launch(headless=True, args=["--disable-gpu", "--single-process"])`
### Your contribution
I have modified `utils.py` for Playwright and will submit a PR. | Add ability to pass arguments when creating Playwright browser | https://api.github.com/repos/langchain-ai/langchain/issues/13143/comments | 1 | 2023-11-09T19:09:40Z | 2024-02-15T16:06:06Z | https://github.com/langchain-ai/langchain/issues/13143 | 1,986,241,586 | 13,143 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The documentation recommends to install openai with `pip install openai`.
If you do it currently (langchain 0.0.332) then an incompatible openai version is installed (1.1.2).
It's better to install it using `pip install langchain[openai]` as that will pick a version compatible with langchain.
So I propose to replace it in the docs.
WDYT ?
### Idea or request for content:
_No response_ | DOC: Install openai with langchain[openai] | https://api.github.com/repos/langchain-ai/langchain/issues/13134/comments | 11 | 2023-11-09T15:42:53Z | 2024-04-12T16:51:25Z | https://github.com/langchain-ai/langchain/issues/13134 | 1,985,905,385 | 13,134 |
[
"hwchase17",
"langchain"
]
| https://github.com/langchain-ai/langchain/blob/c52725bdc5958d5295c2d563fa9b7fcb6ed09a3e/libs/langchain/langchain/chat_models/vertexai.py#L136C29-L136C29
seems there is an issue
from vertexai.preview.language_models import ChatModel
changed into
from vertexai.language_models import ChatModel
| Import changed of Vertex chatModel | https://api.github.com/repos/langchain-ai/langchain/issues/13132/comments | 2 | 2023-11-09T15:30:47Z | 2024-02-15T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13132 | 1,985,875,195 | 13,132 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We are using langchain create_sql_agent to build a chat engine with database.
generation and execution of query will be handled by create_sql_agent.
I want to modify the query before execution.
Please find below example:
original query : select * from transaction where type = IPC
modified query : select * from (select * from transaction where account_id = 123) where type = IPC
Can you please provide me way forward to achieve this ?
### Motivation
To protect some data from client, need this functionality
### Your contribution
I'm not familiar with the internal code, but I can help if guided. | Modify mysql query before execution | https://api.github.com/repos/langchain-ai/langchain/issues/13129/comments | 16 | 2023-11-09T14:07:57Z | 2024-07-03T12:36:57Z | https://github.com/langchain-ai/langchain/issues/13129 | 1,985,703,330 | 13,129 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi.
I am currently using the model of the hugging face.
The code I use is below.
`llm = HuggingFacePipeline.from_model_id(model_id="beomi/llama-2-ko-7b", task="text-generation", model_kwargs={"max_length": 2000}, device=0)`
Currently I am only using device=0, but I would like to load the model using multiple devices (device=0,1,2,3).
But I couldn't find a way to handle multiple devices for HuggingFacePipeline.from_model_id.
If you know anything about this, I hope you can help.
### Suggestion:
_No response_ | How do I use multiple GPU when using a model with hugging face? | https://api.github.com/repos/langchain-ai/langchain/issues/13128/comments | 11 | 2023-11-09T13:45:39Z | 2024-06-28T07:43:23Z | https://github.com/langchain-ai/langchain/issues/13128 | 1,985,658,707 | 13,128 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.332
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The default `Runnable#stream` doesn't stream, but rather defers to `invoke`. As `RunnableLambda` doesn't override the `stream` function, this causes the entire chain to run with `invoke` instead of `stream` when it is used in any part of the chain.
```python
chain = RunnableLambda(lambda x: x) | ChatOpenAI()
for part in chain.stream("hello"):
print(part)
```
### Expected behavior
What I'd expect is that I could use a `LambdaRunnable` to transform my inputs on-the-fly, then send it through to the rest of my chain and still use the stream/batch/etc functionalities that LCEL should bring.
If the early runnables don't support streaming, that doesn't mean that the final output can't be streamed. Currently when any of the runnables in a chain don't support streaming, the entire chain doesn't support streaming. | Streaming not working when using RunnableLambda | https://api.github.com/repos/langchain-ai/langchain/issues/13126/comments | 3 | 2023-11-09T13:32:54Z | 2023-11-09T14:54:59Z | https://github.com/langchain-ai/langchain/issues/13126 | 1,985,634,692 | 13,126 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We are using langchain create_sql_agent to build a chat engine with database.
I want to write custom implementation for tool : query_sql_checker_tool
In this custom logic, we want to modify the sql query before execution.
Example:
original query : select * from transaction where type = IPC
modified query : select * from (select * from transaction where account_id = 123) where type = IPC
Can you please provide me way forward to achieve this ?
### Motivation
To protect some data from client, need this functionality
### Your contribution
I'm not familiar with the internal code, but I can help if guided. | Custom implementation for query checker tool | https://api.github.com/repos/langchain-ai/langchain/issues/13125/comments | 11 | 2023-11-09T13:05:16Z | 2024-02-16T16:07:01Z | https://github.com/langchain-ai/langchain/issues/13125 | 1,985,582,820 | 13,125 |
[
"hwchase17",
"langchain"
]
| ### System Info
Running langchain==0.0.332 with python 3.11 and openai==1.2.0 on Windows.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import ChatOpenAI
ChatOpenAI(
model="gpt-3.5-turbo",
request_timeout=5,
)
```
### Expected behavior
Should run without errors.
Likely due to newly introduced `httpx.Timeout` type in `request_timeout` (https://github.com/langchain-ai/langchain/pull/12948). Always importing httpx and tiktoken (i.e. not conditionally on `TYPE_CHECKING`) fixes the issue. | Adding timeout to ChatOpenAI raises ConfigError | https://api.github.com/repos/langchain-ai/langchain/issues/13124/comments | 4 | 2023-11-09T12:47:32Z | 2023-11-13T09:49:28Z | https://github.com/langchain-ai/langchain/issues/13124 | 1,985,552,554 | 13,124 |
[
"hwchase17",
"langchain"
]
| ### Feature request
please support httpx_client in openai version 1.1.1
just one more parameter
if there is another solution i would like to get it
thanks
### Motivation
cant work without ssl cert
### Your contribution
in AzureChatOpenAI and AzureOpenAI class
add parameter openai_http_client
in method validate_environment
please add at line 134 the next code
values["http_client"] = get_from_dict_or_env(
values, "openai_http_client", "OPENAI_HTTP_CLIENT", default=""
)
| httpx openai support | https://api.github.com/repos/langchain-ai/langchain/issues/13122/comments | 4 | 2023-11-09T12:04:56Z | 2024-02-25T16:05:52Z | https://github.com/langchain-ai/langchain/issues/13122 | 1,985,482,594 | 13,122 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 22.04
langchain 0.0.332
python 3.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install langchain --upgrade
langchain-server
### Expected behavior
Traceback (most recent call last):
File "/home/ps/anaconda3/envs/langchain/bin/langchain-server", line 6, in <module>
from langchain.server import main
ModuleNotFoundError: No module named 'langchain.server' | ModuleNotFoundError: No module named 'langchain.server' | https://api.github.com/repos/langchain-ai/langchain/issues/13120/comments | 4 | 2023-11-09T10:28:35Z | 2024-02-15T16:06:25Z | https://github.com/langchain-ai/langchain/issues/13120 | 1,985,321,425 | 13,120 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version : 0.0.327
Python version : 3.10.11
Platform : Windows 11
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I was using streaming callbacks with LLMChains.
I switched now to use LangChain Expression Language (LCEL), and using the stream/astream functions.
They don't seem to save the results to the configured LLM cache and don't load the results from there.
I'm not sure if this is intentional, or a bug.
If it is intentional I'm wondering how I should use the cache.
Code to reproduce the issue:
```python
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.globals import set_llm_cache
from langchain.cache import InMemoryCache
set_llm_cache(InMemoryCache())
model = ChatOpenAI(cache=True)
prompt = ChatPromptTemplate.from_template(
"tell me a joke in about 5 sentences about {topic}"
)
chain = prompt | model
for s in chain.stream({"topic": "bears"}):
print(s.content, end="", flush=True)
print("\n\nFINISHED FIRST COMPLETION\n")
for s in chain.stream({"topic": "bears"}):
print(s.content, end="", flush=True)
```
### Expected behavior
The second result should be the same and should be retrieved quickly from the LLM cache. | LCEL stream function doesn't use LLM cache | https://api.github.com/repos/langchain-ai/langchain/issues/13119/comments | 4 | 2023-11-09T10:12:40Z | 2024-03-17T16:05:16Z | https://github.com/langchain-ai/langchain/issues/13119 | 1,985,293,295 | 13,119 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Langchain is a great work!
Is there any possible that you guys can combine [Fusion](https://github.com/run-llama/llama_index/blob/main/docs/examples/low_level/fusion_retriever.ipynb) and [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)?
### Motivation
There are a lot of tables and text in my data. First of all, I have tried Fusion_RAG, which is much better than the baseline, but it is limited to the text, and the table can not be processed, so I wondered if there a way could combine Semi_Structured_RAG and Fusion_RAG so that I could deal with both text and table at the same time. ^-^
### Your contribution
Please make the fusion: BM25+Vec+Table | Fusion_RAG + Semi_Structured_RAG? | https://api.github.com/repos/langchain-ai/langchain/issues/13117/comments | 2 | 2023-11-09T09:53:44Z | 2024-02-15T16:06:30Z | https://github.com/langchain-ai/langchain/issues/13117 | 1,985,253,003 | 13,117 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.332, affect all platforms
There is a mistake in the file:
qianfan\resources\llm\completion.py
this `endpoint="/chat/completions-pro`, is not correct ,it should be `endpoint="/chat/completions_pro`,"
like below:
``
"ERNIE-Bot-4": QfLLMInfo(
endpoint="/chat/completions_pro",
required_keys={"messages"},
optional_keys={
"stream",
"temperature",
"top_p",
"penalty_score",
"user_id",
"system",
},
),
``
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm = QianfanLLMEndpoint(streaming = True,temperature = 0.5)
# llm.model = "ERNIE-Bot-turbo"
llm.model = "ERNIE-Bot-4"
# llm.model = "ChatGLM2-6B-32K"
res = llm("hi")
this will report error
### Expected behavior
should work without error | Qianfan llm error calling "ERNIE-Bot-4" | https://api.github.com/repos/langchain-ai/langchain/issues/13116/comments | 4 | 2023-11-09T09:42:52Z | 2024-02-16T16:07:06Z | https://github.com/langchain-ai/langchain/issues/13116 | 1,985,231,279 | 13,116 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.332
Python version: 3.11.5
### Who can help?
@hwchase17
When loading a local .owl file (the standard example pizza.owl) the operation breaks and gives the following error for all the URI:
does not look like a valid URI, trying to serialize this will break.
Here's the traceback
```
Traceback (most recent call last):
File ~\AppData\Roaming\Python\Python311\site-packages\IPython\core\interactiveshell.py:3526 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[13], line 4
graph = RdfGraph(
File C:\Python311\Lib\site-packages\langchain\graphs\rdf_graph.py:159 in __init__
self.graph.parse(source_file, format=self.serialization)
File C:\Python311\Lib\site-packages\rdflib\graph.py:1501 in parse
raise se
File C:\Python311\Lib\site-packages\rdflib\graph.py:1492 in parse
parser.parse(source, self, **args)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:2021 in parse
p.loadStream(stream)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:479 in loadStream
return self.loadBuf(stream.read()) # Not ideal
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:485 in loadBuf
self.feed(buf)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:511 in feed
i = self.directiveOrStatement(s, j)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:532 in directiveOrStatement
return self.checkDot(argstr, j)
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:1214 in checkDot
self.BadSyntax(argstr, j, "expected '.' or '}' or ']' at end of statement")
File C:\Python311\Lib\site-packages\rdflib\plugins\parsers\notation3.py:1730 in BadSyntax
raise BadSyntax(self._thisDoc, self.lines, argstr, i, msg)
File <string>
BadSyntax
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Get the source file from : https://protege.stanford.edu/ontologies/pizza/pizza.owl and place it where the code runs
2. Use the following code:
```
from langchain.chains import GraphSparqlQAChain
from langchain.graphs import RdfGraph
graph = RdfGraph(
source_file="pizza.owl",
standard="owl"
)
graph.load_schema()
print(graph.get_schema)
```
### Expected behavior
For the graph to load and for graph.get_schema to show the classes and object properties. | langchain.graph RDFGraph does not read .owl extension files | https://api.github.com/repos/langchain-ai/langchain/issues/13115/comments | 3 | 2023-11-09T09:39:19Z | 2023-11-09T09:56:03Z | https://github.com/langchain-ai/langchain/issues/13115 | 1,985,225,113 | 13,115 |
[
"hwchase17",
"langchain"
]
|
# Issue you'd like to raise.
## Issue: <ValidationError: 1 validation error for ChatOpenAI __root__ openai has no ChatCompletion attribute, this is likely due to an old version of the openai package. Try upgrading it with pip install --upgrade openai. (type=value_error)>
### Import necessary packages for Streamlit app, PDF processing, and OpenAI integration.
```
`import` streamlit as st
import pdfplumber
import os
from langchain.vectorstores import faiss
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from dotenv import load_dotenv
```
### Load and verify environment variables, specifically the OpenAI API key.
### Load .env file that is in the same directory as your script.
```
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if OPENAI_API_KEY is None:
raise ValueError("OpenAI API key not found. Make sure you have an .env file with the key defined.")
```
### Initialize the OpenAI language model with a given API key and temperature setting.
```
chain = ConversationalRetrievalChain.from_llm(
llm = ChatOpenAI(temperature=0.1,model_name='gpt-4'),
retriever=vectorstore.as_retriever())
```
### Initializing the Embedding Model and OpenAI Model
```
embeddings = OpenAIEmbeddings()
vectorstore = faiss.from_documents(data, embeddings)
```
### Process an uploaded PDF document and extract its text content.
```
def process_document(uploaded_file):
with pdfplumber.open(uploaded_file) as pdf:
document_text = "\n".join(page.extract_text() for page in pdf.pages if page.extract_text())
return document_text
```
### Summarize the extracted text from the document using the OpenAI language model.
```
def summarize_document(llm, document_text):
text_splitter = CharacterTextSplitter(max_length=1000)
texts = text_splitter.split_text(document_text)
docs = [Document(content=t) for t in texts]
summarize_chain = load_summarize_chain(llm, chain_type='map_reduce')
return summarize_chain.run(docs)
```
### Initialize a conversation chain with memory capabilities for the chatbot.
```
def` initialize_conversation_chain(llm):
return ConversationalRetrievalChain(
llm=llm,
memory=ConversationBufferWindowMemory(k=5) # Stores the last 5 interactions.
)
```
### Define the main function to run the Streamlit application.
```
def run_app():
llm = initialize_llm(OPENAI_API_KEY)
st.title("Earnings Call Analysis App")
### UI for document upload and processing.
uploaded_file = st.file_uploader("Upload your earnings call transcript", type=["pdf"])
process_button = st.button("Process Document")
```
### Process document and generate summaries
```
if process_button and uploaded_file:
with st.spinner('Processing Document...'):
document_text = process_document(uploaded_file)
summaries = summarize_document(llm, document_text)
display_summaries(summaries)
st.success("Document processed!")
```
### UI for interactive chatbot with memory feature.
```
conversation_chain = initialize_conversation_chain(llm)
user_input = st.text_input("Ask a question about the earnings call:")
if st.button('Get Response'):
with st.spinner('Generating response...'):
response = generate_chat_response(conversation_chain, user_input, document_text)
st.write(response)
```
### Display summaries on the app interface and provide download option for each.
```
def display_summaries(summaries):
if summaries:
for i, summary in enumerate(summaries):
st.subheader(f"Topic {i+1}")
st.write("One-line topic descriptor: ", summary.get("one_line_summary", ""))
st.write("Detailed bulleted topic summaries: ", summary.get("bulleted_summary", ""))
download_summary(summary.get("bulleted_summary", ""), i+1)
```
### Create a downloadable summary file.
```
def download_summary(summary, topic_number):
summary_filename = f"topic_{topic_number}_summary.txt"
st.download_button(
label=f"Download Topic {topic_number} Summary",
data=summary,
file_name=summary_filename,
mime="text/plain"
)
```
### Generate a response from the chatbot based on the user's input and document's context.
```
def generate_chat_response(conversation_chain, user_input, document_text):
response = conversation_chain.generate_response(
prompt=user_input,
context=document_text
)
return response.get('text', "Sorry, I couldn't generate a response.")
if __name__ == "__main__":
run_app()
```
### Suggestion:
_No response_ | Issue: <ValidationError: 1 validation error for ChatOpenAI __root__ `openai` has no `ChatCompletion` attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)> | https://api.github.com/repos/langchain-ai/langchain/issues/13114/comments | 7 | 2023-11-09T09:35:22Z | 2024-02-08T16:06:57Z | https://github.com/langchain-ai/langchain/issues/13114 | 1,985,218,334 | 13,114 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
for example in my below case
```python
english_tools = [
Tool(name="SomeNAME_1",
func=lambda q: app.finance_chain.run(q),
description=" Some app related description ",
return_direct=True,
coroutine=lambda q: app.finance_chain.arun(q),
),
Tool(name="SomeNAME_2",
func=lambda q: app.rqa(q),
description=" Some app related description ",
coroutine=lambda q: app.rqa_english.arun(q),
return_direct=True
),
Tool.from_function(
name="SomeNAME_3",
func=lambda q: app.pd_agent(q),
description=" Some app related description",
coroutine=lambda q: app.pd_agent.arun(q),
)
]```
so when SomeNAME_3 invoke i don't wann pass memory to this tool
### Suggestion:
_No response_ | How can we selectively pass memory to specific tools without passing it to all tools? | https://api.github.com/repos/langchain-ai/langchain/issues/13112/comments | 4 | 2023-11-09T06:33:12Z | 2024-02-23T16:06:52Z | https://github.com/langchain-ai/langchain/issues/13112 | 1,984,928,992 | 13,112 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.300
Python version: 3.10.12
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
```py
# Prompt
prompt_text = """You are an assistant tasked with summarizing tables and text. \
Give a concise summary of the table or text. Table or text chunk: {element} """
prompt = PromptTemplate.from_template(prompt_text)
# Summary chain
model = Replicate(
model="meta/llama-2-7b-chat:13c3cdee13ee059ab779f0291d29054dab00a47dad8261375654de5540165fb0",
model_kwargs={"temperature": 0.75, "max_length": 3000, "top_p":0.25}
)
summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()
# Apply to text
texts = [i.text for i in text_elements]
text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5})
```
Error:
```txt
ValueError: config must be a list of the same length as inputs, but got 27 configs for 5 inputs
```
With OpenAI, it is working as expected.
### Expected behavior
The summarization chain (powered by Replicate) should process each chunks batch-wise. | Summarization chain batching is not working with Replicate | https://api.github.com/repos/langchain-ai/langchain/issues/13108/comments | 5 | 2023-11-09T04:33:37Z | 2024-02-29T11:04:57Z | https://github.com/langchain-ai/langchain/issues/13108 | 1,984,802,487 | 13,108 |
[
"hwchase17",
"langchain"
]
| 
Hello all, I'm attempting to perform a SPARQL graph query using my local LLM, but it appears that something is amiss. Please feel free to share any helpful tips or guidance.
`graph = RdfGraph(
source_file="http://www.w3.org/People/Berners-Lee/card",
standard="rdf",
local_copy="test1109.ttl",
)
tokenizer = AutoTokenizer.from_pretrained('C:\\data\\llm\\chatglm-6b-int4', trust_remote_code=True)
model = AutoModel.from_pretrained('C:\\data\\llm\\chatglm-6b-int4', trust_remote_code=True).half().cuda().eval()
chain = GraphSparqlQAChain.from_llm(model, graph=graph, verbose=True)
question = "What is Tim Berners-Lee's work homepage?"
result = chain.run(question)`
` File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)`
| Performing a Graph SPARQL Query with a Local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13107/comments | 6 | 2023-11-09T03:37:33Z | 2024-02-26T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13107 | 1,984,758,551 | 13,107 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Here's an example implementation of `RunnableRetry`,
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schame.runnable.retry import RunnableRetry
template = PromptTemplate.from_template("tell me a joke about {topic}.")
error_template = PromptTemplate.from_template("tell me a joke about {topic} with {context}.")
model = ChatOpenAI(temperature=0.5)
chain = template | model
retryable_chain = RunnableRetry(bound=chain, max_attempt_number=3, callback={'prompt': error_template})
```
As for now `RunnableRetry` won't support callback and I'm not sure about altering the first step of the RunnableSequence with modified inputs (inputs along with the error message.)
### Suggestion:
_No response_ | How to use a different prompt template upon runnable retry? | https://api.github.com/repos/langchain-ai/langchain/issues/13105/comments | 3 | 2023-11-09T03:13:18Z | 2024-02-12T08:10:50Z | https://github.com/langchain-ai/langchain/issues/13105 | 1,984,740,642 | 13,105 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.330
openai==0.28.1
python==3.9.17
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have written a simple structured output parser. I am using to extract useful data from a document text. Here's my code:
```
import os
import logging
from dotenv import load_dotenv
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
load_dotenv()
response_schemas = [
ResponseSchema(
name="document_type", description="Type of document, typically found on top"
),
ResponseSchema(name="shipper", description="Shipper name found in the data"),
ResponseSchema(name="consignee", description="Consignee name found in the data"),
ResponseSchema(name="point_of_origin", description="Point of origin in the data"),
ResponseSchema(
name="customer_order_number", description="Customer order number in the data"
),
ResponseSchema(
name="order_number", description="Order number mentioned in the data"
),
ResponseSchema(
name="bill_of_lading",
description="Bill of lading number(B/L number) found in the data",
),
ResponseSchema(
name="carrier_name", description="Carrier name mentioned in the data"
),
ResponseSchema(
name="required_ship_date", description="Required ship date in date format"
),
ResponseSchema(
name="shipped_date", description="Shipped date, typically separated by /"
),
ResponseSchema(
name="transportation_mode", description="Transportation mode such as truck etc."
),
ResponseSchema(
name="vehicle_number", description="Vehicle number found in the data"
),
ResponseSchema(name="routing_info", description="Routing info found in the data"),
ResponseSchema(
name="invoice_to_buyer", description="Invoice to buyer data found in the data"
),
ResponseSchema(
name="consignee_number", description="Consignee number mentioned in the data"
),
ResponseSchema(
name="net_weight",
description="Net weight found in the data, typically found on the second page. It's a number succeeded by weight symbol such as kg/lb/1b/15/16 and ends with NT (Net weight).",
),
ResponseSchema(name="ticket_number", description="Ticket number found in the data"),
ResponseSchema(name="outbound_date", description="Outbound date found in the data"),
]
system_prompt = """
Following is the data extracted from a document through OCR wrapped inside <ocr_data> delimeter. It may be unstructured and unorganized, and you'll help me extract key information from this data. The data can be nuanced, and field and it's respective values may be at different positions. The presented data can be of multiple pages, separated by (------). Analyze the OCR data below and give me the value of given fields. If you can't find the values in the OCR data, simply return 'N/A'.
"""
class BolAgent:
def __init__(self):
self.openai_api_key = os.getenv("OPENAI_API_KEY")
self.llm = OpenAI(
openai_api_key=self.openai_api_key, temperature=0.1, max_tokens=1000
)
self.chat_model = ChatOpenAI(
model="gpt-3.5-turbo-16k",
openai_api_key=self.openai_api_key,
temperature=0,
)
self.response_schemas = response_schemas
self.system_prompt = system_prompt
def extract_paramerts(
self,
ocr_data,
):
output_parser = StructuredOutputParser.from_response_schemas(
self.response_schemas
)
input_data = f"<ocr_data>/n{ocr_data}/n</ocr_data>"
format_instructions = output_parser.get_format_instructions()
prompt = ChatPromptTemplate(
messages=[
HumanMessagePromptTemplate.from_template(
"{system_prompt}\n\n{format_instructions}\n\n{input_data}"
)
],
input_variables=["system_prompt", "input_data"],
partial_variables={"format_instructions": format_instructions},
)
llm_input = prompt.format_prompt(
system_prompt=system_prompt, input_data=input_data
)
logging.info(f"LLM Input: {llm_input}")
output = self.chat_model(llm_input.to_messages())
logging.info(f"LLM Output: {output}")
result = output_parser.parse(output.content)
return result
```
When I use this code on any input data, the output parser gives error most of the time. Here's a sample input data:
`\n------------------\nSTRAIGHT BILL OF LADING - SHORT FORM\nTEST\nCHEMTRADE\nFICHE D\'EXPEDITION - FORMULE REGULIERE\nS4D\nSHIPPER/EXPEDITEUR\nChemtrade West Limited Partnership\nTIME IN/ARRIVEE\nGROSS/BRUT\nCONSIGNEE/DESTINATAIRE\nSASK POWER\n TARE\nSHIP TO/EXPEDIEZ A\nCORY \nCOGENERATION STATION\nTIME OUT/DEPART\n8 KM W OF SASKATOON HWY 7\nNET\nVANSCOY SOK 1VO SK CA\nPOINT OF ORIGIN/POINT D\'EXPEDITION\nCUSTOMER ORDER NO./N DE COMMANDE DU CLIENT\nORDER NO./N DE COMM.\n3/L NO./NDE CONN.\nCHEMTRADE (SASKATOON)\nS\n1856\n80001877\n CARRIER NAME/NOM DU TRANSPORTEUR\nREQUIRED SHIP DATE/DATE EXP.DEM.\nDATE SHIPPED/EXPEDIE LE\nCARON TRANSPORT LTD\nNov 06,2023\nTRANSPORTATION MODE/MODE DE TRANSPORT\nVEHICLE T/C NO. - MARQUE DU WAGON\nTruck\n UNIVAR CANADA LTD.\n ROUTING/ITINERAIRE\nCONSIGNEE#/CONSIGNATAIRE\nPAGE\n600929\n1 of\n3\nNO.AND DESCRIPTION OF PACKS\nD.G.\nDESCRIPTION OF ARTICLES AND SPECIAL MARKS\nNET WEIGHT KG\nNBRE ET DESCRIPTION DE COLIS\nDESCRIPTION DES ARTICLES ET INDICATIONS SPECIALS\nPOIDS NET\n1 TT\nX\nUN1830, SULFURIC ACID, 8, PG II\n21.000 Tonne\nSULFURIC ACID 93%\nER GUIDE #137\n4 PLACARDS REQUIRED; CLASS 8, CORROSIVE\nSTCC 4930040\nSulfuric Acid 93%\nCOA W/ SHIPMENT\nDELIVERY HOURS: 8:OOAM-1: OOPM MON-THURS\nATTENDANCE DURING OFFLOAD REQUIRED\nSAFETY GOGGLES, FACE SHIELD, GLOVES, BOOTS, HARD\nHAT, STEEL TOED SHOES, PROTECTIVE SUIT\n3" QUICK CONNECT CAMLOCK; 1 HOSE REQUIRED\nPersonal Protective Equipment: Gloves. Protective clothing. Protective goggles. Face shield.\nnsufficient ventilation: wear respiratory protection.\nERP 2-1564 and Chemtrade Logistics 24-Hour Number >>\n1-866-416-4404\nPIU 2-1564 et Chemtrade Logistics Numero de 24 heures >>\n1-866-416-4404\nConsignor / Expediteur:\nLocation / Endroit:\nCHEMTRADE WEST LIMITED PARTNERSHIP\n11TH STREET WEST\nI hereby declare that the contents of this consignment are fully and accurately described above by the proper shipping\nSASKATOON SK CA\nare in all respects in proper condition for transport according to the Transportation of Dangerous Goods Regulations.\nS7K 4C8\nPer/Par:Michael Rumble, EHS Director, Risk Management\nIF CHARGES ARE TO BE PREPAID, WRITE OR STAMP\nJe declare que le contenu de ce chargement est decrit ci-dessus de faconcomplete et exacte par Iappellation reglementaire\nINDIQUER ICI SI L\'ENVOI SE FAIT EN "PORT-PAYE"\negards bien conditionne pouretre transporte conformement au Reglement sur le transport des marchandises dangereuses.\nPrepaid\nFORWARD INVOICE FOR PREPAID FREIGHT\nChemtrade West Limited Partnership\nQUOTING OUR B/L NO.TO:\n155 Gordon\nBaker Rd #300\nWeight Agreement\nFAIRE SUIVRE FACTURE POUR EXPEDITION PORT\nToronto,\nOnt.\nM2H 3N5\nPAYE EN REFERANT A NOTRE NUMERO DE CONN.A:\nSHIPPER\nChemtrade West Limited\nAGENT\nCONSIGNEE.\nEXPEDITEUR\nPartnership\nDESTINATAIRE\nPER\nPERMANENT POST OFFICE ADDRESS OF SHIPPER\nPER\nPER\nPAR\n(ADRESSE POSTALE PERMANENTE DE L\'EXPEDITEUR)\nTHESE PRODUCTS ARE SOLD AND SHIPPED IN ACCORDANCE WITH\nTHE TERMS OF SALES ON THE REVERSE SIDE OF THIS,DOCUMENT.\nResponsible Care\nCES PRODUITS SONT VENDUS ET EXPEDIES CONFORMEMENTAUX\nCONDITIONS DE VENTE APPARAISSANT AU VERSO DE LA PRESENTE\nOur commitment to sustainability.\nS4D PRASRNov 06,2023 1618`
Upon further debugging I found that for some reason the output has two triple-backticks at the end and because of this the Structured Output Parser ends up giving the error. Here the output for better clarity (Notice the end of output):
`content='```json\n{\n\t"document_type": "STRAIGHT BILL OF LADING - SHORT FORM",\n\t"shipper": "Chemtrade West Limited Partnership",\n\t"consignee": "SASK POWER",\n\t"point_of_origin": "VANSCOY SOK 1VO SK CA",\n\t"customer_order_number": "80001877",\n\t"order_number": "1856",\n\t"bill_of_lading": "600929",\n\t"carrier_name": "CARON TRANSPORT LTD",\n\t"required_ship_date": "Nov 06,2023",\n\t"shipped_date": "Nov 06,2023",\n\t"transportation_mode": "Truck",\n\t"vehicle_number": "T/C NO.",\n\t"routing_info": "UNIVAR CANADA LTD.",\n\t"invoice_to_buyer": "Chemtrade West Limited Partnership",\n\t"consignee_number": "600929",\n\t"net_weight": "21.000 Tonne",\n\t"ticket_number": "N/A",\n\t"outbound_date": "N/A"\n}\n```\n```'`
I have started to notice this error at high-frequency after OpenAI dev day. Any idea what I might be doing wrong?
### Expected behavior
The output should only have one triple-backticks at the end and the output parser should parse the output properly. | Structured Output Parser Always Gives Error | https://api.github.com/repos/langchain-ai/langchain/issues/13101/comments | 7 | 2023-11-09T01:22:50Z | 2024-02-15T16:06:40Z | https://github.com/langchain-ai/langchain/issues/13101 | 1,984,653,868 | 13,101 |
[
"hwchase17",
"langchain"
]
| I see the following when using AzureChatOpenAI with with_fallbacks. Removing with_fallbacks doesn't cause this issue.
`Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)`
### System Info
Pydantic v1, same with v2
**Lanchain 0.0.295**
### Who can help?
@hwchase17
@agola11
@ey
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This doesn't work:
``` fallback_chat_model = ChatOpenAI(model_name="model_name")
primary_chat_model = AzureChatOpenAI()
chat_model = primary_chat_model.with_fallbacks([fallback_chat_model])
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=primary_chat_model.get_num_tokens,
)
```
### Expected behavior
Should work as this, but with fallback support
``` fallback_chat_model = ChatOpenAI(model_name="model_name")
primary_chat_model = AzureChatOpenAI()
chat_model = primary_chat_model #.with_fallbacks([fallback_chat_model])
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=primary_chat_model.get_num_tokens,
)
``` | Serialize failing - Can't use with_fallbacks with MapReduceChain/Summarization: Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt... | https://api.github.com/repos/langchain-ai/langchain/issues/13098/comments | 4 | 2023-11-08T23:48:38Z | 2024-02-15T16:06:45Z | https://github.com/langchain-ai/langchain/issues/13098 | 1,984,577,336 | 13,098 |
[
"hwchase17",
"langchain"
]
| ### System Info
make text
Running Sphinx v4.5.0
loading pickled environment... done
[autosummary] generating autosummary for: index.rst
building [mo]: targets for 0 po files that are out of date
building [text]: targets for 0 source files that are out of date
updating environment: 0 added, 1 changed, 0 removed
reading sources... [100%] index
....
Exception occurred:
File "/lib/python3.10/site-packages/sphinx/registry.py", line 354, in create_translator
setattr(translator, 'visit_' + name, MethodType(visit, translator))
TypeError: first argument must be callable
Full path of file registry.py is truncated.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run make text in docs/api_reference folder
### Expected behavior
Documentation output in .rst format | make text fails | https://api.github.com/repos/langchain-ai/langchain/issues/13096/comments | 4 | 2023-11-08T22:39:37Z | 2024-02-15T16:06:50Z | https://github.com/langchain-ai/langchain/issues/13096 | 1,984,517,806 | 13,096 |
[
"hwchase17",
"langchain"
]
| ### System Info
llm/chat_models created using with_fallback() throw attribute error when trying to access their get_num_tokens() function. This could potentially break several use cases where one does chat_model.get_num_tokens(). I believe even the docs access it as such (For example in the [MapReduce using LCEL](https://python.langchain.com/docs/modules/chains/document/map_reduce#recreating-with-lcel) docs)
My use case was as such:
```
fallback_chat_model = ChatOpenAI(model_name="model_name", temperature=0)
primary_chat_model = AzureChatOpenAI(temperature=0)
chat_model = primary_chat_model.with_fallbacks([fallback_chat_model])
## Split text using length_function and RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=chat_model.get_num_tokens,
)
texts = text_splitter.create_documents([text_sentence_numbered])
```
Of course, I can just use primary_chat_model's get_num_tokens if it's equivalent to the fallback's but in the case it's not, this would be a bigger issue. Worse still, this may be used within chains like MapReduceChain etc and many haven't even begun the switch to LCEL.
**Langchain Version 0.0.295**
### Who can help?
@eyurtsev @hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
fallback_chat_model = ChatOpenAI(model_name="model_name", temperature=0)
primary_chat_model = AzureChatOpenAI(temperature=0)
chat_model = primary_chat_model.with_fallbacks([fallback_chat_model])
## Split text using length_function and RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
length_function=chat_model.get_num_tokens,
)
texts = text_splitter.create_documents([text_sentence_numbered])
```
### Expected behavior
get_num_tokens() is accessible and based on the fallback/original chat_model or llm being used. | 'RunnableWithFallbacks' object has no attribute 'get_num_tokens' | https://api.github.com/repos/langchain-ai/langchain/issues/13095/comments | 8 | 2023-11-08T22:28:33Z | 2024-02-14T16:06:08Z | https://github.com/langchain-ai/langchain/issues/13095 | 1,984,500,319 | 13,095 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is it possible to send back the annotations in the response when a request / response is content filtered by AzureOpenAI? Some prompts are content filtered by Azure and sometimes the response to certain prompts are content filtered as well.
Azure OpenAI sends back the annotations https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview
Langchain needs to capture these annotations and send it along with the response
### Motivation
This feature will help us to respond to our uses why a particular prompt was content filtered. Today we can only tell our users whether or not a prompt was content filtered by Azure but when the users ask why the filtration was applied, we have no definite answer. With these annotations we can tell our users why the content filtration was applied
### Your contribution
No, not at the moment | Send back annotations sent by OpenAI for content filtered requests (and responses) | https://api.github.com/repos/langchain-ai/langchain/issues/13090/comments | 2 | 2023-11-08T21:23:40Z | 2024-02-14T16:06:13Z | https://github.com/langchain-ai/langchain/issues/13090 | 1,984,421,073 | 13,090 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The tests for Langchain's `Chroma` are currently broken. The reason is:
- By default, chroma persists locally, for ease of use
- All the tests use the same collection name
- Thus, when a new `Chroma` is created, it sees the collections persisted from previous tests.
This causes inconsistent behavior, since the contents of the collection depend on the order of the tests, not just what the test itself added.
### Suggestion:
The solution is to create fixtures which appropriately teardown the `Chroma` after every test. | Issue: Chroma tests are buggy | https://api.github.com/repos/langchain-ai/langchain/issues/13087/comments | 2 | 2023-11-08T20:56:30Z | 2024-02-14T16:06:18Z | https://github.com/langchain-ai/langchain/issues/13087 | 1,984,384,651 | 13,087 |
[
"hwchase17",
"langchain"
]
| ### System Info
Traceback:
File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "/app/demo-chatbot/updatedapp.py", line 351, in <module>
asyncio.run(main())
File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/app/demo-chatbot/updatedapp.py", line 214, in main
vectors = file.get_vector()
File "/app/demo-chatbot/file.py", line 39, in get_vector
self.save_vector()
File "/app/demo-chatbot/file.py", line 30, in save_vector
embeddings = OpenAIEmbeddings(openai_api_key=st.secrets["OPEN_AI_KEY"])
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/home/appuser/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 166, in validate_environment
values["client"] = openai.Embedding
### Who can help?
@agola11 OpenAI changed the code https://platform.openai.com/docs/guides/embeddings/what-are-embeddings
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just follow a funciton similar to
def save_vector(self):
self.create_images()
self.extract_text()
loader = TextLoader(f"{self.dir}/info.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectors = FAISS.from_documents(documents, embeddings)
### Expected behavior
No traceback adding .create | OpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/13082/comments | 4 | 2023-11-08T19:37:37Z | 2024-02-15T16:06:55Z | https://github.com/langchain-ai/langchain/issues/13082 | 1,984,273,473 | 13,082 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain cloned
peortry with python v3.11.4
openai v1.1.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
cloned the repositpry
cd to `libs/langchain`
```
poetry install --with test
```
```
make test
```
output
```
....
FAILED tests/unit_tests/llms/test_anyscale.py::test_api_key_is_secret_string - AttributeError: module 'openai' has no attribute 'ChatCompletion'
FAILED tests/unit_tests/llms/test_anyscale.py::test_api_key_masked_when_passed_from_env - AttributeError: module 'openai' has no attribute 'ChatCompletion'
FAILED tests/unit_tests/llms/test_anyscale.py::test_api_key_masked_when_passed_via_constructor - AttributeError: module 'openai' has no attribute 'ChatCompletion'
FAILED tests/unit_tests/llms/test_gooseai.py::test_api_key_is_secret_string - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_gooseai.py::test_api_key_masked_when_passed_via_constructor - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_gooseai.py::test_api_key_masked_when_passed_from_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_model_param - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_model_kwargs - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_incorrect_field - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_retries - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/llms/test_openai.py::test_openai_async_retries - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_openai_llm - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_llmchain - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_llmchain_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_dump.py::test_serialize_llmchain_with_non_serializable_arg - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_openai_llm - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_llmchain - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_llmchain_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_loads_llmchain_with_non_serializable_arg - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_openai_llm - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_llmchain - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_llmchain_env - AttributeError: module 'openai' has no attribute 'Completion'
FAILED tests/unit_tests/load/test_load.py::test_load_llmchain_with_non_serializable_arg - AttributeError: module 'openai' has no attribute 'Completion'
==================================================================================== 23 failed, 1348 passed, 270 skipped, 24 warnings in 16.11s =========================================================
```
### Expected behavior
All tests should pass or skipped without failing. | Tests are failing in local development | https://api.github.com/repos/langchain-ai/langchain/issues/13081/comments | 5 | 2023-11-08T19:12:43Z | 2024-02-14T16:06:28Z | https://github.com/langchain-ai/langchain/issues/13081 | 1,984,239,004 | 13,081 |
[
"hwchase17",
"langchain"
]
| ### System Info
version 0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
omit
### Expected behavior
I would like to use BAAI/bge-reranker-large model as the reranker to rerank the intial retrieval results,
how to use this model with langchain ?
I have seen the demo with Cohere ranker,
but how to use ohter reranker models?
Does langchain support other reranker models except for CohereReranker? | how to use reranker model with langchain in retrievalQA case? | https://api.github.com/repos/langchain-ai/langchain/issues/13076/comments | 20 | 2023-11-08T18:06:30Z | 2024-07-31T16:06:20Z | https://github.com/langchain-ai/langchain/issues/13076 | 1,984,144,941 | 13,076 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Using LCEL as suggested on the [docs](https://python.langchain.com/docs/modules/agents/), combined with `AgentExecutor`, generates a typing error when passing the runnable agent to the `AgentExecutor` constructor. This is because `AgentExecutor` defines its `agent` property as of type `BaseSingleActionAgent | BaseMultiActionAgent`:
https://github.com/langchain-ai/langchain/blob/55aeff6777431dc24e48f018e39aa418f95a6489/libs/langchain/langchain/agents/agent.py#L728-L731
So:
```python
agent = {
...
} | prompt | llm | OpenAIFunctionsAgentOutputParser()
# agent would be of type Runnable[Unknown, Unknown]
# but the typing on AgentExecutor only takes a BaseSingleActionAgent
# or a BaseMultiActionAgent as a valid agent
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# ↳ "Runnable[Unknown, Unknown]" is incompatible with "BaseSingleActionAgent"
# ↳ "Runnable[Unknown, Unknown]" is incompatible with "BaseMultiActionAgent"
```
### Suggestion:
`AgentExecutor` should accept a `Runnable` for its agent property | Issue: AgentExecutor typings should accept Runnable for the agent property (to support LCEL agent) | https://api.github.com/repos/langchain-ai/langchain/issues/13075/comments | 3 | 2023-11-08T18:05:14Z | 2024-05-21T05:05:21Z | https://github.com/langchain-ai/langchain/issues/13075 | 1,984,143,084 | 13,075 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, I'd like to request a feature for a `AnthropicFunctionsAgent` built on top of `AnthropicFunctions` and ideally compatible with `create_conversational_retrieval_agent`.
### Motivation
Everyone working with Anthropic models could use an Agents class!
### Your contribution
Can't currently. | [Feature Request] AnthropicFunctionsAgent | https://api.github.com/repos/langchain-ai/langchain/issues/13073/comments | 2 | 2023-11-08T17:29:21Z | 2024-02-14T16:06:33Z | https://github.com/langchain-ai/langchain/issues/13073 | 1,984,087,808 | 13,073 |
[
"hwchase17",
"langchain"
]
| ### System Info
python==3.11
langchain==0.0.326
ollama==v0.1.8
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Steps to reproduce the behavior:
```
llm = Ollama(model='llama2')
r = await llm.agenerate(['write a limerick about babies'])
print('\n'.join([t[0].text for t in r.generations]))
```
Generates the following output:
```
[/path/to/python3.11/site-packages/langchain/llms/ollama.py:164](https://file+.vscode-resource.vscode-cdn.net/Users/gburns/miniconda3/envs/alhazen/lib/python3.11/site-packages/langchain/llms/ollama.py:164): RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
run_manager.on_llm_new_token(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Sure! Here is a limerick about babies:
everybody loves babies, they're so sweet;
their coos and snuggles can't be beat.
They suck their thumbs and play with toes,
and bring joy to all who sees.
```
### Expected behavior
The generation should occur without the warning.
The error is due to the `_OllamaCommon._stream_with_aggregation(` function not being able to distinguish between being called in a blocking or an async context.
The reason this is important is that sometimes Ollama gets stuck in a generation (taking a long time to complete) and I would like to be able to call a timeout on the underlying process. The following code can do this, but we get the warning as previously described (note, this requires that ollama be running in the background).
```
def _callback(fut: asyncio.Future):
if fut.cancelled() or not fut.done():
print("Timed out! - Terminating server")
fut.cancel()
async def run_llm(llm, prompt, timeout=300):
# create task
task = asyncio.create_task(llm.agenerate([prompt]))
task.add_done_callback(_callback)
# try to await the task
try:
r = await asyncio.wait_for(task, timeout=timeout)
except asyncio.TimeoutError as ex:
print(ex)
if r is not None:
return '\n'.join([t[0].text for t in r.generations])
else:
return ''
text = await llm.agenerate(['write a limerick about babies'])
print(text)
```
| Running Ollama asynchronously generates a warning: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited (langchain/llms/ollama.py:164) | https://api.github.com/repos/langchain-ai/langchain/issues/13072/comments | 5 | 2023-11-08T17:23:48Z | 2024-04-02T16:06:09Z | https://github.com/langchain-ai/langchain/issues/13072 | 1,984,077,044 | 13,072 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Related to the #13036.
I've searched in the LangChain documentation by "Search" button and didn't find existing example which was in `cookbooks`.
### Idea or request for content:
Is it possible to add `cookbooks` into the Documentation search? Or maybe right into the Main menu? | DOC: no `cookbooks` search | https://api.github.com/repos/langchain-ai/langchain/issues/13070/comments | 4 | 2023-11-08T16:48:44Z | 2024-02-07T16:43:40Z | https://github.com/langchain-ai/langchain/issues/13070 | 1,984,016,062 | 13,070 |
[
"hwchase17",
"langchain"
]
| ### System Info
Unix and Windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
pip install langchain==0.0.79
pip install langchain>=0.0.331
```
### Expected behavior
I would expect to install the latest version 0.0.331 but its not. Seems pip undestands that 0.0.79 is a higher version than 0.0.331? | PyPI versions issue e.g. langchain==0.0.331 is not newer than langchain==0.0.79 | https://api.github.com/repos/langchain-ai/langchain/issues/13069/comments | 3 | 2023-11-08T16:23:02Z | 2023-11-16T12:28:10Z | https://github.com/langchain-ai/langchain/issues/13069 | 1,983,971,005 | 13,069 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The agent(STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION) sometimes does not respond the "Final Answer" instead of the AI's thought.
### Actual result:
The human is asking for the default xxxxxx. The tool has provided the information. I will now relay this information to the human.
### Expected result:
{
"action": "Final Answer",
"action_input": "The information comes from the tool."
}
### System info:
AzureChatOpenAI / gpt-4
This is a sometimes issue. I have tried with the smith playground, it can happen as well. Any thoughts about it? | Issue: STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION sometimes not return the "Final Answer" | https://api.github.com/repos/langchain-ai/langchain/issues/13065/comments | 2 | 2023-11-08T15:22:08Z | 2024-02-16T16:07:16Z | https://github.com/langchain-ai/langchain/issues/13065 | 1,983,852,453 | 13,065 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version pinned to "^0.0.278" (using Poetry)
Python 3.11.5
Other modules from langchain (such as langchain.cache and langchain.chains) are imported within the same file in the application code and are able to be found. Only the `langchain.globals` module is not being recognized
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Import langchain.globals
Error happens on application startup
### Expected behavior
Expected to be able to find the module, as langchain's other module's are able to be found by the same application code | Seeing an error ModuleNotFoundError: No module named 'langchain.globals' | https://api.github.com/repos/langchain-ai/langchain/issues/13064/comments | 3 | 2023-11-08T15:22:05Z | 2024-05-05T16:05:37Z | https://github.com/langchain-ai/langchain/issues/13064 | 1,983,852,320 | 13,064 |
[
"hwchase17",
"langchain"
]
| ### System Info
Using Google Colab Free version with T4 GPU.
chromadb==0.4.16
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As per the latest Chromadb migration logs ([link](https://docs.trychroma.com/migration#migration-to-0416---november-7-2023)) `EmbeddingFunction` defnition has been updated and it affects all the custom made embedding function.
What this means is the `langchain.embeddings.HuggingFaceBgeEmbeddings` is inconsistent with this new definition and throws the following error:
```py
ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
```
The above code can be reproduced by inserting documents into Chromadb embedded using `HuggingFaceBgeEmbeddings` like so:
```py
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceBgeEmbeddings
from transformers import AutoTokenizer
embedding_function = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-base-en-v1.5",
model_kwargs={'device': 'cuda'},
encode_kwargs={'normalize_embeddings': True},
query_instruction="Represent this sentence for searching relevant passages: "
)
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-base-en-v1.5')
text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
tokenizer, chunk_size=100, chunk_overlap=0
)
text = 'Some text that needs to be embedded.'
print(len(embedding_function.embed_query(text))) # works so far
splits = text_splitter.create_documents([text])
db = Chroma.from_documents(splits, embedding_function, persist_directory="./chroma_db")
```
I am not sure, but the answer might lie in correcting the `HuggingFaceBgeEmbeddings` class : [link](https://github.com/langchain-ai/langchain/blob/1f27104626fc71a5199df965011810426dd2eede/libs/langchain/langchain/embeddings/huggingface.py#L188) ?
### Expected behavior
The expected behaviour would have made a valid `db` object upon running the code
```py
db = Chroma.from_documents(splits, embedding_function, persist_directory="./chroma_db")
| ChromaDb EmbeddingFunction definition updated | https://api.github.com/repos/langchain-ai/langchain/issues/13061/comments | 11 | 2023-11-08T14:11:41Z | 2024-08-02T17:38:51Z | https://github.com/langchain-ai/langchain/issues/13061 | 1,983,705,244 | 13,061 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When initializing the langchain UntructuredPDFLoader e.g. as follows
` loader = UnstructuredPDFLoader(downloaded_file, mode='elements')`
This method calls the following function (see langchain/document_loaders/pdf.py):
` class UnstructuredPDFLoader(UnstructuredFileLoader): `
`def _get_elements(self) -> List: `
from unstructured.partition.pdf import partition_pdf
return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)
The function `partition_pdf()` from Unstructured allows one to decide between passing either a file_path to a file in storage, or alternatively a ByteStream pointing to a file in memory but it does not allow one to pass both. Langchain forces users to pass the parameter ` file_path `and thus one cannot use the option of using a stream to load a file (as Unstructured doesn't expect both a file_path and a stream).
### Suggestion:
Remove the part which forces one to pass a ` file_path ` to UnstructuredPDFLoader initializiation. With this change, users can decide to pass a Stream in the `unstructured_kwargs ` field and thus use the loader.
To test this I rewrote the _get_elements function as follows and like this it works to pass a stream:
`
def _get_elements(self) -> List: `
` from unstructured.partition.pdf import partition_pdf
return partition_pdf(**self.unstructured_kwargs) ` | Issue: UnstructuredPDFLoader doesn't support Unstructured functionalities | https://api.github.com/repos/langchain-ai/langchain/issues/13060/comments | 1 | 2023-11-08T13:38:53Z | 2024-02-14T16:06:43Z | https://github.com/langchain-ai/langchain/issues/13060 | 1,983,638,100 | 13,060 |
[
"hwchase17",
"langchain"
]
| I have 4 tools which return responses of apis inside function. Now I want to build a system that only returns the response of api without any observations. Also agents should have memories.
Is it possible with a langchain agent. If yes can you tell me how ??? | Langchain agent which only returns tools response without observations. | https://api.github.com/repos/langchain-ai/langchain/issues/13059/comments | 4 | 2023-11-08T13:05:28Z | 2024-02-14T16:06:48Z | https://github.com/langchain-ai/langchain/issues/13059 | 1,983,572,989 | 13,059 |
[
"hwchase17",
"langchain"
]
| ### System Info
last version of langchain and cohere
### Who can help?
@@agola11
I am encountering an error related to the user_agent when attempting to create a CohererRank object in the LangChain p. I have verified that I am passing a valid cohere_api_key and can successfully use the rerank function from Cohere directly. However, when trying to utilize it through the LangChain project, I encounter this specific issue related to the user_agent.
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/document_compressors/cohere_rerank.py](https://localhost:8080/#) in validate_environment(cls, values)
53 values, "cohere_api_key", "COHERE_API_KEY"
54 )
---> 55 client_name = values["user_agent"]
56 values["client"] = cohere.Client(cohere_api_key, client_name=client_name)
57 return values
KeyError: 'user_agent'
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os
import getpass
os.environ["COHERE_API_KEY"] = getpass.getpass("Cohere API Key:")
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
compressor = CohereRerank()
### Expected behavior
i want to create an object of CohereReRank. | Issue with user_agent error when creating a CohereReRank object in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/13058/comments | 3 | 2023-11-08T12:38:34Z | 2024-03-13T19:58:09Z | https://github.com/langchain-ai/langchain/issues/13058 | 1,983,512,624 | 13,058 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version = 0.0.331
Openai version = 1.1.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```Error creating LLM: module 'openai' has no attribute 'Embedding'
aai_app | Error creating LLM: module 'openai' has no attribute 'Embedding'
aai_app | Traceback (most recent call last):
aai_app | File "/code/aai/apps/slack_bot/views.py", line 160, in call_open_ai
aai_app | reply = open_ai.execute_query(question)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/open_ai.py", line 130, in execute_query
aai_app | named_entity_recognition = NamedEntityRecognition(
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/named_entity_recognition.py", line 41, in __init__
aai_app | self.llm_embeddings = self.llm.create_llm(
aai_app | ^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/llm_manager.py", line 63, in create_llm
aai_app | return OpenAIEmbeddings(
aai_app | ^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
aai_app | values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 1050, in validate_model
aai_app | input_data = validator(cls_, input_data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 284, in validate_environment
aai_app | values["client"] = openai.Embedding
aai_app | ^^^^^^^^^^^^^^^^
aai_app | AttributeError: module 'openai' has no attribute 'Embedding'
aai_app |
aai_app | module 'openai' has no attribute 'Embedding'
aai_app | Traceback (most recent call last):
aai_app | File "/code/aai/apps/slack_bot/views.py", line 160, in call_open_ai
aai_app | reply = open_ai.execute_query(question)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/open_ai.py", line 130, in execute_query
aai_app | named_entity_recognition = NamedEntityRecognition(
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/named_entity_recognition.py", line 41, in __init__
aai_app | self.llm_embeddings = self.llm.create_llm(
aai_app | ^^^^^^^^^^^^^^^^^^^^
aai_app | File "/code/aai/apps/affiliateai/services/open_ai/llm_manager.py", line 63, in create_llm
aai_app | return OpenAIEmbeddings(
aai_app | ^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
aai_app | values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 1050, in validate_model
aai_app | input_data = validator(cls_, input_data)
aai_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
aai_app | File "/usr/local/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 284, in validate_environment
aai_app | values["client"] = openai.Embedding
aai_app | ^^^^^^^^^^^^^^^^
aai_app | AttributeError: module 'openai' has no attribute 'Embedding'```
### Expected behavior
Not to see this error. | Updated to latest langchain version but still getting OpenAI embeddings error | https://api.github.com/repos/langchain-ai/langchain/issues/13056/comments | 6 | 2023-11-08T12:01:41Z | 2024-02-19T16:07:45Z | https://github.com/langchain-ai/langchain/issues/13056 | 1,983,440,119 | 13,056 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.5
Langchain (pip show) 0.0.327
Windows OS
Visual Studio Code
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I searched and was surprised this has not come up.
I am using LangChain for a RAG workflow - and when I send a document, if that document contains { } - it throws a missing key error - it is treating the content of the document, as it would a normal prompt where you might have "question {question}" and expect an input key of 'question' it then reports back that all of the { } are in fact different missing keys.
For example, my data contains this:
`"...1 2 ------------------------------------ {w14 w15 w16se w16cid w16 w16cex w16sdtdh wp14}{DP}{AD}{S::}"`
It will say that we are missing numerous keys:
`ValueError: Missing some input keys: {'AD', 'w14 w15 w16se w16cid w16 w16cex w16sdtdh wp14', ...}`
Now, I can clean the data prior to sending, but I was wondering whether it should behave like this given that this document is already within { } as content?
I use the "FewShotPromptTemplate" to create a prompt which includes a "Suffix" and my suffix is:
```
def get_suffix():
return """
Document: {content}
Question: {question}
"""
```
Here content is the content of the document that contains the { } set out above.
I build the prompt like this:
```
prompt_template = FewShotPromptTemplate(
examples = examples,
example_prompt = get_prompt_template(example_template, example_variables),
prefix = prefix,
suffix = suffix,
input_variables = input_variables
)
prompt = prompt_template.format(question=question, context=context)
return prompt
```
I also did a test using another piece of code:
```
document_context = text_response + "{AD}"
prompt = ChatPromptTemplate.from_template("my_specific_prompt": {document}.\n{format_instructions}")
formated_prompt = prompt.format(**{"document": document_context, "format_instructions":output_parser.get_format_instructions()})
```
Introducing a random {AD} into the text response. It did not fail. It messed up the results, but it didn't actually cause any missing input key errors.
So this may be limited to the FewShotPromptTemplate?
### Expected behavior
I would have thought that anything passed within a curly bracket set would be considered as plain text, not parsed for further keys that might be embedded in that curly bracket set and throw an error when it cannot find them?
Maybe I am wrong, but that is what I would have expected and is what appears to happen when using the ChatPromptTemplate.from_template? | ValueError: Missing some input keys: - passed data requires input keys if containing { } | https://api.github.com/repos/langchain-ai/langchain/issues/13055/comments | 3 | 2023-11-08T12:01:05Z | 2023-11-08T18:52:53Z | https://github.com/langchain-ai/langchain/issues/13055 | 1,983,439,114 | 13,055 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
response_schemas = [
ResponseSchema(type='list', name='disease', description='disease name')
]
**prompt**
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
```json
{
"disease": list // disease name
}
```
**output**
{
"disease": "感冒" // disease name
}
This output cannot be parsed by StructuredOutputParser,Because it contains notes。
### Expected behavior
Expect to generate prompt words:
**prompt**
The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```",and the field "disease" meas "disease name".
```json
{
"disease": list
}
```
Expected output:
{
"disease": "感冒"
} | The prompt word format misleads the output content of the large model | https://api.github.com/repos/langchain-ai/langchain/issues/13054/comments | 5 | 2023-11-08T09:54:19Z | 2024-04-04T15:31:33Z | https://github.com/langchain-ai/langchain/issues/13054 | 1,983,196,438 | 13,054 |
[
"hwchase17",
"langchain"
]
| ### System Info
# Hi there,
I have started learning about Langchain today. I was creating my first langchain prompt template but something doesn't seem to work.
Here is my code in main.py:
```python
from openai import OpenAI
from dotenv import load_dotenv
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os
# Load the .env file
load_dotenv()
# Create an instance of the OpenAI class
def generate_pet_name(animal_type="dog"):
prompt_template = PromptTemplate(
input_variables=['animal_type'],
template="I have a {animal_type} as my pet. Suggest me a name for it.",
)
name_chain = LLMChain(
llm=OpenAI(),
prompt=prompt_template,
)
response = name_chain({'animal_type': animal_type})
print(response)
if __name__ == "__main__":
generate_pet_name(animal_type="dog")
```
While I think the code is okay and I have followed the GitHub get started it doesn't seem to work and throwing me this error:
```console
Traceback (most recent call last):
File "/Users/ss/Workspace/Ai/AMP/main.py", line 28, in <module>
generate_pet_name(animal_type="dog")
File "/Users/ss/Workspace/Ai/AMP/main.py", line 18, in generate_pet_name
name_chain = LLMChain(
^^^^^^^^^
File "/Users/ss/Workspace/Ai/AMP/.venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/Users/ss/Workspace/Ai/AMP/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
Please Help me as I am not that proficient in Python.
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just copy the code given there and run it. You will see the error.
### Expected behavior
Run the code and get a response without error | OpenAI instance of Runnable expected | https://api.github.com/repos/langchain-ai/langchain/issues/13053/comments | 7 | 2023-11-08T09:27:14Z | 2024-02-17T07:04:10Z | https://github.com/langchain-ai/langchain/issues/13053 | 1,983,141,722 | 13,053 |
[
"hwchase17",
"langchain"
]
| ### System Info
AWS Sagemaker DataScience3.0 Image.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
here is my code, it worked before Nov 7th.
`Chroma.from_documents(documents=document, embedding=embeddings,)`
Then i get this error:
`ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023 `
### Expected behavior
Is anyone know how to fix this? | Bug after the openai updated in Embedding | https://api.github.com/repos/langchain-ai/langchain/issues/13051/comments | 23 | 2023-11-08T07:56:36Z | 2024-04-02T12:17:34Z | https://github.com/langchain-ai/langchain/issues/13051 | 1,982,969,374 | 13,051 |
[
"hwchase17",
"langchain"
]
| ### System Info
Running on google colab.
Everything was working up until today, which makes me think it's openAi update-related.
Versions:
Requirement already satisfied: langchain in /usr/local/lib/python3.10/dist-packages (**0.0.331**)
Requirement already satisfied: chromadb in /usr/local/lib/python3.10/dist-packages (**0.4.16**)
Openai version pinned to 0.28.1 as @hwchase17 recommended prior -- which had fixed my embeddings issue.
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python3
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents=splits, embedding_function=embeddings, persist_directory='/content/wtf')
vectorstore.persist()
retriever = vectorstore.as_retriever()
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-5-d086aa9b6782>](https://localhost:8080/#) in <cell line: 7>()
5
6 embeddings = OpenAIEmbeddings()
----> 7 vectorstore = Chroma.from_documents(documents=splits, embedding_function=embeddings, persist_directory='/content/wtf')
8 vectorstore.persist()
9 retriever = vectorstore.as_retriever()
1 frames
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
618 Chroma: Chroma vectorstore.
619 """
--> 620 chroma_collection = cls(
621 collection_name=collection_name,
622 embedding_function=embedding,
TypeError: langchain.vectorstores.chroma.Chroma() got multiple values for keyword argument 'embedding_function'
### Expected behavior
It should run without an error and correctly embed the document splits, outputting the data in the persist directory. | Multiple values for keyword argument 'embedding_function' | https://api.github.com/repos/langchain-ai/langchain/issues/13050/comments | 4 | 2023-11-08T07:26:31Z | 2024-04-05T19:39:47Z | https://github.com/langchain-ai/langchain/issues/13050 | 1,982,926,699 | 13,050 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows WSL 2 Ubuntu
Python 3.10.6
langchain-0.0.331 langchain-cli-0.0.15 langserve-0.0.24 langsmith-0.0.60
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In a new virtual environment with Python 3.10.6, After running:
`pip install -U "langchain-cli[serve]"`
which installs "langchain-0.0.331 langchain-cli-0.0.15 langserve-0.0.24 langsmith-0.0.60" as shown below:
Successfully installed PyYAML-6.0.1 SQLAlchemy-2.0.23 aiohttp-3.8.6 aiosignal-1.3.1 annotated-types-0.6.0 anyio-3.7.1 async-timeout-4.0.3 attrs-23.1.0 certifi-2023.7.22 charset-normalizer-3.3.2 click-8.1.7 colorama-0.4.6 dataclasses-json-0.6.1 exceptiongroup-1.1.3 fastapi-0.104.1 frozenlist-1.4.0 gitdb-4.0.11 gitpython-3.1.40 greenlet-3.0.1 h11-0.14.0 httpcore-1.0.1 httpx-0.25.1 httpx-sse-0.3.1 idna-3.4 jsonpatch-1.33 jsonpointer-2.4 langchain-0.0.331 langchain-cli-0.0.15 langserve-0.0.24 langsmith-0.0.60 markdown-it-py-3.0.0 marshmallow-3.20.1 mdurl-0.1.2 multidict-6.0.4 mypy-extensions-1.0.0 numpy-1.26.1 packaging-23.2 pydantic-2.4.2 pydantic-core-2.10.1 pygments-2.16.1 requests-2.31.0 rich-13.6.0 shellingham-1.5.4 smmap-5.0.1 sniffio-1.3.0 sse-starlette-1.6.5 starlette-0.27.0 tenacity-8.2.3 tomli-2.0.1 typer-0.9.0 typing-extensions-4.8.0 typing-inspect-0.9.0 urllib3-2.0.7 uvicorn-0.23.2 yarl-1.9.2
I try to run `langchain app new my-app --package rag-conversation`, and get the following error
Traceback (most recent call last):
File "/home/mert/REPOSITORIES/GenAI/langserve-template/langserve-env/bin/langchain", line 5, in <module>
from langchain_cli.cli import app
File "/home/mert/REPOSITORIES/GenAI/langserve-template/langserve-env/lib/python3.10/site-packages/langchain_cli/cli.py", line 6, in <module>
from langchain_cli.namespaces import app as app_namespace
File "/home/mert/REPOSITORIES/GenAI/langserve-template/langserve-env/lib/python3.10/site-packages/langchain_cli/namespaces/app.py", line 12, in <module>
from langserve.packages import get_langserve_export
ModuleNotFoundError: No module named 'langserve.packages'
### Expected behavior
Pulling template into a local repository | No module named 'langserve.packages' when creating langchain-cli apps | https://api.github.com/repos/langchain-ai/langchain/issues/13049/comments | 12 | 2023-11-08T07:02:02Z | 2023-11-14T18:24:49Z | https://github.com/langchain-ai/langchain/issues/13049 | 1,982,890,810 | 13,049 |
[
"hwchase17",
"langchain"
]
|
from langchain.document_loaders import UnstructuredExcelLoader
loader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")
docs = loader.load()
docs[0]
in the above it gives ouput as
Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table'})
when i pass the above to CharacterTextSplitter, it gives error because it expects in the different format.
text_splitter = CharacterTextSplitter(chunk_overlap=0, chunk_size=1000)
texts = text_splitter.split_documents(docs[0])
### Suggestion:
_No response_ | Issue: UnstructuredExcelLoader | https://api.github.com/repos/langchain-ai/langchain/issues/13047/comments | 3 | 2023-11-08T06:49:27Z | 2024-02-14T16:07:08Z | https://github.com/langchain-ai/langchain/issues/13047 | 1,982,871,020 | 13,047 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
新的chatgpt可以直接帮助人类分析,规划,似乎已经脱离了原本的软件开发模式,从人设计,到了ai设计了,langchain是不是已经过时了?
### Suggestion:
_No response_ | 看了新的OpenAI开发者大会,langchain还有存在的必要吗 | https://api.github.com/repos/langchain-ai/langchain/issues/13046/comments | 6 | 2023-11-08T06:46:32Z | 2024-02-14T16:07:13Z | https://github.com/langchain-ai/langchain/issues/13046 | 1,982,867,010 | 13,046 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using github tools along with an agent following example given (https://python.langchain.com/docs/integrations/toolkits/github), but the comment_on_issue tool is not able to parse the Action Input given by agent (the format seems to be same as given in the prompts), there seems to be some issue with escap sequnces.
Action Input: 2\n\nStarting to review docstrings and add sphinx\n\n
Error: ValueError: invalid literal for int() with base 10: '2\n\nStarting to review docstrings and add sphinx\n\n'
I think there might be some issue decoding (extra \ added to newline character). can anyone pls help with this?
### Suggestion:
_No response_ | help: github util `comment_on_issue` unable to parse `action input` from agent | https://api.github.com/repos/langchain-ai/langchain/issues/13045/comments | 4 | 2023-11-08T06:38:13Z | 2023-11-12T18:52:11Z | https://github.com/langchain-ai/langchain/issues/13045 | 1,982,856,821 | 13,045 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
Below is my code and everytime I ask it a question, it rephrases the question then answers it for me. Help me to remove the rephrasing part. I did set it to False yet it still does it.
Also, I would like to return the source of the documents but its showing me this error:
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 178, in
main()
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 172, in main
result = qa({"question": user_input})
File "C:\Users\Asus\Documents\Vendolista.venv\lib\site-packages\langchain\chains\base.py", line 294, in call
final_outputs: Dict[str, Any] = self.prep_outputs(
File "C:\Users\Asus\Documents\Vendolista.venv\lib\site-packages\langchain\chains\base.py", line 390, in prep_outputs
self.memory.save_context(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista.venv\lib\site-packages\langchain\memory\chat_memory.py", line 35, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista.venv\lib\site-packages\langchain\memory\chat_memory.py", line 27, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
Below is my code
import os
import json
import pandas as pd
LLM
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
Prompt
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.chat import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
Embeddings
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
Chain
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.document_loaders.csv_loader import CSVLoader, UnstructuredCSVLoader
from langchain.document_loaders import DirectoryLoader
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from dotenv import load_dotenv
import time
import pandas as pd
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
load_dotenv()
file_path = "C:\Users\Asus\Documents\Vendolista\home_depot_data.csv"
path = "C:\Users\Asus\Documents\Vendolista\home depot"
csv_loader = CSVLoader(file_path=path, encoding='utf-8')
csv_loader = DirectoryLoader(path,
glob="**/*.csv",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = CSVLoader)
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
documents = csv_loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=200,
chunk_overlap=50,
)
chunks = text_splitter.split_documents(documents)
chunks = documents
embeddings = OpenAIEmbeddings()
persist_directory = "C:\Users\Asus\OneDrive\Documents\Vendolista"
knowledge_base = Chroma(embedding_function=embeddings, persist_directory=persist_directory)
# Split the chunks into smaller batches
batch_size = 5000
for i in range(0, len(chunks), batch_size):
batch = chunks[i:i+batch_size]
knowledge_base.add_documents(batch)
Save the vector store to disk
knowledge_base.persist()
Load the vector store from disk
knowledge_base = Chroma(chunks, persist_directory=persist_directory, embedding_function=embeddings)
class Product(BaseModel):
"""Product details schema."""
url:str = Field(description="Full URL link to the product webpage on Homedepot.")
title:str = Field(description="Title of the product.")
description:str = Field(description="Description of the prodcut.")
brand:str = Field(description="Manufacturing brand of the product.")
price:float = Field(description="Unit selling price of the product.")
parser = PydanticOutputParser(pydantic_object=Product)
question_template = """ Make sure you understand the question as its very important for the user.
You never know what situation they are in and you need to ensure that its understood very well but do not repeat
or rewrite the question
Input: {question}
"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(question_template)
Chain for question generation
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
Chat Prompt
system_template = """
You are a friendly, conversational retail shopping assistant named RAAFYA.
You will always and always and always only follow these set of rules and nothing else no matter what:
You will provide the user answers based on the csv file that you can only read from which is called "home_depot_data.csv"
You will never mention the name of the dataset that you have. Just say "my data" instead
Focus 100% to understand exactly what the customer is looking for and only give him whats available based on the data.
Do not get anything or say anything that is not related to the data that you have and never provide wrong information.
Use the following context including product name descriptions, and keywords to show the shopper whats available,
help find what they want, and answer their questions related to your job
Never ever consider or think or even mention that you do not have access to the internet because it is not your job
and it is not your task. I will repeat it again and again, your information is only and only coming from the dataset
that you have which is called "home_depot_data.csv" but you must not mention that to anyone for security purposes
Everyime you answer a question, write on a new line "is there anything else you would like me to help you with?"
If a customer asked for a product and it is not available then say "Sorry it is currently unavailable but you can
reach out to our staff and ask them about it at [[email protected]](mailto:[email protected])"
If the person asked for more details then provide him the details based on the output parser that you have:
URL:
Title:
Description:
Brand:
Price:
Context:
{context}
"""
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
Human Prompt
human_template="""{format_instructions}
Question: {question}"""
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
Inject instructions into the prompt template.
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template=human_template,
input_variables=["question"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
Chain for Q&A
answer_chain = load_qa_chain(llm,
chain_type="stuff",
prompt=chat_prompt)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
Chain
qa = ConversationalRetrievalChain(
retriever = knowledge_base.as_retriever(),
question_generator = question_generator,
combine_docs_chain = answer_chain,
memory=memory,
rephrase_question=False,
return_source_documents=True
)
def main():
while True:
user_input = input("What would you like to shop for: ")
if user_input.lower() in ["exit"]:
break
if user_input != "":
with get_openai_callback() as cb:
result = qa({"question": user_input})
print()
# print(cb)
# print()
if name == "main":
main() | My LLM keeps rephrasing the question and it doesnt return source documents | https://api.github.com/repos/langchain-ai/langchain/issues/13044/comments | 6 | 2023-11-08T06:06:50Z | 2024-02-27T01:13:22Z | https://github.com/langchain-ai/langchain/issues/13044 | 1,982,815,371 | 13,044 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
Below is my code and everytime I ask it a question, it rephrases the question then answers it for me. Help me to remove the rephrasing part. I did set it to False yet it still does it.
Also, I would like to return the source of the documents but its showing me this error:
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 178, in <module>
main()
File "C:\Users\Asus\Documents\Vendolista\hacka.py", line 172, in main
result = qa({"question": user_input})
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\chains\base.py", line 294, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\chains\base.py", line 390, in prep_outputs
self.memory.save_context(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\memory\chat_memory.py", line 35, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "C:\Users\Asus\Documents\Vendolista\.venv\lib\site-packages\langchain\memory\chat_memory.py", line 27, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
Below is my code
import os
import json
import pandas as pd
# LLM
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
# Prompt
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.chat import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
# Embeddings
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
# Chain
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.document_loaders.csv_loader import CSVLoader, UnstructuredCSVLoader
from langchain.document_loaders import DirectoryLoader
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from dotenv import load_dotenv
import time
import pandas as pd
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
load_dotenv()
file_path = "C:\\Users\\Asus\\Documents\\Vendolista\\home_depot_data.csv"
path = "C:\\Users\\Asus\\Documents\\Vendolista\\home depot"
# csv_loader = CSVLoader(file_path=path, encoding='utf-8')
csv_loader = DirectoryLoader(path,
glob="**/*.csv",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = CSVLoader)
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
documents = csv_loader.load()
# text_splitter = RecursiveCharacterTextSplitter(
# chunk_size=200,
# chunk_overlap=50,
# )
# chunks = text_splitter.split_documents(documents)
chunks = documents
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma(embedding_function=embeddings, persist_directory=persist_directory)
# # Split the chunks into smaller batches
# batch_size = 5000
# for i in range(0, len(chunks), batch_size):
# batch = chunks[i:i+batch_size]
# knowledge_base.add_documents(batch)
# Save the vector store to disk
knowledge_base.persist()
# Load the vector store from disk
knowledge_base = Chroma(chunks, persist_directory=persist_directory, embedding_function=embeddings)
class Product(BaseModel):
"""Product details schema."""
url:str = Field(description="Full URL link to the product webpage on Homedepot.")
title:str = Field(description="Title of the product.")
description:str = Field(description="Description of the prodcut.")
brand:str = Field(description="Manufacturing brand of the product.")
price:float = Field(description="Unit selling price of the product.")
parser = PydanticOutputParser(pydantic_object=Product)
question_template = """ Make sure you understand the question as its very important for the user.
You never know what situation they are in and you need to ensure that its understood very well but do not repeat
or rewrite the question
Input: {question}
"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(question_template)
# Chain for question generation
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
# Chat Prompt
system_template = """
You are a friendly, conversational retail shopping assistant named RAAFYA.
You will always and always and always only follow these set of rules and nothing else no matter what:
1) You will provide the user answers based on the csv file that you can only read from which is called "home_depot_data.csv"
2) You will never mention the name of the dataset that you have. Just say "my data" instead
3) Focus 100% to understand exactly what the customer is looking for and only give him whats available based on the data.
4) Do not get anything or say anything that is not related to the data that you have and never provide wrong information.
5) Use the following context including product name descriptions, and keywords to show the shopper whats available,
help find what they want, and answer their questions related to your job
6) Never ever consider or think or even mention that you do not have access to the internet because it is not your job
and it is not your task. I will repeat it again and again, your information is only and only coming from the dataset
that you have which is called "home_depot_data.csv" but you must not mention that to anyone for security purposes
7) Everyime you answer a question, write on a new line "is there anything else you would like me to help you with?"
8) If a customer asked for a product and it is not available then say "Sorry it is currently unavailable but you can
reach out to our staff and ask them about it at [email protected]"
9) If the person asked for more details then provide him the details based on the output parser that you have:
URL:
Title:
Description:
Brand:
Price:
Context:
{context}
"""
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
# Human Prompt
human_template="""{format_instructions}
Question: {question}"""
# human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
# Inject instructions into the prompt template.
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template=human_template,
input_variables=["question"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# Chain for Q&A
answer_chain = load_qa_chain(llm,
chain_type="stuff",
prompt=chat_prompt)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Chain
qa = ConversationalRetrievalChain(
retriever = knowledge_base.as_retriever(),
question_generator = question_generator,
combine_docs_chain = answer_chain,
memory=memory,
rephrase_question=False,
return_source_documents=True
)
def main():
while True:
user_input = input("What would you like to shop for: ")
if user_input.lower() in ["exit"]:
break
if user_input != "":
with get_openai_callback() as cb:
result = qa({"question": user_input})
print()
# print(cb)
# print()
if __name__ == "__main__":
main() | My llm keeps rephrasing question and it doesnt return source documents | https://api.github.com/repos/langchain-ai/langchain/issues/13043/comments | 1 | 2023-11-08T04:59:18Z | 2024-02-14T16:07:23Z | https://github.com/langchain-ai/langchain/issues/13043 | 1,982,743,038 | 13,043 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Langchain.llm matches the vllm model, can it also match the fastllm model?
### Suggestion:
_No response_ | fastllm model | https://api.github.com/repos/langchain-ai/langchain/issues/13037/comments | 2 | 2023-11-08T01:06:34Z | 2024-02-14T16:07:28Z | https://github.com/langchain-ai/langchain/issues/13037 | 1,982,528,151 | 13,037 |
[
"hwchase17",
"langchain"
]
| ### Feature request
[This idea](https://arxiv.org/abs/2310.06117) is promising and it can be implemented with LangChain
### Motivation
To add a new chaining technique.
### Your contribution
I'm not sure I can implement it. | Step-Back Prompting | https://api.github.com/repos/langchain-ai/langchain/issues/13036/comments | 4 | 2023-11-08T00:58:46Z | 2023-11-08T16:56:36Z | https://github.com/langchain-ai/langchain/issues/13036 | 1,982,516,375 | 13,036 |
[
"hwchase17",
"langchain"
]
| ### Feature request
[Chain-of-Verification](https://arxiv.org/pdf/2309.11495.pdf) looks like an interesting idea and LangChain can implement it.
### Motivation
to add one more effective method of the chaining.
### Your contribution
Not sure I can implement it. :( | Chain-of-Verification | https://api.github.com/repos/langchain-ai/langchain/issues/13035/comments | 10 | 2023-11-08T00:48:40Z | 2024-02-09T16:45:57Z | https://github.com/langchain-ai/langchain/issues/13035 | 1,982,507,472 | 13,035 |
[
"hwchase17",
"langchain"
]
| ### Description
Compatibility issue with the Langchain library due to the recent changes in the OpenAI Python package (version 1.1.1). The Langchain library relies on certain structures and imports from the OpenAI package, which have been modified in the new version. Specifically, the issue seems to be related to the following changes:
- In the Langchain code, the error handling imports in [langchain/llms/openai.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L83 ) at line 90 were based on the older structure of the OpenAI package. In the newer version, these imports have been restructured and are available in openai._exceptions.
- In [langchain/llms/openai.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L240 ) at line 266, values["client"] = openai.Completion is no longer valid in the new version of OpenAI (version 1.1.1).
```
!pip install langchain openai
from langchain import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "key"
llm = OpenAI(
model_name="text-davinci-003",
temperature= 0.2,
max_tokens= 64,
openai_api_key=os.environ["OPENAI_API_KEY"],
)
```

Also

**Note:**
To avoid the above error, users should downgrade the OpenAI package to version 0.28.1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
!pip install langchain openai
from langchain import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "key"
llm = OpenAI(
model_name="text-davinci-003",
temperature= 0.2,
max_tokens= 64,
openai_api_key=os.environ["OPENAI_API_KEY"],
)
### Expected behavior
Langchain should work without errors when using OpenAI version 1.1.1. | Compatibility Issue with OpenAI Python Package (Version 1.1.1) | https://api.github.com/repos/langchain-ai/langchain/issues/13027/comments | 15 | 2023-11-07T22:18:36Z | 2024-03-01T20:03:05Z | https://github.com/langchain-ai/langchain/issues/13027 | 1,982,332,029 | 13,027 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can I run embedding with "gte-large" on a multi-GPU machine?
Trying per issue #3486
```
model_name = "thenlper/gte-large"
encode_kwargs = {"normalize_embeddings": True}
model_kwargs = {"device": "cuda", "multi_process":True}
hf = HuggingFaceBgeEmbeddings(
model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs,
)
embedding = hf.embed_query("hi this is harrison")
len(embedding)
```
uses a single GPU
| Embedding on Multi-GPU | https://api.github.com/repos/langchain-ai/langchain/issues/13026/comments | 3 | 2023-11-07T21:39:50Z | 2024-02-13T16:06:47Z | https://github.com/langchain-ai/langchain/issues/13026 | 1,982,276,784 | 13,026 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Load complex PDFs similar to:
https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb
### Motivation
RAG apps that need complex data loading
### Your contribution
Add template | [Loader template] Unstructured pdf partition to vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/13024/comments | 3 | 2023-11-07T21:11:05Z | 2024-02-13T20:05:39Z | https://github.com/langchain-ai/langchain/issues/13024 | 1,982,234,376 | 13,024 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `add_texts` pipeline.
Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
* Thread pool for pinecone index
* Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
* Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
Similar to the #9855
One additional requirement is to set `flag` for single threaded vs multithreaded upsert implementation.
### Motivation
The function `add_texts` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
* Take a small batch ie 32/64 of documents
* Calculate embeddings --> WAIT
* Upsert a batch --> WAIT
We can benefit from using paralellised upsert.
### Your contribution
I will do it. | Support Pinecone Hybrid Search upsert parallelization | https://api.github.com/repos/langchain-ai/langchain/issues/13017/comments | 2 | 2023-11-07T19:54:54Z | 2024-02-13T16:06:57Z | https://github.com/langchain-ai/langchain/issues/13017 | 1,982,113,086 | 13,017 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent which uses a QA (retreival based) tool to answer questions. The QA tool returns references in the response. However the final answer provided by the agent is missing the sources. What can I do to ensure that sources are provided in the final_answer?
### Suggestion:
_No response_ | Issue: Sources are lost in the final_answer by agents | https://api.github.com/repos/langchain-ai/langchain/issues/13011/comments | 8 | 2023-11-07T18:07:48Z | 2024-02-23T16:07:02Z | https://github.com/langchain-ai/langchain/issues/13011 | 1,981,937,876 | 13,011 |
[
"hwchase17",
"langchain"
]
| ### System Info
https://python.langchain.com/docs/integrations/llms/huggingface_hub
I followed this documentation and increased max_length. But, It seems like the response max_length not increasing. At max, I would get the response text with 110 characters. Please help me to fix my issue
<img width="528" alt="image" src="https://github.com/langchain-ai/langchain/assets/68229944/7d47b36c-21f4-42df-bf0f-48a4f6f03a5a">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install dependencies and use the template code in the above link and screenshot
2. Try to increase the max length and check whether increasing max_length is refelecting in the response.
### Expected behavior
increasing max_length isn't reflecting. This value should enable to increase the generated response text length. | Max Characters doesn't increase when value is updated | https://api.github.com/repos/langchain-ai/langchain/issues/13009/comments | 3 | 2023-11-07T17:31:54Z | 2024-02-13T16:07:02Z | https://github.com/langchain-ai/langchain/issues/13009 | 1,981,874,805 | 13,009 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to create a rag using langchain, aws bedrock and opensearch. For I created an index using the code,
```
from opensearchpy import OpenSearch
aos_client = OpenSearch(
hosts=[{"host": opensearch_cluster_domain, "port": ops_port}],
http_auth=auth,
use_ssl=True,
verify_certs=True,
connection_class=RequestsHttpConnection,
pool_maxsize=20,
)
settings = {"settings": {"index": {"knn": True, "knn.space_type": "cosinesimil"}}}
response = aos_client.indices.create(index=index_name, body=settings)
```
For the retrieval part I am using langchain "RetrievalQA". Code for the same is
```
gen_qa = RetrievalQA.from_chain_type(
llm,
chain_type="stuff",
retriever=retriever.as_retriever(
search_kwargs={"k": int(k), "score_threshold": float(score_threshold)}
),
chain_type_kwargs=self.general_chain_type_kwargs,
return_source_documents=True,
)
```
I need to limit the docs returned from opensearch index based on similarity score. Eventhough I am giving the score threshold in search_kwargs, seems like it has no effect.
Also tried this code from the langchain [doc](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.html#langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch.similarity_search_with_relevance_scores)
```
docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
```
And this is throwing NotImplementedError.
Is there any way that I can say to the retriever that I only need relevant docs which has a similarity score greater than a certain threshold. Any help will be appreciated, thanks.
### Suggestion:
_No response_ | Issue: Opensearch retiever threshold based on similarity score | https://api.github.com/repos/langchain-ai/langchain/issues/13007/comments | 6 | 2023-11-07T16:52:58Z | 2024-02-17T16:06:23Z | https://github.com/langchain-ai/langchain/issues/13007 | 1,981,801,097 | 13,007 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi there,
Thank a lot for the amazing framework. More or less all vector stores must be used with `<CLASS>.from_documents` because the classmethod in almost every case does some weird pre processing (e.g. setting the vector config taken from only god knows where etc).
For instance
```python
vector_store = Qdrant(
client=QdrantClient(url=os.environ["VECTOR_DB_URL"])
collection_name=VECTOR_DB_COLLECTION_NAME,
embeddings=embeddings,
)
```
This won't work, since without the classmethod the collection is not even created
Thanks
### Motivation
So I can avoid creating 1k classes if I need to process 1k documents :)
### Your contribution
Raising the issue | Using Vector stores without being forced to use classmethod | https://api.github.com/repos/langchain-ai/langchain/issues/13005/comments | 4 | 2023-11-07T16:08:47Z | 2024-04-30T16:13:59Z | https://github.com/langchain-ai/langchain/issues/13005 | 1,981,716,853 | 13,005 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi everyone,
I am receveing the following error message when following your [documentation on the summarization.]( https://python.langchain.com/docs/use_cases/summarization): AttributeError: 'NoneType' object has no attribute 'startswith'
Any ideas what the problem is?
Looks like this is point where the notebook breaks:
` 62 # Check if the model matches a known prefix
63 # Prefix matching avoids needing library updates for every model version release
64 # Note that this can match on non-existent models (e.g., gpt-3.5-turbo-FAKE)
65 for model_prefix, model_encoding_name in MODEL_PREFIX_TO_ENCODING.items():
---> 66 if model_name.startswith(model_prefix):
67 return get_encoding(model_encoding_name)`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/use_cases/summarization
### Expected behavior
I exepect to receive a summary of summaries as explained here: https://python.langchain.com/docs/use_cases/summarization | map_reduce Summarization results in AttributeError: 'NoneType' object has no attribute 'startswith' | https://api.github.com/repos/langchain-ai/langchain/issues/13004/comments | 3 | 2023-11-07T15:58:05Z | 2024-02-13T16:07:12Z | https://github.com/langchain-ai/langchain/issues/13004 | 1,981,693,748 | 13,004 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.