| column | dtype | min | max |
|---|---|---|---|
| issue_owner_repo | listlengths | 2 | 2 |
| issue_body | stringlengths | 0 | 261k |
| issue_title | stringlengths | 1 | 925 |
| issue_comments_url | stringlengths | 56 | 81 |
| issue_comments_count | int64 | 0 | 2.5k |
| issue_created_at | stringlengths | 20 | 20 |
| issue_updated_at | stringlengths | 20 | 20 |
| issue_html_url | stringlengths | 37 | 62 |
| issue_github_id | int64 | 387k | 2.46B |
| issue_number | int64 | 1 | 127k |
[ "hwchase17", "langchain" ]
### Feature request

Forked off from this issue: https://github.com/hwchase17/langchain/issues/5300

The idea is to give the `WeaviateHybridSearchRetriever` the ability to use local embeddings, similar to the Weaviate vectorstore. Specifically, the `WeaviateHybridSearchRetriever.add_documents()` and `WeaviateHybridSearchRetriever.get_relevant_documents()` functions should work like the `Weaviate.from_texts()` function, where there is the option to use local embeddings if they are passed during creation. Additionally, the `WeaviateHybridSearchRetriever._create_schema_if_missing()` function likely needs to stop adding a default vectorizer to the schema object (related issue here: https://github.com/hwchase17/langchain/issues/5300).

### Motivation

This will allow those of us running Weaviate without embedding modules (like myself) to use the Weaviate Hybrid Search Retriever.

### Your contribution

I am planning to work on a fix locally and can potentially submit it as a PR down the line. I'm busy this week, so others would probably beat me to it, but I can review.
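For illustration, a sketch of how the requested behaviour might look from the caller's side. The `embedding` argument is the hypothetical addition this issue proposes (it does not exist yet), while the other constructor arguments follow the retriever's existing interface:

```python
import weaviate
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.retrievers import WeaviateHybridSearchRetriever
from langchain.schema import Document

client = weaviate.Client(url="http://localhost:8080")  # Weaviate running without any vectorizer module
local_embeddings = HuggingFaceEmbeddings()              # embeddings computed locally

retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    attributes=[],
    embedding=local_embeddings,  # hypothetical parameter requested in this issue
)

# add_documents() would embed locally instead of relying on a server-side vectorizer
retriever.add_documents([Document(page_content="Weaviate hybrid search with local vectors")])
docs = retriever.get_relevant_documents("hybrid search")
```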
Allow users to pass local embeddings to Weaviate Hybrid Search Retriever
https://api.github.com/repos/langchain-ai/langchain/issues/5539/comments
3
2023-06-01T05:19:27Z
2023-12-06T17:45:40Z
https://github.com/langchain-ai/langchain/issues/5539
1,735,478,342
5,539
[ "hwchase17", "langchain" ]
### System Info

langchain - 0.0.174 / 0.0.178 / 0.0.187
python3

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Call OpenAI, setting the parameters `openai_api_type`, `openai_api_version`, `openai_api_base`, `openai_api_key`: the OpenAI request succeeds.
2. Call Azure OpenAI, setting the parameters `openai_api_type`, `openai_api_version`, `openai_api_base`, `openai_api_key`: the call fails, and all subsequent calls fail.

Alternatively, if you first call Azure OpenAI with the parameters set correctly, that succeeds, but OpenAI then fails, and all subsequent calls fail. Each works independently, so the parameter values appear to be correct. But when one is called after the other, whichever API (OpenAI or Azure OpenAI) is called second fails.

### Expected behavior

If the parameters are set correctly, both should work. If each works independently after the app is restarted, why would it fail when they are called sequentially?
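A hedged reproduction sketch of the sequence described above; the class names are the standard LangChain wrappers, and every value is a placeholder:

```python
from langchain.llms import OpenAI, AzureOpenAI

# Step 1: a plain OpenAI call works on its own.
openai_llm = OpenAI(openai_api_key="sk-...")  # placeholder key
print(openai_llm("Say hello"))

# Step 2: an Azure OpenAI call in the same process is reported to fail after step 1
# (and vice versa), even though each works when run in a fresh process.
azure_llm = AzureOpenAI(
    deployment_name="my-deployment",                           # placeholder
    openai_api_type="azure",
    openai_api_base="https://my-resource.openai.azure.com/",   # placeholder
    openai_api_version="2023-03-15-preview",                   # placeholder
    openai_api_key="azure-key",                                # placeholder
)
print(azure_llm("Say hello"))
```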
OpenAI and Azure OpenAI - calls one after another
https://api.github.com/repos/langchain-ai/langchain/issues/5537/comments
4
2023-06-01T04:20:05Z
2023-09-18T16:09:34Z
https://github.com/langchain-ai/langchain/issues/5537
1,735,414,868
5,537
[ "hwchase17", "langchain" ]
### System Info System Info (Docker Dev Container): ``` PRETTY_NAME="Debian GNU/Linux 10 (buster)" NAME="Debian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" ``` Python: 3.10 Pip: ``` absl-py 1.4.0 aiohttp 3.8.4 aiosignal 1.3.1 antlr4-python3-runtime 4.9.3 anyio 3.6.2 argilla 1.6.0 async-timeout 4.0.2 attrs 23.1.0 backoff 2.2.1 cachetools 5.3.0 certifi 2022.12.7 cffi 1.15.1 charset-normalizer 3.1.0 click 8.1.3 cloudpickle 2.2.1 cmake 3.26.3 coloredlogs 15.0.1 commonmark 0.9.1 contourpy 1.0.7 cryptography 40.0.2 cycler 0.11.0 dataclasses-json 0.5.7 Deprecated 1.2.13 detectron2 0.4 effdet 0.3.0 et-xmlfile 1.1.0 exceptiongroup 1.1.1 fastapi 0.95.1 filelock 3.11.0 flatbuffers 23.3.3 fonttools 4.39.3 frozenlist 1.3.3 future 0.18.3 fvcore 0.1.3.post20210317 google-auth 2.17.3 google-auth-oauthlib 1.0.0 gptcache 0.1.11 greenlet 2.0.2 grpcio 1.53.0 h11 0.14.0 httpcore 0.16.3 httpx 0.23.3 huggingface-hub 0.13.4 humanfriendly 10.0 idna 3.4 iniconfig 2.0.0 iopath 0.1.10 Jinja2 3.1.2 joblib 1.2.0 kiwisolver 1.4.4 langchain 0.0.141 layoutparser 0.3.4 lit 16.0.1 lxml 4.9.2 Markdown 3.4.3 MarkupSafe 2.1.2 marshmallow 3.19.0 marshmallow-enum 1.5.1 matplotlib 3.7.1 monotonic 1.6 mpmath 1.3.0 msg-parser 1.2.0 multidict 6.0.4 mypy-extensions 1.0.0 networkx 3.1 nltk 3.8.1 numpy 1.23.5 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.2 olefile 0.46 omegaconf 2.3.0 onnxruntime 1.14.1 openai 0.27.4 openapi-schema-pydantic 1.2.4 opencv-python 4.6.0.66 openpyxl 3.1.2 packaging 23.1 pandas 1.5.3 pdf2image 1.16.3 pdfminer.six 20221105 pdfplumber 0.9.0 pgvector 0.1.6 Pillow 9.5.0 pip 23.1 pluggy 1.0.0 portalocker 2.7.0 protobuf 4.22.3 psycopg2-binary 2.9.6 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycocotools 2.0.6 pycparser 2.21 pydantic 1.10.7 pydot 1.4.2 Pygments 2.15.0 pypandoc 1.11 pyparsing 3.0.9 pypdf 3.9.0 pytesseract 0.3.10 pytest 7.3.1 python-dateutil 2.8.2 python-docx 0.8.11 python-dotenv 1.0.0 python-magic 0.4.27 python-multipart 0.0.6 python-poppler 0.4.0 python-pptx 0.6.21 pytz 2023.3 PyYAML 6.0 regex 2023.3.23 requests 2.28.2 requests-oauthlib 1.3.1 rfc3986 1.5.0 rich 13.0.1 rsa 4.9 scipy 1.10.1 setuptools 65.5.1 six 1.16.0 sniffio 1.3.0 SQLAlchemy 1.4.47 starlette 0.26.1 sympy 1.11.1 tabulate 0.9.0 tenacity 8.2.2 tensorboard 2.12.2 tensorboard-data-server 0.7.0 tensorboard-plugin-wit 1.8.1 termcolor 2.2.0 tiktoken 0.3.3 timm 0.6.13 tokenizers 0.13.3 tomli 2.0.1 torch 2.0.0 torchaudio 2.0.1 torchvision 0.15.1 tqdm 4.65.0 transformers 4.28.1 triton 2.0.0 typing_extensions 4.5.0 typing-inspect 0.8.0 unstructured 0.5.12 unstructured-inference 0.3.2 urllib3 1.26.15 uvicorn 0.21.1 Wand 0.6.11 Werkzeug 2.2.3 wheel 0.40.0 wrapt 1.14.1 XlsxWriter 3.1.0 yacs 0.1.8 yarl 1.8.2 ``` ### Who can help? 
@hwchase17

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Run the code below:

```python
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(openai_api_key=api_key),
    chain_type="map_reduce",
    retriever=retriever,
)

llm_call = "random llm call"

result = chain(
    {"question": llm_call},
    return_only_outputs=True,
)
```

### Expected behavior

I'm expecting to get a `result["answer"]` and a non-empty `result["sources"]`, but here's what I get instead:

![image](https://github.com/hwchase17/langchain/assets/69240261/a49a54eb-315c-45d5-a4e7-a647c3d818e3)

As you can see, `sources` is empty, but the sources are included in `result["answer"]` as a string.
`RetrievalQAWithSourcesChain` not returning sources in `sources` field.
https://api.github.com/repos/langchain-ai/langchain/issues/5536/comments
4
2023-06-01T04:17:08Z
2024-01-16T08:36:32Z
https://github.com/langchain-ai/langchain/issues/5536
1,735,412,964
5,536
[ "hwchase17", "langchain" ]
### Feature request

Support Tigris as a vector search backend.

### Motivation

Tigris is a serverless NoSQL database and search platform and has its own [vector search](https://www.tigrisdata.com/docs/concepts/vector-search/python/) product. It would be a great option for users who want an integrated database and search product.

### Your contribution

I can submit a PR.
Add Tigris vectorstore for vector search
https://api.github.com/repos/langchain-ai/langchain/issues/5535/comments
3
2023-06-01T03:18:00Z
2023-06-06T03:39:17Z
https://github.com/langchain-ai/langchain/issues/5535
1,735,366,931
5,535
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I followed the chapter "Chat Over Documents with Chat History" to build a bot that chats with a PDF. I want streaming output, but when I use the stuff chain like this:

```python
doc_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=QA_PROMPT)
chain = ConversationalRetrievalChain(
    retriever=vector_db.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
```

it returns "This model's maximum context length is 4097 tokens, however you requested 5741 tokens (5485 in your prompt; 256 for the completion). Please reduce your prompt; or completion length".

When I use the map_reduce chain:

```python
doc_chain = load_qa_chain(
    OpenAI(temperature=0, streaming=True, callbacks=[StreamingStdOutCallbackHandler()]),
    chain_type="map_reduce",
    combine_prompt=getQaMap_reducePromot(),
)
```

it returns "Cannot stream results with multiple prompts."

How can I resolve this when the context is too long?

### Suggestion:

_No response_
Issue: how to stream results with long context
https://api.github.com/repos/langchain-ai/langchain/issues/5532/comments
4
2023-06-01T00:52:51Z
2024-02-07T16:30:03Z
https://github.com/langchain-ai/langchain/issues/5532
1,735,238,661
5,532
[ "hwchase17", "langchain" ]
`start_chat()` constructs a vertexai `_ChatSession` and stores the parameters on the instance, but `send_message()` will not use those parameters if it is called without arguments. This is because `send_message()` has default values for the parameters that are set to global variables. You can fix this by moving `**self._default_params` to the `send_message()` call.

https://github.com/hwchase17/langchain/blob/359fb8fa3ae0b0904dbb36f998cd2339ea0aec0f/langchain/chat_models/vertexai.py#LL122C75-L122C75
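A minimal sketch of the suggested change; the surrounding names are approximations of the wrapper's internals rather than verified code:

```python
# Today (per the issue): the sampling parameters are stored when the session is created...
chat = model.start_chat(**self._default_params)
response = chat.send_message(prompt)  # ...but this call falls back to the SDK's global defaults

# Suggested fix: forward the stored sampling parameters on every call.
response = chat.send_message(prompt, **self._default_params)
```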
Sampling parameters are ignored by vertexai
https://api.github.com/repos/langchain-ai/langchain/issues/5531/comments
2
2023-05-31T22:19:11Z
2023-06-05T14:06:42Z
https://github.com/langchain-ai/langchain/issues/5531
1,735,113,256
5,531
[ "hwchase17", "langchain" ]
### Feature request

Chat models relying on `SystemMessage`, etc. instead of simple text make it harder to create longer prompts. It would have been much simpler to avoid special-casing chat models and instead parse special tokens in the text prompt to separate system, human, AI, and other messages.

### Motivation

Something similar to [this](https://github.com/microsoft/MM-REACT/blob/main/langchain/llms/openai.py#L211), which uses `<|im_start|>system\nsystem message<|im_end|>`, would make it easier to keep the same code across models and just use different prompts for chat endpoints. For example, it is perfectly valid to have two system messages, and I found it improves the results to have a system message at the beginning and [one after](https://github.com/microsoft/MM-REACT/blob/main/langchain/agents/assistant/prompt.py#L191) some zero-shot examples, right before the input.

### Your contribution

I can send the PR if there is any interest.
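For illustration, a small parser along the lines the request describes, turning `<|im_start|>role ... <|im_end|>` markers in a plain text prompt into (role, content) pairs. The marker format is borrowed from the linked MM-REACT code; everything else here is a hypothetical sketch, not LangChain API:

```python
import re
from typing import List, Tuple

def parse_chat_prompt(prompt: str) -> List[Tuple[str, str]]:
    """Split a single text prompt into (role, content) message pairs."""
    pattern = re.compile(r"<\|im_start\|>(\w+)\n(.*?)<\|im_end\|>", re.DOTALL)
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(prompt)]

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is 2 + 2?<|im_end|>"
)
print(parse_chat_prompt(prompt))
# [('system', 'You are a helpful assistant.'), ('user', 'What is 2 + 2?')]
```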
Remove Chat Models
https://api.github.com/repos/langchain-ai/langchain/issues/5530/comments
2
2023-05-31T22:08:48Z
2023-09-13T16:07:48Z
https://github.com/langchain-ai/langchain/issues/5530
1,735,101,156
5,530
[ "hwchase17", "langchain" ]
Hi, I am building a chatbot using an LLM like fastchat-t5-3b-v1.0 and want to reduce my inference time. I am loading the entire model on the GPU via the `device_map` parameter and using the `langchain.llms.HuggingFacePipeline` wrapper to query the LLM, also specifying `device=0` (the first GPU) for the Hugging Face pipeline. I am monitoring GPU and CPU usage throughout the entire execution, and I can see that although my model is on the GPU, querying the model uses the CPU; the spike in CPU usage shows that query execution is happening on the CPU.

Below is the code that I am using to do inference on the FastChat LLM:

```python
import torch  # needed for torch.float16 below
from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex, PromptHelper, LLMPredictor
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext
from transformers import T5Config, T5Tokenizer, T5ForConditionalGeneration
from accelerate import init_empty_weights, infer_auto_device_map

model_name = 'lmsys/fastchat-t5-3b-v1.0'
config = T5Config.from_pretrained(model_name)

with init_empty_weights():
    model_layer = T5ForConditionalGeneration(config=config)

device_map = infer_auto_device_map(
    model_layer,
    max_memory={0: "12GiB", 1: "12GiB", "cpu": "0GiB"},
    no_split_module_classes=["T5Block"],
)
# the value is: device_map = {'': 0}, i.e. the model is loaded on the 1st GPU

model = T5ForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map=device_map,
    offload_folder="offload",
    offload_state_dict=True,
)
tokenizer = T5Tokenizer.from_pretrained(model_name)

from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,
    max_length=1536,
    temperature=0,
    top_p=1,
    num_beams=1,
    early_stopping=False,
)

from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline(pipeline=pipe)
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())

# set maximum input size
max_input_size = 2048
# set number of output tokens
num_outputs = 512
# set maximum chunk overlap
max_chunk_overlap = 20
# set chunk size limit
chunk_size_limit = 300

prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap)
service_context = ServiceContext.from_defaults(
    embed_model=embed_model,
    llm_predictor=LLMPredictor(llm),
    prompt_helper=prompt_helper,
    chunk_size_limit=chunk_size_limit,
)

# build index
documents = SimpleDirectoryReader('data').load_data()
new_index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = new_index.as_query_engine(
    response_mode='no_text',
    verbose=True,
    similarity_top_k=2,
)

template = """
A chat between a curious human and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the human's questions.
### Human:
Given the context:
---
{context}
---
Answer the following question:
---
{input}
### Assistant:
"""

from langchain import LLMChain, PromptTemplate

prompt = PromptTemplate(
    input_variables=["context", "input"],
    template=template,
)

chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
)

user_input = "sample query question?"
context = query_engine.query(user_input)
concatenated_context = ' '.join(map(str, [node.node.text for node in context.source_nodes]))
response = chain.run({"context": concatenated_context, "input": user_input})
```

Here the "data" folder has my full input text in PDF format, and I am using `GPTVectorStoreIndex` and the Hugging Face pipeline to build the index on it and fetch the relevant chunks to generate the prompt with `context` and `user_input`. I then use the `LLMChain` from the langchain library to generate the response from the FastChat model, as shown in the code.

Please have a look and let me know if this is the expected behaviour. How can I make use of the GPU for query execution as well, to reduce the inference response time?
Query execution with langchain LLM pipeline is happening on CPU, even if model is loaded on GPU
https://api.github.com/repos/langchain-ai/langchain/issues/5522/comments
2
2023-05-31T20:18:21Z
2023-09-21T16:08:57Z
https://github.com/langchain-ai/langchain/issues/5522
1,734,953,214
5,522
[ "hwchase17", "langchain" ]
### System Info

```
$ langchain env
LangChain Environment:
library_version:0.0.184
platform:Linux-5.4.0-146-generic-x86_64-with-glibc2.31
runtime:python
runtime_version:3.11.3
```

### Who can help?

_No response_

### Information

- [x] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async

### Reproduction

Following the [tracing v2 instructions](https://python.langchain.com/en/latest/tracing/agent_with_tracing.html#beta-tracing-v2), run:

```
$ langchain plus start
WARN[0000] The "OPENAI_API_KEY" variable is not set. Defaulting to a blank string.
[+] Running 2/2
 ⠿ langchain-frontend Pulled 5.3s
 ⠿ langchain-backend Pulled 9.5s
unable to prepare context: path "frontend-react/." not found
langchain plus server is running at http://localhost. To connect locally, set the following environment variable when running your LangChain application.

LANGCHAIN_TRACING_V2=true
```

It looks like neither the `frontend-react` nor the `backend` folders referenced by the [`docker-compose.yaml`](https://github.com/hwchase17/langchain/blob/f72bb966f894f99c9ffc2c730be392c71d020ac8/langchain/cli/docker-compose.yaml#L14) are in the repository, so docker won't build them. Maybe we should remove the `build:` section of the YAML when deploying to users, so they simply pull the images from Docker Hub.

### Expected behavior

It should start properly.
Tracing V2 doesn't work
https://api.github.com/repos/langchain-ai/langchain/issues/5521/comments
1
2023-05-31T19:04:38Z
2023-09-10T16:09:29Z
https://github.com/langchain-ai/langchain/issues/5521
1,734,829,837
5,521
[ "hwchase17", "langchain" ]
### System Info

Most recent version of Langchain
Python: 3.10.8
MacOS 13.4 - M1

### Who can help?

@hwchase17 @agola11

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

The Chroma constructor in the vectorstore section uses the document embedding function when it should use the query embedding function. As a result, if the `documents` parameter is blank when using Chroma, Langchain errors out with a `ValidationError`. Please change line 95 to `embed_query` instead of `embed_documents` [here](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/chroma.py) so that this works and is consistent with the rest of the vectorstore wrappers.

### Expected behavior

Use the query function instead of the documents function for embeddings in Chroma.
Chroma: Constructor takes wrong embedding function (document vs query)
https://api.github.com/repos/langchain-ai/langchain/issues/5519/comments
4
2023-05-31T18:02:20Z
2023-10-18T16:07:54Z
https://github.com/langchain-ai/langchain/issues/5519
1,734,732,180
5,519
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I have created a pipeline. And want to use the same pipeline in openapi_agent. When I run the following command: ibm_agent = planner.create_openapi_agent(ibm_api_spec, requests_wrapper, hf_pipeline) I get error out of memory error. I'm using flan-t5-xxl llm, which consumes 22GB of memory. I have 18GB left. ``` tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl") model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") instruct_pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer, pad_token_id=tokenizer.eos_token_id, torch_dtype=torch.bfloat16, device='cuda:0', max_length=2000) hf_pipeline = HuggingFacePipeline(pipeline=instruct_pipeline) agent = planner.create_openapi_agent(api_spec, requests_wrapper, hf_pipeline) user_query = "query" agent.run(user_query) ``` When i run code i get out following error ``` > Entering new AgentExecutor chain... Action: api_planner Action Input: api_planner(query) api_planner(query) api_controller(api_planner(query)) Traceback (most recent call last): File "/home/kiran/dolly/agents.py", line 79, in <module> ibm_agent.run(user_query) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 236, in run return self(args[0], callbacks=callbacks)[self.output_keys[0]] File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__ raise e File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__ self._call(inputs, run_manager=run_manager) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 953, in _call next_step_output = self._take_next_step( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 820, in _take_next_step observation = tool.run( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 294, in run raise e File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 266, in run self._run(*tool_args, run_manager=run_manager, **tool_kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 409, in _run self.func( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 236, in run return self(args[0], callbacks=callbacks)[self.output_keys[0]] File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__ raise e File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__ self._call(inputs, run_manager=run_manager) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 69, in _call response = self.generate([inputs], run_manager=run_manager) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate return self.llm.generate_prompt( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 134, in generate_prompt return self.generate(prompt_strings, stop=stop, callbacks=callbacks) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 191, in generate raise e File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 185, in generate self._generate(prompts, stop=stop, run_manager=run_manager) File 
"/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 436, in _generate self._call(prompt, stop=stop, run_manager=run_manager) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call response = self.pipeline(prompt) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/text2text_generation.py", line 165, in __call__ result = super().__call__(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1119, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1126, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1025, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/pipelines/text2text_generation.py", line 187, in _forward output_ids = self.model.generate(**model_inputs, **generate_kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1322, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 638, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1086, in forward layer_outputs = layer_module( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 693, in forward self_attention_outputs = self.layer[0]( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 600, in forward attention_output = self.SelfAttention( File "/home/kiran/dolly/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/kiran/dolly/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 530, in forward scores = 
torch.matmul( torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 28.28 GiB (GPU 0; 39.43 GiB total capacity; 25.09 GiB already allocated; 13.13 GiB free; 25.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ```
Flan-t5-xxl does not work with openapi_agent
https://api.github.com/repos/langchain-ai/langchain/issues/5513/comments
5
2023-05-31T16:21:02Z
2023-10-19T16:07:59Z
https://github.com/langchain-ai/langchain/issues/5513
1,734,577,939
5,513
[ "hwchase17", "langchain" ]
### Feature request

If `input_variables` is not passed, try to detect them automatically as the names surrounded by curly braces. E.g.:

```python
prompt_template = PromptTemplate(template="What is the price of {product_name}?")
## Automatically detects the input_variables to be ['product_name']
```

### Motivation

This has been bugging me for a while and makes things more cumbersome than they need to be.

### Your contribution

You can use the code mentioned below; it's literally that simple (at least for f-strings). I can submit a PR.

```python
import string
from typing import List

def str_format_args(x: str, named_only: bool = True) -> List[str]:
    ## Ref: https://stackoverflow.com/a/46161774/4900327
    args: List[str] = [
        str(tup[1]) for tup in string.Formatter().parse(x) if tup[1] is not None
    ]
    if named_only:
        args: List[str] = [
            arg for arg in args if not arg.isdigit() and len(arg) > 0
        ]
    return args

str_format_args("What is the price of {product_name}?")
## Returns ['product_name']
```
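As a side note, if memory serves, the existing `PromptTemplate.from_template` classmethod already performs this inference, so the request is essentially to extend that behaviour to the plain constructor. A quick sketch of the existing classmethod's behaviour, stated as an assumption rather than a guarantee:

```python
from langchain import PromptTemplate

# from_template infers the variables from the f-string placeholders;
# the feature request asks for the same behaviour from PromptTemplate(template=...).
prompt = PromptTemplate.from_template("What is the price of {product_name}?")
print(prompt.input_variables)  # expected: ['product_name']
```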
Automatically detect input_variables from PromptTemplate string
https://api.github.com/repos/langchain-ai/langchain/issues/5511/comments
2
2023-05-31T15:59:04Z
2023-09-18T16:09:45Z
https://github.com/langchain-ai/langchain/issues/5511
1,734,540,520
5,511
[ "hwchase17", "langchain" ]
### Discussed in https://github.com/hwchase17/langchain/discussions/5499

<div type='discussions-op-text'>

<sup>Originally posted by **lucasiscovici** May 31, 2023</sup>

Hello, and thank you for this amazing library.

Here we:
- get the question
- get the new_question with the question_generator
- retrieve docs with _get_docs and the new_question
- call the combine_docs_chain with the new_question and the docs

1. Would it be possible to allow calling the question_generator even if chat_history_str is empty? I have to transform the question into a search query to call the search engine even when the chat history is empty.
2. Would it be possible to not use the new_question in the combine_docs_chain call? I need the original question, not the new question (the search query), to call the LLM for the QA step.

Thanks in advance.

```python
def _call(
    self,
    inputs: Dict[str, Any],
    run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
    _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
    question = inputs["question"]
    get_chat_history = self.get_chat_history or _get_chat_history
    chat_history_str = get_chat_history(inputs["chat_history"])

    if chat_history_str:
        callbacks = _run_manager.get_child()
        new_question = self.question_generator.run(
            question=question, chat_history=chat_history_str, callbacks=callbacks
        )
    else:
        new_question = question
    docs = self._get_docs(new_question, inputs)
    new_inputs = inputs.copy()
    new_inputs["question"] = new_question
    new_inputs["chat_history"] = chat_history_str
    answer = self.combine_docs_chain.run(
        input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs
    )
    if self.return_source_documents:
        return {self.output_key: answer, "source_documents": docs}
    else:
        return {self.output_key: answer}
```
ConversationalRetrievalChain new_question only from the question_generator only for retrieval and not for combine_docs_chain
https://api.github.com/repos/langchain-ai/langchain/issues/5508/comments
0
2023-05-31T15:43:29Z
2023-06-12T13:21:03Z
https://github.com/langchain-ai/langchain/issues/5508
1,734,515,334
5,508
[ "hwchase17", "langchain" ]
### Feature request

Hey team, I would like to propose a new feature that will improve visibility into the LLM's response time. In addition to providing information about token usage and cost, I suggest reporting the time taken to generate the text. This additional metric will offer valuable insight into the efficiency and performance of the system.

### Motivation

By including the response time, we can provide a more complete picture of each LLM API's performance and a more accurate measure of its capabilities. This information will be particularly useful for evaluating and optimizing different LLMs, as it will shed light on the latency of the system.

### Your contribution

We can easily implement this by adding additional variables to the LLM callbacks. I would like to implement this feature. Here is example code:

```python
from datetime import datetime
from typing import Any, Dict, List

from langchain.schema import LLMResult


class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""

    time_taken_by_llm_to_generate_text: float = 0.0

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""
        self.start_time = datetime.now()

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
        self.end_time = datetime.now()
        # accumulate the elapsed generation time in seconds
        self.time_taken_by_llm_to_generate_text += (
            self.end_time - self.start_time
        ).total_seconds()
```
Tracking of time to generate text
https://api.github.com/repos/langchain-ai/langchain/issues/5498/comments
5
2023-05-31T12:33:47Z
2023-12-16T05:54:01Z
https://github.com/langchain-ai/langchain/issues/5498
1,734,126,410
5,498
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I have a Flask + LangChain setup running in docker-compose, and I don't see the `ChatOpenAI` streaming output from my callback handlers in the console, but everything works when I run it locally without Docker.

My callback handler code (`StreamingStdOutCallbackHandler` also doesn't work):

```python
from typing import Any
from langchain.callbacks.base import BaseCallbackHandler


class StreamingOutput(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        print(token, end="", flush=True)
```

- `ChatOpenAI` has the streaming and verbose flags set to True
- `ConversationChain` has the verbose flag set to True
- Flask is run with `CMD ["flask", "run", "--debug", "--with-threads"]`

I tried setting the `PYTHONUNBUFFERED` env variable, but it didn't help. What am I doing wrong?

### Suggestion:

_No response_
Issue: LLM callback handler not printing in Docker
https://api.github.com/repos/langchain-ai/langchain/issues/5493/comments
2
2023-05-31T10:43:14Z
2023-11-16T02:12:25Z
https://github.com/langchain-ai/langchain/issues/5493
1,733,916,668
5,493
[ "hwchase17", "langchain" ]
Can I connect to my RDBMS?

### Suggestion:

_No response_
Can I connect to my RDBMS?
https://api.github.com/repos/langchain-ai/langchain/issues/5492/comments
4
2023-05-31T10:17:58Z
2023-09-18T16:09:50Z
https://github.com/langchain-ai/langchain/issues/5492
1,733,868,611
5,492
[ "hwchase17", "langchain" ]
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 import langchain

File ~\anaconda3\lib\site-packages\langchain\__init__.py:6, in <module>
      3 from importlib import metadata
      4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
      7 from langchain.cache import BaseCache
      8 from langchain.chains import (
      9     ConversationChain,
     10     LLMBashChain,
    (...)
     18     VectorDBQAWithSourcesChain,
     19 )

File ~\anaconda3\lib\site-packages\langchain\agents\__init__.py:2, in <module>
      1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
      3     Agent,
      4     AgentExecutor,
      5     AgentOutputParser,
      6     BaseMultiActionAgent,
      7     BaseSingleActionAgent,
      8     LLMSingleActionAgent,
      9 )
     10 from langchain.agents.agent_toolkits import (
     11     create_csv_agent,
     12     create_json_agent,
    (...)
     21     create_vectorstore_router_agent,
     22 )
     23 from langchain.agents.agent_types import AgentType

File ~\anaconda3\lib\site-packages\langchain\agents\agent.py:13, in <module>
     10 from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union
     12 import yaml
---> 13 from pydantic import BaseModel, root_validator
     15 from langchain.agents.agent_types import AgentType
     16 from langchain.agents.tools import InvalidTool

File ~\anaconda3\lib\site-packages\pydantic\__init__.py:2, in init pydantic.__init__()

File ~\anaconda3\lib\site-packages\pydantic\dataclasses.py:48, in init pydantic.dataclasses()

File ~\anaconda3\lib\site-packages\pydantic\main.py:120, in init pydantic.main()

TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'
```
Error while importing Langchain
https://api.github.com/repos/langchain-ai/langchain/issues/5490/comments
4
2023-05-31T09:18:01Z
2023-09-18T16:09:56Z
https://github.com/langchain-ai/langchain/issues/5490
1,733,741,544
5,490
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I can't find any way to add custom metadata with the character splitter. It adds `source` as metadata, but I can't seem to change it or define what kind of metadata I want.

### Suggestion:

_No response_
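For illustration, one way to attach custom metadata is to pass a `metadatas` list alongside the raw texts; a small sketch, assuming the standard `create_documents(texts, metadatas=...)` splitter signature:

```python
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

texts = ["A long report about quarterly results ..."]                  # raw strings to split
metadatas = [{"source": "q1_report.pdf", "department": "finance"}]     # custom metadata per input text

# Every chunk produced from texts[i] carries a copy of metadatas[i].
docs = splitter.create_documents(texts, metadatas=metadatas)
print(docs[0].metadata)
```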
the text splitter adds metadata by itself
https://api.github.com/repos/langchain-ai/langchain/issues/5489/comments
3
2023-05-31T08:27:59Z
2023-11-30T16:09:16Z
https://github.com/langchain-ai/langchain/issues/5489
1,733,653,438
5,489
[ "hwchase17", "langchain" ]
Is there a way to pass parameters to `ElasticVectorSearch` to disable SSL verification? I tried adding `verify_certs=False` and `ssl_verify=None`, but neither worked.
Connecting to Elastic vector store throws ssl error
https://api.github.com/repos/langchain-ai/langchain/issues/5488/comments
5
2023-05-31T08:15:41Z
2023-09-26T16:06:29Z
https://github.com/langchain-ai/langchain/issues/5488
1,733,633,884
5,488
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.186 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Running the following code snippet: ```python from langchain import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature":0, "max_length":64}) from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=llm) llm_chain.save("/tmp/hfp/model.yaml") from langchain.chains.loading import load_chain local_loaded_model = load_chain("/tmp/hfp/model.yaml") question = "What is electroencephalography?" local_loaded_model.run(question) ``` Gives the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) File <command-826248432925795>:1 ----> 1 local_loaded_model.run(question) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs) 234 if len(args) != 1: 235 raise ValueError("`run` supports only one positional argument.") --> 236 return self(args[0], callbacks=callbacks)[self.output_keys[0]] 238 if kwargs and not args: 239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]] File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130 inputs, 131 ) 132 try: 133 outputs = ( --> 134 self._call(inputs, run_manager=run_manager) 135 if new_arg_supported 136 else self._call(inputs) 137 ) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/llm.py:69, in LLMChain._call(self, inputs, run_manager) 64 def _call( 65 self, 66 inputs: Dict[str, Any], 67 run_manager: Optional[CallbackManagerForChainRun] = None, 68 ) -> Dict[str, str]: ---> 69 response = self.generate([inputs], run_manager=run_manager) 70 return self.create_outputs(response)[0] File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/chains/llm.py:79, in LLMChain.generate(self, input_list, run_manager) 77 """Generate LLM result from inputs.""" 78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager) ---> 79 
return self.llm.generate_prompt( 80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None 81 ) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:134, in BaseLLM.generate_prompt(self, prompts, stop, callbacks) 127 def generate_prompt( 128 self, 129 prompts: List[PromptValue], 130 stop: Optional[List[str]] = None, 131 callbacks: Callbacks = None, 132 ) -> LLMResult: 133 prompt_strings = [p.to_string() for p in prompts] --> 134 return self.generate(prompt_strings, stop=stop, callbacks=callbacks) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:191, in BaseLLM.generate(self, prompts, stop, callbacks) 189 except (KeyboardInterrupt, Exception) as e: 190 run_manager.on_llm_error(e) --> 191 raise e 192 run_manager.on_llm_end(output) 193 return output File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:185, in BaseLLM.generate(self, prompts, stop, callbacks) 180 run_manager = callback_manager.on_llm_start( 181 {"name": self.__class__.__name__}, prompts, invocation_params=params 182 ) 183 try: 184 output = ( --> 185 self._generate(prompts, stop=stop, run_manager=run_manager) 186 if new_arg_supported 187 else self._generate(prompts, stop=stop) 188 ) 189 except (KeyboardInterrupt, Exception) as e: 190 run_manager.on_llm_error(e) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/base.py:436, in LLM._generate(self, prompts, stop, run_manager) 433 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager") 434 for prompt in prompts: 435 text = ( --> 436 self._call(prompt, stop=stop, run_manager=run_manager) 437 if new_arg_supported 438 else self._call(prompt, stop=stop) 439 ) 440 generations.append([Generation(text=text)]) 441 return LLMResult(generations=generations) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-51383c0b-9b1f-48ec-a6a9-ac979dcdc755/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py:168, in HuggingFacePipeline._call(self, prompt, stop, run_manager) 162 def _call( 163 self, 164 prompt: str, 165 stop: Optional[List[str]] = None, 166 run_manager: Optional[CallbackManagerForLLMRun] = None, 167 ) -> str: --> 168 response = self.pipeline(prompt) 169 if self.pipeline.task == "text-generation": 170 # Text generation return includes the starter text. 171 text = response[0]["generated_text"][len(prompt) :] TypeError: 'NoneType' object is not callable ``` ### Expected behavior `local_loaded_model.run(question)` should behave the same way as: ```python llm_chain.run(question) ```
HuggingFacePipeline is not loaded correctly
https://api.github.com/repos/langchain-ai/langchain/issues/5487/comments
7
2023-05-31T08:05:06Z
2024-02-20T16:09:06Z
https://github.com/langchain-ai/langchain/issues/5487
1,733,616,462
5,487
[ "hwchase17", "langchain" ]
### System Info

langchain: 0.0.181
platform: windows
python: 3.11.3

### Who can help?

@eyurtsev

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```py
site_loader = SitemapLoader(web_path="https://help.glueup.com/sitemap_index.xml")
docs = site_loader.load()
print(docs[0])
# ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)
```

### Expected behavior

Print the first doc.
[SSL: CERTIFICATE_VERIFY_FAILED] while load from SitemapLoader
https://api.github.com/repos/langchain-ai/langchain/issues/5483/comments
0
2023-05-31T07:52:33Z
2023-06-19T01:34:19Z
https://github.com/langchain-ai/langchain/issues/5483
1,733,595,290
5,483
[ "hwchase17", "langchain" ]
### Feature request

I want langchain to implement caching for document loaders in a way similar to how it caches LLM calls, like this:

```python
from langchain.cache import InMemoryCache
langchain.document_loader_cache = InMemoryCache()
```

### Motivation

Loading certain documents with a langchain document loader can be an expensive operation (for example, I implemented a custom PDF loader using OCR that's slow, and some loaders involve network calls).

### Your contribution

If langchain would accept such a PR, I'd try to implement the logic and file a PR.
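Until something like this exists, a minimal illustration of the idea as a user-side wrapper; `langchain.document_loader_cache` itself does not exist, and the loader used here is just an example:

```python
from functools import lru_cache
from typing import List

from langchain.document_loaders import PyPDFLoader
from langchain.schema import Document


@lru_cache(maxsize=None)
def load_pdf_cached(path: str) -> List[Document]:
    """Cache expensive loader calls in-process, keyed by file path."""
    return PyPDFLoader(path).load()


docs = load_pdf_cached("slow_scan.pdf")   # slow: runs the loader
docs = load_pdf_cached("slow_scan.pdf")   # fast: served from the cache
```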
[Feature Request] Support document loader caching
https://api.github.com/repos/langchain-ai/langchain/issues/5481/comments
4
2023-05-31T04:29:06Z
2023-11-14T16:08:14Z
https://github.com/langchain-ai/langchain/issues/5481
1,733,366,807
5,481
[ "hwchase17", "langchain" ]
### System Info

LangChain 0.0.186
macOS Ventura
Python 3.10

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Why do I get `IndexError: list index out of range` when using `Chroma.from_documents`?

```python
import os
from langchain.document_loaders import BiliBiliLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter

os.environ["OPENAI_API_KEY"] = "***"

loader = BiliBiliLoader(["https://www.bilibili.com/video/BV18o4y137n1/"])
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=20
)
documents = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(documents, embeddings, persist_directory="./db")
db.persist()
```

```
Traceback (most recent call last):
  File "/bilibili/bilibili_embeddings.py", line 28, in <module>
    db = Chroma.from_documents(documents, embeddings, persist_directory="./db")
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 422, in from_documents
    return cls.from_texts(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 390, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 160, in add_texts
    self._collection.add(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 103, in add
    ids, embeddings, metadatas, documents = self._validate_embedding_set(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 354, in _validate_embedding_set
    ids = validate_ids(maybe_cast_one_to_many(ids))
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/chromadb/api/types.py", line 82, in maybe_cast_one_to_many
    if isinstance(target[0], (int, float)):
IndexError: list index out of range
```

### Expected behavior

The index is generated successfully in the persist_directory.
IndexError: list index out of range when use Chroma.from_documents
https://api.github.com/repos/langchain-ai/langchain/issues/5476/comments
10
2023-05-31T02:51:19Z
2024-07-27T17:27:50Z
https://github.com/langchain-ai/langchain/issues/5476
1,733,300,168
5,476
[ "hwchase17", "langchain" ]
### Feature request

Hi, currently the `from_documents` method adds the embeddings and then returns the instance of the store. Why don't we have a method that just returns the store? This is useful when I already have a loaded vector store and I just need the instance. It would be like the code below, without the `store.add_texts` call:

```
store = cls(
    connection_string=connection_string,
    collection_name=collection_name,
    embedding_function=embedding,
    distance_strategy=distance_strategy,
    pre_delete_collection=pre_delete_collection,
)
store.add_texts(texts=texts, metadatas=metadatas, ids=ids, **kwargs)
return store
```

### Motivation

This is needed when I already have a loaded vector store.

### Your contribution

If this change is acceptable, I can add this functionality and create a PR.
Getting only the instance of the vector store without adding text
https://api.github.com/repos/langchain-ai/langchain/issues/5475/comments
3
2023-05-31T01:56:08Z
2023-08-30T17:39:07Z
https://github.com/langchain-ai/langchain/issues/5475
1,733,256,921
5,475
[ "hwchase17", "langchain" ]
### System Info

`llm_chain.llm.save("llm.json")` # method not found

Bug in the notebook: docs/modules/chains/generic/serialization.ipynb

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Run `llm_chain.llm.save("llm.json")` as shown in the notebook docs/modules/chains/generic/serialization.ipynb; the method is not found.

### Expected behavior

`llm_chain.llm.save("llm.json")` should save the LLM as shown in the notebook instead of failing with "method not found".
llm_chain.llm.save("llm.json") # method not found
https://api.github.com/repos/langchain-ai/langchain/issues/5474/comments
1
2023-05-31T00:43:10Z
2023-09-10T16:09:41Z
https://github.com/langchain-ai/langchain/issues/5474
1,733,206,572
5,474
[ "hwchase17", "langchain" ]
### System Info

Langchain Version: 0.0.186
MacOS Ventura 13.3 - M1
Python 3.10.8

### Who can help?

@hwchase17 @agola11

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

There is an error in the Qdrant vectorstore code ([`qdrant.py`](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/qdrant.py)), specifically in the function `_document_from_scored_point` on line 468 of [`qdrant.py`](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/qdrant.py). The `Document` object comes from [`schema.py`](https://github.com/hwchase17/langchain/blob/master/langchain/schema.py) and takes a few arguments:

- page_content: str
- metadata: dict = Field(default_factory=dict)

The [`qdrant.py`](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/qdrant.py) file incorrectly passes a string for the *metadata* parameter instead of a dict. This creates a few problems:

1. If the *metadata* parameter in the function `_document_from_scored_point` is passed anything but None or a key that is not in the scored_point object (i.e. None), it will error out. This is because this value should be a dict, which is not what the dictionary *get* method returns here.
2. The *metadata_payload_key* parameter does not seem to have a purpose / does not make sense given the above context.
3. It is impossible for metadata to be returned when using the Qdrant *similarity_search* function within Langchain due to this issue.

### Expected behavior

I would like to be able to return metadata when using `similarity_search` with Qdrant. If you run [this](https://www.pinecone.io/learn/langchain-retrieval-augmentation/) example, focusing on the vectorstore part and swapping the Pinecone code for Qdrant, there does not seem to be a way to use similarity search with metadata the way the example shows it.
Qdrant Document object is not behaving correctly
https://api.github.com/repos/langchain-ai/langchain/issues/5473/comments
4
2023-05-30T23:55:06Z
2023-06-01T16:00:54Z
https://github.com/langchain-ai/langchain/issues/5473
1,733,174,360
5,473
[ "hwchase17", "langchain" ]
Does langchain support Oracle database as a VectorStore? If yes, how do I use Oracle as a VectorStore?
Does langchain support Oracle database as a VectorStore? If yes, how to use Oracle as a VectorStore?
https://api.github.com/repos/langchain-ai/langchain/issues/5472/comments
4
2023-05-30T22:45:24Z
2023-09-06T01:55:37Z
https://github.com/langchain-ai/langchain/issues/5472
1,733,121,843
5,472
[ "hwchase17", "langchain" ]
### Feature request

I would like to be able to provide credentials to the bigquery.Client object.

### Motivation

I cannot access protected datasets without the use of a service account or other credentials.

### Your contribution

I will submit a PR.
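A sketch of what the requested usage could look like; the `credentials` keyword is precisely what this issue proposes, so treat it as hypothetical, and the query and paths are placeholders:

```python
from google.oauth2 import service_account
from langchain.document_loaders import BigQueryLoader

creds = service_account.Credentials.from_service_account_file("service-account.json")  # placeholder path

loader = BigQueryLoader(
    query="SELECT title, body FROM `my-project.my_dataset.my_table`",  # placeholder query
    project="my-project",
    credentials=creds,  # proposed parameter, not yet in the loader
)
docs = loader.load()
```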
Google BigQuery Loader doesn't take credentials
https://api.github.com/repos/langchain-ai/langchain/issues/5465/comments
0
2023-05-30T21:18:13Z
2023-05-30T23:25:25Z
https://github.com/langchain-ai/langchain/issues/5465
1,733,027,963
5,465
[ "hwchase17", "langchain" ]
### System Info

```
$ pip show langchain
Name: langchain
Version: 0.0.186
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /home/mteoh/temp_venv/venv/lib/python3.10/site-packages
Requires: PyYAML, pydantic, tenacity, dataclasses-json, numexpr, numpy, openapi-schema-pydantic, aiohttp, async-timeout, requests, SQLAlchemy
Required-by:
```

```
$ python --version
Python 3.10.2
```

### Who can help?

@vowelpa

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Install langchain version 0.0.186, Python version 3.10.2.
2. Run the code below (I put it in a file called `mwe.py`):

```python
from langchain.tools import StructuredTool
from typing import Dict
from pydantic import BaseModel

def foo(args_dict: Dict[str, str]):
    return "hi there"

class FooSchema(BaseModel):
    args_dict: Dict[str, str]

foo_tool = StructuredTool.from_function(
    foo,
    name="FooTool",
    description="min working example of a bug?",
    # args_schema=FooSchema  # inferring this schema does not work
)

result = foo_tool.run(tool_input={
    "args_dict": {"aa": "bb"}
})
print(result)
```

3. Observe the error below:

```
Traceback (most recent call last):
  File "/home/mteoh/temp_venv/mwe.py", line 18, in <module>
    result = foo_tool.run(tool_input={
  File "/home/mteoh/temp_venv/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 247, in run
    parsed_input = self._parse_input(tool_input)
  File "/home/mteoh/temp_venv/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 190, in _parse_input
    result = input_args.parse_obj(tool_input)
  File "pydantic/main.py", line 526, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for FooToolSchemaSchema
args_dict
  str type expected (type=type_error.str)
```

### Expected behavior

We expect to see the output of `foo()`, which is `"hi there"`. You can get this result by uncommenting `args_schema=FooSchema` above.

This is a problem because this line in `StructuredTool.from_function()`:

https://github.com/hwchase17/langchain/blob/58e95cd11e2c2fc31ed6551b5a2b876143d57429/langchain/tools/base.py#L469

suggests that the schema gets inferred if one is not provided. Instead, what happens is that the tool "infers" that the argument is just one string, which is incorrect.

I don't mind fixing this myself. In that case, any guidance is very welcome. Thank you!
Structured tools cannot properly infer function schema
https://api.github.com/repos/langchain-ai/langchain/issues/5463/comments
2
2023-05-30T20:51:13Z
2023-09-10T16:09:44Z
https://github.com/langchain-ai/langchain/issues/5463
1,732,993,542
5,463
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I am using `ConversationalRetrievalChain` and cannot seem to change the system template. Any suggestion on how to do this?

`retriever = vectorstore.as_retriever(search_kwargs={"k": source_amount}, qa_template=QA_PROMPT, question_generator_template=CONDENSE_PROMPT)`

`qa = ConversationalRetrievalChain.from_llm(llm=model, retriever=retriever, return_source_documents=True)`

When printing `qa`:

```
[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}", template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'],
```

Whatever I try, I seem to be unable to change the template "Use the following pieces of context to answer...".

### Suggestion:

_No response_
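One approach that is often suggested for this is to hand the QA prompt to `from_llm` via `combine_docs_chain_kwargs`. A sketch, assuming that keyword is available in the installed version and reusing the `model` and `retriever` variables from the snippet above:

```python
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain

CUSTOM_QA_PROMPT = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are a strictly factual assistant for Acme Corp.\n"  # custom system-style instructions
        "Use the following pieces of context to answer the question; "
        "if you don't know, say so.\n"
        "----------------\n"
        "{context}\n\n"
        "Question: {question}\n"
        "Answer:"
    ),
)

# model and retriever are assumed to be the LLM and retriever defined in the issue above.
qa = ConversationalRetrievalChain.from_llm(
    llm=model,
    retriever=retriever,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": CUSTOM_QA_PROMPT},  # replaces the default stuff-chain prompt
)
```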
conversationalRetrievalChain - how to set the template
https://api.github.com/repos/langchain-ai/langchain/issues/5462/comments
8
2023-05-30T20:43:46Z
2023-10-25T13:25:13Z
https://github.com/langchain-ai/langchain/issues/5462
1,732,984,618
5,462
[ "hwchase17", "langchain" ]
### System Info Langchain 0.0.184 Python 3.9.2 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm using as_retriever in a RetrievalQA with Pinecone as the vector store. If i use search_type="similarity_score_threshold" code below works. If I change this to `similarity_score_threshold` and set a `score_threshold`, then when I run the qa I get NotImplementedError: The code looks like this ```python db = Pinecone.from_existing_index(index_name=os.environ.get('INDEX'), namespace='SCA_H5', embedding=OpenAIEmbeddings()) retriever=db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"k":3, "score_threshold":0.5}) qa = RetrievalQA.from_chain_type( llm=ChatOpenAI(temperature=0), # uses 'gpt-3.5-turbo' which is cheaper and better chain_type="stuff", retriever=retriever, return_source_documents=True) ``` The python Traceback is ```python NotImplementedError Traceback (most recent call last) Cell In[4], line 1 ----> 1 result = Simon("What does the legisltation cover", sources=True, content=False) Cell In[3], line 26, in Simon(query, sources, content) 21 def Simon(query, sources=True, content=False): 23 instructions = '''You are an expert in Western Australia "Strata Titles Act" 24 answering questions from a citizen. Only use information provided to you from the 25 legislation below. If you do not know say "I do not know"''' ---> 26 result = qa({"query": f'{instructions} \n\n {query}'}) 27 process_llm_response(result, sources=sources, content=content) 28 return (result) File [~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:140](https://file+.vscode-resource.vscode-cdn.net/home/kmcisaac/Projects/Personal/SCAWA/~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:140), in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File [~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:134](https://file+.vscode-resource.vscode-cdn.net/home/kmcisaac/Projects/Personal/SCAWA/~/Projects/Personal/SCAWA/.venv/lib/python3.9/site-packages/langchain/chains/base.py:134), in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130 inputs, ... 165 0 is dissimilar, 1 is most similar. 166 """ --> 167 raise NotImplementedError ``` ### Expected behavior The qa call does not fail.
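Until Pinecone implements the relevance-score hook, a rough workaround (a sketch, assuming the same index and namespace as above) is to skip `similarity_score_threshold` and filter on the raw scores from `similarity_search_with_score` yourself:

```python
import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

db = Pinecone.from_existing_index(
    index_name=os.environ.get("INDEX"),
    namespace="SCA_H5",
    embedding=OpenAIEmbeddings(),
)

query = "What does the legislation cover?"
# Each hit is (Document, score); with a cosine-metric Pinecone index, higher means more similar.
docs_and_scores = db.similarity_search_with_score(query, k=3)
relevant_docs = [doc for doc, score in docs_and_scores if score >= 0.5]
```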
similarity_score_threshold NotImplementedError
https://api.github.com/repos/langchain-ai/langchain/issues/5458/comments
4
2023-05-30T17:36:59Z
2023-10-26T16:07:38Z
https://github.com/langchain-ai/langchain/issues/5458
1,732,692,820
5,458
[ "hwchase17", "langchain" ]
### System Info MacOS Ventura 13.3.1 (a) python = "^3.9" langchain = "0.0.185" ### Who can help? @agola11 @vowelparrot ### Related Components - Agents / Agent Executors - Tools / Toolkits - Callbacks/Tracing ### Reproduction I want to use the CallbackManager to save some info within a tool. So, as per the [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) that is used to create the tool schema, I define the function as: ```python def get_list_of_products( self, profile_description: str, run_manager: CallbackManagerForToolRun ): ``` Nonetheless, once the tool is run the[ expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`, ```python new_argument_supported = signature(self.func).parameters.get("callbacks") ``` So the tool can't run, with the error being: ```bash TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager' ``` This behavior applies to Structured tool and Tool. ### Expected behavior Either the expected function parameter is set to `run_manager` to replicate the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from the `BaseTool` or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create a tool's schema expecting the `callbacks` parameter.
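A stopgap until the naming is reconciled, sketched below: declare the extra parameter as `callbacks` (the name `BaseTool` currently looks for in the function signature) and pass an explicit `args_schema` so the inferred schema never exposes it to the agent. The tool body and names here are made up for illustration:

```python
from pydantic import BaseModel
from langchain.tools import StructuredTool

class ProductArgs(BaseModel):
    profile_description: str

def get_list_of_products(profile_description: str, callbacks=None) -> str:
    # Hypothetical tool body; `callbacks` receives the run manager's child callbacks.
    return f"products matching: {profile_description}"

products_tool = StructuredTool.from_function(
    get_list_of_products,
    name="GetListOfProducts",
    description="Returns products matching a customer profile.",
    args_schema=ProductArgs,  # explicit schema keeps `callbacks` out of the tool's input
)
```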
Tools: Inconsistent callbacks/run_manager parameter
https://api.github.com/repos/langchain-ai/langchain/issues/5456/comments
4
2023-05-30T17:09:02Z
2023-06-23T08:48:28Z
https://github.com/langchain-ai/langchain/issues/5456
1,732,655,629
5,456
[ "hwchase17", "langchain" ]
### Issue with current documentation: Unable to recreate return source documents from prompt in the current [Vectorstore Agent Documentation](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/vectorstore.html). I tried adding `return_source_documents=True` to my `create_vectorstore_agent` (as discussed [here](https://github.com/hwchase17/langchain/issues/4562)) method and explicitly asking for the source document: `agent_executor.run("What did biden say about ketanji brown jackson is the state of the union address? Show me the source document")` But this only returns the content of the `answer`, i.e. ``` { "answer":"message returned here.\n", "sources":"13421341235123" } ``` ### Idea or request for content: Would like either a way to link to a custom output parser / memory for this use case ([memory does seem to work out of the box](https://python.langchain.com/en/latest/modules/agents/agent_executors/examples/sharedmemory_for_tools.html)) or a demo of how to configure the underlying tools to force output to string or something.
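One workaround, sketched under the assumption that `vectorstore` is the same store the agent's toolkit wraps, is to call the underlying retrieval chain directly for the citation step instead of going through `agent_executor.run`, since the dict it returns keeps `source_documents` intact:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),  # same vectorstore as the agent's tool
    return_source_documents=True,
)

result = qa({"query": "What did Biden say about Ketanji Brown Jackson in the State of the Union?"})
print(result["result"])                 # the answer text
for doc in result["source_documents"]:
    print(doc.metadata)                 # source file / chunk metadata for citation
```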
DOC: Return Source Documents to Vectorstore Agent
https://api.github.com/repos/langchain-ai/langchain/issues/5455/comments
1
2023-05-30T17:07:58Z
2023-09-15T16:09:32Z
https://github.com/langchain-ai/langchain/issues/5455
1,732,654,338
5,455
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. As we know, we can build a agent with tool by following way: ```python llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) ``` We can also use `FileManagementToolKit` to manage the file system. But I want to know, how to build a agent of file management of file system. `Langchain` still not provide agent for file management. So I want to know how to build agent with file system. I tried to use `load_tools` as follows, but failed. `FileManagementToolKit` can not be imported to `load_tools()` because `load_tools` does not provider file-related options. ```python import os from langchain.agents.agent_toolkits import FileManagementToolkit from tempfile import TemporaryDirectory from langchain.agents import load_tools working_directory = TemporaryDirectory(dir=os.getcwd()) toolkit = FileManagementToolkit(root_dir=str(working_directory.name)) tool_names = list(map(lambda item: item.name,toolkit.get_tools())) tools = load_tools(tool_names) ``` ``` ValueError Traceback (most recent call last) Cell In[27], line 3 1 from langchain.agents import load_tools 2 tool_names = list(map(lambda item: item.name,toolkit.get_tools())) ----> 3 tools = load_tools(tool_names) File E:\Programming\anaconda\lib\site-packages\langchain\agents\load_tools.py:341, in load_tools(tool_names, llm, callback_manager, **kwargs) 339 tools.append(tool) 340 else: --> 341 raise ValueError(f"Got unknown tool {name}") 342 return tools ValueError: Got unknown tool copy_file ``` the tools of `FileManagementToolkit`: ```python list(map(lambda item: item.name,toolkit.get_tools())) ``` ``` ['copy_file', 'file_delete', 'file_search', 'move_file', 'read_file', 'write_file', 'list_directory'] ``` ### Suggestion: Maybe we can build a like `create_file_agent()` like `create_sql_agent()`. As we all know, we can build sql agent as follows: ```python from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.sql_database import SQLDatabase from langchain.llms.openai import OpenAI def create_mysql_kit(): db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db") llm = OpenAI(temperature=0.3) toolkit = SQLDatabaseToolkit(db=db, llm=llm) agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True ) # agent_executor.run("Who are the users of sysuser in this system? Tell me the username of all users") agent_executor.run("How many people are in this system?") if __name__ == '__main__': create_mysql_kit() ``` I think we can build the `file agent` in the same way. ### More - There may be some way to achieve the same functionality as the file agent, but I don't know. If so, please tell to how to use it. - Can we provide a method to make an agent use all tools, including tools in toolkit and tools of `load_tools()`
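Until something like `create_file_agent()` exists, a sketch that appears to work is to hand the toolkit's tools straight to `initialize_agent` instead of going through `load_tools` (which only knows its own registry). The structured-chat agent type is used because the file tools take multiple arguments:

```python
import os
from tempfile import TemporaryDirectory
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import FileManagementToolkit
from langchain.chat_models import ChatOpenAI

working_directory = TemporaryDirectory(dir=os.getcwd())
toolkit = FileManagementToolkit(root_dir=str(working_directory.name))

agent = initialize_agent(
    tools=toolkit.get_tools(),  # pass the tool objects directly, no load_tools needed
    llm=ChatOpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("Create a file named notes.txt containing 'hello', then list the directory.")
```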
Cannot build a file agent
https://api.github.com/repos/langchain-ai/langchain/issues/5454/comments
5
2023-05-30T17:06:27Z
2023-12-09T16:06:21Z
https://github.com/langchain-ai/langchain/issues/5454
1,732,652,179
5,454
[ "hwchase17", "langchain" ]
https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/chains/conversational_retrieval/base.py#L34 It seems that `_get_chat_history` builds the chat_history string, but if the history is already a string then it should simply be used as-is. The check might even belong in the BaseConversationalRetrievalChain `_call` methods. What would be the correct way of using this if the chat_history is already a string?
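If the history really is pre-formatted, one sketch (assuming the current `from_llm` signature, which forwards a `get_chat_history` callable) is to pass an identity function so the chain skips its own formatting. The vectorstore is assumed to exist already:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    get_chat_history=lambda history: history,  # history is already a formatted string
)

result = qa({
    "question": "And what about its population?",
    "chat_history": "Human: What is the capital of France?\nAssistant: Paris.",
})
```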
Why raise an error in conversation retrieval chain if the chat history is a string?
https://api.github.com/repos/langchain-ai/langchain/issues/5452/comments
3
2023-05-30T16:41:33Z
2023-10-12T16:09:23Z
https://github.com/langchain-ai/langchain/issues/5452
1,732,618,952
5,452
[ "hwchase17", "langchain" ]
### Feature request Example URL of a Text Fragment to the README of this project that highlights the About: https://github.com/hwchase17/langchain#:~:text=About-,%E2%9A%A1,%E2%9A%A1,-Resources A SO: https://stackoverflow.com/questions/62989058/how-does-text-in-url-works-to-highlight-text Example of splitter I'm talking about: https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/markdown.html It'll be awesome if these text splitters could get the ability to try and generate [Text Fragments](https://web.dev/text-fragments/) for the text that was split up so that a URL could be generated that a user can click through and have the browser auto-scroll to the highlighted fragment. I'm sure the system could also be used outside the browser world for some tooling that could itself scroll to as well as it should be a well developed pattern/algorithm. The system wouldn't be perfect due to issues such as duplicate text on page, impossible to generate unique split text, but I'm sure most citations would still find it useful. ### Motivation I'm a little disappointed at the [notion db employee handbook example](https://github.com/hwchase17/notion-qa) where the sources are just filenames. What if the info was in a big doc? `Source: Office d0ebcaaa2074442ba155c67a41d315dd.md` ? Eh. How about as an option: ``` Source: Office%20d0ebcaaa2074442ba155c67a41d315dd.md#:~:text=~12%20o%27%20clock%2C%20there%20is%20lunch%20in%20the%20canteen%2C%20free%20of%20cost.%20Jo%C3%ABlle%20is%20in%20charge%20of%20lunch%20%E2%80%94%C2%A0ask%20her%20if%20you%20need%20anything%20(allergies%20for%20example). ``` [Hyperlink to raw with text fragment](https://github.com/hwchase17/notion-qa/blob/71610847545c97041b93ecb3b19d9746623ce80f/Notion_DB/Blendle's%20Employee%20Handbook%20a834d55573614857a48a9ce9ec4194e3/Office%20d0ebcaaa2074442ba155c67a41d315dd.md#:~:text=~12%20o%27%20clock%2C%20there%20is%20lunch%20in%20the%20canteen%2C%20free%20of%20cost.%20Jo%C3%ABlle%20is%20in%20charge%20of%20lunch%20%E2%80%94%C2%A0ask%20her%20if%20you%20need%20anything%20(allergies%20for%20example).) Of course, that looks ugly in a terminal, but on a web page where links can be hyperlinks like above, it'll be a much better experience. edit: Hmm, that link doesn't work very well on GitHub and it's turbolink'd pages. ### Your contribution I wish, I'm still trying to grasp Langchain itself. I'm particularly interested in Langchain and friends or rivals for Q/A answering and some of my personal hobby's, my work's, and the notion DB example's pages are quite long.
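For a rough feel of what the splitter could emit, a plain standard-library sketch (no LangChain integration yet) that builds a best-effort `#:~:text=start,end` fragment from a chunk's leading and trailing words:

```python
from urllib.parse import quote

def text_fragment_url(page_url: str, chunk: str, words: int = 5) -> str:
    """Best-effort Text Fragment link for a split chunk; duplicate text on the
    page can still make the browser highlight the wrong occurrence."""
    tokens = chunk.split()
    if len(tokens) > 2 * words:
        fragment = quote(" ".join(tokens[:words])) + "," + quote(" ".join(tokens[-words:]))
    else:
        fragment = quote(chunk)
    return f"{page_url}#:~:text={fragment}"

# Hypothetical source page and chunk:
print(text_fragment_url(
    "https://example.com/handbook",
    "At ~12 o'clock there is lunch in the canteen, free of cost.",
))
```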
Text Fragments from text splitters for deep linking with browsers (or compatible systems) to specific text chunks in source documents
https://api.github.com/repos/langchain-ai/langchain/issues/5451/comments
1
2023-05-30T16:13:04Z
2023-09-10T16:09:55Z
https://github.com/langchain-ai/langchain/issues/5451
1,732,574,359
5,451
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. We can get the intermediate messages printed when verbose is set to True in the chains. But is there a way we can get the intermediate messages from the chain as a return value? ### Suggestion: Take the code below as an example. ``` from langchain import OpenAI, ConversationChain from langchain.llms import OpenAI llm = OpenAI(engine="text-davinci-003", temperature=0.9) conversation = ConversationChain(llm=llm, verbose=True) conversation.predict(input="How are you?") conversation.predict(input="I am Ricardo Kaka, what is your name?") conversation.predict(input="What is the first thing I said to you?") ``` We get the messages below printed in the shell. But I am wondering if there is a way I can get the messages as a return value, something like conversation.verbose_message or conversation.get_verbose_message()?
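One way to capture those intermediate prompts programmatically - a sketch using the callback hooks rather than anything chain-specific - is a small handler that records what the chain actually sends to the LLM:

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

class PromptRecorder(BaseCallbackHandler):
    """Collects every formatted prompt the chain sends to the LLM (what verbose=True prints)."""
    def __init__(self):
        self.prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.prompts.extend(prompts)

recorder = PromptRecorder()
conversation = ConversationChain(llm=OpenAI(temperature=0.9))

conversation.predict(input="How are you?", callbacks=[recorder])
print(recorder.prompts)  # the intermediate messages, available as a normal return-style value
```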
Issue: Get the verbose messages from chain
https://api.github.com/repos/langchain-ai/langchain/issues/5448/comments
4
2023-05-30T15:46:42Z
2023-12-11T16:07:28Z
https://github.com/langchain-ai/langchain/issues/5448
1,732,533,742
5,448
[ "hwchase17", "langchain" ]
### Issue with current documentation: In https://docs.langchain.com/docs/components/agents/agent-executor the statement: The agent executor is responsible for calling the agent, getting back **and** action and action input, calling the tool that the action references with the corresponding input, getting the output of the tool, and then passing all that information back into the Agent to get the next action it should take ### Idea or request for content: The agent executor is responsible for calling the agent, getting back action and action input, calling the tool that the action references with the corresponding input, getting the output of the tool, and then passing all that information back into the Agent to get the next action it should take.
DOC: Small typo in the docs, "and" should be removed, and maybe a period in the end would be ok.
https://api.github.com/repos/langchain-ai/langchain/issues/5447/comments
4
2023-05-30T15:43:50Z
2023-10-31T16:06:50Z
https://github.com/langchain-ai/langchain/issues/5447
1,732,528,994
5,447
[ "hwchase17", "langchain" ]
### Feature request An interesting takeaway from Meta ToT - Meta Tree of Thoughts. It aims to enhance the Tree of Thoughts (ToT) language algorithm by using a secondary agent to critique and improve the primary agent's prompts. This innovative approach allows the primary agent to generate more accurate and relevant responses based on the feedback from the secondary agent. https://github.com/kyegomez/Meta-Tree-Of-Thoughts I would like to add it to the overall offering. If that is OK, I can pick it up. ### Motivation Optimization of the continuous feedback loop. ### Your contribution I would like to work on this issue.
Support for Meta ToT
https://api.github.com/repos/langchain-ai/langchain/issues/5444/comments
1
2023-05-30T15:15:18Z
2023-09-10T16:10:00Z
https://github.com/langchain-ai/langchain/issues/5444
1,732,484,112
5,444
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. ElasticVectorSearch currently assigns a uuid as identifier while indexing documents. This is not idempotent: if we run the code twice duplicates are created. Also it would be beneficial to be able to insert new docs, update existing ones and ignore unchanged. ### Suggestion: I propose to check first if _id or id is present in metadata before setting it to a UUID.
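To make the proposal concrete, a small sketch of the behaviour being asked for - note that the `_id`-from-metadata handling and the `ids=` argument are the proposed change, not what `add_texts` does today:

```python
import uuid

texts = ["quarterly report", "updated quarterly report"]
metadatas = [{"_id": "report-2023-q1"}, {}]  # first doc carries a stable id, second does not

# Proposed: reuse a caller-supplied _id/id so re-indexing is idempotent,
# falling back to a random uuid only when none is given.
ids = [str(meta.get("_id") or meta.get("id") or uuid.uuid4()) for meta in metadatas]

# elastic_store.add_texts(texts, metadatas=metadatas, ids=ids)  # hypothetical signature
```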
Allow ElasticVectorSearch#add_texts to explicitely set the _ids
https://api.github.com/repos/langchain-ai/langchain/issues/5437/comments
2
2023-05-30T13:40:49Z
2023-09-10T16:10:05Z
https://github.com/langchain-ai/langchain/issues/5437
1,732,303,481
5,437
[ "hwchase17", "langchain" ]
### Issue with current documentation: I am trying to use an LLM from Hugging Face as shown in the documentation below. It works only when I use the exact prompt from the documentation; when I change the prompt I get no response. https://github.com/hwchase17/langchain/blob/master/docs/modules/models/llms/integrations/huggingface_hub.ipynb ### Idea or request for content: _No response_
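For reference, a sketch close to the notebook that tends to behave better when the prompt is changed - the main knobs are `model_kwargs`, since small hub models often return empty or truncated text without a sensible `max_length`/`temperature` (assumes a valid `HUGGINGFACEHUB_API_TOKEN` is set):

```python
from langchain import HuggingFaceHub, LLMChain, PromptTemplate

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 0.5, "max_length": 128},  # without these, replies may come back empty
)

prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}\n\nAnswer: Let's think step by step.",
)

chain = LLMChain(prompt=prompt, llm=llm)
print(chain.run("Who won the FIFA World Cup in the year 1994?"))
```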
LLM from hugging face not working
https://api.github.com/repos/langchain-ai/langchain/issues/5436/comments
1
2023-05-30T13:05:46Z
2023-09-10T16:10:18Z
https://github.com/langchain-ai/langchain/issues/5436
1,732,233,152
5,436
[ "hwchase17", "langchain" ]
### System Info LangChain version : 0.0.177 Python version : 3.10.8 Platform : WSL 2 ### Who can help? @vowelparrot ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.tools import DuckDuckGoSearchRun search = DuckDuckGoSearchRun(verbose=True) search.run("What is Captial of Ireland?") /home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:20: UserWarning: ddg is deprecated. Use DDGS().text() generator warnings.warn("ddg is deprecated. Use DDGS().text() generator") /home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:22: UserWarning: parameter time is deprecated, use parameter timelimit warnings.warn("parameter time is deprecated, use parameter timelimit") /home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:24: UserWarning: parameter page is deprecated, use DDGS().text() generator warnings.warn("parameter page is deprecated, use DDGS().text() generator") /home/vishal_kesti/miniconda3/envs/video_audio_env/lib/python3.10/site-packages/duckduckgo_search/compat.py:26: UserWarning: parameter max_results is deprecated, use DDGS().text() warnings.warn("parameter max_results is deprecated, use DDGS().text()") "No good DuckDuckGo Search Result was found" ### Expected behavior There is a change in the duckduckgo python library where in they have specifically mentioned to use DDGS instead of ddg and more specifically the "text" if we want to use the api. They also do not support time, page and max_result parameter directly but there is a way to do it too. For eg: from duckduckgo_search import DDGS ddgs = DDGS() keywords = 'live free or die' ddgs_text_gen = ddgs.text(keywords, region='wt-wt', safesearch='Off', timelimit='y') for r in ddgs_text_gen: print(r) # Using lite backend and limit the number of results to 10 from itertools import islice ddgs_text_gen = DDGS().text("notes from a dead house", backend="lite") for r in islice(ddgs_text_gen, 10): print(r) I got it working by making the following code changes: ========================================= """Util that calls DuckDuckGo Search. No setup required. Free. https://pypi.org/project/duckduckgo-search/ """ from typing import Dict, List, Optional from pydantic import BaseModel, Extra from pydantic.class_validators import root_validator class DuckDuckGoSearchAPIWrapper(BaseModel): """Wrapper for DuckDuckGo Search API. Free and does not require any setup """ region: Optional[str] = "wt-wt" safesearch: str = "moderate" **timelimit: Optional[str] = "y"** **backend: str = "api"** class Config: """Configuration for this pydantic object.""" extra = Extra.forbid @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that python package exists in environment.""" try: from duckduckgo_search import DDGS # noqa: F401 except ImportError: raise ValueError( "Could not import duckduckgo-search python package. " "Please install it with `pip install duckduckgo-search`." 
) return values def run(self, query: str) -> str: from duckduckgo_search import **DDGS** """Run query through DuckDuckGo and return results.""" **ddgs = DDGS()** results = **ddgs.text**( query, region=self.region, safesearch=self.safesearch, time=self.**timelimit** ) if len(results) == 0: return "No good DuckDuckGo Search Result was found" snippets = [result["body"] for result in results] return " ".join(snippets) def results(self, query: str, num_results: int) -> List[Dict]: """Run query through DuckDuckGo and return metadata. Args: query: The query to search for. num_results: The number of results to return. Returns: A list of dictionaries with the following keys: snippet - The description of the result. title - The title of the result. link - The link to the result. """ from duckduckgo_search import **DDGS ddgs = DDGS()** results = ddgs.text( query, region=self.region, safesearch=self.safesearch, time=self.timelimit ) if len(results) == 0: return [{"Result": "No good DuckDuckGo Search Result was found"}] def to_metadata(result: Dict) -> Dict: return { "snippet": result["body"], "title": result["title"], "link": result["href"], } return [to_metadata(result) for result in results]
DuckDuckGo search always returns "No good DuckDuckGo Search Result was found"
https://api.github.com/repos/langchain-ai/langchain/issues/5435/comments
4
2023-05-30T12:40:28Z
2024-03-31T22:10:37Z
https://github.com/langchain-ai/langchain/issues/5435
1,732,178,510
5,435
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I can't seem to add memory to an LLMChain which takes in 3 inputs, because the input_key param only takes one input, so the program loses context. My code: template = """" As a helpful chatbot agent of the company, provide a answer to the customer query. Strictly limit to the information provided. {chat_history} calculate out of pocket or cost if deductable is available in the user's plan info Question: {query} f"Information from user's plan: {plan_info} f"Information from company database: {faq_info} """ prompt = PromptTemplate( input_variables=["chat_history","query", "plan_info", "faq_info"], template=template ) memory = ConversationBufferMemory(memory_key="chat_history", input_key=["query","plan_info","faq_info"]) chain = LLMChain(llm=model, prompt=prompt,memory=memory) return chain.predict(query=query, plan_info=plan_info,faq_info=faq_info) ### Suggestion: is there a way to get memory working in my case, or is memory with more than one input not implemented yet?
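Memory does work with multi-input prompts as long as `input_key` is a single string naming the one field to record (passing a list is what breaks it). A sketch keeping the same shape as above, with made-up variable values:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

template = """As a helpful chatbot agent of the company, answer the customer query
strictly from the information provided.
{chat_history}
Question: {query}
Plan info: {plan_info}
FAQ info: {faq_info}"""

prompt = PromptTemplate(
    input_variables=["chat_history", "query", "plan_info", "faq_info"],
    template=template,
)

# input_key must be ONE key: the user-facing field to store in the history.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="query")

chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, memory=memory)
print(chain.predict(query="What is my deductible?", plan_info="plan text...", faq_info="faq text..."))
```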
Memory with multi input
https://api.github.com/repos/langchain-ai/langchain/issues/5434/comments
0
2023-05-30T11:13:02Z
2023-05-30T11:17:09Z
https://github.com/langchain-ai/langchain/issues/5434
1,732,040,122
5,434
[ "hwchase17", "langchain" ]
### System Info Hi :) I tested the new callback stream handler `FinalStreamingStdOutCallbackHandler` and noticed an issue with it. I copied the code from the documentation and made just one change - use `ChatOpenAI` instead of `OpenAI` ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `llm = ChatOpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)` here is my only change `tools = load_tools(["wikipedia", "llm-math"], llm=llm)` `agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)` `agent.run("It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.")` ### Expected behavior The code above returns the response from the agent but does not stream it. In my project, I must use the `ChatOpenAI` LLM, so I would appreciate it if someone could fix this issue, please.
FinalStreamingStdOutCallbackHandler not working with ChatOpenAI LLM
https://api.github.com/repos/langchain-ai/langchain/issues/5433/comments
6
2023-05-30T10:51:06Z
2023-07-31T22:23:44Z
https://github.com/langchain-ai/langchain/issues/5433
1,732,005,171
5,433
[ "hwchase17", "langchain" ]
### System Info Ubuntu 20.04.6 Python 3.8.5 Langchain 0.0.184 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python import torch from langchain.vectorstores import Qdrant from langchain.embeddings import HuggingFaceInstructEmbeddings from langchain.chains import RetrievalQA from langchain.llms import HuggingFacePipeline from transformers import ( AutoModelForCausalLM, AutoTokenizer, pipeline, TextIteratorStreamer, ) # embeddings embeddings_model_name = "hkunlp/instructor-base" embeddings_model = HuggingFaceInstructEmbeddings( model_name=embeddings_model_name, model_kwargs={"device": "cuda"}, ) contents = ["bla", "blabla", "blablabla"] vector_store = Qdrant.from_texts( contents, embeddings_model, location=":memory:", collection_name="test", ) retriever = vector_store.as_retriever() # llm chatbot_model_name = ""togethercomputer/RedPajama-INCITE-Chat-3B-v1" model = AutoModelForCausalLM.from_pretrained( chatbot_model_name, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(chatbot_model_name) streamer = TextIteratorStreamer( tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True, ) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, streamer=streamer, max_length=1024, temperature=0, top_p=0.95, repetition_penalty=1.15, ) pipe = HuggingFacePipeline(pipeline=pipe) # qa qa = RetrievalQA.from_chain_type( llm=pipe, chain_type="stuff", retriever=retriever, return_source_documents=False, ) qa.run("What is the capital of France") ``` which lead to ```python TypeError: cannot pickle '_thread.lock' object ```` ### Expected behavior I should be able to get the `streamer` outputs
`RetrievalQA` and `HuggingFacePipeline` lead to `TypeError: cannot pickle '_thread.lock' object`
https://api.github.com/repos/langchain-ai/langchain/issues/5431/comments
8
2023-05-30T10:00:43Z
2024-02-12T18:50:10Z
https://github.com/langchain-ai/langchain/issues/5431
1,731,930,284
5,431
[ "hwchase17", "langchain" ]
### System Info Langchain version 0.0.184, python 3.9.13 Function `parse_json_markdown` in langchain/output_parsers/json.py fails with input text string: \`\`\`json { "action": "Final Answer", "action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n\`\`\`python\ndef remove_backticks(s):\n return s.strip('\`')\n\nstring_with_backticks = '\`example string\`'\nresult = remove_backticks(string_with_backticks)\nprint(result)\n\`\`\`\n\nThis script defines a function called \`remove_backticks\` that takes a string as input and returns a new string with backticks removed from the beginning and end. It then demonstrates how to use the function with an example string." } \`\`\` Potential case of error: `match.group(2)` in the function `parse_json_markdown` contains only the string up to the first occurrence of the second triple backticks: { "action": "Final Answer", "action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Called function `parse_json_markdown` in langchain/output_parsers/json.py with input text string: \`\`\`json { "action": "Final Answer", "action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n\`\`\`python\ndef remove_backticks(s):\n return s.strip('\`')\n\nstring_with_backticks = '\`example string\`'\nresult = remove_backticks(string_with_backticks)\nprint(result)\n\`\`\`\n\nThis script defines a function called \`remove_backticks\` that takes a string as input and returns a new string with backticks removed from the beginning and end. It then demonstrates how to use the function with an example string." } \`\`\` ### Expected behavior Function `parse_json_markdown` should return the following json string { "action": "Final Answer", "action_input": "Here's a Python script to remove backticks at the beginning and end of a string:\n\n\`\`\`python\ndef remove_backticks(s):\n return s.strip('\`')\n\nstring_with_backticks = '\`example string\`'\nresult = remove_backticks(string_with_backticks)\nprint(result)\n\`\`\`\n\nThis script defines a function called \`remove_backticks\` that takes a string as input and returns a new string with backticks removed from the beginning and end. It then demonstrates how to use the function with an example string." }
parse_json_markdown is unable to parse json strings with nested triple backticks
https://api.github.com/repos/langchain-ai/langchain/issues/5428/comments
8
2023-05-30T08:37:30Z
2024-08-07T09:27:58Z
https://github.com/langchain-ai/langchain/issues/5428
1,731,789,217
5,428
[ "hwchase17", "langchain" ]
### System Info Langchain: 0.0.184 Python: 3.10.9 Platform: Windows 10 with Jupyter lab ### Who can help? @vowelparrot ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction SQLDatabaseToolkit works well as long as the generated SQL does not end with a double quotation mark; if it does, the toolkit truncates the trailing double quotation mark, resulting in an endless loop. Below is the initial code snapshot. ![image](https://github.com/hwchase17/langchain/assets/38554198/6a444508-4774-4962-8ae2-e5513c756535) And when I executed it: ![image](https://github.com/hwchase17/langchain/assets/38554198/5d3c76fe-8151-4caf-9970-03c84473f925) The LLM generates the correct SQL, but the toolkit truncates the trailing double quotation mark. ### Expected behavior The toolkit should not truncate the trailing double quotation mark for PostgreSQL.
SQLDatabaseToolkit doesn't work well with Postgresql, it will truncate the last double quotation marks in the SQL
https://api.github.com/repos/langchain-ai/langchain/issues/5423/comments
4
2023-05-30T04:02:36Z
2023-06-01T01:25:23Z
https://github.com/langchain-ai/langchain/issues/5423
1,731,469,889
5,423
[ "hwchase17", "langchain" ]
### System Info Dear Developer: I have encounter an error that I am not able run OpenAI and AzureChatOpenAI together.Here is how to reproduce the error langchain Version: 0.0.184 pyython: 3.9.12 ```python from langchain.llms import OpenAI, AzureOpenAI from langchain.chat_models import ChatOpenAI, AzureChatOpenAI openai_params = { "openai_api_key" : "key", "openai_api_base": "url" } openaiazure_params = { "deployment_name" : "db", "openai_api_base" : "https://azure.com/", "openai_api_version" : "2023-03-15-preview", "openai_api_type" : "azure", "openai_api_key" : "key" } llm = OpenAI(temperature=0.5, max_tokens=1024, **openai_params) print(llm("tell me joke")). # note that this line works fun it wiil call the api without any error llmazure = AzureChatOpenAI(**openaiazure_params) print(llm("tell me joke")). # now it seems that running the AzureChatOpenAI would somehow change the class attribute of the OpenAI. If I rerun this line it would give the following error ``` ```text File "/Users/xyn/anaconda3/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request raise error.InvalidRequestError( openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'> ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.llms import OpenAI, AzureOpenAI from langchain.chat_models import ChatOpenAI, AzureChatOpenAI openai_params = { "openai_api_key" : "key", "openai_api_base": "url" } openaiazure_params = { "deployment_name" : "db", "openai_api_base" : "https://azure.com/", "openai_api_version" : "2023-03-15-preview", "openai_api_type" : "azure", "openai_api_key" : "key" } llm = OpenAI(temperature=0.5, max_tokens=1024, **openai_params) print(llm("tell me joke")). # note that this line works fun it wiil call the api without any error llmazure = AzureChatOpenAI(**openaiazure_params) print(llm("tell me joke")) # now it seems that running the AzureChatOpenAI would somehow change the class attribute of the OpenAI. If I rerun this line it would give the following error ``` ### Expected behavior ```python print(llm("tell me joke")) # still gives the result after using the AzureChatOpenAI from langchain.schema import HumanMessage llmazure([HumanMessage(content="tell me joke")]) # could also do appropriate calls # was worried attributes would be changed back, so what if I reset the OpenAI and test AzureChatOpenAI again llm = OpenAI(temperature=0.5, max_tokens=1024, **openai_params) # then test AzureChatOpenAI llmazure([HumanMessage(content="tell me joke")]) # do appropriate calls print(llm("tell me joke")) # do appropriate calls ```
Can not Use OpenAI and AzureChatOpenAI together
https://api.github.com/repos/langchain-ai/langchain/issues/5422/comments
3
2023-05-30T03:52:01Z
2023-09-18T16:09:59Z
https://github.com/langchain-ai/langchain/issues/5422
1,731,462,914
5,422
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. The LangChain framework mostly aims to build applications which interact with LLMs. Many online applications are themselves implemented in other languages, such as Java or C++, but LangChain only supports Python and JS for now. How about implementing versions of LangChain in other languages? ### Suggestion: _No response_
Implement other Language version of LangChain
https://api.github.com/repos/langchain-ai/langchain/issues/5421/comments
9
2023-05-30T03:23:46Z
2024-03-31T20:44:24Z
https://github.com/langchain-ai/langchain/issues/5421
1,731,445,889
5,421
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I tried, but it doesn't work. My code: ``` let { HNSWLib } = await import('langchain/vectorstores/hnswlib') let { OpenAIEmbeddings } = await import('langchain/embeddings/openai') let vectors1 = await HNSWLib.load( "D:/workcode/nodejs/chatgpt_server/vectors/32202", new OpenAIEmbeddings() ) let vectors2 = await HNSWLib.load( "D:/workcode/nodejs/chatgpt_server/vectors/60551", new OpenAIEmbeddings() ) let vectors3 = await vectors1.addVectors(vectors2, vectors2.docstore._docs) ``` ### Suggestion: _No response_
Issue: How to merge two vector in HNSWLib
https://api.github.com/repos/langchain-ai/langchain/issues/5420/comments
0
2023-05-30T02:44:14Z
2023-05-30T08:52:51Z
https://github.com/langchain-ai/langchain/issues/5420
1,731,421,091
5,420
[ "hwchase17", "langchain" ]
### Feature request Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high dimensional vector data stored directly in Azure Cosmos DB for MongoDB vCore. It contains similarity measures such as COS (cosine distance), L2 (Euclidean distance) or IP (inner product) which measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically and retrieved during query time. The accompanying PR would add support for Langchain Python users to store vectors from document embeddings generated from APIs such as Azure OpenAI Embeddings or Hugging Face on Azure. [Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search) ### Motivation This capability described in the feature request is currently not available for Langchain Python. ### Your contribution I will be submitting a PR for this feature request.
Add AzureCosmosDBVectorSearch VectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/5419/comments
1
2023-05-29T23:58:25Z
2023-09-18T16:10:05Z
https://github.com/langchain-ai/langchain/issues/5419
1,731,329,471
5,419
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. My code: ``` from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings( openai_api_base='https://api.openai.com/v1', openai_api_type='open_ai', model='text-embedding-ada-002', openai_api_version='2023-05-15', openai_api_key=openai_api_key, max_retries=3, ) ``` And I got `2023-05-30 07:28:40,163 INFO error_code=None error_message='Unsupported OpenAI-Version header provided: 2023-05-15. (HINT: you can provide any of the following supported versions: 2020-10-01, 2020-11-07. Alternatively, you can simply omit this header to use the default version associated with your account.)' error_param=headers:openai-version error_type=invalid_request_error message='OpenAI API error received' stream_error=False ` Why are the supported versions so old (2020-10-01 and 2020-11-07)? Is this normal? Has anybody else hit this error? ### Suggestion: _No response_
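The `2023-05-15` version string (and `openai_api_version` in general) is an Azure OpenAI concept; when pointing at `api.openai.com` it seems safest to drop it, along with `openai_api_type`, so the default headers are used. A sketch, reusing the `openai_api_key` variable from the snippet above:

```python
from langchain.embeddings.openai import OpenAIEmbeddings

# Against api.openai.com, omit openai_api_version / openai_api_type entirely;
# they are only needed for Azure OpenAI deployments.
embeddings = OpenAIEmbeddings(
    model="text-embedding-ada-002",
    openai_api_key=openai_api_key,
    max_retries=3,
)
vector = embeddings.embed_query("hello world")
```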
Issue: Unsupported OpenAI-Version header provided: 2023-05-15. (HINT: you can provide any of the following support ed versions: 2020-10-01, 2020-11-07.'
https://api.github.com/repos/langchain-ai/langchain/issues/5418/comments
1
2023-05-29T23:56:40Z
2023-09-10T16:10:21Z
https://github.com/langchain-ai/langchain/issues/5418
1,731,327,871
5,418
[ "hwchase17", "langchain" ]
### Feature request It would be very nice to have the similarity score from docsearch. Right now I get a list of documents even if the model says "I'm sorry, but I don't have enough information to answer your question". ``` from langchain.chains import RetrievalQA qa = RetrievalQA.from_chain_type( llm=llm, chain_type= "stuff", retriever = docsearch.as_retriever( search_type="similarity", search_kwargs={"k":3} ), return_source_documents=True ) ``` ### Motivation Right now it returns a list of documents even when the model says "I'm sorry, but I don't have enough information to answer your question." I also get a list of documents that are not really relevant. ### Your contribution Not sure here. For example, in `chroma.py` I can add the similarity score to the output: ``` def similarity_search( self, query: str, k: int = DEFAULT_K, filter: Optional[Dict[str, str]] = None, **kwargs: Any, ) -> List[Document]: docs_and_scores = self.similarity_search_with_score(query, k, filter=filter) result = [] for doc, score in docs_and_scores: doc.metadata['score'] = score result.append(doc) return result ``` **BUT** the score does not correlate well with answer quality - I get a similarity of 0.3-0.4 for correct answers and, at the same time, for answers like "The context does not provide any information about what...". So this method is good for returning the score, but it does not solve all my questions.
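In the meantime, one way to surface scores without patching `chroma.py` is to run the retrieval step separately with `similarity_search_with_score` and decide whether to hand the question to the QA chain at all - a sketch reusing the `docsearch` and `qa` objects from above (note Chroma returns a distance here, so lower means more similar, and the threshold is just a guess to tune):

```python
query = "What does the policy cover?"

docs_and_scores = docsearch.similarity_search_with_score(query, k=3)
for doc, score in docs_and_scores:
    print(round(score, 3), doc.metadata.get("source"))

# Only ask the LLM when at least one chunk is close enough.
if docs_and_scores and docs_and_scores[0][1] < 0.35:
    result = qa({"query": query})
else:
    result = {"result": "No sufficiently relevant context found."}
```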
Is it possible to return similarity score from RetrievalQA/docsearch
https://api.github.com/repos/langchain-ai/langchain/issues/5416/comments
14
2023-05-29T21:02:00Z
2024-03-06T06:46:21Z
https://github.com/langchain-ai/langchain/issues/5416
1,731,233,003
5,416
[ "hwchase17", "langchain" ]
### System Info Langchain 0.0.182, python 3.11.3, mac os ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [x] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction FLARE with azure open ai causes this issue. InvalidRequestError Traceback (most recent call last) in <cell line: 1>() ----> 1 flare.run(query) 20 frames [/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py](https://localhost:8080/#) in __prepare_create_request(cls, api_key, api_base, api_type, api_version, organization, **params) 81 if typed_api_type in (util.ApiType.AZURE, util.ApiType.AZURE_AD): 82 if deployment_id is None and engine is None: ---> 83 raise error.InvalidRequestError( 84 "Must provide an 'engine' or 'deployment_id' parameter to create a %s" 85 % cls, InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'> ### Expected behavior > Entering new FlareChain chain... Current Response: Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> RESPONSE: > Entering new QuestionGeneratorChain chain... Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " decentralized platform for natural language processing" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. 
The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " uses a blockchain" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " distributed ledger to" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " process data, allowing for secure and transparent data sharing." is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. 
It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " set of tools" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " help developers create" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. 
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " create an AI system" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " NLP applications" is:
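For what it's worth, the traceback points at a raw `Completion` call being made without a deployment, so the LLM handed to FLARE needs the Azure deployment baked in. A sketch of that wiring (deployment/endpoint values are placeholders, and it may still fail if `FlareChain.from_llm` builds an internal `OpenAI` of its own for the logprobs-based response chain):

```python
import os
from langchain.chains import FlareChain
from langchain.llms import AzureOpenAI

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"  # placeholder
os.environ["OPENAI_API_KEY"] = "<key>"                                       # placeholder

llm = AzureOpenAI(deployment_name="text-davinci-003", temperature=0)  # your completion deployment

flare = FlareChain.from_llm(
    llm,
    retriever=retriever,  # assumes the retriever used in the snippet above
    max_generation_len=164,
    min_prob=0.3,
)
```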
FLARE | Azure open Ai doesn't work
https://api.github.com/repos/langchain-ai/langchain/issues/5414/comments
11
2023-05-29T17:50:06Z
2023-10-10T12:40:52Z
https://github.com/langchain-ai/langchain/issues/5414
1,731,079,465
5,414
[ "hwchase17", "langchain" ]
### Feature request It would be useful to introduce an EarlyStoppingException for capture by the AgentExecutor. ### Motivation It would be nice to throw an `EarlyStoppingException` in a tool used by an Agent when certain pre-conditions are met (e.g. a validation of something has failed). The `EarlyStoppingException` would be caught by the AgentExecutor, where it would immediately return the error message of the tool as the output. My current workaround to achieve something like this without customising the AgentExecutor requires me to throw the error message as part of the JSON output of the tool, in the output format the agent expects from the tool. The agent then takes this information and runs it through an LLM. This is not as ideal as an immediate return of the error message, as the LLM doesn't return the exact message and it has to pass through an LLM again. `return_immediate_results` is not ideal in this case, as I would like the Agent to summarise the output of the tool on most passes, except when the condition is met, where an `EarlyStoppingException` is thrown and caught by the `AgentExecutor`. ### Your contribution Happy to create a PR for this if it is wanted.
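To make the request concrete, a purely illustrative sketch of the proposed API - neither the exception class nor the executor handling exists in LangChain today:

```python
class EarlyStoppingException(Exception):
    """Proposed: raise from a tool to make the AgentExecutor return immediately."""

def validate_order(order_id: str) -> str:
    if not order_id.startswith("ORD-"):
        # Proposed behaviour: this message becomes the executor's final output,
        # without another pass through the LLM.
        raise EarlyStoppingException(f"'{order_id}' is not a valid order id.")
    return f"Order {order_id} looks valid."

# Hypothetical handling inside the executor's tool-call step:
# try:
#     observation = tool.run(tool_input)
# except EarlyStoppingException as e:
#     return AgentFinish(return_values={"output": str(e)}, log=str(e))
```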
Early Stopping Exception
https://api.github.com/repos/langchain-ai/langchain/issues/5412/comments
1
2023-05-29T17:02:26Z
2023-09-10T16:10:25Z
https://github.com/langchain-ai/langchain/issues/5412
1,731,041,874
5,412
[ "hwchase17", "langchain" ]
### System Info ![Screen Shot 2023-05-29 at 9 33 50 AM](https://github.com/hwchase17/langchain/assets/36281899/25d386c4-1769-486c-a664-baf235068b97) ![Screen Shot 2023-05-29 at 9 33 25 AM](https://github.com/hwchase17/langchain/assets/36281899/1750f2be-a049-4b7b-a555-0505ecfcfb51) ![Screen Shot 2023-05-29 at 9 33 00 AM](https://github.com/hwchase17/langchain/assets/36281899/ede64632-7098-4c87-aeca-43463cc2a40a) python 3.10, Langchain 0.0.184. It seems that MultiRetrievalQAChain requires a chat model to run, but it feels like this shouldn't be required. Other QA pipelines don't require a chat model, and I don't see why this one should. I'm guessing this is just a symptom of some parts of the system being upgraded to use chat models, since OpenAI is used for a lot of little things under the hood in langchain, like deciding which vector DB to use based on the retriever_infos dict. However, if you are trying to use another LLM aside from OpenAI, you cannot use MultiRetrievalQA. Further, I've run into some issues of this sort across other QA pipelines when using Hugging Face LLMs. Generally, better support for Hugging Face is important to me, and I think it will be important to many others in the near future as the open-source model ecosystem continues to grow. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction
```python
import os
import requests
import shutil
from abc import ABC, abstractmethod
from typing import List, Dict, Optional

import torch
import langchain
from langchain.llms import HuggingFacePipeline, HuggingFaceHub
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.schema import Document
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.chains.router import MultiRetrievalQAChain
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, GPTJForCausalLM, AutoModelForQuestionAnswering, GPTJForQuestionAnswering

# device object
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# hf embeddings model
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Cerebras GPT2.7B
model = AutoModelForCausalLM.from_pretrained(
    "cerebras/Cerebras-GPT-2.7B",
    torch_dtype=torch.float16,
).to(device)
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-2.7B")

# hf pipeline
gen_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=200,
    early_stopping=True,
    no_repeat_ngram_size=2,
    device=0
)

# define llm
gen_llm = HuggingFacePipeline(pipeline=gen_pipe)

# load documents and create retrievers
docs1 = TextLoader('docs/dir1/mydoc1.txt').load_and_split()
retriever1 = Chroma.from_documents(docs1, embeddings).as_retriever()
docs2 = TextLoader('docs/dir2/mydoc2.txt').load_and_split()
retriever2 = Chroma.from_documents(docs2, embeddings).as_retriever()

retriever_infos = [
    {
        "name": "space-knowledge",
        "description": "Good for answering general questions about outer space.",
        "retriever": retriever1
    },
    {
        "name": "earth-knowledge",
        "description": "Good for answering questions about planet earth",
        "retriever": retriever2
    }
]

chain = MultiRetrievalQAChain.from_retrievers(llm=gen_llm, retriever_infos=retriever_infos, verbose=True)
```
### Expected behavior I expect this to work the same way it does when using an OpenAI model, i.e. the model chooses the vector DB to use based on the question, searches, and returns a response.
MultiRetrievalQAChain requires ChatModel... but should it?
https://api.github.com/repos/langchain-ai/langchain/issues/5411/comments
7
2023-05-29T16:47:37Z
2023-09-20T16:09:16Z
https://github.com/langchain-ai/langchain/issues/5411
1,731,026,192
5,411
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I have successfully set up streaming in an HTTP call with FastAPI and OpenAI + ConversationalRetrievalChain. If I don't use streaming and just return the whole response, like I was doing previously, I also get the metadata displayed with the answer. If I enable streaming, only the answer is displayed, with a '%' at the end of the response, like: ......dummytext.% The code responsible for streaming:
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import queue


class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


class ChainStreamHandler(StreamingStdOutCallbackHandler):
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)
```
Ask question function:
```
def askQuestion(self, generator, collection_id, question):
    try:
        collection_name = "collection-" + str(collection_id)
        self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=settings.OPENAI_API_KEY, streaming=True, verbose=VERBOSE, callback_manager=CallbackManager([ChainStreamHandler(generator)]))
        self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key='answer')
        self.chain = ConversationalRetrievalChain.from_llm(self.llm, chroma_Vectorstore.as_retriever(similarity_search_with_score=True), return_source_documents=True, verbose=VERBOSE, memory=self.memory)

        result = self.chain({"question": question})
        res_dict = {
            "answer": result["answer"],
        }
        res_dict["source_documents"] = []
        for source in result["source_documents"]:
            res_dict["source_documents"].append({
                "page_content": source.page_content,
                "metadata": source.metadata
            })
        return res_dict
    finally:
        generator.close()
```
And the API route itself:
```
def stream(question, collection_id):
    generator = ThreadedGenerator()
    threading.Thread(target=thread_handler.askQuestion, args=(generator, collection_id, question)).start()
    return generator


@router.post("/collection/{collection_id}/ask_question")
async def ask_question(collection_id: str, request: Request):
    form_data = await request.form()
    question = form_data["question"]
    return StreamingResponse(stream(question, collection_id), media_type='text/event-stream')
```
In the askQuestion function I am creating the res_dict object, which holds the answer and also the source metadata. How can I also display the sources after the answer has finished streaming? (I already have the sources in the metadata.) Is it better to make a separate API call for this, or are there other recommended practices? Thanks to everyone for any advice!
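As an illustrative sketch of one possible direction (an assumption, not something stated in the question): since ChainStreamHandler already pushes tokens through the ThreadedGenerator, the source metadata could be pushed through the same generator as one final, delimiter-marked chunk once the chain returns, letting the client render the sources after the streamed answer. The `SOURCES:` delimiter below is an arbitrary choice:
```python
import json

def askQuestion(self, generator, collection_id, question):
    try:
        # ... set up self.llm, self.memory and self.chain exactly as in the snippet above ...
        result = self.chain({"question": question})
        sources = [
            {"page_content": s.page_content, "metadata": s.metadata}
            for s in result["source_documents"]
        ]
        # The answer tokens were already streamed by ChainStreamHandler;
        # append the serialized sources as one last chunk before closing.
        generator.send("\nSOURCES:" + json.dumps(sources))
    finally:
        generator.close()
```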
Issue: Display metadata after streaming a response with FastAPI
https://api.github.com/repos/langchain-ai/langchain/issues/5409/comments
3
2023-05-29T15:31:49Z
2023-09-18T16:10:20Z
https://github.com/langchain-ai/langchain/issues/5409
1,730,952,051
5,409
[ "hwchase17", "langchain" ]
### Feature request Add the ability to pass the original prompt through to the ExecutorAgent so that the original explicit context is not lost during a PlanAndExecute run. ### Motivation PlanAndExecute agents can create a plan of steps dependent on context given in the original prompt. However, this context is lost after the plan is created and is being executed. However, often the plan is formed in a way which refers to the prior context, losing information. For example, I gave the following prompt, and gave the agent access only to the PythonREPL tool: ```py prompt = ( f"Task: Analyse the customer data available in the database with path '{db_path}'. Tell me the average " "sales by month." ) ``` In the above example, `db_path` is a fully formed string which can be passed directly to `sqlalchemy.create_engine`. The first step in the plan formed was: `Connect to the database using the given path`. This would ordinarily be fine, however, the context of the "given path" was lost, as it was not part of the reformed prompt passed to the executor. Optionally including the original prompt in the template should assist with this. ### Your contribution I will be submitting a PR shortly with a proposed solution :)
Add the ability to pass the prompt through to Executor Agents for enrichment during PlanAndExecute
https://api.github.com/repos/langchain-ai/langchain/issues/5400/comments
0
2023-05-29T13:19:30Z
2023-06-03T21:59:11Z
https://github.com/langchain-ai/langchain/issues/5400
1,730,753,427
5,400
[ "hwchase17", "langchain" ]
### System Info langchain 0.0.183 , Platform Anaconda, Python version 3.10.9 ![Capture](https://github.com/hwchase17/langchain/assets/42122075/b09e3f03-d63a-4aa6-94bf-79722f2118ab) ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction [issue.txt](https://github.com/hwchase17/langchain/files/11591485/issue.txt) ### Expected behavior Cypher Query along with Explanation
GraphCypherQAChain Authentication error
https://api.github.com/repos/langchain-ai/langchain/issues/5399/comments
2
2023-05-29T11:52:53Z
2023-09-12T16:12:01Z
https://github.com/langchain-ai/langchain/issues/5399
1,730,626,909
5,399
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. When using the BooleanOutputParser in chain_filter.py, if the LLM outputs 'yes'/'no' (lowercase), the 'parse' function will throw a ValueError. ### Suggestion: I fixed it as follows:
```python
def parse(self, text: str) -> bool:
    """Parse the output of an LLM call to a boolean.

    Args:
        text: output of language model

    Returns:
        boolean
    """
    cleaned_text = text.upper().strip()
    if cleaned_text not in (self.true_val, self.false_val):
        raise ValueError(
            f"BooleanOutputParser expected output value to either be "
            f"{self.true_val} or {self.false_val}. Received {cleaned_text}."
        )
    return cleaned_text == self.true_val
```
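A rough usage illustration of the behaviour being described (assuming the parser is importable from `langchain.output_parsers.boolean` and keeps its default `true_val="YES"` / `false_val="NO"` values):
```python
from langchain.output_parsers.boolean import BooleanOutputParser

parser = BooleanOutputParser()  # defaults assumed: true_val="YES", false_val="NO"

parser.parse("YES")  # -> True
parser.parse("yes")  # without the .upper() in the fix above this raises ValueError;
                     # with it, the lowercase answer is normalized and returns True
```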
Issue: value error in BooleanOutputParser
https://api.github.com/repos/langchain-ai/langchain/issues/5396/comments
1
2023-05-29T10:56:23Z
2023-09-10T16:10:36Z
https://github.com/langchain-ai/langchain/issues/5396
1,730,545,468
5,396
[ "hwchase17", "langchain" ]
### System Info langchain 0.183 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction If GPT returns a message formatted this way: Thought: I will produce a list of things. Action:
```
{
  "action": "Final Answer",
  "action_input": [
    { "thing": "A", "attribute": "X" },
    { "thing": "B", "attribute": "Y" }
  ]
}
```
where action_input is a list instead of a string, the output parser will recognize it as valid and return it all the way back. output_parser.py, lines 23-32:
```
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
    try:
        action_match = re.search(r"```(.*?)```?", text, re.DOTALL)
        if action_match is not None:
            response = json.loads(action_match.group(1).strip(), strict=False)
            if isinstance(response, list):
                # gpt turbo frequently ignores the directive to emit a single action
                logger.warning("Got multiple action responses: %s", response)
                response = response[0]
            if response["action"] == "Final Answer":
```
`response = json.loads` only checks that the response is valid JSON, not that the type of `response["action"]` is a string. While not necessarily an issue functionally speaking, this means that the function prototype in base.py:225
```
def run(self, *args: Any, callbacks: Callbacks = None, **kwargs: Any) -> str:
    """Run the chain as text in, text out or multiple variables, text out."""
```
is incorrect. The return value usually is a string, but could be a dictionary or a list depending on what GPT returns. ### Expected behavior Either force GPT to reformulate, or change the function prototype.
Parser output may not always produce a string (based on what GPT returns), any valid json construct is possible
https://api.github.com/repos/langchain-ai/langchain/issues/5393/comments
1
2023-05-29T08:50:13Z
2023-09-10T16:10:41Z
https://github.com/langchain-ai/langchain/issues/5393
1,730,352,093
5,393
[ "hwchase17", "langchain" ]
### System Info Langchain v0.0.183 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain import Wikipedia from langchain.chat_models import ChatOpenAI from langchain.agents import initialize_agent, Tool from langchain.agents import AgentType from langchain.agents.react.base import DocstoreExplorer docstore=DocstoreExplorer(Wikipedia()) tools = [ Tool( name="Search", func=docstore.search, description="useful for when you need to ask with search" ), Tool( name="Lookup", func=docstore.lookup, description="useful for when you need to ask with lookup" ) ] llm = ChatOpenAI() react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True) question = 'Question' react.run(question) ``` Running this snippet initializes a ReActDocStoreAgent and runs it on the given question. [initialize_agent( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/initialize.py#L12) returns an AgentExecutor. During this call an agent is created using the [from_llm_and_tools( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L528) function. This function creates a new llm_chain for the agent and sets the prompt using agent_cls.create_prompt( ). Since we are using a ReActDocStoreAgent, [the function](https://github.com/hwchase17/langchain/blob/master/langchain/agents/react/base.py#L35) simply returns the WIKI_PROMPT, which is a PromptTemplate object. [run( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L934) from AgentExecutor calls [_take_next_step( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L748) which calls agent.plan( ). [plan( )](https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent.py#L425) eventually calls llm_chain.predict( ). Eventually llm_chain calls [generate( )](https://github.com/hwchase17/langchain/blob/master/langchain/chains/llm.py#L72). This function creates a prompt by calling [prep_prompt( )](https://github.com/hwchase17/langchain/blob/master/langchain/chains/llm.py#L94). The prompt is a StringPromptValue since it is created using [format_prompt( )](https://github.com/hwchase17/langchain/blob/master/langchain/prompts/base.py#L230). This variable is then used as an argument for the function llm.generate_prompt( ). [generate_prompt( )](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/base.py#L136) converts the prompt to message format using [to_messages( )](https://github.com/hwchase17/langchain/blob/master/langchain/prompts/base.py#L98), which treats the entire prompt as a HumanMessage. ### Expected behavior The issue here is that the prompt includes both human and AI messages. Ideally, the questions and observations should be human messages whereas the thoughts and actions should be AI messages. Treating the entire prompt as a human message may decrease prompt quality and lead to suboptimal performance.
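As a hand-built illustration of that point (a sketch only, not how the agent currently constructs its prompt): the same ReAct trace could be expressed with role-separated messages so the chat model sees its own thoughts and actions as assistant turns:
```python
from langchain.schema import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage(content="Answer the question using Thought / Action / Observation steps."),
    HumanMessage(content="Question: Who wrote the novel the film is based on?"),   # user's question
    AIMessage(content="Thought: I need to search for the film first.\nAction: Search[the film]"),  # model's own step
    HumanMessage(content="Observation: <tool output would go here>"),              # tool result fed back as a user turn
]
```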
Incorrect prompt formatting when initializing ReActDocstoreAgent with a chat model
https://api.github.com/repos/langchain-ai/langchain/issues/5390/comments
2
2023-05-29T07:13:52Z
2023-09-10T16:10:46Z
https://github.com/langchain-ai/langchain/issues/5390
1,730,216,162
5,390
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Issues #1026 and #5294 raise legitimate concerns with the security of executing LLM-generated code via the `exec()` function. Out of that discussion came a link to [this blog](https://til.simonwillison.net/webassembly/python-in-a-wasm-sandbox) that demonstrated a way to use Wasm to execute Python code in an isolated interpreter. I have developed a library based on that code called [wasm-exec](https://github.com/Jflick58/wasm_exec) to provide a clean interface to this solution. I’d like to next make a PR in LangChain replacing instances of `exec()` with `wasm_exec()` but wanted to get some feedback on this solution before doing so. Right now, the largest unknown is the extent of support for arbitrary packages, which may make running something like Pandas in the sandbox untenable until a solution is found. I believe I have a path forward on that (via installing when configuring the wasm runtime), but will need to continue to experiment. ### Suggestion: _No response_
RFC: Use wasm-exec package to sandbox code execution by the Python REPL tool
https://api.github.com/repos/langchain-ai/langchain/issues/5388/comments
3
2023-05-29T05:39:42Z
2023-09-18T16:10:26Z
https://github.com/langchain-ai/langchain/issues/5388
1,730,105,141
5,388
[ "hwchase17", "langchain" ]
### System Info Langchain version: NA Python 3.10.9 WSL2 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. pip install langchain[all] 2. Wait 3. Observe: ``` Preparing metadata (setup.py) ... done Downloading openai-0.0.2.tar.gz (741 bytes) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [6 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-9c_7ozkg/openai_9e8b55b2ec17406d8b64d964c29099f3/setup.py", line 6, in <module> raise RuntimeError( RuntimeError: This package is a placeholder package on the public PyPI instance, and is not the correct version to install. If you are having trouble figuring out the correct package to install, please contact us. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. ``` ### Expected behavior Installation complete.
Failure when running pip install langchain[all]
https://api.github.com/repos/langchain-ai/langchain/issues/5387/comments
14
2023-05-29T05:23:41Z
2024-03-20T11:25:41Z
https://github.com/langchain-ai/langchain/issues/5387
1,730,087,228
5,387
[ "hwchase17", "langchain" ]
### System Info
```
File "d:\langchain\pdfqa-app.py", line 46, in _upload_data
    Pinecone.from_texts(self.doc_chunk,embeddings,batch_size=16,index_name=self.index_name)
File "E:\anaconda\envs\langchain\lib\site-packages\langchain\vectorstores\pinecone.py", line 232, in from_texts
    embeds = embedding.embed_documents(lines_batch)
File "E:\anaconda\envs\langchain\lib\site-packages\langchain\embeddings\openai.py", line 297, in embed_documents
    return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "E:\anaconda\envs\langchain\lib\site-packages\langchain\embeddings\openai.py", line 221, in _get_len_safe_embeddings
    token = encoding.encode(
File "E:\anaconda\envs\langchain\lib\site-packages\tiktoken\core.py", line 117, in encode
    if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
```
### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction
```python
def _load_docs(self):
    loader = PyPDFLoader("D:\langchain\data_source\\1706.03762.pdf")
    self.doc = loader.load()

def _split_docs(self):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=self.chunk_size, chunk_overlap=self.chunk_overlap, separators=["\n\n", ""])
    self.doc_chunk = text_splitter.split_documents(self.doc)

def _upload_data(self):
    embeddings = OpenAIEmbeddings()
    list_of_index = pinecone.list_indexes()
    if self.index_name in list_of_index:
        Pinecone.from_texts(self.doc_chunk, embeddings, batch_size=16, index_name=self.index_name)
    else:
        pinecone.create_index(self.index_name, dimension=1024)  # for open AI
        Pinecone.from_texts(self.doc_chunk, embeddings, batch_size=16, index_name=self.index_name)

def dataloader(self):
    self._load_docs()
    self._split_docs()
    self._upload_data()
```
### Expected behavior Please help with the solution.
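For context, an assumption about the cause rather than something stated above: `split_documents` returns `Document` objects, while `Pinecone.from_texts` expects plain strings, which would explain tiktoken receiving a non-string. A minimal sketch of the two entry points, where `doc_chunks` stands for the splitter output and the index name is a placeholder:
```python
from langchain.vectorstores import Pinecone
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# from_texts expects an iterable of strings
Pinecone.from_texts([d.page_content for d in doc_chunks], embeddings, index_name="my-index")

# from_documents accepts the Document objects produced by a text splitter
Pinecone.from_documents(doc_chunks, embeddings, index_name="my-index")
```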
Running into this error while creating embeddings out of pdf file
https://api.github.com/repos/langchain-ai/langchain/issues/5384/comments
2
2023-05-29T01:10:32Z
2024-02-15T07:44:39Z
https://github.com/langchain-ai/langchain/issues/5384
1,729,806,672
5,384
[ "hwchase17", "langchain" ]
### System Info BaseConversationalRetrievalChain._call() and ._acall() eventually run into errors in LLMChain.prep_prompts() when referencing input_list[0]. This triggers an index error with an empty input_list, which happens when no Documents are retrieved during _call() by BaseConversationalRetrievalChain._get_docs(). There should be a check for this. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Set up a QA call to ConversationalRetrievalChain.from_llm() on a retriever to which a filter has been applied that will result in no Documents being matched. ### Expected behavior If 0 docs are found, don't bother running the rest of the process.
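As an illustrative sketch of what such a guard might look like (the variable and key names are assumptions, not the chain's actual internals):
```python
docs = self._get_docs(new_question, inputs)
if not docs:
    # Nothing was retrieved: skip the combine-docs LLM call entirely
    # instead of letting LLMChain.prep_prompts() index into an empty list.
    return {self.output_key: "", "source_documents": []}
```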
BaseConversationalRetrievalChain error on 0 docs found
https://api.github.com/repos/langchain-ai/langchain/issues/5378/comments
1
2023-05-28T23:03:16Z
2023-09-10T16:10:51Z
https://github.com/langchain-ai/langchain/issues/5378
1,729,730,030
5,378
[ "hwchase17", "langchain" ]
Windows 11, Anaconda, python 3.9.16 LanchChain 0.0.183 My goal is to extend the tools used by baby AGI, more specifically to use At least the basic WriteFileTool() and ReadFileTool(). they use two inputs though, so I cannot stick with the vanilla ZeroShotAgent. forgive my poor understanding of Python but I have tried to use AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION and replace it in the code. Agent initialization is the only modification I've made to the code in the link below, though it throws an error. Could anyone kindly help updating the code in order to be able to leverage multiple input tools or provide guidance or resources? https://python.langchain.com/en/latest/use_cases/agents/baby_agi_with_agent.html?highlight=babyagi%20with%20tools ```python from langchain.agents import AgentType from langchain.agents import initialize_agent @classmethod def from_llm( cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = False, **kwargs ) -> "BabyAGI": """Initialize the BabyAGI Controller.""" task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose) task_prioritization_chain = TaskPrioritizationChain.from_llm( llm, verbose=verbose ) llm_chain = LLMChain(llm=llm, prompt=prompt) tool_names = [tool.name for tool in tools] agent = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,llm=llm_chain, tools=tool_names) agent_executor = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True ) return cls( task_creation_chain=task_creation_chain, task_prioritization_chain=task_prioritization_chain, execution_chain=agent_executor, vectorstore=vectorstore, **kwargs, ) ``` I get the error ```python File ~\anaconda3\envs\aagi\lib\site-packages\langchain\agents\structured_chat\base.py:83 in create_prompt args_schema = re.sub("}", "}}}}", re.sub("{", "{{{{", str(tool.args))) AttributeError: 'str' object has no attribute 'args' ``` ### Suggestion: _No response_
error thrown when trying to implement BabyAGI with tools with multiple inputs (requiring Structured Tool Chat Agent)
https://api.github.com/repos/langchain-ai/langchain/issues/5375/comments
1
2023-05-28T20:52:13Z
2023-09-10T16:10:56Z
https://github.com/langchain-ai/langchain/issues/5375
1,729,668,382
5,375
[ "hwchase17", "langchain" ]
### Feature request Creation of a chain or wrapper that uses two LLMs with different models to force correct formatting: - do the "difficult" work with a model like GPT-4: output format is free text. - have a second LLM (davinci-003 or a specialized model for this purpose) that is good at following formatting instructions to convert the output of the first model into (say) JSON. ### Motivation When a model is set up to be creative, or the use case demands that the output is consistently formatted correctly, we are often out of luck, and the output breaks an app. This proposed use of two models will most likely increase the accuracy of the output a lot. ### Your contribution If maintainers think this is a good idea, please add your input, and I will be happy to provide a PR.
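A rough sketch of the proposed two-model pipeline using existing LangChain pieces (the model names and prompts here are illustrative choices, not a fixed design):
```python
from langchain import PromptTemplate, LLMChain
from langchain.chains import SimpleSequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Stage 1: a "creative" model answers in free text
creative_chain = LLMChain(
    llm=ChatOpenAI(model_name="gpt-4", temperature=0.9),
    prompt=PromptTemplate.from_template("Answer in free text:\n{question}"),
)

# Stage 2: a deterministic model only reformats the free text into strict JSON
formatter_chain = LLMChain(
    llm=OpenAI(model_name="text-davinci-003", temperature=0),
    prompt=PromptTemplate.from_template(
        "Convert the following answer into JSON with keys 'answer' and 'key_points'. "
        "Return only valid JSON.\n\nAnswer:\n{free_text}"
    ),
)

formatter = SimpleSequentialChain(chains=[creative_chain, formatter_chain])
result = formatter.run("Why is the sky blue?")
```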
Formatter chain
https://api.github.com/repos/langchain-ai/langchain/issues/5374/comments
4
2023-05-28T19:35:23Z
2023-11-05T16:07:09Z
https://github.com/langchain-ai/langchain/issues/5374
1,729,634,384
5,374
[ "hwchase17", "langchain" ]
I cannot figure out why this is happening: File "c:\Users\Yaseen\Documents\AVA\main.py", line 2, in <module> import langchain File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\__init__.py", line 6, in <module> from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\__init__.py", line 2, in <module> from langchain.agents.agent import ( File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py", line 16, in <module> from langchain.agents.tools import InvalidTool File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\tools.py", line 8, in <module> from langchain.tools.base import BaseTool, Tool, tool File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\__init__.py", line 45, in <module> from langchain.tools.powerbi.tool import ( File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\powerbi\tool.py", line 10, in <module> from langchain.chains.llm import LLMChain File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\__init__.py", line 18, in <module> from langchain.chains.llm_math.base import LLMMathChain File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm_math\base.py", line 9, in <module> import numexpr File "C:\Users\Yaseen\AppData\Local\Programs\Python\Python311\Lib\site-packages\numexpr\__init__.py", line 24, in <module> from numexpr.interpreter import MAX_THREADS, use_vml, __BLOCK_SIZE1__ ImportError: DLL load failed while importing interpreter: The specified module could not be found.
ImportError: DLL load failed while importing interpreter: The specified module could not be found.
https://api.github.com/repos/langchain-ai/langchain/issues/5366/comments
8
2023-05-28T16:24:50Z
2024-01-30T00:43:34Z
https://github.com/langchain-ai/langchain/issues/5366
1,729,552,762
5,366
[ "hwchase17", "langchain" ]
# This code: `from langchain.experimental import AutoGPT from langchain import HuggingFaceHub repo_id = "google/flan-t5-xl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options agent = AutoGPT.from_llm_and_tools( ai_name="Tom", ai_role="Assistant", tools=tools, llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64}), memory=vectorstore.as_retriever() ) agent.chain.verbose = True` agent.run(["write a weather report for SF today"]) # outputs the error: `AssertionError Traceback (most recent call last) Cell In[21], line 1 ----> 1 agent.run(["write a weather report for SF today"]) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\experimental\autonomous_agents\autogpt\agent.py:91, in AutoGPT.run(self, goals) 88 loop_count += 1 90 # Send message to AI, get response ---> 91 assistant_reply = self.chain.run( 92 goals=goals, 93 messages=self.full_message_history, 94 memory=self.memory, 95 user_input=user_input, 96 ) 98 # Print Assistant thoughts 99 print(assistant_reply) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:239, in Chain.run(self, callbacks, *args, **kwargs) 236 return self(args[0], callbacks=callbacks)[self.output_keys[0]] 238 if kwargs and not args: --> 239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]] 241 if not kwargs and not args: 242 raise ValueError( 243 "`run` supported with either positional arguments or keyword arguments," 244 " but none were provided." 245 ) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) --> 140 raise e 141 run_manager.on_chain_end(outputs) 142 return self.prep_outputs(inputs, outputs, return_only_outputs) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks) 128 run_manager = callback_manager.on_chain_start( 129 {"name": self.__class__.__name__}, 130 inputs, 131 ) 132 try: 133 outputs = ( --> 134 self._call(inputs, run_manager=run_manager) 135 if new_arg_supported 136 else self._call(inputs) 137 ) 138 except (KeyboardInterrupt, Exception) as e: 139 run_manager.on_chain_error(e) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:69, in LLMChain._call(self, inputs, run_manager) 64 def _call( 65 self, 66 inputs: Dict[str, Any], 67 run_manager: Optional[CallbackManagerForChainRun] = None, 68 ) -> Dict[str, str]: ---> 69 response = self.generate([inputs], run_manager=run_manager) 70 return self.create_outputs(response)[0] File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:78, in LLMChain.generate(self, input_list, run_manager) 72 def generate( 73 self, 74 input_list: List[Dict[str, Any]], 75 run_manager: Optional[CallbackManagerForChainRun] = None, 76 ) -> LLMResult: 77 """Generate LLM result from inputs.""" ---> 78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager) 79 return self.llm.generate_prompt( 80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None 81 ) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\chains\llm.py:106, in LLMChain.prep_prompts(self, input_list, run_manager) 104 for inputs in input_list: 105 selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} --> 106 prompt = self.prompt.format_prompt(**selected_inputs) 107 _colored_text = 
get_colored_text(prompt.to_string(), "green") 108 _text = "Prompt after formatting:\n" + _colored_text File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\prompts\chat.py:144, in BaseChatPromptTemplate.format_prompt(self, **kwargs) 143 def format_prompt(self, **kwargs: Any) -> PromptValue: --> 144 messages = self.format_messages(**kwargs) 145 return ChatPromptValue(messages=messages) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\experimental\autonomous_agents\autogpt\prompt.py:51, in AutoGPTPrompt.format_messages(self, **kwargs) 49 memory: VectorStoreRetriever = kwargs["memory"] 50 previous_messages = kwargs["messages"] ---> 51 relevant_docs = memory.get_relevant_documents(str(previous_messages[-10:])) 52 relevant_memory = [d.page_content for d in relevant_docs] 53 relevant_memory_tokens = sum( 54 [self.token_counter(doc) for doc in relevant_memory] 55 ) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\base.py:377, in VectorStoreRetriever.get_relevant_documents(self, query) 375 def get_relevant_documents(self, query: str) -> List[Document]: 376 if self.search_type == "similarity": --> 377 docs = self.vectorstore.similarity_search(query, **self.search_kwargs) 378 elif self.search_type == "similarity_score_threshold": 379 docs_and_similarities = ( 380 self.vectorstore.similarity_search_with_relevance_scores( 381 query, **self.search_kwargs 382 ) 383 ) File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:255, in FAISS.similarity_search(self, query, k, **kwargs) 243 def similarity_search( 244 self, query: str, k: int = 4, **kwargs: Any 245 ) -> List[Document]: 246 """Return docs most similar to query. 247 248 Args: (...) 253 List of Documents most similar to the query. 254 """ --> 255 docs_and_scores = self.similarity_search_with_score(query, k) 256 return [doc for doc, _ in docs_and_scores] File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:225, in FAISS.similarity_search_with_score(self, query, k) 215 """Return docs most similar to query. 216 217 Args: (...) 222 List of Documents most similar to the query and score for each 223 """ 224 embedding = self.embedding_function(query) --> 225 docs = self.similarity_search_with_score_by_vector(embedding, k) 226 return docs File ~\anaconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\faiss.py:199, in FAISS.similarity_search_with_score_by_vector(self, embedding, k) 197 if self._normalize_L2: 198 faiss.normalize_L2(vector) --> 199 scores, indices = self.index.search(vector, k) 200 docs = [] 201 for j, i in enumerate(indices[0]): File ~\anaconda3\envs\langchain\Lib\site-packages\faiss\class_wrappers.py:329, in handle_Index.<locals>.replacement_search(self, x, k, params, D, I) 327 n, d = x.shape 328 x = np.ascontiguousarray(x, dtype='float32') --> 329 assert d == self.d 331 assert k > 0 333 if D is None: AssertionError: ` # How can I resolve this behaviour?
AssertionError when using AutoGPT with Huggingface
https://api.github.com/repos/langchain-ai/langchain/issues/5365/comments
4
2023-05-28T16:16:57Z
2023-09-10T03:09:29Z
https://github.com/langchain-ai/langchain/issues/5365
1,729,549,900
5,365
[ "hwchase17", "langchain" ]
### Issue with current documentation: The example in https://python.langchain.com/en/latest/reference/modules/embeddings.html
```
from langchain.embeddings import HuggingFaceEmbeddings

model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
```
does not work. I get an error: > pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFaceEmbeddings model_kwargs extra fields not permitted (type=value_error.extra) ### Idea or request for content: fixed example
DOC: the example on setting model_kwrgs in HuggingFaceEmbeddings does not work
https://api.github.com/repos/langchain-ai/langchain/issues/5363/comments
5
2023-05-28T09:35:29Z
2023-09-18T16:10:30Z
https://github.com/langchain-ai/langchain/issues/5363
1,729,341,841
5,363
[ "hwchase17", "langchain" ]
### System Info Name: langchain, Version: 0.0.180 Name: openai, Version: 0.27.7 macOS Mojave 10.14.6 ### Who can help? @vowelparrot ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps: 1. Do _not_ load the OpenAI key into the environment, with the intention of passing it as a parameter when instantiating the LLM:
```
from dotenv import dotenv_values
openai_api_key = dotenv_values('.env')['OPENAI_API_KEY']
```
2. Load the planner:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
```
### Expected behavior A validation error should not be raised during the importing of the module. We should be able to pass the OpenAI API key as an argument. That is, the following should work:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
llm = OpenAI(model_name="gpt-4", temperature=0.0, openai_api_key=openai_api_key)
```
Validation Error importing OpenAPI planner when OpenAI credentials not in environment
https://api.github.com/repos/langchain-ai/langchain/issues/5361/comments
1
2023-05-28T08:18:12Z
2023-05-29T13:22:37Z
https://github.com/langchain-ai/langchain/issues/5361
1,729,290,674
5,361
[ "hwchase17", "langchain" ]
### System Info Pycharm in macOS ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction # change in the template string will cause the SSL connection or the server response to hang,code here: `question = "Who won the FIFA World Cup in the year 1994? " template = """Question: {question} Answer: Let's think step by step.""" def execute_llm(repo_id:str,template:str,question:str): prompt = PromptTemplate(template=template, input_variables=["question"]) llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64}) llm_chain = LLMChain(prompt=prompt, llm=llm) print(llm_chain.run(question))` execute_llm(repo_id="google/flan-t5-xl",template=template,question=question) Upon executing llm_chain.run(question), it accesses the HuggingFaceHub service. This results in the code getting stuck at the following line in ssl.py, which is part of the SSLSocket class: self._sslobj.read(len, buffer). This method of the SSLSocket class holds a _sslobj object, which is also of SSLSocket class. The SSLSocket class defines a method read(self, len=1024, buffer=None) which internally calls _sslobj's read(self, len=1024, buffer=None) method. The problem is that any modification to the template (whether it's adding spaces, newline characters, or changing the string) will cause self._sslobj.read(len, buffer) to block until it times out. Since it's completely impossible to step into it for tracking and debugging, I'm unable to determine the specific reason. ### Expected behavior Could you explain what might be causing this issue? It seems to be related to the SSL network connection, not the content of the template string. However, it's puzzling that the change in the template string is causing the SSL connection or the server response to hang. This behavior is not entirely reasonable and may be a bug or a problem caused by other factors.
change in the template string is causing the SSL connection or the server response to hang
https://api.github.com/repos/langchain-ai/langchain/issues/5360/comments
1
2023-05-28T02:31:02Z
2023-09-10T16:11:01Z
https://github.com/langchain-ai/langchain/issues/5360
1,729,102,111
5,360
[ "hwchase17", "langchain" ]
### Issue with current documentation: When running multi_modal_output_agent.ipynb: https://python.langchain.com/en/latest/use_cases/agents/multi_modal_output_agent.html, I get ConfigError: field "steamship" not yet prepared so type is still a ForwardRef, you might need to call SteamshipImageGenerationTool.update_forward_refs(). tools = [ SteamshipImageGenerationTool(model_name= "dall-e") ] --------------------------------------------------------------------------- ConfigError Traceback (most recent call last) Cell In[7], line 2 1 tools = [ ----> 2 SteamshipImageGenerationTool(model_name= "dall-e") 3 ] File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__() File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model() File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate() ConfigError: field "steamship" not yet prepared so type is still a ForwardRef, you might need to call SteamshipImageGenerationTool.update_forward_refs(). After I call SteamshipImageGenerationTool.update_forward_refs(), I get another error. SteamshipImageGenerationTool.update_forward_refs() --------------------------------------------------------------------------- NameError Traceback (most recent call last) Cell In[10], line 1 ----> 1 SteamshipImageGenerationTool.update_forward_refs() File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/main.py:815, in pydantic.main.BaseModel.update_forward_refs() File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/typing.py:562, in pydantic.typing.update_model_forward_refs() File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/typing.py:528, in pydantic.typing.update_field_forward_refs() File ~/miniconda3/envs/langchain/lib/python3.11/site-packages/pydantic/typing.py:66, in pydantic.typing.evaluate_forwardref() File ~/miniconda3/envs/langchain/lib/python3.11/typing.py:864, in ForwardRef._evaluate(self, globalns, localns, recursive_guard) 859 if self.__forward_module__ is not None: 860 globalns = getattr( 861 sys.modules.get(self.__forward_module__, None), '__dict__', globalns 862 ) 863 type_ = _type_check( --> 864 eval(self.__forward_code__, globalns, localns), 865 "Forward references must evaluate to types.", 866 is_argument=self.__forward_is_argument__, 867 allow_special_forms=self.__forward_is_class__, 868 ) 869 self.__forward_value__ = _eval_type( 870 type_, globalns, localns, recursive_guard | {self.__forward_arg__} 871 ) 872 self.__forward_evaluated__ = True File <string>:1 NameError: name 'Steamship' is not defined ### Idea or request for content: _No response_
DOC: SteamshipImageGenerationTool returns Config Error in multi_modal_output_agent.ipynb
https://api.github.com/repos/langchain-ai/langchain/issues/5358/comments
7
2023-05-27T22:42:18Z
2024-02-15T05:48:15Z
https://github.com/langchain-ai/langchain/issues/5358
1,728,991,454
5,358
[ "hwchase17", "langchain" ]
### Chat agent reliability fix: put format instructions and other important information in a human message There have been a few raised issues specifically around agent reliability when using chat models. @emilsedgh brought up in the JS Discord that OpenAI's 3.5 turbo model is documented as "not pay[ing] strong attention to the system message, and therefore important instructions are often better placed in a user message.": https://platform.openai.com/docs/guides/chat/instructing-chat-models The `ConversationalChatAgent` is implemented this way: https://github.com/hwchase17/langchain/blob/master/langchain/agents/conversational_chat/base.py#L90 But the base `ChatAgent` and the `StructuredChatAgent` are not: https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/base.py#L81 https://github.com/hwchase17/langchain/blob/master/langchain/agents/structured_chat/base.py#L88 Need to do a little bit more experimenting, but moving things into the human message may help with reliability issues. ### Suggestion: _No response_
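As a rough sketch of the suggested change (illustrative only, with placeholder prefix and format instructions): keep only high-level behaviour in the system message and move the format instructions into the human turn:
```python
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

prefix = "You are an assistant that can use tools to answer questions."
format_instructions = "Respond with a single JSON blob containing 'action' and 'action_input'."

# Format instructions live in the user turn instead of the system message,
# following OpenAI's guidance that gpt-3.5-turbo pays less attention to system messages.
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(prefix),
    HumanMessagePromptTemplate.from_template(
        "{input}\n\n" + format_instructions + "\n\n{agent_scratchpad}"
    ),
])
```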
Issue: Chat agents should put format instructions and other important information in a human message
https://api.github.com/repos/langchain-ai/langchain/issues/5353/comments
0
2023-05-27T20:44:27Z
2023-05-27T21:21:10Z
https://github.com/langchain-ai/langchain/issues/5353
1,728,932,672
5,353
[ "hwchase17", "langchain" ]
### Feature request [Improving Factuality and Reasoning in Language Models through Multiagent Debate](https://arxiv.org/pdf/2305.14325.pdf) - this looks very promising. ### Motivation This method is orthogonal to other methods like CoT. It looks like this method is beneficial in almost any case where we need the highest-quality answer. ### Your contribution I can help with testing. Not sure about a quick implementation.
implement `Multiagent Debate`
https://api.github.com/repos/langchain-ai/langchain/issues/5348/comments
5
2023-05-27T19:13:41Z
2023-12-11T17:05:44Z
https://github.com/langchain-ai/langchain/issues/5348
1,728,907,870
5,348
[ "hwchase17", "langchain" ]
### System Info Running latest versions of langchain, openai, openlm, python 3.10, mac M1, trying this example I saw on Twitter https://python.langchain.com/en/latest/modules/models/llms/integrations/openlm.html?highlight=openlm ``` from langchain.llms import OpenLM from langchain.llms import OpenAI from langchain.llms import openai from langchain import PromptTemplate, LLMChain openai.api_key = os.getenv("OPENAI_API_KEY") question = "What is the capital of France?" template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm1 = OpenAI() llm2 = OpenLM(model="text-davinci-003") llm_chain1 = LLMChain(prompt=prompt, llm=llm1) llm_chain2 = LLMChain(prompt=prompt, llm=llm2) result1 = llm_chain1.run(question) result2 = llm_chain2.run(question) ``` result 1 runs, result 2 gives ValueError: OPENAI_API_KEY is not set or passed as an argument ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.llms import OpenLM from langchain.llms import OpenAI from langchain.llms import openai from langchain import PromptTemplate, LLMChain openai.api_key = os.getenv("OPENAI_API_KEY") question = "What is the capital of France?" template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm1 = OpenAI() llm2 = OpenLM(model="text-davinci-003") llm_chain1 = LLMChain(prompt=prompt, llm=llm1) llm_chain2 = LLMChain(prompt=prompt, llm=llm2) result1 = llm_chain1.run(question) result2 = llm_chain2.run(question) ``` ### Expected behavior Run without error
Using OpenLM example giving error: "ValueError: OPENAI_API_KEY is not set or passed as an argument"
https://api.github.com/repos/langchain-ai/langchain/issues/5347/comments
2
2023-05-27T18:13:32Z
2023-09-12T16:12:17Z
https://github.com/langchain-ai/langchain/issues/5347
1,728,877,787
5,347
[ "hwchase17", "langchain" ]
### Feature request Extend `langchain/embeddings/elasticsearch.py` to support kNN indexing and searching. The high-level objectives will be: 1. Allow for the [creation of an index with the correct mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html#index-vectors-knn-search) to store documents including dense_vectors so they can be used for kNN search 2. Store embeddings in elasticsearch in the [dense_vector](https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html) field type 3. Perform [kNN search](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html) 4. Perform [Hybrid](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html#_combine_approximate_knn_with_other_features) BM25 (query) + kNN search ### Motivation Elasticsearch supports approximate k-nearest neighbor search with dense vectors. The current module only supports script score / exact-match vector search. ### Your contribution I will work on the code and create the pull request
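For reference, a minimal sketch of what the underlying Elasticsearch calls look like with the official `elasticsearch` Python client (the index name, field names, dimensions and `query_embedding` are illustrative placeholders):
```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Objectives 1-2: an index mapping with a dense_vector field indexed for approximate kNN
es.indices.create(
    index="langchain-demo",
    mappings={
        "properties": {
            "text": {"type": "text"},
            "vector": {"type": "dense_vector", "dims": 1536, "index": True, "similarity": "cosine"},
        }
    },
)

# Objective 3: approximate kNN search against the stored vectors
resp = es.search(
    index="langchain-demo",
    knn={"field": "vector", "query_vector": query_embedding, "k": 4, "num_candidates": 50},
)

# Objective 4: hybrid search, combining a BM25 query with the kNN clause in one request
resp = es.search(
    index="langchain-demo",
    query={"match": {"text": "what is a sandwich?"}},
    knn={"field": "vector", "query_vector": query_embedding, "k": 4, "num_candidates": 50},
)
```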
Extend elastic_vector_search.py to allow for kNN indexing/searching
https://api.github.com/repos/langchain-ai/langchain/issues/5346/comments
4
2023-05-27T17:12:57Z
2023-06-02T19:49:19Z
https://github.com/langchain-ai/langchain/issues/5346
1,728,842,882
5,346
[ "hwchase17", "langchain" ]
--------------------------------------------------------------------------- InvalidRequestError Traceback (most recent call last) [<ipython-input-33-f877209e86e7>](https://localhost:8080/#) in <cell line: 1>() ----> 1 flare.run(query) 20 frames [/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py](https://localhost:8080/#) in __prepare_create_request(cls, api_key, api_base, api_type, api_version, organization, **params) 81 if typed_api_type in (util.ApiType.AZURE, util.ApiType.AZURE_AD): 82 if deployment_id is None and engine is None: ---> 83 raise error.InvalidRequestError( 84 "Must provide an 'engine' or 'deployment_id' parameter to create a %s" 85 % cls, InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
FLARE | Azure open Ai doesn't work
https://api.github.com/repos/langchain-ai/langchain/issues/5345/comments
3
2023-05-27T16:08:32Z
2023-09-15T16:10:19Z
https://github.com/langchain-ai/langchain/issues/5345
1,728,810,122
5,345
[ "hwchase17", "langchain" ]
### System Info I followed the official documentation (https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html), but I can't get any information through loader.load() - the output is just []. The language I use is Python and the version of langchain is 0.0.181. I tried reinstalling the dependencies, but it didn't help; hopefully this can be resolved soon. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction
```python
from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
```
### Expected behavior `data` should not be empty.
UnstructuredURLLoader can't load data from url
https://api.github.com/repos/langchain-ai/langchain/issues/5342/comments
24
2023-05-27T13:55:09Z
2024-04-06T08:47:27Z
https://github.com/langchain-ai/langchain/issues/5342
1,728,738,044
5,342
[ "hwchase17", "langchain" ]
### Feature request I've been playing around with OpenAI GPT-4 and ran into a situation where response generation can take quite some time - say 5 minutes. I switched over to streaming, but often I can immediately see the response is not what I want, and therefore I'd like to cancel the request. Now here is the part that is unclear to me: **is there an official way to cancel a request in the Python version of LangChain?** I have found this [described](https://js.langchain.com/docs/modules/models/chat/additional_functionality#cancelling-requests) in the JS/TS version of the framework, however scanning the docs, sources and issues yields nothing for this repo. For now I simply terminate the process, which works well enough for something like Jupyter notebooks, but is quite problematic for, say, a web application. Besides termination, it's also unclear whether I may incur unwanted costs for the abandoned request. Should some sort of feature parity be made with JS LangChain? ### Motivation Provide a documented way to cancel long-running requests. ### Your contribution At this point I only have capacity to test out a potential implementation. I may work on the implementation in later weeks.
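As an illustrative workaround sketch rather than an official API (and with the caveat that cancelling the local await does not necessarily abort the provider-side request, which is relevant to the cost question above), the async interface can at least be abandoned with standard asyncio cancellation:
```python
import asyncio
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

async def generate_with_cancel(prompt: str, timeout_s: float = 30.0):
    task = asyncio.create_task(llm.agenerate([prompt]))
    try:
        return await asyncio.wait_for(task, timeout=timeout_s)
    except asyncio.TimeoutError:
        task.cancel()  # stops waiting locally; the HTTP request may still complete server-side
        return None

# asyncio.run(generate_with_cancel("Write a long essay about the sea."))
```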
Implement a way to abort / cancel request
https://api.github.com/repos/langchain-ai/langchain/issues/5340/comments
23
2023-05-27T12:07:10Z
2024-08-01T16:05:23Z
https://github.com/langchain-ai/langchain/issues/5340
1,728,681,110
5,340
[ "hwchase17", "langchain" ]
### Feature request MongoDB Atlas is a fully managed DBaaS, powered by the MongoDB database. It also enables Lucene (co-located with the mongod process) for full-text search - this is known as Atlas Search. The PR should allow LangChain users to use the functionality related to the MongoDB Atlas Vector Search feature, where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search. ### Motivation There is currently no way in LangChain to connect to MongoDB Atlas and perform a KNN search. ### Your contribution I am submitting a PR for this issue soon.
Add MongoDBAtlasVectorSearch vectorstore
https://api.github.com/repos/langchain-ai/langchain/issues/5337/comments
0
2023-05-27T11:41:39Z
2023-05-30T14:59:03Z
https://github.com/langchain-ai/langchain/issues/5337
1,728,669,494
5,337
[ "hwchase17", "langchain" ]
### System Info MacOS Langchain Version 0.0.181 Python Version 3.11.3 ### Who can help? @eyurtsev I wasn't sure who to reach out to. The following is the signature for adding embeddings to FAISS: ```python FAISS.add_embeddings( self, text_embeddings: 'Iterable[Tuple[str, List[float]]]', metadatas: 'Optional[List[dict]]' = None, **kwargs: 'Any', ) -> 'List[str]' ``` Notice that `text_embeddings` takes an iterable. However, when I do this I get a failure with my iterable, but when wrapped in a `list` function then it is successful. ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.vectorstores import FAISS from langchain.embeddings import OpenAIEmbeddings vs = FAISS.from_texts(['a'], embedding=OpenAIEmbeddings()) vector = OpenAIEmbeddings().embed_query('b') # error happens with this next line, see "Expected behavior" below. vs.add_embeddings(iter([('b', vector)])) # no error happens when wrapped in a list vs.add_embeddings(list(iter([('b', vector)]))) ``` ### Expected behavior ```bash --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ... File ~/.pyenv/versions/3.11.3/envs/myenv/lib/python3.11/site-packages/faiss/class_wrappers.py:227, in handle_Index.<locals>.replacement_add(self, x) 214 def replacement_add(self, x): 215 """Adds vectors to the index. 216 The index must be trained before vectors can be added to it. 217 The vectors are implicitly numbered in sequence. When `n` vectors are (...) 224 `dtype` must be float32. 225 """ --> 227 n, d = x.shape 228 assert d == self.d 229 x = np.ascontiguousarray(x, dtype='float32') ValueError: not enough values to unpack (expected 2, got 1) ```
FAISS.add_embeddings is typed to take iterables but does not.
https://api.github.com/repos/langchain-ai/langchain/issues/5336/comments
3
2023-05-27T11:29:03Z
2023-12-07T16:08:35Z
https://github.com/langchain-ai/langchain/issues/5336
1,728,661,570
5,336
[ "hwchase17", "langchain" ]
### System Info * Langchain: 0.0.181 * OS: Ubuntu Linux 20.04 * Kernel: `Linux iZt4n78zs78m7gw0tztt8lZ 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux` * Ubuntu version: ```plain LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch Distributor ID: Ubuntu Description: Ubuntu 20.04.1 LTS Release: 20.04 Codename: focal ``` * Python: Python 3.8.2 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Use the example code provided in [Quick Start: Agents with Chat Models](https://python.langchain.com/en/latest/getting_started/getting_started.html#agents-with-chat-models), but replace the 'serpapi' tool with 'google-serper' tool . Here's the modified code: ```python from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI chat = ChatOpenAI(temperature=0.3) llm = OpenAI(temperature=0) tools = load_tools(["google-serper", "llm-math"], llm=llm) agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) result = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?") print(result) ``` When I execute the code above. Error occurred. Here's the error text: ~~~plain (openai-test) dsdashun@iZt4n78zs78m7gw0tztt8lZ:~/workspaces/openai-test/langchain$ python3 get_started_chat_agent.py > Entering new AgentExecutor chain... Traceback (most recent call last): File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/chat/output_parser.py", line 22, in parse response = json.loads(action.strip()) File "/usr/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "/usr/lib/python3.8/json/decoder.py", line 340, in decode raise JSONDecodeError("Extra data", s, end) json.decoder.JSONDecodeError: Extra data: line 4 column 2 (char 75) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "get_started_chat_agent.py", line 14, in <module> result = agent.run("Who is Olivia Wilde's boyfriend? 
What is his current age raised to the 0.23 power?") File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 213, in run return self(args[0])[self.output_keys[0]] File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__ raise e File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__ outputs = self._call(inputs) File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 792, in _call next_step_output = self._take_next_step( File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 672, in _take_next_step output = self.agent.plan(intermediate_steps, **inputs) File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 385, in plan return self.output_parser.parse(full_output) File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/chat/output_parser.py", line 26, in parse raise OutputParserException(f"Could not parse LLM output: {text}") langchain.schema.OutputParserException: Could not parse LLM output: Question: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power? Thought: I should use Serper Search to find out who Olivia Wilde's boyfriend is and then use Calculator to calculate his age raised to the 0.23 power. Action: ``` { "action": "Serper Search", "action_input": "Olivia Wilde boyfriend" }, { "action": "Calculator", "action_input": "Age of Olivia Wilde's boyfriend raised to the 0.23 power" } ``` ~~~ However, if I use the `pdb` debugger to debug the program step by step, and pause a little bit after running `initialize_agent`, everything is fine. I didn't use the 'serpapi' tool, because I don't have an API key on it. So I cannot verify whether the original example code can be executed successfully on my machine using the 'serpapi' tool ### Expected behavior I expect the code can run successfully without any problems, even if I replace the search tool with a similar one.
`Agents with Chat Models` Example Code Fails When Using the `google-serper` Tool
https://api.github.com/repos/langchain-ai/langchain/issues/5335/comments
3
2023-05-27T09:47:29Z
2023-09-15T16:10:24Z
https://github.com/langchain-ai/langchain/issues/5335
1,728,609,263
5,335
[ "hwchase17", "langchain" ]
### System Info Langchain Version: 0.0.176 Ubuntu x86 23.04 Memory 24gb AMD EPYC ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager # Make sure the model path is correct for your system! llm_cpp = LlamaCpp(model_path="/vicuna/ggml-vic7b-q4_0.bin", callback_manager=callback_manager) llm = llm_cpp tools = load_tools(["serpapi"], llm=llm_cpp) agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True) agent.run("What is football?") ``` Result ``` Action Input: "what is football?" I should probably start by defining what football actually is. Action: Let's [Search] "what is football?" Action Input: "what is football?" Observation: Let's [Search] "what is football?" is not a valid tool, try another one. Thought: ``` ### Expected behavior Expected behavior: search google and return correct results If i change model from vicuna to openAI api, works fine
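Not a fix for the root cause (small local models often drift out of the ReAct format), but a sketch of settings that are sometimes tried before giving up on them: a deterministic temperature, a bounded number of iterations, and letting the executor feed format errors back to the model. The parameter values below are assumptions, not tested recommendations.

```python
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# temperature=0 makes the local model more likely to emit the exact
# "Action: Search" format instead of free text like `Let's [Search]`.
llm_cpp = LlamaCpp(
    model_path="/vicuna/ggml-vic7b-q4_0.bin",
    temperature=0.0,
    callback_manager=callback_manager,
)

tools = load_tools(["serpapi"], llm=llm_cpp)
agent = initialize_agent(
    tools,
    llm_cpp,
    agent="zero-shot-react-description",
    verbose=True,
    max_iterations=3,            # stop runaway loops
    handle_parsing_errors=True,  # feed malformed actions back to the model
)
agent.run("What is football?")
```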
Serp API and Google Search API won't work with LLaMA models like Vicuna
https://api.github.com/repos/langchain-ai/langchain/issues/5329/comments
2
2023-05-27T04:23:16Z
2023-06-30T07:51:03Z
https://github.com/langchain-ai/langchain/issues/5329
1,728,448,457
5,329
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. What are the key differences between these methods of trying to query/ask a database, then return the answer along with its relevant sources? My main objective is to have a Chatbot that has knowledge from a knowledge base, and can still maintain conversation history. Their answer must return me the source document as well. Which option is the best among so many choices? There are 1. [Question Answering with Sources](https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html), ``` chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) ``` 2. [Retrieval Question Answering with Sources](https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html) ``` from langchain import OpenAI chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()) chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True) ``` 3. [Question Answering over Docs](https://python.langchain.com/en/latest/use_cases/question_answering.html) ``` from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator().from_loaders([loader]) query = "What did the president say about Ketanji Brown Jackson" index.query_with_sources(query) ``` ++ probably quite a few more examples I could find if I dig through the documentation.
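For the stated goal (a chatbot over a knowledge base that keeps conversation history and returns sources), one more option worth comparing is `ConversationalRetrievalChain`, which combines retrieval, chat history, and source documents in a single chain. A minimal sketch, assuming an existing `docsearch` vector store (the variable name is illustrative):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=docsearch.as_retriever(),
    return_source_documents=True,  # sources come back alongside the answer
)

chat_history = []  # list of (question, answer) tuples managed by the caller
result = chain(
    {"question": "What did the president say about Justice Breyer?", "chat_history": chat_history}
)
print(result["answer"])
print(result["source_documents"])
```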
Difference among various ways to query database and return source information? (Question Answering with Sources, Retrieval Question Answering with Sources, index.query_with_sources, etc.)
https://api.github.com/repos/langchain-ai/langchain/issues/5328/comments
4
2023-05-27T03:28:23Z
2023-09-18T16:10:35Z
https://github.com/langchain-ai/langchain/issues/5328
1,728,430,768
5,328
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. LC Release announcements seem to be missing from Discord's Announcements channel since 0.0.166. Looking more closely, these seem to be manual, added by hwchase17. On Twitter, the most recent release announcement from the LangChainAI account is 0.0.170, viz: https://twitter.com/search?q=(from%3ALangChainAI)%20release&src=typed_query&f=live ### Suggestion: I couldn't tell from this project's various actions whether such postings are meant to be automated upon release (Github search on actions isn't great), and just need to be fixed. If not, I think it would be would be very useful for the community to add such release notification actions, so that the various places people keep up to date are all, well, up to date.
Issue: Fix or automate sync of releases to Discord Announcements channel, Twitter, etc.
https://api.github.com/repos/langchain-ai/langchain/issues/5324/comments
2
2023-05-27T01:24:20Z
2023-09-10T16:11:21Z
https://github.com/langchain-ai/langchain/issues/5324
1,728,392,677
5,324
[ "hwchase17", "langchain" ]
### System Info OS = MACOS langchain=0.0.179 (also tried 0.0.174 and 0.0.178) ### Who can help? @hwchase17 @agola11 The full code below is single file. imports and other information not added to keep it crisp. The following works with no issues: ``` llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", engine="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", deployment_id="text-davinci-003", openai_api_key=openai_api_key) resp = llm("Tell me pub joke") print(resp) ``` The following does not work. ``` #get document store store = getfromstore(collection_name="sou_coll") # Create vectorstore info object - metadata repo? vectorstore_info = VectorStoreInfo( name="sou", description="sou folder", vectorstore=store ) # Convert the document store into a langchain toolkit toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info) # Add the toolkit to an end-to-end LC agent_executor = create_vectorstore_agent( llm=llm, toolkit=toolkit, verbose=True ) response = agent_executor.run(prompt) print(response) ``` I can confirm the document store exists and the same code with appropriate OpenAI (not Azure OpenAI) works as expected with no issue. Azure OpenAI gives the following error - ``` File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request raise error.InvalidRequestError( openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'> ``` Observation LLM is correct since the first part (write a joke) works. The agent does not. Please help! ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction #!/usr/bin/env python3 import sys from dotenv import load_dotenv # Load default environment variables (.env) load_dotenv() # Import os to set API key import os # Import OpenAI as main LLM service from langchain.llms import AzureOpenAI from langchain.callbacks import get_openai_callback # Bring in streamlit for UI/app interface import streamlit as st # Import PDF document loaders...there's other ones as well! 
from langchain.document_loaders import PyPDFLoader # Import chroma as the vector store from langchain.vectorstores import Chroma from common.funs import getfromstore # Import vector store stuff from langchain.agents.agent_toolkits import ( create_vectorstore_agent, VectorStoreToolkit, VectorStoreInfo ) # Set this to `azure` openai_api_type = os.environ["OPENAI_API_TYPE"] ="azure" openai_api_version = os.environ["OPENAI_API_VERSION"] = os.environ["AOAI_OPENAI_API_VERSION"] openai_api_base = os.environ["OPENAI_API_BASE"] = os.environ["AOAI_OPENAI_API_BASE"] openai_api_key = os.environ["OPENAI_API_KEY"] = os.environ["AOAI_OPENAI_API_KEY"] # Create instance of OpenAI LLM #llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", openai_api_key=openai_api_key) llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", engine="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", deployment_id="text-davinci-003", openai_api_key=openai_api_key) resp = llm("Tell me pub joke") print(resp) print("------------") st.write(resp) st.write("----------------------") #get document store store = getfromstore(collection_name="sou_coll") #print(store1.get(["metadatas"])) # Create vectorstore info object - metadata repo? vectorstore_info = VectorStoreInfo( name="sou", description="sou folder", vectorstore=store ) # Convert the document store into a langchain toolkit toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info) # Add the toolkit to an end-to-end LC agent_executor = create_vectorstore_agent( llm=llm, toolkit=toolkit, verbose=True ) st.title("🦜🔗🤗 What would you like to know?") st.write("This sample uses Azure OpenAI") # Create a text input box for the user prompt = st.text_input('Input your prompt here:') # If the user hits enter if prompt: with get_openai_callback() as cb: #try: # Then pass the prompt to the LLM response = agent_executor.run(prompt) # ...and write it out to the screen st.write(response) st.write(cb) #except Exception as e: # st.warning # st.write("That was a difficult question! I choked on it!! Can you please try again with rephrasing it a bit?") # st.write(cb) # print(e) # Find the relevant pages search = store.similarity_search_with_score(prompt) # Write out the first try: st.write("This information was found in:") for doc in search: score = doc[1] try: page_num = doc[0].metadata['page'] except: page_num = "txt snippets" source = doc[0].metadata['source'] # With a streamlit expander with st.expander("Source: " + str(source) + " - Page: " + str(page_num) + "; Similarity Score: " + str(score) ): st.write(doc[0].page_content) except: print("unable to get source document detail") ### Expected behavior The video shows the expected output - https://www.youtube.com/watch?v=q27RbxcfGvE The OpenAI code in this sample is exact except for changes to LLM and env variables - file https://github.com/ushakrishnan/SearchWithOpenAI/blob/main/pages/6_Q%26A_with_Open_AI.py.
Issues with Azure OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/5322/comments
2
2023-05-26T23:47:15Z
2023-05-31T23:40:55Z
https://github.com/langchain-ai/langchain/issues/5322
1,728,349,391
5,322
[ "hwchase17", "langchain" ]
### System Info langchain='0.0.161' python='3.9.13' IPython= '7.31.1' ipykernel='6.15.2' ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` retriever1 = Pinecone.from_documents(texts, embeddings,index_name='taxation').as_retriever() retriever2 = Pinecone.from_documents(texts, embeddings,index_name='taxation').as_retriever() retriever_infos = [ { "name": "sindh", "description": "Good for answering questions about Sindh", "retriever": retriever1 }, { "name": "punjab", "description": "Good for answering questions about Punjab", "retriever": retriever2 }] chain = MultiRetrievalQAChain.from_retrievers(ChatOpenAI(model_name='gpt-3.5-turbo',temperature=0), retriever_infos,verbose=True) chain.save('chain.json') ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_20160\230129054.py in <module> ----> 1 chain.save('chain.json') ~\anaconda3\lib\site-packages\langchain\chains\base.py in save(self, file_path) 294 295 # Fetch dictionary to save --> 296 chain_dict = self.dict() 297 298 if save_path.suffix == ".json": ~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs) 269 if self.memory is not None: 270 raise ValueError("Saving of memory is not yet supported.") --> 271 _dict = super().dict() 272 _dict["_type"] = self._chain_type 273 return _dict ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value() ~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs) 269 if self.memory is not None: 270 raise ValueError("Saving of memory is not yet supported.") --> 271 _dict = super().dict() 272 _dict["_type"] = self._chain_type 273 return _dict ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value() ~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs) 269 if self.memory is not None: 270 raise ValueError("Saving of memory is not yet supported.") --> 271 _dict = super().dict() 272 _dict["_type"] = self._chain_type 273 return _dict ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value() ~\anaconda3\lib\site-packages\langchain\prompts\base.py in dict(self, **kwargs) 186 def dict(self, **kwargs: Any) -> Dict: 187 """Return dictionary representation of prompt.""" --> 188 prompt_dict = super().dict(**kwargs) 189 prompt_dict["_type"] = self._prompt_type 190 return prompt_dict ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict() 
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter() ~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value() ~\anaconda3\lib\site-packages\langchain\schema.py in dict(self, **kwargs) 354 """Return dictionary representation of output parser.""" 355 output_parser_dict = super().dict() --> 356 output_parser_dict["_type"] = self._type 357 return output_parser_dict 358 ~\anaconda3\lib\site-packages\langchain\schema.py in _type(self) 349 def _type(self) -> str: 350 """Return the type key.""" --> 351 raise NotImplementedError 352 353 def dict(self, **kwargs: Any) -> Dict: NotImplementedError: ### Expected behavior I expected to save the chain on disk for future use.
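Until serialization is implemented for the router's output parser, a workaround (sketch only — nothing below is an official persistence API for this chain) is to persist just the lightweight retriever metadata and rebuild the chain at startup, re-attaching live retriever objects:

```python
import json

from langchain.chains.router import MultiRetrievalQAChain
from langchain.chat_models import ChatOpenAI

# Persist only what is cheap to store: names and descriptions.
retriever_config = [
    {"name": "sindh", "description": "Good for answering questions about Sindh"},
    {"name": "punjab", "description": "Good for answering questions about Punjab"},
]
with open("retriever_config.json", "w") as f:
    json.dump(retriever_config, f)


def load_chain(retrievers_by_name):
    """Rebuild the chain from the saved config plus live retriever objects."""
    with open("retriever_config.json") as f:
        config = json.load(f)
    retriever_infos = [
        {**entry, "retriever": retrievers_by_name[entry["name"]]} for entry in config
    ]
    return MultiRetrievalQAChain.from_retrievers(
        ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
        retriever_infos,
        verbose=True,
    )
```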
MultiRetrievalQAChain
https://api.github.com/repos/langchain-ai/langchain/issues/5318/comments
1
2023-05-26T22:32:19Z
2023-09-10T16:11:27Z
https://github.com/langchain-ai/langchain/issues/5318
1,728,309,373
5,318
[ "hwchase17", "langchain" ]
### System Info langchain 0.0.181 Python 3.10 OS: Ubuntu ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [X] Async ### Reproduction ``` import asyncio from functools import lru_cache from typing import AsyncGenerator from langchain.text_splitter import RecursiveCharacterTextSplitter from fastapi import FastAPI from fastapi.responses import StreamingResponse from langchain.callbacks import AsyncIteratorCallbackHandler from langchain.chains import ConversationalRetrievalChain from langchain.chat_models import ChatOpenAI from langchain.memory import ConversationBufferMemory from pydantic import BaseModel from langchain.vectorstores import Chroma from langchain.embeddings.openai import OpenAIEmbeddings ``` ``` api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" app = FastAPI() ``` ``` with open('state_of_the_union.txt') as f: state_of_the_union = f.read() text_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 100, chunk_overlap = 20, length_function = len, ) doc_text = text_splitter.create_documents([state_of_the_union]) ``` ``` embeddings = OpenAIEmbeddings(openai_api_key=api_key) vector_db = Chroma.from_documents(doc_text, embeddings,persist_directory='db') retriever = vector_db.as_retriever() ``` ``` class ChatRequest(BaseModel): """Request model for chat requests. Includes the conversation ID and the message from the user. """ conversation_id: str message: str ``` ``` class StreamingConversationChain: """ Class for handling streaming conversation chains. It creates and stores memory for each conversation, and generates responses using the ChatOpenAI model from LangChain. """ def __init__(self, openai_api_key: str, temperature: float = 0.0): self.memories = {} self.openai_api_key = openai_api_key self.temperature = temperature async def generate_response( self, conversation_id: str, message: str ) -> AsyncGenerator[str, None]: """ Asynchronous function to generate a response for a conversation. It creates a new conversation chain for each message and uses a callback handler to stream responses as they're generated. :param conversation_id: The ID of the conversation. :param message: The message from the user. """ callback_handler = AsyncIteratorCallbackHandler() llm = ChatOpenAI( callbacks=[callback_handler], streaming=True, temperature=self.temperature, openai_api_key=self.openai_api_key, ) memory = self.memories.get(conversation_id) if memory is None: memory = ConversationBufferMemory(memory_key="chat_history",output_key='answer', return_messages=True) self.memories[conversation_id] = memory chain = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory, chain_type="stuff", # return_source_documents=True ) run = asyncio.create_task(chain(({"question": message}))) async for token in callback_handler.aiter(): yield token await run() ``` ``` streaming_conversation_chain = StreamingConversationChain( openai_api_key=api_key ) ``` ``` @app.post("/chat", response_class=StreamingResponse) async def generate_response(data: ChatRequest) -> StreamingResponse: """Endpoint for chat requests. 
It uses the StreamingConversationChain instance to generate responses, and then sends these responses as a streaming response. :param data: The request data. """ return StreamingResponse( streaming_conversation_chain.generate_response( data.conversation_id, data.message ), media_type="text/event-stream", ) ``` ``` if __name__ == "__main__": import uvicorn uvicorn.run(app) ``` Here is error traceback ERROR: Exception in ASGI application Traceback (most recent call last): File "/home/talha/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi result = await app( # type: ignore[func-returns-value] File "/home/talha/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__ return await self.app(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__ await super().__call__(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__ await self.middleware_stack(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__ raise exc File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__ raise exc File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__ await self.app(scope, receive, sender) File "/home/talha/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__ raise e File "/home/talha/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__ await self.app(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__ await route.handle(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle await self.app(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 69, in app await response(scope, receive, send) File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__ async with anyio.create_task_group() as task_group: File "/home/talha/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__ raise exceptions[0] File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap await func() File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response async for chunk in self.body_iterator: File "/media/talha/data/nlp/langchain/fastapi/error_rep.py", line 93, in generate_response run = asyncio.create_task(chain(({"question": message}))) File "/usr/lib/python3.10/asyncio/tasks.py", line 337, in create_task task = loop.create_task(coro) File "uvloop/loop.pyx", line 1435, in uvloop.loop.Loop.create_task TypeError: a coroutine was expected, got {'question': 'what is cnn', 'chat_history': [HumanMessage(content='what is cnn', additional_kwargs={}, example=False), AIMessage(content='CNN (Cable News Network) is a news-based cable television channel and website that provides 24-hour news coverage, analysis, and commentary on current events happening around the world.', additional_kwargs={}, example=False)], 
'answer': 'CNN (Cable News Network) is a news-based cable television channel and website that provides 24-hour news coverage, analysis, and commentary on current events happening around the world.'} ### Expected behavior This code worked with `ConversationChain` and produce streaming output ``` chain = ConversationChain( memory=memory, prompt=CHAT_PROMPT_TEMPLATE, llm=llm, ) run = asyncio.create_task(chain.arun(input=message)) ``` But i want to use ConversationalRetrievalChain
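Part of this is independent of `ConversationalRetrievalChain`: `chain({...})` runs the chain synchronously and returns a dict, so `asyncio.create_task` receives a dict instead of a coroutine (and `await run()` would also fail, since a task is awaited, not called). A sketch of the async form of `generate_response`, reusing the imports, `retriever`, and class from above; `acall` is the asynchronous counterpart of calling the chain:

```python
class StreamingConversationChain:
    # ... __init__ unchanged from the snippet above ...

    async def generate_response(
        self, conversation_id: str, message: str
    ) -> AsyncGenerator[str, None]:
        callback_handler = AsyncIteratorCallbackHandler()
        llm = ChatOpenAI(
            callbacks=[callback_handler],
            streaming=True,
            temperature=self.temperature,
            openai_api_key=self.openai_api_key,
        )
        memory = self.memories.setdefault(
            conversation_id,
            ConversationBufferMemory(
                memory_key="chat_history", output_key="answer", return_messages=True
            ),
        )
        chain = ConversationalRetrievalChain.from_llm(
            llm, retriever=retriever, memory=memory, chain_type="stuff"
        )

        # chain.acall(...) returns a coroutine, which is what create_task expects;
        # chain({...}) runs synchronously and returns a dict.
        run = asyncio.create_task(chain.acall({"question": message}))

        async for token in callback_handler.aiter():
            yield token

        await run  # await the task object itself, not run()
```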
TypeError: a coroutine was expected, got {'question': query, 'chat_history': {...}}
https://api.github.com/repos/langchain-ai/langchain/issues/5317/comments
3
2023-05-26T21:09:00Z
2023-09-25T10:16:10Z
https://github.com/langchain-ai/langchain/issues/5317
1,728,248,298
5,317
[ "hwchase17", "langchain" ]
### System Info google-cloud-aiplatform==1.25.0 langchain==0.0.181 python 3.10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Any list with len > 5 will cause an error. ```python from langchain.vectorstores import FAISS from langchain.embeddings import VertexAIEmbeddings text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6'] embeddings = VertexAIEmbeddings() vectorstore = FAISS.from_texts(text, embeddings) ``` ```python InvalidArgument Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py in error_remapped_callable(*args, **kwargs) 72 return callable_(*args, **kwargs) 73 except grpc.RpcError as exc: ---> 74 raise exceptions.from_grpc_error(exc) from exc 75 76 return error_remapped_callable InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6 ``` ### Expected behavior Expected to successfully be able to vectorize a larger list of items. Maybe implement a step to batch the texts so each request stays within the five-instance limit.
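As a stopgap until the wrapper batches requests itself, the texts can be embedded in groups of five on the caller's side and the vector store built from the precomputed vectors. A sketch (the batch size mirrors the quota in the error message; it assumes `FAISS.from_embeddings` is available in the installed version):

```python
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import FAISS

texts = ["text_1", "text_2", "text_3", "text_4", "text_5", "text_6"]
embeddings = VertexAIEmbeddings()

batch_size = 5  # Vertex AI allows at most 5 instances per prediction request
vectors = []
for i in range(0, len(texts), batch_size):
    vectors.extend(embeddings.embed_documents(texts[i : i + batch_size]))

# from_embeddings takes (text, vector) pairs plus the embedding object,
# which is still needed later for embedding queries.
vectorstore = FAISS.from_embeddings(list(zip(texts, vectors)), embeddings)
```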
VertexAIEmbeddings error when passing a list with of length greater than 5.
https://api.github.com/repos/langchain-ai/langchain/issues/5316/comments
2
2023-05-26T20:31:56Z
2023-05-29T13:57:42Z
https://github.com/langchain-ai/langchain/issues/5316
1,728,211,849
5,316
[ "hwchase17", "langchain" ]
### Feature request Is there any way to convert ChatOpenAI models to ONNX format? I've noticed that other models can be converted to ONNX (example: https://github.com/openai/whisper/discussions/134) and I was wondering if similar logic could be applied in this case as well. ### Motivation I want to save these models in the ONNX format, as a single file, so I can easily retrieve them and use them for question answering. ### Your contribution Not sure. I could create a PR if I'm able to succeed in this.
Converting ChatOpenAI model to ONNX format
https://api.github.com/repos/langchain-ai/langchain/issues/5313/comments
3
2023-05-26T19:19:53Z
2023-09-27T16:06:50Z
https://github.com/langchain-ai/langchain/issues/5313
1,728,123,134
5,313
[ "hwchase17", "langchain" ]
I've built many langchain agents with many types of memory to remember the context of the context in a correct way. The problem is if I send the same message again, he will generate the same thought and answer again, which I don't refer to. Example of the incorrect conversation that would happen between me and the agent when using **GPTIndexChatMemory** **User**:- Hi **AI**:- Hi, how can I help you today? **User**: What's machine learning? **AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(good answer referring to my previous question)** **User**:- I didn't get it. **AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(good answer referring to my previous message and he knew that I didn't the definition of ML)** **User**:- what's the physics? **AI**:- Physics is a natural science that studies the fundamental principles and laws governing the physical world. It seeks to understand the behavior of matter, energy, and the interactions between them. **(good answer referring to my previous message)** **User**:- I didn't get it. **(same message again which saved in the previous chat history)** **AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(not the correct answer, I'm referring to the definition of physics not the definition of ML)** **User**: What's machine learning? ** AI **:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(He generated the same answer from the previous chat history while I need him to give more information to the user not just the same previous answer)** Example of the correct conversation I want to accomplish when using **GPTIndexChatMemory** **User**:- Hi **AI**:- Hi, how can I help you today? **User**: What's machine learning? **AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(good answer referring to my previous question)** **User**:- I didn't get it. **AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. 
**(good answer referring to my previous message and he knew that I didn't the definition of ML)** **User**:- what's the physics? **AI**:- Physics is a natural science that studies the fundamental principles and laws governing the physical world. It seeks to understand the behavior of matter, energy, and the interactions between them. **(good answer referring to my previous message)** **User**:- I didn't get it. **AI**:- I apologize for any confusion caused. Let me provide a more simplified explanation of physics: Physics is the branch of science that deals with understanding and explaining how the world around us works. It is concerned with studying matter (everything that makes up the physical world) and energy (the ability to do work). **(now it's good because he knows that I'm referring to the definition of physics not the definition of ML, although the "I didn't get it." message was saved in the previous chat history.** **User**: What's machine learning? **AI**:- Machine learning is a field of artificial intelligence that focuses on developing algorithms and models capable of learning from data and making predictions or decisions. The primary idea behind machine learning is to enable computers to learn and improve automatically without explicit programming. **(better answer although I repeated the same question, but he didn't get the same answer from the previous chat history)** I know that the problem with the memory because if I build my agent with **ConversationBufferWindowMemory** with k = he 1 will perform this type of conversation, but since I'm using **GPTIndexChatMemory** he saved all the messages and the questions and the answers of the full conversation in this memory and bring the same answer from the previous chat history if the **message/question** repeated, which is totally wrong. This is my prompt to instruct my agent and its **CONVERSATIONAL_REACT_DESCRIPTION** """ SMSM bot, your main objective is to provide the most helpful and accurate responses to the user Zeyad. To do this, you have a powerful toolset and the ability to learn and adapt to the conversation's context GOAL: The priority is to keep the conversation flowing smoothly. Offer new insights, avoid repetitive responses, and refrain from chat history without considering the most recent context. Always place emphasis on the most recent question or topic raised by the user, and tailor your responses to match his inquiries. Consider the following scenarios: **Scenario 1**: Whenever the user introduces a new topic, all his subsequent messages are assumed to refer to this latest topic, even if this message/question already exists in the previous chat history as it is in previous conversations under different topics. This context remains until the user changes the topic explicitly. Do not seek clarification on the topic unless the user's message is ambiguous within the context of the latest topic, For example, if the user asked about Machine Learning and then about Physics, and subsequently said, "I didn't get it," your responsibility is to provide further explanation about Physics (the latest topic), and not Machine Learning (the previous topic) or ask which topic he's referring to. The phrase "I didn't get it" must be associated with the most recent topic discussed. **Scenario 2:** If the user asks the same question or a general knowledge question that has been asked before and you answered it, don't just repeat the previous answer verbatim or without relying on the previous chat history answer. 
Instead, try to add more value, provide a different perspective, or delve deeper into the topic and aim to generate a better and different answer that provides additional value. You MUST use the following format to provide the answer to the user: **Thought**: I have to see what the current topic we are currently discussing with the user based on the current topic, deeply analyze the user's message, find out his intention, and see if the user refers to the current topic or not regardless of previous chat history and with regarding (Scenario 1, GOAL) **AI**: [your response here] Begin! Prvious chat history: {chat_history} New input: {input} """ That's the way I define the agent and my memory. `embed_model = LangchainEmbedding(HuggingFaceEmbeddings()) service_context = ServiceContext.from_defaults(embed_model=embed_model) index = GPTListIndex([],service_context=service_context) from llama_index.query_engine import RetrieverQueryEngine #retriever = index.as_retriever(retriever_mode='embedding') #query_engine = RetrieverQueryEngine(retriever) memory = GPTIndexChatMemory( index=index, memory_key="chat_history", query_kwargs={"response_mode": "compact"}, input_key="input", ) agent_chain = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION , verbose=True, handle_parsing_errors=True, memory = memory )`
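For comparison, the configuration that produced the desired recency behaviour in the examples above is the plain windowed buffer; a minimal sketch of that baseline (it drops the index-backed memory entirely, trading long-term recall for strict recency — `k` and the other values are illustrative, and `tools` is reused from the setup above):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last two exchanges, so "I didn't get it" is always resolved
# against the most recent topic rather than an older, similar turn.
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=2)

agent_chain = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    memory=memory,
)
```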
Issue: All types of langchain memories don't work in a proper way.
https://api.github.com/repos/langchain-ai/langchain/issues/5308/comments
2
2023-05-26T16:33:34Z
2023-09-18T16:10:41Z
https://github.com/langchain-ai/langchain/issues/5308
1,727,924,219
5,308
[ "hwchase17", "langchain" ]
### Feature request Add a langchain.embeddings.AnthropicEmbeddings class, similar to the langchain.embeddings.OpenAIEmbeddings class. ### Motivation I am trying to modify this notebook to use Claude by Anthropic instead of OpenAI: https://github.com/pinecone-io/examples/blob/master/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb This notebook uses Pinecone and an OpenAI LLM to do retrieval augmentation, but I would like to use Claude by Anthropic. However, I am stuck because of the lack of a corresponding langchain.embeddings.AnthropicEmbeddings class to replace the langchain.embeddings.OpenAIEmbeddings class that is used in this example. ### Your contribution I am willing to contribute, but would appreciate some guidance. I am very new to this project.
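In the meantime the notebook can still be adapted by splitting responsibilities: Claude handles generation while an existing embedding class handles retrieval (to my knowledge Anthropic's API does not currently expose an embeddings endpoint, so an `AnthropicEmbeddings` class would likely need a third-party model underneath anyway). A sketch of that split — the index name is an assumption, and any embedding class (OpenAI, Cohere, HuggingFace) works the same way:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatAnthropic
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone

# Retrieval side: a local sentence-transformers model produces the vectors.
embeddings = HuggingFaceEmbeddings()
vectorstore = Pinecone.from_existing_index("langchain-retrieval-augmentation", embeddings)

# Generation side: Claude answers from the retrieved context.
qa = RetrievalQA.from_chain_type(
    llm=ChatAnthropic(),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What is retrieval augmentation?"))
```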
Add a langchain.embeddings.AnthropicEmbeddings class
https://api.github.com/repos/langchain-ai/langchain/issues/5307/comments
1
2023-05-26T16:25:57Z
2023-05-26T17:02:37Z
https://github.com/langchain-ai/langchain/issues/5307
1,727,914,846
5,307
[ "hwchase17", "langchain" ]
### System Info - 5.19.0-42-generic # 43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2 x86_64 x86_64 x86_64 GNU/Linux - langchain==0.0.180 - Python 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set up a retriever using any type of retriever (for example, I used Pinecone). 2. Pass it into the ContextualCompressionRetriever. 3. If the base retriever returns empty documents, 4. It throws an error: **cohere.error.CohereAPIError: invalid request: list of documents must not be empty** > File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/contextual_compression.py", line 37, in get_relevant_documents > compressed_docs = self.base_compressor.compress_documents(docs, query) > File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/document_compressors/cohere_rerank.py", line 57, in compress_documents > results = self.client.rerank( > File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 633, in rerank > reranking = Reranking(self._request(cohere.RERANK_URL, json=json_body)) > File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 692, in _request > self._check_response(json_response, response.headers, response.status_code) > File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 642, in _check_response > raise CohereAPIError( > **cohere.error.CohereAPIError: invalid request: list of documents must not be empty** Code is Like ```python retriever = vectorstore.as_retriever() compressor = CohereRerank() compression_retriever = ContextualCompressionRetriever( base_compressor=compressor, base_retriever=retriever ) return compression_retriever ``` ### Expected behavior **no error throws** and return empty list
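Until the compressor guards against an empty candidate list itself, a thin wrapper around `CohereRerank` can produce exactly that behaviour by short-circuiting before the API call (a workaround sketch, not part of the library; `retriever` is the base retriever from the code above):

```python
from typing import Sequence

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
from langchain.schema import Document


class SafeCohereRerank(CohereRerank):
    """Skip the rerank call when the base retriever returned nothing."""

    def compress_documents(
        self, documents: Sequence[Document], query: str
    ) -> Sequence[Document]:
        if not documents:
            return []
        return super().compress_documents(documents, query)


compression_retriever = ContextualCompressionRetriever(
    base_compressor=SafeCohereRerank(), base_retriever=retriever
)
```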
CohereAPIError thrown when base retriever returns empty documents in ContextualCompressionRetriever using Cohere Rank
https://api.github.com/repos/langchain-ai/langchain/issues/5304/comments
2
2023-05-26T16:10:47Z
2023-05-28T20:19:35Z
https://github.com/langchain-ai/langchain/issues/5304
1,727,893,507
5,304
[ "hwchase17", "langchain" ]
### System Info google-cloud-aiplatform==1.25.0 langchain==0.0.180 python 3.11 ### Who can help? @dev2049 @Jflick58 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction question1 = "I am axa, I'm a 2 months old baby.." question2 = "I like eating 🍌 🍉 🫐 but dislike 🥑" question3 = "what is my name?" question4 = "Do i disklike 🍌?" agent_chain = initialize_agent( agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, tools=[], llm=llm, verbose=True, max_iterations=3, memory=ConversationBufferMemory( memory_key="chat_history", return_messages=True), ) agent_chain.run(input=question1) agent_chain.run(input=question2) agent_chain.run(input=question3) agent_chain.run(input=question4) File "/Users/axa/workspace/h/default/genai_learning/post/api/app/routes/v1/quiz_chat.py", line 271, in ask agent_chain.run(input=question1) File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 239, in run return self(kwargs, callbacks=callbacks)[self.output_keys[0]] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 140, in __call__ raise e File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 134, in __call__ self._call(inputs, run_manager=run_manager) File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 951, in _call next_step_output = self._take_next_step( ^^^^^^^^^^^^^^^^^^^^^ File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 773, in _take_next_step raise e File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 762, in _take_next_step output = self.agent.plan( ^^^^^^^^^^^^^^^^ File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 444, in plan return self.output_parser.parse(full_output) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/conversational/output_parser.py", line 23, in parse raise OutputParserException(f"Could not parse LLM output: `{text}`") langchain.schema.OutputParserException: **Could not parse LLM output: `Hi Axa, it's nice to meet you! I'm Bard, a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.** ### Expected behavior When I used same code but ChatOpenAI() it worked perfectly. > Entering new AgentExecutor chain... Thought: Do I need to use a tool? No AI: Hello Axa! As an AI language model, I'm not able to see or interact with you physically, but I'm here to assist you with any questions or topics you might have. 
How can I assist you today? > Finished chain. > Entering new AgentExecutor chain... Thought: Do I need to use a tool? No AI: It's great to hear that you enjoy eating bananas, watermelons, and blueberries! However, it's understandable that you might not like avocados. Everyone has their own preferences when it comes to food. Is there anything else you would like to discuss or ask about? > Finished chain. > Entering new AgentExecutor chain... Thought: Do I need to use a tool? No AI: Your name is Axa, as you mentioned earlier. > Finished chain. > Entering new AgentExecutor chain... Thought: Do I need to use a tool? No AI: You did not mention that you dislike bananas, so I cannot say for sure. However, based on your previous message, it seems that you enjoy eating bananas. > Finished chain. INFO: 127.0.0.1:57044 - "POST /api/v1/quiz/ask HTTP/1.1" 200 OK
Vertex ChatVertexAI() doesn't work with initialize_agent(): raises OutputParserException
https://api.github.com/repos/langchain-ai/langchain/issues/5301/comments
1
2023-05-26T15:29:14Z
2023-09-10T16:11:32Z
https://github.com/langchain-ai/langchain/issues/5301
1,727,836,402
5,301
[ "hwchase17", "langchain" ]
### Feature request We should add support for the following vectorizers in the [weaviate hybrid search](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html): 1. [cohere](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-cohere) 2. [palm](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-cohere) 3. [huggingface](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-cohere) ### Motivation more flexibility to users ### Your contribution code review
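Until the retriever exposes a vectorizer option, one workaround today (a sketch that relies on the retriever only creating a schema when the class is missing — worth verifying against the installed version) is to create the Weaviate class with the desired module before instantiating the retriever, so the hard-coded `text2vec-openai` schema is never written:

```python
import weaviate
from langchain.retrievers import WeaviateHybridSearchRetriever

# URL and API-key header are assumptions; adjust for your deployment.
client = weaviate.Client(
    url="http://localhost:8080",
    additional_headers={"X-Cohere-Api-Key": "YOUR_COHERE_KEY"},
)

class_obj = {
    "class": "Document",
    "vectorizer": "text2vec-cohere",
    "properties": [{"name": "text", "dataType": ["text"]}],
}
existing = [c["class"] for c in client.schema.get().get("classes", [])]
if "Document" not in existing:
    client.schema.create_class(class_obj)

# Because the class already exists, the retriever skips its default schema creation.
retriever = WeaviateHybridSearchRetriever(
    client=client, index_name="Document", text_key="text"
)
```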
Weaviate: Add support for other vectorizers in hybrid search
https://api.github.com/repos/langchain-ai/langchain/issues/5300/comments
9
2023-05-26T14:54:53Z
2023-09-18T16:10:45Z
https://github.com/langchain-ai/langchain/issues/5300
1,727,780,002
5,300
[ "hwchase17", "langchain" ]
### System Info Version: 0.0.180 Python: 3.10.11 OS: MacOs Monterrey 12.5.1 (Apple Silicon) Steps to reproduce: ``` from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI # First, let's load the language model we're going to use to control the agent. chat = ChatOpenAI(temperature=0) # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in. llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use. agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) # Now let's test it out! agent.run("What is EPAM price in NYSE? What is that number raised to the 0.23 power?") ``` Output: ``` { "name": "OutputParserException", "message": "Could not parse LLM output: Thought: I need to use a search engine to find the current price of EPAM on NYSE and a calculator to raise it to the 0.23 power.\n\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"EPAM NYSE price\"\n}\n```\n\n", "stack": "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/chat/output_parser.py:21\u001b[0m, in \u001b[0;36mChatOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[0;32m---> 21\u001b[0m response \u001b[39m=\u001b[39m parse_json_markdown(text)\n\u001b[1;32m 22\u001b[0m \u001b[39mreturn\u001b[39;00m AgentAction(response[\u001b[39m\"\u001b[39m\u001b[39maction\u001b[39m\u001b[39m\"\u001b[39m], response[\u001b[39m\"\u001b[39m\u001b[39maction_input\u001b[39m\u001b[39m\"\u001b[39m], text)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/output_parsers/json.py:17\u001b[0m, in \u001b[0;36mparse_json_markdown\u001b[0;34m(json_string)\u001b[0m\n\u001b[1;32m 16\u001b[0m \u001b[39m# Parse the JSON string into a Python dictionary\u001b[39;00m\n\u001b[0;32m---> 17\u001b[0m parsed \u001b[39m=\u001b[39m json\u001b[39m.\u001b[39;49mloads(json_string)\n\u001b[1;32m 19\u001b[0m \u001b[39mreturn\u001b[39;00m parsed\n\nFile \u001b[0;32m/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py:346\u001b[0m, in \u001b[0;36mloads\u001b[0;34m(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[1;32m 343\u001b[0m \u001b[39mif\u001b[39;00m (\u001b[39mcls\u001b[39m \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m object_hook \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m\n\u001b[1;32m 344\u001b[0m parse_int \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m parse_float \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m\n\u001b[1;32m 345\u001b[0m parse_constant \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m object_pairs_hook \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m \u001b[39mnot\u001b[39;00m kw):\n\u001b[0;32m--> 346\u001b[0m 
\u001b[39mreturn\u001b[39;00m _default_decoder\u001b[39m.\u001b[39;49mdecode(s)\n\u001b[1;32m 347\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mcls\u001b[39m \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m:\n\nFile \u001b[0;32m/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py:337\u001b[0m, in \u001b[0;36mJSONDecoder.decode\u001b[0;34m(self, s, _w)\u001b[0m\n\u001b[1;32m 333\u001b[0m \u001b[39m\u001b[39m\u001b[39m\"\"\"Return the Python representation of ``s`` (a ``str`` instance\u001b[39;00m\n\u001b[1;32m 334\u001b[0m \u001b[39mcontaining a JSON document).\u001b[39;00m\n\u001b[1;32m 335\u001b[0m \n\u001b[1;32m 336\u001b[0m \u001b[39m\"\"\"\u001b[39;00m\n\u001b[0;32m--> 337\u001b[0m obj, end \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mraw_decode(s, idx\u001b[39m=\u001b[39;49m_w(s, \u001b[39m0\u001b[39;49m)\u001b[39m.\u001b[39;49mend())\n\u001b[1;32m 338\u001b[0m end \u001b[39m=\u001b[39m _w(s, end)\u001b[39m.\u001b[39mend()\n\nFile \u001b[0;32m/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py:355\u001b[0m, in \u001b[0;36mJSONDecoder.raw_decode\u001b[0;34m(self, s, idx)\u001b[0m\n\u001b[1;32m 354\u001b[0m \u001b[39mexcept\u001b[39;00m \u001b[39mStopIteration\u001b[39;00m \u001b[39mas\u001b[39;00m err:\n\u001b[0;32m--> 355\u001b[0m \u001b[39mraise\u001b[39;00m JSONDecodeError(\u001b[39m\"\u001b[39m\u001b[39mExpecting value\u001b[39m\u001b[39m\"\u001b[39m, s, err\u001b[39m.\u001b[39mvalue) \u001b[39mfrom\u001b[39;00m \u001b[39mNone\u001b[39;00m\n\u001b[1;32m 356\u001b[0m \u001b[39mreturn\u001b[39;00m obj, end\n\n\u001b[0;31mJSONDecodeError\u001b[0m: Expecting value: line 1 column 1 (char 0)\n\nDuring handling of the above exception, another exception occurred:\n\n\u001b[0;31mOutputParserException\u001b[0m Traceback (most recent call last)\nCell \u001b[0;32mIn[13], line 19\u001b[0m\n\u001b[1;32m 16\u001b[0m agent \u001b[39m=\u001b[39m initialize_agent(tools, chat, agent\u001b[39m=\u001b[39mAgentType\u001b[39m.\u001b[39mCHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose\u001b[39m=\u001b[39m\u001b[39mTrue\u001b[39;00m)\n\u001b[1;32m 18\u001b[0m \u001b[39m# Now let's test it out!\u001b[39;00m\n\u001b[0;32m---> 19\u001b[0m agent\u001b[39m.\u001b[39;49mrun(\u001b[39m\"\u001b[39;49m\u001b[39mWhat is EPAM price in NYSE? 
What is that number raised to the 0.23 power?\u001b[39;49m\u001b[39m\"\u001b[39;49m)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/chains/base.py:236\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, callbacks, *args, **kwargs)\u001b[0m\n\u001b[1;32m 234\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mlen\u001b[39m(args) \u001b[39m!=\u001b[39m \u001b[39m1\u001b[39m:\n\u001b[1;32m 235\u001b[0m \u001b[39mraise\u001b[39;00m \u001b[39mValueError\u001b[39;00m(\u001b[39m\"\u001b[39m\u001b[39m`run` supports only one positional argument.\u001b[39m\u001b[39m\"\u001b[39m)\n\u001b[0;32m--> 236\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39;49m(args[\u001b[39m0\u001b[39;49m], callbacks\u001b[39m=\u001b[39;49mcallbacks)[\u001b[39mself\u001b[39m\u001b[39m.\u001b[39moutput_keys[\u001b[39m0\u001b[39m]]\n\u001b[1;32m 238\u001b[0m \u001b[39mif\u001b[39;00m kwargs \u001b[39mand\u001b[39;00m \u001b[39mnot\u001b[39;00m args:\n\u001b[1;32m 239\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m(kwargs, callbacks\u001b[39m=\u001b[39mcallbacks)[\u001b[39mself\u001b[39m\u001b[39m.\u001b[39moutput_keys[\u001b[39m0\u001b[39m]]\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/chains/base.py:140\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks)\u001b[0m\n\u001b[1;32m 138\u001b[0m \u001b[39mexcept\u001b[39;00m (\u001b[39mKeyboardInterrupt\u001b[39;00m, \u001b[39mException\u001b[39;00m) \u001b[39mas\u001b[39;00m e:\n\u001b[1;32m 139\u001b[0m run_manager\u001b[39m.\u001b[39mon_chain_error(e)\n\u001b[0;32m--> 140\u001b[0m \u001b[39mraise\u001b[39;00m e\n\u001b[1;32m 141\u001b[0m run_manager\u001b[39m.\u001b[39mon_chain_end(outputs)\n\u001b[1;32m 142\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mprep_outputs(inputs, outputs, return_only_outputs)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/chains/base.py:134\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks)\u001b[0m\n\u001b[1;32m 128\u001b[0m run_manager \u001b[39m=\u001b[39m callback_manager\u001b[39m.\u001b[39mon_chain_start(\n\u001b[1;32m 129\u001b[0m {\u001b[39m\"\u001b[39m\u001b[39mname\u001b[39m\u001b[39m\"\u001b[39m: \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m\u001b[39m__class__\u001b[39m\u001b[39m.\u001b[39m\u001b[39m__name__\u001b[39m},\n\u001b[1;32m 130\u001b[0m inputs,\n\u001b[1;32m 131\u001b[0m )\n\u001b[1;32m 132\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[1;32m 133\u001b[0m outputs \u001b[39m=\u001b[39m (\n\u001b[0;32m--> 134\u001b[0m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_call(inputs, run_manager\u001b[39m=\u001b[39;49mrun_manager)\n\u001b[1;32m 135\u001b[0m \u001b[39mif\u001b[39;00m new_arg_supported\n\u001b[1;32m 136\u001b[0m \u001b[39melse\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_call(inputs)\n\u001b[1;32m 137\u001b[0m )\n\u001b[1;32m 138\u001b[0m \u001b[39mexcept\u001b[39;00m (\u001b[39mKeyboardInterrupt\u001b[39;00m, \u001b[39mException\u001b[39;00m) \u001b[39mas\u001b[39;00m e:\n\u001b[1;32m 139\u001b[0m run_manager\u001b[39m.\u001b[39mon_chain_error(e)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:951\u001b[0m, in \u001b[0;36mAgentExecutor._call\u001b[0;34m(self, inputs, run_manager)\u001b[0m\n\u001b[1;32m 949\u001b[0m \u001b[39m# We now enter the 
agent loop (until it returns something).\u001b[39;00m\n\u001b[1;32m 950\u001b[0m \u001b[39mwhile\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_should_continue(iterations, time_elapsed):\n\u001b[0;32m--> 951\u001b[0m next_step_output \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_take_next_step(\n\u001b[1;32m 952\u001b[0m name_to_tool_map,\n\u001b[1;32m 953\u001b[0m color_mapping,\n\u001b[1;32m 954\u001b[0m inputs,\n\u001b[1;32m 955\u001b[0m intermediate_steps,\n\u001b[1;32m 956\u001b[0m run_manager\u001b[39m=\u001b[39;49mrun_manager,\n\u001b[1;32m 957\u001b[0m )\n\u001b[1;32m 958\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(next_step_output, AgentFinish):\n\u001b[1;32m 959\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_return(\n\u001b[1;32m 960\u001b[0m next_step_output, intermediate_steps, run_manager\u001b[39m=\u001b[39mrun_manager\n\u001b[1;32m 961\u001b[0m )\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:773\u001b[0m, in \u001b[0;36mAgentExecutor._take_next_step\u001b[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001b[0m\n\u001b[1;32m 771\u001b[0m raise_error \u001b[39m=\u001b[39m \u001b[39mFalse\u001b[39;00m\n\u001b[1;32m 772\u001b[0m \u001b[39mif\u001b[39;00m raise_error:\n\u001b[0;32m--> 773\u001b[0m \u001b[39mraise\u001b[39;00m e\n\u001b[1;32m 774\u001b[0m text \u001b[39m=\u001b[39m \u001b[39mstr\u001b[39m(e)\n\u001b[1;32m 775\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mhandle_parsing_errors, \u001b[39mbool\u001b[39m):\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:762\u001b[0m, in \u001b[0;36mAgentExecutor._take_next_step\u001b[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001b[0m\n\u001b[1;32m 756\u001b[0m \u001b[39m\u001b[39m\u001b[39m\"\"\"Take a single step in the thought-action-observation loop.\u001b[39;00m\n\u001b[1;32m 757\u001b[0m \n\u001b[1;32m 758\u001b[0m \u001b[39mOverride this to take control of how the agent makes and acts on choices.\u001b[39;00m\n\u001b[1;32m 759\u001b[0m \u001b[39m\"\"\"\u001b[39;00m\n\u001b[1;32m 760\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[1;32m 761\u001b[0m \u001b[39m# Call the LLM to see what to do.\u001b[39;00m\n\u001b[0;32m--> 762\u001b[0m output \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49magent\u001b[39m.\u001b[39;49mplan(\n\u001b[1;32m 763\u001b[0m intermediate_steps,\n\u001b[1;32m 764\u001b[0m callbacks\u001b[39m=\u001b[39;49mrun_manager\u001b[39m.\u001b[39;49mget_child() \u001b[39mif\u001b[39;49;00m run_manager \u001b[39melse\u001b[39;49;00m \u001b[39mNone\u001b[39;49;00m,\n\u001b[1;32m 765\u001b[0m \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49minputs,\n\u001b[1;32m 766\u001b[0m )\n\u001b[1;32m 767\u001b[0m \u001b[39mexcept\u001b[39;00m OutputParserException \u001b[39mas\u001b[39;00m e:\n\u001b[1;32m 768\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mhandle_parsing_errors, \u001b[39mbool\u001b[39m):\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:444\u001b[0m, in \u001b[0;36mAgent.plan\u001b[0;34m(self, intermediate_steps, callbacks, **kwargs)\u001b[0m\n\u001b[1;32m 442\u001b[0m full_inputs 
\u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mget_full_inputs(intermediate_steps, \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mkwargs)\n\u001b[1;32m 443\u001b[0m full_output \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mllm_chain\u001b[39m.\u001b[39mpredict(callbacks\u001b[39m=\u001b[39mcallbacks, \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mfull_inputs)\n\u001b[0;32m--> 444\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49moutput_parser\u001b[39m.\u001b[39;49mparse(full_output)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/chat/output_parser.py:25\u001b[0m, in \u001b[0;36mChatOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 22\u001b[0m \u001b[39mreturn\u001b[39;00m AgentAction(response[\u001b[39m\"\u001b[39m\u001b[39maction\u001b[39m\u001b[39m\"\u001b[39m], response[\u001b[39m\"\u001b[39m\u001b[39maction_input\u001b[39m\u001b[39m\"\u001b[39m], text)\n\u001b[1;32m 24\u001b[0m \u001b[39mexcept\u001b[39;00m \u001b[39mException\u001b[39;00m:\n\u001b[0;32m---> 25\u001b[0m \u001b[39mraise\u001b[39;00m OutputParserException(\u001b[39mf\u001b[39m\u001b[39m\"\u001b[39m\u001b[39mCould not parse LLM output: \u001b[39m\u001b[39m{\u001b[39;00mtext\u001b[39m}\u001b[39;00m\u001b[39m\"\u001b[39m)\n\n\u001b[0;31mOutputParserException\u001b[0m: Could not parse LLM output: Thought: I need to use a search engine to find the current price of EPAM on NYSE and a calculator to raise it to the 0.23 power.\n\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"EPAM NYSE price\"\n}\n```\n\n" } ``` ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI # First, let's load the language model we're going to use to control the agent. chat = ChatOpenAI(temperature=0) # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in. llm = OpenAI(temperature=0) tools = load_tools(["serpapi", "llm-math"], llm=llm) # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use. agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) # Now let's test it out! agent.run("What is EPAM price in NYSE? What is that number raised to the 0.23 power?") ``` ### Expected behavior Should work
Failure to run docpage examples
https://api.github.com/repos/langchain-ai/langchain/issues/5299/comments
3
2023-05-26T13:48:05Z
2023-06-05T08:51:01Z
https://github.com/langchain-ai/langchain/issues/5299
1,727,665,260
5,299
[ "hwchase17", "langchain" ]
### System Info

`python 3.11`

```
fastapi==0.95.1
langchain==0.0.180
pydantic==1.10.7
uvicorn==0.21.1
openai==0.27.4
```

### Who can help?

@agola11 @hwchase17

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async

### Reproduction

I am trying to create a streaming endpoint in FastAPI; below are the files.

`main.py`

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse  # StreamingResponse is referenced by response_class below

from src.chat_stream import ChatOpenAIStreamingResponse, send_message, StreamRequest

app = FastAPI()


@app.post("/chat_streaming", response_class=StreamingResponse)
async def chat(body: StreamRequest):
    return ChatOpenAIStreamingResponse(send_message(body.message), media_type="text/event-stream")
```

`src/chat_stream.py`

```python
from typing import Awaitable, Callable, Union

Sender = Callable[[Union[str, bytes]], Awaitable[None]]

from starlette.types import Send
from typing import Any, Optional, Awaitable, Callable, Iterator, Union
from langchain.schema import HumanMessage
from pydantic import BaseModel
from fastapi.responses import StreamingResponse
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager


class EmptyIterator(Iterator[Union[str, bytes]]):
    def __iter__(self):
        return self

    def __next__(self):
        raise StopIteration


class AsyncStreamCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming, inheriting from AsyncCallbackHandler."""

    def __init__(self, send: Sender):
        super().__init__()
        self.send = send

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Override on_llm_new_token to send each token to the client."""
        await self.send(f"data: {token}\n\n")


class ChatOpenAIStreamingResponse(StreamingResponse):
    """Streaming response for the OpenAI chat model, inheriting from StreamingResponse."""

    def __init__(
        self,
        generate: Callable[[Sender], Awaitable[None]],
        status_code: int = 200,
        media_type: Optional[str] = None,
    ) -> None:
        super().__init__(
            content=EmptyIterator(), status_code=status_code, media_type=media_type
        )
        self.generate = generate

    async def stream_response(self, send: Send) -> None:
        """Override stream_response to send the response to the client."""
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )

        async def send_chunk(chunk: Union[str, bytes]):
            if not isinstance(chunk, bytes):
                chunk = chunk.encode(self.charset)
            await send({"type": "http.response.body", "body": chunk, "more_body": True})

        # send body to client
        await self.generate(send_chunk)
        # send empty body to client to close connection
        await send({"type": "http.response.body", "body": b"", "more_body": False})


def send_message(message: str) -> Callable[[Sender], Awaitable[None]]:
    async def generate(send: Sender):
        model = ChatOpenAI(
            streaming=True,
            verbose=True,
            callback_manager=AsyncCallbackManager([AsyncStreamCallbackHandler(send)]),
        )
        await model.agenerate(messages=[[HumanMessage(content=message)]])

    return generate


class StreamRequest(BaseModel):
    """Request body for streaming."""

    message: str
```

### Expected behavior

The endpoint should stream the response from the LLM chain; instead I am getting this error:

```
Retrying
langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIConnectionError: Error communicating with OpenAI. Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIConnectionError: Error communicating with OpenAI. Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI. Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI. Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI. ``` ```python ERROR: Exception in ASGI application Traceback (most recent call last): File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1098, in create_connection transport, protocol = await self._create_connection_transport( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1131, in _create_connection_transport await waiter File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 577, in _on_handshake_complete raise handshake_exc File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 559, in _do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 979, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 588, in arequest_raw result = await session.request(**request_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request conn = await self._connector.connect( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect proto = await self._create_connection(req, traces, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection _, proto = await self._create_direct_connection(req, traces, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection raise last_exc File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection transp, proto = await self._wrap_create_connection( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection raise ClientConnectorCertificateError(req.connection_key, 
exc) from exc aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')] The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Project/venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 429, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/fastapi/applications.py", line 276, in __call__ await super().__call__(scope, receive, send) File "/Project/venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__ await self.middleware_stack(scope, receive, send) File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__ raise exc File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__ raise exc File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__ await self.app(scope, receive, sender) File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__ raise e File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__ await self.app(scope, receive, send) File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__ await route.handle(scope, receive, send) File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle await self.app(scope, receive, send) File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 69, in app await response(scope, receive, send) File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__ async with anyio.create_task_group() as task_group: File "/Project/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__ raise exceptions[0] File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap await func() File "/Project/src/app.py", line 67, in stream_response await self.generate(send_chunk) File "/Project/src/app.py", line 80, in generate await model.agenerate(messages=[[HumanMessage(content=message)]]) File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 63, in agenerate results = await asyncio.gather( ^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 297, in _agenerate async for stream_resp in await acompletion_with_retry( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 63, in acompletion_with_retry return await _completion_with_retry(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped return await fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__ do = self.iter(retry_state=retry_state) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter raise retry_exc.reraise() ^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise raise self.last_attempt.result() ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__ result = await fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 61, in _completion_with_retry return await llm.client.acreate(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate return await super().acreate(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate response, _, api_key = await requestor.arequest( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 300, in arequest result = await self.arequest_raw( ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 605, in arequest_raw raise error.APIConnectionError("Error communicating with OpenAI") from e openai.error.APIConnectionError: Error communicating with OpenAI ```
APIConnectionError: Error communicating with OpenAI.
https://api.github.com/repos/langchain-ai/langchain/issues/5296/comments
12
2023-05-26T12:14:47Z
2024-04-19T15:23:44Z
https://github.com/langchain-ai/langchain/issues/5296
1,727,514,993
5,296
[ "hwchase17", "langchain" ]
### System Info

ValueError: `run` not supported when there is not exactly one output key. Got ['result', 'source_documents']

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=db.as_retriever(),
    return_source_documents=True
)

agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)  # , output_keys=['result','source_documents'])

agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)  # return_intermediate_steps=True)  # , output_keys=['result','source_documents'])
```

### Expected behavior

Returns the answer and source doc as well
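For reference, `run()` only supports chains with exactly one output key, and with `return_source_documents=True` the `RetrievalQA` chain exposes two (`result` and `source_documents`). A minimal sketch of reading both — reusing the `db` retriever from the snippet above and leaving the rest of the setup unchanged — is to call the chain directly instead of `run()`:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=db.as_retriever(),
    return_source_documents=True,
)

# Calling the chain itself (instead of qa.run(...)) returns a dict with every output key.
response = qa({"query": "your question here"})
answer = response["result"]
sources = response["source_documents"]
```

When the chain is wrapped as an agent `Tool`, the tool function still has to return a single string, so one option is a small wrapper that formats `result` together with whatever source metadata is needed, rather than passing the multi-output chain's `run` to the tool directly.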
Get the source document info with result
https://api.github.com/repos/langchain-ai/langchain/issues/5295/comments
4
2023-05-26T11:41:44Z
2023-10-23T16:08:27Z
https://github.com/langchain-ai/langchain/issues/5295
1,727,465,007
5,295