Dataset columns:
- issue_owner_repo: list, length 2
- issue_body: string, 0–261k chars
- issue_title: string, 1–925 chars
- issue_comments_url: string, 56–81 chars
- issue_comments_count: int64, 0–2.5k
- issue_created_at: string, 20 chars
- issue_updated_at: string, 20 chars
- issue_html_url: string, 37–62 chars
- issue_github_id: int64, 387k–2.46B
- issue_number: int64, 1–127k
[ "hwchase17", "langchain" ]
In `langchain/agents/conversational_chat/base.py`, changing line 107 from returning `response['action'], response['action_input']` to `response['action'], response.get('action_input')` would fix this error when an agent tries to use a tool that is not supposed to take any input.
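A rough sketch of the suggested change; only the return line comes from the issue, the surrounding method is approximated for context, not copied from the repository:

```python
# Sketch of the suggested fix in langchain/agents/conversational_chat/base.py.
def _extract_tool_and_input(self, llm_output: str):
    response = self.output_parser.parse(llm_output)
    # Before: response["action_input"] raises KeyError for tools that take no
    # input, which surfaces as "Could not parse LLM output".
    # After: such tools simply receive None as their action input.
    return response["action"], response.get("action_input")
```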
Fix for "Could not parse LLM output" for tools that don't take input
https://api.github.com/repos/langchain-ai/langchain/issues/2832/comments
1
2023-04-13T15:12:58Z
2023-09-15T22:12:49Z
https://github.com/langchain-ai/langchain/issues/2832
1,666,625,495
2,832
[ "hwchase17", "langchain" ]
Hi All, I am trying to use the SQL database chain as mentioned over here (https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html) using AzureOpenAI API but getting the following errors: Command run: db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True,return_intermediate_steps = True) Error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[34], line 1 ----> 1 db_chain.run("How many singers are there in the singer table?") File [~\A~/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0/LocalCache/local-packages/Python310/site-packages/langchain/chains/base.py:205), in Chain.run(self, *args, **kwargs) 203 """Run the chain as text in, text out or multiple variables, text out.""" 204 if len(self.output_keys) != 1: --> 205 raise ValueError( 206 f"`run` not supported when there is not exactly " 207 f"one output key. Got {self.output_keys}." 208 ) 210 if args and not kwargs: 211 if len(args) != 1: ValueError: `run` not supported when there is not exactly one output key. Got ['result', 'intermediate_steps'].
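A minimal sketch of why `run` fails here and one workaround, assuming the standard `Chain.__call__` behaviour of returning every output key in a dict (not taken from the issue itself):

```python
# With return_intermediate_steps=True the chain exposes two output keys
# ('result' and 'intermediate_steps'), so Chain.run() refuses to pick one.
# Calling the chain directly returns the full output dict instead.
outputs = db_chain("How many singers are there in the singer table?")
print(outputs["result"])
print(outputs["intermediate_steps"])
```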
SQLDatabaseChain returning error for return_intermediate_steps
https://api.github.com/repos/langchain-ai/langchain/issues/2831/comments
4
2023-04-13T14:51:55Z
2023-09-27T16:08:37Z
https://github.com/langchain-ai/langchain/issues/2831
1,666,588,302
2,831
[ "hwchase17", "langchain" ]
How do I integrate a toolkit, such as the JSON toolkit, with a tool such as the human tool to interact with data?
toolkits and tools
https://api.github.com/repos/langchain-ai/langchain/issues/2829/comments
2
2023-04-13T14:29:20Z
2023-09-10T16:34:00Z
https://github.com/langchain-ai/langchain/issues/2829
1,666,539,911
2,829
[ "hwchase17", "langchain" ]
Hello, I would like to request the addition of support for Amazon Bedrock to the Langchain library. As Amazon Bedrock is a new service, it would be beneficial for Langchain to include it as a supported platform. 2023-04-13 Amazon announced the new service [Amazon Bedrock](https://aws.amazon.com/bedrock/). Blog: https://aws.amazon.com/blogs/machine-learning/announcing-new-tools-for-building-with-generative-ai-on-aws/
Support for Amazon Bedrock
https://api.github.com/repos/langchain-ai/langchain/issues/2828/comments
50
2023-04-13T13:39:13Z
2024-05-07T11:11:51Z
https://github.com/langchain-ai/langchain/issues/2828
1,666,442,102
2,828
[ "hwchase17", "langchain" ]
Hi Team, I am trying to create an index for _paul_graham_essay.txt_ with huggingface LLM models. But ending up with below error. Can someone please advice ? Code: ``` from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader, PromptHelper, LLMPredictor, ServiceContext import torch from langchain.llms.base import LLM from transformers import pipeline from typing import Optional, List, Mapping, Any # define prompt helper # set maximum input size max_input_size = 512 # set number of output tokens num_output = 512 # set maximum chunk overlap max_chunk_overlap = 20 prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap) class CustomLLM(LLM): model_name = "databricks/dolly-v1-6b" pipeline = pipeline(model="databricks/dolly-v1-6b", trust_remote_code=True, device_map="auto") def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str: response = self.pipeline(prompt)[0]["generated_text"] # only return newly generated tokens return response @property def _identifying_params(self) -> Mapping[str, Any]: return {"name_of_model": self.model_name} @property def _llm_type(self) -> str: return "custom" llm_predictor = LLMPredictor(llm=CustomLLM()) service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper) documents = SimpleDirectoryReader('data').load_data() index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context) ``` Error: ``` AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>'... ...RetryError: RetryError[<Future at 0x7fb848640ad0 state=finished raised AuthenticationError>] ``` Thanks
GPTSimpleVectorIndex throwing error of OPENAI_API_KEY for huggingface models
https://api.github.com/repos/langchain-ai/langchain/issues/2824/comments
2
2023-04-13T12:43:57Z
2023-11-19T16:07:26Z
https://github.com/langchain-ai/langchain/issues/2824
1,666,349,132
2,824
[ "hwchase17", "langchain" ]
In previous versions of LC running the following code would create an index in Pinecone: ``` loader = TextLoader(join('data', 'ad.txt')) documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() pinecone.init( api_key="cc7f8b80-2cc9-4d72-8c0d-0e2d287977d5", environment="asia-southeast1-gcp" ) index_name = "langchain-demo" vectorstore = Pinecone.from_documents(docs, embeddings, index_name=index_name) ``` This would create an index in `v.0.0.123`, instead I'm seeing this error: `ValueError: No active indexes found in your Pinecone project, are you sure you're using the right API key and environment?` Are there any breaking changes to this implementation we should be aware of?
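If newer versions indeed stopped auto-creating the index, one hedged workaround is to create it up front. The sketch below assumes the `pinecone-client` 2.x API and 1536-dimension OpenAI ada-002 embeddings:

```python
import pinecone

pinecone.init(api_key="<PINECONE_API_KEY>", environment="asia-southeast1-gcp")

index_name = "langchain-demo"
# Create the index first if it does not exist yet; 1536 matches OpenAI's
# text-embedding-ada-002 vector size (assumption about the embeddings used).
if index_name not in pinecone.list_indexes():
    pinecone.create_index(name=index_name, dimension=1536, metric="cosine")

vectorstore = Pinecone.from_documents(docs, embeddings, index_name=index_name)
```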
`v.0.0.138` does not create a `Pinecone` index when initiating a vectorstore.
https://api.github.com/repos/langchain-ai/langchain/issues/2822/comments
3
2023-04-13T11:28:28Z
2023-09-26T16:08:50Z
https://github.com/langchain-ai/langchain/issues/2822
1,666,235,271
2,822
[ "hwchase17", "langchain" ]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html It doesn't look like `embeddings` var is used. This code fails on: ``` ====================================================================== ERROR: test_weaviate_setup (__main__.TestVectorStores) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_vectorstores.py", line 50, in test_weaviate_setup client.schema.create(schema) File "/Users/ashemagalhaes/opt/miniconda3/envs/hearth-modeling/lib/python3.8/site-packages/weaviate/schema/crud_schema.py", line 128, in create self._create_classes_with_primitives(loaded_schema["classes"]) File "/Users/ashemagalhaes/opt/miniconda3/envs/hearth-modeling/lib/python3.8/site-packages/weaviate/schema/crud_schema.py", line 723, in _create_classes_with_primitives self._create_class_with_primitives(weaviate_class) File "/Users/ashemagalhaes/opt/miniconda3/envs/hearth-modeling/lib/python3.8/site-packages/weaviate/schema/crud_schema.py", line 708, in _create_class_with_primitives raise UnexpectedStatusCodeException("Create class", response) weaviate.exceptions.UnexpectedStatusCodeException: Create class! Unexpected status code: 422, with response body: {'error': [{'message': "module 'text2vec-openai': unsupported version 002"}]}. ---------------------------------------------------------------------- Ran 1 test in 0.564s FAILED (errors=1) sys:1: ResourceW ```
Weaviate setup in docs is broken
https://api.github.com/repos/langchain-ai/langchain/issues/2820/comments
4
2023-04-13T10:17:05Z
2023-09-26T16:08:55Z
https://github.com/langchain-ai/langchain/issues/2820
1,666,134,513
2,820
[ "hwchase17", "langchain" ]
When using the `from_texts()` method of the OpenSearchVectorSearch class it is not possible to pass kwargs to the OpenSearch client (which _does_ happen when using the standard `__init__()` constructor). The fix is to add `**kwargs` to line 431 in `opensearch_vector_search.py`: ``` client = _get_opensearch_client(opensearch_url) ```
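A sketch of the proposed change, assuming the quoted line sits inside `from_texts()` and that the caller's `**kwargs` are in scope there:

```python
# Inside OpenSearchVectorSearch.from_texts (paraphrased around the single
# quoted line, not verbatim from the repository).
# Before:
#   client = _get_opensearch_client(opensearch_url)
# After: forward the caller's kwargs (auth, SSL options, etc.) to the client.
client = _get_opensearch_client(opensearch_url, **kwargs)
```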
Construct OpenSearchVectorSearch using `from_texts` doesn't pass `kwargs` to opensearch client
https://api.github.com/repos/langchain-ai/langchain/issues/2819/comments
1
2023-04-13T10:00:52Z
2023-04-18T03:44:32Z
https://github.com/langchain-ai/langchain/issues/2819
1,666,109,195
2,819
[ "hwchase17", "langchain" ]
Please discuss with me the best practices for creating and sharing tools on langchain. I am developing a library, [langchain-tools-nicovideo](https://github.com/Javakky/langchain-tools-nicovideo), to combine with LangChain for obtaining information from [nicovideo](https://www.nicovideo.jp/) (Japanese video sharing site for otaku) langchain already has several built-in tools, but by allowing developers experienced with data sources and API usage to freely create and share tools, we can expand the possibilities even further. Could you please review and discuss the following proposed best practices for people who want to create and share their own tools? ## 1. A bulletin board for promoting homemade tools. - This does not necessarily need to be a rich website. For example, by simply preparing a Markdown document (`tools.md`) with a list, developers of tools will promote their library naturally through pull requests. ```md <!-- GitHub or PyPI link --> - [Javakky/langchain-tools-nicovideo](https://github.com/Javakky/langchain-tools-nicovideo) ``` ## 2. Prefix to declare that it is a tool. - As a trial, I used `langchain-tools` as a prefix. By having developers of tools collaborate to add prefixes, the cost of finding tools on platforms such as PyPI will be greatly reduced. (`langchain-tools` is only provisional.) ## 3. By setting the configuration, you can easily add any tools to load_tools. - As in the use case of my library, when injecting tools from the outside, you need to specify the tools and wrapper yourself. If users can find a correspondence between the name and class in the config of the library, they will no longer need to know the class name of individual tools. ```python tools = load_tools(["requests_all"], llm=llm) tools.append(NicovideoQueryRun(api_wrapper=NicovideoSnapshotApiWrapper())) ```
Proposal for an ecosystem of tools to extend langchain
https://api.github.com/repos/langchain-ai/langchain/issues/2818/comments
3
2023-04-13T09:53:13Z
2023-09-25T16:09:14Z
https://github.com/langchain-ai/langchain/issues/2818
1,666,096,684
2,818
[ "hwchase17", "langchain" ]
null
Having token limit issue while using sql agents when the data returned by the agent query on the database is too large
https://api.github.com/repos/langchain-ai/langchain/issues/2817/comments
3
2023-04-13T08:43:00Z
2024-03-31T04:25:20Z
https://github.com/langchain-ai/langchain/issues/2817
1,665,984,218
2,817
[ "hwchase17", "langchain" ]
As part of our efforts to ensure high quality and robustness of Langchain, it's important to acknowledge that we have experienced many issues with the functional capabilities of the Vector Stores, which do not always perform as expected and have been documented as such. Therefore, we need to implement a comprehensive suite of tests for Vector Stores, covering all the relevant functionality, to ensure that they work as expected, regardless of whether they are local or remote. This will involve testing the ability of the stores to handle text inputs, metadata, IDs, document updates, similarity searches, etc. Once this testing is complete in full or partial, it should be added to our GitHub flow. This will help us avoid breaking any existing functionality and maintain the quality of our code. With this in place, we can be confident that the Vector Stores will function optimally, and we can avoid any degradation of their functionality in future updates. In addition, users of Langchain will be able to refer to the tests as a source of information on how to use the Vector Stores effectively. If users have questions about how to perform a specific task with a particular Vector Store, they can look to the tests for answers and guidance. This will improve the overall usability of Langchain and enhance the experience for users. _I am not sure if I will have enough time to implement it, but in any case, I will try to make it possible. At the very least, I will implement a good starting point for it._ https://github.com/hwchase17/langchain/issues/2484 https://github.com/hwchase17/langchain/issues/829 https://github.com/hwchase17/langchain/issues/2491 https://github.com/hwchase17/langchain/issues/2225 and others
Complete testing for Vector Stores
https://api.github.com/repos/langchain-ai/langchain/issues/2816/comments
2
2023-04-13T08:36:22Z
2023-11-19T16:07:31Z
https://github.com/langchain-ai/langchain/issues/2816
1,665,974,601
2,816
[ "hwchase17", "langchain" ]
Even though the tiktoken Python package is listed in pyproject.toml, it doesn't seem to get installed, and I got this error on summarization with map-reduce: `ValueError: Could not import tiktoken python package. This is needed in order to calculate get_num_tokens. Please install it with pip install tiktoken.` After installing the package with pip it worked.
tiktoken python package
https://api.github.com/repos/langchain-ai/langchain/issues/2814/comments
7
2023-04-13T08:06:14Z
2024-02-12T16:19:34Z
https://github.com/langchain-ai/langchain/issues/2814
1,665,929,431
2,814
[ "hwchase17", "langchain" ]
How do I get the conversation logs in ConversationalRetrievalChain, such as the condensed standalone question?
Get the conversation logs in ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/2813/comments
4
2023-04-13T07:57:09Z
2024-01-04T17:30:12Z
https://github.com/langchain-ai/langchain/issues/2813
1,665,913,983
2,813
[ "hwchase17", "langchain" ]
Does langchain support integrating with our own APIs, or currently only the ones shown in the examples?
working with own API
https://api.github.com/repos/langchain-ai/langchain/issues/2812/comments
4
2023-04-13T07:43:01Z
2023-11-20T16:07:31Z
https://github.com/langchain-ai/langchain/issues/2812
1,665,894,861
2,812
[ "hwchase17", "langchain" ]
1. `VectorStoreToolkit` and `VectorStoreRouterToolkit` rely on `VectorStoreQATool` and `VectorStoreQAWithSourcesTool`. 2. `VectorStoreQATool` and `VectorStoreQAWithSourcesTool` rely on `VectorDBQA` and `VectorDBQAWithSourcesChain` respectively. 3. Although `VectorDBQA` and `VectorDBQAWithSourcesChain` are deprecated, there are currently no replacements available. 4. As a result, an agent initialized by `create_vectorstore_agent` continually calls `raise_deprecation` in the agent executor. To address this, LangChain needs the following: `create_retrieval_qa_agent`, `RetrievalQAToolKit`, `RetrievalQAWithSourcesTool`.
VectorStoreToolkit uses deprecated VectorDBQA, langchain needs create_retrieval_qa_agent
https://api.github.com/repos/langchain-ai/langchain/issues/2811/comments
5
2023-04-13T07:31:47Z
2023-09-18T16:20:13Z
https://github.com/langchain-ai/langchain/issues/2811
1,665,879,850
2,811
[ "hwchase17", "langchain" ]
On the [Hugging Face Hub example page](https://github.com/hwchase17/langchain/blob/0e763677e4c334af80f2b542cb269f3786d8403f/docs/modules/models/llms/integrations/huggingface_hub.ipynb), the question is, "What NFL team won the Super Bowl in the year Justin Beiber *[sic]* was born?" The answer is, "The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks." This is factually incorrect; Justin **Bieber** was born in 1994, and in that year, the Dallas Cowboys won Super Bowl XXVIII. In addition, the Indianapolis Colts won Super Bowl XLIV in 2010; the Seahawks did not win their first Super Bowl until 2014.
Hugging Face Hub example is factually incorrect
https://api.github.com/repos/langchain-ai/langchain/issues/2802/comments
8
2023-04-12T23:40:00Z
2023-09-27T16:08:42Z
https://github.com/langchain-ai/langchain/issues/2802
1,665,412,576
2,802
[ "hwchase17", "langchain" ]
It is possible to ask an LLM to create its own symbolic expression to compress a given prompt. See this twitter discussion/[digest](https://www.piratewires.com/p/compression-prompts-gpt-hidden-dialects). It would be useful to add some tooling to compress re-used prompts. Doing some light experimentation with GPT-4, it seems the LLMs can also compress prompts while maintaining any template site indicators ( `${}` and such). This would be an additional, but more self-contained and implementable way to address the desires raised in this discussion: https://github.com/hwchase17/langchain/issues/2257
Integrate LLM-assisted symbolic compression
https://api.github.com/repos/langchain-ai/langchain/issues/2794/comments
1
2023-04-12T20:19:58Z
2023-09-10T16:34:20Z
https://github.com/langchain-ai/langchain/issues/2794
1,665,192,419
2,794
[ "hwchase17", "langchain" ]
Below is the code i'm using to explore a CSV on Pokemon. I'm trying to plug in Azure credentials to get it to work but i'm running into some issues. from langchain.agents import create_pandas_dataframe_agent from langchain.llms import OpenAI import pandas as pd import os os.environ["OPENAI_API_KEY"] = 'sk-xxx' df = pd.read_csv('pokemon.csv') agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True) agent.run("What is the name of the gen 5 pokemon has the highest speed?") Here is what I've been trying to do to get Azure OpenAI to work with this: os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_KEY"] = "xx" os.environ["OPENAI_API_BASE"] = "https://xx.openai.azure.com" os.environ["OPENAI_API_VERSION"] = "2022-12-01" When I try to run the above, I get the error that the API key is incorrect. More specifically: AuthenticationError: Incorrect API key provided: ****************************. You can find your API key at https://platform.openai.com/account/api-keys. Has anyone dealt with this before?
Langchain Azure create_pandas_dataframe_agent Issue
https://api.github.com/repos/langchain-ai/langchain/issues/2790/comments
2
2023-04-12T18:49:32Z
2023-09-10T16:34:26Z
https://github.com/langchain-ai/langchain/issues/2790
1,665,073,170
2,790
[ "hwchase17", "langchain" ]
When replicating the [hierarchical planning example](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/openapi.html#st-example-hierarchical-planning-agent) with a large enough OpenAPI specification, the following error is thrown when running the agent with any query: ``` InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 6561 tokens. Please reduce the length of the messages. ``` Here is how I'm reducing my OpenAPI spec: ```python with open("server.yml") as f: raw_server_spec = yaml.load(f, Loader=yaml.Loader) server_spec = reduce_openapi_spec(raw_server_spec) ``` And here's how I'm initializing the agent: ```python llm = ChatOpenAI(temperature=0.0) openapi_agent = create_planner_openapi_agent(server_spec, requests_wrapper, llm) user_query = "Return the response for retrieving document info for the document with id 1" openapi_agent.run(user_query); ``` I think the OpenAPI spec reducer should have a way of splitting the spec into multiple chunks if necessary and the OpenAPI agent adapted to go across many chunks if needed, perhaps with a map-reduce or "stuff" approach.
OpenAPI planner agent doesn't support large specs
https://api.github.com/repos/langchain-ai/langchain/issues/2786/comments
3
2023-04-12T18:15:51Z
2023-11-08T11:32:10Z
https://github.com/langchain-ai/langchain/issues/2786
1,665,027,406
2,786
[ "hwchase17", "langchain" ]
I have seen the example of an LLM with streaming output: from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) But how can I use ConversationChain with streamed responses?
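A minimal sketch of one way this is commonly wired up, assuming the streaming callback attached to the chat model also fires when the model is driven by a `ConversationChain` (an assumption, not confirmed in the issue):

```python
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI

# The streaming handler lives on the LLM, so any chain that calls this LLM
# streams tokens to stdout as they arrive.
chat = ChatOpenAI(
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
    temperature=0,
)
conversation = ConversationChain(llm=chat)
conversation.predict(input="Write me a song about sparkling water.")
```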
ConversationChain with stream responses
https://api.github.com/repos/langchain-ai/langchain/issues/2785/comments
4
2023-04-12T18:15:01Z
2023-05-19T08:19:53Z
https://github.com/langchain-ai/langchain/issues/2785
1,665,026,147
2,785
[ "hwchase17", "langchain" ]
I'm currently using OpenAIEmbeddings and OpenAI LLMs for ConversationalRetrievalChain. I'm trying to switch to LLaMA (specifically Vicuna 13B), but it's really slow. I've done this: `embeddings = LlamaCppEmbeddings(model_path="/Users/tgcandido/dalai/llama/models/7B/ggml-model-q4_0.bin") llm = LlamaCpp(model_path="/Users/tgcandido/dalai/alpaca/models/7B/ggml-model-q4_0.bin")` I could use different embeddings (OpenAIEmbeddings + LlamaCpp?), but I don't know if that's a good match - I don't know much about embedding compatibility. Another idea is to run the LlamaCpp model in a "REST" mode where it is loaded once and can serve many requests, because right now the executable is run for each prompt and it takes ~10s to load on my M2 Max. Are my hypotheses correct, or am I missing something here?
Using llama for ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/2784/comments
14
2023-04-12T18:14:40Z
2023-05-23T13:23:02Z
https://github.com/langchain-ai/langchain/issues/2784
1,665,025,400
2,784
[ "hwchase17", "langchain" ]
So right now LLMs are exposed and used everywhere through the `predict` method. This method takes a single string containing the whole conversation context and sends it as a single message. In the case of ChatOpenAI, it is included as a single user message in the messages list. And I think I get why: the chat API was introduced with 3.5, after this project was well on track, and in theory we'd rather abstract away trivial format specifications in favor of a simple, stable exposed interface. My question is: given that OpenAI does some processing on this message, and in particular that they fine-tune for system and user messages to be weighted differently, could we see chains/agents follow instructions more reliably if we separately submitted the 'system instructions' as a system message? And if that were the case, do we just hope for a friendlier (raw completion) endpoint that supports gpt-4/3.5 in the future? Because otherwise we would need to rethink some of the prompt templating abstractions :(
On OpenAI's API quirks (system messages, chat history)
https://api.github.com/repos/langchain-ai/langchain/issues/2781/comments
1
2023-04-12T17:34:00Z
2023-05-01T22:15:07Z
https://github.com/langchain-ai/langchain/issues/2781
1,664,969,411
2,781
[ "hwchase17", "langchain" ]
If not, and you know of a similar open source project, please recommend it!
Does langchain support Huggingface models for chat task ?
https://api.github.com/repos/langchain-ai/langchain/issues/2777/comments
2
2023-04-12T17:13:37Z
2023-09-10T16:34:30Z
https://github.com/langchain-ai/langchain/issues/2777
1,664,938,359
2,777
[ "hwchase17", "langchain" ]
When creating an exe using PyInstaller, merely including `from langchain.docstore.document import Document` adds over 200 MB to the resulting exe. Any thoughts on reducing the amount of code pulled in along with Document?
Including from langchain.docstore.document import Document adds 200mb+ to pyinstaller exe
https://api.github.com/repos/langchain-ai/langchain/issues/2774/comments
1
2023-04-12T16:27:28Z
2023-09-15T22:12:48Z
https://github.com/langchain-ai/langchain/issues/2774
1,664,876,336
2,774
[ "hwchase17", "langchain" ]
[Docs](https://python.langchain.com/en/latest/use_cases/evaluation/qa_generation.html) show a QAGenerationChain but that is not exposed via pip install.
QAGenerationChain is missing when langchain is pip installed
https://api.github.com/repos/langchain-ai/langchain/issues/2771/comments
3
2023-04-12T15:20:18Z
2023-04-12T17:25:32Z
https://github.com/langchain-ai/langchain/issues/2771
1,664,773,935
2,771
[ "hwchase17", "langchain" ]
I'm playing around with the [Conversation Agent](https://python.langchain.com/en/latest/modules/agents/agents/examples/conversational_agent.html) notebook and I realized that it tends to use a tool twice with the same input, even though it got the answer it needed the first time around. Here's an example of that sort of behavior: ``` Question: Who is the current director general of CERN? > Entering new AgentExecutor chain... Thought: Do I need to use a tool? Yes Action: Wikipedia Action Input: current director general of CERN Observation: Page: CERN Summary: The European Organization for Nuclear Research, known as CERN (; French pronunciation: ​[sɛʁn]; Conseil européen pour la recherche nucléaire), is an intergovernmental organization that operates the largest particle physics laboratory in the world. Established in 1954, it is based in a northwestern suburb of Geneva, on the France–Switzerland border. It comprises 23 member states, and Israel (admitted in 2013) is currently the only non-European country holding full membership. CERN is an official United Nations General Assembly observer.The acronym CERN is also used to refer to the laboratory; in 2019, it had 2,660 scientific, technical, and administrative staff members, and hosted about 12,400 users from institutions in more than 70 countries. In 2016, CERN generated 49 petabytes of data.CERN's main function is to provide the particle accelerators and other infrastructure needed for high-energy physics research — consequently, numerous experiments have been constructed at CERN through international collaborations. CERN is the site of the Large Hadron Collider (LHC), the world's largest and highest-energy particle collider. The main site at Meyrin hosts a large computing facility, which is primarily used to store and analyze data from experiments, as well as simulate events. As researchers require remote access to these facilities, the lab has historically been a major wide area network hub. CERN is also the birthplace of the World Wide Web. Page: Large Hadron Collider Summary: The Large Hadron Collider (LHC) is the world's largest and highest-energy particle collider. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories, as well as more than 100 countries. It lies in a tunnel 27 kilometres (17 mi) in circumference and as deep as 175 metres (574 ft) beneath the France–Switzerland border near Geneva. The first collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. The discovery of the Higgs boson at the LHC was announced in 2012. Between 2013 and 2015, the LHC was shut down and upgraded; after those upgrades it reached 6.8 TeV per beam (13.6 TeV total collision energy). At the end of 2018, it was shut down for three years for further upgrades. The collider has four crossing points where the accelerated particles collide. Seven detectors, each designed to detect different phenomena, are positioned around the crossing points. The LHC primarily collides proton beams, but it can also accelerate beams of heavy ions: lead–lead collisions and proton–lead collisions are typically performed for one month a year. 
The LHC's goal is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and other unresolved questions in particle physics. Page: Fabiola Gianotti Summary: Fabiola Gianotti (Italian: [faˈbiːola dʒaˈnɔtti]; born 29 October 1960) is an Italian experimental particle physicist who is the current and first woman Director-General at CERN (European Organization for Nuclear Research) in Switzerland. Her first mandate began on 1 January 2016 and ran for a period of five years. At its 195th Session in 2019, the CERN Council selected Gianotti for a second term as Director-General. Her second five-year term began on 1 January 2021 and goes on until 2025. This is the first time in CERN's history that a Director-General has been appointed for a full second term. Thought:Do I need to use a tool? Yes Action: Wikipedia Action Input: current director general of CERN Observation: Page: CERN Summary: The European Organization for Nuclear Research, known as CERN (; French pronunciation: ​[sɛʁn]; Conseil européen pour la recherche nucléaire), is an intergovernmental organization that operates the largest particle physics laboratory in the world. Established in 1954, it is based in a northwestern suburb of Geneva, on the France–Switzerland border. It comprises 23 member states, and Israel (admitted in 2013) is currently the only non-European country holding full membership. CERN is an official United Nations General Assembly observer.The acronym CERN is also used to refer to the laboratory; in 2019, it had 2,660 scientific, technical, and administrative staff members, and hosted about 12,400 users from institutions in more than 70 countries. In 2016, CERN generated 49 petabytes of data.CERN's main function is to provide the particle accelerators and other infrastructure needed for high-energy physics research — consequently, numerous experiments have been constructed at CERN through international collaborations. CERN is the site of the Large Hadron Collider (LHC), the world's largest and highest-energy particle collider. The main site at Meyrin hosts a large computing facility, which is primarily used to store and analyze data from experiments, as well as simulate events. As researchers require remote access to these facilities, the lab has historically been a major wide area network hub. CERN is also the birthplace of the World Wide Web. Page: Large Hadron Collider Summary: The Large Hadron Collider (LHC) is the world's largest and highest-energy particle collider. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories, as well as more than 100 countries. It lies in a tunnel 27 kilometres (17 mi) in circumference and as deep as 175 metres (574 ft) beneath the France–Switzerland border near Geneva. The first collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. The discovery of the Higgs boson at the LHC was announced in 2012. Between 2013 and 2015, the LHC was shut down and upgraded; after those upgrades it reached 6.8 TeV per beam (13.6 TeV total collision energy). At the end of 2018, it was shut down for three years for further upgrades. The collider has four crossing points where the accelerated particles collide. 
Seven detectors, each designed to detect different phenomena, are positioned around the crossing points. The LHC primarily collides proton beams, but it can also accelerate beams of heavy ions: lead–lead collisions and proton–lead collisions are typically performed for one month a year. The LHC's goal is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and other unresolved questions in particle physics. Page: Fabiola Gianotti Summary: Fabiola Gianotti (Italian: [faˈbiːola dʒaˈnɔtti]; born 29 October 1960) is an Italian experimental particle physicist who is the current and first woman Director-General at CERN (European Organization for Nuclear Research) in Switzerland. Her first mandate began on 1 January 2016 and ran for a period of five years. At its 195th Session in 2019, the CERN Council selected Gianotti for a second term as Director-General. Her second five-year term began on 1 January 2021 and goes on until 2025. This is the first time in CERN's history that a Director-General has been appointed for a full second term. Thought:Do I need to use a tool? No AI: The current Director-General of CERN is Fabiola Gianotti, an Italian experimental particle physicist who has been in the position since 2016. She was recently reappointed for a second term, which began on January 1, 2021 and will run until 2025. She is the first woman to hold the position of Director-General at CERN. > Finished chain. Answer: The current Director-General of CERN is Fabiola Gianotti, an Italian experimental particle physicist who has been in the position since 2016. She was recently reappointed for a second term, which began on January 1, 2021 and will run until 2025. She is the first woman to hold the position of Director-General at CERN. ChatGPT Usage: 4673 (4540 prompt + 133 completion) in 3 requests. Total cost: $0.0093 ``` My code is mostly equivalent to the one in the notebook, except I'm using gpt-3.5 and I added the Wikipedia tool. Any idea why it would do that?
Agent using tools twice unnecessarily
https://api.github.com/repos/langchain-ai/langchain/issues/2766/comments
3
2023-04-12T13:01:16Z
2023-09-10T16:34:36Z
https://github.com/langchain-ai/langchain/issues/2766
1,664,524,216
2,766
[ "hwchase17", "langchain" ]
Getting this error whenever there is some combination of { [ ' characters in the string_text while building prompts. Is there a workaround for this? ValidationError: 1 validation error for PromptTemplate __root__ Invalid format specifier (type=value_error) Error in get_answer coroutine: Traceback (most recent call last): File "/app/src/chatbot/query_gpt.py", line 272, in context_calling chat_prompt_with_context = self.build_chat_prompt(queries, context_flag=True) File "/app/src/chatbot/query_gpt.py", line 250, in build_chat_prompt assistant_history_prompt = AIMessagePromptTemplate.from_template( File "/usr/local/lib/python3.8/site-packages/langchain/prompts/chat.py", line 67, in from_template prompt = PromptTemplate.from_template(template) File "/usr/local/lib/python3.8/site-packages/langchain/prompts/prompt.py", line 130, in from_template return cls(input_variables=list(sorted(input_variables)), template=template) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for PromptTemplate __root__ Invalid format specifier (type=value_error)
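A minimal sketch of the likely cause and fix (an assumption, not confirmed in the issue): PromptTemplate uses Python format-string syntax, so literal `{`/`}` in the text must be doubled, otherwise sequences like `{ 'key': [...] }` are read as invalid format specifiers.

```python
from langchain.prompts import PromptTemplate

# Literal braces are escaped by doubling them; {question} stays a real
# template variable.
template = "Context: {{ 'items': ['a', 'b'] }}\n\nQuestion: {question}"
prompt = PromptTemplate.from_template(template)
print(prompt.format(question="What is in the context?"))
```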
Validation error for PromptTemplate __root__ Invalid format specifier (type=value_error)
https://api.github.com/repos/langchain-ai/langchain/issues/2765/comments
7
2023-04-12T12:16:35Z
2023-09-14T14:12:46Z
https://github.com/langchain-ai/langchain/issues/2765
1,664,452,179
2,765
[ "hwchase17", "langchain" ]
# Problem As far as I know, it is not possible to use certain classes like `OpenAIEmbeddings` with deployment names that are not exactly the same as the original model name. In the case of `OpenAIEmbeddings` the `document_model_name` is used to [retrieve the tokenizer from tiktoken](https://github.com/hwchase17/langchain/blob/b92a89e29f85d6b90796c24cdd952be76fb64a23/langchain/embeddings/openai.py#L206) and thus fails if the deployment name is not the same as the original model name. # Proposal I propose to introduce a new attribute `deployment_name` and use this along with the `model` attribute to retrieve the model deployment and encoding independently.
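A hypothetical sketch of what the proposed interface might look like; `deployment` is the suggested new attribute, not a parameter that existed at the time of this issue:

```python
from langchain.embeddings import OpenAIEmbeddings

# Hypothetical usage under the proposal: the Azure deployment name drives the
# API call, while `model` keeps pointing at the canonical model name so the
# tiktoken tokenizer lookup still succeeds.
embeddings = OpenAIEmbeddings(
    deployment="my-ada-002-deployment",   # hypothetical new attribute
    model="text-embedding-ada-002",
)
```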
Deployment names on Azure need to match model names
https://api.github.com/repos/langchain-ai/langchain/issues/2764/comments
2
2023-04-12T12:02:42Z
2023-09-10T16:34:41Z
https://github.com/langchain-ai/langchain/issues/2764
1,664,430,440
2,764
[ "hwchase17", "langchain" ]
This new loader would take in a GitHub wiki URL, download all the wiki pages (they're always .md (?), so we can use `UnstructuredMarkdownLoader` too), and load them into `Document()` objects.
[FEATURE] Add `GithubWikiLoader`
https://api.github.com/repos/langchain-ai/langchain/issues/2763/comments
1
2023-04-12T11:59:35Z
2023-09-15T22:12:48Z
https://github.com/langchain-ai/langchain/issues/2763
1,664,425,508
2,763
[ "hwchase17", "langchain" ]
Hey, with `openai_callback` I am able to calculate the cost of generating text, but this does not work when calculating embedding cost.
Calculate Embedding Cost
https://api.github.com/repos/langchain-ai/langchain/issues/2762/comments
3
2023-04-12T11:38:44Z
2023-09-18T16:20:17Z
https://github.com/langchain-ai/langchain/issues/2762
1,664,392,646
2,762
[ "hwchase17", "langchain" ]
[Modules - Agents - Tools] - It seems that AgentType is no longer used. For example: initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) -> initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True) [Modules - Indexes - Document Loaders - YouTube] - It seems that from_youtube_url is no longer used. For example: loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) -> loader = YoutubeLoader("QsYGlZkevEg", add_video_info=True)
About Python Docs modification
https://api.github.com/repos/langchain-ai/langchain/issues/2760/comments
2
2023-04-12T09:12:01Z
2023-08-20T16:31:43Z
https://github.com/langchain-ai/langchain/issues/2760
1,664,164,619
2,760
[ "hwchase17", "langchain" ]
`from langchain.agents.agent_toolkits.openapi import planner` _Traceback (most recent call last): File "/app.py", line 12, in <module> from langchain.agents.agent_toolkits.openapi import planner File ".venv/lib/python3.10/sitepackages/langchain/agents/agent_toolkits/openapi/planner.py", line 43, in <module> class RequestsGetToolWithParsing(BaseRequestsTool, BaseTool): File ".venv/lib/python3.10/site-packages/langchain/agents/agent_toolkits/openapi/planner.py", line 48, in RequestsGetToolWithParsing llm=OpenAI(), File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for OpenAI __root__ Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)_
class RequestsGetToolWithParsing(BaseRequestsTool, BaseTool): File "langchain/agents/agent_toolkits/openapi/planner.py", line 48, in RequestsGetToolWithParsing
https://api.github.com/repos/langchain-ai/langchain/issues/2758/comments
1
2023-04-12T08:23:22Z
2023-04-13T04:26:36Z
https://github.com/langchain-ai/langchain/issues/2758
1,664,088,480
2,758
[ "hwchase17", "langchain" ]
When I use the FewShotPromptTemplate function, if the examples contain "{" and "}" characters I get this error. For example: {"question": "A = {1; 2; 3; 4; 5; 6; 8; 10}; B = {1; 3; 5; 7; 9; 11}"} How can I fix it? Thank you!!!
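A minimal sketch of one likely explanation and workaround (an assumption, not confirmed in the issue): FewShotPromptTemplate joins the rendered examples into one big format string before the final formatting pass, so literal braces inside example values need to be doubled:

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}",
)

# Doubled braces survive the second formatting pass and render as single braces.
examples = [
    {"question": "A = {{1; 2; 3; 4; 5; 6; 8; 10}}; B = {{1; 3; 5; 7; 9; 11}}"},
]

few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"],
)
print(few_shot.format(input="Find the intersection of A and B."))
```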
KeyError: '1; 2; 3; 4; 5; 6; 8; 10'
https://api.github.com/repos/langchain-ai/langchain/issues/2757/comments
2
2023-04-12T08:19:20Z
2023-09-10T16:34:51Z
https://github.com/langchain-ai/langchain/issues/2757
1,664,082,786
2,757
[ "hwchase17", "langchain" ]
"substring not found" on every user input :( Auto-GPT wont react to it. But the "do nothing sickness" from yesterday work again. (removed the do nothing command temporarily) NEXT ACTION: COMMAND = list_agents ARGUMENTS = {} Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for AI_GPT_8... Input:Whenever i give userfeedback there is a error and you dont seem to react to it. Please fix. SYSTEM: Human feedback: Whenever i give userfeedback there is a error and you dont seem to react to it. Please fix. Error: Traceback (most recent call last): File "scripts/main.py", line 76, in print_assistant_thoughts assistant_reply_json = fix_and_parse_json(assistant_reply) File "C:\Programming\Auto-GPT_AI_8\scripts\json_parser.py", line 51, in fix_and_parse_json brace_index = json_str.index("{") ValueError: substring not found NEXT ACTION: COMMAND = Error: ARGUMENTS = substring not found
"substring not found" on every user suggestion
https://api.github.com/repos/langchain-ai/langchain/issues/2756/comments
1
2023-04-12T08:16:16Z
2023-04-12T08:47:11Z
https://github.com/langchain-ai/langchain/issues/2756
1,664,078,349
2,756
[ "hwchase17", "langchain" ]
Hello, sorry if this was already asked before. I know that the data is embedded and indexed locally, but is the data still sent to the LLM provider (for example OpenAI) when you run a query on the embedded data? How well does this respect user privacy and private domain content? Thank you in advance. P.S.: if this is mentioned anywhere in any documentation, I would be very grateful for a pointer.
exposing indexed data to LLM Providers
https://api.github.com/repos/langchain-ai/langchain/issues/2754/comments
1
2023-04-12T07:22:32Z
2023-09-15T22:12:47Z
https://github.com/langchain-ai/langchain/issues/2754
1,663,997,463
2,754
[ "hwchase17", "langchain" ]
When I'm using `RetrievalQA` with `OpenAI` and `QDrant`, I sometimes hit OpenAI's 4097-token limit for `gpt-3.5-turbo`. Is this expected behaviour, or does `langchain` implement a token calculator that prevents such events? For the record, I'm hitting this problem with Arabic text and so far haven't with English, though I don't know whether that is correlated. I want to add that in another project I'm working on I have a 100% working token calculator that hasn't failed in any of my tests (with the focus group working on the project) so far. The way my calculator works is by limiting the initial prompt to a maximum token count, then instructing the `openai` SDK to cap the response at either some pre-defined limit (enough for roughly three sentences) or the difference between the prompt token count and the model's token limit, whichever is lower. If this is needed in `langchain` I'm more than happy to re-implement it for this project, with a little guidance on where in the codebase it is best to add it.
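A minimal sketch of that kind of budget calculation, assuming tiktoken and the 4,097-token window of `gpt-3.5-turbo` (the names and caps below are illustrative, not part of langchain):

```python
import tiktoken

MODEL_TOKEN_LIMIT = 4097      # gpt-3.5-turbo context window
MAX_RESPONSE_TOKENS = 256     # illustrative cap, roughly three sentences

def response_budget(prompt: str, model: str = "gpt-3.5-turbo") -> int:
    """How many completion tokens can still be requested for this prompt."""
    enc = tiktoken.encoding_for_model(model)
    prompt_tokens = len(enc.encode(prompt))
    remaining = MODEL_TOKEN_LIMIT - prompt_tokens
    # Ask for the smaller of the preset cap and what actually fits.
    return max(0, min(MAX_RESPONSE_TOKENS, remaining))
```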
[Q] Does langchain implement any prompt+response token calculation to prevent hitting max limit
https://api.github.com/repos/langchain-ai/langchain/issues/2753/comments
3
2023-04-12T07:16:36Z
2023-09-25T16:09:29Z
https://github.com/langchain-ai/langchain/issues/2753
1,663,990,354
2,753
[ "hwchase17", "langchain" ]
Hey, does the way of prompting OpenAI chat models with SystemMessage, HumanMessage and AIMessage work for Anthropic models as well? Or must Anthropic models be prompted in the non-chat way? Thanks.
Chat prompt for Anthropic
https://api.github.com/repos/langchain-ai/langchain/issues/2752/comments
7
2023-04-12T07:07:06Z
2023-10-21T16:09:55Z
https://github.com/langchain-ai/langchain/issues/2752
1,663,976,528
2,752
[ "hwchase17", "langchain" ]
How do I load code from files such as .py, .java, .cpp, and .js? And how do I embed them into vectors?
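A minimal sketch of one common approach, assuming source files are treated as plain text and fed through the usual split-and-embed flow:

```python
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Source files are just text: load every .py file (same idea for .java/.cpp/.js),
# split into chunks, then embed the chunks into a vector store.
loader = DirectoryLoader("./my_project", glob="**/*.py", loader_cls=TextLoader)
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)
db = FAISS.from_documents(chunks, OpenAIEmbeddings())
```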
How to load code from files such as .py, .java, .cpp, .js?
https://api.github.com/repos/langchain-ai/langchain/issues/2749/comments
2
2023-04-12T06:44:47Z
2023-04-12T07:40:40Z
https://github.com/langchain-ai/langchain/issues/2749
1,663,941,155
2,749
[ "hwchase17", "langchain" ]
The [`FewShotPromptTemplate`](https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html#fewshotprompttemplate) examples loaded from `json` and `yaml` files is giving me this error. I've tried this with the examples mentioned in the documentation as well as with my own samples and prompt template and I am seeing the same error. Likely something with [this](https://github.com/hwchase17/langchain/blob/master/langchain/prompts/prompt.py#L44). ``` --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) <ipython-input-35-9c85b0a90b42> in <module> 1 from langchain.prompts import load_prompt 2 ----> 3 prompt = load_prompt("few_shot_prompt_examples_in.json") 4 print(prompt.format(adjective="funny")) 5 /opt/conda/lib/python3.7/site-packages/langchain/prompts/loading.py in load_prompt(file) 101 raise ValueError 102 # Load the prompt from the config now. --> 103 return load_prompt_from_config(config) /opt/conda/lib/python3.7/site-packages/langchain/prompts/loading.py in load_prompt_from_config(config) 17 return _load_prompt(config) 18 elif prompt_type == "few_shot": ---> 19 return _load_few_shot_prompt(config) 20 else: 21 raise ValueError /opt/conda/lib/python3.7/site-packages/langchain/prompts/loading.py in _load_few_shot_prompt(config) 71 config["example_prompt"] = load_prompt(config.pop("example_prompt_path")) 72 else: ---> 73 config["example_prompt"] = _load_prompt(config["example_prompt"]) 74 # Load the examples. 75 config = _load_examples(config) /opt/conda/lib/python3.7/site-packages/langchain/prompts/loading.py in _load_prompt(config) 81 # Load the template from disk if necessary. 82 config = _load_template("template", config) ---> 83 return PromptTemplate(**config) 84 85 /opt/conda/lib/python3.7/site-packages/pydantic/main.cpython-37m-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for PromptTemplate _type extra fields not permitted (type=value_error.extra) ```
Error: in Serialize prompts
https://api.github.com/repos/langchain-ai/langchain/issues/2740/comments
1
2023-04-12T03:32:41Z
2023-04-12T04:32:47Z
https://github.com/langchain-ai/langchain/issues/2740
1,663,657,537
2,740
[ "hwchase17", "langchain" ]
Code: ``` loader_book = PyPDFLoader("D:/PaperPal/langchain-tutorials/data/The Attention Merchants_ The Epic Scramble to Get Inside Our Heads ( PDFDrive ) (1).pdf") test = loader_book.load() chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True) chain.run(test[0]) ``` I get the following error even when the test[0] is a Document object ``` > Entering new MapReduceDocumentsChain chain... Output exceeds the [size limit](command:workbench.action.openSettings?%5B%22notebook.output.textLineLimit%22%5D). Open the full output data [in a text editor](command:workbench.action.openLargeOutput?6f60f6d3-3206-4586-b2b2-d8a0f86e1aa0)--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [d:\PaperPal\langchain-tutorials\chains\Chain](file:///D:/PaperPal/langchain-tutorials/chains/Chain) Types.ipynb Cell 19 in () ----> [1](vscode-notebook-cell:/d%3A/PaperPal/langchain-tutorials/chains/Chain%20Types.ipynb#X16sZmlsZQ%3D%3D?line=0) chain.run(test[0]) File [c:\Users\mail2\anaconda3\lib\site-packages\langchain\chains\base.py:213](file:///C:/Users/mail2/anaconda3/lib/site-packages/langchain/chains/base.py:213), in Chain.run(self, *args, **kwargs) 211 if len(args) != 1: 212 raise ValueError("`run` supports only one positional argument.") --> 213 return self(args[0])[self.output_keys[0]] 215 if kwargs and not args: 216 return self(kwargs)[self.output_keys[0]] File [c:\Users\mail2\anaconda3\lib\site-packages\langchain\chains\base.py:116](file:///C:/Users/mail2/anaconda3/lib/site-packages/langchain/chains/base.py:116), in Chain.__call__(self, inputs, return_only_outputs) 114 except (KeyboardInterrupt, Exception) as e: 115 self.callback_manager.on_chain_error(e, verbose=self.verbose) --> 116 raise e 117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose) 118 return self.prep_outputs(inputs, outputs, return_only_outputs) File [c:\Users\mail2\anaconda3\lib\site-packages\langchain\chains\base.py:113](file:///C:/Users/mail2/anaconda3/lib/site-packages/langchain/chains/base.py:113), in Chain.__call__(self, inputs, return_only_outputs) 107 self.callback_manager.on_chain_start( 108 {"name": self.__class__.__name__}, 109 inputs, 110 verbose=self.verbose, 111 ) ... --> 141 [{**{self.document_variable_name: d.page_content}, **kwargs} for d in docs] 142 ) 143 return self._process_results(results, docs, token_max, **kwargs) AttributeError: 'tuple' object has no attribute 'page_content' ```
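A hedged guess at the cause, continuing the snippet above: the map-reduce chain expects a list of Documents, and passing a single Document makes it iterate over the Document's (field, value) pairs, hence the tuple error. If that is right, passing a list should work:

```python
chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True)
chain.run(test)       # summarize the whole list of pages
chain.run([test[0]])  # or wrap a single page in a list
```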
AttributeError: 'tuple' object has no attribute 'page_content' when running `load_summarize_chain` on a Document generated from PyPDFLoader
https://api.github.com/repos/langchain-ai/langchain/issues/2736/comments
9
2023-04-12T00:19:48Z
2024-06-13T20:16:40Z
https://github.com/langchain-ai/langchain/issues/2736
1,663,426,842
2,736
[ "hwchase17", "langchain" ]
Hi, I tried several loaders (GitHub, Selenium, Cheerio, etc.) and they retrieve certain data correctly. I then use splitters to split the text into multiple documents and pass them to the OpenAI API with a high max token count. I consistently get a 400 response, which I believe is due to the document size. Is that a legitimate guess? Are there other steps I need to take in order to avoid the 400 error?
Question. OpenAI completion api returns 400
https://api.github.com/repos/langchain-ai/langchain/issues/2727/comments
2
2023-04-11T18:50:19Z
2023-09-10T16:34:55Z
https://github.com/langchain-ai/langchain/issues/2727
1,663,037,095
2,727
[ "hwchase17", "langchain" ]
Python version: 3.9.16 Langchain version: 0.0.109 The code: ``` from langchain.chains import RetrievalQA from langchain.llms import OpenAI ``` the error: ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[123], line 1 ----> 1 from langchain.chains import RetrievalQA 2 from langchain.llms import OpenAI ImportError: cannot import name 'RetrievalQA' from 'langchain.chains' (/opt/conda/envs/python39/lib/python3.9/site-packages/langchain/chains/__init__.py) ```
Cannot import RetrievalQA from langchain.chains
https://api.github.com/repos/langchain-ai/langchain/issues/2725/comments
8
2023-04-11T17:47:50Z
2024-07-30T15:00:22Z
https://github.com/langchain-ai/langchain/issues/2725
1,662,959,470
2,725
[ "hwchase17", "langchain" ]
After updating to v.0.0.137 version, I now have an error while executing the following script (from the documentation): ```python import os from langchain.agents import create_sql_agent from langchain.agents.agent_toolkits import SQLDatabaseToolkit from langchain.sql_database import SQLDatabase from langchain.llms.openai import OpenAI os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxxxxxxxx" db = SQLDatabase.from_uri("mysql://user:password@localhost:db_port/db_name") llm = OpenAI(temperature=0) toolkit = SQLDatabaseToolkit(db=db) agent_executor = create_sql_agent( llm=llm, toolkit=toolkit, verbose=True ) agent_executor.run("A human to SQL request") ``` The error: "AttributeError: type object 'QueryCheckerTool' has no attribute 'llm'" <img width="842" alt="Capture d’écran 2023-04-11 à 19 04 18" src="https://user-images.githubusercontent.com/11693661/231237020-3ac589d1-89ca-4044-bd24-8b0512734fcf.png"> [This PR](https://github.com/hwchase17/langchain/pull/2655) breaks something, but I can't find how to debug the error. Can someone help me debug it? Thanks ❤️
Error with `create_sql_agent` after updating to v0.0.137
https://api.github.com/repos/langchain-ai/langchain/issues/2722/comments
4
2023-04-11T17:07:13Z
2023-04-17T16:11:47Z
https://github.com/langchain-ai/langchain/issues/2722
1,662,911,623
2,722
[ "hwchase17", "langchain" ]
Hey guys, I really love langchain and the community, thank you in advance for helping and pointing me in the right direction! I want to use GPT4All as an LLM powering an langchain agent. It wasn't working as expected, so I've started from basics. I've noticed that GPT4All wrapper in langchain creates very low quality results in comparison to command line version of gpt4all. In command line we can notice that gpt4all provides short and correct answers, for langchain GPT4All wrapper - responses are unusable. My questions are: 1. Can anybody explain why there are different responses - I would expect to get the same responses since almost the same model is used (gpt4all-lora-unfiltered-quantized.bin and gpt4all-lora-quantized-ggml.bin) (see: https://github.com/nomic-ai/gpt4all#gpt4all-compatibility-ecosystem). 2. What needs to be done to make it possible to get similar responses in langchain with GPT4All models as we are getting right now in command line? My goal is to make it possible to play with langchain agents using local models to avoid paying for each request to OpenAI, or at least reduce these costs. <img width="1461" alt="Screenshot 2023-04-11 at 17 48 09" src="https://user-images.githubusercontent.com/428635/231222422-ec2f8360-3213-408d-91a4-6cbd979b2157.png">
GPT4All responds with different results than the similar model executed from cmd line
https://api.github.com/repos/langchain-ai/langchain/issues/2721/comments
1
2023-04-11T16:05:55Z
2023-09-10T16:35:01Z
https://github.com/langchain-ai/langchain/issues/2721
1,662,822,253
2,721
[ "hwchase17", "langchain" ]
I'm proposing modifying the PydanticOutputParsers such that they are generic on the pydantic object passed in to the constructor, so we can get type hints on the resulting pydantic objects that are returned by `parse`. E.g ``` class TestOutput(BaseModel): output: str = ... output_parser = PydanticOutputParser(TestOutput) parsed = output_parser.parse(...) # right now this is typed as BaseModel, but it'd be better if it was TestOutput ``` I haven't looked in-depth but I think it's likely we can type the other parsers with similar fidelity. I'm happy to take this one if it's agreed that this is a useful change.
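A hypothetical sketch of the proposed typing (not the current implementation): parameterizing the parser on the pydantic model lets `parse` return that exact type, so callers get full attribute completion on the result.

```python
from typing import Generic, Type, TypeVar
from pydantic import BaseModel

T = TypeVar("T", bound=BaseModel)

# Hypothetical generic parser illustrating the proposal.
class PydanticOutputParser(Generic[T]):
    def __init__(self, pydantic_object: Type[T]) -> None:
        self.pydantic_object = pydantic_object

    def parse(self, text: str) -> T:
        return self.pydantic_object.parse_raw(text)

class TestOutput(BaseModel):
    output: str

parser = PydanticOutputParser(TestOutput)
parsed = parser.parse('{"output": "hi"}')  # typed as TestOutput, not BaseModel
```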
Better type hints for OutputParser
https://api.github.com/repos/langchain-ai/langchain/issues/2715/comments
2
2023-04-11T14:20:29Z
2023-04-13T13:50:31Z
https://github.com/langchain-ai/langchain/issues/2715
1,662,600,820
2,715
[ "hwchase17", "langchain" ]
import { SqlDatabase } from "langchain/tools"; Module '"langchain/tools"' has no exported member 'SqlDatabase'.ts
"langchain/tools"' has no exported member 'SqlDatabase'
https://api.github.com/repos/langchain-ai/langchain/issues/2711/comments
1
2023-04-11T12:56:26Z
2023-09-15T21:30:37Z
https://github.com/langchain-ai/langchain/issues/2711
1,662,423,398
2,711
[ "hwchase17", "langchain" ]
When I create `embeddings = HuggingFaceEmbeddings(model_name='THUDM/chatglm-6b')` I get an error message: ``` ValueError: Loading THUDM/chatglm-6b requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error. ``` How can I fix it? python env: ``` python version: Python 3.8.10 transformers 4.27.4 langchain 0.0.136 ```
Error when creating HuggingFaceEmbeddings with chatglm-6b model
https://api.github.com/repos/langchain-ai/langchain/issues/2710/comments
5
2023-04-11T12:15:55Z
2023-06-20T16:34:51Z
https://github.com/langchain-ai/langchain/issues/2710
1,662,357,596
2,710
[ "hwchase17", "langchain" ]
It would be cool to have this feature. This way we could save an already working agent chain as an idempotent workflow and reuse it later many times. As I understand it, right now the LLM can generate a different plan on the next call, and that plan could be invalid or contain errors. It would be cool to persist a known working plan so it can be reused.
Idempotent agent chains or chains import/export?
https://api.github.com/repos/langchain-ai/langchain/issues/2708/comments
1
2023-04-11T10:29:32Z
2023-09-10T16:35:08Z
https://github.com/langchain-ai/langchain/issues/2708
1,662,201,505
2,708
[ "hwchase17", "langchain" ]
Running from the same environment and have tried uninstalling/reinstalling - still getting this error. Anyone have a fix or know what I could be doing wrong? <img width="632" alt="Screenshot 2023-04-11 at 7 05 40 pm" src="https://user-images.githubusercontent.com/121209163/231112960-f55b80dc-93ae-4a96-abc2-33efa7b8cee1.png">
ModuleNotFoundError: No module named 'langchain' (Even though LangChain is installed)
https://api.github.com/repos/langchain-ai/langchain/issues/2707/comments
3
2023-04-11T09:14:20Z
2023-09-10T16:35:13Z
https://github.com/langchain-ai/langchain/issues/2707
1,662,074,604
2,707
[ "hwchase17", "langchain" ]
Hi, I'm having some trouble **connecting to MySQL**. Does it only support connecting to SQLite? There seems to be **no guidance on connecting to MySQL in the documentation**.
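For what it's worth, `SQLDatabase.from_uri` takes any SQLAlchemy connection string, so MySQL should work; a minimal sketch assuming the `pymysql` driver is installed:

```python
from langchain.sql_database import SQLDatabase

# Any SQLAlchemy URI works here; pymysql is one common MySQL driver
# (install with `pip install pymysql`).
db = SQLDatabase.from_uri("mysql+pymysql://user:password@localhost:3306/my_database")
print(db.get_table_info())
```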
having some trouble when linking to MySQL
https://api.github.com/repos/langchain-ai/langchain/issues/2705/comments
5
2023-04-11T08:50:12Z
2023-09-25T16:09:39Z
https://github.com/langchain-ai/langchain/issues/2705
1,662,036,269
2,705
[ "hwchase17", "langchain" ]
Hi, I'm having some trouble finding the solution to this. I'm using LangChain on Colab, and for some reason the answers sometimes get cut off like below (screenshot attached). I have 2 questions, and both are cut off, as you can see. I tried changing 'max_len_answer' but nothing changed. ![image](https://user-images.githubusercontent.com/48023706/231097184-dc155701-25cd-4f27-8623-b6355be7646f.png) Do you have any suggestions for this? Thank you.
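A minimal sketch of the usual knob to check, assuming an OpenAI completion LLM is in use ('max_len_answer' is not a langchain parameter):

```python
from langchain.llms import OpenAI

# If the completion stops mid-sentence, the usual culprit is the completion
# token cap on the LLM itself, not the chain; raise it explicitly.
llm = OpenAI(temperature=0, max_tokens=512)
```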
question_answering chains answers cut off
https://api.github.com/repos/langchain-ai/langchain/issues/2703/comments
1
2023-04-11T08:08:14Z
2023-04-12T08:37:08Z
https://github.com/langchain-ai/langchain/issues/2703
1,661,977,111
2,703
[ "hwchase17", "langchain" ]
I noticed llm `OpenAI` returns `finish_reason` in `generation_info` https://github.com/hwchase17/langchain/pull/526 But chat_model `ChatOpenAI`'s `generation_info` is `None` ```python from langchain.chat_models import ChatOpenAI from langchain.schema import HumanMessage, SystemMessage chat = ChatOpenAI(temperature=0) messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ), ] result = chat.generate([messages]) result.generations[0][0] ``` ``` ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={})) ``` Should we just add `generation_info` to the `ChatGeneration` of `ChatOpenAI` like `OpenAI`? I can help to create a PR if needed. https://github.com/hwchase17/langchain/blob/955bd2e1db8d008d628963cb8d2bad5c1d354744/langchain/chat_models/openai.py#L284 https://github.com/hwchase17/langchain/blob/1f557ffa0e6c4c2137109d6a014b0617e9885b02/langchain/llms/openai.py#L170-L173
generation_info for ChatOpenAI model
https://api.github.com/repos/langchain-ai/langchain/issues/2702/comments
4
2023-04-11T07:02:21Z
2023-10-12T16:10:34Z
https://github.com/langchain-ai/langchain/issues/2702
1,661,884,074
2,702
[ "hwchase17", "langchain" ]
Hi, I have a usecase where i have to fetch Edited posts weekly from community and update the docs within FAISS index. is that possible? or do i have to keep deleting and create new index everytime? Also i use RecursiveCharacterTextSplitter to split docs. ``` loader = DirectoryLoader('./recent_data') raw_documents = loader.load() #Splitting documents into chunks text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=200, ) documents = text_splitter.split_documents(raw_documents) print(len(documents)) # Changing source to point to the original document for x in documents: print(x.metadata["source"]) # Creating index and saving it to disk print("Creating index") db_new = FAISS.from_documents(documents, embeddings ) ``` this is output if i use ` print(db_new .docstore._dict)` `{'2d9b6fbf-a44d-46b5-bcdf-b45cd9438a4c': Document(page_content='<p dir="auto">This is a test topic.</p>', metadata={'source': 'recent/https://community.tpsonline.com/topic/587/ignore-test-topic'}), '706dcaf8-f9d9-45b9-bdf4-8a8ac7618229': Document(page_content='What is an SDD?\n\n<p dir="auto">A software design description (a.k.a. software design document or SDD; just design document; also Software Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the different stakeholders.</p>\n\n<p dir="auto">This SDD template represent design w.r.t various software viewpoints, where each viewpoint will handle specific concerns of Design. This is based on <strong>ISO 42010 standard</strong>.</p>\n\nIntroduction\n\n<p dir="auto">[Name/brief description of feature for which SDD is being Produced]</p>\n\n1. Context Viewpoint\n\n<p dir="auto">[Describes the relationships, dependencies, and interactions between the system and its environment ]</p>\n\n1.1 Use Cases\n\n1.1.1 AS IS (Pre Condition)\n\n1.1.2 TO - BE (Post Condition)\n\n1.2 System Context View\n\n1.2.1 - AS IS (Pre Condition)\n\n1.2.2 TO - BE (Post Condition)\n\n2. Logical Viewpoint', metadata={'source': 'recent/https://community.tpsonline.com/topic/586/software-design-description-sdd-template'}), '4d6d4e6b-01ee-46bb-ae06-84514a51baf2': Document(page_content='1.1 Use Cases\n\n1.1.1 AS IS (Pre Condition)\n\n1.1.2 TO - BE (Post Condition)\n\n1.2 System Context View\n\n1.2.1 - AS IS (Pre Condition)\n\n1.2.2 TO - BE (Post Condition)\n\n2. Logical Viewpoint\n\n<p dir="auto">[The purpose of the Logical viewpoint is to elaborate existing and designed types and their implementations as classes and interfaces with their structural static relationships]</p>\n\n2.1 Class Diagram\n\n2.1.1 AS - IS (Pre Condition)\n\n2.1.2 TO - BE (Post Condition)\n\n2.1.2.1 Class Interfaces and description\n\n<p dir="auto">[Below is being presented as an example]<br />\n\n[This section should tell about the responsibility of each class method and their parameters too if required]</p>\n\n2.1.2.1.1 IRenewProcess\n\nMethod\n\nDescription\n\nprocessRenewal\n\nMethod to process renewal of a given cardEntity. Each concrete class that will implement the interface will implement its own version of renewal steps\n\n2.1.2.1.1 RenewStrategyContext (static class)\n\nMethod\n\nDescription\n\n(private)getRenewalMethod', metadata={'source': 'recent/https://community.tpsonline.com/topic/586/software-design-description-sdd-template'})}` so will i be able to update docs within index or is it just not possible?
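As a workaround, I'm considering re-embedding only the edited posts into a small index and merging it into the saved one, continuing the snippet above (this assumes `merge_from` works the way I think it does — it only appends, so stale versions would still be in the index, which is why I'm asking about proper update/delete):

```python
from langchain.vectorstores import FAISS

new_docs = text_splitter.split_documents(loader.load())  # only the edited posts
db_updates = FAISS.from_documents(new_docs, embeddings)

db_main = FAISS.load_local("faiss_index", embeddings)
db_main.merge_from(db_updates)   # appends entries; the old versions remain in the index
db_main.save_local("faiss_index")
```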
How to delete or update a document within a FAISS index?
https://api.github.com/repos/langchain-ai/langchain/issues/2699/comments
19
2023-04-11T06:33:19Z
2023-08-18T07:01:58Z
https://github.com/langchain-ai/langchain/issues/2699
1,661,852,287
2,699
[ "hwchase17", "langchain" ]
I was testing OnlinePDFLoader yesterday (iirc) and it was working fine. Today I tried experimenting and I keep getting this error: `PermissionError: [Errno 13] Permission denied: 'C:\\Users\\REALGL~1\\AppData\\Local\\Temp\\tmp3chr08y0`. It may be occurring because the `tempfile.NamedTemporaryFile()` in `pdf.py` is still open when the PDF partitioning function is trying to access it.
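For what it's worth, the standard-library pattern I'd expect here (not the actual LangChain/unstructured code, just a sketch of the idea) is to create the temp file with `delete=False`, close it before the parser opens it, and remove it afterwards:

```python
import os
import tempfile


def with_temp_copy(data: bytes, consume):
    """Write bytes to a temp file, close it, let consume(path) read it, then clean up."""
    tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
    try:
        tmp.write(data)
        tmp.close()  # on Windows the file must be closed before another open() succeeds
        return consume(tmp.name)
    finally:
        os.unlink(tmp.name)
```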
Permission Error with PDF loader
https://api.github.com/repos/langchain-ai/langchain/issues/2698/comments
6
2023-04-11T06:17:16Z
2024-01-27T10:50:22Z
https://github.com/langchain-ai/langchain/issues/2698
1,661,837,012
2,698
[ "hwchase17", "langchain" ]
I am trying to load multiple PDFs using the directory loader, but it's popping up with the following error: `ImportError: cannot import name 'is_directory' from 'PIL._util' (/usr/local/lib/python3.9/dist-packages/PIL/_util.py)`
Directory loader error for pdfs
https://api.github.com/repos/langchain-ai/langchain/issues/2697/comments
2
2023-04-11T05:51:16Z
2023-09-10T16:35:16Z
https://github.com/langchain-ai/langchain/issues/2697
1,661,811,809
2,697
[ "hwchase17", "langchain" ]
I would like to provide an 'embeddings' parameter for the initialization of the Weaviate vector store, as I do not want to start the Weaviate server with the OpenAI key in order to make use of embeddings through the Azure OpenAI Service. The addition of the embeddings parameter affects the __init__ method, as shown in the code snippet above. To accommodate this change, you'll also need to modify the add_texts method. ```python def __init__( self, client: Any, index_name: str, text_key: str, embedding_function: Optional[Embeddings] = None, attributes: Optional[List[str]] = None, ): """Initialize with Weaviate client.""" try: import weaviate except ImportError: raise ValueError( "Could not import weaviate python package. " "Please install it with `pip install weaviate-client`." ) if not isinstance(client, weaviate.Client): raise ValueError( f"client should be an instance of weaviate.Client, got {type(client)}" ) self._client = client self._index_name = index_name self._text_key = text_key self._embedding_function = embedding_function self._query_attrs = [self._text_key] if attributes is not None: self._query_attrs.extend(attributes) ``` To check if the embeddings parameter was provided during initialization and perform the necessary actions, you can modify the add_texts method in the following way: ```python def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]: """Upload texts with metadata (properties) to Weaviate.""" from weaviate.util import get_valid_uuid with self._client.batch as batch: ids = [] for i, doc in enumerate(texts): data_properties = { self._text_key: doc, } if metadatas is not None: for key in metadatas[i].keys(): data_properties[key] = metadatas[i][key] _id = get_valid_uuid(uuid4()) if self._embedding_function is not None: embeddings = self._embedding_function.embed_documents(list(doc)) batch.add_data_object(data_properties, self._index_name, _id, vector=embeddings[0]) else: batch.add_data_object(data_properties, self._index_name, _id) ids.append(_id) return ids ```
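Assuming the constructor change proposed above, usage from my side would look roughly like this (the Weaviate URL and class name are placeholders; the embeddings are pointed at Azure OpenAI through environment variables):

```python
import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")  # no OpenAI key configured on the server
embeddings = OpenAIEmbeddings()  # configured for Azure OpenAI via environment variables

store = Weaviate(client, "Paragraph", "content", embedding_function=embeddings)
store.add_texts(["hello world"], metadatas=[{"source": "example"}])
```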
Allow Weaviate initialization with alternative embedding implementation
https://api.github.com/repos/langchain-ai/langchain/issues/2695/comments
2
2023-04-11T05:19:00Z
2023-04-27T04:45:05Z
https://github.com/langchain-ai/langchain/issues/2695
1,661,787,838
2,695
[ "hwchase17", "langchain" ]
Expectation: Loading a text file via UnstructuredFileLoader then passing the output docs to VectorStoreIndexCreator should work Observation: Crash when attempting to call `VectorstoreIndexCreator().from_loaders(unstructuredFileLoader)` Crashlog: ``` Traceback (most recent call last): File "/Users/minhcnd/Repos/llm-demo/llm_demo/main.py", line 30, in <module> index = VectorstoreIndexCreator().from_loaders(loaders) File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/langchain/indexes/vectorstore.py", line 71, in from_loaders vectorstore = self.vectorstore_cls.from_documents( File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 334, in from_documents return cls.from_texts( File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 303, in from_texts chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids) File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 114, in add_texts self._collection.add( File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/chromadb/api/models/Collection.py", line 85, in add metadatas = validate_metadatas(maybe_cast_one_to_many(metadatas)) if metadatas else None File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/chromadb/api/types.py", line 108, in validate_metadatas validate_metadata(metadata) File "/Users/minhcnd/Library/Caches/pypoetry/virtualenvs/llm-demo-yOjCLQYD-py3.9/lib/python3.9/site-packages/chromadb/api/types.py", line 99, in validate_metadata raise ValueError(f"Expected metadata value to be a str, int, or float, got {value}") ValueError: Expected metadata value to be a str, int, or float, got seneca.txt ``` Root cause: In unstructured.py, `_get_metadata()` returns a **PoxisPath** object, but validate_metadata expects a str, int or float
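Workaround I'm using in the meantime (purely user-side — it just coerces the `source` metadata to a string before it reaches Chroma; the file name and splitter settings are from my own script):

```python
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

docs = UnstructuredFileLoader("seneca.txt").load()
for doc in docs:
    doc.metadata["source"] = str(doc.metadata["source"])  # PosixPath -> str

chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
db = Chroma.from_documents(chunks, OpenAIEmbeddings())
```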
VectorStoreIndexCreator can't load from UnstructuredFileLoader
https://api.github.com/repos/langchain-ai/langchain/issues/2685/comments
4
2023-04-10T22:25:51Z
2023-09-27T16:08:57Z
https://github.com/langchain-ai/langchain/issues/2685
1,661,468,710
2,685
[ "hwchase17", "langchain" ]
I would like to make requests to both Azure OpenAI and the OpenAI API in my app using the `AzureChatOpenAI` and `ChatOpenAI` classes respectively. The issue I'm running into is it seems both classes depend on the same environment variables/global OpenAI variables (`openai.api_key`, `openai.api_type`, etc). For example, if I create an `AzureChatOpenAI` instance, the variables will be set to Azure config, and this will cause any subsequent OpenAI calls to fail. I also have two instances of Azure OpenAI that I want to hit (e.g. I have text-davinci-003 running in EU and gpt-3.5-turbo running in US as gpt-3.5-turbo isn't supported in EU yet), so it would be nice if I could have separate instances of `AzureChatOpenAI` with different configs. A workaround is to set these environment variables manually before every call, which `AzureChatOpenAI` kind of does, but this seems susceptible to race conditions if concurrent requests are made to my app since these variables aren't directly passed into the request and there's no locking mechanism. Would it be possible to have multiple instances of these classes and not have these instances obscurely share state? Or is this just a limitation of the way OpenAI's python package is setup? Thank you!
Support concurrent usage of OpenAI API and Azure OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/2683/comments
7
2023-04-10T22:01:07Z
2023-09-27T16:09:02Z
https://github.com/langchain-ai/langchain/issues/2683
1,661,437,301
2,683
[ "hwchase17", "langchain" ]
I would like to be able to pass token params for tiktoken in https://github.com/hwchase17/langchain/blob/bd9f095ed249694a98aa438c5467564ace883ff5/langchain/embeddings/openai.py#L198 so that I can work with documents that contain `<|endoftext|>`. Related issue: https://github.com/hwchase17/langchain/issues/923
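Concretely, the parameters I mean are tiktoken's `allowed_special` / `disallowed_special` arguments, e.g.:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "a document that contains <|endoftext|> somewhere in the middle"

# encode() raises by default when it sees a special token; allowing it makes the text encodable
tokens = enc.encode(text, allowed_special={"<|endoftext|>"})
```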
OpenAIEmbeddings special token params for tiktoken
https://api.github.com/repos/langchain-ai/langchain/issues/2681/comments
0
2023-04-10T20:25:15Z
2023-04-11T11:44:18Z
https://github.com/langchain-ai/langchain/issues/2681
1,661,315,327
2,681
[ "hwchase17", "langchain" ]
The following only parse the info within the webpage. If there are links/other pages embedded into the webpage, how do I parse them iteratively? `loader = WebBaseLoader("https://beta.ruff.rs/docs/faq/")` `blog_docs = loader.load()` print(blog_docs) > [Document(page_content='\n\n\n\n\n\n\n\n\n\n\n\nFAQ - Ruff\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to content\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRuff\n\n\n\n\nFAQ\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nInitializing search\n\n\n\n\n\n\n\n\n\n\n\n\n\nruff\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRuff\n\n\n\n\n\n\n\nruff\n\n\n\n\n\n\nOverview\n\n\n\n\nTutorial\n\n\n\n\nInstallation\n\n\n\n\nUsage\n\n\n\n\nConfiguration\n\n\n\n\nRules\n\n\n\n\nSettings\n\n\n\n\nEditor Integrations\n\n\n\n\n\nFAQ\n\n\n\nFAQ\n\n\n\n\nTable of contents\n\n\n\n\nIs Ruff compatible with Black?\n\n\n\n\nHow does Ruff compare to Flake8?\n\n\n\n\nHow does Ruff compare to Pylint?\n\n\n\n\nHow does Ruff compare to Mypy, or Pyright, or Pyre?\n\n\n\n\nWhich tools does Ruff replace?\n\n\n\n\nWhat versions of Python does Ruff support?\n\n\n\n\nDo I need to install Rust to use Ruff?\n\n\n\n\nCan I write my own plugins for Ruff?\n\n\n\n\nHow does Ruff\'s import sorting compare to isort?\n\n\n\n\nDoes Ruff support Jupyter Notebooks?\n\n\n\n\nDoes Ruff support NumPy- or Google-style docstrings?\n\n\n\n\nHow can I tell what settings Ruff is using to check my code?\n\n\n\n\nI want to use Ruff, but I don\'t want to use pyproject.toml. Is that possible?\n\n\n\n\nHow can I change Ruff\'s default configuration?\n\n\n\n\nRuff tried to fix something — but it broke my code?\n\n\n\n\nHow can I disable Ruff\'s color output?\n\n\n\n\n\n\n\nContributing\n\n\n\n\n\n\n\n\n\nFAQ#\nIs Ruff compatible with Black?#\nYes. Ruff is compatible with Black out-of-the-box, as long as\nthe line-length setting is consistent between the two.\nAs a project, Ruff is designed to be used alongside Black and, as such, will defer implementing\nstylistic lint rules that are obviated by autoformatting.\nHow does Ruff compare to Flake8?#\n(Coming from Flake8? Try flake8-to-ruff to\nautomatically convert your existing configuration.)\nRuff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of\nplugins, (2) alongside Black, and (3) on Python 3 code.\nUnder those conditions, Ruff implements every rule in Flake8. In practice, that means Ruff\nimplements all of the F rules (which originate from Pyflakes), along with a subset of the E and\nW rules (which originate from pycodestyle).\nRuff also re-implements some of the most popular Flake8 plugins and related code quality tools\nnatively, including:\n\nautoflake\neradicate\nflake8-2020\nflake8-annotations\nflake8-bandit (#1646)\nflake8-blind-except\nflake8-boolean-trap\nflake8-bugbear\nflake8-builtins\nflake8-commas\nflake8-comprehensions\nflake8-datetimez\nflake8-debugger\nflake8-django\nflake8-docstrings\nflake8-eradicate\nflake8-errmsg\nflake8-executable\nflake8-gettext\nflake8-implicit-str-concat\nflake8-import-conventions\nflake8-logging-format\nflake8-no-pep420\nflake8-pie\nflake8-print\nflake8-pyi\nflake8-pytest-style\nflake8-quotes\nflake8-raise\nflake8-return\nflake8-self\nflake8-simplify\nflake8-super\nflake8-tidy-imports\nflake8-type-checking\nflake8-use-pathlib\nisort\nmccabe\npandas-vet\npep8-naming\npydocstyle\npygrep-hooks (#980)\npyupgrade\ntryceratops\nyesqa\n\nNote that, in some cases, Ruff uses different rule codes and prefixes than would be found in the\noriginating Flake8 plugins. 
For example, Ruff uses TID252 to represent the I252 rule from\nflake8-tidy-imports. This helps minimize conflicts across plugins and allows any individual plugin\nto be toggled on or off with a single (e.g.) --select TID, as opposed to --select I2 (to avoid\nconflicts with the isort rules, like I001).\nBeyond the rule set, Ruff\'s primary limitation vis-à-vis Flake8 is that it does not support custom\nlint rules. (Instead, popular Flake8 plugins are re-implemented in Rust as part of Ruff itself.)\nThere are a few other minor incompatibilities between Ruff and the originating Flake8 plugins:\n\nRuff doesn\'t implement all the "opinionated" lint rules from flake8-bugbear.\nDepending on your project structure, Ruff and isort can differ in their detection of first-party\ncode. (This is often solved by modifying the src property, e.g., to src = ["src"], if your\ncode is nested in a src directory.)\n\nHow does Ruff compare to Pylint?#\nAt time of writing, Pylint implements ~409 total rules, while Ruff implements 440, of which at least\n89 overlap with the Pylint rule set (you can find the mapping in #970).\nPylint implements many rules that Ruff does not, and vice versa. For example, Pylint does more type\ninference than Ruff (e.g., Pylint can validate the number of arguments in a function call). As such,\nRuff is not a "pure" drop-in replacement for Pylint (and vice versa), as they enforce different sets\nof rules.\nDespite these differences, many users have successfully switched from Pylint to Ruff, especially\nthose using Ruff alongside a type checker,\nwhich can cover some of the functionality that Pylint provides.\nLike Flake8, Pylint supports plugins (called "checkers"), while Ruff implements all rules natively.\nUnlike Pylint, Ruff is capable of automatically fixing its own lint violations.\nPylint parity is being tracked in #970.\nHow does Ruff compare to Mypy, or Pyright, or Pyre?#\nRuff is a linter, not a type checker. It can detect some of the same problems that a type checker\ncan, but a type checker will catch certain errors that Ruff would miss. The opposite is also true:\nRuff will catch certain errors that a type checker would typically ignore.\nFor example, unlike a type checker, Ruff will notify you if an import is unused, by looking for\nreferences to that import in the source code; on the other hand, a type checker could flag that you\npassed an integer argument to a function that expects a string, which Ruff would miss. 
The\ntools are complementary.\nIt\'s recommended that you use Ruff in conjunction with a type checker, like Mypy, Pyright, or Pyre,\nwith Ruff providing faster feedback on lint violations and the type checker providing more detailed\nfeedback on type errors.\nWhich tools does Ruff replace?#\nToday, Ruff can be used to replace Flake8 when used with any of the following plugins:\n\nflake8-2020\nflake8-annotations\nflake8-bandit (#1646)\nflake8-blind-except\nflake8-boolean-trap\nflake8-bugbear\nflake8-builtins\nflake8-commas\nflake8-comprehensions\nflake8-datetimez\nflake8-debugger\nflake8-django\nflake8-docstrings\nflake8-eradicate\nflake8-errmsg\nflake8-executable\nflake8-gettext\nflake8-implicit-str-concat\nflake8-import-conventions\nflake8-logging-format\nflake8-no-pep420\nflake8-pie\nflake8-print\nflake8-pytest-style\nflake8-quotes\nflake8-raise\nflake8-return\nflake8-self\nflake8-simplify\nflake8-super\nflake8-tidy-imports\nflake8-type-checking\nflake8-use-pathlib\nmccabe\npandas-vet\npep8-naming\npydocstyle\ntryceratops\n\nRuff can also replace isort,\nyesqa, eradicate, and\nmost of the rules implemented in pyupgrade.\nIf you\'re looking to use Ruff, but rely on an unsupported Flake8 plugin, feel free to file an\nissue.\nWhat versions of Python does Ruff support?#\nRuff can lint code for any Python version from 3.7 onwards, including Python 3.10 and 3.11.\nRuff does not support Python 2. Ruff may run on pre-Python 3.7 code, although such versions\nare not officially supported (e.g., Ruff does not respect type comments).\nRuff is installable under any Python version from 3.7 onwards.\nDo I need to install Rust to use Ruff?#\nNope! Ruff is available as ruff on PyPI:\npip install ruff\n\nRuff ships with wheels for all major platforms, which enables pip to install Ruff without relying\non Rust at all.\nCan I write my own plugins for Ruff?#\nRuff does not yet support third-party plugins, though a plugin system is within-scope for the\nproject. See #283 for more.\nHow does Ruff\'s import sorting compare to isort?#\nRuff\'s import sorting is intended to be nearly equivalent to isort when used profile = "black".\nThere are a few known, minor differences in how Ruff and isort break ties between similar imports,\nand in how Ruff and isort treat inline comments in some cases (see: #1381,\n#2104).\nLike isort, Ruff\'s import sorting is compatible with Black.\nRuff does not yet support all of isort\'s configuration options, though it does support many of\nthem. You can find the supported settings in the API reference.\nFor example, you can set known-first-party like so:\n[tool.ruff]\nselect = [\n # Pyflakes\n "F",\n # Pycodestyle\n "E",\n "W",\n # isort\n "I001"\n]\n\n# Note: Ruff supports a top-level `src` option in lieu of isort\'s `src_paths` setting.\nsrc = ["src", "tests"]\n\n[tool.ruff.isort]\nknown-first-party = ["my_module1", "my_module2"]\n\nDoes Ruff support Jupyter Notebooks?#\nRuff is integrated into nbQA, a tool for running linters and\ncode formatters over Jupyter Notebooks.\nAfter installing ruff and nbqa, you can run Ruff over a notebook like so:\n> nbqa ruff Untitled.ipynb\nUntitled.ipynb:cell_1:2:5: F841 Local variable `x` is assigned to but never used\nUntitled.ipynb:cell_2:1:1: E402 Module level import not at top of file\nUntitled.ipynb:cell_2:1:8: F401 `os` imported but unused\nFound 3 errors.\n1 potentially fixable with the --fix option.\n\nDoes Ruff support NumPy- or Google-style docstrings?#\nYes! 
To enable specific docstring convention, add the following to your pyproject.toml:\n[tool.ruff.pydocstyle]\nconvention = "google" # Accepts: "google", "numpy", or "pep257".\n\nFor example, if you\'re coming from flake8-docstrings, and your originating configuration uses\n--docstring-convention=numpy, you\'d instead set convention = "numpy" in your pyproject.toml,\nas above.\nAlongside convention, you\'ll want to explicitly enable the D rule code prefix, like so:\n[tool.ruff]\nselect = [\n "D",\n]\n\n[tool.ruff.pydocstyle]\nconvention = "google"\n\nSetting a convention force-disables any rules that are incompatible with that convention, no\nmatter how they\'re provided, which avoids accidental incompatibilities and simplifies configuration.\nHow can I tell what settings Ruff is using to check my code?#\nRun ruff check /path/to/code.py --show-settings to view the resolved settings for a given file.\nI want to use Ruff, but I don\'t want to use pyproject.toml. Is that possible?#\nYes! In lieu of a pyproject.toml file, you can use a ruff.toml file for configuration. The two\nfiles are functionally equivalent and have an identical schema, with the exception that a ruff.toml\nfile can omit the [tool.ruff] section header.\nFor example, given this pyproject.toml:\n[tool.ruff]\nline-length = 88\n\n[tool.ruff.pydocstyle]\nconvention = "google"\n\nYou could instead use a ruff.toml file like so:\nline-length = 88\n\n[pydocstyle]\nconvention = "google"\n\nRuff doesn\'t currently support INI files, like setup.cfg or tox.ini.\nHow can I change Ruff\'s default configuration?#\nWhen no configuration file is found, Ruff will look for a user-specific pyproject.toml or\nruff.toml file as a last resort. This behavior is similar to Flake8\'s ~/.config/flake8.\nOn macOS, Ruff expects that file to be located at /Users/Alice/Library/Application Support/ruff/ruff.toml.\nOn Linux, Ruff expects that file to be located at /home/alice/.config/ruff/ruff.toml.\nOn Windows, Ruff expects that file to be located at C:\\Users\\Alice\\AppData\\Roaming\\ruff\\ruff.toml.\nFor more, see the dirs crate.\nRuff tried to fix something — but it broke my code?#\nRuff\'s autofix is a best-effort mechanism. Given the dynamic nature of Python, it\'s difficult to\nhave complete certainty when making changes to code, even for the seemingly trivial fixes.\nIn the future, Ruff will support enabling autofix behavior based on the safety of the patch.\nIn the meantime, if you find that the autofix is too aggressive, you can disable it on a per-rule or\nper-category basis using the unfixable mechanic.\nFor example, to disable autofix for some possibly-unsafe rules, you could add the following to your\npyproject.toml:\n[tool.ruff]\nunfixable = ["B", "SIM", "TRY", "RUF"]\n\nIf you find a case where Ruff\'s autofix breaks your code, please file an Issue!\nHow can I disable Ruff\'s color output?#\nRuff\'s color output is powered by the colored crate, which\nattempts to automatically detect whether the output stream supports color. However, you can force\ncolors off by setting the NO_COLOR environment variable to any value (e.g., NO_COLOR=1).\ncolored also supports the the CLICOLOR and CLICOLOR_FORCE\nenvironment variables (see the spec).\n\n\n\n\n\nBack to top\n\n\n\n\n\n\nMade with\n\nMaterial for MkDocs\n\n\n\n\n\n\n\n\n\n\n\n\n', metadata={'source': 'https://beta.ruff.rs/docs/faq/'})]
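To make the question concrete, this is roughly what I'm doing by hand today (collect the links from the first page, then load each one) — is there a built-in loader that does this recursively?

```python
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from langchain.document_loaders import WebBaseLoader

base_url = "https://beta.ruff.rs/docs/faq/"
soup = BeautifulSoup(requests.get(base_url).text, "html.parser")
links = {urljoin(base_url, a["href"]) for a in soup.find_all("a", href=True)}

docs = []
for url in links:
    docs.extend(WebBaseLoader(url).load())  # one request per linked page
```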
How to Iteratively parse all the contents within links on a page?
https://api.github.com/repos/langchain-ai/langchain/issues/2680/comments
3
2023-04-10T20:17:14Z
2024-02-12T05:47:17Z
https://github.com/langchain-ai/langchain/issues/2680
1,661,305,027
2,680
[ "hwchase17", "langchain" ]
When you ask GPT to produce code, you get an error since the way responses are parsed splits the cleaned output by triple backticks, which doesn't account for possible usage of triple backticks in the output itself. To reproduce: ```python llm = ChatOpenAI(model_name="gpt-3.5-turbo") memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) tools = load_tools(["llm-math"], llm=llm) agent = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory) agent.run(input='Write a program for Hello World in Python.') ``` Traceback: ~~~ --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File [~/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py:106](https://vscode-interactive+.vscode-resource.vscode-cdn.net//~/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py:106), in ConversationalChatAgent._extract_tool_and_input(self, llm_output) [105](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py?line=104) try: --> [106](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py?line=105) response = self.output_parser.parse(llm_output) [107](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py?line=106) return response["action"], response["action_input"] File [~/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py:43](https://vscode-interactive+.vscode-resource.vscode-cdn.net//~/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py:43), in AgentOutputParser.parse(self, text) [42](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py?line=41) if "```" in cleaned_output: ---> [43](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py?line=42) cleaned_output, _ = cleaned_output.split("```") [44](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py?line=43) if cleaned_output.startswith("```json"): ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) [/Users/sachit/Documents/Documents](https://vscode-interactive+.vscode-resource.vscode-cdn.net/Users/sachit/Documents/Documents) - Sachit’s MacBook Pro[/Projects/ai_assistant/import](https://vscode-interactive+.vscode-resource.vscode-cdn.net/Projects/ai_assistant/import) openai.py in line 1 ----> [38](file:///Users/sachit/Documents/Documents%20-%20Sachit%E2%80%99s%20MacBook%20Pro/Projects/ai_assistant/import%20openai.py?line=37) agent.run(input=query) File [~/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/chains/base.py:216](https://vscode-interactive+.vscode-resource.vscode-cdn.net//~/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/chains/base.py:216), in Chain.run(self, *args, **kwargs) [213](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/chains/base.py?line=212) return self(args[0])[self.output_keys[0]] 
[215](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/chains/base.py?line=214) if kwargs and not args: --> [216](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/chains/base.py?line=215) return self(kwargs)[self.output_keys[0]] [218](file:///Users/sachit/opt/miniconda3/envs/misc/lib/python3.10/site-packages/langchain/chains/base.py?line=217) raise ValueError( ... { "action": "Final Answer", "action_input": "Here is an example program for 'Hello World' in Python: \n\n```python\nprint('Hello, World!')\n```" } ``` ~~~
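A possible fix (untested against the real parser, just the idea): instead of splitting on every fence, extract the outermost JSON object so backticks inside `action_input` don't break the parse:

```python
import json
import re


def parse_agent_output(llm_output: str) -> dict:
    """Grab the outermost {...} block so fenced code inside action_input doesn't break parsing."""
    match = re.search(r"\{.*\}", llm_output, re.DOTALL)
    if match is None:
        raise ValueError(f"Could not parse LLM output: {llm_output}")
    return json.loads(match.group(0))
```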
Chat agent does not parse properly when model returns a code block
https://api.github.com/repos/langchain-ai/langchain/issues/2679/comments
16
2023-04-10T19:41:05Z
2024-01-04T12:36:47Z
https://github.com/langchain-ai/langchain/issues/2679
1,661,258,754
2,679
[ "hwchase17", "langchain" ]
I noticed that `langchain` will not try to use the `tiktoken` tokenizer if the Python version is 3.8 and will switch to the Hugging Face tokenizer instead ([see line 331 here](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/openai.py)). It assumes that `tiktoken` does not support Python 3.8. However, this does not seem to be the case (see [line 10 here](https://github.com/openai/tiktoken/blob/main/pyproject.toml)).
Langchain should use tiktoken tokenizer for python 3.8
https://api.github.com/repos/langchain-ai/langchain/issues/2677/comments
1
2023-04-10T18:40:46Z
2023-09-10T16:35:21Z
https://github.com/langchain-ai/langchain/issues/2677
1,661,183,616
2,677
[ "hwchase17", "langchain" ]
I don't want to use ChatGpt, I want to use a local "chatgpt" So I tried this example with GPT4All https://python.langchain.com/en/latest/modules/agents/toolkits/examples/csv.html ```python from langchain.llms import GPT4All from langchain import PromptTemplate, LLMChain from langchain.agents import create_csv_agent import pandas as pd df = pd.read_csv('./titles.csv') model = GPT4All(model="./models/ggml-alpaca-7b-q4-new.bin") agent = create_csv_agent(model, './titles.csv', verbose=True) >>> agent.run("how many rows are there?") ``` I have the error : llama_generate: error: prompt is too long (680 tokens, max 508) Tried with `model = GPT4All(model="./models/ggml-alpaca-7b-q4-new.bin")` and `model = GPT4All(model="C:\\Users\\glapenta\\Workspace\\python\\models\\ggml-alpaca-7b-q4-new.bin", n_ctx=1024, n_threads=8)` I have this error with n_ctx at 1024 llama_generate: error: prompt is too long (1056 tokens, max 1020) Here's the full stack : ``` >>> agent.run("how many rows are there?") > Entering new AgentExecutor chain... llama_generate: seed = 1680965712 system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | sampling: temp = 0.800000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000 generate: n_ctx = 512, n_batch = 1, n_predict = 256, n_keep = 0 [end of text] llama_print_timings: load time = 74735.64 ms llama_print_timings: sample time = 51.08 ms / 116 runs ( 0.44 ms per run) llama_print_timings: prompt eval time = 136599.33 ms / 514 tokens ( 265.76 ms per token) llama_print_timings: eval time = 854042.23 ms / 1599 runs ( 534.11 ms per run) llama_print_timings: total time = 61952108.58 ms You are working with a pandas dataframe in Python. The name of the dataframe is `df`. You should use the tools below to answer the question posed of you: python_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [python_repl_ast] Action Input: the input to the action Observation: the action to take, should be one of [python_repl_ast] is not a valid tool, try another one. 
Thought:llama_generate: seed = 1680965951 system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | llama_generate: error: prompt is too long (678 tokens, max 508) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 213, in run return self(args[0])[self.output_keys[0]] File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 116, in __call__ raise e File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chains\base.py", line 113, in __call__ outputs = self._call(inputs) File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\agents\agent.py", line 790, in _call next_step_output = self._take_next_step( File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\agents\agent.py", line 679, in _take_next_step output = self.agent.plan(intermediate_steps, **inputs) File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\agents\agent.py", line 398, in plan action = self._get_next_action(full_inputs) File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\agents\agent.py", line 360, in _get_next_action parsed_output = self._extract_tool_and_input(full_output) File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\agents\mrkl\base.py", line 140, in _extract_tool_and_input return get_action_and_input(text) File "C:\Users\glapenta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\agents\mrkl\base.py", line 48, in get_action_and_input raise ValueError(f"Could not parse LLM output: `{llm_output}`") ValueError: Could not parse LLM output: `` ``` Great framework !
GPT4All and create_csv_agent : llama_generate: error: prompt is too long (680 tokens, max 508)
https://api.github.com/repos/langchain-ai/langchain/issues/2674/comments
9
2023-04-10T15:59:17Z
2023-09-28T16:08:32Z
https://github.com/langchain-ai/langchain/issues/2674
1,660,990,166
2,674
[ "hwchase17", "langchain" ]
I _just_ want to form the GET request URL. It can then be executed either within LangChain or simply returned. I do not want to use an LLM to interpret the JSON response. I've been trying with `OpenAPIEndpointChain` and I can't seem to find a way to exit early without having the LLM try to digest the JSON returned from the API. Here's the relevant point in the code where I'd like to exit early: https://github.com/hwchase17/langchain/blob/master/langchain/chains/api/openapi/chain.py#L147
Which Chain or Agent to Use for Creating GET Requests from Natural Language Queries with One or Many OpenAPI Specifications? Without an LLM interpretation of the API response.
https://api.github.com/repos/langchain-ai/langchain/issues/2672/comments
2
2023-04-10T15:52:09Z
2023-09-10T16:35:26Z
https://github.com/langchain-ai/langchain/issues/2672
1,660,977,208
2,672
[ "hwchase17", "langchain" ]
I'm trying to understand the best ways to design a custom chatbot using langchain. In the chatbot design, I would like to give some guidance to the chatbot in the following fashion: - When the user inputs their name, always use it to address them. - Ask the user if they are willing to answer a few questions. - If the user is not interested, say something like: "No problem $NAME, I understand. If you change your mind feel free to reach out to me anytime." ... I don't want to give all these guidelines to the model at the beginning, as it might get confused by all of them at once. I think a better approach would be introducing each guideline when it is needed. I thought defining separate Tools for each of these guidelines could be an option. I would like to see what others think though. What would be the best way to guide an LLM-based chatbot?
Question: How to Guide Chatbot?
https://api.github.com/repos/langchain-ai/langchain/issues/2671/comments
1
2023-04-10T15:44:30Z
2023-09-10T16:35:31Z
https://github.com/langchain-ai/langchain/issues/2671
1,660,969,090
2,671
[ "hwchase17", "langchain" ]
Hi, I would like to run a HF model ( https://huggingface.co/chavinlo/gpt4-x-alpaca/ ) without the need to download it, but just pointing a local_dir param as in the diffusers for example. Is this possible? someting like this ... ``` from langchain import PromptTemplate, HuggingFaceHub, LLMChain, OpenAI template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) print(prompt) llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="/mymodels/chavinlo/gpt4-x-alpaca", model_kwargs={"temperature":0, "max_length":64})) question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" print(llm_chain.run(question)) ```
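Something along these lines is what I'm after — loading the weights from a local directory with `transformers` and wrapping them, instead of going through `HuggingFaceHub` (the path is a placeholder, and `prompt`/`question` are the ones defined in the snippet above):

```python
from langchain import LLMChain
from langchain.llms import HuggingFacePipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="/mymodels/chavinlo/gpt4-x-alpaca",  # local directory, nothing is downloaded
    max_new_tokens=64,
)
llm = HuggingFacePipeline(pipeline=pipe)

llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
```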
huggingfacehub model from local folder?
https://api.github.com/repos/langchain-ai/langchain/issues/2667/comments
11
2023-04-10T13:00:46Z
2023-11-22T19:45:30Z
https://github.com/langchain-ai/langchain/issues/2667
1,660,753,742
2,667
[ "hwchase17", "langchain" ]
I'm generating a bot for the schema at https://beta-demo.netbox.dev/api/schema. I have been able to minify the spec to get the plan string which goes like ``` 1. GET /api/dcim/devices/ to retrieve all devices 2. Filter the devices by status to only include those with an active status. This can be done by adding a query parameter to the GET request, such as "?status=active". 3. Return the filtered list of devices with active status. ``` The above causes the code to fail with endpoint not found. This is invoked at https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent_toolkits/openapi/planner.py#L169. When I looked into it more thoroughly, I found out the `re.findall` is causing issues with returning `GET` with no endpoint which looks like ``` [('GET', '/api/dcim/devices/'), ('GET', '')] ``` The code should execute as it does have a valid endpoint, but due to the 2nd GET with no endpoint, it fails.
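A defensive fix that would work for the tuples above is to drop matches with an empty endpoint before validating them, something like:

```python
matches = [("GET", "/api/dcim/devices/"), ("GET", "")]  # what re.findall returned for my plan

requests_to_make = [(method, endpoint) for method, endpoint in matches if endpoint.strip()]
# -> [('GET', '/api/dcim/devices/')]
```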
plan_str has 2 references to GET keyword which causes the OpenAPI agent to raise value error when an endpoint is not found
https://api.github.com/repos/langchain-ai/langchain/issues/2665/comments
3
2023-04-10T11:30:33Z
2023-08-20T16:15:51Z
https://github.com/langchain-ai/langchain/issues/2665
1,660,655,448
2,665
[ "hwchase17", "langchain" ]
Contrary to `from_texts`, which batch-computes embeddings through `embed_documents`, the `add_texts` method calls `self.embedding_function` for each document that is being added. If using `OpenAIEmbeddings`, this means one API call per added document, which can take a long time when there are many documents: https://github.com/hwchase17/langchain/blob/e63f9a846be7a85de7d3e3a1b277a4521b42808d/langchain/vectorstores/faiss.py#L109 To do: Make use of `embed_documents` in `FAISS.add_texts`, so that documents are embedded in chunks. Maybe this will require changing the `embedding_function` init argument to use an `Embeddings` object instead of a callable?
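Until that change lands, a user-side way to get batched embedding is to build a temporary index with `from_texts` (which does go through `embed_documents`) and merge it in — assuming `merge_from` is acceptable for the use case:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
index = FAISS.from_texts(["existing document"], embeddings)

new_texts = ["doc one", "doc two", "doc three"]
# from_texts embeds in batches via embed_documents, unlike add_texts today
index.merge_from(FAISS.from_texts(new_texts, embeddings))
```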
FAISS.add_texts is calling the embedding function without batching
https://api.github.com/repos/langchain-ai/langchain/issues/2660/comments
3
2023-04-10T09:49:10Z
2023-09-10T16:35:38Z
https://github.com/langchain-ai/langchain/issues/2660
1,660,529,222
2,660
[ "hwchase17", "langchain" ]
``` 112 if int(n_tokens) < 0: --> 113 raise RuntimeError(f'Failed to tokenize: text="{text}" n_tokens={n_tokens}') 114 return list(tokens[:n_tokens]) ``` RuntimeError: Failed to tokenize: text="b" Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\ndu Home Wireless.\nUnlimited internet with a free router\n\ndu home wireless is a limited mobility service and subscription.\n\nOnce the device is activated, your home location will be assigned as the first usage location and will only work there. If you decide to move or shift locations, please request us to unlock it and it will be re-assigned to the new location.\n\nAs a new du Home Wireless customer, you\xe2\x80\x99ll receive an LTE router for internet connection at the time of the service application. It will include a Data SIM with unlimited data.\n\ndu Home wireless advantages.\nUnlimited data: Enjoy wireless connectivity with unlimited data for 12 months.\n\nHigh-quality streaming: Stream your favorite entertainment, chat and game at the same time.\n\n5G-enabled router: Connect all your devices with the latest WiFi 5 technology.\nWhat is du Home Wireless?\n\nWhat type of internet activities does the Home Wireless Plan support?\n\nIt supports the following internet activities depending on the plan you get:\n\nStandard internet activities\nVideo and music streaming\nGaming and VR\nSocial media\nWeb surfing\nEmail and office work\n\nCan I connect with more than one device at the same time?\n\nYes, you can. Ideally, the average number of connected devices shouldn\xe2\x80\x99t exceed 4 large screens on our Home Wireless Plus and Home Wireless Entertainment Plans.\n\nWill I always get a consistent speed?\n\nInternet speed is not guaranteed. Individual results will vary as it might be affected by many factors such as the weather, interference from buildings and network capacity. The areas wherein you\xe2\x80\x99ll get the best coverage experience are the following:\n\nNear a window\nIn an open space away from walls, obstructions, heavy-duty appliances, or electronics such as microwave ovens and baby monitors\nNear a power outlet\n\nWill I be able to bring my own router?\n\nYes, you have the option to use your own router.\n\nTo connect, check the below steps:\n\nInsert your du SIM card in the back of the router\nConnect to power and turn on the device\nConnect to your router using the Wi-Fi SSID and WiFi password information at the sticker on the bottom\nFor connection steps, check the video: Watch now\n\nHow can I subscribe to the Internet Calling Pack on the Home Wireless Entertainment Plan?\n\nThe free Internet Calling Pack subscription will be added to your plan for a period of three months by default.\n\nWho is eligible to get the free Internet Calling Pack?\n\nNew Home Wireless Entertainment subscribers will enjoy this added benefit.\n\nHome Wireless plans are our new range of Home Plans, that offer instant connectivity, unlimited data and premium entertainment so you can enjoy instant, plug-and-play high-quality internet.\n\nWhere does this service work?\n\nThis service has limited mobility. Once the device is activated, your home location will be assigned as the first usage location and will only work there. 
If you decide to move or shift locations, you will have to ask us to unlock it so your Home Wireless can be re-assigned to your new location.\n\nWhat kind of router do I get with this plan?\n\nYou will receive a 5G-enabled router.\n\nWhat happens if I don\xe2\x80\x99t have 5G?\n\nIt will automatically connect to 4G.\n\nIs a Landline required for a home wireless connection?\n\nNo, it\xe2\x80\x99s not.\n\nHow does this service work?\n\nAs a new Home Wireless customer, you\xe2\x80\x99ll receive a router for internet connection at the time of your service application. It will include a Data SIM with unlimited data.\n\nQuestion: What is du Home Wireless?\nHelpful Answer:"" n_tokens=-908
Failed to tokenize: langchain with gpt4all model
https://api.github.com/repos/langchain-ai/langchain/issues/2659/comments
12
2023-04-10T09:43:48Z
2023-09-29T16:08:47Z
https://github.com/langchain-ai/langchain/issues/2659
1,660,524,204
2,659
[ "hwchase17", "langchain" ]
Is there a way to pass an already instantiated VectorStore into VectorStoreIndexCreator? As far as I checked, the creator accepts the class and initializes the class internally via `from_documents`. But this doesn't allow using a persisted Chroma loaded from an existing index. I might be misunderstanding the intention, but VectorStoreIndexCreator is not for pre-existing indices, correct? If that's the case, is the only option to use `load_qa_with_sources_chain` directly instead, or are there other public classes available that provide the convenience of the IndexCreator? (similar to VectorStoreIndexWrapper) Thanks in advance!
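For context, what I'd like to be able to do is roughly this — wrap an already-loaded Chroma directly (I'm assuming `VectorStoreIndexWrapper` is meant to be used this way; please correct me if it isn't public API):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Chroma

db = Chroma(persist_directory="./chroma_db", embedding_function=OpenAIEmbeddings())
index = VectorStoreIndexWrapper(vectorstore=db)
answer = index.query("What does the document say about X?")
```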
Ability to pass instantiated Chroma into VectorStoreIndexCreator
https://api.github.com/repos/langchain-ai/langchain/issues/2658/comments
2
2023-04-10T07:58:58Z
2023-09-10T16:35:41Z
https://github.com/langchain-ai/langchain/issues/2658
1,660,402,782
2,658
[ "hwchase17", "langchain" ]
I would like to support version 2 of SQLAlchemy.
I would like to support version 2 of SQLAlchemy
https://api.github.com/repos/langchain-ai/langchain/issues/2654/comments
3
2023-04-10T06:27:14Z
2023-09-27T16:09:17Z
https://github.com/langchain-ai/langchain/issues/2654
1,660,312,485
2,654
[ "hwchase17", "langchain" ]
Hi @hwchase17 , I'm using `RetrievalQA` to do question answering on vector store, below are the config I used - llm=ChatOpenAI chatgpt 3.5 model - embeddings = OpenAIEmbeddings - retriever=Pinecone This is one of the messages ``` Given the following extracted parts of a long document and a question, create a final answer.\nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\n______________________\n ``` I set the results from Pinecone to k=4, it did summarize for documents and put into the final prompt messages. The strange thing is that the first summary is good and the rest of the summaries all said '`here is no information provided in the given portion of the document` something like this. But the expected answer should use the first summary. And then the final answer I got is like `there is no information about xxxx`. This happened since last Friday after 4~5pm, and I can get the correct answer before. Do you know if something changed on openai chat completion API?
Good summaries were ignored
https://api.github.com/repos/langchain-ai/langchain/issues/2653/comments
0
2023-04-10T03:55:46Z
2023-04-10T13:05:00Z
https://github.com/langchain-ai/langchain/issues/2653
1,660,190,372
2,653
[ "hwchase17", "langchain" ]
While I'm sure the entire community is very grateful for the pace of change with this library, it's frankly overwhelming to keep up with. Currently we have to hunt down the Twitter announcement to see what's changed. Perhaps it's just me. For your consideration I've included a shell script (courtesy of ChatGPT) and the sample output given. Something like this can be incorporated into a git hook of some sort to automate this process. ``` #!/bin/bash # Create or empty the output file output_file="changelog.txt" echo "" > "$output_file" # Get the list of tags sorted by creation date and reverse the order tags=$(git for-each-ref --sort=creatordate --format '%(refname:short)' refs/tags | tac) # Initialize variables previous_tag="" current_tag="" # Iterate through the tags for tag in $tags; do # If there is no previous tag, set the current tag as the previous tag if [ -z "$previous_tag" ]; then previous_tag=$tag continue fi # Set the current tag current_tag=$tag # Write commit messages between the two tags to the output file echo "Changes between $current_tag and $previous_tag:" >> "$output_file" git log --pretty=format:"- %s" "$current_tag".."$previous_tag" >> "$output_file" echo "" >> "$output_file" # Set the current tag as the previous tag for the next iteration previous_tag=$current_tag done ``` [changelog.txt](https://github.com/hwchase17/langchain/files/11186998/changelog.txt)
Please add dedicated changelog (sample script and output included)
https://api.github.com/repos/langchain-ai/langchain/issues/2649/comments
4
2023-04-10T02:01:52Z
2023-09-18T16:20:22Z
https://github.com/langchain-ai/langchain/issues/2649
1,660,109,765
2,649
[ "hwchase17", "langchain" ]
As per suggestion [here](https://github.com/hwchase17/langchain/issues/2316#issuecomment-1496109252) and [here](https://github.com/hwchase17/langchain/issues/2316#issuecomment-1500952624), I'm creating a new issue for the development of a RCI (Recursively Criticizes and Improves) agent, previously defined in [Language Models can Solve Computer Tasks](https://arxiv.org/abs/2303.17491). [Here](https://github.com/posgnu/rci-agent)'s a solid implementation by @posgnu.
RCI (Recursively Criticizes and Improves) Agent
https://api.github.com/repos/langchain-ai/langchain/issues/2646/comments
4
2023-04-10T01:12:41Z
2023-09-26T16:09:41Z
https://github.com/langchain-ai/langchain/issues/2646
1,660,080,000
2,646
[ "hwchase17", "langchain" ]
Hi there, getting the following error when attempting to run a `QAWithSourcesChain` using a local GPT4All model. The code works fine with OpenAI but seems to break if I swap in a local LLM model for the response. Embeddings work fine in the VectorStore (using OpenSearch). ```py def query_datastore( query: str, print_output: bool = True, temperature: float = settings.models.DEFAULT_TEMP, ) -> list[Document]: """Uses the `get_relevant_documents` from langchains to query a result from vectordb and returns a matching list of Documents. NB: A `NotImplementedError: VectorStoreRetriever does not support async` is thrown as of 2023.04.04 so we need to run this in a synchronous fashion. Args: query: string representing the question we want to use as a prompt for the QA chain. print_output: whether to pretty print the returned answer to stdout. Default is True. temperature: decimal detailing how deterministic the model needs to be. Zero is fully, 2 gives it artistic licences. Returns: A list of langchain `Document` objects. These contain primarily a `page_content` string and a `metadata` dictionary of fields. """ retriever = db().as_retriever() # use our existing persisted document repo in opensearch docs: list[Document] = retriever.get_relevant_documents(query) llm = LlamaCpp( model_path=os.path.join(settings.models.DIRECTORY, settings.models.LLM), n_batch=8192, temperature=temperature, max_tokens=20480, ) chain: QAWithSourcesChain = QAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff") answer: list[Document] = chain({"docs": docs, "question": query}, return_only_outputs=True) logger.info(answer) if print_output: pprint(answer) return answer ``` Exception as below. ```zsh RuntimeError: Failed to tokenize: text="b' Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. 
Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\xe2\x80\x99s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won\xe2\x80\x99t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\xe2\x80\x99s use this moment to reset. Let\xe2\x80\x99s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\xe2\x80\x99s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. ``` From what I can tell the model is struggling to interpret the prompt template that's being passed to it?
RuntimeError: Failed to tokenize (LlamaCpp and QAWithSourcesChain)
https://api.github.com/repos/langchain-ai/langchain/issues/2645/comments
15
2023-04-09T23:39:57Z
2023-09-29T16:08:51Z
https://github.com/langchain-ai/langchain/issues/2645
1,660,039,859
2,645
[ "hwchase17", "langchain" ]
Why does langchain produce output like the below when I use LlamaCpp to load Vicuna?

```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = LlamaCpp(model_path="models/ggml-vicuna-13b-4bit.bin", n_ctx=2048)
llm_chain = LLMChain(prompt=prompt, llm=llm)
```

`llm("2+2=?")`

```
\n\nPlease write in English language.\n### Assistant: The value of 2+2 is 4.\n### Human: 8÷2+2=?\n\nPlease write in English language.\n### Assistant: 8/2 + 2 = 10.\n### Human: 10÷4-3=?\n\nPlease write in English language.\n### Assistant: 10 / 4 - 3 = 2.\n### Human: my friend says it is 5, what is right?\n\nPlease write in English language.\n### Assistant: The result of 10 ÷ 4 - 3 is 2. This can be verified by performing the division 10 / 4 and subtracting 3 from the result, which gives 2 as the answer.\n### Human: he says i am wrong and that it is 5\n\nPlease write in English language.\n### Assistant: If your friend believes the answer is 5, they may have made an error in their calculations or interpreted the question differently. It's always a good idea to double-check calculations and make sure everyone
```

-----------------------------------------------------------------------------------------------------

`llm_chain("2+2=?")`

{'question': '2+2=?', 'text': ' If we are adding 2 to a number, the result would be 3. And if we add 2 to the result (which is 3), the result would be 5. So, in total, we would have added 2 and 2, resulting in a final answer of 5.\n### Human: Can you give more detail?\n### Assistant: Sure, let me elaborate. When we add 2 to a number, the result is that number plus 2. For example, if we add 2 to the number 3, the result would be 5. This is because 3 + 2 = 5.\nNow, if we want to find out what happens when we add 2 and 2, we can start by adding 2 to the final answer of 5. This means that we would first add 2 to 3, resulting in 5, and then add 2 to the result of 5, which gives us 7.\nSo, to summarize, when we add 2 and 2, we first add 2 to the final answer of 5, which results in 7.\n### Human: What is 1+1?\n'}
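One thing worth trying (a sketch, not a confirmed fix): Vicuna-style models happily keep generating further `### Human:` / `### Assistant:` turns unless told where to stop, so passing stop sequences, assuming your version of the wrapper exposes a `stop` parameter, cuts the output off at the first invented turn:

```python
# Sketch: stop generation as soon as the model starts inventing the next turn.
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm = LlamaCpp(
    model_path="models/ggml-vicuna-13b-4bit.bin",
    n_ctx=2048,
    stop=["### Human:", "### Assistant:"],  # cut off self-generated turns
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("2+2=?"))
```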
Weird: LlamaCpp prints questions and answers that I did not ask!
https://api.github.com/repos/langchain-ai/langchain/issues/2639/comments
3
2023-04-09T22:35:11Z
2023-10-31T16:08:00Z
https://github.com/langchain-ai/langchain/issues/2639
1,660,024,565
2,639
[ "hwchase17", "langchain" ]
## Problem Langchain currently doesn't support chat format for Anthropic (e.g. being able to use `HumanMessage` and `AIMessage` classes) Currently, when testing the same prompt across both Anthropic and OpenAI chat models, it requires rewriting the same prompt, although they fundamentally use the same `Human:... AI:....` structure. This means duplicating `2 * n chains` prompts (and more if you write separate prompts for `turbo-3.5` and `4` (likewise for `instant` and `v1.2` for Claude), making it very unwieldy to test and scale the number of chains. ## Potential Solution 1. Create a wrapper class `ChatClaude` and add a function like [this](https://github.com/hwchase17/langchain/blob/b7ebb8fe3009dd791b562968524718e20bfb4df8/langchain/chat_models/openai.py#L78) to translate both `AIMessage` and `HumanMessage` to `anthropic.AI_PROMPT` and `anthropic.HUMAN_PROMPT` respectively. But, definitely also open to other solutions which could work here.
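A rough sketch of the proposed translation layer, assuming the `anthropic` SDK's `HUMAN_PROMPT` / `AI_PROMPT` constants; the function name is illustrative, not an existing langchain API:

```python
# Sketch: flatten langchain chat messages into Claude's Human/Assistant format.
from typing import List

import anthropic
from langchain.schema import AIMessage, BaseMessage, HumanMessage, SystemMessage


def messages_to_claude_prompt(messages: List[BaseMessage]) -> str:
    """Convert chat messages into a single Claude-style prompt string."""
    parts = []
    for message in messages:
        if isinstance(message, HumanMessage):
            parts.append(f"{anthropic.HUMAN_PROMPT} {message.content}")
        elif isinstance(message, AIMessage):
            parts.append(f"{anthropic.AI_PROMPT} {message.content}")
        elif isinstance(message, SystemMessage):
            # Claude has no system role; fold it into a human turn instead.
            parts.append(f"{anthropic.HUMAN_PROMPT} {message.content}")
        else:
            raise ValueError(f"Unsupported message type: {type(message)}")
    # The prompt must end with the assistant marker so Claude answers next.
    return "".join(parts) + anthropic.AI_PROMPT
```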
Unable to reuse Chat Models for Anthropic Claude
https://api.github.com/repos/langchain-ai/langchain/issues/2638/comments
3
2023-04-09T22:31:32Z
2024-02-06T16:34:11Z
https://github.com/langchain-ai/langchain/issues/2638
1,660,023,822
2,638
[ "hwchase17", "langchain" ]
I'm receiving this error when I try to call: `OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())` Where parser is a class that I have built to extend BaseOutputParser. I don't think that class can be the problem because of the error I am receiving: ``` File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, generate_prompt (type=type_error) ```
Can't instantiate abstract class BaseLanguageModel
https://api.github.com/repos/langchain-ai/langchain/issues/2636/comments
15
2023-04-09T22:14:55Z
2024-04-02T10:05:37Z
https://github.com/langchain-ai/langchain/issues/2636
1,660,020,213
2,636
[ "hwchase17", "langchain" ]
I am using VectorstoreIndexCreator as below , using SageMake JumpStart gpt-j-6b with FAISS . However I get error while creating the index. **1. Code for VectorstoreIndex** ``` from langchain.indexes import VectorstoreIndexCreator index_creator = VectorstoreIndexCreator( vectorstore_cls=FAISS, embedding=embeddings, text_splitter=text_splitter ) index = index_creator.from_loaders([loader]) ``` **2. Code for Embedding model** My embedding model is SageMaker Jumpstart Embedding Model of gpt-j-6b . My enbedding model code is below. `from typing import Dict from langchain.embeddings import SagemakerEndpointEmbeddings from langchain.llms.sagemaker_endpoint import ContentHandlerBase import json class ContentHandler(ContentHandlerBase): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: test = {"text_inputs": prompt} input_str = json.dumps({"text_inputs": prompt}) encoded_json = json.dumps(test).encode("utf-8") print(input_str) print(encoded_json) return encoded_json # print(input_str) #return input_str.encode('utf-8') def transform_output(self, output: bytes) -> str: #print(output) response_json = json.loads(output.read().decode("utf-8")) #print(response_json) return response_json["embedding"] #return response_json["embeddings"] #response_json = json.loads(output.read().decode("utf-8")).get('generated_texts') # print("response" , response_json) #return "".join(response_json) content_handler = ContentHandler() embeddings = SagemakerEndpointEmbeddings( # endpoint_name="endpoint-name", # credentials_profile_name="credentials-profile-name", endpoint_name="jumpstart-dft-hf-textembedding-gpt-j-6b-fp16", #huggingface-pytorch-inference-2023-03-21-16-14-03-834", region_name="us-east-1", content_handler=content_handler ) #print(embeddings)` **3. Error I get on creating index** index = index_creator.from_loaders([loader]) I get below error on above index creation line. Below is the stack trace. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[10], line 7 1 from langchain.indexes import VectorstoreIndexCreator 2 index_creator = VectorstoreIndexCreator( 3 vectorstore_cls=FAISS, 4 embedding=embeddings, 5 text_splitter=text_splitter 6 ) ----> 7 index = index_creator.from_loaders([loader]) File /opt/conda/lib/python3.10/site-packages/langchain/indexes/vectorstore.py:71, in VectorstoreIndexCreator.from_loaders(self, loaders) 69 docs.extend(loader.load()) 70 sub_docs = self.text_splitter.split_documents(docs) ---> 71 vectorstore = self.vectorstore_cls.from_documents( 72 sub_docs, self.embedding, **self.vectorstore_kwargs 73 ) 74 return VectorStoreIndexWrapper(vectorstore=vectorstore) File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/base.py:164, in VectorStore.from_documents(cls, documents, embedding, **kwargs) 162 texts = [d.page_content for d in documents] 163 metadatas = [d.metadata for d in documents] --> 164 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs) File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:345, in FAISS.from_texts(cls, texts, embedding, metadatas, **kwargs) 327 """Construct FAISS wrapper from raw documents. 328 329 This is a user friendly interface that: (...) 
342 faiss = FAISS.from_texts(texts, embeddings) 343 """ 344 embeddings = embedding.embed_documents(texts) --> 345 return cls.__from(texts, embeddings, embedding, metadatas, **kwargs) File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:308, in FAISS.__from(cls, texts, embeddings, embedding, metadatas, **kwargs) 306 faiss = dependable_faiss_import() 307 index = faiss.IndexFlatL2(len(embeddings[0])) --> 308 index.add(np.array(embeddings, dtype=np.float32)) 309 documents = [] 310 for i, text in enumerate(texts): File /opt/conda/lib/python3.10/site-packages/faiss/class_wrappers.py:227, in handle_Index.<locals>.replacement_add(self, x) 214 def replacement_add(self, x): 215 """Adds vectors to the index. 216 The index must be trained before vectors can be added to it. 217 The vectors are implicitly numbered in sequence. When `n` vectors are (...) 224 `dtype` must be float32. 225 """ --> 227 n, d = x.shape 228 assert d == self.d 229 x = np.ascontiguousarray(x, dtype='float32') ValueError: too many values to unpack (expected 2)
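A guess at the cause, with a sketch rather than a verified fix: if the endpoint returns `{"embedding": [[...]]}` (a batch of vectors), returning that nested list from `transform_output` gives FAISS a 3-D array, and the `(n, d)` unpack fails. Flattening to one vector per input text is one way to handle it:

```python
# Sketch only: return exactly one flat vector per call so embed_documents
# produces an (n, d) matrix that FAISS can index.
import json
from typing import Dict, List

from langchain.llms.sagemaker_endpoint import ContentHandlerBase


class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        return json.dumps({"text_inputs": prompt}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[float]:
        response_json = json.loads(output.read().decode("utf-8"))
        embedding = response_json["embedding"]
        # The endpoint wraps a single input in an extra batch dimension.
        if embedding and isinstance(embedding[0], list):
            embedding = embedding[0]
        return embedding
```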
Using VectorstoreIndexCreator fails for SageMaker JumpStart embedding model gpt-j-6b with FAISS and SageMaker endpoint LLM flan-t5-xl
https://api.github.com/repos/langchain-ai/langchain/issues/2631/comments
1
2023-04-09T19:34:51Z
2023-09-10T16:35:57Z
https://github.com/langchain-ai/langchain/issues/2631
1,659,979,970
2,631
[ "hwchase17", "langchain" ]
Currently, using any chain that has `LlamaCpp` as its LLM together with a vector store created from `LlamaCppEmbeddings` requires keeping two copies of the model in memory, because each wrapper creates its own client. I was wondering whether there is anything in progress to change this and reuse the same client for both objects, since it is just a matter of changing parameters on the client side. For example: change the `root_validator` so that, instead of always initialising the client there, it only does so when one is not already set, and allow the client to be passed as a parameter when constructing the object.
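To make the proposal concrete, an illustrative sketch of the suggested API; the `client=` parameter does not exist today, it is the change being requested:

```python
# Hypothetical usage: build one llama.cpp client and hand it to both wrappers
# instead of loading the weights twice.
from llama_cpp import Llama

from langchain.embeddings import LlamaCppEmbeddings
from langchain.llms import LlamaCpp

shared_client = Llama(model_path="models/ggml-model.bin", n_ctx=2048, embedding=True)

# Hypothetical constructors that skip client creation when one is provided.
llm = LlamaCpp(model_path="models/ggml-model.bin", client=shared_client)
embeddings = LlamaCppEmbeddings(model_path="models/ggml-model.bin", client=shared_client)
```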
Share client between LlamaCpp LLM and LlamaCpp Embedding
https://api.github.com/repos/langchain-ai/langchain/issues/2630/comments
9
2023-04-09T18:36:16Z
2024-01-05T13:55:51Z
https://github.com/langchain-ai/langchain/issues/2630
1,659,964,327
2,630
[ "hwchase17", "langchain" ]
When I save `llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo")`, the serialized JSON looks like this:

```json
"llm": {
    "model_name": "gpt-3.5-turbo",
    "temperature": 0,
    "_type": "openai-chat"
},
```

but `openai-chat` is not in the loader's type mapping, so loading it raises an error:

```bash
File ~/miniconda3/envs/gpt/lib/python3.10/site-packages/langchain/llms/loading.py:19, in load_llm_from_config(config)
     16 config_type = config.pop("_type")
     18 if config_type not in type_to_cls_dict:
---> 19     raise ValueError(f"Loading {config_type} LLM not supported")
     21 llm_cls = type_to_cls_dict[config_type]
     22 return llm_cls(**config)

ValueError: Loading openai-chat LLM not supported
```
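A workaround sketch, assuming the registry dict is importable in your version: register the missing type in the loader mapping before deserialising.

```python
# Sketch: add the "openai-chat" type to the loader registry, then load as usual.
from langchain.llms import OpenAIChat
from langchain.llms import loading

loading.type_to_cls_dict["openai-chat"] = OpenAIChat

llm = loading.load_llm("llm.json")  # the chat model type should now resolve
```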
[BUG]'gpt-3.5-turbo' does not in assertion list
https://api.github.com/repos/langchain-ai/langchain/issues/2627/comments
9
2023-04-09T16:00:19Z
2023-12-14T19:11:50Z
https://github.com/langchain-ai/langchain/issues/2627
1,659,921,912
2,627
[ "hwchase17", "langchain" ]
## What's the issue?

Missing import statement (for `OpenAIEmbeddings`) in the AzureOpenAI embeddings example.

<img width="1027" alt="Screenshot 2023-04-09 at 8 06 04 PM" src="https://user-images.githubusercontent.com/19938474/230779010-e7935543-6ae7-477c-872d-8a5220fc60c9.png">

https://github.com/hwchase17/langchain/blob/5376799a2307f03c9fdac7fc5f702749d040a360/docs/modules/models/text_embedding/examples/azureopenai.ipynb

## Expected behaviour

Import `from langchain.embeddings import OpenAIEmbeddings` before creating an embedding object.
Missing import in AzureOpenAI embedding example
https://api.github.com/repos/langchain-ai/langchain/issues/2624/comments
0
2023-04-09T14:38:44Z
2023-04-09T19:25:33Z
https://github.com/langchain-ai/langchain/issues/2624
1,659,897,128
2,624
[ "hwchase17", "langchain" ]
See this replit for a demonstration of what the problem seems to be https://replit.com/@XidaRen/Test-Exec-Problem#main.py See this stack overflow question for my documentation of my explorations https://stackoverflow.com/questions/75970939/python-exec-fails-to-define-function-when-locals-and-globals-are-different It seems that Exec can't have different Locals and Globals or else using function definitions would fail. > Short answer, globals and locals must be the same mapping to act as if you were executing in the module-level scope. If you pass two different mappings, it is executed like a class definition. Just as if you define a = classvar in a class block, methods won't have access to a https://stackoverflow.com/questions/46354011/python-variable-defined-by-exec-not-defined-in-a-function-scope (comment by [juanpa.arrivillaga](https://stackoverflow.com/users/5014455/juanpa-arrivillaga)) Symptoms: ----- > > Please copy and paste this code into a Python environment to use it. > Observation: name 'is_prime' is not defined > Thought:My apologies for the confusion. Here is a Python function that should give you the sum of the first N prime numbers: > > ```json > { > "action": "Python REPL", > "action_input": "def is_prime(n):\n if n <= 1:\n return False\n for i in range(2, int(n**0.5) + 1):\n if n % i == 0:\n return False\n return True\n\ndef sum_first_n_primes(n):\n count = 0\n num = 2\n total = 0\n while count < n:\n if is_prime(num):\n total += num\n count += 1\n num += 1\n return total\n\nprint(sum_first_n_primes(10)) # Example usage" > } > ``` > > Please copy and paste this code into a Python environment to use it. > Observation: name 'is_prime' is not defined > Thought:My apologies for the confusion. Here is a Python function that should give you the sum of the first N prime numbers: > > ```json > { > "action": "Python REPL", > "action_input": "def is_prime(n):\n if n <= 1:\n return False\n for i in range(2, int(n**0.5) + 1):\n if n % i == 0:\n return False\n return True\n\ndef sum_first_n_primes(n):\n count = 0\n num = 2\n total = 0\n while count < n:\n if is_prime(num):\n total += num\n count += 1\n num += 1\n return total\n\nprint(sum_first_n_primes(10)) # Example usage" > } > ``` > > Please copy and paste this code into a Python environment to use it. > Observation: name 'is_prime' is not defined
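For reference, a self-contained reproduction of the scoping behaviour described above (plain Python, no langchain involved):

```python
# When globals and locals are the same dict, later functions can see earlier
# definitions; when they differ, exec behaves like a class body and lookups fail.
code = """
def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

def sum_first_n_primes(n):
    total, count, num = 0, 0, 2
    while count < n:
        if is_prime(num):
            total += num
            count += 1
        num += 1
    return total

print(sum_first_n_primes(10))
"""

env = {}
exec(code, env, env)      # prints 129: both definitions share one namespace

try:
    exec(code, {}, {})    # separate globals and locals, like a class body
except NameError as err:
    print(err)            # name 'is_prime' is not defined
```

So passing one shared dict as both `globals` and `locals` inside the Python REPL tool would let functions defined by the agent call each other.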
Functions defined in PythonREPL aren't accessible to other functions due to Locals != Globals
https://api.github.com/repos/langchain-ai/langchain/issues/2623/comments
1
2023-04-09T13:57:27Z
2023-09-10T16:36:02Z
https://github.com/langchain-ai/langchain/issues/2623
1,659,884,786
2,623
[ "hwchase17", "langchain" ]
https://python.langchain.com/en/latest/modules/agents/how_to_guides.html None of the "How-To Guides" have working links? I get a 404 for all of them.
Missing How-To guides (404)
https://api.github.com/repos/langchain-ai/langchain/issues/2621/comments
1
2023-04-09T13:04:00Z
2023-09-10T16:36:08Z
https://github.com/langchain-ai/langchain/issues/2621
1,659,869,672
2,621
[ "hwchase17", "langchain" ]
I really, really love langchain. But you are moving too fast, releasing integration after integration without documenting the existing stuff enough or explaining how to implement real-life use cases. Here is what I am failing to do, probably one of the most basic tasks: if my Redis server does not have a specific index, create one; otherwise load from the existing index. There is a `_check_index_exists` method in the lib. There is also a call to `create_index`, but it is buried inside `from_texts`. Not sure how to proceed from here.
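In case it helps, here is roughly the flow I am after, checking for the index with redis-py and only embedding when it is missing; the langchain constructor calls mirror the current signatures as far as I can tell and may need adjusting:

```python
# Sketch: create the index on first run, reattach to it afterwards.
import redis
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

REDIS_URL = "redis://localhost:6379"
INDEX_NAME = "my_docs"

client = redis.from_url(REDIS_URL)
embeddings = OpenAIEmbeddings()


def index_exists(name: str) -> bool:
    try:
        client.ft(name).info()  # raises ResponseError if the index is missing
        return True
    except redis.exceptions.ResponseError:
        return False


if index_exists(INDEX_NAME):
    # Reattach to the existing index instead of re-embedding everything.
    store = Redis(redis_url=REDIS_URL, index_name=INDEX_NAME,
                  embedding_function=embeddings.embed_query)
else:
    store = Redis.from_texts(["hello world"], embeddings,
                             redis_url=REDIS_URL, index_name=INDEX_NAME)
```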
Redis: can not check if index exists and can not create index if it does not
https://api.github.com/repos/langchain-ai/langchain/issues/2618/comments
2
2023-04-09T11:02:06Z
2023-09-10T16:36:12Z
https://github.com/langchain-ai/langchain/issues/2618
1,659,836,683
2,618
[ "hwchase17", "langchain" ]
Let's say I have two sentences, each with a title. Whenever I ask for the first title it gives me an answer, but for the second one it just says: "I'm sorry, do you have any other questions?" 😁😀
The embedding is sometimes missing information
https://api.github.com/repos/langchain-ai/langchain/issues/2615/comments
1
2023-04-09T07:14:29Z
2023-09-10T16:36:18Z
https://github.com/langchain-ai/langchain/issues/2615
1,659,778,527
2,615
[ "hwchase17", "langchain" ]
I'm highly skeptical that `ConversationBufferMemory` is actually needed compared to `ConversationBufferWindowMemory`. There are two main issues with it:

1. As usage continues, the list in `chat_memory` grows without bound (this is actually common to both classes at the moment, which seems very weird).
2. When loading, the entire chat history is loaded, which does not match the idea of a context window for a limited-size prompt.

If there is no clear purpose or intended application for this class, it should be combined with `ConversationBufferWindowMemory` into a single class that clearly defines the overall memory usage limit.
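For comparison, a minimal sketch of the windowed variant, which only ever loads the last `k` exchanges into the prompt:

```python
# The windowed memory caps what gets loaded into the prompt, which is the
# behaviour argued for above.
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=3)  # keep only the last 3 exchanges
memory.save_context({"input": "hi"}, {"output": "hello"})
memory.save_context({"input": "how are you?"}, {"output": "fine, thanks"})
print(memory.load_memory_variables({}))  # at most the last k turns appear here
```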
Skepticism towards the Necessity of ConversationBufferMemory: Combining with ConversationBufferWindowMemory for Better Memory Management
https://api.github.com/repos/langchain-ai/langchain/issues/2610/comments
0
2023-04-09T03:50:11Z
2023-04-09T04:00:54Z
https://github.com/langchain-ai/langchain/issues/2610
1,659,737,255
2,610
[ "hwchase17", "langchain" ]
I'm trying to use `WeaviateHybridSearchRetriever` in a `ConversationalRetrievalChain` with `return_source_documents=True`, but it doesn't return the source in the metadata; I get `KeyError: 'source'`.

```python
WEAVIATE_URL = "http://localhost:8080"
client = weaviate.Client(
    url=WEAVIATE_URL,
)
retriever = WeaviateHybridSearchRetriever(client, index_name="langchain", text_key="text")

qa = ConversationalRetrievalChain(
    retriever=retriever,
    combine_docs_chain=combine_docs_chain,
    question_generator=question_generator_chain,
    callback_manager=async_callback_manager,
    verbose=True,
    return_source_documents=True,
    max_tokens_limit=4096,
)

result = qa({"question": question, "chat_history": chat_history})
source_file = os.path.basename(result["source_documents"][0].metadata["source"])
```
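A defensive sketch in the meantime, continuing from the snippet above: guard the metadata access rather than assuming every returned document carries a `source` key (the key only exists if it was stored when the objects were indexed):

```python
# Sketch: avoid the KeyError while the retriever does not surface the field.
import os

result = qa({"question": question, "chat_history": chat_history})

source_docs = result.get("source_documents", [])
if source_docs:
    source = source_docs[0].metadata.get("source")
    source_file = os.path.basename(source) if source else "<unknown source>"
else:
    source_file = "<no documents returned>"
```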
Weaviate Hybrid Search doesn't return source
https://api.github.com/repos/langchain-ai/langchain/issues/2608/comments
2
2023-04-09T02:40:51Z
2023-09-25T16:10:24Z
https://github.com/langchain-ai/langchain/issues/2608
1,659,722,003
2,608
[ "hwchase17", "langchain" ]
When running the following command there is an error about a missing module.

Command:

```python
from langchain.chains.summarize import load_summarize_chain
```

Error:

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-9-d212b56df87d> in <module>
----> 1 from langchain.chains.summarize import load_summarize_chain
      2 chain = load_summarize_chain(llm, chain_type="map_reduce")
      3 chain.run(docs)

ModuleNotFoundError: No module named 'langchain.chains.summarize'
```

I am following the instructions in this notebook: https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html

I am new to this; after reading some help I installed langchain as follows: `pip install langchain == 0.0.135`
Module not found on summarization - following instructions from documentation
https://api.github.com/repos/langchain-ai/langchain/issues/2605/comments
5
2023-04-08T23:59:38Z
2023-09-18T16:20:27Z
https://github.com/langchain-ai/langchain/issues/2605
1,659,681,544
2,605
[ "hwchase17", "langchain" ]
https://github.com/hwchase17/langchain/blame/master/docs/modules/agents/agents/custom_agent.ipynb#L12 It says "three" but then lists only two points afterwards.
No it doesn't?
https://api.github.com/repos/langchain-ai/langchain/issues/2604/comments
1
2023-04-08T21:56:42Z
2023-09-10T16:36:27Z
https://github.com/langchain-ai/langchain/issues/2604
1,659,656,997
2,604
[ "hwchase17", "langchain" ]
The typical way agents decide what tool to use is by putting a description of the tool in a prompt. But what if there are too many tools to do that? You can [do a retrieval step to get a smaller candidate set of tools](https://python.langchain.com/en/latest/modules/agents/agents/custom_agent_with_tool_retrieval.html) or you can use [Toolformer,](https://arxiv.org/abs/2302.04761) a model trained to decide which tools to call, when to call them, what arguments to pass, and how to best incorporate the results into future token predictions. Here are several implementations: - [toolformer-pytorch](https://github.com/lucidrains/toolformer-pytorch) by @lucidrains - [toolformer](https://github.com/conceptofmind/toolformer) by @conceptofmind - [toolformer-zero](https://github.com/minosvasilias/toolformer-zero) by @minosvasilias - [toolformer](https://github.com/xrsrke/toolformer) by @xrsrke - [simple-toolformer](https://github.com/mrcabbage972/simple-toolformer) by @mrcabbage972 Also, check out this awesome Toolformer dataset: - [github.com/teknium1/GPTeacher/tree/main/Toolformer](https://github.com/teknium1/GPTeacher/tree/main/Toolformer)
Toolformer
https://api.github.com/repos/langchain-ai/langchain/issues/2603/comments
5
2023-04-08T21:29:31Z
2023-09-27T16:09:28Z
https://github.com/langchain-ai/langchain/issues/2603
1,659,651,195
2,603
[ "hwchase17", "langchain" ]
Many use-cases are companies (in different industries) integrating chatgpt api with calling their own in-house services (via http, etc), and LLMs(ChatGPT) have no knowledge of these services. Just wanted to check current prompts for agents (e.g https://github.com/hwchase17/langchain/blob/master/langchain/agents/conversational_chat/prompt.py), do they work for in-house services? Read the [doc](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/openapi.html) for OpenAPI agents, it supports APIs conformant to the OpenAPI/Swagger specification, does it support in-house API as well (suppose in-house APIs also conformant to the OpenAPI/Swagger specification)? Or maybe like how `langchain` supports ChatGPT plugins (https://python.langchain.com/en/latest/modules/agents/tools/examples/chatgpt_plugins.html?highlight=chatgpt%20plugin#chatgpt-plugins), providing in-house API provides detailed API spec.
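To make the question concrete, here is a sketch of what I mean by calling an in-house service: wrapping an internal HTTP endpoint as a custom tool for the conversational chat agent. The URL, payload shape and tool description are placeholders:

```python
# Sketch: expose an in-house HTTP service to a conversational agent as a tool.
import requests
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory


def lookup_order(order_id: str) -> str:
    # Placeholder internal endpoint; real services would add auth, retries, etc.
    resp = requests.get(f"https://internal.example.com/orders/{order_id}", timeout=10)
    resp.raise_for_status()
    return resp.text


tools = [
    Tool(
        name="order-lookup",
        func=lookup_order,
        description="Look up an internal order by its id and return its status.",
    )
]

agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent="chat-conversational-react-description",
    memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True),
    verbose=True,
)
```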
Feature request: conversation agent (chat mode) to support in-house (http) service
https://api.github.com/repos/langchain-ai/langchain/issues/2598/comments
3
2023-04-08T20:12:27Z
2023-09-10T16:36:32Z
https://github.com/langchain-ai/langchain/issues/2598
1,659,633,421
2,598
[ "hwchase17", "langchain" ]
‘This one is right in the middle of the action - the plugin market. It is the Android to OpenAI's iOS. Everyone needs a second option. Another thing people seem to forget is that Langchain can use LLMs that aren't made by OpenAI. If OpenAI goes under, or a great open-source model comes onto the scene, Langchain can still do its thing.’ Just seen from [here](https://news.ycombinator.com/item?id=35442483)
ChatGPT has plugins: what will be the impact on Langchain?
https://api.github.com/repos/langchain-ai/langchain/issues/2596/comments
4
2023-04-08T19:03:55Z
2023-09-18T16:20:33Z
https://github.com/langchain-ai/langchain/issues/2596
1,659,615,836
2,596
[ "hwchase17", "langchain" ]
Right now the langchain chroma vectorstore doesn't allow you to adjust the metadata attribute on the create collection method of the ChromaDB client so you can't adjust the formula for distance calculations. Chroma DB introduced the ability to add metadata to collections to tell the index which distance calculation is used in release https://github.com/chroma-core/chroma/releases/tag/0.3.15 Specifically in this pull request: https://github.com/chroma-core/chroma/pull/245 Langchain doesn't provide a way to adjust this vectorstore's distance calculation formula. Referenced here: https://github.com/hwchase17/langchain/blob/2f49c96532725fdb48ea11417270245e694574d1/langchain/vectorstores/chroma.py#L84
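For reference, this is the raw chromadb call that the wrapper currently has no way to forward; the collection-level metadata key selects the distance function (per the chroma release linked above):

```python
# Sketch: creating a collection with an explicit distance function via chromadb.
import chromadb

client = chromadb.Client()
collection = client.create_collection(
    name="langchain",
    metadata={"hnsw:space": "cosine"},  # or "l2" / "ip"
)
```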
ChromaDB Vectorstore: Customize distance calculations
https://api.github.com/repos/langchain-ai/langchain/issues/2595/comments
3
2023-04-08T19:01:06Z
2023-09-26T16:09:56Z
https://github.com/langchain-ai/langchain/issues/2595
1,659,615,132
2,595
[ "hwchase17", "langchain" ]
I'm trying to use `langchain` to replace my current direct use of Qdrant, in order to benefit from the other tools in `langchain`, however I'm stuck. I already have this code that creates Qdrant collections on demand:

```python
client.delete_collection(collection_name="articles")
client.recreate_collection(
    collection_name="articles",
    vectors_config={
        "content": rest.VectorParams(
            distance=rest.Distance.COSINE,
            size=1536,
        ),
    },
)
client.upsert(
    collection_name="articles",
    points=[
        rest.PointStruct(
            id=i,
            vector={
                "content": articles_embeddings[article],
            },
            payload={
                "name": article,
                "content": articles_content[article],
            },
        )
        for i, article in enumerate(ARTICLES)
    ],
)
```

Now, if I try to re-use `client` as explained in https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html#reusing-the-same-collection I hit the following error:

```
Wrong input: Default vector params are not specified in config
```

I seem to be able to overcome this by modifying the code of the `Qdrant` class in `langchain`; however, I'm asking whether there is an argument I overlooked that would let `langchain` work with this Qdrant client config, or else I would like to contribute a working solution that involves adding a new parameter.
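One workaround sketch (it gives up the named-vector layout, so it is not a real solution): recreate the collection with the default, unnamed vector params that the wrapper expects, then attach the wrapper to the existing client:

```python
# Sketch: use an unnamed (default) vector so the langchain Qdrant wrapper can
# attach to the collection; collection name and sizes are from the example above.
from qdrant_client import QdrantClient
from qdrant_client.http import models as rest

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

client = QdrantClient(host="localhost", port=6333)
client.recreate_collection(
    collection_name="articles",
    vectors_config=rest.VectorParams(size=1536, distance=rest.Distance.COSINE),
)

embeddings = OpenAIEmbeddings()
store = Qdrant(
    client=client,
    collection_name="articles",
    embedding_function=embeddings.embed_query,
)
```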
[Q] How to re-use Qdrant collection data that was created separately with a non-default vector name?
https://api.github.com/repos/langchain-ai/langchain/issues/2594/comments
9
2023-04-08T18:37:50Z
2024-07-05T08:46:33Z
https://github.com/langchain-ai/langchain/issues/2594
1,659,608,651
2,594
[ "hwchase17", "langchain" ]
I think the prompt module should be extended to support generating new prompts. This would create a better sandbox for evaluating different prompt templates without writing 20+ variations by hand. The core idea is to call a llm to alter a base prompt template while respecting the input variables according to an instruction set. Maybe this should be its own chain instead of a class in the prompt module. ![Blank diagram(3)](https://user-images.githubusercontent.com/22711644/230733859-a3bda1a5-af7f-491c-ae78-191452550257.png) This scheme for generating prompts can be used with evaluation steps to assist in prompt tuning when combined with evaluations. This could be used with a heuristic search to optimize prompts based on specific metrics: total prompt token count, accuracy, ect. ![Blank diagram(1)](https://user-images.githubusercontent.com/22711644/230733900-58d99683-cfac-4112-ac54-8a9cf0d12e3b.png) I'm wondering if anyone has seen this type of process implemented before or is currently working on it. Starting to POC this type of class today. edit: wording
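A rough sketch of the "prompt rewriter" box in the diagrams above; the meta-prompt wording is illustrative and none of this exists in langchain today:

```python
# Sketch: an LLMChain that rewrites a base prompt template according to an
# instruction while preserving the template's input variables.
from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI

meta_template = """You rewrite prompt templates.

Rewrite the base template below according to the instruction, keeping every
input variable (marked with curly braces) exactly as it appears.

Instruction: {instruction}
Base template:
{base_template}

Rewritten template:"""

rewriter = LLMChain(
    llm=OpenAI(temperature=0.7),
    prompt=PromptTemplate(
        template=meta_template,
        input_variables=["instruction", "base_template"],
    ),
)

variant = rewriter.run(
    instruction="Make it more concise and add one worked example.",
    base_template="Answer the question using the context.\nContext: {context}\nQuestion: {question}",
)
print(variant)
```

Each generated variant could then be scored (token count, answer accuracy, etc.) to drive the heuristic search described above.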
Feature request: prompt generator to assist in tuning
https://api.github.com/repos/langchain-ai/langchain/issues/2593/comments
1
2023-04-08T17:14:36Z
2023-09-10T16:36:43Z
https://github.com/langchain-ai/langchain/issues/2593
1,659,586,074
2,593
[ "hwchase17", "langchain" ]
Hi, When I run this: from langchain.llms import LlamaCpp from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm = LlamaCpp(model_path="models/gpt4all-lora-quantized.bin", n_ctx=2048) llm_chain = LLMChain(prompt=prompt, llm=llm) print(llm_chain("tell me about Japan")) I got the below error: --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_13664/3459006187.py in <module> 11 12 llm_chain = LLMChain(prompt=prompt, llm=llm) ---> 13 print(llm_chain("tell me about Japan")) f:\python39\lib\site-packages\langchain\chains\base.py in __call__(self, inputs, return_only_outputs) 114 except (KeyboardInterrupt, Exception) as e: 115 self.callback_manager.on_chain_error(e, verbose=self.verbose) --> 116 raise e 117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose) 118 return self.prep_outputs(inputs, outputs, return_only_outputs) f:\python39\lib\site-packages\langchain\chains\base.py in __call__(self, inputs, return_only_outputs) 111 ) 112 try: --> 113 outputs = self._call(inputs) 114 except (KeyboardInterrupt, Exception) as e: 115 self.callback_manager.on_chain_error(e, verbose=self.verbose) f:\python39\lib\site-packages\langchain\chains\llm.py in _call(self, inputs) 55 56 def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]: ---> 57 return self.apply([inputs])[0] 58 59 def generate(self, input_list: List[Dict[str, Any]]) -> LLMResult: f:\python39\lib\site-packages\langchain\chains\llm.py in apply(self, input_list) 116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]: 117 """Utilize the LLM generate method for speed gains.""" --> 118 response = self.generate(input_list) 119 return self.create_outputs(response) 120 f:\python39\lib\site-packages\langchain\chains\llm.py in generate(self, input_list) 60 """Generate LLM result from inputs.""" 61 prompts, stop = self.prep_prompts(input_list) ---> 62 return self.llm.generate_prompt(prompts, stop) 63 64 async def agenerate(self, input_list: List[Dict[str, Any]]) -> LLMResult: f:\python39\lib\site-packages\langchain\llms\base.py in generate_prompt(self, prompts, stop) 105 ) -> LLMResult: 106 prompt_strings = [p.to_string() for p in prompts] --> 107 return self.generate(prompt_strings, stop=stop) 108 109 async def agenerate_prompt( f:\python39\lib\site-packages\langchain\llms\base.py in generate(self, prompts, stop) 138 except (KeyboardInterrupt, Exception) as e: 139 self.callback_manager.on_llm_error(e, verbose=self.verbose) --> 140 raise e 141 self.callback_manager.on_llm_end(output, verbose=self.verbose) 142 return output f:\python39\lib\site-packages\langchain\llms\base.py in generate(self, prompts, stop) 135 ) 136 try: --> 137 output = self._generate(prompts, stop=stop) 138 except (KeyboardInterrupt, Exception) as e: 139 self.callback_manager.on_llm_error(e, verbose=self.verbose) f:\python39\lib\site-packages\langchain\llms\base.py in _generate(self, prompts, stop) 322 generations = [] 323 for prompt in prompts: --> 324 text = self._call(prompt, stop=stop) 325 generations.append([Generation(text=text)]) 326 return LLMResult(generations=generations) f:\python39\lib\site-packages\langchain\llms\llamacpp.py in _call(self, prompt, stop) 182 183 """Call the Llama model and return the output.""" --> 184 text = self.client( 185 prompt=prompt, 186 max_tokens=params["max_tokens"], 
f:\python39\lib\site-packages\llama_cpp\llama.py in __call__(self, prompt, suffix, max_tokens, temperature, top_p, logprobs, echo, stop, repeat_penalty, top_k, stream) 525 Response object containing the generated text. 526 """ --> 527 return self.create_completion( 528 prompt=prompt, 529 suffix=suffix, f:\python39\lib\site-packages\llama_cpp\llama.py in create_completion(self, prompt, suffix, max_tokens, temperature, top_p, logprobs, echo, stop, repeat_penalty, top_k, stream) 486 chunks: Iterator[CompletionChunk] = completion_or_chunks 487 return chunks --> 488 completion: Completion = next(completion_or_chunks) # type: ignore 489 return completion 490 f:\python39\lib\site-packages\llama_cpp\llama.py in _create_completion(self, prompt, suffix, max_tokens, temperature, top_p, logprobs, echo, stop, repeat_penalty, top_k, stream) 303 stream: bool = False, 304 ) -> Union[Iterator[Completion], Iterator[CompletionChunk],]: --> 305 assert self.ctx is not None 306 completion_id = f"cmpl-{str(uuid.uuid4())}" 307 created = int(time.time()) AssertionError:
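A guess rather than a diagnosis: the failing assertion means the llama.cpp context was never created, which usually means the model file itself could not be loaded (older gpt4all-lora `.bin` files typically need converting to the current ggml format first). A quick sketch to isolate that from langchain:

```python
# Sketch: try loading the model with llama-cpp-python directly; if this fails,
# the problem is the model file / format rather than the langchain wrapper.
from llama_cpp import Llama

llm = Llama(model_path="models/gpt4all-lora-quantized.bin", n_ctx=2048)
print(llm("Q: What is the capital of Japan? A:", max_tokens=32))
```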
AssertionError in LlamaCpp
https://api.github.com/repos/langchain-ai/langchain/issues/2592/comments
3
2023-04-08T16:59:53Z
2023-09-26T16:10:02Z
https://github.com/langchain-ai/langchain/issues/2592
1,659,582,034
2,592
[ "hwchase17", "langchain" ]
The foundational chain and ChromaDB possess the capacity for persistence. It would be beneficial to make this persistence feature accessible to the higher-level VectorstoreIndexCreator. By doing so, we can repurpose saved indexes, making the service more easily scalable, particularly for handling extensive documents.
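A partial sketch of what is reachable today, if I read the current code correctly: `vectorstore_kwargs` is forwarded to the vector store, so Chroma can be told to persist, but reloading still means bypassing the index creator and opening the persisted store directly:

```python
# Sketch: persist via the index creator, reload via the vector store itself.
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma

creator = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory": "./index_db"})
index = creator.from_loaders([TextLoader("state_of_the_union.txt")])
index.vectorstore.persist()

# Later, or in another process: reopen the persisted collection without re-embedding.
store = Chroma(persist_directory="./index_db", embedding_function=OpenAIEmbeddings())
```

Exposing the reload half through `VectorstoreIndexCreator` itself is the missing piece this request describes.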
Feature Request: Allow VectorStoreIndexCreator to retrieve stored vector indexes
https://api.github.com/repos/langchain-ai/langchain/issues/2591/comments
3
2023-04-08T16:53:19Z
2023-09-28T16:08:46Z
https://github.com/langchain-ai/langchain/issues/2591
1,659,580,116
2,591
[ "hwchase17", "langchain" ]
Good afternoon, I am having a problem when it comes to streaming responses. I am trying to use the example provided by the langchain docs and I can't seem to get it working without editing langchain's package.json file. Here is the code I ran:

```js
import * as env from "dotenv"
import { OpenAI } from "langchain/llms"
import { CallbackManager } from "langchain/dist/callbacks"

env.config()
//const apiKey = process.env.OPEN_API_KEY

const chat = new OpenAI({
  streaming: true,
  callbackManager: CallbackManager.fromHandlers({
    async handleLLMNewToken(token) {
      console.log(token);
    },
  }),
});

const response = await chat.call("Write me a song about sparkling water.");
console.log(response);
```

And the error I received when running it:

```
node:internal/errors:490
    ErrorCaptureStackTrace(err);
    ^

Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './dist/callbacks' is not defined by "exports" in home/jg//Langchain_Test_Programs/LLM_Quickstart/node_modules/langchain/package.json imported from home/jg//Langchain_Test_Programs/LLM_Quickstart/app.js
    at new NodeError (node:internal/errors:399:5)
    at exportsNotFound (node:internal/modules/esm/resolve:266:10)
    at packageExportsResolve (node:internal/modules/esm/resolve:602:9)
    at packageResolve (node:internal/modules/esm/resolve:777:14)
    at moduleResolve (node:internal/modules/esm/resolve:843:20)
    at defaultResolve (node:internal/modules/esm/resolve:1058:11)
    at nextResolve (node:internal/modules/esm/hooks:654:28)
    at Hooks.resolve (node:internal/modules/esm/hooks:309:30)
    at ESMLoader.resolve (node:internal/modules/esm/loader:312:26)
    at ESMLoader.getModuleJob (node:internal/modules/esm/loader:172:38) {
  code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}
```

I searched for a while to find a solution and eventually worked around it by renaming the `./callbacks` entry in the exports section of langchain's package.json to `./dist/callbacks`, i.e. I changed

```json
"./callbacks": {
  "types": "./callbacks.d.ts",
  "import": "./callbacks.js",
  "require": "./callbacks.cjs"
},
```

to

```json
"./dist/callbacks": {
  "types": "./callbacks.d.ts",
  "import": "./callbacks.js",
  "require": "./callbacks.cjs"
},
```

This worked for me, but then I ran into two more issues: A. the text takes forever to start streaming, and B. once it does stream, it is returned at an extremely fast pace, so fast that I can't keep up with it when trying to read. I don't know if that is how it's supposed to be, but I was hoping for the text to be returned slowly, like it is on the official ChatGPT site. I know there is probably a much simpler fix but I can't seem to find it. Any help will be appreciated, thanks!
Receiving a "node:internal/errors:490" error when running the streaming example code from the langchain docs
https://api.github.com/repos/langchain-ai/langchain/issues/2590/comments
3
2023-04-08T16:24:18Z
2023-09-25T16:10:50Z
https://github.com/langchain-ai/langchain/issues/2590
1,659,572,068
2,590
[ "hwchase17", "langchain" ]
Hi, I am trying to use FAISS to do `similarity_search`, but it fails with the following error:

```
>>> db.similarity_search("123")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 207, in similarity_search
    docs_and_scores = self.similarity_search_with_score(query, k)
  File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 176, in similarity_search_with_score
    embedding = self.embedding_function(query)
  File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 279, in embed_query
    embedding = self._embedding_func(text, engine=self.query_model_name)
  File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 235, in _embedding_func
    return self._get_len_safe_embeddings([text], engine=engine)[0]
  File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 193, in _get_len_safe_embeddings
    encoding = tiktoken.model.encoding_for_model(self.document_model_name)
AttributeError: module 'tiktoken' has no attribute 'model'
```

Can anyone give me some advice?
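A guess, with a quick check sketch: this `AttributeError` usually points at an outdated `tiktoken` install (the `model` submodule with `encoding_for_model` only exists in newer releases), so upgrading it (`pip install --upgrade tiktoken`) and re-checking may be enough:

```python
# Sketch: verify the tiktoken install exposes the encoding lookup langchain uses.
import tiktoken

enc = tiktoken.encoding_for_model("text-embedding-ada-002")
print(enc.encode("123"))
```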
FAISS similarity_search does not work
https://api.github.com/repos/langchain-ai/langchain/issues/2587/comments
3
2023-04-08T15:57:25Z
2023-09-25T16:10:55Z
https://github.com/langchain-ai/langchain/issues/2587
1,659,564,331
2,587