Each record in this dump has the following schema (ranges are observed min/max values):

- issue_owner_repo: list of 2 strings (owner, repo)
- issue_body: string, 0 to 261k chars
- issue_title: string, 1 to 925 chars
- issue_comments_url: string, 56 to 81 chars
- issue_comments_count: int64, 0 to 2.5k
- issue_created_at: string, 20 chars
- issue_updated_at: string, 20 chars
- issue_html_url: string, 37 to 62 chars
- issue_github_id: int64, 387k to 2.46B
- issue_number: int64, 1 to 127k
[ "hwchase17", "langchain" ]
Summarize returns an empty result when the intermediate documents contain a hashtag `#`. Otherwise, using a different prompt which doesn't generate hashtags, all is cool.

```
prompt_template = """Write a tweet from the following text:

{text}
"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=PROMPT, combine_prompt=PROMPT)
chain.run(docs)
```
[Chain] Summarize returns empty result when intermediate documents contain hashtag `#`
https://api.github.com/repos/langchain-ai/langchain/issues/603/comments
4
2023-01-13T00:24:38Z
2023-01-24T08:29:01Z
https://github.com/langchain-ai/langchain/issues/603
1,531,528,097
603
[ "hwchase17", "langchain" ]
Pinecone's vector DB allows for per-vector metadata which is filterable in index.query() ([pinecone docs](https://docs.pinecone.io/docs/metadata-filtering#querying-an-index-with-metadata-filters)). I suggest exposing this filter argument in Pinecone.similarity_search() and Pinecone.similarity_search_with_score().
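A minimal sketch of what the exposed argument could look like at the call site; the `filter` kwarg is the proposal here (not the current langchain signature), while the filter dict uses Pinecone's documented operator syntax:

```python
# Hypothetical call once the kwarg is exposed; `docsearch` is an existing
# langchain Pinecone vectorstore.
docs = docsearch.similarity_search(
    "What did the author say about deadlines?",
    k=4,
    filter={"genre": {"$eq": "documentary"}, "year": {"$gte": 2019}},  # Pinecone metadata filter
)
```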
Add metadata filtering to Pinecone index query
https://api.github.com/repos/langchain-ai/langchain/issues/600/comments
0
2023-01-12T22:11:04Z
2023-01-13T05:15:53Z
https://github.com/langchain-ai/langchain/issues/600
1,531,431,925
600
[ "hwchase17", "langchain" ]
I occasionally get this error when using the 'open-meteo-api' tool: `openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6536 tokens (6280 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.`
Error when using the 'open-meteo-api' tool
https://api.github.com/repos/langchain-ai/langchain/issues/598/comments
5
2023-01-12T17:50:38Z
2023-09-25T16:20:23Z
https://github.com/langchain-ai/langchain/issues/598
1,531,137,139
598
[ "hwchase17", "langchain" ]
What is the best way to build an agent that can read from courtlistener.com & perform inference based on it? Generally curious if it would be possible to integrate courtlistener.com's API into the Langchain ecosystem. They already have their own internal search engine accessible via API & the web, but would love to have a connector to it. https://www.courtlistener.com/help/api/rest/ https://www.courtlistener.com/api/rest/v3/
[Feature Request] Courtlistener.com API
https://api.github.com/repos/langchain-ai/langchain/issues/589/comments
2
2023-01-12T00:43:40Z
2023-08-24T16:20:22Z
https://github.com/langchain-ai/langchain/issues/589
1,529,917,685
589
[ "hwchase17", "langchain" ]
It would be convenient if the PythonREPL returned the last output of the REPL when there's no stdout. Also, improve the tool description to note that all Python outputs must use `print()`.
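A minimal sketch of the requested behavior, assuming a standalone helper rather than langchain's actual PythonREPL implementation (names and structure are illustrative; empty input is not handled):

```python
import ast
import io
from contextlib import redirect_stdout

def run_repl(command: str) -> str:
    """Run `command`; return captured stdout, or, when nothing was printed,
    the repr of the final expression's value (REPL-style)."""
    tree = ast.parse(command)
    *body, last = tree.body
    env: dict = {}
    buf = io.StringIO()
    with redirect_stdout(buf):
        if body:
            exec(compile(ast.Module(body=body, type_ignores=[]), "<repl>", "exec"), env)
        if isinstance(last, ast.Expr):
            # Evaluate a trailing expression separately so its value can be returned.
            value = eval(compile(ast.Expression(body=last.value), "<repl>", "eval"), env)
        else:
            exec(compile(ast.Module(body=[last], type_ignores=[]), "<repl>", "exec"), env)
            value = None
    printed = buf.getvalue()
    return printed if printed else repr(value)

# run_repl("x = 2\nx + 3") -> "5"; run_repl("print('hi')") -> "hi\n"
```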
Improve python repl tool
https://api.github.com/repos/langchain-ai/langchain/issues/586/comments
4
2023-01-11T20:44:57Z
2023-09-28T16:12:59Z
https://github.com/langchain-ai/langchain/issues/586
1,529,670,774
586
[ "hwchase17", "langchain" ]
Hi, I am always grateful; thank you for opening such a great repo! I read the code of [ConversationalAgent](https://github.com/hwchase17/langchain/blob/74932f25167331101e6ca22b0612ccd57f78b81b/langchain/agents/conversational/base.py#L16) and found that `human_prefix` is not used in [FORMAT_INSTRUCTIONS](https://github.com/hwchase17/langchain/blob/74932f25167331101e6ca22b0612ccd57f78b81b/langchain/agents/conversational/prompt.py#L14), so I wonder whether `human_prefix` is necessary or not. Thank you!
Is human_prefix necessary for ConversationalAgent?
https://api.github.com/repos/langchain-ai/langchain/issues/571/comments
1
2023-01-10T02:39:55Z
2023-08-24T16:20:28Z
https://github.com/langchain-ai/langchain/issues/571
1,526,667,087
571
[ "hwchase17", "langchain" ]
When using a conversational agent, a correct answer from a search tool was ignored. The search was for "who is the monarch of England?" Here's the output:

```
> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Search
Action Input: monarch of England
Observation: Charles III
Thought: Do I need to use a tool? No
AI: The current monarch of England is Queen Elizabeth II.
> Finished chain.
```

Here's the relevant chain code:

```
llm = OpenAI(temperature=0)
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        "Search",
        SerpAPIWrapper().run,
        "A search engine. Useful for when you need to answer questions about current events. Input should be a search query.",
    ),
    Tool(
        "PAL-MATH",
        PALChain.from_math_prompt(llm).run,
        "A language model that is really good at solving complex word math problems. Input should be a math problem.",
    ),
]
memory = ConversationBufferMemory(memory_key="chat_history")
chain = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True, memory=memory)
```
Conversational Agent ignored correct answer from Search
https://api.github.com/repos/langchain-ai/langchain/issues/566/comments
5
2023-01-09T18:48:18Z
2023-09-25T16:20:34Z
https://github.com/langchain-ai/langchain/issues/566
1,526,129,520
566
[ "hwchase17", "langchain" ]
I'm trying to implement an agent that has a Yelp tool to find the best restaurants in an area. I have been using LLMRequests, but it seems as though the BeautifulSoup output of the requests response is very noisy (see below). For something like this, should one implement a different chain (e.g. a 'Yelp' chain that integrates with the Yelp API) instead? ` 'Between >>> and <<< are the raw search result text from yelp\n Extract the names of top restaurants or say "not found" if there are no results.\n >>> \n\n\nSearch Businesses In San Francisco, CA - Yelp\nYelpFor BusinessesWrite a ReviewLog InSign UpRestaurantsHome ServicesAuto ServicesMoreMoreFilters$$$$$$$$$$SuggestedOpen Now\xa0--:--WaitlistFeaturesOffering a DealOffers DeliveryOffers TakeoutGood for KidsSee allNeighborhoodsAlamo SquareAnza VistaAshbury HeightsBalboa TerraceSee allDistanceBird\'s-eye ViewDriving (5 mi.)Biking (2 mi.)Walking (1 mi.)Within 4 blocksBrowsing San Francisco, CA businessesSort:RecommendedAllPriceOpen NowWaitlist1.\xa0Bi-Rite Creamery10004Ice Cream & Frozen Yogurt$$MissionThis is a placeholder“Nice little shop near Mission Dolores Park! Like a 1 min walk away\n\nThey had a buncha unique flavors to choose from, I tried banana, cream brûlée, honey…”\xa0more2.\xa0Brenda’s French Soul Food11895Breakfast & BrunchSouthernCajun/Creole$$TenderloinThis is a placeholder“Today I go there with my coworker and then have orders foods \nAfter having the food \nWe settle bills and the server charges more than usual price \nAfter that I…”\xa0more3.\xa0Gary Danko5800American (New)FrenchWine Bars$$$$Russian HillThis is a placeholder“Amazing service and food! An error was made with our reservation so we showed up and weren\'t on their books! After showing proof of the res, they offered for…”\xa0more4.\xa0Tartine Bakery8666BakeriesCafesDesserts$$MissionThis is a placeholder“So cute here! The treats were so delicious. Honestly, the brownies were so very delish! We had a a good variety of treats to enjoy and they were all really…”\xa0more5.\xa0Mitchells Ice Cream4623Ice Cream & Frozen YogurtCustom Cakes$Bernal HeightsThis is a placeholder6.\xa0Hog Island Oyster6782SeafoodSeafood MarketsLive/Raw Food$$EmbarcaderoThis is a placeholder“Great place. Fresh oysters ! I love the area and the view of the bay! Great food and ambiance!”\xa0more7.\xa0House of Prime Rib8300American (Traditional)SteakhousesWine Bars$$$Nob HillThis is a placeholder“I\'m not going to say much but this is basically the best Prime Rib restaurant in America. It was so good that I brought my mom here from out of town. Here is…”\xa0more8.\xa0Fog Harbor Fish House8700SeafoodWine BarsCocktail Bars$$Fisherman\'s WharfThis is a placeholderOutdoor seatingFull barLive wait time: 12 - 22 mins“First time here and I wasn\'t disappointed, the foods are yummy. The staffs are very good specially to our server Shawn.”\xa0moreFind a Table9.\xa0B Patisserie3196BakeriesPatisserie/Cake ShopMacarons$$Lower Pacific HeightsThis is a placeholder“One of my absolute favorite pastry shops in SF. Their Kouign amann are the best that I\'ve ever tasted. They have classic flavors (plain and chocolate) and also…”\xa0more10.\xa0Burma Superstar7367Burmese$$Inner RichmondThis is a placeholder“Great food. great service. \nOutdoor seating has heater when it\'s cold out. \nHighly recommend the place.”\xa0moreStart Order1234567891 of 24Can\'t find the business?Adding a business to Yelp is always free.Add businessGot search feedback? 
Help us improve.More NearbyActive LifeArts & EntertainmentAutomotiveBeauty & SpasEducationEvent Planning & ServicesFoodHealth & MedicalHome ServicesHotels & TravelLocal FlavorLocal ServicesNightlifePetsProfessional ServicesPublic Services & GovernmentRestaurantsShoppingMore categoriesAboutAbout YelpCareersPressInvestor RelationsTrust & SafetyContent GuidelinesAccessibility StatementTerms of ServicePrivacy PolicyAd ChoicesYour Privacy ChoicesDiscoverYelp Project Cost GuidesCollectionsTalkEventsYelp BlogSupportYelp MobileDevelopersRSSYelp for BusinessYelp for BusinessBusiness Owner LoginClaim your Business PageAdvertise on YelpYelp for Restaurant OwnersTable ManagementBusiness Success StoriesBusiness SupportYelp Blog for BusinessLanguagesEnglishCountriesUnited StatesAboutBlogSupportTermsPrivacy PolicyYour Privacy ChoicesCopyright © 2004–2023 Yelp Inc. Yelp, , and related marks are registered trademarks of Yelp.Some Data By Acxiom\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n <<<\n Extracted:`
Question on LLMRequests
https://api.github.com/repos/langchain-ai/langchain/issues/560/comments
4
2023-01-08T03:34:54Z
2023-11-07T16:09:14Z
https://github.com/langchain-ai/langchain/issues/560
1,524,303,212
560
[ "hwchase17", "langchain" ]
should be doable with callbacks
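For reference, langchain later shipped exactly this as a callback-based context manager; a usage sketch (assuming the `get_openai_callback` helper that landed after this issue was filed):

```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = chain.run("some input")  # any chain/agent call inside the block is counted
print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
```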
count tokens used in chain
https://api.github.com/repos/langchain-ai/langchain/issues/558/comments
4
2023-01-07T20:44:41Z
2023-01-24T08:29:36Z
https://github.com/langchain-ai/langchain/issues/558
1,524,139,524
558
[ "hwchase17", "langchain" ]
This line seems to be incorrect: https://github.com/hwchase17/langchain/blob/1f248c47f36af986be7ea647a6af223c9bb34b2b/langchain/llms/base.py#L112. It should be either `prompts = prompts[missing_prompt_idxs[i]]` or `prompts = missing_prompts[i]`.
Bug with LLM caching
https://api.github.com/repos/langchain-ai/langchain/issues/551/comments
1
2023-01-05T23:37:12Z
2023-01-06T15:30:12Z
https://github.com/langchain-ai/langchain/issues/551
1,521,621,356
551
[ "hwchase17", "langchain" ]
```python
from langchain.agents import load_tools, initialize_agent, get_all_tool_names
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "sberbank-ai/mGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)

news_api_key = "oops my api key"
tool_names = ["news-api", "llm-math"]
tools = load_tools(tool_names, llm=hf, news_api_key=news_api_key)
agent = initialize_agent(tools, hf, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True)
response = agent({"input": "how many people live in Brazil?"})
print(response["intermediate_steps"])
```

I have the code above but it doesn't work as expected; it just says: `ValueError: Could not parse LLM output: ' I know that there are more than 100,000'` What am I doing wrong?
Could not parse LLM output
https://api.github.com/repos/langchain-ai/langchain/issues/549/comments
5
2023-01-05T20:19:27Z
2023-02-04T18:47:40Z
https://github.com/langchain-ai/langchain/issues/549
1,521,353,737
549
[ "hwchase17", "langchain" ]
Hi LangChain Team!! Thank you for this awesome library. Hands down one of the best projects I have seen so far. It would be helpful if there were a way to access provider-specific information when LLMs are loaded as tools as part of an Agent or a Chain. For example, after running this: `llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)` one can run this: `llm_result.llm_output`. But if the above LLM was added as a tool like the following, is there a way to access `some_result.llm_output`? `tools = load_tools(["serpapi", "llm-math"], llm=llm)`
Ability to access provider-specific information from within Chains or Agents
https://api.github.com/repos/langchain-ai/langchain/issues/547/comments
5
2023-01-05T17:25:05Z
2023-09-10T16:46:40Z
https://github.com/langchain-ai/langchain/issues/547
1,521,104,086
547
[ "hwchase17", "langchain" ]
```
ValidationError: 1 validation error for RefineDocumentsChain
return_intermediate_steps
  extra fields not permitted (type=value_error.extra)
```
bug in new `return_intermediate_steps` arg
https://api.github.com/repos/langchain-ai/langchain/issues/545/comments
2
2023-01-05T16:02:14Z
2023-07-14T17:07:22Z
https://github.com/langchain-ai/langchain/issues/545
1,520,982,156
545
[ "hwchase17", "langchain" ]
Something like:

```python
PROMPT = PromptTemplate(
    input_variables=[
        "name",
        "input",
        "time",
    ],
    default_values={
        "name": "John",
        "time": lambda: str(datetime.now()),
    },
    template=some_template,
)
```
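A minimal sketch of how the proposed `default_values` could be resolved at format time (all names mirror the proposal above and are hypothetical, not an existing PromptTemplate feature):

```python
from datetime import datetime

def format_prompt(template: str, input_variables: list, default_values: dict, **kwargs) -> str:
    """Resolve each variable from kwargs, else from default_values;
    callables are invoked so defaults like timestamps stay fresh."""
    values = {}
    for var in input_variables:
        if var in kwargs:
            values[var] = kwargs[var]
        elif var in default_values:
            default = default_values[var]
            values[var] = default() if callable(default) else default
        else:
            raise KeyError(f"missing value for variable {var!r}")
    return template.format(**values)

# format_prompt("{name} asked at {time}: {input}", ["name", "input", "time"],
#               {"name": "John", "time": lambda: str(datetime.now())}, input="hi")
```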
Add option for specifying default values in PromptTemplate (maybe getters?)
https://api.github.com/repos/langchain-ai/langchain/issues/544/comments
2
2023-01-05T07:07:58Z
2023-09-10T16:46:45Z
https://github.com/langchain-ai/langchain/issues/544
1,520,209,619
544
[ "hwchase17", "langchain" ]
Hi, I just noticed a dead link in the documentation. Description: If you are [here](https://langchain.readthedocs.io/en/latest/modules/llms.html) and would like to click on _References_ at the bottom of the page, you end up with a [404](https://langchain.readthedocs.io/reference/modules/llms.html). I think the correct link would be [this](https://langchain.readthedocs.io/en/latest/reference/modules/llms.html) one. Thought I might bring this up :)
Dead link in Modules/LLMs section of documentation
https://api.github.com/repos/langchain-ai/langchain/issues/532/comments
0
2023-01-04T11:00:43Z
2023-01-05T08:52:04Z
https://github.com/langchain-ai/langchain/issues/532
1,518,764,115
532
[ "hwchase17", "langchain" ]
Would it be possible to have the textsplitter ensure the text it's splitting is always <= chunk_size? I'm seeing this issue while trying to embed large documents.
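A minimal sketch of a splitter that guarantees the invariant by hard-splitting oversized pieces (an assumed design, not langchain's actual TextSplitter):

```python
from typing import List

def split_text(text: str, chunk_size: int, separator: str = "\n\n") -> List[str]:
    """Split on `separator`, then hard-split any piece still over chunk_size
    so every returned chunk satisfies len(chunk) <= chunk_size."""
    chunks = []
    for piece in text.split(separator):
        if len(piece) <= chunk_size:
            chunks.append(piece)
        else:
            # Fallback: character-level split so the invariant always holds.
            chunks.extend(piece[i:i + chunk_size] for i in range(0, len(piece), chunk_size))
    return [c for c in chunks if c]
```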
[Feature Request] Have textsplitter always be <= chunk_size
https://api.github.com/repos/langchain-ai/langchain/issues/528/comments
7
2023-01-04T06:35:57Z
2023-09-27T16:15:33Z
https://github.com/langchain-ai/langchain/issues/528
1,518,407,159
528
[ "hwchase17", "langchain" ]
[Here](https://github.com/hwchase17/langchain/blob/master/langchain/llms/base.py#L70), instead of:

```python
generations = [existing_prompts[i] for i in range(len(prompts))]
```

it should be

```python
generations = [
    existing_prompts[i]
    for i in range(len(prompts))
    if i not in missing_prompt_idxs
]
```

@hwchase17 lemme know if I am missing something.
`langchain.llms.base.BaseLLM` has a bug
https://api.github.com/repos/langchain-ai/langchain/issues/527/comments
4
2023-01-04T05:59:06Z
2023-01-05T02:39:07Z
https://github.com/langchain-ai/langchain/issues/527
1,518,368,739
527
[ "hwchase17", "langchain" ]
Greetings! Thanks for all the work on a great library. I'd recommend adding separate methods to the serpapi wrapper to pull values from the SerpAPI JSON produced for searches. There's a lot of data in there that could be used to aid downstream tasks. Here's an [example JSON](https://serpapi.com/searches/c5ce8ca22344ac58/63b3101c3517646fe749a0e0.json). In particular, I'd be interested in methods to retrieve the 'snippet' and 'url' values, or to pull the search id values for each search to enable further API calls directly from [SerpAPI's search archive](https://serpapi.com/search-archive-api).
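A sketch of what such accessors might look like; the method names are hypothetical, while the `organic_results`/`snippet`/`link`/`search_metadata` keys come from SerpAPI's documented JSON:

```python
from typing import List

def get_snippets(results: dict) -> List[str]:
    """Pull the text snippet of each organic result."""
    return [r.get("snippet", "") for r in results.get("organic_results", [])]

def get_urls(results: dict) -> List[str]:
    """Pull the landing-page URL of each organic result."""
    return [r["link"] for r in results.get("organic_results", [])]

def get_search_id(results: dict) -> str:
    """Pull the search id for follow-up calls against the search archive."""
    return results["search_metadata"]["id"]
```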
Add methods to pull json data from SerpAPI wrapper
https://api.github.com/repos/langchain-ai/langchain/issues/522/comments
4
2023-01-03T16:37:46Z
2023-11-21T16:08:05Z
https://github.com/langchain-ai/langchain/issues/522
1,517,645,443
522
[ "hwchase17", "langchain" ]
Hello! First of all, thank you for opening such excellent code! I am truly excited and want to contribute somehow. I am reading the code of [openai.py](https://github.com/hwchase17/langchain/blob/master/langchain/llms/openai.py) and found `Extra.forbid` [here](https://github.com/hwchase17/langchain/blob/3efec55f939a9758682488bf23c1d7646ee35a6f/langchain/llms/openai.py#L58). However, I can still initialize the class like the code below:

```python
OpenAI(temperatures=0, hey="hey", hello="how are you?")
```

The reason is that the OpenAI class sets all extra inputs as model_args [here](https://github.com/hwchase17/langchain/blob/3efec55f939a9758682488bf23c1d7646ee35a6f/langchain/llms/openai.py#L60). I assume this validation was added to accommodate future changes in the GPT API parameters, but I think it is likely to cause confusion for the reader. Therefore my suggestion is simply to delete one of the following parts:

1. The `Extra.forbid` at line 58
2. Or the validation part which starts from line 60, and add documentation like "if you want to set more parameters of the GPT API, please add the parameter in model_args"

Please tell me how you feel about my suggestion. Thank you. Yongtae
Is Extra.forbid necessary?
https://api.github.com/repos/langchain-ai/langchain/issues/519/comments
10
2023-01-03T09:37:54Z
2023-01-05T02:33:09Z
https://github.com/langchain-ai/langchain/issues/519
1,517,138,568
519
[ "hwchase17", "langchain" ]
- windows 10
- python 3.7
- langchain 0.0.27

```
ImportError                               Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_35084\2350772706.py in <module>
----> 1 from gpt_index import GPTTreeIndex, SimpleDirectoryReader
      2 documents = SimpleDirectoryReader('data').load_data()
      3 index = GPTTreeIndex(documents)

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\__init__.py in <module>
     13
     14 # indices
---> 15 from gpt_index.indices.keyword_table import (
     16     GPTKeywordTableIndex,
     17     GPTRAKEKeywordTableIndex,

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\__init__.py in <module>
      2
      3 # indices
----> 4 from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex
      5 from gpt_index.indices.keyword_table.rake_base import GPTRAKEKeywordTableIndex
      6 from gpt_index.indices.keyword_table.simple_base import GPTSimpleKeywordTableIndex

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\keyword_table\__init__.py in <module>
      2
      3 # indices
----> 4 from gpt_index.indices.keyword_table.base import GPTKeywordTableIndex
      5 from gpt_index.indices.keyword_table.rake_base import GPTRAKEKeywordTableIndex
      6 from gpt_index.indices.keyword_table.simple_base import GPTSimpleKeywordTableIndex

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\keyword_table\base.py in <module>
     13
     14 from gpt_index.data_structs.data_structs import KeywordTable
---> 15 from gpt_index.indices.base import DOCUMENTS_INPUT, BaseGPTIndex
     16 from gpt_index.indices.keyword_table.utils import extract_keywords_given_response
     17 from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\base.py in <module>
     17 from gpt_index.data_structs.data_structs import IndexStruct, Node
     18 from gpt_index.data_structs.struct_type import IndexStructType
---> 19 from gpt_index.indices.prompt_helper import PromptHelper
     20 from gpt_index.indices.query.query_runner import QueryRunner
     21 from gpt_index.indices.query.schema import QueryConfig, QueryMode

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\indices\prompt_helper.py in <module>
     10 from gpt_index.constants import MAX_CHUNK_OVERLAP
     11 from gpt_index.data_structs.data_structs import Node
---> 12 from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor
     13 from gpt_index.langchain_helpers.text_splitter import TokenTextSplitter
     14 from gpt_index.prompts.base import Prompt

c:\Users\faris\.conda\envs\gpt\lib\site-packages\gpt_index\langchain_helpers\chain_wrapper.py in <module>
      5
      6 from langchain import Cohere, LLMChain, OpenAI
----> 7 from langchain.llms import AI21
      8 from langchain.llms.base import BaseLLM
      9

ImportError: cannot import name 'AI21' from 'langchain.llms' (c:\Users\faris\.conda\envs\gpt\lib\site-packages\langchain\llms\__init__.py)
```
cannot import name 'AI21' from 'langchain.llms' (...\.conda\envs\gpt\lib\site-packages\langchain\llms\__init__.py)
https://api.github.com/repos/langchain-ai/langchain/issues/510/comments
12
2023-01-02T15:52:28Z
2023-09-12T01:56:44Z
https://github.com/langchain-ai/langchain/issues/510
1,516,524,603
510
[ "hwchase17", "langchain" ]
What is the best way to build an agent that can read from Google Docs and perform inference based on it (and an input)? I see there is a lot of search functionality, but it seems to be general search (versus search over a particular source).
Integration with Google Docs
https://api.github.com/repos/langchain-ai/langchain/issues/509/comments
3
2023-01-02T03:29:07Z
2023-08-24T16:20:44Z
https://github.com/langchain-ai/langchain/issues/509
1,515,992,797
509
[ "hwchase17", "langchain" ]
right now, if max iterations is exceeded for an agent run, we just exit early with a constant string. a more graceful way would be allowing one final call to the llm and asking it to make a prediction with the information that currently exists
more graceful handling of "max_iterations" for agent
https://api.github.com/repos/langchain-ai/langchain/issues/508/comments
0
2023-01-02T03:27:38Z
2023-01-03T15:46:10Z
https://github.com/langchain-ai/langchain/issues/508
1,515,992,387
508
[ "hwchase17", "langchain" ]
Hi, I'm trying to use the class StuffDocumentsChain but have not seen any usage example. I simply wish to reload an existing code fragment and re-shape it (iterate). When doing so from scratch it works fine, since the memory is provided to the next calls and the context is maintained, so work can continue within the same "session". The problem is when I load an existing file and try to start with it: after the response arrives properly, I get this error:

```
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['reshape', 'human_input']
```

This is my template, and I'm using {reshape} as the starting input for the file contents, but for some reason it fails. That's why I started looking into StuffDocumentsChain.

'''
{reshape}
/*
{history}
*/
/*
{human_input}
*/
'''

I also tried to change the memory before sending, so as to "populate" it with the pre-loaded file contents, but was not able to do so.
How to use StuffDocumentsChain / Error when using third input variable
https://api.github.com/repos/langchain-ai/langchain/issues/504/comments
3
2023-01-01T09:11:10Z
2023-09-10T16:46:50Z
https://github.com/langchain-ai/langchain/issues/504
1,515,412,705
504
[ "hwchase17", "langchain" ]
for `load_qa_chain` we could unify the args by having a new arg name `return_steps` to replace the names `return_refine_steps` and `return_map_steps` (it would do the same thing as those existing args)
load_qa_chain return_steps arg
https://api.github.com/repos/langchain-ai/langchain/issues/499/comments
0
2022-12-30T21:30:04Z
2023-01-03T15:45:10Z
https://github.com/langchain-ai/langchain/issues/499
1,514,853,240
499
[ "hwchase17", "langchain" ]
Hi there, I'm trying to use huggingface embeddings with elasticvectorsearch, but can't initialize the `ElasticVectorSearch` class. I run Elasticsearch locally in docker:

```
docker network create elastic
docker run --name es01 --net elastic -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.5.3
```

Then I run the following code:

```python
...
hf = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
elastic_vector_search = ElasticVectorSearch.from_texts(
    texts,
    hf,
    elasticsearch_url="https://elastic:FR1lY9OEo89NxkbWoj9Z@localhost:9200"
)
```

and get

```
elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)))
```

It seems that to fix this issue I'd need to provide additional params like cert, etc., but `ElasticVectorSearch` only takes a connection string. E.g. when I make a curl request:

```
curl -u elastic https://localhost:9200
```

gives me the following:

```
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```

When I run the same command with -k, it works:

```
curl -u elastic https://localhost:9200 -k
```

It also works when I pass a certificate file:

```
curl --cacert http_ca.crt -u elastic https://localhost:9200
```

and works in python if I pass a cert file:

```python
es_client = elasticsearch.Elasticsearch(elasticsearch_url, ca_certs="http_ca.crt")
```

Any ideas how to fix it?
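A sketch of the kind of API change that would fix this; the `ca_certs` passthrough to the `elasticsearch.Elasticsearch` constructor is the proposal, not current `ElasticVectorSearch` behavior:

```python
# Hypothetical: extra kwargs are forwarded verbatim to elasticsearch.Elasticsearch.
elastic_vector_search = ElasticVectorSearch.from_texts(
    texts,
    hf,
    elasticsearch_url="https://elastic:FR1lY9OEo89NxkbWoj9Z@localhost:9200",
    ca_certs="http_ca.crt",
)
```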
ElasticVectorSearch - SSL: CERTIFICATE_VERIFY_FAILED
https://api.github.com/repos/langchain-ai/langchain/issues/498/comments
3
2022-12-30T15:03:40Z
2023-01-02T11:20:45Z
https://github.com/langchain-ai/langchain/issues/498
1,514,580,482
498
[ "hwchase17", "langchain" ]
How is it possible to tell the `serpapi` tool to use DuckDuckGo instead of Google?
different search engine
https://api.github.com/repos/langchain-ai/langchain/issues/474/comments
8
2022-12-29T17:51:40Z
2023-01-18T07:14:40Z
https://github.com/langchain-ai/langchain/issues/474
1,513,936,430
474
[ "hwchase17", "langchain" ]
Our chat bot would occasionally run into this error:

```
e.py", line 65, in generate
    new_results = self._generate(missing_prompts, stop=stop)
  File "/home/ubuntu/.cache/pypoetry/virtualenvs/lm-bot-w2c4PdJl-py3.8/lib/python3.8/site-packages/langchain/llms/openai.py", line 151, in _generate
    token_usage[_key] = response["usage"][_key]
KeyError: 'completion_tokens'
```
KeyError: 'completion_tokens'
https://api.github.com/repos/langchain-ai/langchain/issues/472/comments
1
2022-12-29T16:11:06Z
2022-12-30T03:16:36Z
https://github.com/langchain-ai/langchain/issues/472
1,513,869,625
472
[ "hwchase17", "langchain" ]
not sure what best integration looks like
integrate with https://github.com/stanford-crfm/helm
https://api.github.com/repos/langchain-ai/langchain/issues/465/comments
3
2022-12-29T04:42:06Z
2023-08-24T16:20:54Z
https://github.com/langchain-ai/langchain/issues/465
1,513,369,552
465
[ "hwchase17", "langchain" ]
Hi, I'm trying to use an LLM chain with HuggingFaceHub (facebook/opt-1.3b) and I'm getting a timeout error: `ValueError: Error raised by inference API: Model facebook/opt-1.3b time out`. I would like to know if I'm doing something wrong, or whether there is a real issue. This is my [colab](https://colab.research.google.com/drive/158xobl37kvMlvAvwzmSngENYm3GvrFlC?usp=sharing). Appreciate your assistance.
Getting a timeout while trying to use HuggingFaceHub
https://api.github.com/repos/langchain-ai/langchain/issues/461/comments
2
2022-12-28T20:55:00Z
2022-12-29T08:40:15Z
https://github.com/langchain-ai/langchain/issues/461
1,513,159,946
461
[ "hwchase17", "langchain" ]
The API seems easy: https://wolframalpha.readthedocs.io/en/latest/?badge=latest
Wolfram Alpha connection as tool
https://api.github.com/repos/langchain-ai/langchain/issues/455/comments
6
2022-12-28T14:56:23Z
2023-09-18T17:03:37Z
https://github.com/langchain-ai/langchain/issues/455
1,512,875,310
455
[ "hwchase17", "langchain" ]
Tool chains... so that an agent can directly funnel the returns of a web request to a summarization tool. That would save a lot of context, since the agent could call something like `summarizeAndAnswer(WebRequest("gpt provided url"), "gpt provided question")`.
Let Agent use Tool chains
https://api.github.com/repos/langchain-ai/langchain/issues/451/comments
2
2022-12-28T13:57:55Z
2023-08-24T16:20:59Z
https://github.com/langchain-ai/langchain/issues/451
1,512,823,574
451
[ "hwchase17", "langchain" ]
For refine output: `out[i]['refine_steps'][j]`

For map output: `out[i]['map_steps'][j]['output_text']`
different output types from map vs refine intermediate steps
https://api.github.com/repos/langchain-ai/langchain/issues/440/comments
3
2022-12-27T20:11:03Z
2022-12-28T11:59:46Z
https://github.com/langchain-ai/langchain/issues/440
1,512,128,567
440
[ "hwchase17", "langchain" ]
Hi @hwchase17, thanks for making this amazing open source project. I'm looking to make Agents more usable in a production environment (e.g. for building a user-facing "chatbot"), and some parts of the existing langchain API might need some changes to make that possible. Namely, "streaming" (some kind of) progress to the user while an agent does its thing is key for a low-latency experience. Another concern is keeping track of the "lineage" (i.e. which LLM prompts / API calls) of a particular output, in order to e.g. selectively display it to the user or support improving the agent over time. From looking at the code of the library so far, it looks to me like the best starting point to make this happen is to evolve the logger API, namely:

1. Make it (optionally) local to each instance, so that multiple agents can run independently in the same process without mixing things up, see PR #437
2. Evolve the BaseLogger API to be more amenable to structured logging, e.g. the `log_llm_response` method could/should have the prompt and inputs available too, otherwise we don't know what that output is for, etc.

Would welcome your thoughts. Thanks, Nuno
Discussion: Evolving the logger api
https://api.github.com/repos/langchain-ai/langchain/issues/439/comments
4
2022-12-27T14:30:51Z
2023-09-25T16:20:44Z
https://github.com/langchain-ai/langchain/issues/439
1,511,855,972
439
[ "hwchase17", "langchain" ]
probably involves making them NOT namedtuples (since those are immutable)
ability to change the tool descriptions after loading
https://api.github.com/repos/langchain-ai/langchain/issues/438/comments
1
2022-12-27T14:21:11Z
2023-08-24T16:21:04Z
https://github.com/langchain-ai/langchain/issues/438
1,511,848,249
438
[ "hwchase17", "langchain" ]
I tried to do a map-reduce summary using curie instead of davinci. Code:

```
llm = OpenAI(model_name="text-curie-001", temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = summary_chain.run(docs)
```

Error:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (2741 > 1024). Running this sequence through the model will result in indexing errors
---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
<ipython-input-72-4ced0b77973e> in <module>
----> 1 summary=summary_chain.run(docs)

16 frames
/usr/local/lib/python3.8/dist-packages/openai/api_requestor.py in _interpret_response_line(self, rbody, rcode, rheaders, stream)
    427         stream_error = stream and "error" in resp.data
    428         if stream_error or not 200 <= rcode < 300:
--> 429             raise self.handle_error_response(
    430                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    431             )

InvalidRequestError: This model's maximum context length is 2049 tokens, however you requested 2997 tokens (2741 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```

NB: The error doesn't occur if I use "refine" instead (though the summary does something weird and doesn't summarize the whole document).
map_reduce summarizer errors when using LLM with smaller context window
https://api.github.com/repos/langchain-ai/langchain/issues/434/comments
13
2022-12-26T20:09:12Z
2023-10-12T16:11:21Z
https://github.com/langchain-ai/langchain/issues/434
1,511,209,670
434
[ "hwchase17", "langchain" ]
Use `logging` or another thread-safe method
Print statements cause langchain runs to not be thread safe
https://api.github.com/repos/langchain-ai/langchain/issues/431/comments
1
2022-12-26T18:31:12Z
2023-08-24T16:21:08Z
https://github.com/langchain-ai/langchain/issues/431
1,511,143,486
431
[ "hwchase17", "langchain" ]
The PALChain generates code that uses the math and np libraries, but those aren't imported in PythonREPL. Here is the output of a recent run:

```
> Entering new PALChain chain...
def solution():
    """A particle moves so that it is at (3 sin (t/4), 3 cos (t/4)) at time t. Find the speed of the particle, measured in unit of distance per unit of time."""
    x = 3 * np.sin(t / 4)
    y = 3 * np.cos(t / 4)
    dx = 3 * np.cos(t / 4) / 4
    dy = -3 * np.sin(t / 4) / 4
    speed = np.sqrt(dx ** 2 + dy ** 2)
    result = speed
    return result
> Finished PALChain chain.

==== date/time: 2022-12-26 17:21:25.817242 ====
calc_gpt_pal math problem: A particle moves so that it is at (3 sin (t/4), 3 cos (t/4)) at time t. Find the speed of the particle, measured in unit of distance per unit of time.
calc_gpt_pal answer: name 'np' is not defined
```
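A minimal sketch of the fix: seed the REPL's globals with the libraries PAL programs expect before exec'ing the generated code (variable names here are illustrative, not PythonREPL's actual fields):

```python
import math
import numpy as np

generated_code = "def solution():\n    return np.sqrt(2) * math.pi"  # stand-in for PAL output
repl_globals = {"math": math, "np": np}  # libraries the generated programs assume
exec(generated_code, repl_globals)
print(repl_globals["solution"]())
```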
PALChain generates code that uses math and np libraries, but those aren't imported in PythonREPL
https://api.github.com/repos/langchain-ai/langchain/issues/430/comments
5
2022-12-26T16:34:32Z
2023-09-10T16:46:56Z
https://github.com/langchain-ai/langchain/issues/430
1,511,075,608
430
[ "hwchase17", "langchain" ]
AWS CLI has the concept of a `--dryrun` parameter that will not actually execute a command, but will show you the commands that would be issued. This is useful to prevent costly errors. It would be useful to have a similar `--dryrun` parameter that could show the query plan or the number of LLM calls that would be issued. This is very useful in recursive summarization, where large input docs can be passed and your OpenAI bill can explode :-). The expected output would be either 1) an *estimate* of the number of queries that would be issued, or 2) some execution plan. The concept of a query estimate may need refinement, because it's impossible to know beforehand how big the returned text from the LLM will be, so recursive summarization could never know exactly. But it might be possible to estimate the worst case if the LLM returns the max possible tokens at each recursive summary request. Interested in ideas here!
Support for dry-run to estimate LLM calls that will be made
https://api.github.com/repos/langchain-ai/langchain/issues/425/comments
1
2022-12-26T03:37:27Z
2023-08-24T16:21:19Z
https://github.com/langchain-ai/langchain/issues/425
1,510,557,474
425
[ "hwchase17", "langchain" ]
Data is in csv here: https://github.com/sylinrl/TruthfulQA/blob/main/TruthfulQA.csv
Add TruthfulQA as QA examples [Eval Branch]
https://api.github.com/repos/langchain-ai/langchain/issues/415/comments
2
2022-12-24T03:10:36Z
2023-08-24T16:21:23Z
https://github.com/langchain-ai/langchain/issues/415
1,509,918,983
415
[ "hwchase17", "langchain" ]
We could import these functions and integrate: https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/truthfulqa.py#L253-L417
Add traditional eval measures to eval methods [Eval branch]
https://api.github.com/repos/langchain-ai/langchain/issues/413/comments
1
2022-12-24T02:29:21Z
2023-08-24T16:21:29Z
https://github.com/langchain-ai/langchain/issues/413
1,509,910,695
413
[ "hwchase17", "langchain" ]
Hey Team, in `https://github.com/hwchase17/langchain/blob/master/docs/explanation/combine_docs.md#refine` there is an error in the description of the `Refine` method. The text describes the `Refine` method as having "Pros" (advantages) that include "Can pull in more relevant context, and may be less lossy than `RefineDocumentsChain`." However, this makes no sense: the `Refine` method is the one being described, so it cannot be compared against itself (`RefineDocumentsChain`).
Reference error in Documentation
https://api.github.com/repos/langchain-ai/langchain/issues/403/comments
2
2022-12-23T10:10:34Z
2022-12-23T13:54:50Z
https://github.com/langchain-ai/langchain/issues/403
1,509,152,730
403
[ "hwchase17", "langchain" ]
Feature request: Support Reinforcement Learning with Human Feedback Example https://github.com/lucidrains/PaLM-rlhf-pytorch
Feature Request: Reinforcement Learning with Human Feedback
https://api.github.com/repos/langchain-ai/langchain/issues/399/comments
5
2022-12-22T14:14:47Z
2023-09-10T16:47:00Z
https://github.com/langchain-ai/langchain/issues/399
1,507,966,977
399
[ "hwchase17", "langchain" ]
https://github.com/hwchase17/langchain/pull/367 introduced a way to invoke `run` with `**kwargs`, making `predict` unnecessary https://github.com/hwchase17/langchain/blob/766b84a9d9155494d7cf4d46b30cf8c92e2d5788/langchain/chains/llm.py#L83
`LLMChain.predict` unnecessary
https://api.github.com/repos/langchain-ai/langchain/issues/381/comments
1
2022-12-19T07:32:40Z
2023-09-12T21:30:06Z
https://github.com/langchain-ai/langchain/issues/381
1,502,451,649
381
[ "hwchase17", "langchain" ]
Since #372, the docs on agents with memory are no longer valid.
Agent + Memory Docs are outdated
https://api.github.com/repos/langchain-ai/langchain/issues/380/comments
3
2022-12-19T07:16:05Z
2022-12-20T09:37:24Z
https://github.com/langchain-ai/langchain/issues/380
1,502,436,404
380
[ "hwchase17", "langchain" ]
As per @agola11's comment in https://github.com/hwchase17/langchain/pull/368, the naming convention for base classes is inconsistent. We should pick a convention and stick with it. For chains the base is `Chain`, for agents the base is `Agent`, and for LLMs the base is `BaseLLM` according to that PR. I want to stick with Base (we use it for example selector, prompt, and other things).
make base names consistent
https://api.github.com/repos/langchain-ai/langchain/issues/373/comments
1
2022-12-18T20:56:46Z
2023-09-12T21:30:05Z
https://github.com/langchain-ai/langchain/issues/373
1,502,028,640
373
[ "hwchase17", "langchain" ]
null
implement https://arxiv.org/pdf/2211.13892.pdf example selector
https://api.github.com/repos/langchain-ai/langchain/issues/371/comments
1
2022-12-18T16:01:54Z
2022-12-19T22:09:28Z
https://github.com/langchain-ai/langchain/issues/371
1,501,952,225
371
[ "hwchase17", "langchain" ]
```
# Load the tool configs that are needed.
from langchain import LLMMathChain, SerpAPIWrapper

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events"
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    )
]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who won the US Open men's tennis final in 2022? What is the next prime number after his age?")
```

Produces:

```
> Entering new ZeroShotAgent chain...
Who won the US Open men's tennis final in 2022? What is the next prime number after his age?
Thought: I need to find out who won the US Open and then calculate the next prime number
Action: Search
Action Input: "US Open men's tennis final 2022"
Observation: Spain's Carlos Alcaraz, 19, defeated Casper Ruud in the 2022 US Open men's singles final to earn his first Grand Slam title.
Thought: I need to calculate the next prime number
Action: Calculator
Action Input: 19

> Entering new LLMMathChain chain...
19 + 4
Answer: 23

Traceback (most recent call last):
  File "/Users/ankushgola/Code/langchain/tests/agent_test.py", line 70, in <module>
    main()
  File "/Users/ankushgola/Code/langchain/tests/agent_test.py", line 65, in main
    agent.run("Who won the US Open men's tennis final in 2022? What is the next prime number after his age?")
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 134, in run
    return self({self.input_keys[0]: text})[self.output_keys[0]]
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 107, in __call__
    outputs = self._call(inputs)
  File "/Users/ankushgola/Code/langchain/langchain/agents/agent.py", line 150, in _call
    observation = chain(output.tool_input)
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 134, in run
    return self({self.input_keys[0]: text})[self.output_keys[0]]
  File "/Users/ankushgola/Code/langchain/langchain/chains/base.py", line 107, in __call__
    outputs = self._call(inputs)
  File "/Users/ankushgola/Code/langchain/langchain/chains/llm_math/base.py", line 70, in _call
    raise ValueError(f"unknown format from LLM: {t}")
ValueError: unknown format from LLM:  + 4
Answer: 23
```

Very interesting that the answer is actually correct. The problem is that the output from the calculator is:

```
' + 4
Answer: 23'
```

The `ValueError` is being thrown because the output of the Calculator doesn't begin with `'Answer:'`. Can repro with another input (also involving prime numbers):

```
agent.run("Who won the US Open men's tennis final in 2019? What is the next prime number after his age?")
```

In this case, the output from the calculator is

```
'? Answer: 37'
```

Again, interesting that the answer is actually correct.
`LLMMathChain` ValueError encountered when using `ZeroShotAgent`
https://api.github.com/repos/langchain-ai/langchain/issues/370/comments
2
2022-12-18T05:08:52Z
2022-12-29T13:20:56Z
https://github.com/langchain-ai/langchain/issues/370
1,501,750,204
370
[ "hwchase17", "langchain" ]
Add support for server-sent events from the OpenAI API. Because some of these generations take a bit of time to finish I'd like to stream tokens back as they become ready. Documentation for this feature can be found [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream). This seems like it could require a pretty substantial refactor. LangChain would need to continuously return `LLMResult`s while maintaining a connection to the server.
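For context, a sketch of consuming the stream directly with the then-current openai client; a LangChain integration would have to surface something like this incrementally instead of one final `LLMResult`:

```python
import openai

# stream=True asks the API for server-sent events; tokens arrive incrementally.
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Write a haiku about rivers.",
    max_tokens=64,
    stream=True,
)
for event in response:
    print(event["choices"][0]["text"], end="", flush=True)
```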
Add Support For OpenAI SSE Response
https://api.github.com/repos/langchain-ai/langchain/issues/363/comments
3
2022-12-16T21:25:22Z
2023-04-11T17:34:05Z
https://github.com/langchain-ai/langchain/issues/363
1,500,884,185
363
[ "hwchase17", "langchain" ]
https://github.com/hwchase17/langchain/blob/c1b50b7b13545b1549c051182f547cfc2f8ed0be/langchain/chains/combine_documents/map_reduce.py#L120
_collapse_docs should use self.llm_chain not self.combine_document_chain.combine_docs
https://api.github.com/repos/langchain-ai/langchain/issues/356/comments
5
2022-12-16T02:37:41Z
2022-12-16T14:42:30Z
https://github.com/langchain-ai/langchain/issues/356
1,499,469,690
356
[ "hwchase17", "langchain" ]
Add support for running your own HF pipeline locally. This would allow you to get a lot more dynamic with what HF features and models you support since you wouldn't be beholden to what is hosted in HF hub. You could also do stuff with HF Optimum to quantize your models and stuff to get pretty fast inference even running on a laptop. Let me know if you want this code and I will clean it up.
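For reference, this is roughly the shape such support took; a usage sketch assuming the `HuggingFacePipeline` wrapper referenced elsewhere in these issues:

```python
from transformers import pipeline
from langchain.llms.huggingface_pipeline import HuggingFacePipeline

# Build any local transformers pipeline, then hand it to the wrapper.
pipe = pipeline("text-generation", model="gpt2", max_new_tokens=32)
llm = HuggingFacePipeline(pipeline=pipe)
print(llm("The capital of France is"))
```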
Add Support for Running Your Own HF Pipeline Locally
https://api.github.com/repos/langchain-ai/langchain/issues/354/comments
0
2022-12-16T00:12:15Z
2022-12-17T17:24:18Z
https://github.com/langchain-ai/langchain/issues/354
1,499,319,014
354
[ "hwchase17", "langchain" ]
Currently only done for OpenAI, but should be done for all
Add batching support to all LLMs
https://api.github.com/repos/langchain-ai/langchain/issues/348/comments
3
2022-12-15T14:54:52Z
2023-08-24T16:21:35Z
https://github.com/langchain-ai/langchain/issues/348
1,498,582,775
348
[ "hwchase17", "langchain" ]
Hello, Could the `OpenAIEmbeddings` class support the other OpenAI embeddings, such as the text-similarity and code-search engines? Currently we can only specify the engine type (ada, babbage, curie, davinci) during instantiation and are forced to use the text-search engines. Native support for this would be appreciated instead of my workaround - happy to contribute a PR to this if needed. Thanks!
Support for other OpenAI Embeddings (text-similarity, code-search)
https://api.github.com/repos/langchain-ai/langchain/issues/342/comments
5
2022-12-15T00:18:26Z
2022-12-16T00:55:40Z
https://github.com/langchain-ai/langchain/issues/342
1,497,597,312
342
[ "hwchase17", "langchain" ]
https://github.com/yangkevin2/emnlp22-re3-story-generation
Story generation
https://api.github.com/repos/langchain-ai/langchain/issues/341/comments
1
2022-12-14T18:24:56Z
2023-09-12T21:30:04Z
https://github.com/langchain-ai/langchain/issues/341
1,497,167,348
341
[ "hwchase17", "langchain" ]
Some agents get stuck in looping behavior. Add a max iteration argument to avoid infinite while loops.
Add max iteration argument to Agents
https://api.github.com/repos/langchain-ai/langchain/issues/336/comments
1
2022-12-14T05:46:12Z
2023-09-12T21:30:03Z
https://github.com/langchain-ai/langchain/issues/336
1,495,748,711
336
[ "hwchase17", "langchain" ]
null
Caching for LLMChain calls
https://api.github.com/repos/langchain-ai/langchain/issues/335/comments
1
2022-12-14T05:45:13Z
2022-12-19T14:24:25Z
https://github.com/langchain-ai/langchain/issues/335
1,495,747,358
335
[ "hwchase17", "langchain" ]
If you have a chain that involves agents and that could have a quantitative answer, have a query which allows you to run the chain N times and take some reduce function on the result. This would pair well with caching, in case any steps of the chain are running the same inferences. It would also pair well with a max iteration loop for an agent, so that one of the agents doesn't get stuck in a while loop.
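A minimal sketch of the reduce step (function and argument names are illustrative, not a langchain API):

```python
from collections import Counter

def run_with_majority_vote(chain, inputs, n: int = 5) -> str:
    """Run the chain n times and return the most common answer;
    pairs naturally with caching and a max-iteration cap."""
    answers = [chain.run(inputs) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```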
Rerun agent N times and do majority voting on output
https://api.github.com/repos/langchain-ai/langchain/issues/334/comments
1
2022-12-14T05:44:51Z
2023-09-12T21:30:02Z
https://github.com/langchain-ai/langchain/issues/334
1,495,746,810
334
[ "hwchase17", "langchain" ]
A chain where you first call an LLM to do classification and choose which subchain to call, then you call that specific subchain. @sjwhitmore @johnmcdonnell were you guys going to work on this?
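A rough sketch of the idea (class shape and field names are hypothetical, and the concrete input/output key plumbing is omitted):

```python
from typing import Dict
from langchain import LLMChain
from langchain.chains.base import Chain

class ForkChain(Chain):
    classifier_chain: LLMChain      # hypothetical: an LLMChain that returns a subchain key
    subchains: Dict[str, Chain]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        # Phase 1: classify the input to pick a branch.
        key = self.classifier_chain.run(inputs["input"]).strip()
        # Phase 2: delegate to the chosen subchain.
        return self.subchains[key](inputs)
```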
ForkChain
https://api.github.com/repos/langchain-ai/langchain/issues/320/comments
5
2022-12-12T03:47:12Z
2023-09-10T16:47:07Z
https://github.com/langchain-ai/langchain/issues/320
1,490,868,845
320
[ "hwchase17", "langchain" ]
There is an open PR (https://github.com/hwchase17/langchain/pull/131) but its not complete. Would be nice to finish it up and add it in
Add APE
https://api.github.com/repos/langchain-ai/langchain/issues/319/comments
1
2022-12-12T03:45:37Z
2023-09-12T21:30:01Z
https://github.com/langchain-ai/langchain/issues/319
1,490,867,226
319
[ "hwchase17", "langchain" ]
Something like:

```
from typing import Dict

from langchain.chains.base import Chain

class BaseWhileChain(Chain):
    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        state = self.get_initial_state(inputs)
        while True:
            outputs = self.inner_chain(state)
            if self.stopping_criteria(outputs):
                return outputs
            else:
                state = self.update_state(state, inputs)
```
WhileLoop chain
https://api.github.com/repos/langchain-ai/langchain/issues/314/comments
2
2022-12-11T22:34:58Z
2023-09-10T16:47:11Z
https://github.com/langchain-ai/langchain/issues/314
1,490,521,742
314
[ "hwchase17", "langchain" ]
In the OpenAI llm model, frequency_penalty and presence_penalty are annotated as ints when they should allow decimals between 0.0 and 2.0 inclusive.
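The fix would presumably just be the field annotations; a sketch (the class name is illustrative, and the zero defaults mirror the OpenAI API's):

```python
from pydantic import BaseModel

class OpenAIPenalties(BaseModel):
    # float, not int, so decimal values like 0.5 are preserved instead of rounded
    frequency_penalty: float = 0
    presence_penalty: float = 0
```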
OpenAI llm rounds frequency & presence penalties to ints
https://api.github.com/repos/langchain-ai/langchain/issues/310/comments
3
2022-12-11T15:34:28Z
2022-12-11T20:34:04Z
https://github.com/langchain-ai/langchain/issues/310
1,490,129,581
310
[ "hwchase17", "langchain" ]
Can we add a couple of lines here to ensure that the length of the combined text will not cause a context window error? https://github.com/hwchase17/langchain/blob/9d08384d5ff3de40dac72d99efb99288e6ad4ee1/langchain/chains/combine_documents/map_reduce.py#L60 Otherwise, one sees this:

```
InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6213 tokens (5957 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```
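A hedged sketch of such a guard (names are illustrative; a real fix would count tokens with the model's tokenizer and collapse documents rather than just raise):

```python
def ensure_fits(docs, max_tokens: int, count_tokens) -> None:
    """Fail fast if the combined documents exceed the model's context window."""
    total = sum(count_tokens(d.page_content) for d in docs)
    if total > max_tokens:
        raise ValueError(
            f"combined documents are {total} tokens, over the {max_tokens}-token limit"
        )
```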
Ensure length of the text will not cause a context window error
https://api.github.com/repos/langchain-ai/langchain/issues/301/comments
5
2022-12-10T20:06:01Z
2022-12-19T14:24:58Z
https://github.com/langchain-ai/langchain/issues/301
1,488,901,201
301
[ "hwchase17", "langchain" ]
Missing sources on this line: https://github.com/hwchase17/langchain/blob/master/langchain/chains/qa_with_sources/map_reduce_prompt.py#L41
Missing text in map_reduce_prompt.py file
https://api.github.com/repos/langchain-ai/langchain/issues/298/comments
2
2022-12-10T15:20:22Z
2022-12-11T01:15:07Z
https://github.com/langchain-ai/langchain/issues/298
1,488,576,891
298
[ "hwchase17", "langchain" ]
The current `Memory` objects allow a user to store the [entire conversation history](https://github.com/hwchase17/langchain/blob/master/langchain/chains/conversation/memory.py#L22), a [subset of history](https://github.com/hwchase17/langchain/blob/master/langchain/chains/conversation/memory.py#L50), and a [summary of history](https://github.com/hwchase17/langchain/blob/master/langchain/chains/conversation/memory.py#L79). Users might want to store a summary of the conversation in addition to a fixed-size history. You could imagine the following algorithm:

```
1. Check if the current conversation history has reached maximum size
2. If so, generate a conversation summary.
3. Optionally, merge the prior summary with the new summary.
4. Trim the conversation history
5. Append the new exchange to the conversation history
```

If the user wants to maintain a fixed-size conversation history as well as a summary, a simpler approach might be to create a more general `MemoryBank` container. There would need to be validation that the memory key is unique.

```
class MemoryBank(Memory, BaseModel):
    resources: list[Memory]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return history buffer."""
        return {r.memory_key: r.buffer for r in self.resources}
    ...
```
Support long-term and short-term memory in Conversation chain
https://api.github.com/repos/langchain-ai/langchain/issues/296/comments
2
2022-12-10T03:43:37Z
2023-08-24T16:21:39Z
https://github.com/langchain-ai/langchain/issues/296
1,487,881,928
296
[ "hwchase17", "langchain" ]
This will allow the LLMs to hold on to previous commands.
Add support for naming the AI entity differently (other than "AI", which seems to be hardcoded)
https://api.github.com/repos/langchain-ai/langchain/issues/295/comments
5
2022-12-09T21:02:07Z
2023-08-24T16:21:44Z
https://github.com/langchain-ai/langchain/issues/295
1,487,452,383
295
[ "hwchase17", "langchain" ]
Hey, I'm looking to create an open-source tool that can be used to have conversations about pricing out different cloud services. Does anyone want to help out? or does anyone have any thoughts?
Connect a GPT model to cloud cost calculators
https://api.github.com/repos/langchain-ai/langchain/issues/292/comments
2
2022-12-09T15:50:22Z
2023-08-24T16:21:49Z
https://github.com/langchain-ai/langchain/issues/292
1,486,981,181
292
[ "hwchase17", "langchain" ]
https://cut-hardhat-23a.notion.site/code-for-webGPT-44485e5c97bd403ba4e1c2d5197af71d

```python
from serpapi import GoogleSearch
import requests
import openai
import sys, os

openai.api_key = "YOUR OPEN_AI API KEY"
headers = {'Cache-Control': 'no-cache', 'Content-Type': 'application/json'}
params = {'token': 'YOUR BROWSERLESS API KEY'}

def scrape_webpage(link):
    json_data = {
        'url': link,
        'elements': [{'selector': 'body'}],
    }
    response = requests.post('https://chrome.browserless.io/scrape',
                             params=params, headers=headers, json=json_data)
    return response.json()['data'][0]['results'][0]['text']

def summarize_webpage(question, webpage_text):
    prompt = """You are an intelligent summarization engine. Extract and summarize the most relevant information from a body of text related to a question.

Question: {}

Body of text to extract and summarize information from: {}

Relevant information:""".format(question, webpage_text[0:2500])
    completion = openai.Completion.create(engine="text-davinci-003", prompt=prompt,
                                          temperature=0.8, max_tokens=800)
    return completion.choices[0].text

def summarize_final_answer(question, summaries):
    prompt = """You are an intelligent summarization engine. Extract and summarize relevant information from the four points below to construct an answer to a question.

Question: {}

Relevant Information:
1. {}
2. {}
3. {}
4. {}""".format(question, summaries[0], summaries[1], summaries[2], summaries[3])
    completion = openai.Completion.create(engine="text-davinci-003", prompt=prompt,
                                          temperature=0.8, max_tokens=800)
    return completion.choices[0].text

def get_link(r):
    return r['link']

def get_search_results(question):
    search = GoogleSearch({
        "q": question,
        "api_key": "YOUR SERP API KEY",
        "logging": False
    })
    result = search.get_dict()
    return list(map(get_link, result['organic_results']))

def print_citations(links, summaries):
    print("Citations:")
    for i in range(4):
        print("\n", "[{}]".format(i + 1), links[i], "\n", summaries[i], "\n")

def main():
    print("\nTell me about:\n")
    question = input()
    print("\n")
    sys.stdout = open(os.devnull, 'w')  # disable print
    links = get_search_results(question)
    sys.stdout = sys.__stdout__  # enable print
    webpages = list(map(scrape_webpage, links[:4]))
    summaries = [summarize_webpage(question, x) for x in webpages]
    final_summary = summarize_final_answer(question, summaries)
    print("Here is the answer:", final_summary, "\n")
    print_citations(links, summaries)

if __name__ == "__main__":
    main()
```
WebGPT
https://api.github.com/repos/langchain-ai/langchain/issues/289/comments
2
2022-12-09T07:00:33Z
2023-09-26T16:18:08Z
https://github.com/langchain-ai/langchain/issues/289
1,486,171,161
289
[ "hwchase17", "langchain" ]
We need an analogous structure to LengthBasedExampleSelector but for Memory. MemorySelector? This would provide an interface which would allow some mutation of the Memory if max context window is reached (& perhaps upon other conditions as well)
Add support for memory selection by length or other features (e.g. semantic similarity)
https://api.github.com/repos/langchain-ai/langchain/issues/287/comments
5
2022-12-08T19:51:22Z
2023-09-10T16:47:16Z
https://github.com/langchain-ai/langchain/issues/287
1,485,301,364
287
[ "hwchase17", "langchain" ]
It would be useful in certain circumstances to be able to manually flush the memory buffer of an agent. There's no defined interface for doing this at present (though it can be accomplished by holding on to a buffer and clearing it directly). Possibly something like the following, with a `flush()` method defined on the `Memory` abstract class:

```
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, memory=memory)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent.run("....")
...
agent.flush()  # calls Memory.flush()
```

Thoughts?
Add support for memory flushing
https://api.github.com/repos/langchain-ai/langchain/issues/285/comments
10
2022-12-08T12:50:45Z
2022-12-11T16:03:33Z
https://github.com/langchain-ai/langchain/issues/285
1,484,573,489
285
[ "hwchase17", "langchain" ]
would be cool to do a demo with clustering on the embeddings classes to cluster similar documents / document chunks
Add clustering on embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/280/comments
3
2022-12-07T17:20:40Z
2023-08-24T16:22:00Z
https://github.com/langchain-ai/langchain/issues/280
1,482,429,496
280
[ "hwchase17", "langchain" ]
right now, agents only take in a single string. would be nice to let them take in multiple arguments.
allow agents to take multiple inputs
https://api.github.com/repos/langchain-ai/langchain/issues/279/comments
6
2022-12-07T16:41:22Z
2022-12-19T14:20:56Z
https://github.com/langchain-ai/langchain/issues/279
1,482,347,904
279
[ "hwchase17", "langchain" ]
to handle cases where it does not generate an action/action input
add fix_text method to MRKL
https://api.github.com/repos/langchain-ai/langchain/issues/277/comments
1
2022-12-07T15:26:43Z
2023-09-12T21:30:00Z
https://github.com/langchain-ai/langchain/issues/277
1,482,186,449
277
[ "hwchase17", "langchain" ]
https://arxiv.org/pdf/2207.05221.pdf This should be a separate LLMChain with a prompt that people can add on at the end.
add in method for getting confidence
https://api.github.com/repos/langchain-ai/langchain/issues/266/comments
4
2022-12-05T18:55:06Z
2023-08-24T16:22:04Z
https://github.com/langchain-ai/langchain/issues/266
1,477,208,775
266
[ "hwchase17", "langchain" ]
Calling an LLM can be expensive, so it would make sense to cache queries!
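A minimal sketch of the idea, caching at the prompt level with an in-memory LRU (a real feature would presumably want a persistent, configurable backend):

```python
import functools

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)


@functools.lru_cache(maxsize=1024)
def cached_llm_call(prompt: str) -> str:
    # Identical prompts hit the cache instead of the paid API.
    return llm(prompt)


print(cached_llm_call("Tell me a joke"))
print(cached_llm_call("Tell me a joke"))  # served from the cache, no API call
```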
add caching for LLM calls
https://api.github.com/repos/langchain-ai/langchain/issues/263/comments
17
2022-12-05T15:37:59Z
2022-12-20T03:03:15Z
https://github.com/langchain-ai/langchain/issues/263
1,476,836,692
263
[ "hwchase17", "langchain" ]
https://github.com/acheong08/ChatGPT or others
Add ChatGPT chain/API support
https://api.github.com/repos/langchain-ai/langchain/issues/258/comments
5
2022-12-04T21:45:50Z
2023-03-10T04:50:57Z
https://github.com/langchain-ai/langchain/issues/258
1,475,217,337
258
[ "hwchase17", "langchain" ]
<img width="724" alt="image" src="https://user-images.githubusercontent.com/6690839/205323170-75c4e8e2-dbd1-4813-aee2-cb112e371dee.png">
docs for SemanticSimilarityExampleSelector & method signature don't match
https://api.github.com/repos/langchain-ai/langchain/issues/243/comments
0
2022-12-02T15:05:58Z
2022-12-03T22:48:43Z
https://github.com/langchain-ai/langchain/issues/243
1,473,011,785
243
[ "hwchase17", "langchain" ]
I'm storing examples in both a vectorstore and a database. I would like to add the vectorstore id field to the database after the example has been successfully indexed. I think I could do this if add_texts in vectorstore could return a list of vectorstore IDs, and add_example in SemanticSimilarityExampleSelector propagated the returned ID back as well.
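A sketch of the shape of the change (toy in-memory classes, not the real langchain ones):

```python
import uuid
from typing import Dict, List


class InMemoryVectorStore:
    """Toy stand-in for a vectorstore, to illustrate the proposed return value."""

    def __init__(self):
        self.docs: Dict[str, str] = {}

    def add_texts(self, texts: List[str]) -> List[str]:
        ids = []
        for text in texts:
            doc_id = str(uuid.uuid4())
            self.docs[doc_id] = text
            ids.append(doc_id)
        return ids  # the proposed change: surface the assigned IDs


class ExampleSelector:
    def __init__(self, vectorstore: InMemoryVectorStore):
        self.vectorstore = vectorstore

    def add_example(self, example: Dict[str, str]) -> str:
        ids = self.vectorstore.add_texts([" ".join(example.values())])
        return ids[0]  # propagate the ID so callers can store it in their DB
```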
add_texts in VectorStore and add_example in SemanticSimilarityExampleSelector should return id
https://api.github.com/repos/langchain-ai/langchain/issues/238/comments
0
2022-12-01T22:46:45Z
2022-12-05T20:50:51Z
https://github.com/langchain-ai/langchain/issues/238
1,472,057,347
238
[ "hwchase17", "langchain" ]
When initializing with a SQL database, check whether it already exists, and raise an error if it does not.
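One way the check could look, assuming SQLAlchemy (a sketch; the exact probe could differ by dialect):

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError


def engine_for_existing_db(database_uri: str):
    engine = create_engine(database_uri)
    try:
        # Opening a connection verifies the database is reachable for most
        # dialects; note SQLite would silently create a missing file instead,
        # so a separate file-existence check is needed there.
        with engine.connect():
            pass
    except OperationalError as exc:
        raise ValueError(f"Could not connect to database at {database_uri!r}") from exc
    return engine
```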
check if sql database exists when initializing
https://api.github.com/repos/langchain-ai/langchain/issues/237/comments
1
2022-12-01T17:58:11Z
2023-09-12T21:29:59Z
https://github.com/langchain-ai/langchain/issues/237
1,471,720,332
237
[ "hwchase17", "langchain" ]
null
using poetry for dependency management?
https://api.github.com/repos/langchain-ai/langchain/issues/236/comments
2
2022-12-01T17:55:03Z
2022-12-04T19:17:46Z
https://github.com/langchain-ai/langchain/issues/236
1,471,717,027
236
[ "hwchase17", "langchain" ]
I would like to use SequentialChain with the option to use a different LLM object at each step. The rationale is that I use different temperature settings for different prompts within my chain, and I may also use different models per step in the future. A rough idea for the config: have a JSON dict specifying the LLM config, and pass in a list of configs (or a list of LLM objects) the same length as the number of PromptTemplates in the chain when you want a different object per step, or a single LLM object or config object when you want the same one throughout. A sketch of the list-of-objects variant is below.
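Hand-rolled here; a hypothetical list-of-LLMs argument on SequentialChain could wrap exactly this loop:

```python
from langchain import LLMChain, OpenAI, PromptTemplate

prompts = [
    PromptTemplate(input_variables=["topic"], template="Write a tagline about {topic}."),
    PromptTemplate(input_variables=["tagline"], template="Critique this tagline: {tagline}"),
]
# One LLM object per step: a creative first pass, a conservative second pass.
llms = [OpenAI(temperature=0.9), OpenAI(temperature=0.0)]

chains = [LLMChain(llm=llm, prompt=prompt) for llm, prompt in zip(llms, prompts)]

text = "sparkling water"
for chain in chains:
    text = chain.run(text)
print(text)
```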
Allow different LLM objects for each PromptTemplate in SequentialChain
https://api.github.com/repos/langchain-ai/langchain/issues/231/comments
2
2022-11-30T17:37:25Z
2022-12-01T22:46:58Z
https://github.com/langchain-ai/langchain/issues/231
1,469,999,237
231
[ "hwchase17", "langchain" ]
Improved prompts for [harrison/combine_documents_chain](https://github.com/hwchase17/langchain/tree/harrison/combine_documents_chain)

```
"""QUESTION_PROMPT is the prompt used in phase 1 where we run the LLM on each chunk of the doc."""
question_prompt_template = """Use the following portion of a legal contract to see if any of the text is relevant to answer the question.
Return any relevant text verbatim.
{context}
Question: {question}
Relevant text, if any:"""
QUESTION_PROMPT = PromptTemplate(
    template=question_prompt_template, input_variables=["context", "question"]
)

""" """
combine_prompt_template = """Given the following extracted parts of a contract and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.

QUESTION: Which state/country's law governs the interpretation of the contract?
=========
Content: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.
Source: 28-pl
Content: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.
Source: 30-pl
Content: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,
Source: 4-pl
=========
FINAL ANSWER: This Agreement is governed by English law.
SOURCES: 28-pl

QUESTION: What did the president say about Michael Jackson?
=========
Content: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
Source: 0-pl
Content: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
Source: 24-pl
Content: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.
Source: 5-pl
Content: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.
Source: 34-pl
=========
FINAL ANSWER: The president did not mention Michael Jackson.
SOURCES:

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""
COMBINE_PROMPT = PromptTemplate(
    template=combine_prompt_template, input_variables=["summaries", "question"]
)

"""COMBINE_DOCUMENT_PROMPT is fed into CombineDocumentsChain() at langchain/chains/combine_documents.py"""
COMBINE_DOCUMENT_PROMPT = PromptTemplate(
    template="Content: {page_content}\nSource: {source}",
    input_variables=["page_content", "source"],
)
```
Prompts for harrison/combine_documents_chain
https://api.github.com/repos/langchain-ai/langchain/issues/230/comments
2
2022-11-30T17:35:43Z
2022-12-01T20:31:45Z
https://github.com/langchain-ai/langchain/issues/230
1,469,997,319
230
[ "hwchase17", "langchain" ]
I would like to track versions of PromptTemplates through time. An optional version attribute would help with this.
Add optional version attribute to BasePromptTemplate
https://api.github.com/repos/langchain-ai/langchain/issues/229/comments
3
2022-11-30T17:33:18Z
2022-11-30T20:00:36Z
https://github.com/langchain-ai/langchain/issues/229
1,469,994,597
229
[ "hwchase17", "langchain" ]
I want to save prompt templates in a JSONField in a Django db. The current save() method on BasePromptTemplate outputs to a file rather than a JSON object. I'd prefer a method that serializes the prompt to and from JSON, letting me decide what to do with the JSON myself.
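Something like this hypothetical pair of methods, shown on a simplified template class rather than the real `BasePromptTemplate`:

```python
import json
from typing import List


class SimplePromptTemplate:
    def __init__(self, template: str, input_variables: List[str]):
        self.template = template
        self.input_variables = input_variables

    def to_json(self) -> dict:
        # Return a plain dict so the caller decides where it goes (e.g. a JSONField).
        return {"template": self.template, "input_variables": self.input_variables}

    @classmethod
    def from_json(cls, data: dict) -> "SimplePromptTemplate":
        return cls(template=data["template"], input_variables=data["input_variables"])


# Round-trip through a JSON string, as a Django JSONField effectively does.
prompt = SimplePromptTemplate("Tell me about {topic}.", ["topic"])
restored = SimplePromptTemplate.from_json(json.loads(json.dumps(prompt.to_json())))
```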
Serialize BasePromptTemplate to json rather than a file
https://api.github.com/repos/langchain-ai/langchain/issues/228/comments
1
2022-11-30T17:32:17Z
2022-11-30T20:00:21Z
https://github.com/langchain-ai/langchain/issues/228
1,469,993,469
228
[ "hwchase17", "langchain" ]
For example, combining the list of entities with the conversation summary.
Support multiple memory modules in a chain
https://api.github.com/repos/langchain-ai/langchain/issues/224/comments
2
2022-11-29T16:29:16Z
2023-09-18T16:25:21Z
https://github.com/langchain-ai/langchain/issues/224
1,468,351,009
224
[ "hwchase17", "langchain" ]
Given:
- Chain A generates N outputs, e.g. we get back the text and immediately split it into a list based on post-processing we control and expect.
- Chain B should then run over each of those outputs from Chain A.

Is there any way to do this elegantly in langchain? Perhaps some way to provide an output formatter to a chain, or some for_each pre-/post-processor? Or does this seem like just two independent chains with processing in between?
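Absent a built-in for_each, a plain loop gets the semantics (a sketch; the line-splitting stands in for whatever post-processing Chain A's output needs):

```python
from langchain import LLMChain, OpenAI, PromptTemplate

llm = OpenAI(temperature=0.7)

chain_a = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["topic"],
    template="List three blog post titles about {topic}, one per line.",
))
chain_b = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["title"],
    template="Write a one-sentence summary for the post titled: {title}",
))

# Chain A produces N outputs; post-process into a list, then map Chain B over it.
titles = [line.strip() for line in chain_a.run("espresso").splitlines() if line.strip()]
summaries = [chain_b.run(title) for title in titles]
```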
SequentialChain that runs next chain on each output of the prior chain?
https://api.github.com/repos/langchain-ai/langchain/issues/223/comments
3
2022-11-29T15:54:32Z
2022-12-11T22:22:54Z
https://github.com/langchain-ai/langchain/issues/223
1,468,297,308
223
[ "hwchase17", "langchain" ]
Running some CRUD-like statements through an agent throws `ResourceClosedError: This result object does not return rows. It has been closed automatically.` From the implementation it appears that the chain always expects rows back, which are then cast to `str` and returned as part of the chain. What would the impact of modifying this behaviour be on the expected use case for the SQL chain as it stands? https://github.com/hwchase17/langchain/blob/261029cef3e7c30277027f5d5283b87197eab520/langchain/sql_database.py#L70-L71
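One possible fix, sketched with SQLAlchemy's cursor metadata (assumes the chain keeps returning a string):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")


def run_command(command: str) -> str:
    with engine.connect() as connection:
        cursor = connection.execute(text(command))
        # Only fetch when the statement actually produced rows; INSERT/UPDATE/
        # DELETE would otherwise raise ResourceClosedError on fetchall().
        if cursor.returns_rows:
            return str(cursor.fetchall())
        return ""
```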
SQLDatabaseChain expects only select statements
https://api.github.com/repos/langchain-ai/langchain/issues/213/comments
2
2022-11-28T11:45:25Z
2022-11-29T17:17:50Z
https://github.com/langchain-ai/langchain/issues/213
1,466,279,563
213
[ "hwchase17", "langchain" ]
Sorry to disturb! I wonder whether langchain can process a batch of prompts, or whether it just processes each text by calling llm(text)?
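For what it's worth, the LLM classes expose a `generate` method that takes a list of prompts in one call (worth verifying against your installed version; this sketch assumes that API):

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# One batched call instead of looping llm(text) per prompt.
result = llm.generate(["Tell me a joke", "Tell me a limerick"])
for generations in result.generations:
    print(generations[0].text)
```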
Is langchain able to process batch prompt?
https://api.github.com/repos/langchain-ai/langchain/issues/209/comments
5
2022-11-27T05:22:39Z
2023-09-10T16:47:22Z
https://github.com/langchain-ai/langchain/issues/209
1,465,336,559
209
[ "hwchase17", "langchain" ]
null
improve search chain by letting it parse results from json directly
https://api.github.com/repos/langchain-ai/langchain/issues/204/comments
1
2022-11-26T15:14:18Z
2023-08-24T16:22:14Z
https://github.com/langchain-ai/langchain/issues/204
1,465,180,488
204
[ "hwchase17", "langchain" ]
```
File "C:\ProgramData\Anaconda3\envs\LangChain\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 8: character maps to <undefined>
```
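The usual culprit is Windows defaulting to cp1252 when reading files. A likely workaround, assuming the source file is UTF-8 encoded (the path below is illustrative):

```python
# Request UTF-8 explicitly instead of relying on the Windows default (cp1252).
with open("your_document.txt", encoding="utf-8") as f:
    text = f.read()
```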
Unicode error on Windows
https://api.github.com/repos/langchain-ai/langchain/issues/200/comments
2
2022-11-26T13:54:28Z
2023-03-20T09:48:59Z
https://github.com/langchain-ai/langchain/issues/200
1,465,161,192
200
[ "hwchase17", "langchain" ]
SerpAPI offers too few free trials, and it's too expensive.
New search chain that doesn't use serpapi
https://api.github.com/repos/langchain-ai/langchain/issues/199/comments
4
2022-11-26T13:52:11Z
2023-01-24T08:31:27Z
https://github.com/langchain-ai/langchain/issues/199
1,465,160,696
199
[ "hwchase17", "langchain" ]
Is there a current API for hooking a conversational chain with an agent so, during the conversation, actions can be performed (e.g. db lookup etc)? If not, how would you see this working architecturally?
Combining Conversation Chain and Agents
https://api.github.com/repos/langchain-ai/langchain/issues/189/comments
4
2022-11-25T14:42:51Z
2022-11-27T21:05:18Z
https://github.com/langchain-ai/langchain/issues/189
1,464,640,819
189
[ "hwchase17", "langchain" ]
Do you have a bibtex citation for this repo? e.g. something like the following (from https://github.com/bigscience-workshop/promptsource)

```
@misc{bach2022promptsource,
  title={PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts},
  author={Stephen H. Bach and Victor Sanh and Zheng-Xin Yong and Albert Webson and Colin Raffel and Nihal V. Nayak and Abheesht Sharma and Taewoon Kim and M Saiful Bari and Thibault Fevry and Zaid Alyafeai and Manan Dey and Andrea Santilli and Zhiqing Sun and Srulik Ben-David and Canwen Xu and Gunjan Chhablani and Han Wang and Jason Alan Fries and Maged S. Al-shaibani and Shanya Sharma and Urmish Thakker and Khalid Almubarak and Xiangru Tang and Xiangru Tang and Mike Tian-Jian Jiang and Alexander M. Rush},
  year={2022},
  eprint={2202.01279},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
Bibtex Citation
https://api.github.com/repos/langchain-ai/langchain/issues/188/comments
3
2022-11-25T05:08:29Z
2023-04-22T19:00:14Z
https://github.com/langchain-ai/langchain/issues/188
1,464,042,816
188
[ "hwchase17", "langchain" ]
- [PAL: PROGRAM-AIDED LANGUAGE MODELS](https://arxiv.org/pdf/2211.10435.pdf) - [Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks](https://wenhuchen.github.io/images/Program_of_Thoughts.pdf)
Add Program of Thoughts Prompting or Program-Aided Language Model
https://api.github.com/repos/langchain-ai/langchain/issues/185/comments
0
2022-11-25T00:10:23Z
2022-12-03T16:43:15Z
https://github.com/langchain-ai/langchain/issues/185
1,463,900,699
185
[ "hwchase17", "langchain" ]
null
Add docs for custom agents
https://api.github.com/repos/langchain-ai/langchain/issues/184/comments
0
2022-11-24T17:17:52Z
2022-11-26T14:49:27Z
https://github.com/langchain-ai/langchain/issues/184
1,463,638,512
184
[ "hwchase17", "langchain" ]
null
Add docs for custom LLMs
https://api.github.com/repos/langchain-ai/langchain/issues/183/comments
0
2022-11-24T17:17:37Z
2022-11-26T14:49:27Z
https://github.com/langchain-ai/langchain/issues/183
1,463,638,301
183
[ "hwchase17", "langchain" ]
Hi, is there any chance you could add support for the suffix parameter to the OpenAI class/call? [https://beta.openai.com/docs/api-reference/completions/create](https://beta.openai.com/docs/api-reference/completions/create) I could land this if you don't have the time, but it should be simple enough that you could land it faster than I could fork the repo and make a PR.
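For reference, the raw API call with `suffix` looks like this (a usage sketch, not the langchain change itself; `suffix` makes the model fill in text between the prompt and the suffix):

```python
import openai

completion = openai.Completion.create(
    engine="text-davinci-003",
    prompt="def add(a, b):\n    ",
    suffix="\n\nprint(add(1, 2))",
    max_tokens=64,
)
print(completion.choices[0].text)
```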
Add support for OpenAI Completion's Suffix Parameter
https://api.github.com/repos/langchain-ai/langchain/issues/181/comments
5
2022-11-24T02:04:44Z
2022-11-25T19:54:36Z
https://github.com/langchain-ai/langchain/issues/181
1,462,647,223
181
[ "hwchase17", "langchain" ]
Make the calls in the "map" part concurrently.
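A sketch of the idea with a thread pool (assumes the per-chunk calls are independent and the client is thread-safe; chunks and prompts are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
chunks = ["first chunk of the document...", "second chunk...", "third chunk..."]


def summarize(chunk: str) -> str:
    return llm("Summarize the following text:\n\n" + chunk)


# "Map" step: issue the per-chunk calls concurrently instead of serially.
with ThreadPoolExecutor(max_workers=4) as pool:
    summaries = list(pool.map(summarize, chunks))

# "Reduce" step stays serial.
final = llm("Combine these summaries into one:\n\n" + "\n".join(summaries))
print(final)
```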
parallelize the map/reduce chain
https://api.github.com/repos/langchain-ai/langchain/issues/176/comments
4
2022-11-23T05:48:18Z
2022-12-19T16:07:23Z
https://github.com/langchain-ai/langchain/issues/176
1,461,068,354
176
[ "hwchase17", "langchain" ]
Hello, when running the "Map Reduce" [demo](https://langchain.readthedocs.io/en/latest/examples/demos/map%20reduce.html), I see the error below:

```
ImportError                               Traceback (most recent call last)
Cell In [3], line 1
----> 1 from langchain import OpenAI, PromptTemplate, LLMChain
      2 from langchain.text_splitter import CharacterTextSplitter
      3 from langchain.chains.mapreduce import MapReduceChain

ImportError: cannot import name 'PromptTemplate' from 'langchain' (/Users/mteoh/projects/langchain_sandbox/.env/lib/python3.9/site-packages/langchain/__init__.py)
```

I see it defined here: https://github.com/hwchase17/langchain/blob/c02eb199b6587aeeb50fbb083693572bd2f030cc/langchain/prompts/prompt.py#L13

And mentioned here: https://github.com/hwchase17/langchain/blob/c02eb199b6587aeeb50fbb083693572bd2f030cc/langchain/__init__.py#L35

However, when grepping in the library directory, I do not find it:

```
:~/projects/langchain_sandbox/.env/lib/python3.9/site-packages/langchain $ grep -r PromptTemplate .
```

Relevant versions of my packages:

```
$ pip freeze | grep "langchain\|openai"
langchain==0.0.16
openai==0.25.0
```

Any advice? Thanks! Excited about this work.
Cannot import `PromptTemplate`
https://api.github.com/repos/langchain-ai/langchain/issues/160/comments
5
2022-11-20T08:41:13Z
2022-11-21T01:40:14Z
https://github.com/langchain-ai/langchain/issues/160
1,456,819,990
160
[ "hwchase17", "langchain" ]
The current implementation of OpenAI embeddings is hard-coded to return only text embeddings via GPT-3. For example,

```python
def embed_documents(self, texts: List[str]) -> List[List[float]]:
    ...
    responses = [
        self._embedding_func(text, engine=f"text-search-{self.model_name}-doc-001")
        for text in texts
    ]
```

```python
def embed_query(self, text: str) -> List[float]:
    ...
    embedding = self._embedding_func(
        text, engine=f"text-search-{self.model_name}-query-001"
    )
```

However, recent literature on reasoning shows Codex to be more powerful than GPT-3 on reasoning tasks. `OpenAIEmbeddings` should be modified to support both text and code embeddings.
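A sketch of one way to generalize, parameterizing the engine family (the engine names follow OpenAI's old search-embedding naming scheme and should be verified against the current API):

```python
from typing import List

import openai


def embed_documents(texts: List[str], model_name: str = "babbage", mode: str = "text") -> List[List[float]]:
    # text-search-<model>-doc-001 for text documents,
    # code-search-<model>-code-001 for code documents.
    engine = (
        f"text-search-{model_name}-doc-001"
        if mode == "text"
        else f"code-search-{model_name}-code-001"
    )
    return [
        openai.Embedding.create(input=[text], engine=engine)["data"][0]["embedding"]
        for text in texts
    ]
```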
Support Codex embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/137/comments
2
2022-11-14T16:37:52Z
2023-09-18T16:25:26Z
https://github.com/langchain-ai/langchain/issues/137
1,448,386,956
137
[ "hwchase17", "langchain" ]
The current implementation of `HuggingFaceEmbeddings` uses local sentence-transformers to derive the encodings. This can be limiting, as it requires a fairly capable machine to download the model, load it, and run inference. An alternative is to support embeddings derived directly via the HuggingFaceHub. See this [blog post](https://huggingface.co/blog/getting-started-with-embeddings) for details. This implementation would set similar expectations as the Cohere and OpenAI embeddings APIs.
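Per that blog post, the Hub's feature-extraction pipeline can be called directly over HTTP; a minimal sketch (the model id and token placeholder are illustrative):

```python
import requests

MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"
API_URL = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{MODEL_ID}"
HEADERS = {"Authorization": "Bearer YOUR_HF_API_TOKEN"}


def embed(texts):
    # Returns one embedding vector per input string.
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": texts, "options": {"wait_for_model": True}})
    response.raise_for_status()
    return response.json()


vectors = embed(["How do I get a replacement Medicare card?"])
```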
Support HuggingFaceHub embeddings endpoint
https://api.github.com/repos/langchain-ai/langchain/issues/136/comments
2
2022-11-14T16:32:02Z
2022-11-14T16:41:57Z
https://github.com/langchain-ai/langchain/issues/136
1,448,379,289
136
[ "hwchase17", "langchain" ]
I remember doing this in the early versions, but since 0.10 I can't seem to use `%env` in a Jupyter notebook to pass API tokens (tested this only with HuggingFaceHub, btw). Passing the token as an argument, however, works fine. Steps to reproduce: ![image](https://user-images.githubusercontent.com/347398/201581940-9620407f-36b2-44c5-b746-ca0ea7d84122.png)
0.13 does not pick up API token environment variable set in jupyter
https://api.github.com/repos/langchain-ai/langchain/issues/134/comments
2
2022-11-14T05:29:21Z
2022-11-14T16:34:26Z
https://github.com/langchain-ai/langchain/issues/134
1,447,417,239
134