date_collected (stringclasses, 1 value) | repo_name (stringlengths 6-116) | file_name (stringlengths 2-220) | file_contents (stringlengths 13-357k) | prompts (sequence) |
---|---|---|---|---|
2024-01-10 | whyismynamerudy/Nzyme | keyword_extraction.py | """extract keywords from input text"""
import string
import cohere
import pandas as pd
import requests
import datetime
from tqdm import tqdm
pd.set_option('display.max_colwidth', None)
api_key = 'MklIKiJvqX1nFagSi1jRU4k9YxoxfLwZvRG6xIUJ'
co = cohere.Client(api_key)
# keywords = [
# (['multiplexer', 'decoder', 'priority circuit'], 'Logic gates are combined to produce larger circuits such as multiplexers, decoders, and priority circuits. A multiplexer chooses one of the data inputs based on the select input. A decoder sets one of the outputs HIGH according to the input. A priority circuit produces an output indicating the highest priority input.'),
# (['genes', 'genome', 'cellular functions'], 'In genetics, the basic approach to understanding any biological property is to find the subset of genes in the genome that influence that property. After these genes have been identified, their cellular functions can be elucidated through further research.')
# ]
def extract(prompt: str):
"""extract keywords from the input prompt
prompt: string containing the input from which keywords are extracted"""
my_prompt = '''
Text: "Logic gates are combined to produce larger circuits such as multiplexers, decoders, and priority circuits. A multiplexer chooses one of the data inputs based on the select input. A decoder sets one of the outputs HIGH according to the input. A priority circuit produces an output indicating the highest priority input."
Keywords extracted from Text: 'multiplexer', 'decoder', 'priority circuit'
---
Text: "Gene discovery starts with a “hunt” to amass mutants in which the biological function under investigation is altered or destroyed. Even though mutants are individually rare, there are ways of enhancing their recovery. One widely used method is to treat the organism with radiation or chemicals that increase the mutation rate. After treatment, the most direct way to identify mutants is to visually screen a very large number of individuals, looking for a chance occurrence of mutants in that population. Also, various selection methods can be devised to enrich for the types sought. Armed with a set of mutants affecting the property of interest, one hopes that each mutant represents a lesion in one of a set of genes that control the property. Hence, the hope is that a reasonably complete gene pathway or network is represented. However, not all mutants are caused by lesions within one gene (some have far more complex determination), so first each mutant has to be tested to see if indeed it is caused by a single-gene mutation. The test for single-gene inheritance is to mate individuals showing the mutant property with wild-type and then analyze the first and second generation of descendants. As an example, a mutant plant with white flowers would be crossed to the wild type showing red flowers. The progeny of this cross are analyzed, and then they themselves are interbred to produce a second generation of descendants. In each generation, the diagnostic ratios of plants with red flowers to those with white flowers will reveal whether a single gene controls flower color. If so, then by inference, the wild type would be encoded by the wild-type form of the gene and the mutant would be encoded by a form of the same gene in which a mutation event has altered the DNA sequence in some way. Other mutations affecting flower color (perhaps mauve, blotched, striped, and so on) would be analyzed in the same way, resulting overall in a set of defined “flower-color genes.” The use of mutants in this way is sometimes called genetic dissection, because the biological property in question (flower color in this case) is picked apart to reveal its underlying genetic program, not with a scalpel but with mutants. Each mutant potentially identifies a separate gene affecting that property. After a set of key genes has been defined in this way, several different molecular methods can be used to establish the functions of each of the genes. These methods will be covered in later chapters. Hence, genetics has been used to define the set of gene functions that interact to produce the property we call flower color (in this example). This type of approach to gene discovery is sometimes called forward genetics, a strategy to understanding biological function starting with random single-gene mutants and ending with their DNA sequence and biochemical function"
Keywords extracted from Text: 'heterozygous state', 'assortment', 'meiosis', 'intercrossed'
---
Text: "The timing specification of a combinational circuit consists of the propagation and contamination delays through the circuit. These indicate the longest and shortest times between an input change and the consequent output change. Calculating the propagation delay of a circuit involves identifying the critical path through the circuit, then adding up the propagation delays of each element along that path. There are many different ways to implement complicated combinational circuits; these ways offer trade-offs between speed and cost."
Keywords extracted from Text: 'propagation', 'contamination delay', 'critical path'
---
Text: "********"
Keywords extracted from Text:'''
my_prompt = my_prompt.replace("********", prompt)
extraction = co.generate(
model='xlarge',
prompt=my_prompt,
max_tokens=30,
temperature=0.4,
stop_sequences=['"']
)
# print(type(extraction.generations[0].text[:-1]))
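# The final character is sliced off on return, presumably the closing quote produced by the stop sequence.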
return extraction.generations[0].text[:-1]
def get_keywords(prompt: str):
"""get keywords from the input text
prompt: string from which the keywords should be extracted"""
extraction = extract(prompt).split()
keywords = []
for word in extraction:
new_word = word.translate(str.maketrans('', '', string.punctuation))
if new_word != '' and new_word != 'Text':
keywords.append(new_word)
my_set = set(prompt.split())
subset = []
new = []
for word in keywords:
if word in my_set:
subset.append(word)
else:
new.append(word)
return (subset, new)
# my_text="The experiments conducted by Avery and his colleagues were definitive, but many scientists were very reluctant to accept DNA (rather than proteins) as the genetic material. After all, how could such a low-complexity molecule as DNA encode the diversity of life on this planet? Alfred Hershey and Martha Chase provided additional evidence in 1952 in an experiment that made use of phage T2, a virus that infects bacteria. They reasoned that the infecting phage must inject into the bacterium the specific information that dictates the reproduction of new viral particles. If they could find out what material the phage was injecting into the bacterial host, they would have determined the genetic material of phages. The phage is relatively simple in molecular constitution. The T2 structure is similar to T4 shown in Figures 5-22 to 5-24. Most of its structure is protein, with DNA contained inside the protein sheath of its 'head.'"
# print(get_keywords(my_text))
# # print(extract(my_text))
| [
"contamination delay",
"Logic gates are combined to produce larger circuits such as multiplexers, decoders, and priority circuits. A multiplexer chooses one of the data inputs based on the select input. A decoder sets one of the outputs HIGH according to the input. A priority circuit produces an output indicating the highest priority input.",
"critical path",
"multiplexer",
"intercrossed",
"\n Text: \"Logic gates are combined to produce larger circuits such as multiplexers, decoders, and priority circuits. A multiplexer chooses one of the data inputs based on the select input. A decoder sets one of the outputs HIGH according to the input. A priority circuit produces an output indicating the highest priority input.\"\n Keywords extracted from Text: 'multiplexer', 'decoder', 'priority circuit'\n\n ---\n \n Text: \"Gene discovery starts with a “hunt” to amass mutants in which the biological function under investigation is altered or destroyed. Even though mutants are individually rare, there are ways of enhancing their recovery. One widely used method is to treat the organism with radiation or chemicals that increase the mutation rate. After treatment, the most direct way to identify mutants is to visually screen a very large number of individuals, looking for a chance occurrence of mutants in that population. Also, various selection methods can be devised to enrich for the types sought. Armed with a set of mutants affecting the property of interest, one hopes that each mutant represents a lesion in one of a set of genes that control the property. Hence, the hope is that a reasonably complete gene pathway or network is represented. However, not all mutants are caused by lesions within one gene (some have far more complex determination), so first each mutant has to be tested to see if indeed it is caused by a single-gene mutation. The test for single-gene inheritance is to mate individuals showing the mutant property with wild-type and then analyze the first and second generation of descendants. As an example, a mutant plant with white flowers would be crossed to the wild type showing red flowers. The progeny of this cross are analyzed, and then they themselves are interbred to produce a second generation of descendants. In each generation, the diagnostic ratios of plants with red flowers to those with white flowers will reveal whether a single gene controls flower color. If so, then by inference, the wild type would be encoded by the wild-type form of the gene and the mutant would be encoded by a form of the same gene in which a mutation event has altered the DNA sequence in some way. Other mutations affecting flower color (perhaps mauve, blotched, striped, and so on) would be analyzed in the same way, resulting overall in a set of defined “flower-color genes.” The use of mutants in this way is sometimes called genetic dissection, because the biological property in question (flower color in this case) is picked apart to reveal its underlying genetic program, not with a scalpel but with mutants. Each mutant potentially identifies a separate gene affecting that property. After a set of key genes has been defined in this way, several different molecular methods can be used to establish the functions of each of the genes. These methods will be covered in later chapters. Hence, genetics has been used to define the set of gene functions that interact to produce the property we call flower color (in this example). This type of approach to gene discovery is sometimes called forward genetics, a strategy to understanding biological function starting with random single-gene mutants and ending with their DNA sequence and biochemical function\"\n Keywords extracted from Text: 'heterozygous state', 'assortment', 'meiosis', 'intercrossed'\n\n ---\n \n Text: \"The timing specification of a combinational circuit consists of the propagation and contamination delays through the circuit. 
These indicate the longest and shortest times between an input change and the consequent output change. Calculating the propagation delay of a circuit involves identifying the critical path through the circuit, then adding up the propagation delays of each element along that path. There are many different ways to implement complicated combinational circuits; these ways offer trade-offs between speed and cost.\"\n Keywords extracted from Text: 'propagation', 'contamination delay', 'critical path'\n\n ---\n\n Text: \"********\"\n Keywords extracted from Text:",
"heterozygous state",
"priority circuit",
"Gene discovery starts with a “hunt” to amass mutants in which the biological function under investigation is altered or destroyed. Even though mutants are individually rare, there are ways of enhancing their recovery. One widely used method is to treat the organism with radiation or chemicals that increase the mutation rate. After treatment, the most direct way to identify mutants is to visually screen a very large number of individuals, looking for a chance occurrence of mutants in that population. Also, various selection methods can be devised to enrich for the types sought. Armed with a set of mutants affecting the property of interest, one hopes that each mutant represents a lesion in one of a set of genes that control the property. Hence, the hope is that a reasonably complete gene pathway or network is represented. However, not all mutants are caused by lesions within one gene (some have far more complex determination), so first each mutant has to be tested to see if indeed it is caused by a single-gene mutation. The test for single-gene inheritance is to mate individuals showing the mutant property with wild-type and then analyze the first and second generation of descendants. As an example, a mutant plant with white flowers would be crossed to the wild type showing red flowers. The progeny of this cross are analyzed, and then they themselves are interbred to produce a second generation of descendants. In each generation, the diagnostic ratios of plants with red flowers to those with white flowers will reveal whether a single gene controls flower color. If so, then by inference, the wild type would be encoded by the wild-type form of the gene and the mutant would be encoded by a form of the same gene in which a mutation event has altered the DNA sequence in some way. Other mutations affecting flower color (perhaps mauve, blotched, striped, and so on) would be analyzed in the same way, resulting overall in a set of defined “flower-color genes.” The use of mutants in this way is sometimes called genetic dissection, because the biological property in question (flower color in this case) is picked apart to reveal its underlying genetic program, not with a scalpel but with mutants. Each mutant potentially identifies a separate gene affecting that property. After a set of key genes has been defined in this way, several different molecular methods can be used to establish the functions of each of the genes. These methods will be covered in later chapters. Hence, genetics has been used to define the set of gene functions that interact to produce the property we call flower color (in this example). This type of approach to gene discovery is sometimes called forward genetics, a strategy to understanding biological function starting with random single-gene mutants and ending with their DNA sequence and biochemical function",
"propagation",
"The timing specification of a combinational circuit consists of the propagation and contamination delays through the circuit. These indicate the longest and shortest times between an input change and the consequent output change. Calculating the propagation delay of a circuit involves identifying the critical path through the circuit, then adding up the propagation delays of each element along that path. There are many different ways to implement complicated combinational circuits; these ways offer trade-offs between speed and cost.",
"assortment"
] |
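The `keyword_extraction.py` module above wraps a hand-built few-shot prompt around Cohere's `generate` endpoint and then sorts the returned keywords into words that literally occur in the input (`subset`) versus words the model introduced (`new`). A minimal usage sketch follows; the import path, the sample passage, and the assumption that a valid Cohere key is already configured are all part of this illustration, not of the repository.

```python
# Hypothetical usage of keyword_extraction.py (illustrative only; assumes the
# module-level Cohere client was created with a working API key).
from keyword_extraction import get_keywords

passage = (
    "A multiplexer chooses one of the data inputs based on the select input, "
    "while a decoder sets one of the outputs HIGH according to the input."
)

# get_keywords returns (subset, new): keywords found verbatim in the passage
# versus keywords the model introduced that do not appear in it.
subset, new = get_keywords(passage)
print("in-text keywords:", subset)
print("model-introduced keywords:", new)
```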
2024-01-10 | Intelligent-CAT-Lab/PLTranslationEmpirical | src~translation~repair.py | import os
import openai
import logging
import tiktoken
import json
from pathlib import Path
from dotenv import load_dotenv
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from tqdm import tqdm
import re
import argparse
os.makedirs(f'logs', exist_ok=True)
logging.basicConfig(filename=f"logs/repair.log", level=logging.INFO, format='%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
class Repair:
EXTENTIONS = {
"Java": "java",
"Python": "py",
"C": "c",
"C++": "cpp",
"Go": "go",
}
def __init__(self, args) -> None:
self.args = args
def __enter__(self):
# Set up OpenAI API key
api_key = os.getenv("OPENAI_API_KEY")
openai.api_key = api_key
self.main_dir = os.getcwd()
self.translation_dir = Path(self.main_dir).joinpath(f'output/{self.args.model}/{self.args.dataset}')
# Find the input directory with all the code examples
self.input_dir = Path(self.main_dir).joinpath('dataset', self.args.dataset)
if not self.input_dir.exists():
raise FileNotFoundError(
f"Directory {str(self.input_dir)} does not exist.")
# Make a directory at output/<model>/<dataset>
self.out_dir = Path(self.main_dir).joinpath("output", self.args.model + f'_IO_{self.args.attempt}', self.args.dataset)
if not self.out_dir.exists():
self.out_dir.mkdir(parents=True)
return self
def send_message_to_openai(self, message_log):
"Use OpenAI's ChatCompletion API to get the chatbot's response"
encoding = tiktoken.encoding_for_model("gpt-4")
num_tokens = len(encoding.encode(message_log[1]["content"]))
response = "exceptional case"
is_success = False
max_attempts = 5
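# Retry up to five times on transient API failures; an InvalidRequestError (prompt too large) aborts immediately with a sentinel string.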
while max_attempts > 0:
try:
response = openai.ChatCompletion.create(
model="gpt-4", # The name of the OpenAI chatbot model to use
# The conversation history up to this point, as a list of dictionaries
messages=message_log,
# The maximum number of tokens (words or subwords) in the generated response
max_tokens=max(1, 8000-num_tokens),
# The "creativity" of the generated response (higher temperature = more creative)
temperature=0.7,
)
is_success = True
break
except openai.error.InvalidRequestError as e:
return "# Token size exceeded."
except:
max_attempts -= 1
continue
if not is_success:
return response
# Find the first response from the chatbot that has text in it (some responses may not have text)
for choice in response.choices:
if "text" in choice:
return choice.text
# If no response with text is found, return the first response's content (which may be empty)
return response.choices[0].message.content
def translate_with_OPENAI(self, source, target, source_code, translated_code, stderr, test_inputs, test_outputs, generated) -> str:
content = ''
if self.args.error_type in ['compile', 'runtime'] and self.args.dataset == 'evalplus':
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following error because it is syntactically incorrect:\n\n{stderr}\n\nCan you re-generate your response and translate the above {source} code to {target}. Print only the {target} code inside ```{target.lower()}{{response}}```and do not add any other natural language description in your output, and do not change the method signature from incorrect translation. Make sure your generated code is syntactically correct."
elif self.args.error_type in ['compile', 'runtime']:
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following error because it is syntactically incorrect:\n\n{stderr}\n\nCan you re-generate your response and translate the above {source} code to {target}. Print only the {target} code inside ```{target.lower()}{{response}}```and do not add any other natural language description in your output. Make sure your generated code is syntactically correct."
elif self.args.error_type == 'incorrect' and self.args.dataset == 'evalplus':
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following test failure:\n\n{stderr}\n\nCan you re-generate your response and translate the above {source} code to {target}. Print only the {target} code inside ```{target.lower()}{{response}}```and do not add any other natural language description in your output, and do not change the method signature from incorrect translation. Make sure your generated code is syntactically correct."
elif self.args.error_type == 'incorrect':
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following output:\n{generated}\n\ninstead of the following expected output:\n{test_outputs}\n\nCan you re-generate your response and translate the above {source} code to {target}. Print only the {target} code inside ```{target.lower()}{{response}}```and do not add any other natural language description in your output. Make sure your generated code is syntactically correct. Your generated {target} code should take the following input and generate the expected output:\n\nInput:\n{test_inputs}\n\nExpected Output:\n{test_outputs}"
message = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": content}]
response = self.send_message_to_openai(message)
return response.replace(f"```{target.lower()}", "").replace("```", "")
def translate_with_HF(self, model, tokenizer, device, source, target, source_code, translated_code, stderr, test_inputs, test_outputs, generated) -> str:
content = ''
if self.args.error_type in ['compile', 'runtime'] and self.args.dataset == 'evalplus':
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following error because it is syntactically incorrect:\n\n{stderr}\n\nCan you re-generate your response and translate the above {source} code to {target}. Do not add any natural language description in your response, and do not change the method signature from incorrect translation.\n\n{target} Code:\n"
elif self.args.error_type in ['compile', 'runtime']:
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following error because it is syntactically incorrect:\n\n{stderr}\n\nCan you re-generate your response and translate the above {source} code to {target}. Do not add any natural language description in your response.\n\n{target} Code:\n"
elif self.args.error_type == 'incorrect' and self.args.dataset == 'evalplus':
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following test failure:\n\n{stderr}\n\nCan you re-generate your response and translate the above {source} code to {target}. Do not add any natural language description in your output, and do not change the method signature from incorrect translation.\n\n{target} Code:\n"
elif self.args.error_type == 'incorrect':
content = f"You were asked to translate the following {source} code to {target}:\n\n{source_code}\n\nYour response was the following {target} code:\n\n{translated_code}\n\nExecuting your generated code gives the following output:\n{generated}\n\ninstead of the following expected output:\n{test_outputs}\n\nCan you re-generate your response and translate the above {source} code to {target}. Do not add any natural language description in your response. Your generated {target} code should take the following input and generate the expected output:\n\nInput:\n{test_inputs}\n\nExpected Output:\n{test_outputs}\n\n{target} Code:\n"
if self.args.model == 'StarCoder':
content = "<fim_prefix>" + content + "<fim_suffix><fim_middle>"
inputs = tokenizer.encode(content, return_tensors="pt").to(device)
total_input_tokens = inputs.shape[1]
context_length = 4096 if self.args.model == 'LLaMa' else 2048
if total_input_tokens >= context_length:
return "# Token size exceeded."
max_new_tokens = context_length - total_input_tokens
raw_outputs = model.generate(
inputs,
max_new_tokens=max_new_tokens,
do_sample=True,
top_p=self.args.p,
top_k=self.args.k,
temperature=self.args.temperature,
repetition_penalty=1,
pad_token_id=tokenizer.eos_token_id,
)
return tokenizer.decode(raw_outputs[0])
def fix(self, source, target):
self.errors = {}
with open(f'fix_reports/{self.args.model}_{self.args.dataset}_errors_from_{source}_to_{target}_{self.args.attempt}.json', 'r') as f:
self.errors = json.load(f)
tokenizer, model = None, None
device = f'cuda:{self.args.gpu_id}' if torch.cuda.is_available() else 'cpu'
if self.args.model != 'GPT-4':
model_path = ''
auth_token = None
kwargs = {}
if self.args.model == 'LLaMa':
model_path = 'meta-llama/Llama-2-13b-chat-hf'
auth_token = os.environ['LLAMA2_AUTH_TOKEN']
elif self.args.model == 'StarCoder':
model_path = 'bigcode/starcoder'
auth_token = os.environ['STARCODER_AUTH_TOKEN']
elif self.args.model == 'CodeGen':
model_path = 'Salesforce/codegen-16B-multi'
kwargs["torch_dtype"] = torch.float16
tokenizer = AutoTokenizer.from_pretrained(model_path, use_auth_token=auth_token, cache_dir='./huggingface')
model = AutoModelForCausalLM.from_pretrained(model_path, use_auth_token=auth_token, cache_dir='./huggingface', **kwargs).to(device)
snippets = self.errors[self.args.error_type]
for snippet in tqdm(snippets, total=len(snippets), bar_format="{desc:<5.5}{percentage:3.0f}%|{bar:10}{r_bar}"):
code_id = snippet[0].split('.')[0]
stderr, test_inputs, test_outputs, generated = '', '', '', ''
if self.args.error_type in ['compile', 'runtime']:
filename, stderr = snippet
elif self.args.error_type == 'incorrect' and self.args.dataset == 'evalplus':
filename, stderr = snippet
elif self.args.error_type == 'incorrect':
filename, test_inputs, test_outputs, generated = snippet
stderr_output = stderr.strip()
test_inputs = test_inputs.strip()
test_outputs = test_outputs.strip()
generated = generated.strip()
source_code = ''
with open(f'{self.input_dir}/{source}/Code/' + filename.split('.')[0] + f'.{self.EXTENTIONS[source]}', 'r') as f:
source_code = f.read().strip()
recent_translated_code = ''
with open(f'{self.translation_dir}/{source}/{target}/{filename}', 'r') as f:
recent_translated_code = f.read().strip()
# Save the generated code
target_dir = self.out_dir.joinpath(f"{source}", f"{target}")
if not target_dir.exists():
target_dir.mkdir(parents=True)
filename_of_translated_code = target_dir.joinpath(filename)
translated_code_fp = Path(filename_of_translated_code)
if translated_code_fp.exists():
continue
translated_code = ''
if self.args.model == 'GPT-4':
translated_code = self.translate_with_OPENAI(
source, target, source_code, recent_translated_code, stderr_output, test_inputs, test_outputs, generated)
elif self.args.model in ['LLaMa', 'StarCoder', 'CodeGen']:
translated_code = self.translate_with_HF(
model, tokenizer, device, source, target, source_code, recent_translated_code, stderr_output, test_inputs, test_outputs, generated)
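# Rewrite any "public class ..." declaration so the class name matches the problem id (Java requires the public class to match the file name).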
translated_code = re.sub('public\s*class\s*.+', 'public class ' + code_id + ' {', translated_code)
with open(filename_of_translated_code, "w") as f:
print(translated_code, file=f)
def __exit__(self, exception, _, __):
print(exception)
if __name__ == "__main__":
# Initialize OPENAI-API keys
load_dotenv()
parser = argparse.ArgumentParser(description='run repair with a given model, dataset and languages')
parser.add_argument('--model', help='model to use for code translation. should be one of [GPT-4,LLaMa,StarCoder,CodeGen]', required=True, type=str)
parser.add_argument('--dataset', help='dataset to use for code translation. should be one of [codenet,avatar,evalplus]', required=True, type=str)
parser.add_argument('--source_lang', help='source language to use for code translation. should be one of [Python,Java,C,C++,Go]', required=True, type=str)
parser.add_argument('--target_lang', help='target language to use for code translation. should be one of [Python,Java,C,C++,Go]', required=True, type=str)
parser.add_argument('--k', help='The number of highest probability vocabulary tokens to keep for top-k-filtering. Only applies for sampling mode, with range from 1 to 100.', required=True, type=int)
parser.add_argument('--p', help='Only the most probable tokens with probabilities that add up to top_p or higher are considered during decoding. The valid range is 0.0 to 1.0. 1.0 is equivalent to disabled and is the default. Only applies to sampling mode. Also known as nucleus sampling.', required=True, type=float)
parser.add_argument('--temperature', help='A value used to warp next-token probabilities in sampling mode. Values less than 1.0 sharpen the probability distribution, resulting in "less random" output. Values greater than 1.0 flatten the probability distribution, resulting in "more random" output. A value of 1.0 has no effect and is the default. The allowed range is 0.0 to 2.0.', required=True, type=float)
parser.add_argument('--gpu_id', help='GPU ID to use for translation.', required=True, type=int)
parser.add_argument('--error_type', help='Error type to repair. should be one of [compile,runtime,incorrect]', required=True, type=str)
parser.add_argument('--attempt', help='Attempt number to repair.', required=True, type=int)
args = parser.parse_args()
source = args.source_lang
target = args.target_lang
with Repair(args) as repair:
logging.info(f"Repairing examples from {source} to {target} in {args.dataset} dataset for {args.error_type} errors using {args.model} model.")
repair.fix(source, target)
| [
"You are a helpful assistant."
] |
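`Repair.fix` above is driven entirely by a pre-generated error report at `fix_reports/{model}_{dataset}_errors_from_{source}_to_{target}_{attempt}.json`, keyed by error type, with each entry unpacked positionally (`filename, stderr = snippet` or `filename, test_inputs, test_outputs, generated = snippet`; for the `evalplus` dataset the `incorrect` entries also carry just a filename and stderr). A sketch of the shape such a report would need is below; the file names, compiler messages, and I/O values are invented for illustration. From the command line the script would be launched with the arguments declared at the bottom of the file, e.g. `python src/translation/repair.py --model GPT-4 --dataset codenet --source_lang Python --target_lang Java --k 50 --p 0.95 --temperature 0.7 --gpu_id 0 --error_type compile --attempt 1`, where the sampling values are placeholders.

```python
# Hypothetical shape of a fix_reports/*.json file consumed by Repair.fix().
# The keys mirror the --error_type choices; each value is a list of snippets
# that the loop unpacks positionally. All concrete values here are made up.
example_report = {
    "compile": [                     # (filename, stderr)
        ["p00001.java", "error: ';' expected"],
    ],
    "runtime": [                     # (filename, stderr)
        ["p00002.java", "java.lang.NumberFormatException: For input string: \"x\""],
    ],
    "incorrect": [                   # (filename, test_inputs, test_outputs, generated)
        ["p00003.java", "3 4\n", "7\n", "12\n"],
    ],
}
```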
2024-01-10 | saviocunhaa/LojaMadamy | pages~2_1%EF%B8%8F%E2%83%A3_MadamyUBJ.py | import os
import mysql.connector
import openai
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import streamlit as st
st.set_page_config(
page_title="Dash Madamy Ubajara",
page_icon=":bar_chart:",
layout="wide",
initial_sidebar_state="collapsed",
)
def conexãoUbj():
######################### CONEXÃO LOJA UBAJARA ########################################
global conexao
conexao = mysql.connector.connect(
host="srv496.hstgr.io",
database="u771541202_db_madamy",
user="u771541202_madamy",
password="visioWebsolucao2",
)
if conexao.is_connected():
print("conexão ok")
querycaixa = "SELECT * FROM caixas"
querycaixaItens = "SELECT * FROM caixa_items"
querycargos = "SELECT * FROM cargos"
queryclientes = "SELECT * FROM clientes"
querycolaboradores = "SELECT * FROM colaboradors"
querycomandas = "SELECT * FROM comandas"
queryempresas = "SELECT * FROM empresas"
queryestoques = "SELECT * FROM estoques"
queryforma_pagamentos = "SELECT * FROM forma_pagamentos"
queryfornecedores = "SELECT * FROM fornecedors"
querygrupo_produtos = "SELECT * FROM grupo_produtos"
queryitem_comandas = "SELECT * FROM item_comandas"
queryitem_estoques = "SELECT * FROM item_estoques"
querylojas = "SELECT * FROM lojas"
queryprodutos = "SELECT * FROM produtos"
queryproduto_precos = "SELECT * FROM produto_precos"
querypromocaos = "SELECT * FROM promocaos"
global dfCaixa, dfCaixaItens, dfCargos, dfClientes, dfColaboradores, dfComandas, dfEmpresas, dfEstoques, dfFormaPagamento, dfFornecedores, dfGrupoProdutos, dfItemComandas, dfItemEstoque, dfLojas, dfProdutos, dfProdutoPreco, dfPromocoes
dfCaixa = pd.read_sql(querycaixa, conexao)
dfCaixaItens = pd.read_sql(querycaixaItens, conexao)
dfCargos = pd.read_sql(querycargos, conexao)
dfClientes = pd.read_sql(queryclientes, conexao)
dfColaboradores = pd.read_sql(querycolaboradores, conexao)
dfComandas = pd.read_sql(querycomandas, conexao)
dfEmpresas = pd.read_sql(queryempresas, conexao)
dfEstoques = pd.read_sql(queryestoques, conexao)
dfFormaPagamento = pd.read_sql(queryforma_pagamentos, conexao)
dfFornecedores = pd.read_sql(queryfornecedores, conexao)
dfGrupoProdutos = pd.read_sql(querygrupo_produtos, conexao)
dfItemComandas = pd.read_sql(queryitem_comandas, conexao)
dfItemEstoque = pd.read_sql(queryitem_estoques, conexao)
dfLojas = pd.read_sql(querylojas, conexao)
dfProdutos = pd.read_sql(queryprodutos, conexao)
dfProdutoPreco = pd.read_sql(queryproduto_precos, conexao)
dfPromocoes = pd.read_sql(querypromocaos, conexao)
return (
dfCaixa,
dfCaixaItens,
dfCargos,
dfClientes,
dfColaboradores,
dfComandas,
dfEmpresas,
dfEstoques,
dfFormaPagamento,
dfFornecedores,
dfGrupoProdutos,
dfItemComandas,
dfItemEstoque,
dfLojas,
dfProdutos,
dfProdutoPreco,
dfPromocoes,
)
def plot_total_vendas_por_mes(dfCaixaItens, dfComandas, dfColaboradores, year, vendedores_selecionados):
# Filtrar o DataFrame com base no ano selecionado
df_filtrado = dfCaixaItens[dfCaixaItens["Year"].isin(year)]
if vendedores_selecionados:
# Obter IDs dos vendedores selecionados
ids_vendedores_selecionados = dfColaboradores[dfColaboradores['nome'].isin(vendedores_selecionados)]['id'].tolist()
# Filtrar comandas com base nos IDs dos vendedores
df_filtrado = df_filtrado[df_filtrado['comanda_id'].isin(dfComandas[dfComandas['colaborador_id'].isin(ids_vendedores_selecionados)]['id'])]
# Agrupar as vendas por mês e calcular o valor total de vendas
vendas_por_mes = df_filtrado.groupby("Month")["valor"].sum()
# Gráfico de linhas com o total de vendas por mês
fig = go.Figure(
data=[go.Scatter(x=vendas_por_mes.index, y=vendas_por_mes.values, mode="lines")]
)
fig.update_layout(
title="Total de Vendas por Mês",
xaxis_title="Mês",
yaxis_title="Valor Total de Vendas",
)
return fig
def plot_total_vendas_por_Ano(dfCaixaItens, year):
# Filtrar o DataFrame com base no ano selecionado na sidebar
df_filtrado = dfCaixaItens[dfCaixaItens["Year"].isin(year)]
# Agrupar as vendas por mês e calcular o valor total de vendas
vendas_por_ano = df_filtrado.groupby("Year")["valor"].sum()
# Gráfico de linhas com o total de vendas por mês
fig = go.Figure(data=[go.Line(x=vendas_por_ano.index, y=vendas_por_ano.values)])
fig.update_layout(
title="Total de Vendas por Ano",
xaxis_title="Ano",
yaxis_title="Valor Total de Vendas",
)
return fig
def plot_formas_pagamento(dfCaixaItens, dfFormaPagamento, year, month):
# Filtrar o DataFrame com base nos valores selecionados na sidebar
df_filtrado = dfCaixaItens.loc[
(dfCaixaItens["Year"].isin(year)) & (dfCaixaItens["Month"].isin(month))
]
# Mapear os IDs de forma de pagamento para as descrições correspondentes
df_filtrado_com_descricao = df_filtrado.merge(
dfFormaPagamento, left_on="forma_pagamento_id", right_on="id"
)
# Contagem das formas de pagamento
formas_pagamento = df_filtrado_com_descricao["forma_pagamento"].value_counts()
# Gráfico de barras com as formas de pagamento
fig = go.Figure(data=[go.Bar(x=formas_pagamento.index, y=formas_pagamento.values)])
fig.update_layout(
title="Formas de Pagamento",
xaxis_title="Forma de Pagamento",
yaxis_title="Quantidade",
)
return fig
def plot_quantidade_por_grupo_produto(dfGrupoProdutos, dfProdutos):
# Merge entre os DataFrames de grupo de produtos e produtos
df_merge = dfProdutos.merge(
dfGrupoProdutos, left_on="grupo_produto_id", right_on="id"
)
# Contagem da quantidade de produtos por grupo de produtos
quantidade_por_grupo_produto = df_merge["grupo"].value_counts()
# Gráfico de barras com a quantidade de produtos por grupo de produtos
fig = go.Figure(
data=[
go.Bar(
x=quantidade_por_grupo_produto.index,
y=quantidade_por_grupo_produto.values,
)
]
)
fig.update_layout(
title="Quantidade de Produtos por Grupo de Produtos",
xaxis_title="Grupo de Produtos",
yaxis_title="Quantidade de Produtos",
)
return fig
def plot_hist_vendas_por_dia_semana(dfCaixaItens, year, month):
# Filtrar o DataFrame dfCaixaItens com base nas opções selecionadas na sidebar
df_filtrado = dfCaixaItens[
(dfCaixaItens["Year"].isin(year)) & (dfCaixaItens["Month"].isin(month))
]
# Criar uma nova coluna "DiaSemana" com base na coluna "created_at"
df_filtrado["DiaSemana"] = df_filtrado["created_at"].apply(lambda x: x.weekday())
# Mapear o número do dia da semana para o nome do dia
dias_semana = [
"Segunda",
"Terça",
"Quarta",
"Quinta",
"Sexta",
"Sábado",
"Domingo",
]
df_filtrado["DiaSemana"] = df_filtrado["DiaSemana"].map(lambda x: dias_semana[x])
# Agrupar por "DiaSemana" e contar as vendas
vendas_por_dia = (
df_filtrado[df_filtrado["descricao"] == "venda"].groupby("DiaSemana").size()
)
# Criar o gráfico de barras
fig = px.bar(
vendas_por_dia,
x=vendas_por_dia.index,
y=vendas_por_dia.values,
labels={"x": "Dia da Semana", "y": "Quantidade de Vendas"},
)
# Personalizar o layout do gráfico
fig.update_layout(
title="Histórico de Vendas por Dia da Semana",
xaxis_tickmode="linear",
yaxis=dict(title="Quantidade de Vendas"),
)
# Exibir o gráfico
return fig
def plot_vendas_por_forma_pagamento(dfCaixaItens, dfFormaPagamento, year, month):
# Filtrar o DataFrame dfCaixaItens com base nos valores de year e month
df_filtered = dfCaixaItens[
(dfCaixaItens["Year"].isin(year)) & (dfCaixaItens["Month"].isin(month))
]
# Juntar os DataFrames de comandas e forma de pagamento
df_joined = df_filtered.merge(
dfFormaPagamento, left_on="forma_pagamento_id", right_on="id"
)
# Calcular o total de vendas por forma de pagamento
vendas_por_forma_pagamento = (
df_joined.groupby("forma_pagamento")["valor"].sum().reset_index()
)
# Criar o gráfico de pizza
fig = px.pie(vendas_por_forma_pagamento, values="valor", names="forma_pagamento")
fig.update_layout(title="Vendas por Forma de Pagamento")
return fig
def criarDash():
st.header("🎉 Bem-vindo ao Dashboard da Madamy Acessórios Ubajara! 🛍️")
st.markdown("---")
################### CRIAR COLUNAS DE MES E ANO DE ALGUNS DATAFRAME #######################
dfCaixaItens["Month"] = dfCaixaItens["created_at"].apply(lambda x: x.month)
dfCaixaItens["Year"] = dfCaixaItens["created_at"].apply(lambda x: x.year)
################## SIDEBAR ######################################################
valores_unicos_year = dfCaixaItens["Year"].unique()
default_values_year = list(valores_unicos_year)
year = st.sidebar.multiselect(
key=1,
label="Ano",
options=dfCaixaItens["Year"].unique(),
default=default_values_year,
)
valores_unicos_moth = dfCaixaItens["Month"].unique()
default_values_moth = list(valores_unicos_moth)
month = st.sidebar.multiselect(
key=2,
label="Mês",
options=dfCaixaItens["Month"].unique(),
default=default_values_moth,
)
# Join entre dfComandas e dfColaboradores
dfComandasComNomes = dfComandas.merge(dfColaboradores, left_on='colaborador_id', right_on='id', how='left')
# Substitua 'nome_colaborador' pela coluna correta que representa os nomes dos vendedores em dfColaboradores
vendedores_unicos = dfComandasComNomes['nome'].unique()
vendedores_selecionados = st.sidebar.multiselect(
key=3,
label="Vendedores",
options=vendedores_unicos
)
##################### TELA PRINCIPAL ###########################
# Filtrar o DataFrame dfCaixaItens com base nas opções selecionadas na sidebar
df_filtrado = dfCaixaItens[
(dfCaixaItens["Year"].isin(year)) & (dfCaixaItens["Month"].isin(month))
]
# 1 - Quantidade de dfCaixaItens["descricao"] == "venda"
quantidade_vendas = len(df_filtrado[df_filtrado["descricao"] == "venda"])
# 2 - Valor Total de Entrada de Todos as Vendas de entrada
valor_total_entradas = df_filtrado[df_filtrado["descricao"] == "venda"][
"valor"
].sum()
valor_total_entradas_Format = f"R$ {valor_total_entradas:,.2f}".replace(",", ".")
# 3 - Valor Total de Desconto
valor_total_desconto = df_filtrado["valor_desconto"].sum()
valor_total_desconto_Format = f"R$ {valor_total_desconto:,.2f}".replace(",", ".")
# 4 - Porcentagem de Vendas dos últimos 7 dias
hoje = pd.Timestamp.today().date()
data_7_dias_atras = hoje - pd.DateOffset(days=7)
df_ultimos_7_dias = df_filtrado[
(df_filtrado["created_at"].dt.date >= pd.Timestamp(data_7_dias_atras).date())
& (df_filtrado["created_at"].dt.date <= pd.Timestamp(hoje).date())
]
quantidade_vendas_7_dias = len(
df_ultimos_7_dias[df_ultimos_7_dias["descricao"] == "venda"]
)
porcentagem_vendas_7_dias = quantidade_vendas_7_dias / quantidade_vendas * 100
# 6 - Valor Total de Entradas dos últimos 7 dias
valor_total_entradas_7_dias = df_ultimos_7_dias[
df_ultimos_7_dias["descricao"] == "venda"
]["valor"].sum()
porcentagem_valor_entradas_7_dias = (
valor_total_entradas_7_dias / valor_total_entradas * 100
)
# 5 - Porcentagem de Vendas do dia atual
df_dia_atual = df_filtrado[
df_filtrado["created_at"].dt.date == pd.Timestamp(hoje).date()
]
quantidade_vendas_dia_atual = len(
df_dia_atual[df_dia_atual["descricao"] == "venda"]
)
porcentagem_vendas_dia_atual = quantidade_vendas_dia_atual / quantidade_vendas * 100
# st.text(quantidade_vendas_dia_atual)
col1, col2, col3, col4 = st.columns([1, 1, 1, 1])
col1.metric(
"🤝 Quantidade de Vendas e Vendas últimos 7 dias (%)",
quantidade_vendas,
f"{porcentagem_vendas_7_dias:.2f}%",
)
col2.metric(
"Total 💵 Pedidos",
valor_total_entradas_Format,
f"{porcentagem_valor_entradas_7_dias:.2f}%",
)
col3.metric("Valor Desconto 📊", valor_total_desconto_Format)
col4.metric("Vendas Dia (%)", porcentagem_vendas_dia_atual)
col1_, col2_ = st.columns(2)
with col1_:
# Gráfico de formas de pagamento
fig_formas_pagamento = plot_formas_pagamento(
dfCaixaItens, dfFormaPagamento, year, month
)
st.plotly_chart(fig_formas_pagamento)
with col2_:
# Gráfico de vendas por forma de pagamento
fig_vendas_por_forma_pagamento = plot_vendas_por_forma_pagamento(
dfCaixaItens, dfFormaPagamento, year, month
)
st.plotly_chart(fig_vendas_por_forma_pagamento)
with col1_:
fig_total_vendas = plot_total_vendas_por_mes(dfCaixaItens, dfComandas, dfColaboradores, year, vendedores_selecionados)
st.plotly_chart(fig_total_vendas)
with col2_:
fig_total_vendas = plot_total_vendas_por_Ano(dfCaixaItens, year)
st.plotly_chart(fig_total_vendas)
with col1_:
fig_plot_hist_vendas_por_dia_semana = plot_hist_vendas_por_dia_semana(
dfCaixaItens, year, month
)
st.plotly_chart(fig_plot_hist_vendas_por_dia_semana)
with col2_:
fig_quantidade_por_grupo_produto = plot_quantidade_por_grupo_produto(
dfGrupoProdutos, dfProdutos
)
st.plotly_chart(fig_quantidade_por_grupo_produto)
openai.api_key = os.getenv("OPENAI_API_KEY")
chatgpt = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "system",
"content": f"""A loja Madamy Acessorios, vende acesorios femininos como brincos e colares, as informações de {quantidade_vendas} representa a quantidae de pedidos, o {valor_total_entradas_Format} repreenta o valor em R$ de vendas, o {valor_total_desconto_Format} é o desconto aplicado em vendas, o Dataset {fig_formas_pagamento} é o grafico que corresponde a forma de pagamento mais usada nas comandas, o dataset {fig_vendas_por_forma_pagamento} é o grafico que corresponde os valores pagos por cadas forma de pagamento, o dataset {fig_total_vendas} é o grafico que correponde ao total de vendas em cada mês, o dataset {fig_plot_hist_vendas_por_dia_semana} é o grafico que corresponde o historico de vendas por dia da semana. Me informe 5 insight para minha loja """,
},
],
)
insights = chatgpt["choices"][0]["message"]["content"]
if st.button("Analisar"):
st.write(insights)
(
dfCaixa,
dfCaixaItens,
dfCargos,
dfClientes,
dfColaboradores,
dfComandas,
dfEmpresas,
dfEstoques,
dfFormaPagamento,
dfFornecedores,
dfGrupoProdutos,
dfItemComandas,
dfItemEstoque,
dfLojas,
dfProdutos,
dfProdutoPreco,
dfPromocoes,
) = conexãoUbj()
criarDash()
| [
"A loja Madamy Acessorios, vende acesorios femininos como brincos e colares, as informações de PLACEHOLDER representa a quantidae de pedidos, o PLACEHOLDER repreenta o valor em R$ de vendas, o PLACEHOLDER é o desconto aplicado em vendas, o Dataset PLACEHOLDER é o grafico que corresponde a forma de pagamento mais usada nas comandas, o dataset PLACEHOLDER é o grafico que corresponde os valores pagos por cadas forma de pagamento, o dataset PLACEHOLDER é o grafico que correponde ao total de vendas em cada mês, o dataset PLACEHOLDER é o grafico que corresponde o historico de vendas por dia da semana. Me informe 5 insight para minha loja "
] |
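One thing the dashboard above leaves exposed is configuration: the MySQL host, database, user, and password are hardcoded inside `conexãoUbj`, while the OpenAI key is read from the environment. A hedged sketch of pulling the database credentials from environment variables as well is shown below; the variable names and the helper's name are assumptions made for illustration, not part of the app.

```python
# Hypothetical refactor of the connection setup in conexãoUbj(): read the
# MySQL credentials from environment variables instead of hardcoding them.
# The environment variable names below are assumed for illustration.
import os
import mysql.connector

def abrir_conexao():
    return mysql.connector.connect(
        host=os.getenv("MADAMY_DB_HOST"),
        database=os.getenv("MADAMY_DB_NAME"),
        user=os.getenv("MADAMY_DB_USER"),
        password=os.getenv("MADAMY_DB_PASSWORD"),
    )
```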
2024-01-10 | wattsjs/askjamie | pipeline~src~pipeline~ingest.py | import json
import logging
import os
import sys
import time
from uuid import uuid4
from langchain.document_loaders import (
PlaywrightURLLoader,
OnlinePDFLoader,
)
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import Document
from openai.error import APIError
from pydantic.parse import load_file
from qdrant_client import QdrantClient
from qdrant_client.http import models
from qdrant_client.models import (
FieldCondition,
Filter,
FilterSelector,
MatchText,
PointStruct,
)
from pipeline.tokens import text_splitter
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or "OPENAI_API_KEY"
QDRANT_API_KEY = os.getenv("QDRANT_API_KEY") or "QDRANT_API_KEY"
QDRANT_URL = os.getenv("QDRANT_URL") or "QDRANT_URL"
model_name = "text-embedding-ada-002"
collection_name = "askcisco.com"
def main():
qdrant_client = QdrantClient(
url=QDRANT_URL, api_key=QDRANT_API_KEY, prefer_grpc=True
)
embed = OpenAIEmbeddings(client=None, model=model_name, show_progress_bar=True)
create_collection(qdrant_client)
# check if --force flag is passed
force = False
if len(sys.argv) > 1 and sys.argv[1] == "--force":
force = True
ingest(qdrant_client, embed, "docs", force=force)
ingest(qdrant_client, embed, "pdfs", force=force)
ingest(qdrant_client, embed, "urls", force=force)
def ingest(client: QdrantClient, embed: OpenAIEmbeddings, type: str, force=False):
datas = get_queued_data(type)
for data in datas:
# check if "force" or "update" is a key in the data
document_force = False
if data.get("force") or data.get("update"):
document_force = True
if not force and not document_force:
results, _ = client.scroll(
collection_name,
scroll_filter=Filter(
must=[
FieldCondition(
key="metadata.source",
# match on text because there may be a # in the url
match=MatchText(text=data["source"].replace(".html", "")),
)
]
),
)
if len(results):
logging.info(f"⤵️ skipping {data['source']} - already exists")
continue
else:
delete_result = client.delete(
collection_name,
points_selector=FilterSelector(
filter=models.Filter(
must=[
FieldCondition(
key="metadata.source",
# match on text because there may be a # in the url
match=MatchText(
text=data["source"].replace(".html", "")
),
)
]
)
),
)
logging.info(f"🗑 deleted {delete_result}")
logging.info(f"‼️ force adding {data['source']}")
# if the document was forced, remove the force key
if "force" in data:
del data["force"]
if "update" in data:
del data["update"]
update_queued_data(type, datas)
if type == "urls":
docs = get_documents_from_queued_urls([data])
elif type == "docs":
docs = get_documents_from_queued_docs([data])
elif type == "pdfs":
docs = get_docs_from_queued_pdfs([data])
else:
raise Exception("unknown type")
new_embeddings = []
docs_content = [d.page_content for d in docs]
batch = 100
for i in range(0, len(docs_content), batch):
try:
logging.info(f"adding documents {i} to {i+batch}")
new_embeddings_batch = embed.embed_documents(
docs_content[i : i + batch]
)
new_embeddings.extend(new_embeddings_batch)
time.sleep(0.1)
except APIError:
logging.error(
"⚠️ openai api error - waiting 60 seconds and retrying..."
)
time.sleep(30)
new_embeddings_batch = embed.embed_documents(
docs_content[i : i + batch]
)
new_embeddings.extend(new_embeddings_batch)
if not new_embeddings:
logging.info(f"⤵️ skipping {data['source']} - no embeddings")
continue
client.upsert(
collection_name=collection_name,
points=[
PointStruct(
id=str(uuid4()),
vector=vector,
payload=docs[idx].dict(),
)
for idx, vector in enumerate(new_embeddings)
],
)
logging.info(f"🧠 added {len(new_embeddings)} new {type} embeddings")
def create_collection(client: QdrantClient, dimension: int = 1536):
collections = client.get_collections()
collection_names = [c.name for c in collections.collections]
# only create collection if it doesn't exist
if collection_name not in collection_names:
logging.info(f"creating collection {collection_name}")
client.recreate_collection(
collection_name=collection_name,
vectors_config=models.VectorParams(
size=dimension,
distance=models.Distance.COSINE,
),
)
def get_documents_from_queued_docs(docs: list[dict] | None = None):
"""Get all queued scraped docs if they exist"""
if not docs:
docs = get_queued_data("docs")
if not docs:
return []
logging.info(f"processing {len(docs)} docs")
parsed_docs = []
for doc in docs:
if "source" not in doc:
logging.warning(f"no source found for {doc}")
continue
logging.info(f"📄 processing {doc['source']}")
try:
# load the document as json
slug = doc["slug"]
versions = doc.get("versions", [])
if "version" in doc:
versions.append(doc["version"])
if not versions:
file_name = f"data/queue/docs/{slug}.json"
else:
file_name = f"data/queue/docs/{slug}-{versions[0]}.json"
sections = load_file(file_name)
except Exception as e:
logging.error(f"failed to parse {doc['source']}: {e}")
continue
products = doc.get("products", [])
if "product" in doc:
products.append(doc["product"])
if "firepower" in products or "firewall" in products:
product = "Secure Firewall"
elif "ise" in products:
product = "Identity Services Engine"
elif "umbrella" in products:
product = "Cisco Umbrella"
else:
product = products[0].title()
for section in sections:
meta = doc
meta.update(
{
"slug": doc["slug"],
"title": section["title"],
"subtitle": section["subtitle"]
if "subtitle" in section
else section["header"]
if "header" in section
else None,
"source": section["url"],
}
)
if versions:
meta["versions"] = versions
sec = Document(
page_content=section["content"],
metadata=meta,
)
parsed_docs.extend(text_splitter.split_documents([sec]))
return parsed_docs
def get_documents_from_queued_urls(urls: list[dict] | None = None) -> list[Document]:
"""Get all urls from the urls.json file in data/queue"""
# open the urls.json file and read its contents
if not urls:
urls = get_queued_data("urls")
if not urls:
return []
logging.info(f"processing url {urls[0]['source']} ...")
loader = PlaywrightURLLoader(
urls=[u["source"] for u in urls], remove_selectors=["header", "footer"]
)
data = loader.load()
split_docs = text_splitter.split_documents(data)
# load any metadata from the urls.json file into the document
for doc in split_docs:
for url in urls:
if doc.metadata["source"].strip(".") == url["source"].strip("."):
doc.metadata.update(url)
if "version" in doc.metadata:
doc.metadata["versions"] = [doc.metadata["version"]]
return split_docs
def get_docs_from_queued_pdfs(pdfs: list[dict] | None = None) -> list[Document]:
# open the pdfs.json file and read its contents
if not pdfs:
pdfs = get_queued_data("pdfs")
if not pdfs:
return []
logging.info(f"processing pdf {pdfs[0]['source']} ...")
docs = []
for pdf in pdfs:
try:
loader = OnlinePDFLoader(pdf["source"])
doc = loader.load()
# replace some boilerplate
if doc:
doc[0].page_content = (
doc[0]
.page_content.replace("Cisco Public", "")
.replace("All Rights Reserved", "")
)
pages = text_splitter.split_documents(doc)
except Exception as e:
logging.error(f"failed to parse {pdf['source']}: {e}")
continue
# load any metadata from the pdfs.json file into the document
for doc in pages:
doc.metadata.update(pdf)
docs.extend(pages)
return docs
def get_queued_data(type: str):
data: list[dict] = []
try:
with open(f"pipeline/data/queue/{type}.json", "r") as f:
d = f.read()
if not d:
logging.info(f"no {type}s to process")
return data
data = json.loads(d)
data = [u for u in data if "source" in u]
except FileNotFoundError:
with open(f"data/queue/{type}.json", "r") as f:
d = f.read()
if not d:
logging.info(f"no {type}s to process")
return data
data = json.loads(d)
data = [u for u in data if "source" in u]
return data
def update_queued_data(type: str, data: list[dict]):
# override the existing file with the new data
try:
with open(f"data/queue/{type}.json", "w") as f:
f.write(json.dumps(data))
except FileNotFoundError:
with open(f"pipeline/data/queue/{type}.json", "w") as f:
f.write(json.dumps(data))
logging.info(f"✅ updated {type}.json")
if __name__ == "__main__":
main()
| [] |
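`get_queued_data` above keeps only entries that contain a `source` key, and the rest of the ingest pipeline looks for a few optional fields: `force`/`update` to bypass the existing-point check, plus `slug`, `version`(s), and `product`(s) on `docs` entries, which locate the scraped `data/queue/docs/<slug>[-<version>].json` file and enrich the stored metadata. A sketch of what the queue files might contain follows; the URLs, slug, version, and product values are invented for illustration.

```python
# Hypothetical contents of data/queue/urls.json and data/queue/docs.json.
# Only "source" is required; the remaining keys are optional and are read
# by ingest() / get_documents_from_queued_docs() when present.
example_urls_queue = [
    {"source": "https://example.com/secure-firewall/release-notes.html"},
    {"source": "https://example.com/ise/admin-guide.html", "force": True},
]

example_docs_queue = [
    {
        "source": "https://example.com/umbrella/getting-started",
        "slug": "umbrella-getting-started",  # sections read from data/queue/docs/<slug>-<version>.json
        "version": "1.0",
        "product": "umbrella",
    },
]
```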
2024-01-10 | wattsjs/askjamie | pipeline~src~pipeline~tokens.py | import tiktoken
from langchain.text_splitter import RecursiveCharacterTextSplitter
tokenizer_name = tiktoken.encoding_for_model("gpt-3.5-turbo")
tokenizer = tiktoken.get_encoding(tokenizer_name.name)
# create the length function
def tiktoken_len(text):
tokens = tokenizer.encode(text, disallowed_special=())
return len(tokens)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=100,
length_function=tiktoken_len,
separators=["\n\n", "\n", " ", ""],
)
| [] |
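Because `tiktoken_len` is passed as the splitter's `length_function`, the `chunk_size` and `chunk_overlap` above are measured in GPT-3.5 tokens rather than characters. A small usage sketch follows; the sample text is arbitrary and only the two imported helpers come from the module itself.

```python
# Illustrative use of the helpers defined above (sample text is arbitrary).
from pipeline.tokens import text_splitter, tiktoken_len

sample = "Qdrant stores one embedding per chunk of text. " * 200

chunks = text_splitter.split_text(sample)
print(f"{len(chunks)} chunks")
# Each chunk should come in at or under the 1000-token budget set above.
print("largest chunk:", max(tiktoken_len(c) for c in chunks), "tokens")
```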
2024-01-10 | JosephQiu21/batchGPT | batchGPT.py | import sys
import os
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from openai import OpenAI
nltk.download("punkt")
client = OpenAI()
# We use 2k-token inputs for the GPT-3.5-turbo-1106 model,
# as it returns a maximum of 4096 tokens per completion.
# You may want to adjust this.
MAX_SIZE = 2048
# Split long texts into chunks of sentences
def split_into_chunks(text, max_size=MAX_SIZE):
sentences = sent_tokenize(text)
chunks = []
current_chunk = []
current_chunk_size = 0
for sentence in sentences:
if current_chunk_size + len(word_tokenize(sentence)) <= max_size:
current_chunk_size += len(word_tokenize(sentence))
current_chunk.append(sentence)
else:
chunks.append(" ".join(current_chunk))
current_chunk = [sentence]
current_chunk_size = len(word_tokenize(sentence))
if current_chunk:
chunks.append(" ".join(current_chunk))
return chunks
def process(input_file, action, model):
output_file = os.path.splitext(input_file)[0] + (
".refined.md" if action == "refine" else ".translated.md"
)
with open(input_file, "r") as f:
content = f.read()
chunks = split_into_chunks(content)
print(f"Split into {len(chunks)} chunks")
if action == "refine":
system_text = """
As a Computer Science professor, your task is to proofread and correct a raw transcript of your course.
The text has been transcribed using Google's Speech-to-Text API, resulting in grammar mistakes and recognition errors.
Your goal is to recover the original lecture transcript and provide the entire corrected text.
To successfully complete this task, please consider the following guidelines:
1. Error correction: Carefully examine the transcript and correct any grammar mistakes and recognition errors. Ensure that the corrected text accurately reflects the content of the lecture.
2. Maintain tone and voice: While correcting errors, it is important to preserve the original tone and voice of the lecture. Pay attention to the professor's style of delivery, ensuring that the corrected text captures the same essence.
3. Improve readability: Separate the transcript into paragraphs of appropriate length.
4. Basic fact-checking: Use your knowledge of Computer Science to fact-check the transcript. For example, if the transcript mentions an operation in Java called 'instance of', you should know that the correct name is 'instanceof'.
5. Never add any response other than the corrected text like "Here's the entire corrected transcript:".
"""
elif action == "translate":
system_text = """
Translate the following text from English to Chinese. Do not add any response other than the translated text.
The system should exhibit a strong understanding of the context and produce high-quality and fluent translations.
To successfully complete this task, please consider the following guidelines:
1. Use Markdown syntax: Use Markdown syntax to format the text.
2. Improve readability: Use appropriate paragraph breaks to enhance the readability of the translated text.
3. Improve coherence: Ensure that the translated text is coherent and flows naturally.
"""
else:
raise ValueError(f"Unsupported action: {action}")
print("Processing...")
with open(output_file, "w") as f:
for chunk in chunks:
completion = client.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": system_text},
{"role": "user", "content": chunk},
],
seed=12345,
)
edited_chunk = completion.choices[0].message.content
print(edited_chunk)
f.write("\n" + edited_chunk + "\n")
print(f"Output file saved as {output_file}")
if __name__ == "__main__":
if len(sys.argv) < 4:
print("Usage: python script.py input_file [refine/translate] [model]")
else:
input_file = sys.argv[1]
action = sys.argv[2]
model_number = int(sys.argv[3])
model = "gpt-3.5-turbo-1106" if model_number == 3 else "gpt-4-1106-preview"
if action not in ["refine", "translate"]:
print("Invalid action. Please choose either 'refine' or 'translate'.")
elif model_number not in [3, 4]:
print(
"Invalid model. Please choose either '3' for gpt-3.5-turbo-1106 or '4' for gpt-4-1106-preview."
)
else:
process(input_file, action, model)
| [] |
2024-01-10 | SaiSanthoshS/github_complexity_analyzer | gpt_utils.py | import openai
import requests
import re
# Set up your OpenAI API credentials
openai.api_key = 'YOUR API KEY HERE'
def generate_prompt(repository_name, code):
"""
Generate a prompt for evaluating the technical complexity of a repository.
Customize the prompt based on specific parameters for evaluation.
"""
prompt = f"What are the technical complexities of the {repository_name} repository?\n"
# Add specific parameters for evaluation
prompt += "\n1. Code Quality: Evaluate the code quality of the repository."
prompt += "\n2. Documentation Quality: Evaluate the documentation quality of the repository."
prompt += "\n3. Readability: Evaluate the readability of the code in the repository."
prompt += "\n4. Activity Level: Evaluate the activity level of the repository."
prompt += "\n5. Community Engagement: Evaluate the community engagement of the repository."
# Add the code for evaluation
prompt += f"\n\nCode:\n{code}"
prompt += "\n\nThe response should be strictly in the format 'Score: X and Reason: Y' with x being the score and y being the reason for that score with the above 5 parameters included\n"
return prompt
def extract_score_and_reason(response):
"""
Extract the score and reason for each criterion from the response text.
The response text should be in the format:
"1. Code Quality: Score: X and Reason: Y
2. Documentation Quality: Score: X and Reason: Y
3. Readability: Score: X and Reason: Y
4. Activity Level: Score: X and Reason: Y
5. Community Engagement: Score: X and Reason: Y"
Returns a dictionary containing the scores and reasons for each criterion.
"""
results = {}
try:
pattern = r'(\d+)\. (\w+(?: \w+)*): Score: (\d+) and Reason: (.+)'
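# Capture groups: (item number, criterion name, score, reason), matching lines like '1. Code Quality: Score: 8 and Reason: ...'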
matches = re.findall(pattern, response)
for match in matches:
criterion = match[1]
score = int(match[2])
reason = match[3]
results[criterion] = {'score': score, 'reason': reason}
except Exception:
# Handle any potential parsing errors
pass
return results
def evaluate_repository(prompt):
"""Evaluate the technical complexity of a repository using the GPT-3.5 model."""
# Set the parameters for the completion
completion_kwargs = {
'model': 'text-davinci-003',
'prompt': prompt,
'max_tokens': 400,
'temperature': 0.6,
'n': 1,
'stop': None
}
# Generate the completion using the GPT-3.5 model
completion_response = openai.Completion.create(**completion_kwargs)
# Extract the generated response
response = completion_response.choices[0].text.strip()
print(response)
# Process the response to extract the score and reason
result = extract_score_and_reason(response)
print(result)
return result
def fetch_repositories(github_username):
url = f'https://api.github.com/users/{github_username}/repos'
response = requests.get(url)
if response.status_code == 200:
repositories = [repo['name'] for repo in response.json()]
return repositories
else:
# Handle API request error
return []
def fetch_repository_code(repository_name):
"""
Fetch the code from a given repository.
Replace this function with your code to fetch the code from the repository.
"""
# Make a GET request to fetch the repository code
url = f"https://api.github.com/repos/{repository_name}/contents"
response = requests.get(url)
MAX_TOKENS_THRESHOLD = 1024
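# Note: MAX_TOKENS_THRESHOLD is defined but not used in this snippet; the concatenated file contents are not truncated before prompting.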
if response.status_code == 200:
contents = response.json()
code_files = []
for item in contents:
if item['type'] == 'file':
file_url = item['download_url']
code = fetch_code_from_url(file_url)
if code is not None:
code_files.append(code)
return "\n".join(code_files)
else:
# Handle request error
return ""
def fetch_code_from_url(file_url):
"""
Fetch the code content from the given URL.
Replace this function with your code to fetch the code content.
"""
response = requests.get(file_url)
if response.status_code == 200:
code = response.text
return code
else:
# Handle request error
return None
| [
"\n\nCode:\nPLACEHOLDER",
"\n5. Community Engagement: Evaluate the community engagement of the repository.",
"\n3. Readability: Evaluate the readability of the code in the repository.",
"What are the technical complexities of the PLACEHOLDER repository?\n",
"\n\nThe response should be strictly in the format 'Score: X and Reason: Y' with x being the score and y being the reason for that score with the above 5 parameters included\n",
"\n1. Code Quality: Evaluate the code quality of the repository.",
"\n2. Documentation Quality: Evaluate the documentation quality of the repository.",
"\n4. Activity Level: Evaluate the activity level of the repository."
] |
2024-01-10 | Atabek786/Projects | TextIntoImage~vary.py | import json
import os
from base64 import b64decode
from pathlib import Path
import openai
DATA_DIR = Path.cwd() / "responses"
SOURCE_FILE = DATA_DIR / "Mosqu-1690491417.json"
openai.api_key = os.getenv("OPENAI_API_KEY")
with open(SOURCE_FILE, mode="r", encoding="utf-8") as json_file:
saved_response = json.load(json_file)
image_data = b64decode(saved_response["data"][0]["b64_json"])
response = openai.Image.create_variation(
image=image_data,
n=3,
size="512x512",
response_format="b64_json",
)
new_file_name = f"vary-{SOURCE_FILE.stem[:5]}-{response['created']}.json"
with open(DATA_DIR / new_file_name, mode="w", encoding="utf-8") as file:
json.dump(response, file) | [] |
2024-01-10 | Atabek786/Projects | TextIntoImage~project.py | import os
import json
from pathlib import Path
import openai
userprompt = input("Type text that will transform into an image: ")
DATA_DIR = Path.cwd() / "responses"
DATA_DIR.mkdir(exist_ok=True)
openai.api_key = os.getenv("OPENAI_API_KEY")
response = openai.Image.create(
prompt=userprompt,
n=1,
size="512x512",
response_format="b64_json",
)
file_name = DATA_DIR / f"{userprompt[:5]}-{response['created']}.json"
with open(file_name, mode="w", encoding="utf-8") as file:
json.dump(response, file)
| [
"Type text that will transform into an image: "
] |
2024-01-10 | ArturMistiuk/clarify_docs | clarify_docs~llm_chat~local_views.py | import os
import tempfile
import docx2txt
from PyPDF2 import PdfReader
from django.contrib.auth.decorators import login_required
from langchain.callbacks import get_openai_callback
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings, HuggingFaceInstructEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from django.shortcuts import render, redirect, get_object_or_404
from django.http import JsonResponse
import openai
from dotenv import load_dotenv
from langchain.vectorstores.faiss import FAISS
from pptx import Presentation
from .forms import PDFUploadForm, PDFUpdateForm, PDFDocumentForm2
from .models import ChatMessage, PDFDocument, UserData
from langchain.llms import HuggingFaceHub
CHUNK_SIZE = 1024
CHUNK_OVERLAP = 128
EMBEDDING_MODEL = "hkunlp/instructor-xl"
MAX_LEN = 768
TEMPERATURE = 0.5
MODEL_LLM = "google/flan-t5-base" # alternatives: tiiuae/falcon-7b-instruct, google/flan-t5-small
MAX_SIZE_FILE = 20 * 1024 * 1024 # 20 MB
load_dotenv()
def main(request):
# avatar = Avatar.objects.filter(user_id=request.user.id).first()
return render(request, 'llm_chat/index.html', context={}) # home.html
def get_pdf_text(file):
text = None
if file:
with tempfile.NamedTemporaryFile(delete=False) as temp_file:
temp_file.write(file.read())
temp_file.flush()
if file.name.endswith('.pdf'):
pdf_reader = PdfReader(temp_file.name)
text = ''.join(page.extract_text() for page in pdf_reader.pages)
elif file.name.endswith('.txt'):
with open(temp_file.name, 'r') as f:
text = f.read()
elif file.name.endswith('.docx'):
text = docx2txt.process(temp_file.name)
elif file.name.endswith('.pptx'):
prs = Presentation(temp_file.name)
text_runs = []
for slide in prs.slides:
for shape in slide.shapes:
if hasattr(shape, "text"):
text_runs.append(shape.text)
text = '\n'.join(text_runs)
return text
def get_text_chunks(text):
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=CHUNK_SIZE,
chunk_overlap=CHUNK_OVERLAP,
length_function=len)
chunks = text_splitter.split_text(text)
return chunks
def get_vectorstore(text_chunks):
api_key = os.getenv("OPENAI_API_KEY")
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", openai_api_key=api_key)
# embeddings = OllamaEmbeddings(model_name=EMBEDDING_MODEL)
# embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large", model_kwargs={"device": 'cpu'})
# embeddings = HuggingFaceInstructEmbeddings(model_name=EMBEDDING_MODEL)
vectorstore = FAISS.from_texts(text_chunks, embeddings)
return vectorstore
def get_conversation_chain(vectorstore):
# llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
llm = HuggingFaceHub(repo_id=MODEL_LLM, model_kwargs={"temperature": TEMPERATURE, "max_length": MAX_LEN})
# llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGUF",
# model_file="llama-2-7b-chat.q4_K_M.gguf", model_type="llama",
# gpu_layers=0, from_tf=True)
memory = ConversationBufferMemory(memory_key='chat_history', return_only_outputs=True)
conversation_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=vectorstore.as_retriever(),
memory=memory
)
return conversation_chain
@login_required(login_url="/login/")
def upload_pdf(request):
user = request.user
try:
user_data = UserData.objects.get(user=user)
except UserData.DoesNotExist:
user_data = UserData.objects.create(user=user, total_files_uploaded=0, total_questions_asked=0)
if request.method == 'POST': # and request.FILES.get('pdf_document'):
form = PDFUploadForm(request.POST, request.FILES)
if form.is_valid():
pdf_document = request.FILES['pdf_document']
_, file_extension = os.path.splitext(pdf_document.name)
if file_extension.lower() not in [".pdf", ".txt", ".docx", ".pptx"]:
return JsonResponse({'error': 'Only PDF, TXT, DOCX, PPTX files'}, status=400)
if pdf_document.size > MAX_SIZE_FILE: # 20 MB
return JsonResponse({'error': "File size exceeds 20 MB."}, status=400)
# Check if a file with the same name already exists for this user
if PDFDocument.objects.filter(user=user, title=pdf_document.name).exists():
return JsonResponse({'error': 'A file with the same name already exists.'}, status=400)
user_data.total_files_uploaded += 1
user_data.save()
# Save PDF -> DB
pdf = PDFDocument(user=user, title=pdf_document.name)
pdf.documentContent = get_pdf_text(pdf_document)
pdf.save()
return JsonResponse({'success': 'File uploaded successfully.'})
else:
form = PDFUploadForm()
user_pdfs = PDFDocument.objects.filter(user=request.user)
# not needed for now
chat_message = ChatMessage.objects.all()
return render(request, 'llm_chat/home.html', {'form': form, 'user_pdfs': user_pdfs})
@login_required(login_url="/login/")
def ask_question(request):
user = request.user
try:
user_data = UserData.objects.get(user=user)
except UserData.DoesNotExist:
user_data = UserData.objects.create(user=user, total_files_uploaded=0, total_questions_asked=0)
chat_history = ChatMessage.objects.filter(user=request.user).order_by('timestamp')[:10]
chat_response = ''
user_pdfs = PDFDocument.objects.filter(user=request.user)
user_question = ""
selected_pdf = None # selected_pdf_id or a PDFDocument object, still needs checking
if request.method == 'POST':
# Check if the user has exceeded the limit for questions per file
user_question = request.POST.get('user_question')
selected_pdf_id = request.POST.get('selected_pdf')
selected_pdf = get_object_or_404(PDFDocument, id=selected_pdf_id)
text_chunks = get_text_chunks(selected_pdf.documentContent)
knowledge_base = get_vectorstore(text_chunks)
conversation_chain = get_conversation_chain(knowledge_base)
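# Note: the FAISS index and conversation chain are rebuilt from the stored document text on every question; caching them would avoid repeating the embedding work.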
# with get_openai_callback() as cb:
response = conversation_chain({'question': user_question})
chat_response = response["answer"] #response['chat_history']
chat_message = ChatMessage(user=request.user, message=user_question, answer=chat_response,
pdf_document=selected_pdf) # a PDFDocument object
user_data.total_questions_asked += 1
user_data.save()
chat_message.save()
# Get the messages that belong to the selected PDFDocument
chat_message = ChatMessage.objects.filter(user=request.user, pdf_document=selected_pdf).order_by('timestamp')
context = {'chat_response': chat_response, 'chat_history': chat_history, 'user_question': user_question,
'user_pdfs': user_pdfs, 'chat_message': chat_message}
return render(request, 'llm_chat/home.html', context)
@login_required(login_url="/login/")
def get_chat_history(request):
pdf_id = request.GET.get('pdf_id')
if not pdf_id:
return JsonResponse({"error": "PDF ID not provided"}, status=400)
# Chat history for the given PDF; check the frontend, the problem seems to be there
messages = ChatMessage.objects.filter(pdf_document_id=pdf_id).values('message', 'answer', 'timestamp')
return JsonResponse(list(messages), safe=False)
# TO DO for future
# def new_chat(request):
# clear the messages list
# request.session.pop('messages', None)
# return redirect('home')
# this is the view for handling errors
# def error_handler(request):
# return render(request, 'llm_chat/404.html')
| [] |
2024-01-10 | Pythagora-io/gpt-pilot | pilot~helpers~AgentConvo.py | import json
import re
import subprocess
import uuid
from utils.style import color_yellow, color_yellow_bold
from database.database import get_saved_development_step, save_development_step, delete_all_subsequent_steps
from helpers.exceptions.TokenLimitError import TokenLimitError
from utils.function_calling import parse_agent_response, FunctionCallSet
from utils.llm_connection import create_gpt_chat_completion
from utils.utils import get_prompt, get_sys_message, capitalize_first_word_with_underscores
from logger.logger import logger
from prompts.prompts import ask_user
from const.llm import END_RESPONSE
from helpers.cli import running_processes
class AgentConvo:
"""
Represents a conversation with an agent.
Args:
agent: An instance of the agent participating in the conversation.
"""
def __init__(self, agent):
# [{'role': 'system'|'user'|'assistant', 'content': ''}, ...]
self.messages: list[dict] = []
self.branches = {}
self.log_to_user = True
self.agent = agent
self.high_level_step = self.agent.project.current_step
# add system message
system_message = get_sys_message(self.agent.role, self.agent.project.args)
logger.info('\n>>>>>>>>>> System Prompt >>>>>>>>>>\n%s\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>',
system_message['content'])
self.messages.append(system_message)
def send_message(self, prompt_path=None, prompt_data=None, function_calls: FunctionCallSet = None, should_log_message=True):
"""
Sends a message in the conversation.
Args:
prompt_path: The path to a prompt.
prompt_data: Data associated with the prompt.
function_calls: Optional function calls to be included in the message.
should_log_message: Flag if final response should be logged.
Returns:
The response from the agent.
"""
# craft message
self.construct_and_add_message_from_prompt(prompt_path, prompt_data)
# TODO: move this if block (and the other below) to Developer agent - https://github.com/Pythagora-io/gpt-pilot/issues/91#issuecomment-1751964079
# check if we already have the LLM response saved
if self.agent.__class__.__name__ == 'Developer':
self.agent.project.llm_req_num += 1
development_step = get_saved_development_step(self.agent.project)
if development_step is not None and self.agent.project.skip_steps:
# if we do, use it
print(color_yellow(f'Restoring development step with id {development_step.id}'))
self.agent.project.checkpoints['last_development_step'] = development_step
self.agent.project.restore_files(development_step.id)
response = development_step.llm_response
self.messages = development_step.messages
if self.agent.project.skip_until_dev_step and str(
development_step.id) == self.agent.project.skip_until_dev_step:
self.agent.project.finish_loading()
delete_all_subsequent_steps(self.agent.project)
if 'delete_unrelated_steps' in self.agent.project.args and self.agent.project.args[
'delete_unrelated_steps']:
self.agent.project.delete_all_steps_except_current_branch()
else:
should_log_message = True
if development_step.token_limit_exception_raised:
raise TokenLimitError(development_step.token_limit_exception_raised)
else:
# if we don't, get the response from LLM
try:
self.replace_files()
response = create_gpt_chat_completion(self.messages, self.high_level_step, self.agent.project,
function_calls=function_calls)
except TokenLimitError as e:
save_development_step(self.agent.project, prompt_path, prompt_data, self.messages, '', str(e))
raise e
# TODO: move this code to Developer agent - https://github.com/Pythagora-io/gpt-pilot/issues/91#issuecomment-1751964079
if hasattr(self.agent, 'save_dev_steps') and self.agent.save_dev_steps:
save_development_step(self.agent.project, prompt_path, prompt_data, self.messages, response)
# TODO handle errors from OpenAI
# It's complicated because calling functions are expecting different types of responses - string or tuple
# https://github.com/Pythagora-io/gpt-pilot/issues/165 & #91
if response == {}:
logger.error('Aborting with "OpenAI API error happened"')
raise Exception("OpenAI API error happened.")
response = parse_agent_response(response, function_calls)
message_content = self.format_message_content(response, function_calls)
# TODO we need to specify the response when there is a function called
# TODO maybe we can have a specific function that creates the GPT response from the function call
logger.info('\n>>>>>>>>>> Assistant Prompt >>>>>>>>>>\n%s\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>',
message_content)
self.messages.append({"role": "assistant", "content": message_content})
if should_log_message:
self.log_message(message_content)
return response
def format_message_content(self, response, function_calls):
# TODO remove this once the database is set up properly
if isinstance(response, str):
return response
else:
# string_response = []
# for key, value in response.items():
# string_response.append(f'# {key}')
#
# if isinstance(value, list):
# if 'to_message' in function_calls:
# string_response.append(function_calls['to_message'](value))
# elif len(value) > 0 and isinstance(value[0], dict):
# string_response.extend([
# f'##{i}\n' + array_of_objects_to_string(d)
# for i, d in enumerate(value)
# ])
# else:
# string_response.extend(['- ' + r for r in value])
# else:
# string_response.append(str(value))
#
# return '\n'.join(string_response)
return json.dumps(response)
# TODO END
def continuous_conversation(self, prompt_path, prompt_data, function_calls=None):
"""
Conducts a continuous conversation with the agent.
Args:
prompt_path: The path to a prompt.
prompt_data: Data associated with the prompt.
function_calls: Optional function calls to be included in the conversation.
Returns:
List of accepted messages in the conversation.
"""
self.log_to_user = False
accepted_messages = []
response = self.send_message(prompt_path, prompt_data, function_calls)
# Continue conversation until GPT response equals END_RESPONSE
while response != END_RESPONSE:
user_message = ask_user(self.agent.project,
'Do you want to add anything else? If not, just press ENTER.',
hint=response,
require_some_input=False)
if user_message == "":
accepted_messages.append(response)
logger.info('\n>>>>>>>>>> User Message >>>>>>>>>>\n%s\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>', user_message)
self.messages.append({"role": "user", "content": user_message})
response = self.send_message(None, None, function_calls)
self.log_to_user = True
return accepted_messages
def save_branch(self, branch_name=None):
if branch_name is None:
branch_name = str(uuid.uuid4())
self.branches[branch_name] = self.messages.copy()
return branch_name
def load_branch(self, branch_name, reload_files=True):
self.messages = self.branches[branch_name].copy()
if reload_files:
# TODO make this more flexible - with every message, save metadata so every time we load a branch, reconstruct all messages from scratch
self.replace_files()
def replace_files(self):
files = self.agent.project.get_all_coded_files()
for msg in self.messages:
if msg['role'] == 'user':
for file in files:
msg['content'] = self.replace_file_content(msg['content'], f"{file['path']}/{file['name']}", file['content'])
def escape_specials(self, s):
s = s.replace("\\", "\\\\")
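# Double every backslash first, then restore the whitelisted escape sequences below (e.g. \n, \t) to their single-backslash form.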
# List of sequences to preserve
sequences_to_preserve = [
# todo check if needed "\\\\", # Backslash - note: probably not eg. paths on Windows
"\\'", # Single quote
'\\"', # Double quote
# todo check if needed '\\a', # ASCII Bell (BEL)
# todo check if needed '\\b', # ASCII Backspace (BS) - note: different from regex \b
# todo check if needed '\\f', # ASCII Formfeed (FF)
'\\n', # ASCII Linefeed (LF)
# todo check if needed '\\r', # ASCII Carriage Return (CR)
'\\t', # ASCII Horizontal Tab (TAB)
# todo check if needed '\\v' # ASCII Vertical Tab (VT)
]
for seq in sequences_to_preserve:
s = s.replace('\\\\' + seq[-1], seq)
return s
def replace_file_content(self, message, file_path, new_content):
pattern = rf'\*\*{re.escape(file_path)}\*\*:\n```\n(.*?)\n```'
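# Matches the '**<file path>**:' header and its fenced code block in an earlier message so stale file contents can be replaced with the current version.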
# Escape special characters in new_content for the sake of regex replacement
new_content_escaped = self.escape_specials(new_content)
file_path_escaped = self.escape_specials(file_path)
new_section_content = f'**{file_path_escaped}**\n```\n{new_content_escaped}\n```'
updated_message, num_replacements = re.subn(pattern, new_section_content, message, flags=re.DOTALL)
if num_replacements == 0:
return message
return updated_message
def convo_length(self):
return len([msg for msg in self.messages if msg['role'] != 'system'])
def log_message(self, content):
"""
Logs a message in the conversation.
Args:
content: The content of the message to be logged.
"""
print_msg = capitalize_first_word_with_underscores(self.high_level_step)
if self.log_to_user:
if self.agent.project.checkpoints['last_development_step'] is not None:
dev_step_msg = f'\nDev step {str(self.agent.project.checkpoints["last_development_step"])}\n'
print(color_yellow_bold(dev_step_msg), end='')
logger.info(dev_step_msg)
print(f"\n{content}\n", type='local')
logger.info(f"{print_msg}: {content}\n")
def to_context_prompt(self):
logger.info(f'to_context_prompt({self.agent.project.current_step})')
# TODO: get dependencies & versions from the project (package.json, requirements.txt, pom.xml, etc.)
# Ideally, the LLM could do this, and we update it on load & whenever the file changes
# ...or LLM generates a script for `.gpt-pilot/get_dependencies` that we run
# https://github.com/Pythagora-io/gpt-pilot/issues/189
return get_prompt('development/context.prompt', {
'directory_tree': self.agent.project.get_directory_tree(),
'running_processes': running_processes,
})
def to_playground(self):
# Internal function to help debugging in OpenAI Playground, not to be used in production
with open('const/convert_to_playground_convo.js', 'r', encoding='utf-8') as file:
content = file.read()
process = subprocess.Popen('pbcopy', stdin=subprocess.PIPE)
process.communicate(content.replace('{{messages}}', str(self.messages)).encode('utf-8'))
def remove_last_x_messages(self, x):
logger.info('removing last %d messages: %s', x, self.messages[-x:])
self.messages = self.messages[:-x]
def construct_and_add_message_from_prompt(self, prompt_path, prompt_data):
if prompt_path is not None and prompt_data is not None:
prompt = get_prompt(prompt_path, prompt_data)
logger.info('\n>>>>>>>>>> User Prompt >>>>>>>>>>\n%s\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>', prompt)
self.messages.append({"role": "user", "content": prompt})
def get_additional_info_from_user(self, function_calls: FunctionCallSet = None):
"""
Asks the user if they want to make any changes to the last message in the conversation.
Args:
function_calls: Optional function calls to be included in the message.
Returns:
The response from the agent OR None if user didn't ask for change.
"""
llm_response = None
while True:
print(color_yellow(
"Please check this message and say what needs to be changed. If everything is ok just press ENTER", ))
changes = ask_user(self.agent.project, self.messages[-1]['content'], require_some_input=False)
if changes.lower() == '':
break
llm_response = self.send_message('utils/update.prompt',
{'changes': changes},
function_calls)
logger.info('Getting additional info from user done')
return llm_response
| [] |
2024-01-10 | Pythagora-io/gpt-pilot | pilot~utils~llm_connection.py | import re
import requests
import os
import sys
import time
import json
import tiktoken
from prompt_toolkit.styles import Style
from jsonschema import validate, ValidationError
from utils.style import color_red
from typing import List
from const.llm import MAX_GPT_MODEL_TOKENS
from const.messages import AFFIRMATIVE_ANSWERS
from logger.logger import logger, logging
from helpers.exceptions import TokenLimitError, ApiKeyNotDefinedError
from utils.utils import fix_json, get_prompt
from utils.function_calling import add_function_calls_to_request, FunctionCallSet, FunctionType
from utils.questionary import styled_text
from .telemetry import telemetry
def get_tokens_in_messages(messages: List[str]) -> int:
tokenizer = tiktoken.get_encoding("cl100k_base") # GPT-4 tokenizer
tokenized_messages = [tokenizer.encode(message['content']) for message in messages]
return sum(len(tokens) for tokens in tokenized_messages)
def num_tokens_from_functions(functions):
"""Return the number of tokens used by a list of functions."""
encoding = tiktoken.get_encoding("cl100k_base")
num_tokens = 0
for function in functions:
function_tokens = len(encoding.encode(function['name']))
function_tokens += len(encoding.encode(function['description']))
if 'parameters' in function:
parameters = function['parameters']
if 'properties' in parameters:
for propertiesKey in parameters['properties']:
function_tokens += len(encoding.encode(propertiesKey))
v = parameters['properties'][propertiesKey]
for field in v:
if field == 'type':
function_tokens += 2
function_tokens += len(encoding.encode(v['type']))
elif field == 'description':
function_tokens += 2
function_tokens += len(encoding.encode(v['description']))
elif field == 'enum':
function_tokens -= 3
for o in v['enum']:
function_tokens += 3
function_tokens += len(encoding.encode(o))
function_tokens += 11
num_tokens += function_tokens
num_tokens += 12
return num_tokens
def create_gpt_chat_completion(messages: List[dict], req_type, project,
function_calls: FunctionCallSet = None):
"""
Called from:
- AgentConvo.send_message() - these calls often have `function_calls`, usually from `pilot/const/function_calls.py`
- convo.continuous_conversation()
- prompts.get_additional_info_from_openai()
- prompts.get_additional_info_from_user() after the user responds to each
"Please check this message and say what needs to be changed... {message}"
:param messages: [{ "role": "system"|"assistant"|"user", "content": string }, ... ]
:param req_type: 'project_description' etc. See common.STEPS
:param project: project
:param function_calls: (optional) {'definitions': [{ 'name': str }, ...]}
see `IMPLEMENT_CHANGES` etc. in `pilot/const/function_calls.py`
:return: {'text': new_code}
or if `function_calls` param provided
{'function_calls': {'name': str, arguments: {...}}}
"""
gpt_data = {
'model': os.getenv('MODEL_NAME', 'gpt-4'),
'n': 1,
'temperature': 1,
'top_p': 1,
'presence_penalty': 0,
'frequency_penalty': 0,
'messages': messages,
'stream': True
}
# delete some keys if using "OpenRouter" API
if os.getenv('ENDPOINT') == 'OPENROUTER':
keys_to_delete = ['n', 'max_tokens', 'temperature', 'top_p', 'presence_penalty', 'frequency_penalty']
for key in keys_to_delete:
if key in gpt_data:
del gpt_data[key]
# Advise the LLM of the JSON response schema we are expecting
messages_length = len(messages)
add_function_calls_to_request(gpt_data, function_calls)
try:
response = stream_gpt_completion(gpt_data, req_type, project)
# Remove JSON schema and any added retry messages
while len(messages) > messages_length:
messages.pop()
return response
except TokenLimitError as e:
raise e
except Exception as e:
logger.error(f'The request to {os.getenv("ENDPOINT")} API failed: %s', e)
print(f'The request to {os.getenv("ENDPOINT")} API failed. Here is the error message:')
print(e)
return {} # https://github.com/Pythagora-io/gpt-pilot/issues/130 - may need to revisit how we handle this
def delete_last_n_lines(n):
for _ in range(n):
# Move the cursor up one line
sys.stdout.write('\033[F')
# Clear the current line
sys.stdout.write('\033[K')
def count_lines_based_on_width(content, width):
lines_required = sum(len(line) // width + 1 for line in content.split('\n'))
return lines_required
def get_tokens_in_messages_from_openai_error(error_message):
"""
Extract the token count from a message.
Args:
error_message (str): The error message to extract the token count from.
Returns:
int or None: The token count if found, otherwise None.
"""
match = re.search(r"your messages resulted in (\d+) tokens", error_message)
if match:
return int(match.group(1))
else:
return None
def retry_on_exception(func):
def update_error_count(args):
function_error_count = 1 if 'function_error' not in args[0] else args[0]['function_error_count'] + 1
args[0]['function_error_count'] = function_error_count
return function_error_count
def set_function_error(args, err_str: str):
logger.info(err_str)
args[0]['function_error'] = err_str
if 'function_buffer' in args[0]:
del args[0]['function_buffer']
def wrapper(*args, **kwargs):
wait_duration_ms = None
while True:
try:
# spinner_stop(spinner)
return func(*args, **kwargs)
except Exception as e:
# Convert exception to string
err_str = str(e)
if isinstance(e, json.JSONDecodeError):
# codellama-34b-instruct seems to send incomplete JSON responses.
# We ask for the rest of the JSON object for the following errors:
# - 'Expecting value' (error if `e.pos` not at the end of the doc: True instead of true)
# - "Expecting ':' delimiter"
# - 'Expecting property name enclosed in double quotes'
# - 'Unterminated string starting at'
if e.msg.startswith('Expecting') or e.msg == 'Unterminated string starting at':
if e.msg == 'Expecting value' and len(e.doc) > e.pos:
# Note: clean_json_response() should heal True/False boolean values
err_str = re.split(r'[},\\n]', e.doc[e.pos:])[0]
err_str = f'Invalid value: `{err_str}`'
else:
# if e.msg == 'Unterminated string starting at' or len(e.doc) == e.pos:
logger.info('Received incomplete JSON response from LLM. Asking for the rest...')
args[0]['function_buffer'] = e.doc
if 'function_error' in args[0]:
del args[0]['function_error']
continue
# TODO: (if it ever comes up) e.msg == 'Extra data' -> trim the response
# 'Invalid control character at', 'Invalid \\escape', 'Invalid control character',
# or `Expecting value` with `pos` before the end of `e.doc`
function_error_count = update_error_count(args)
logger.warning('Received invalid character in JSON response from LLM. Asking to retry...')
logger.info(f' received: {e.doc}')
set_function_error(args, err_str)
if function_error_count < 3:
continue
elif isinstance(e, ValidationError):
function_error_count = update_error_count(args)
logger.warning('Received invalid JSON response from LLM. Asking to retry...')
# eg:
# json_path: '$.type'
# message: "'command' is not one of ['automated_test', 'command_test', 'manual_test', 'no_test']"
set_function_error(args, f'at {e.json_path} - {e.message}')
# Attempt retry if the JSON schema is invalid, but avoid getting stuck in a loop
if function_error_count < 3:
continue
if "context_length_exceeded" in err_str:
# If the specific error "context_length_exceeded" is present, simply return without retry
# spinner_stop(spinner)
raise TokenLimitError(get_tokens_in_messages_from_openai_error(err_str), MAX_GPT_MODEL_TOKENS)
if "rate_limit_exceeded" in err_str:
# Extracting the duration from the error string
match = re.search(r"Please try again in (\d+)ms.", err_str)
if match:
# spinner = spinner_start(colored("Rate limited. Waiting...", 'yellow'))
if wait_duration_ms is None:
wait_duration_ms = int(match.group(1))
elif wait_duration_ms < 6000:
# waiting 6ms isn't usually long enough - exponential back-off until about 6 seconds
wait_duration_ms *= 2
logger.debug(f'Rate limited. Waiting {wait_duration_ms}ms...')
time.sleep(wait_duration_ms / 1000)
continue
print(color_red('There was a problem with request to openai API:'))
# spinner_stop(spinner)
print(err_str)
logger.error(f'There was a problem with request to openai API: {err_str}')
project = args[2]
print('yes/no', type='buttons-only')
user_message = styled_text(
project,
'Do you want to try make the same request again? If yes, just press ENTER. Otherwise, type "no".',
style=Style.from_dict({
'question': '#FF0000 bold',
'answer': '#FF910A bold'
})
)
# TODO: take user's input into consideration - send to LLM?
# https://github.com/Pythagora-io/gpt-pilot/issues/122
if user_message.lower() not in AFFIRMATIVE_ANSWERS:
return {}
return wrapper
@retry_on_exception
def stream_gpt_completion(data, req_type, project):
"""
Called from create_gpt_chat_completion()
:param data:
:param req_type: 'project_description' etc. See common.STEPS
:param project: NEEDED FOR WRAPPER FUNCTION retry_on_exception
:return: {'text': str} or {'function_calls': {'name': str, arguments: '{...}'}}
"""
# TODO add type dynamically - this isn't working when connected to the external process
try:
terminal_width = os.get_terminal_size().columns
except OSError:
terminal_width = 50
lines_printed = 2
gpt_response = ''
buffer = '' # A buffer to accumulate incoming data
expecting_json = None
received_json = False
if 'functions' in data:
expecting_json = data['functions']
if 'function_buffer' in data:
incomplete_json = get_prompt('utils/incomplete_json.prompt', {'received_json': data['function_buffer']})
data['messages'].append({'role': 'user', 'content': incomplete_json})
gpt_response = data['function_buffer']
received_json = True
elif 'function_error' in data:
invalid_json = get_prompt('utils/invalid_json.prompt', {'invalid_reason': data['function_error']})
data['messages'].append({'role': 'user', 'content': invalid_json})
received_json = True
# Don't send the `functions` parameter to Open AI, but don't remove it from `data` in case we need to retry
data = {key: value for key, value in data.items() if not key.startswith('function')}
def return_result(result_data, lines_printed):
if buffer:
lines_printed += count_lines_based_on_width(buffer, terminal_width)
logger.debug(f'lines printed: {lines_printed} - {terminal_width}')
# delete_last_n_lines(lines_printed) # TODO fix and test count_lines_based_on_width()
return result_data
# spinner = spinner_start(yellow("Waiting for OpenAI API response..."))
# print(yellow("Stream response from OpenAI:"))
# Configure for the selected ENDPOINT
model = os.getenv('MODEL_NAME', 'gpt-4')
endpoint = os.getenv('ENDPOINT')
# This will be set many times but we don't care, as there are no side-effects to it.
telemetry.set("model", model)
telemetry.inc("num_llm_requests")
logger.info(f'> Request model: {model}')
if logger.isEnabledFor(logging.DEBUG):
logger.debug('\n'.join([f"{message['role']}: {message['content']}" for message in data['messages']]))
if endpoint == 'AZURE':
# If yes, get the AZURE_ENDPOINT from .ENV file
endpoint_url = os.getenv('AZURE_ENDPOINT') + '/openai/deployments/' + model + '/chat/completions?api-version=2023-05-15'
headers = {
'Content-Type': 'application/json',
'api-key': get_api_key_or_throw('AZURE_API_KEY')
}
elif endpoint == 'OPENROUTER':
# If so, send the request to the OpenRouter API endpoint
endpoint_url = os.getenv('OPENROUTER_ENDPOINT', 'https://openrouter.ai/api/v1/chat/completions')
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer ' + get_api_key_or_throw('OPENROUTER_API_KEY'),
'HTTP-Referer': 'https://github.com/Pythagora-io/gpt-pilot',
'X-Title': 'GPT Pilot'
}
data['max_tokens'] = MAX_GPT_MODEL_TOKENS
data['model'] = model
else:
# If not, send the request to the OpenAI endpoint
endpoint_url = os.getenv('OPENAI_ENDPOINT', 'https://api.openai.com/v1/chat/completions')
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer ' + get_api_key_or_throw('OPENAI_API_KEY')
}
data['model'] = model
response = requests.post(
endpoint_url,
headers=headers,
json=data,
stream=True
)
if response.status_code != 200:
project.dot_pilot_gpt.log_chat_completion(endpoint, model, req_type, data['messages'], response.text)
logger.info(f'problem with request (status {response.status_code}): {response.text}')
raise Exception(f"API responded with status code: {response.status_code}. Response text: {response.text}")
# function_calls = {'name': '', 'arguments': ''}
for line in response.iter_lines():
# Ignore keep-alive new lines
if line and line != b': OPENROUTER PROCESSING':
line = line.decode("utf-8") # decode the bytes to string
if line.startswith('data: '):
line = line[6:] # remove the 'data: ' prefix
# Check if the line is "[DONE]" before trying to parse it as JSON
if line == "[DONE]":
continue
try:
json_line = json.loads(line)
if len(json_line['choices']) == 0:
continue
if 'error' in json_line:
logger.error(f'Error in LLM response: {json_line}')
raise ValueError(f'Error in LLM response: {json_line["error"]["message"]}')
choice = json_line['choices'][0]
# if 'finish_reason' in choice and choice['finish_reason'] == 'function_call':
# function_calls['arguments'] = load_data_to_json(function_calls['arguments'])
# return return_result({'function_calls': function_calls}, lines_printed)
json_line = choice['delta']
except json.JSONDecodeError as e:
logger.error(f'Unable to decode line: {line} {e.msg}')
continue # skip to the next line
# handle the streaming response
# if 'function_call' in json_line:
# if 'name' in json_line['function_call']:
# function_calls['name'] = json_line['function_call']['name']
# print(f'Function call: {function_calls["name"]}')
#
# if 'arguments' in json_line['function_call']:
# function_calls['arguments'] += json_line['function_call']['arguments']
# print(json_line['function_call']['arguments'], type='stream', end='', flush=True)
if 'content' in json_line:
content = json_line.get('content')
if content:
buffer += content # accumulate the data
# If you detect a natural breakpoint (e.g., line break or end of a response object), print & count:
if buffer.endswith('\n'):
if expecting_json and not received_json:
received_json = assert_json_response(buffer, lines_printed > 2)
# or some other condition that denotes a breakpoint
lines_printed += count_lines_based_on_width(buffer, terminal_width)
buffer = "" # reset the buffer
gpt_response += content
print(content, type='stream', end='', flush=True)
print('\n', type='stream')
# if function_calls['arguments'] != '':
# logger.info(f'Response via function call: {function_calls["arguments"]}')
# function_calls['arguments'] = load_data_to_json(function_calls['arguments'])
# return return_result({'function_calls': function_calls}, lines_printed)
logger.info('<<<<<<<<<< LLM Response <<<<<<<<<<\n%s\n<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<', gpt_response)
project.dot_pilot_gpt.log_chat_completion(endpoint, model, req_type, data['messages'], gpt_response)
if expecting_json:
gpt_response = clean_json_response(gpt_response)
assert_json_schema(gpt_response, expecting_json)
# Note: we log JSON separately from the YAML log above in case the JSON is invalid and an error is raised
project.dot_pilot_gpt.log_chat_completion_json(endpoint, model, req_type, expecting_json, gpt_response)
new_code = postprocessing(gpt_response, req_type) # TODO add type dynamically
return return_result({'text': new_code}, lines_printed)
def get_api_key_or_throw(env_key: str):
api_key = os.getenv(env_key)
if api_key is None:
raise ApiKeyNotDefinedError(env_key)
return api_key
def assert_json_response(response: str, or_fail=True) -> bool:
if re.match(r'.*(```(json)?|{|\[)', response):
return True
elif or_fail:
logger.error(f'LLM did not respond with JSON: {response}')
raise ValueError('LLM did not respond with JSON')
else:
return False
def clean_json_response(response: str) -> str:
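# Strips a leading ```json fence, converts Python-style True/False to valid JSON booleans, and trims surrounding backticks and whitespace.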
response = re.sub(r'^.*```json\s*', '', response, flags=re.DOTALL)
response = re.sub(r': ?True(,)?$', r':true\1', response, flags=re.MULTILINE)
response = re.sub(r': ?False(,)?$', r':false\1', response, flags=re.MULTILINE)
return response.strip('` \n')
def assert_json_schema(response: str, functions: list[FunctionType]) -> True:
for function in functions:
schema = function['parameters']
parsed = json.loads(response)
validate(parsed, schema)
return True
def postprocessing(gpt_response: str, req_type) -> str:
return gpt_response
def load_data_to_json(string):
return json.loads(fix_json(string))
| [] |
2024-01-10 | Ali-Razmjoo/gpt-scripts | summarize-book.py | import time
from openai import AzureOpenAI
# Set your OpenAI API key
api_key = '264baae4c11546cea94468cf39fe3e76'
client = AzureOpenAI(
api_version="2023-07-01-preview",
azure_endpoint="https://oai-int-azg-we-001.openai.azure.com/",
azure_deployment="dep-gtp-4-32k",
api_key=api_key
)
data = open('cissp.txt').read().split('========================')
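# cissp.txt is expected to contain sections separated by '========================' markers; each section is summarized separately.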
def generate_article(prompt):
response = client.chat.completions.create(
model="gpt-4-32k",
messages=[
{"role": "system", "content": "Turn this content to bullet point; ensure you are summarizing and not"
" missing important information since it will be used to study for CISSP"
" exam as a preparation. Sound friendly, in basic words, SEO friendly and "
"more like human all the time."},
{"role": "user", "content": prompt}
],
max_tokens=16 * 1024
)
return response.choices[0].message.content
def main():
n = 1
for prompt in data:
while True:
try:
print(f"trying {n}")
f = open("cissp-bulletpoints.md", 'a')
answer = generate_article(prompt)
f.write(answer + '\n========================\n')
f.close()
n += 1
break
except Exception as e:
print(f"error while requesting {n}, error: {e}")
time.sleep(1)
print("done")
if __name__ == "__main__":
main()
| [
"Turn this content to bullet point; ensure you are summarizing and not missing important information since it will be used to study for CISSP exam as a preparation. Sound friendly, in basic words, SEO friendly and more like human all the time."
] |
2024-01-10 | jeremiedecock/snippets | python~azure_openai_service~getting_started_chat.py | #!/usr/bin/env python3
# See: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?pivots=programming-language-python&tabs=command-line
import openai
from credentials import AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_DEPLOYMENT_NAME
openai.api_key = AZURE_OPENAI_API_KEY
openai.api_base = AZURE_OPENAI_ENDPOINT # Your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
openai.api_type = 'azure'
openai.api_version = '2023-05-15' # This may change in the future
# Send a completion call to generate an answer
print('Sending a test completion job')
start_phrase = 'Write a tagline for an ice cream shop. '
response = openai.Completion.create(engine=AZURE_OPENAI_DEPLOYMENT_NAME, prompt=start_phrase, max_tokens=10)
text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
print(start_phrase + text) | [
"Write a tagline for an ice cream shop. "
] |
2024-01-10 | CrazyBonze/PaperBoy | paperboy~summerizer.py | import asyncio
from textwrap import shorten
import openai
import pysrt
from settings import settings
from sumy.nlp.stemmers import Stemmer
from sumy.nlp.tokenizers import Tokenizer
from sumy.parsers.plaintext import PlaintextParser
from sumy.summarizers.lsa import LsaSummarizer as Summarizer
from sumy.utils import get_stop_words
LANGUAGE = "english"
SENTENCES_COUNT = 5
async def new_summarize(text):
openai.api_key = settings.openai_api_key
messages = [
{"role": "system", "content": settings.system_message},
{"role": "user", "content": text},
]
loop = asyncio.get_event_loop()
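# Run the blocking OpenAI call in the default thread-pool executor so the event loop stays responsive.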
response = await loop.run_in_executor(
None,
lambda: openai.ChatCompletion.create(
model=settings.openai_model, messages=messages
),
)
ai_message = response["choices"][0]["message"]["content"]
return ai_message
def _summarize(text, sentences=SENTENCES_COUNT, language=LANGUAGE):
parser = PlaintextParser.from_string(text, Tokenizer(language))
stemmer = Stemmer(LANGUAGE)
summarizer = Summarizer(stemmer)
summarizer.stop_words = get_stop_words(LANGUAGE)
return shorten(
" ".join([str(s) for s in summarizer(parser.document, SENTENCES_COUNT)]),
width=1800,
placeholder="...",
)
async def summarize(text, sentences=SENTENCES_COUNT, language=LANGUAGE):
loop = asyncio.get_running_loop()
done = loop.run_in_executor(None, _summarize, text, sentences, language)
return await done
| [] |
2024-01-10 | CrazyBonze/PaperBoy | paperboy~ml_text.py | from langchain.text_splitter import RecursiveCharacterTextSplitter
def vectorize_article(article_path):
with open(article_path) as f:
text = f.read()
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size=1000,
chunk_overlap=0,
length_function=len,
is_separator_regex=False,
)
texts = text_splitter.create_documents([text])
print(texts[0])
print(texts[1])
return texts
| [] |
2024-01-10 | hootigger/nas-tools | app~plugins~modules~_autosignin~chdbits.py | import json
import os
import random
import re
from lxml import etree
from app.helper.openai_helper import OpenAiHelper
from app.plugins.modules._autosignin._base import _ISiteSigninHandler
from app.utils import StringUtils, RequestUtils
from config import Config
class CHDBits(_ISiteSigninHandler):
"""
CHDBits (Rainbow Island) sign-in
If an OpenAI key is configured, ChatGPT is asked for the answer;
otherwise a random answer is submitted
"""
# Site URL to match; every implementation class must set this to its own site URL
site_url = "chdbits.co"
# Already signed in today
_sign_regex = ['今天已经签过到了']
# Sign-in succeeded (patterns to be extended)
_success_regex = ['\\d+点魔力值']
# Store correct answers so they can be looked up directly later
_answer_path = os.path.join(Config().get_temp_path(), "signin")
_answer_file = _answer_path + "/chdbits.json"
@classmethod
def match(cls, url):
"""
Decide from the site URL whether this sign-in class matches the current site; the default implementation is enough in most cases
:param url: site URL
:return: whether it matches; if it does, this class's signin method will be called
"""
return True if StringUtils.url_equal(url, cls.site_url) else False
def signin(self, site_info: dict):
"""
Perform the sign-in
:param site_info: site information, containing the site URL, cookie, UA, etc.
:return: sign-in result message
"""
site = site_info.get("name")
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
proxy = Config().get_proxies() if site_info.get("proxy") else None
# Create the directory for storing correct answers
if not os.path.exists(os.path.dirname(self._answer_file)):
os.makedirs(os.path.dirname(self._answer_file))
# Check whether we have already signed in today
index_res = RequestUtils(cookies=site_cookie,
headers=ua,
proxies=proxy
).get_res(url='https://chdbits.co/bakatest.php')
if not index_res or index_res.status_code != 200:
self.error(f"签到失败,请检查站点连通性")
return False, f'【{site}】签到失败,请检查站点连通性'
if "login.php" in index_res.text:
self.error(f"签到失败,cookie失效")
return False, f'【{site}】签到失败,cookie失效'
sign_status = self.sign_in_result(html_res=index_res.text,
regexs=self._sign_regex)
if sign_status:
self.info(f"今日已签到")
return True, f'【{site}】今日已签到'
# Not signed in yet, so parse the HTML
html = etree.HTML(index_res.text)
if not html:
return False, f'【{site}】签到失败'
# Get the question and answer options from the page
questionid = html.xpath("//input[@name='questionid']/@value")[0]
option_ids = html.xpath("//input[@name='choice[]']/@value")
option_values = html.xpath("//input[@name='choice[]']/following-sibling::text()")
question_str = html.xpath("//td[@class='text' and contains(text(),'请问:')]/text()")[0]
answers = list(zip(option_ids, option_values))
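# Pair each option id with its label text, e.g. [('1', 'Option A'), ('2', 'Option B'), ...]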
# Extract the question text with a regex
match = re.search(r'请问:(.+)', question_str)
if match:
question_str = match.group(1)
self.debug(f"获取到签到问题 {question_str}")
else:
self.error(f"未获取到签到问题")
return False, f"【{site}】签到失败,未获取到签到问题"
# Look up previously saved answers
exits_answers = {}
try:
with open(self._answer_file, 'r') as f:
json_str = f.read()
exits_answers = json.loads(json_str)
# Look up the locally stored answer for this question
question_answer = exits_answers[question_str]
# question_answer is a list
if not isinstance(question_answer, list):
question_answer = [question_answer]
# If a correct answer for this question exists locally, walk the options to find it
choice = []
for q in question_answer:
for num, answer in answers:
if str(q) == str(num):
choice.append(int(q))
if len(choice) > 0:
# Sign in
return self.__signin(questionid=questionid,
choice=choice,
site_cookie=site_cookie,
ua=ua,
proxy=proxy,
site=site)
except (FileNotFoundError, IOError, OSError) as e:
self.debug("查询本地已知答案失败,继续请求豆瓣查询")
# Correct answer: random by default; if GPT returns one, submit the answer GPT gives
choice = [option_ids[random.randint(0, len(option_ids) - 1)]]
# Build the question for ChatGPT
gpt_options = "{\n" + ",\n".join([f"{num}:{value}" for num, value in answers]) + "\n}"
gpt_question = f"题目:{question_str}\n" \
f"选项:{gpt_options}"
self.debug(f"组装chatgpt问题 {gpt_question}")
# Get the answer from ChatGPT
answer = OpenAiHelper().get_question_answer(question=gpt_question)
self.debug(f"chatpgt返回结果 {answer}")
# Process the answer returned by ChatGPT
if answer is None:
self.warn(f"ChatGPT未启用, 开始随机签到")
# return f"【{site}】签到失败,ChatGPT未启用"
elif answer:
# Extract the numbers from the reply with a regex
answer_nums = list(map(int, re.findall("\d+", answer)))
if not answer_nums:
self.warn(f"无法从chatgpt回复 {answer} 中获取答案, 将采用随机签到")
else:
choice = []
for answer in answer_nums:
# If the returned number is among option_ids, use it directly as the answer
if str(answer) in option_ids:
choice.append(int(answer))
self.info(f"chatgpt返回答案id {answer} 在签到选项 {option_ids} 中")
# Sign in
return self.__signin(questionid=questionid,
choice=choice,
site_cookie=site_cookie,
ua=ua,
proxy=proxy,
site=site,
exits_answers=exits_answers,
question=question_str)
def __signin(self, questionid, choice, site, site_cookie, ua, proxy, exits_answers=None, question=None):
"""
Sign-in request, e.g.:
questionid: 450
choice[]: 8
choice[]: 4
usercomment: current mood: none
submit: submit
A multiple-choice question sends several choice[] fields....
"""
data = {
'questionid': questionid,
'choice[]': choice[0] if len(choice) == 1 else choice,
'usercomment': '太难了!',
'wantskip': '不会'
}
self.debug(f"签到请求参数 {data}")
sign_res = RequestUtils(cookies=site_cookie,
headers=ua,
proxies=proxy
).post_res(url='https://chdbits.co/bakatest.php', data=data)
if not sign_res or sign_res.status_code != 200:
self.error(f"签到失败,签到接口请求失败")
return False, f'【{site}】签到失败,签到接口请求失败'
# Check whether the sign-in succeeded
sign_status = self.sign_in_result(html_res=sign_res.text,
regexs=self._success_regex)
if sign_status:
self.info(f"签到成功")
if exits_answers and question:
# On success, write the answer to the local file
self.__write_local_answer(exits_answers=exits_answers or {},
question=question,
answer=choice)
return True, f'【{site}】签到成功'
else:
sign_status = self.sign_in_result(html_res=sign_res.text,
regexs=self._sign_regex)
if sign_status:
self.info(f"今日已签到")
return True, f'【{site}】今日已签到'
self.error(f"签到失败,请到页面查看")
return False, f'【{site}】签到失败,请到页面查看'
def __write_local_answer(self, exits_answers, question, answer):
"""
Write the correct answer to the local file after a successful sign-in
"""
try:
exits_answers[question] = answer
# Serialize the data
formatted_data = json.dumps(exits_answers, indent=4)
with open(self._answer_file, 'w') as f:
f.write(formatted_data)
except (FileNotFoundError, IOError, OSError) as e:
self.debug("签到成功写入本地文件失败")
| [] |
2024-01-10 | CCCBora/auto-draft | utils~gpt_interaction.py | import os
import time
import openai
import logging
import requests
import json
log = logging.getLogger(__name__)
class GPTModel:
def __init__(self, model="gpt-3.5-turbo", temperature=0.9, presence_penalty=0,
frequency_penalty=0, max_attempts=1, delay=20):
self.model = model
self.temperature = temperature
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.max_attempts = max_attempts
self.delay = delay
def __call__(self, systems, prompts, return_json=False):
conversation_history = [
{"role": "system", "content": systems},
{"role": "user", "content": prompts}
]
for _ in range(self.max_attempts):
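# Retry up to max_attempts times; an APIConnectionError triggers a delay-second wait before the next attempt.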
try:
response = openai.ChatCompletion.create(
model=self.model,
messages=conversation_history,
n=1,
temperature=self.temperature,
presence_penalty=self.presence_penalty,
frequency_penalty=self.frequency_penalty,
stream=False
)
assistant_message = response['choices'][0]["message"]["content"]
usage = response['usage']
log.info(assistant_message)
if return_json:
assistant_message = json.loads(assistant_message)
return assistant_message, usage
except openai.error.APIConnectionError as e:
print(f"Failed to get response. Error: {e}")
time.sleep(self.delay)
raise RuntimeError("Failed to get response from OpenAI.")
class GPTModel_API2D_SUPPORT:
def __init__(self, model="gpt-4", temperature=0, presence_penalty=0,
frequency_penalty=0, url=None, key=None, max_attempts=1, delay=20):
if url is None:
url = "https://api.openai.com/v1/chat/completions"
if key is None:
key = os.getenv("OPENAI_API_KEY")
self.model = model
self.temperature = temperature
self.url = url
self.key = key
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.max_attempts = max_attempts
self.delay = delay
def __call__(self, systems, prompts, return_json=False):
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.key}",
'Content-type': 'text/plain; charset=utf-8'
}
data = {
"model": f"{self.model}",
"messages": [
{"role": "system", "content": systems},
{"role": "user", "content": prompts}],
"temperature": self.temperature,
"n": 1,
"stream": False,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty
}
for _ in range(self.max_attempts):
try:
# todo: in some cases, UnicodeEncodeError is raised:
# 'gbk' codec can't encode character '\xdf' in position 1898: illegal multibyte sequence
response = requests.post(self.url, headers=headers, data=json.dumps(data))
response = response.json()
assistant_message = response['choices'][0]["message"]["content"]
usage = response['usage']
log.info(assistant_message)
if return_json:
assistant_message = json.loads(assistant_message)
return assistant_message, usage
except Exception as e:
print(f"Failed to get response. Error: {e}")
time.sleep(self.delay)
raise RuntimeError("Failed to get response from OpenAI.")
if __name__ == "__main__":
bot = GPTModel(model="gpt-3.5-turbo-16k")
r = bot("You are an assistant.", "Hello.")
print(r) | [] |
2024-01-10 | CCCBora/auto-draft | auto_generators.py | import json
import os.path
import logging
import time
from utils import References, Knowledge
from utils.file_operations import copy_templates
from utils.tex_processing import create_copies
from prompts.draft import generate_paper_prompts
from prompts import SYSTEM, SECTION_GENERATION_SYSTEM
from langchain.vectorstores import FAISS
from utils.gpt_interaction import GPTModel
from models import EMBEDDINGS
TOTAL_TOKENS = 0
TOTAL_PROMPTS_TOKENS = 0
TOTAL_COMPLETION_TOKENS = 0
def log_usage(usage, generating_target, print_out=True):
global TOTAL_TOKENS
global TOTAL_PROMPTS_TOKENS
global TOTAL_COMPLETION_TOKENS
prompts_tokens = usage['prompt_tokens']
completion_tokens = usage['completion_tokens']
total_tokens = usage['total_tokens']
TOTAL_TOKENS += total_tokens
TOTAL_PROMPTS_TOKENS += prompts_tokens
TOTAL_COMPLETION_TOKENS += completion_tokens
message = f">>USAGE>> For generating {generating_target}, {total_tokens} tokens have been used " \
f"({prompts_tokens} for prompts; {completion_tokens} for completion). " \
f"{TOTAL_TOKENS} tokens have been used in total."
if print_out:
print(message)
logging.info(message)
def _generation_setup(title, description="", template="ICLR2022",
tldr=False, max_kw_refs=10, refs=None, max_tokens_ref=2048, # generating references
knowledge_database=None, max_tokens_kd=2048, query_counts=10, # querying from knowledge database
debug=True):
"""
This function handles the setup process for paper generation. It mainly does the following:
1. Copies the provided template to the outputs folder and creates the log file `generation.log`.
2. Generates a set of contributions based on the given `title` (if `description` is not given).
3. Collects references using the generated keywords.
4. Generates domain knowledge (or related concepts).
5. Generates necessary media based on the title and contributions.
6. Returns a paper object containing the collected information,
the destination folder path, and a list of all collected paper IDs.
Parameters:
title (str): The title of the paper.
description (str, optional): Contributions for the paper. Defaults to an empty string.
template (str, optional): The template to be used for paper generation. Defaults to "ICLR2022".
tldr (bool, optional): A flag indicating whether a TL;DR (Too Long; Didn't Read) summary should be used
for the collected papers. Defaults to False.
max_kw_refs (int, optional): The maximum number of references that can be associated with each keyword.
Defaults to 10.
refs (None, optional): If provided, load the existing papers from this reference. Defaults to None.
max_tokens_ref (int, optional): The maximum number of tokens for the references. Defaults to 2048.
knowledge_database (None, str, optional): The name of the knowledge database to be queried. Defaults to None.
max_tokens_kd (int, optional): The maximum number of tokens for the domain knowledge. Defaults to 2048.
query_counts (int, optional): The number of queries to perform against the knowledge database. Defaults to 10.
debug (bool, optional): A flag that if set to True, will raise exceptions,
otherwise, it will print the error message and continue. Defaults to True.
Returns:
tuple: A tuple containing the following elements:
- paper (dict): A dictionary containing the generated paper information.
- destination_folder (str): The path to the destination folder where the generation log is saved.
- all_paper_ids (list): A list of all paper IDs collected for the references.
"""
llm = GPTModel(model="gpt-3.5-turbo")
# Create a copy in the outputs folder.
bibtex_path, destination_folder = copy_templates(template, title)
logging.basicConfig(level=logging.INFO, filename=os.path.join(destination_folder, "generation.log"))
###################################################################################################################
# Generate contributions
###################################################################################################################
if description:
contributions = description
else:
try:
contributions, usage = llm(systems=SYSTEM["contributions"], prompts=title, return_json=True)
contributions = [f"Contribution {idx}: {contributions[contribution]['statement']}\n" \
f"Novelty of Contribution {idx}: {contributions[contribution]['reason']}\n"
for idx, contribution in enumerate(contributions)]
contributions = "".join(contributions)
log_usage(usage, "contributions")
except RuntimeError:
if debug:
raise RuntimeError("Failed to generate contributions.")
else:
print("Failed to generate contributions. Use empty contributions.")
contributions = ""
print("Contributions:\n{}".format(contributions))
###################################################################################################################
# Generate references
###################################################################################################################
try:
keywords, usage = llm(systems=SYSTEM["keywords"], prompts=title, return_json=True)
log_usage(usage, "keywords")
keywords = {keyword: max_kw_refs for keyword in keywords}
except RuntimeError:
if debug:
raise RuntimeError("Failed to generate keywords.")
else:
print("Failed to generate keywords. Use default keywords.")
keywords = {"machine learning": max_kw_refs, "artificial intelligence": max_kw_refs} # DEFAULT KEYWORDS
print("Keywords: \n", keywords)
# todo: in some rare situations, collected papers will be an empty list. handle this issue
ref = References(title, load_papers=refs)
ref.collect_papers(keywords, tldr=tldr)
references = ref.to_prompts(max_tokens=max_tokens_ref)
all_paper_ids = ref.to_bibtex(bibtex_path)
###################################################################################################################
# Generate domain knowledge
###################################################################################################################
prompts = f"Title: {title}\n Contributions: {contributions}"
preliminaries_kw, _ = llm(systems=SYSTEM["preliminaries"], prompts=prompts)
# check if the database exists or not
db_path = f"knowledge_databases/{knowledge_database}"
db_config_path = os.path.join(db_path, "db_meta.json")
db_index_path = os.path.join(db_path, "faiss_index")
if os.path.isdir(db_path):
try:
# load configuration file
with open(db_config_path, "r", encoding="utf-8") as f:
db_config = json.load(f)
model_name = db_config["embedding_model"]
embeddings = EMBEDDINGS[model_name]
db = FAISS.load_local(db_index_path, embeddings)
knowledge = Knowledge(db=db)
knowledge.collect_knowledge(preliminaries_kw, max_query=query_counts)
domain_knowledge = knowledge.to_prompts(max_tokens_kd)
except Exception as e:
if debug:
raise RuntimeError(f"Failed to query from FAISS. Error {e}.")
else:
print(f"Failed to query from FAISS. Error {e}. Use empty domain knowledge instead.")
domain_knowledge = ""
else:
print("Selected database doesn't exist or no database is selected.")
domain_knowledge = ""
###################################################################################################################
# Generate necessary media
###################################################################################################################
prompts = f"Title: {title}\n Contributions: {contributions}"
try:
components, usage = llm(systems=SYSTEM["components"], prompts=prompts, return_json=True)
log_usage(usage, "media")
except RuntimeError:
if debug:
raise RuntimeError("Failed to generate media.")
else:
print("Failed to generate media. Use default media.")
components = {}
print(f"The paper information has been initialized. References are saved to {bibtex_path}.")
paper = {}
paper_body = {}
paper["title"] = title
paper["description"] = contributions
paper["references"] = references
paper["body"] = paper_body
paper["bibtex"] = bibtex_path
paper["domain_knowledge"] = domain_knowledge
paper["components"] = components
# print(json.dumps(paper, indent=4))
return paper, destination_folder, all_paper_ids
# todo: use `all_paper_ids` to check if all citations are in this list
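# Illustrative sketch only (not called anywhere): how _generation_setup's return values are
# typically consumed. The title below is hypothetical.
#
#   paper, folder, paper_ids = _generation_setup(
#       title="A Hypothetical Paper Title", template="ICLR2022", tldr=True, max_kw_refs=10)
#   # paper["references"] and paper["domain_knowledge"] feed the section prompts;
#   # paper_ids could be used to verify that every \cite{} key was actually collected.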
def generate_draft(title, description="", # main input
tldr=True, max_kw_refs=10, refs=None, max_tokens_ref=2048, # references
knowledge_database=None, max_tokens_kd=2048, query_counts=10, # domain knowledge
sections=None, model="gpt-4", template="ICLR2022", prompts_mode=False, # outputs parameters
):
"""
This function generates a draft paper using the provided information. The process is divided into three steps:
1. Pre-processing: Initializes the setup for paper generation and arranges the sections in the desired order.
2. Processing: Generates each section of the paper using the specified language model and writes the generated
contents into a .tex file.
3. Post-processing: Saves the prompts used for each section into a .json file and returns the path to the
destination folder containing all the generated files.
Parameters:
title (str): The title of the paper.
description (str, optional): Contributions for the paper. Defaults to an empty string.
tldr (bool, optional): A flag indicating whether a TL;DR (Too Long; Didn't Read) summary should be used.
Defaults to True.
max_kw_refs (int, optional): The maximum number of references that can be associated with each keyword.
Defaults to 10.
refs (optional): A list of references to be used in the paper.
max_tokens_ref (int, optional): The maximum number of tokens that can be used for each reference. Defaults to
2048.
knowledge_database (optional): A database containing domain-specific knowledge to be used in the paper.
max_tokens_kd (int, optional): The maximum number of tokens that can be used from the domain-specific
knowledge database. Defaults to 2048.
query_counts (int, optional): The number of queries to be made to the domain-specific knowledge database.
Defaults to 10.
sections (list, optional): The sections to be included in the paper. If not provided, all the standard
sections are included. Defaults to None.
model (str, optional): The language model to be used for paper generation. Defaults to "gpt-4".
template (str, optional): The template to be used for paper generation. Defaults to "ICLR2022".
prompts_mode (bool, optional): A flag indicating whether to generate only the prompts for each section
without generating the section contents. Defaults to False.
Returns:
str: The path to the destination folder containing the generated files.
Note: OpenAI API Error will be handled by `GPTModel`.
"""
def _filter_sections(sections):
# desired order for generating contents
ordered_sections = ["introduction", "related works", "backgrounds", "methodology", "experiments", "conclusion",
"abstract"]
return [section for section in ordered_sections if section in sections]
# pre-processing `sections` parameter;
print("================START================")
print(f"Generating the paper '{title}'.")
print("================PRE-PROCESSING================")
# make `sections` in a correct order
if sections is None:
sections = ["introduction", "related works", "backgrounds", "methodology", "experiments", "conclusion",
"abstract"]
else:
sections = _filter_sections(sections)
paper, destination_folder, _ = _generation_setup(title, description, template, tldr, max_kw_refs, refs,
max_tokens_ref=max_tokens_ref, max_tokens_kd=max_tokens_kd,
query_counts=query_counts,
knowledge_database=knowledge_database)
# main components
prompts_dict = {}
print(f"================PROCESSING================")
for section in sections:
prompts = generate_paper_prompts(paper, section)
prompts_dict[section] = prompts
if prompts_mode:
continue
print(f"Generate {section} part...")
prompts = generate_paper_prompts(paper, section)
chatgpt = GPTModel(model=model)
output, usage = chatgpt(systems=SECTION_GENERATION_SYSTEM.format(research_field="machine learning"),
prompts=prompts)
paper["body"][section] = output
tex_file = os.path.join(destination_folder, f"{section}.tex")
with open(tex_file, "w", encoding="utf-8") as f:
f.write(output)
time.sleep(5)
print(f"{section} part has been generated. ")
log_usage(usage, section)
# post-processing
print("================POST-PROCESSING================")
create_copies(destination_folder)
filename = "prompts.json"
with open(os.path.join(destination_folder, filename), "w") as f:
json.dump(prompts_dict, f)
print("\nMission completed.\n")
return destination_folder
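# Illustrative sketch only: generating prompts without calling the model (title is hypothetical).
#
#   folder = generate_draft("A Hypothetical Title", sections=["introduction", "abstract"],
#                           prompts_mode=True)
#   # `folder` then contains prompts.json but no generated .tex section bodies.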
if __name__ == "__main__":
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
# openai.api_base = os.getenv("OPENAI_API_BASE")
target_title = "Playing Atari with Decentralized Reinforcement Learning"
# output = generate_draft(target_title, model="gpt-3.5-turbo-16k")
output = generate_draft(target_title, model="gpt-4")
print(output)
| [
"0",
"{}",
"Title: PLACEHOLDER\n Contributions: PLACEHOLDER",
"prompt_tokens"
] |
2024-01-10 | CCCBora/auto-draft | cyber-supervisor-openai.py | import os
import openai
import ast
from tools import functions, TOOLS
MAX_ITER = 99
openai.api_key = os.getenv("OPENAI_API_KEY")
default_model = os.getenv("DEFAULT_MODEL")
if default_model is None:
default_model = "gpt-3.5-turbo-16k"
import chainlit as cl
async def process_new_delta(new_delta, openai_message, content_ui_message, function_ui_message):
if "role" in new_delta:
openai_message["role"] = new_delta["role"]
if "content" in new_delta:
new_content = new_delta.get("content") or ""
openai_message["content"] += new_content
await content_ui_message.stream_token(new_content)
if "function_call" in new_delta:
if "name" in new_delta["function_call"]:
openai_message["function_call"] = {
"name": new_delta["function_call"]["name"]}
await content_ui_message.send()
function_ui_message = cl.Message(
author=new_delta["function_call"]["name"],
content="", indent=1, language="json")
await function_ui_message.stream_token(new_delta["function_call"]["name"])
if "arguments" in new_delta["function_call"]:
if "arguments" not in openai_message["function_call"]:
openai_message["function_call"]["arguments"] = ""
openai_message["function_call"]["arguments"] += new_delta["function_call"]["arguments"]
await function_ui_message.stream_token(new_delta["function_call"]["arguments"])
return openai_message, content_ui_message, function_ui_message
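# For reference (added note): streamed chat deltas arrive as fragments, roughly like the
# hypothetical sequence below, which is why the handler above accumulates "content" and
# "function_call"/"arguments" token by token.
#
#   {"role": "assistant"}
#   {"content": "Hel"}, {"content": "lo"}
#   {"function_call": {"name": "find_research_directions"}}
#   {"function_call": {"arguments": "{\"research_description\": \"...\"}"}}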
system_message = "You are a mighty cyber professor. Follow the following instructions: " \
"1. You always response in the same language as your student." \
"2. Ask your student for further information if necessary to provide more assistance. " \
"3. If your student asks you to do something out of your responsibility, please say no. "
@cl.on_chat_start
def start_chat():
cl.user_session.set(
"message_history",
[{"role": "system", "content": system_message}],
)
@cl.on_message
async def run_conversation(user_message: str):
message_history = cl.user_session.get("message_history")
message_history.append({"role": "user", "content": user_message})
cur_iter = 0
while cur_iter < MAX_ITER:
# OpenAI call
openai_message = {"role": "", "content": ""}
function_ui_message = None
content_ui_message = cl.Message(content="")
async for stream_resp in await openai.ChatCompletion.acreate(
model=default_model,
messages=message_history,
stream=True,
function_call="auto",
functions=functions,
temperature=0.9
):
new_delta = stream_resp.choices[0]["delta"]
openai_message, content_ui_message, function_ui_message = await process_new_delta(
new_delta, openai_message, content_ui_message, function_ui_message)
message_history.append(openai_message)
if function_ui_message is not None:
await function_ui_message.send()
if stream_resp.choices[0]["finish_reason"] == "stop":
break
elif stream_resp.choices[0]["finish_reason"] != "function_call":
raise ValueError(stream_resp.choices[0]["finish_reason"])
# if code arrives here, it means there is a function call
function_name = openai_message.get("function_call").get("name")
arguments = ast.literal_eval(
openai_message.get("function_call").get("arguments"))
if function_name == "find_research_directions":
function_response = TOOLS[function_name](
research_field=arguments.get("research_description"),
)
else:
function_response = TOOLS[function_name](
title=arguments.get("title"),
contributions=arguments.get("contributions"),
)
message_history.append(
{
"role": "function",
"name": function_name,
"content": f"{function_response}",
}
)
await cl.Message(
author=function_name,
content=str(function_response),
language='json',
indent=1,
).send()
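        # Loop again so the model can read the function result and either answer the student
        # or request another tool call, until it emits finish_reason == "stop" or MAX_ITER is reached.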
cur_iter += 1 | [
"PLACEHOLDER"
] |
2024-01-10 | sidp24/AI-Budgeter | ai_module.py | import openai
# Set your OpenAI API key here
openai.api_key = 'API KEY'
def generate_budget_recommendation(account_balance, spending_timeframe):
prompt = f"You are a budget advising assistant. Given an account balance of ${account_balance:.2f} and a spending timeframe of {spending_timeframe} days, provide a budget recommendation for a college student."
response = openai.Completion.create(
engine = "text-davinci-002",
prompt = prompt,
temperature = 0.7,
max_tokens = 2048
)
return response.choices[0].text.strip()
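# Minimal usage sketch (values are made up for illustration):
#
#   advice = generate_budget_recommendation(1250.00, 30)
#   print(advice)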
| [
"f\"You are a budget advising assistant. Given an account balance of ${account_balance:.2f} and a spending timeframe of {spending_timeframe} days, provide a budget recommendation for a college student."
] |
2024-01-10 | GPTKing/ChatPaper | chat_paper.py | import argparse
import base64
import configparser
import datetime
import json
import os
import re
from collections import namedtuple
import arxiv
import numpy as np
import openai
import requests
import tenacity
import tiktoken
from get_paper_from_pdf import Paper
PaperParams = namedtuple(
"PaperParams",
[
"pdf_path",
"query",
"key_word",
"filter_keys",
"max_results",
"sort",
"save_image",
"file_format",
"language",
],
)
# Define the Reader class
class Reader:
    # Initialization: set up the attributes
def __init__(self, key_word, query, filter_keys,
root_path='./',
gitee_key='',
sort=arxiv.SortCriterion.SubmittedDate, user_name='defualt', args=None):
        self.user_name = user_name  # reader's name
        self.key_word = key_word  # keywords the reader is interested in
        self.query = query  # search query entered by the reader
        self.sort = sort  # sort criterion chosen by the reader
if args.language == 'en':
self.language = 'English'
elif args.language == 'zh':
self.language = 'Chinese'
else:
self.language = 'Chinese'
        self.filter_keys = filter_keys  # keywords that must appear in the abstract
self.root_path = root_path
        # Create a ConfigParser object
self.config = configparser.ConfigParser()
        # Read the configuration file
self.config.read('apikey.ini')
OPENAI_KEY = os.environ.get("OPENAI_KEY", "")
        # Get the value for the given key
self.chat_api_list = self.config.get('OpenAI', 'OPENAI_API_KEYS')[1:-1].replace('\'', '').split(',')
self.chat_api_list.append(OPENAI_KEY)
# prevent short strings from being incorrectly used as API keys.
self.chat_api_list = [api.strip() for api in self.chat_api_list if len(api) > 20]
self.cur_api = 0
self.file_format = args.file_format
if args.save_image:
self.gitee_key = self.config.get('Gitee', 'api')
else:
self.gitee_key = ''
self.max_token_num = 4096
self.encoding = tiktoken.get_encoding("gpt2")
def get_arxiv(self, max_results=30):
search = arxiv.Search(query=self.query,
max_results=max_results,
sort_by=self.sort,
sort_order=arxiv.SortOrder.Descending,
)
return search
def filter_arxiv(self, max_results=30):
search = self.get_arxiv(max_results=max_results)
print("all search:")
for index, result in enumerate(search.results()):
print(index, result.title, result.updated)
filter_results = []
filter_keys = self.filter_keys
print("filter_keys:", self.filter_keys)
        # A result counts as a target paper only if every filter keyword appears in its abstract
for index, result in enumerate(search.results()):
abs_text = result.summary.replace('-\n', '-').replace('\n', ' ')
meet_num = 0
for f_key in filter_keys.split(" "):
if f_key.lower() in abs_text.lower():
meet_num += 1
if meet_num == len(filter_keys.split(" ")):
filter_results.append(result)
# break
print("筛选后剩下的论文数量:")
print("filter_results:", len(filter_results))
print("filter_papers:")
for index, result in enumerate(filter_results):
print(index, result.title, result.updated)
return filter_results
def validateTitle(self, title):
        # Sanitize the paper title so it can be used as a file name
        rstr = r"[\/\\\:\*\?\"\<\>\|]"  # '/ \ : * ? " < > |'
        new_title = re.sub(rstr, "_", title)  # replace illegal characters with underscores
return new_title
def download_pdf(self, filter_results):
        # Create the output folder first
date_str = str(datetime.datetime.now())[:13].replace(' ', '-')
key_word = str(self.key_word.replace(':', ' '))
path = self.root_path + 'pdf_files/' + self.query.replace('au: ', '').replace('title: ', '').replace('ti: ',
'').replace(
':', ' ')[:25] + '-' + date_str
try:
os.makedirs(path)
except:
pass
print("All_paper:", len(filter_results))
        # Start downloading:
paper_list = []
for r_index, result in enumerate(filter_results):
try:
title_str = self.validateTitle(result.title)
pdf_name = title_str + '.pdf'
# result.download_pdf(path, filename=pdf_name)
self.try_download_pdf(result, path, pdf_name)
paper_path = os.path.join(path, pdf_name)
print("paper_path:", paper_path)
paper = Paper(path=paper_path,
url=result.entry_id,
title=result.title,
abs=result.summary.replace('-\n', '-').replace('\n', ' '),
authers=[str(aut) for aut in result.authors],
)
                # Download finished; start parsing:
paper.parse_pdf()
paper_list.append(paper)
except Exception as e:
print("download_error:", e)
pass
return paper_list
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def try_download_pdf(self, result, path, pdf_name):
result.download_pdf(path, filename=pdf_name)
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def upload_gitee(self, image_path, image_name='', ext='png'):
"""
        Upload an image to Gitee
:return:
"""
with open(image_path, 'rb') as f:
base64_data = base64.b64encode(f.read())
base64_content = base64_data.decode()
date_str = str(datetime.datetime.now())[:19].replace(':', '-').replace(' ', '-') + '.' + ext
path = image_name + '-' + date_str
payload = {
"access_token": self.gitee_key,
"owner": self.config.get('Gitee', 'owner'),
"repo": self.config.get('Gitee', 'repo'),
"path": self.config.get('Gitee', 'path'),
"content": base64_content,
"message": "upload image"
}
        # Change this to your own Gitee account, repository name, and folder name:
url = f'https://gitee.com/api/v5/repos/' + self.config.get('Gitee', 'owner') + '/' + self.config.get('Gitee',
'repo') + '/contents/' + self.config.get(
'Gitee', 'path') + '/' + path
rep = requests.post(url, json=payload).json()
print("rep:", rep)
if 'content' in rep.keys():
image_url = rep['content']['download_url']
else:
image_url = r"https://gitee.com/api/v5/repos/" + self.config.get('Gitee', 'owner') + '/' + self.config.get(
'Gitee', 'repo') + '/contents/' + self.config.get('Gitee', 'path') + '/' + path
return image_url
def summary_with_chat(self, paper_list):
htmls = []
for paper_index, paper in enumerate(paper_list):
            # Step 1: summarize using the title, abstract, and introduction.
text = ''
text += 'Title:' + paper.title
text += 'Url:' + paper.url
text += 'Abstrat:' + paper.abs
text += 'Paper_info:' + paper.section_text_dict['paper_info']
# intro
text += list(paper.section_text_dict.values())[0]
chat_summary_text = ""
try:
chat_summary_text = self.chat_summary(text=text)
except Exception as e:
print("summary_error:", e)
if "maximum context" in str(e):
current_tokens_index = str(e).find("your messages resulted in") + len(
"your messages resulted in") + 1
offset = int(str(e)[current_tokens_index:current_tokens_index + 4])
summary_prompt_token = offset + 1000 + 150
chat_summary_text = self.chat_summary(text=text, summary_prompt_token=summary_prompt_token)
htmls.append('## Paper:' + str(paper_index + 1))
htmls.append('\n\n\n')
htmls.append(chat_summary_text)
            # Step 2: summarize the methods:
            # TODO: some papers name their method section after the algorithm itself, so simple keyword
            # matching often misses it; a smarter scheme is needed later.
method_key = ''
for parse_key in paper.section_text_dict.keys():
if 'method' in parse_key.lower() or 'approach' in parse_key.lower():
method_key = parse_key
break
if method_key != '':
text = ''
method_text = ''
summary_text = ''
summary_text += "<summary>" + chat_summary_text
# methods
method_text += paper.section_text_dict[method_key]
text = summary_text + "\n\n<Methods>:\n\n" + method_text
chat_method_text = ""
try:
chat_method_text = self.chat_method(text=text)
except Exception as e:
print("method_error:", e)
if "maximum context" in str(e):
current_tokens_index = str(e).find("your messages resulted in") + len(
"your messages resulted in") + 1
offset = int(str(e)[current_tokens_index:current_tokens_index + 4])
method_prompt_token = offset + 800 + 150
chat_method_text = self.chat_method(text=text, method_prompt_token=method_prompt_token)
htmls.append(chat_method_text)
else:
chat_method_text = ''
htmls.append("\n" * 4)
            # Step 3: summarize the whole paper and score it:
conclusion_key = ''
for parse_key in paper.section_text_dict.keys():
if 'conclu' in parse_key.lower():
conclusion_key = parse_key
break
text = ''
conclusion_text = ''
summary_text = ''
summary_text += "<summary>" + chat_summary_text + "\n <Method summary>:\n" + chat_method_text
if conclusion_key != '':
# conclusion
conclusion_text += paper.section_text_dict[conclusion_key]
text = summary_text + "\n\n<Conclusion>:\n\n" + conclusion_text
else:
text = summary_text
chat_conclusion_text = ""
try:
chat_conclusion_text = self.chat_conclusion(text=text)
except Exception as e:
print("conclusion_error:", e)
if "maximum context" in str(e):
current_tokens_index = str(e).find("your messages resulted in") + len(
"your messages resulted in") + 1
offset = int(str(e)[current_tokens_index:current_tokens_index + 4])
conclusion_prompt_token = offset + 800 + 150
chat_conclusion_text = self.chat_conclusion(text=text,
conclusion_prompt_token=conclusion_prompt_token)
htmls.append(chat_conclusion_text)
htmls.append("\n" * 4)
            # # Merge everything into one file and save it.
date_str = str(datetime.datetime.now())[:13].replace(' ', '-')
export_path = os.path.join(self.root_path, 'export')
if not os.path.exists(export_path):
os.makedirs(export_path)
mode = 'w' if paper_index == 0 else 'a'
file_name = os.path.join(export_path,
date_str + '-' + self.validateTitle(paper.title[:80]) + "." + self.file_format)
self.export_to_markdown("\n".join(htmls), file_name=file_name, mode=mode)
# file_name = os.path.join(export_path, date_str+'-'+self.validateTitle(paper.title)+".md")
# self.export_to_markdown("\n".join(htmls), file_name=file_name, mode=mode)
htmls = []
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_conclusion(self, text, conclusion_prompt_token=800):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) - 1 else self.cur_api
text_token = len(self.encoding.encode(text))
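        # Added note: clip the input text proportionally (a character-level approximation of the
        # token budget) so that prompt plus reserved completion stays within max_token_num.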
clip_text_index = int(len(text) * (self.max_token_num - conclusion_prompt_token) / text_token)
clip_text = text[:clip_text_index]
messages = [
{"role": "system",
"content": "You are a reviewer in the field of [" + self.key_word + "] and you need to critically review this article"},
            # ChatGPT role
{"role": "assistant",
"content": "This is the <summary> and <conclusion> part of an English literature, where <summary> you have already summarized, but <conclusion> part, I need your help to summarize the following questions:" + clip_text},
            # background knowledge; the OpenReview review process can be used as a reference
{"role": "user", "content": """
8. Make the following summary.Be sure to use {} answers (proper nouns need to be marked in English).
- (1):What is the significance of this piece of work?
- (2):Summarize the strengths and weaknesses of this article in three dimensions: innovation point, performance, and workload.
.......
Follow the format of the output later:
8. Conclusion: \n\n
- (1):xxx;\n
- (2):Innovation point: xxx; Performance: xxx; Workload: xxx;\n
Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write.
""".format(self.language, self.language)},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
            # the prompt should be in English to use fewer tokens.
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("conclusion_result:\n", result)
print("prompt_token_used:", response.usage.prompt_tokens,
"completion_token_used:", response.usage.completion_tokens,
"total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_method(self, text, method_prompt_token=800):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) - 1 else self.cur_api
text_token = len(self.encoding.encode(text))
clip_text_index = int(len(text) * (self.max_token_num - method_prompt_token) / text_token)
clip_text = text[:clip_text_index]
messages = [
{"role": "system",
"content": "You are a researcher in the field of [" + self.key_word + "] who is good at summarizing papers using concise statements"},
            # ChatGPT role
{"role": "assistant",
"content": "This is the <summary> and <Method> part of an English document, where <summary> you have summarized, but the <Methods> part, I need your help to read and summarize the following questions." + clip_text},
            # background knowledge
{"role": "user", "content": """
7. Describe in detail the methodological idea of this article. Be sure to use {} answers (proper nouns need to be marked in English). For example, its steps are.
- (1):...
- (2):...
- (3):...
- .......
Follow the format of the output that follows:
7. Methods: \n\n
- (1):xxx;\n
- (2):xxx;\n
- (3):xxx;\n
....... \n\n
Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write.
""".format(self.language, self.language)},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("method_result:\n", result)
print("prompt_token_used:", response.usage.prompt_tokens,
"completion_token_used:", response.usage.completion_tokens,
"total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_summary(self, text, summary_prompt_token=1100):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) - 1 else self.cur_api
text_token = len(self.encoding.encode(text))
clip_text_index = int(len(text) * (self.max_token_num - summary_prompt_token) / text_token)
clip_text = text[:clip_text_index]
messages = [
{"role": "system",
"content": "You are a researcher in the field of [" + self.key_word + "] who is good at summarizing papers using concise statements"},
{"role": "assistant",
"content": "This is the title, author, link, abstract and introduction of an English document. I need your help to read and summarize the following questions: " + clip_text},
{"role": "user", "content": """
1. Mark the title of the paper (with Chinese translation)
2. list all the authors' names (use English)
3. mark the first author's affiliation (output {} translation only)
4. mark the keywords of this article (use English)
5. link to the paper, Github code link (if available, fill in Github:None if not)
6. summarize according to the following four points.Be sure to use {} answers (proper nouns need to be marked in English)
- (1):What is the research background of this article?
- (2):What are the past methods? What are the problems with them? Is the approach well motivated?
- (3):What is the research methodology proposed in this paper?
- (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
Follow the format of the output that follows:
1. Title: xxx\n\n
2. Authors: xxx\n\n
3. Affiliation: xxx\n\n
4. Keywords: xxx\n\n
5. Urls: xxx or xxx , xxx \n\n
6. Summary: \n\n
- (1):xxx;\n
- (2):xxx;\n
- (3):xxx;\n
- (4):xxx.\n\n
Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed.
""".format(self.language, self.language, self.language)},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("summary_result:\n", result)
print("prompt_token_used:", response.usage.prompt_tokens,
"completion_token_used:", response.usage.completion_tokens,
"total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
def export_to_markdown(self, text, file_name, mode='w'):
        # The markdown module's convert method could turn the text into HTML, e.g.:
        # html = markdown.markdown(text)
        # Open a file in the requested mode
        with open(file_name, mode, encoding="utf-8") as f:
            # Write the content to the file
            f.write(text)
    # Print the reader's information
def show_info(self):
print(f"Key word: {self.key_word}")
print(f"Query: {self.query}")
print(f"Sort: {self.sort}")
def chat_paper_main(args):
    # Create a Reader object and call its show_info method
if args.sort == 'Relevance':
sort = arxiv.SortCriterion.Relevance
elif args.sort == 'LastUpdatedDate':
sort = arxiv.SortCriterion.LastUpdatedDate
else:
sort = arxiv.SortCriterion.Relevance
if args.pdf_path:
reader1 = Reader(key_word=args.key_word,
query=args.query,
filter_keys=args.filter_keys,
sort=sort,
args=args
)
reader1.show_info()
        # Determine whether pdf_path is a single file or a directory:
paper_list = []
if args.pdf_path.endswith(".pdf"):
paper_list.append(Paper(path=args.pdf_path))
else:
for root, dirs, files in os.walk(args.pdf_path):
print("root:", root, "dirs:", dirs, 'files:', files) # 当前目录路径
for filename in files:
                    # If a PDF file is found, add it to the list of papers
if filename.endswith(".pdf"):
paper_list.append(Paper(path=os.path.join(root, filename)))
print("------------------paper_num: {}------------------".format(len(paper_list)))
[print(paper_index, paper_name.path.split('\\')[-1]) for paper_index, paper_name in enumerate(paper_list)]
reader1.summary_with_chat(paper_list=paper_list)
else:
reader1 = Reader(key_word=args.key_word,
query=args.query,
filter_keys=args.filter_keys,
sort=sort,
args=args
)
reader1.show_info()
filter_results = reader1.filter_arxiv(max_results=args.max_results)
paper_list = reader1.download_pdf(filter_results)
reader1.summary_with_chat(paper_list=paper_list)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# parser.add_argument("--pdf_path", type=str, default=r'demo.pdf', help="if none, the bot will download from arxiv with query")
# parser.add_argument("--pdf_path", type=str, default=r'C:\Users\Administrator\Desktop\DHER\RHER_Reset\ChatPaper', help="if none, the bot will download from arxiv with query")
parser.add_argument("--pdf_path", type=str, default='', help="if none, the bot will download from arxiv with query")
parser.add_argument("--query", type=str, default='all: ChatGPT robot',
help="the query string, ti: xx, au: xx, all: xx,")
parser.add_argument("--key_word", type=str, default='reinforcement learning',
help="the key word of user research fields")
parser.add_argument("--filter_keys", type=str, default='ChatGPT robot',
help="the filter key words, 摘要中每个单词都得有,才会被筛选为目标论文")
parser.add_argument("--max_results", type=int, default=1, help="the maximum number of results")
# arxiv.SortCriterion.Relevance
parser.add_argument("--sort", type=str, default="Relevance", help="another is LastUpdatedDate")
parser.add_argument("--save_image", default=False,
help="save image? It takes a minute or two to save a picture! But pretty")
parser.add_argument("--file_format", type=str, default='md', help="导出的文件格式,如果存图片的话,最好是md,如果不是的话,txt的不会乱")
parser.add_argument("--language", type=str, default='zh', help="The other output lauguage is English, is en")
paper_args = PaperParams(**vars(parser.parse_args()))
import time
start_time = time.time()
chat_paper_main(args=paper_args)
print("summary time:", time.time() - start_time)
| [
" \n 7. Describe in detail the methodological idea of this article. Be sure to use {} answers (proper nouns need to be marked in English). For example, its steps are.\n - (1):...\n - (2):...\n - (3):...\n - .......\n Follow the format of the output that follows: \n 7. Methods: \n\n\n - (1):xxx;\n \n - (2):xxx;\n \n - (3):xxx;\n \n ....... \n\n \n \n Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write. \n ",
"] and you need to critically review this article",
"This is the <summary> and <Method> part of an English document, where <summary> you have summarized, but the <Methods> part, I need your help to read and summarize the following questions.PLACEHOLDER",
" \n 1. Mark the title of the paper (with Chinese translation)\n 2. list all the authors' names (use English)\n 3. mark the first author's affiliation (output {} translation only) \n 4. mark the keywords of this article (use English)\n 5. link to the paper, Github code link (if available, fill in Github:None if not)\n 6. summarize according to the following four points.Be sure to use {} answers (proper nouns need to be marked in English)\n - (1):What is the research background of this article?\n - (2):What are the past methods? What are the problems with them? Is the approach well motivated?\n - (3):What is the research methodology proposed in this paper?\n - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?\n Follow the format of the output that follows: \n 1. Title: xxx\n\n\n 2. Authors: xxx\n\n\n 3. Affiliation: xxx\n\n \n 4. Keywords: xxx\n\n \n 5. Urls: xxx or xxx , xxx \n\n \n 6. Summary: \n\n\n - (1):xxx;\n \n - (2):xxx;\n \n - (3):xxx;\n \n - (4):xxx.\n\n \n \n Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed. \n ",
" \n 8. Make the following summary.Be sure to use {} answers (proper nouns need to be marked in English).\n - (1):What is the significance of this piece of work?\n - (2):Summarize the strengths and weaknesses of this article in three dimensions: innovation point, performance, and workload. \n .......\n Follow the format of the output later: \n 8. Conclusion: \n\n\n - (1):xxx;\n \n - (2):Innovation point: xxx; Performance: xxx; Workload: xxx;\n \n \n Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write. \n ",
"] who is good at summarizing papers using concise statements",
"This is the <summary> and <conclusion> part of an English literature, where <summary> you have already summarized, but <conclusion> part, I need your help to summarize the following questions:PLACEHOLDER",
"This is the title, author, link, abstract and introduction of an English document. I need your help to read and summarize the following questions: PLACEHOLDER",
"You are a researcher in the field of [",
"You are a reviewer in the field of ["
] |
2024-01-10 | GPTKing/ChatPaper | chat_arxiv.py | import argparse
import base64
import configparser
import datetime
import io
import json
import os
import re
from collections import namedtuple
import arxiv
import fitz
import numpy as np
import openai
# Import the required libraries
import requests
import tenacity
import tiktoken
from bs4 import BeautifulSoup
from PIL import Image
ArxivParams = namedtuple(
"ArxivParams",
[
"query",
"key_word",
"page_num",
"max_results",
"days",
"sort",
"save_image",
"file_format",
"language",
],
)
class Paper:
def __init__(self, path, title='', url='', abs='', authers=[]):
        # Initialize a Paper object from a pdf path
        self.url = url  # link to the paper
        self.path = path  # path to the pdf
        self.section_names = []  # section titles
        self.section_texts = {}  # section contents
self.abs = abs
self.title_page = 0
self.title = title
        self.pdf = fitz.open(self.path)  # pdf document
self.parse_pdf()
self.authers = authers
self.roman_num = ["I", "II", 'III', "IV", "V", "VI", "VII", "VIII", "IIX", "IX", "X"]
self.digit_num = [str(d + 1) for d in range(10)]
self.first_image = ''
def parse_pdf(self):
        self.pdf = fitz.open(self.path)  # pdf document
        self.text_list = [page.get_text() for page in self.pdf]
        self.all_text = ' '.join(self.text_list)
        self.section_page_dict = self._get_all_page_index()  # maps each section to its page number
        print("section_page_dict", self.section_page_dict)
        self.section_text_dict = self._get_all_page()  # maps each section to its text
self.section_text_dict.update({"title": self.title})
self.section_text_dict.update({"paper_info": self.get_paper_info()})
self.pdf.close()
def get_paper_info(self):
first_page_text = self.pdf[self.title_page].get_text()
if "Abstract" in self.section_text_dict.keys():
abstract_text = self.section_text_dict['Abstract']
else:
abstract_text = self.abs
first_page_text = first_page_text.replace(abstract_text, "")
return first_page_text
def get_image_path(self, image_path=''):
"""
        Save the largest image in the PDF to the local directory and return the file name so it can be read by Gitee.
        :param filename: path of the pdf, e.g. "C:\\Users\\Administrator\\Desktop\\nwd.pdf"
        :param image_path: directory where the extracted image is saved
:return:
"""
# open file
max_size = 0
image_list = []
with fitz.Document(self.path) as my_pdf_file:
            # Iterate over all pages
            for page_number in range(1, len(my_pdf_file) + 1):
                # Access the individual page
                page = my_pdf_file[page_number - 1]
                # Get all images on the current page
                images = page.get_images()
                # Iterate over all images on the current page
                for image_number, image in enumerate(page.get_images(), start=1):
                    # Access the image xref
                    xref_value = image[0]
                    # Extract the image information
                    base_image = my_pdf_file.extract_image(xref_value)
                    # Access the image bytes
                    image_bytes = base_image["image"]
                    # Get the image extension
                    ext = base_image["ext"]
                    # Load the image
image = Image.open(io.BytesIO(image_bytes))
image_size = image.size[0] * image.size[1]
if image_size > max_size:
max_size = image_size
image_list.append(image)
for image in image_list:
image_size = image.size[0] * image.size[1]
if image_size == max_size:
image_name = f"image.{ext}"
im_path = os.path.join(image_path, image_name)
print("im_path:", im_path)
max_pix = 480
origin_min_pix = min(image.size[0], image.size[1])
if image.size[0] > image.size[1]:
min_pix = int(image.size[1] * (max_pix / image.size[0]))
newsize = (max_pix, min_pix)
else:
min_pix = int(image.size[0] * (max_pix / image.size[1]))
newsize = (min_pix, max_pix)
image = image.resize(newsize)
image.save(open(im_path, "wb"))
return im_path, ext
return None, None
    # Identify chapter names based on font size and return them as a list
def get_chapter_names(self, ):
        # # Open a pdf file
        doc = fitz.open(self.path)  # pdf document
text_list = [page.get_text() for page in doc]
all_text = ''
for text in text_list:
all_text += text
        # # Create an empty list to store chapter names
chapter_names = []
for line in all_text.split('\n'):
line_list = line.split(' ')
if '.' in line:
point_split_list = line.split('.')
space_split_list = line.split(' ')
if 1 < len(space_split_list) < 5:
if 1 < len(point_split_list) < 5 and (
point_split_list[0] in self.roman_num or point_split_list[0] in self.digit_num):
print("line:", line)
chapter_names.append(line)
                    # This branch may introduce new bugs; it was originally meant to handle the "Introduction" case!
elif 1 < len(point_split_list) < 5:
print("line:", line)
chapter_names.append(line)
return chapter_names
def get_title(self):
        doc = self.pdf  # the open pdf document
        max_font_size = 0  # largest font size seen so far
        max_string = ""  # text of the span with the largest font size
max_font_sizes = [0]
        for page_index, page in enumerate(doc):  # iterate over every page
            text = page.get_text("dict")  # get the text information of the page
            blocks = text["blocks"]  # get the list of text blocks
            for block in blocks:  # iterate over every text block
                if block["type"] == 0 and len(block['lines']):  # text block
                    if len(block["lines"][0]["spans"]):
                        font_size = block["lines"][0]["spans"][0]["size"]  # font size of the first span of the first line
                        max_font_sizes.append(font_size)
                        if font_size > max_font_size:  # larger than the current maximum
                            max_font_size = font_size  # update the maximum
                            max_string = block["lines"][0]["spans"][0]["text"]  # remember the corresponding text
max_font_sizes.sort()
print("max_font_sizes", max_font_sizes[-10:])
cur_title = ''
        for page_index, page in enumerate(doc):  # iterate over every page
            text = page.get_text("dict")  # get the text information of the page
            blocks = text["blocks"]  # get the list of text blocks
            for block in blocks:  # iterate over every text block
                if block["type"] == 0 and len(block['lines']):  # text block
                    if len(block["lines"][0]["spans"]):
                        cur_string = block["lines"][0]["spans"][0]["text"]  # text of the first span of the first line
                        font_flags = block["lines"][0]["spans"][0]["flags"]  # font flags of the first span
                        font_size = block["lines"][0]["spans"][0]["size"]  # font size of the first span
# print(font_size)
if abs(font_size - max_font_sizes[-1]) < 0.3 or abs(font_size - max_font_sizes[-2]) < 0.3:
# print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags)
if len(cur_string) > 4 and "arXiv" not in cur_string:
# print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags)
if cur_title == '':
cur_title += cur_string
else:
cur_title += ' ' + cur_string
self.title_page = page_index
# break
title = cur_title.replace('\n', ' ')
return title
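    # Added note: the title is reconstructed from spans whose font size is within 0.3 of the
    # largest (or second largest) size seen in the document, skipping short strings and any
    # span containing "arXiv"; title_page records the page where those spans were found.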
def _get_all_page_index(self):
        # List of section names to look for
section_list = ["Abstract",
'Introduction', 'Related Work', 'Background',
"Introduction and Motivation", "Computation Function", " Routing Function",
"Preliminary", "Problem Formulation",
'Methods', 'Methodology', "Method", 'Approach', 'Approaches',
# exp
"Materials and Methods", "Experiment Settings",
'Experiment', "Experimental Results", "Evaluation", "Experiments",
"Results", 'Findings', 'Data Analysis',
"Discussion", "Results and Discussion", "Conclusion",
'References']
        # Dictionary mapping each found section to the page it appears on
section_page_dict = {}
        # Iterate over every page of the document
for page_index, page in enumerate(self.pdf):
            # Get the text of the current page
cur_text = page.get_text()
            # Check every candidate section name
for section_name in section_list:
                # Upper-case form of the section name
section_name_upper = section_name.upper()
# 如果当前页面包含"Abstract"这个关键词
if "Abstract" == section_name and section_name in cur_text:
# 将"Abstract"和它所在的页码加入字典中
section_page_dict[section_name] = page_index
                # Otherwise, if the page contains the section name, record it with its page number
else:
if section_name + '\n' in cur_text:
section_page_dict[section_name] = page_index
elif section_name_upper + '\n' in cur_text:
section_page_dict[section_name] = page_index
        # Return all found section names and the pages they appear on
return section_page_dict
def _get_all_page(self):
"""
        Collect the text of every page of the PDF and organize it into a dictionary keyed by section.
        Returns:
            section_dict (dict): dictionary of section texts; keys are section names, values are section texts.
"""
text = ''
text_list = []
section_dict = {}
        # Then handle the remaining sections:
text_list = [page.get_text() for page in self.pdf]
for sec_index, sec_name in enumerate(self.section_page_dict):
print(sec_index, sec_name, self.section_page_dict[sec_name])
if sec_index <= 0 and self.abs:
continue
else:
                # Look directly at the content that follows:
start_page = self.section_page_dict[sec_name]
if sec_index < len(list(self.section_page_dict.keys())) - 1:
end_page = self.section_page_dict[list(self.section_page_dict.keys())[sec_index + 1]]
else:
end_page = len(text_list)
print("start_page, end_page:", start_page, end_page)
cur_sec_text = ''
if end_page - start_page == 0:
if sec_index < len(list(self.section_page_dict.keys())) - 1:
next_sec = list(self.section_page_dict.keys())[sec_index + 1]
if text_list[start_page].find(sec_name) == -1:
start_i = text_list[start_page].find(sec_name.upper())
else:
start_i = text_list[start_page].find(sec_name)
if text_list[start_page].find(next_sec) == -1:
end_i = text_list[start_page].find(next_sec.upper())
else:
end_i = text_list[start_page].find(next_sec)
cur_sec_text += text_list[start_page][start_i:end_i]
else:
for page_i in range(start_page, end_page):
# print("page_i:", page_i)
if page_i == start_page:
if text_list[start_page].find(sec_name) == -1:
start_i = text_list[start_page].find(sec_name.upper())
else:
start_i = text_list[start_page].find(sec_name)
cur_sec_text += text_list[page_i][start_i:]
elif page_i < end_page:
cur_sec_text += text_list[page_i]
elif page_i == end_page:
if sec_index < len(list(self.section_page_dict.keys())) - 1:
next_sec = list(self.section_page_dict.keys())[sec_index + 1]
if text_list[start_page].find(next_sec) == -1:
end_i = text_list[start_page].find(next_sec.upper())
else:
end_i = text_list[start_page].find(next_sec)
cur_sec_text += text_list[page_i][:end_i]
section_dict[sec_name] = cur_sec_text.replace('-\n', '').replace('\n', ' ')
return section_dict
# Define the Reader class
class Reader:
    # Initialization: set up the attributes
def __init__(self, key_word, query,
root_path='./',
gitee_key='',
sort=arxiv.SortCriterion.SubmittedDate, user_name='defualt', args=None):
        self.user_name = user_name  # reader's name
        self.key_word = key_word  # keywords the reader is interested in
        self.query = query  # search query entered by the reader
        self.sort = sort  # sort criterion chosen by the reader
self.args = args
if args.language == 'en':
self.language = 'English'
elif args.language == 'zh':
self.language = 'Chinese'
else:
self.language = 'Chinese'
self.root_path = root_path
        # Create a ConfigParser object
self.config = configparser.ConfigParser()
        # Read the configuration file
self.config.read('apikey.ini')
OPENAI_KEY = os.environ.get("OPENAI_KEY", "")
        # Get the value for the given key
self.chat_api_list = self.config.get('OpenAI', 'OPENAI_API_KEYS')[1:-1].replace('\'', '').split(',')
self.chat_api_list.append(OPENAI_KEY)
# prevent short strings from being incorrectly used as API keys.
self.chat_api_list = [api.strip() for api in self.chat_api_list if len(api) > 20]
self.cur_api = 0
self.file_format = args.file_format
if args.save_image:
self.gitee_key = self.config.get('Gitee', 'api')
else:
self.gitee_key = ''
self.max_token_num = 4096
self.encoding = tiktoken.get_encoding("gpt2")
    # Build an arxiv search URL from a keyword and page number
def get_url(self, keyword, page):
base_url = "https://arxiv.org/search/?"
params = {
"query": keyword,
"searchtype": "all", # 搜索所有字段
"abstracts": "show", # 显示摘要
"order": "-announced_date_first", # 按日期降序排序
"size": 50 # 每页显示50条结果
}
if page > 0:
params["start"] = page * 50 # 设置起始位置
return base_url + requests.compat.urlencode(params)
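    # Illustrative only (added note): get_url("diffusion model", 1) yields roughly
    # https://arxiv.org/search/?query=diffusion+model&searchtype=all&abstracts=show&order=-announced_date_first&size=50&start=50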
    # Fetch the web page for a given URL and parse out the paper titles
def get_titles(self, url, days=1):
titles = []
        # Empty list to store the paper links
links = []
dates = []
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
        articles = soup.find_all("li", class_="arxiv-result")  # all li tags that contain paper information
today = datetime.date.today()
last_days = datetime.timedelta(days=days)
for article in articles:
            title = article.find("p", class_="title").text  # title of the paper (extra whitespace is stripped below)
link = article.find("span").find_all("a")[0].get('href')
date_text = article.find("p", class_="is-size-7").text
date_text = date_text.split('\n')[0].split("Submitted ")[-1].split("; ")[0]
date_text = datetime.datetime.strptime(date_text, "%d %B, %Y").date()
if today - date_text <= last_days:
titles.append(title.strip())
links.append(link)
dates.append(date_text)
# print("links:", links)
return titles, links, dates
    # Fetch all available paper titles for a keyword and print them
def get_all_titles_from_web(self, keyword, page_num=1, days=1):
title_list, link_list, date_list = [], [], []
for page in range(page_num):
            url = self.get_url(keyword, page)  # build the URL for this keyword and page
            titles, links, dates = self.get_titles(url, days)  # fetch paper titles from the URL
            if not titles:  # no titles means the last page was reached, so stop
                break
            for title_index, title in enumerate(titles):  # print every title
print(page, title_index, title, links[title_index], dates[title_index])
title_list.extend(titles)
link_list.extend(links)
date_list.extend(dates)
print("-" * 40)
return title_list, link_list, date_list
def get_arxiv(self, max_results=30):
search = arxiv.Search(query=self.query,
max_results=max_results,
sort_by=self.sort,
sort_order=arxiv.SortOrder.Descending,
)
return search
def get_arxiv_web(self, args, page_num=1, days=2):
titles, links, dates = self.get_all_titles_from_web(args.query, page_num=page_num, days=days)
paper_list = []
for title_index, title in enumerate(titles):
if title_index + 1 > args.max_results:
break
print(title_index, title, links[title_index], dates[title_index])
url = links[title_index] + ".pdf" # the link of the pdf document
filename = self.try_download_pdf(url, title)
paper = Paper(path=filename,
url=links[title_index],
title=title,
)
paper_list.append(paper)
return paper_list
def validateTitle(self, title):
        # Sanitize the paper title so it can be used as a file name
        rstr = r"[\/\\\:\*\?\"\<\>\|]"  # '/ \ : * ? " < > |'
        new_title = re.sub(rstr, "_", title)  # replace illegal characters with underscores
return new_title
def download_pdf(self, url, title):
response = requests.get(url) # send a GET request to the url
date_str = str(datetime.datetime.now())[:13].replace(' ', '-')
path = self.root_path + 'pdf_files/' + self.validateTitle(self.args.query) + '-' + date_str
try:
os.makedirs(path)
except:
pass
filename = os.path.join(path, self.validateTitle(title)[:80] + '.pdf')
with open(filename, "wb") as f: # open a file with write and binary mode
f.write(response.content) # write the content of the response to the file
return filename
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def try_download_pdf(self, url, title):
return self.download_pdf(url, title)
def summary_with_chat(self, paper_list):
htmls = []
for paper_index, paper in enumerate(paper_list):
            # Step 1: summarize using the title, abstract, and introduction.
text = ''
text += 'Title:' + paper.title
text += 'Url:' + paper.url
text += 'Abstrat:' + paper.abs
text += 'Paper_info:' + paper.section_text_dict['paper_info']
# intro
text += list(paper.section_text_dict.values())[0]
try:
chat_summary_text = self.chat_summary(text=text)
except Exception as e:
print("summary_error:", e)
if "maximum context" in str(e):
current_tokens_index = str(e).find("your messages resulted in") + len(
"your messages resulted in") + 1
offset = int(str(e)[current_tokens_index:current_tokens_index + 4])
summary_prompt_token = offset + 1000 + 150
chat_summary_text = self.chat_summary(text=text, summary_prompt_token=summary_prompt_token)
            htmls.append('## Paper:' + str(paper_index + 1))
            htmls.append('\n\n\n')
            if "chat_summary_text" not in locals():
                chat_summary_text = ""
            htmls.append(chat_summary_text)
            # Step 2: summarize the methods:
            # TODO: some papers name their method section after the algorithm itself, so simple keyword
            # matching often misses it; a smarter scheme is needed later.
method_key = ''
for parse_key in paper.section_text_dict.keys():
if 'method' in parse_key.lower() or 'approach' in parse_key.lower():
method_key = parse_key
break
if method_key != '':
text = ''
method_text = ''
summary_text = ''
summary_text += "<summary>" + chat_summary_text
# methods
method_text += paper.section_text_dict[method_key]
text = summary_text + "\n\n<Methods>:\n\n" + method_text
# chat_method_text = self.chat_method(text=text)
try:
chat_method_text = self.chat_method(text=text)
except Exception as e:
print("method_error:", e)
if "maximum context" in str(e):
current_tokens_index = str(e).find("your messages resulted in") + len(
"your messages resulted in") + 1
offset = int(str(e)[current_tokens_index:current_tokens_index + 4])
method_prompt_token = offset + 800 + 150
chat_method_text = self.chat_method(text=text, method_prompt_token=method_prompt_token)
chat_method_text = ""
if "chat_method_text" in locals():
htmls.append(chat_method_text)
# htmls.append(chat_method_text)
else:
chat_method_text = ''
htmls.append("\n" * 4)
            # Step 3: summarize the whole paper and score it:
conclusion_key = ''
for parse_key in paper.section_text_dict.keys():
if 'conclu' in parse_key.lower():
conclusion_key = parse_key
break
text = ''
conclusion_text = ''
summary_text = ''
summary_text += "<summary>" + chat_summary_text + "\n <Method summary>:\n" + chat_method_text
if conclusion_key != '':
# conclusion
conclusion_text += paper.section_text_dict[conclusion_key]
text = summary_text + "\n\n<Conclusion>:\n\n" + conclusion_text
else:
text = summary_text
# chat_conclusion_text = self.chat_conclusion(text=text)
try:
chat_conclusion_text = self.chat_conclusion(text=text)
except Exception as e:
print("conclusion_error:", e)
if "maximum context" in str(e):
current_tokens_index = str(e).find("your messages resulted in") + len(
"your messages resulted in") + 1
offset = int(str(e)[current_tokens_index:current_tokens_index + 4])
conclusion_prompt_token = offset + 800 + 150
chat_conclusion_text = self.chat_conclusion(text=text,
conclusion_prompt_token=conclusion_prompt_token)
chat_conclusion_text = ""
if "chat_conclusion_text" in locals():
htmls.append(chat_conclusion_text)
htmls.append("\n" * 4)
            # # Merge everything into one file and save it.
date_str = str(datetime.datetime.now())[:13].replace(' ', '-')
export_path = os.path.join(self.root_path, 'export')
if not os.path.exists(export_path):
os.makedirs(export_path)
mode = 'w' if paper_index == 0 else 'a'
file_name = os.path.join(export_path,
date_str + '-' + self.validateTitle(self.query) + "." + self.file_format)
self.export_to_markdown("\n".join(htmls), file_name=file_name, mode=mode)
htmls = []
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_conclusion(self, text, conclusion_prompt_token=800):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) - 1 else self.cur_api
text_token = len(self.encoding.encode(text))
clip_text_index = int(len(text) * (self.max_token_num - conclusion_prompt_token) / text_token)
clip_text = text[:clip_text_index]
messages = [
{"role": "system",
"content": "You are a reviewer in the field of [" + self.key_word + "] and you need to critically review this article"},
            # ChatGPT role
{"role": "assistant",
"content": "This is the <summary> and <conclusion> part of an English literature, where <summary> you have already summarized, but <conclusion> part, I need your help to summarize the following questions:" + clip_text},
            # background knowledge; the OpenReview review process can be used as a reference
{"role": "user", "content": """
8. Make the following summary.Be sure to use {} answers (proper nouns need to be marked in English).
- (1):What is the significance of this piece of work?
- (2):Summarize the strengths and weaknesses of this article in three dimensions: innovation point, performance, and workload.
.......
Follow the format of the output later:
8. Conclusion: \n\n
- (1):xxx;\n
- (2):Innovation point: xxx; Performance: xxx; Workload: xxx;\n
Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write.
""".format(self.language, self.language)},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
            # the prompt should be in English to use fewer tokens.
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("conclusion_result:\n", result)
print("prompt_token_used:", response.usage.prompt_tokens,
"completion_token_used:", response.usage.completion_tokens,
"total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_method(self, text, method_prompt_token=800):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) else self.cur_api
text_token = len(self.encoding.encode(text))
clip_text_index = int(len(text) * (self.max_token_num - method_prompt_token) / text_token)
clip_text = text[:clip_text_index]
messages = [
{"role": "system",
"content": "You are a researcher in the field of [" + self.key_word + "] who is good at summarizing papers using concise statements"},
# ChatGPT role prompt
{"role": "assistant",
"content": "This is the <summary> and <Method> part of an English document, where <summary> you have summarized, but the <Methods> part, I need your help to read and summarize the following questions." + clip_text},
# Background knowledge
{"role": "user", "content": """
7. Describe in detail the methodological idea of this article. Be sure to use {} answers (proper nouns need to be marked in English). For example, its steps are.
- (1):...
- (2):...
- (3):...
- .......
Follow the format of the output that follows:
7. Methods: \n\n
- (1):xxx;\n
- (2):xxx;\n
- (3):xxx;\n
....... \n\n
Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write.
""".format(self.language, self.language)},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("method_result:\n", result)
print("prompt_token_used:", response.usage.prompt_tokens,
"completion_token_used:", response.usage.completion_tokens,
"total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_summary(self, text, summary_prompt_token=1100):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) else self.cur_api
text_token = len(self.encoding.encode(text))
clip_text_index = int(len(text) * (self.max_token_num - summary_prompt_token) / text_token)
clip_text = text[:clip_text_index]
messages = [
{"role": "system",
"content": "You are a researcher in the field of [" + self.key_word + "] who is good at summarizing papers using concise statements"},
{"role": "assistant",
"content": "This is the title, author, link, abstract and introduction of an English document. I need your help to read and summarize the following questions: " + clip_text},
{"role": "user", "content": """
1. Mark the title of the paper (with Chinese translation)
2. list all the authors' names (use English)
3. mark the first author's affiliation (output {} translation only)
4. mark the keywords of this article (use English)
5. link to the paper, Github code link (if available, fill in Github:None if not)
6. summarize according to the following four points.Be sure to use {} answers (proper nouns need to be marked in English)
- (1):What is the research background of this article?
- (2):What are the past methods? What are the problems with them? Is the approach well motivated?
- (3):What is the research methodology proposed in this paper?
- (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
Follow the format of the output that follows:
1. Title: xxx\n\n
2. Authors: xxx\n\n
3. Affiliation: xxx\n\n
4. Keywords: xxx\n\n
5. Urls: xxx or xxx , xxx \n\n
6. Summary: \n\n
- (1):xxx;\n
- (2):xxx;\n
- (3):xxx;\n
- (4):xxx.\n\n
Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed.
""".format(self.language, self.language, self.language)},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("summary_result:\n", result)
print("prompt_token_used:", response.usage.prompt_tokens,
"completion_token_used:", response.usage.completion_tokens,
"total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
def export_to_markdown(self, text, file_name, mode='w'):
# Open the file in the given mode
with open(file_name, mode, encoding="utf-8") as f:
# Write the content to the file
f.write(text)
# Print the reader's configuration
def show_info(self):
print(f"Key word: {self.key_word}")
print(f"Query: {self.query}")
print(f"Sort: {self.sort}")
def chat_arxiv_main(args):
reader1 = Reader(key_word=args.key_word,
query=args.query,
args=args
)
reader1.show_info()
paper_list = reader1.get_arxiv_web(args=args, page_num=args.page_num, days=args.days)
reader1.summary_with_chat(paper_list=paper_list)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--query", type=str, default='GPT-4', help="the query string, ti: xx, au: xx, all: xx,")
parser.add_argument("--key_word", type=str, default='GPT robot', help="the key word of user research fields")
parser.add_argument("--page_num", type=int, default=1, help="the maximum number of page")
parser.add_argument("--max_results", type=int, default=1, help="the maximum number of results")
parser.add_argument("--days", type=int, default=1, help="the last days of arxiv papers of this query")
parser.add_argument("--sort", type=str, default="web", help="another is LastUpdatedDate")
parser.add_argument("--save_image", default=False,
help="save image? It takes a minute or two to save a picture! But pretty")
parser.add_argument("--file_format", type=str, default='md', help="导出的文件格式,如果存图片的话,最好是md,如果不是的话,txt的不会乱")
parser.add_argument("--language", type=str, default='zh', help="The other output lauguage is English, is en")
arxiv_args = ArxivParams(**vars(parser.parse_args()))
import time
start_time = time.time()
chat_arxiv_main(args=arxiv_args)
print("summary time:", time.time() - start_time)
| [
" \n 7. Describe in detail the methodological idea of this article. Be sure to use {} answers (proper nouns need to be marked in English). For example, its steps are.\n - (1):...\n - (2):...\n - (3):...\n - .......\n Follow the format of the output that follows: \n 7. Methods: \n\n\n - (1):xxx;\n \n - (2):xxx;\n \n - (3):xxx;\n \n ....... \n\n \n \n Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write. \n ",
"] and you need to critically review this article",
"This is the <summary> and <Method> part of an English document, where <summary> you have summarized, but the <Methods> part, I need your help to read and summarize the following questions.PLACEHOLDER",
" \n 1. Mark the title of the paper (with Chinese translation)\n 2. list all the authors' names (use English)\n 3. mark the first author's affiliation (output {} translation only) \n 4. mark the keywords of this article (use English)\n 5. link to the paper, Github code link (if available, fill in Github:None if not)\n 6. summarize according to the following four points.Be sure to use {} answers (proper nouns need to be marked in English)\n - (1):What is the research background of this article?\n - (2):What are the past methods? What are the problems with them? Is the approach well motivated?\n - (3):What is the research methodology proposed in this paper?\n - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?\n Follow the format of the output that follows: \n 1. Title: xxx\n\n\n 2. Authors: xxx\n\n\n 3. Affiliation: xxx\n\n \n 4. Keywords: xxx\n\n \n 5. Urls: xxx or xxx , xxx \n\n \n 6. Summary: \n\n\n - (1):xxx;\n \n - (2):xxx;\n \n - (3):xxx;\n \n - (4):xxx.\n\n \n \n Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed. \n ",
" \n 8. Make the following summary.Be sure to use {} answers (proper nouns need to be marked in English).\n - (1):What is the significance of this piece of work?\n - (2):Summarize the strengths and weaknesses of this article in three dimensions: innovation point, performance, and workload. \n .......\n Follow the format of the output later: \n 8. Conclusion: \n\n\n - (1):xxx;\n \n - (2):Innovation point: xxx; Performance: xxx; Workload: xxx;\n \n \n Be sure to use {} answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous <summary>, the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write. \n ",
"] who is good at summarizing papers using concise statements",
"This is the <summary> and <conclusion> part of an English literature, where <summary> you have already summarized, but <conclusion> part, I need your help to summarize the following questions:PLACEHOLDER",
"This is the title, author, link, abstract and introduction of an English document. I need your help to read and summarize the following questions: PLACEHOLDER",
"You are a researcher in the field of [",
"You are a reviewer in the field of ["
] |
2024-01-10 | GPTKing/ChatPaper | chat_reviewer.py | import argparse
import configparser
import datetime
import json
import os
import re
import time
from collections import namedtuple
import numpy as np
import openai
import tenacity
import tiktoken
from get_paper import Paper
ReviewerParams = namedtuple(
"Params", ["paper_path", "file_format", "research_fields", "language"]
)
# Define the Reviewer class
class Reviewer:
# Initialization: set up attributes
def __init__(self, args=None):
if args.language == 'en':
self.language = 'English'
elif args.language == 'zh':
self.language = 'Chinese'
else:
self.language = 'Chinese'
# Create a ConfigParser object
self.config = configparser.ConfigParser()
# Read the config file
self.config.read('apikey.ini')
OPENAI_KEY = os.environ.get("OPENAI_KEY", "")
# Get the value for the given key
self.chat_api_list = self.config.get('OpenAI', 'OPENAI_API_KEYS')[1:-1].replace('\'', '').split(',')
self.chat_api_list.append(OPENAI_KEY)
# prevent short strings from being incorrectly used as API keys.
self.chat_api_list = [api.strip() for api in self.chat_api_list if len(api) > 20]
self.cur_api = 0
self.file_format = args.file_format
self.max_token_num = 4096
self.encoding = tiktoken.get_encoding("gpt2")
def validateTitle(self, title):
# Sanitize the paper title for use in a file path
rstr = r"[\/\\\:\*\?\"\<\>\|]" # '/ \ : * ? " < > |'
new_title = re.sub(rstr, "_", title)  # replace illegal characters with underscores
return new_title
def review_by_chatgpt(self, paper_list):
htmls = []
for paper_index, paper in enumerate(paper_list):
sections_of_interest = self.stage_1(paper)
# extract the essential parts of the paper
text = ''
text += 'Title:' + paper.title + '. '
text += 'Abstract: ' + paper.section_texts['Abstract']
intro_title = next((item for item in paper.section_names if 'ntroduction' in item.lower()), None)
if intro_title is not None:
text += 'Introduction: ' + paper.section_texts[intro_title]
# Similar for conclusion section
conclusion_title = next((item for item in paper.section_names if 'onclusion' in item), None)
if conclusion_title is not None:
text += 'Conclusion: ' + paper.section_texts[conclusion_title]
for heading in sections_of_interest:
if heading in paper.section_names:
text += heading + ': ' + paper.section_texts[heading]
chat_review_text = self.chat_review(text=text)
htmls.append('## Paper:' + str(paper_index + 1))
htmls.append('\n\n\n')
htmls.append(chat_review_text)
# Save the review comments
date_str = str(datetime.datetime.now())[:13].replace(' ', '-')
export_path = os.path.join('./', 'export')
if not os.path.exists(export_path):
os.makedirs(export_path)
mode = 'w' if paper_index == 0 else 'a'
file_name = os.path.join(export_path,
date_str + '-' + self.validateTitle(paper.title) + "." + self.file_format)
self.export_to_markdown("\n".join(htmls), file_name=file_name, mode=mode)
htmls = []
def stage_1(self, paper):
htmls = []
text = ''
text += 'Title: ' + paper.title + '. '
text += 'Abstract: ' + paper.section_texts['Abstract']
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) else self.cur_api  # wrap around so every configured key gets used
messages = [
{"role": "system",
"content": f"You are a professional reviewer in the field of {args.research_fields}. "
f"I will give you a paper. You need to review this paper and discuss the novelty and originality of ideas, correctness, clarity, the significance of results, potential impact and quality of the presentation. "
f"Due to the length limitations, I am only allowed to provide you the abstract, introduction, conclusion and at most two sections of this paper."
f"Now I will give you the title and abstract and the headings of potential sections. "
f"You need to reply at most two headings. Then I will further provide you the full information, includes aforementioned sections and at most two sections you called for.\n\n"
f"Title: {paper.title}\n\n"
f"Abstract: {paper.section_texts['Abstract']}\n\n"
f"Potential Sections: {paper.section_names[2:-1]}\n\n"
f"Follow the following format to output your choice of sections:"
f"{{chosen section 1}}, {{chosen section 2}}\n\n"},
{"role": "user", "content": text},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print(result)
return result.split(',')
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_review(self, text):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) else self.cur_api
review_prompt_token = 1000
text_token = len(self.encoding.encode(text))
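# Reserve review_prompt_token tokens for the review instructions and clip the paper text
# proportionally (an approximate character-level cut) so the request fits within max_token_num.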
input_text_index = int(len(text) * (self.max_token_num - review_prompt_token) / text_token)
input_text = "This is the paper for your review:" + text[:input_text_index]
with open('ReviewFormat.txt', 'r') as file:  # read the predefined review format
review_format = file.read()
messages = [
{"role": "system",
"content": "You are a professional reviewer in the field of " + args.research_fields + ". Now I will give you a paper. You need to give a complete review opinion according to the following requirements and format:" + review_format + " Please answer in {}.".format(
self.language)},
{"role": "user", "content": input_text},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("********" * 10)
print(result)
print("********" * 10)
print("prompt_token_used:", response.usage.prompt_tokens)
print("completion_token_used:", response.usage.completion_tokens)
print("total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
def export_to_markdown(self, text, file_name, mode='w'):
# Use the markdown module's convert method to turn the text into HTML format
# html = markdown.markdown(text)
# Open the file in the given mode
with open(file_name, mode, encoding="utf-8") as f:
# Write the content to the file
f.write(text)
def chat_reviewer_main(args):
reviewer1 = Reviewer(args=args)
# Determine whether the input is a directory or a single file:
paper_list = []
if args.paper_path.endswith(".pdf"):
paper_list.append(Paper(path=args.paper_path))
else:
for root, dirs, files in os.walk(args.paper_path):
print("root:", root, "dirs:", dirs, 'files:', files) # 当前目录路径
for filename in files:
# If a PDF file is found, add it to the paper list
if filename.endswith(".pdf"):
paper_list.append(Paper(path=os.path.join(root, filename)))
print("------------------paper_num: {}------------------".format(len(paper_list)))
[print(paper_index, paper_name.path.split('\\')[-1]) for paper_index, paper_name in enumerate(paper_list)]
reviewer1.review_by_chatgpt(paper_list=paper_list)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--paper_path", type=str, default='', help="path of papers")
parser.add_argument("--file_format", type=str, default='txt', help="output file format")
parser.add_argument("--research_fields", type=str,
default='computer science, artificial intelligence and reinforcement learning',
help="the research fields of paper")
parser.add_argument("--language", type=str, default='en', help="output lauguage, en or zh")
reviewer_args = ReviewerParams(**vars(parser.parse_args()))
start_time = time.time()
chat_reviewer_main(args=reviewer_args)
print("review time:", time.time() - start_time)
| [
"You are a professional reviewer in the field of ",
" Please answer in {}.",
". Now I will give you a paper. You need to give a complete review opinion according to the following requirements and format:",
"I will give you a paper. You need to review this paper and discuss the novelty and originality of ideas, correctness, clarity, the significance of results, potential impact and quality of the presentation. ",
"{chosen section 1}, {chosen section 2}\n\n",
"Follow the following format to output your choice of sections:",
"Now I will give you the title and abstract and the headings of potential sections. ",
"You need to reply at most two headings. Then I will further provide you the full information, includes aforementioned sections and at most two sections you called for.\n\n",
"Due to the length limitations, I am only allowed to provide you the abstract, introduction, conclusion and at most two sections of this paper.",
"1000"
] |
2024-01-10 | GPTKing/ChatPaper | chat_response.py | import argparse
import configparser
import datetime
import json
import os
import re
import time
from collections import namedtuple
import numpy as np
import openai
import tenacity
import tiktoken
from get_paper import Paper
# ChatResponse
ResponseParams = namedtuple(
"Params", ["comment_path", "file_format", "language"]
)
# Define the Response class
class Response:
# Initialization: set up attributes
def __init__(self, args=None):
if args.language == 'en':
self.language = 'English'
elif args.language == 'zh':
self.language = 'Chinese'
else:
self.language = 'Chinese'
# Create a ConfigParser object
self.config = configparser.ConfigParser()
# Read the config file
self.config.read('apikey.ini')
OPENAI_KEY = os.environ.get("OPENAI_KEY", "")
# Get the value for the given key
self.chat_api_list = self.config.get('OpenAI', 'OPENAI_API_KEYS')[1:-1].replace('\'', '').split(',')
self.chat_api_list.append(OPENAI_KEY)
# prevent short strings from being incorrectly used as API keys.
self.chat_api_list = [api.strip() for api in self.chat_api_list if len(api) > 20]
self.cur_api = 0
self.file_format = args.file_format
self.max_token_num = 4096
self.encoding = tiktoken.get_encoding("gpt2")
def response_by_chatgpt(self, comment_path):
htmls = []
# Read the review comments to respond to
with open(comment_path, 'r') as file:
comments = file.read()
chat_response_text = self.chat_response(text=comments)
htmls.append(chat_response_text)
# Save the generated author response
date_str = str(datetime.datetime.now())[:13].replace(' ', '-')
export_path = os.path.join('./', 'response_file')
if not os.path.exists(export_path):
os.makedirs(export_path)
file_name = os.path.join(export_path, date_str + '-Response.' + self.file_format)
self.export_to_markdown("\n".join(htmls), file_name=file_name)
htmls = []
@tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10),
stop=tenacity.stop_after_attempt(5),
reraise=True)
def chat_response(self, text):
openai.api_key = self.chat_api_list[self.cur_api]
self.cur_api += 1
self.cur_api = 0 if self.cur_api >= len(self.chat_api_list) else self.cur_api  # wrap around so every configured key gets used
response_prompt_token = 1000
text_token = len(self.encoding.encode(text))
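# Clip the review comments proportionally (character-level approximation) so that they plus the
# response prompt stay within max_token_num.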
input_text_index = int(len(text) * (self.max_token_num - response_prompt_token) / text_token)
input_text = "This is the review comments:" + text[:input_text_index]
messages = [
{"role": "system", "content": """You are the author, you submitted a paper, and the reviewers gave the review comments.
Please reply with what we have done, not what we will do.
You need to extract questions from the review comments one by one, and then respond point-to-point to the reviewers’ concerns.
Please answer in {}. Follow the format of the output later:
- Response to reviewers
#1 reviewer
Concern #1: xxxx
Author response: xxxxx
Concern #2: xxxx
Author response: xxxxx
...
#2 reviewer
Concern #1: xxxx
Author response: xxxxx
Concern #2: xxxx
Author response: xxxxx
...
#3 reviewer
Concern #1: xxxx
Author response: xxxxx
Concern #2: xxxx
Author response: xxxxx
...
""".format(self.language)
},
{"role": "user", "content": input_text},
]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
result = ''
for choice in response.choices:
result += choice.message.content
print("********" * 10)
print(result)
print("********" * 10)
print("prompt_token_used:", response.usage.prompt_tokens)
print("completion_token_used:", response.usage.completion_tokens)
print("total_token_used:", response.usage.total_tokens)
print("response_time:", response.response_ms / 1000.0, 's')
return result
def export_to_markdown(self, text, file_name, mode='w'):
# Use the markdown module's convert method to turn the text into HTML format
# html = markdown.markdown(text)
# Open the file in the given mode
with open(file_name, mode, encoding="utf-8") as f:
# Write the content to the file
f.write(text)
def chat_response_main(args):
Response1 = Response(args=args)
Response1.response_by_chatgpt(comment_path=args.comment_path)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--comment_path", type=str, default='review_comments.txt', help="path of comment")
parser.add_argument("--file_format", type=str, default='txt', help="output file format")
parser.add_argument("--language", type=str, default='en', help="output lauguage, en or zh")
response_args = ResponseParams(**vars(parser.parse_args()))
# args = parser.parse_args()
start_time = time.time()
chat_response_main(args=response_args)
print("response time:", time.time() - start_time)
| [
"You are the author, you submitted a paper, and the reviewers gave the review comments. \n Please reply with what we have done, not what we will do.\n You need to extract questions from the review comments one by one, and then respond point-to-point to the reviewers’ concerns. \n Please answer in {}. Follow the format of the output later: \n - Response to reviewers\n #1 reviewer\n Concern #1: xxxx\n Author response: xxxxx\n\n Concern #2: xxxx\n Author response: xxxxx\n ...\n\n #2 reviewer\n Concern #1: xxxx\n Author response: xxxxx\n\n Concern #2: xxxx\n Author response: xxxxx\n ...\n\n #3 reviewer\n Concern #1: xxxx\n Author response: xxxxx\n\n Concern #2: xxxx\n Author response: xxxxx\n ...\n \n ",
"This is the review comments:PLACEHOLDER",
"1000"
] |
2024-01-10 | azure-openai-tf/mtc-azure-openai-back | source~utils~azure_openai_utils.py | """
@created_by ayaan
@created_at 2023.05.08
"""
import os
import requests
import time
from custom_exception import APIException
from utils.common_utils import CommonUtils
from collections import OrderedDict
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.models import QueryType
from azure.search.documents import SearchClient
from azure.search.documents.indexes.aio import SearchIndexerClient
from azure.search.documents.indexes.aio import SearchIndexClient
# LangChain & OpenAI packages
import openai
from langchain.chat_models import AzureChatOpenAI
# from langchain.vectorstores import Chroma
from langchain.vectorstores import FAISS
from langchain.docstore.document import Document
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from dotenv import load_dotenv
# Log DB Insert
from database import MysqlEngine
from domains.models import ChatRequestHistory
load_dotenv()
class AzureOpenAIUtils:
"""Azure OpenAI Utilities"""
azure_search_endpoint = os.environ.get("SEARCH_ENDPOINT")
azure_search_api_version = "2021-04-30-preview"
content_field = "pages"
sourcepage_field = "metadata_storage_name"
chatgpt_deployment = "gpt-35-turbo"
cognitive_search_credential = AzureKeyCredential(os.environ.get("SEARCH_KEY"))
headers = {"Content-Type": "application/json", "api-key": os.environ.get("SEARCH_KEY")}
params = {"api-version": "2021-04-30-preview"}
search_top = 2
azure_openai_key = os.environ.get("OPEN_AI_KEY")
azure_openai_endpoint = os.environ.get("OPEN_AI_ENDPOINT")
azure_openai_api_version = "2023-03-15-preview"
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview" # 최신 preview 버전 2023-03-15-preview
openai.api_base = os.environ.get("OPEN_AI_ENDPOINT")
openai.api_key = os.environ.get("OPEN_AI_KEY")
prompt_prefix = """<|im_start|>system
Find the contents of the query in Source. If you found it, and if you couldn't find it, say you '자료를 찾지 못하였습니다.
All answers are in Korean.
Query:
{query}
Sources:
{sources}
<|im_end|>
"""
follow_up_questions_prompt_content = """Generate three very brief follow-up questions that the user would likely ask next about their healthcare plan and employee handbook.
Use double angle brackets to reference the questions, e.g. <<Are there exclusions for prescriptions?>>.
Try not to repeat questions that have already been asked.
Only generate questions and do not generate any text before or after the questions, such as 'Next Questions'"""
def nonewlines(self, text: str) -> str:
return text.replace("\n", " ").replace("\r", " ")
async def get_index(self, index_name):
"""cognitive Search Get Index"""
search_index_client = SearchIndexClient(self.azure_search_endpoint, self.cognitive_search_credential)
index = await search_index_client.get_index(index_name)
await search_index_client.close()
return index
async def get_indexer(self, indexer_name):
"""cognitive Search Get Indexer"""
search_indexer_client = SearchIndexerClient(self.azure_search_endpoint, self.cognitive_search_credential)
indexer = await search_indexer_client.get_indexer(indexer_name)
await search_indexer_client.close()
return indexer
async def get_index_list(self):
"""cognitive Search Get Index List"""
search_index_client = SearchIndexClient(self.azure_search_endpoint, self.cognitive_search_credential)
index_list = []
async for index in search_index_client.list_indexes():
index_list.append({"index_name": index.name})
await search_index_client.close()
return index_list
async def get_indexer_list(self):
"""cognitive Search Get Indexer List"""
search_indexer_client = SearchIndexerClient(self.azure_search_endpoint, self.cognitive_search_credential)
indexer_list = []
for indexer in await search_indexer_client.get_indexers():
indexer_list.append({"indexer_name": indexer.name, "target_index_name": indexer.target_index_name})
await search_indexer_client.close()
return indexer_list
async def cognitive_search_run_indexer(self, index_name):
"""cognitive_search_run_indexer"""
search_indexer_client = SearchIndexerClient(self.azure_search_endpoint, self.cognitive_search_credential)
# indexer = await search_indexer_client.get_indexer(indexer_name)
await search_indexer_client.run_indexer(index_name)
await search_indexer_client.close()
async def cognitive_search_get_indexer_status(self, indexer_name):
"""cognitive_search_get_indexer_status"""
search_indexer_client = SearchIndexerClient(self.azure_search_endpoint, self.cognitive_search_credential)
# indexer = await search_indexer_client.get_indexer(indexer_name)
result = await search_indexer_client.get_indexer_status(indexer_name)
await search_indexer_client.close()
return result
async def query_openai(self, query, messages):
"""Query Open AI
Args:
query (str): the user query
messages (list): Messages
Returns:
dict: messages & answer
"""
if len(messages) == 0:
messages = [
{"role": "system", "content": "Assistant is a large language model trained by OpenAI."},
{"role": "user", "content": query},
]
else:
messages.append({"role": "user", "content": query})
response = openai.ChatCompletion.create(engine=self.chatgpt_deployment, messages=messages)
messages.append(response["choices"][0]["message"])
result = {"messages": messages, "answer": response["choices"][0]["message"]["content"]}
return result
async def execute_openai(self, session, question, index_name, vector_store_name):
"""Excute OpenAI"""
return_dict = {"question": question, "answer": "자료를 찾지 못하였습니다.", "reference_file": []}
# Save the request log
start = time.time()
chat_request_history = ChatRequestHistory(selected_index=index_name, query=question, user_id=22, status=ChatRequestHistory.Statues.running)
session.add(chat_request_history)
session.commit()
# Call Cognitive API
url = self.azure_search_endpoint + "/indexes/" + index_name + "/docs"
params = {
"api-version": self.azure_search_api_version,
"search": question,
"queryLanguage": "en-US",
"queryType": "semantic",
"semanticConfiguration": "semantic-config",
"select": "*",
"$count": "true",
"speller": "lexicon",
"$top": self.search_top,
"answers": "extractive|count-3",
"captions": "extractive|highlight-false",
}
resp = requests.get(url, params=params, headers=self.headers)
search_results = resp.json()  # result payload
if resp.status_code != 200:
chat_request_history.answer = return_dict["answer"]
chat_request_history.status = ChatRequestHistory.Statues.fail
chat_request_history.response_at = CommonUtils.get_kst_now()
chat_request_history.running_time = CommonUtils.get_running_time(start, time.time())
session.commit()
raise APIException(resp.status_code, "Cognitive Search API 실패", error=resp.json())
if search_results["@odata.count"] == 0:
chat_request_history.answer = return_dict["answer"]
chat_request_history.status = ChatRequestHistory.Statues.success
chat_request_history.response_at = CommonUtils.get_kst_now()
chat_request_history.running_time = CommonUtils.get_running_time(start, time.time())
session.commit()
return return_dict
else:
file_content = OrderedDict()
for result in search_results["value"]:
if result["@search.rerankerScore"] > 0.03: # Semantic Search 최대 점수 4점
file_content[result["metadata_storage_path"]] = {
"chunks": result["pages"][:1],
"caption": result["@search.captions"][0]["text"],
"score": result["@search.rerankerScore"],
"file_name": result["metadata_storage_name"],
}
# Connect to the Azure OpenAI service
docs = []
for key, value in file_content.items():
for page in value["chunks"]:
docs.append(Document(page_content=page, metadata={"source": value["file_name"]}))
if len(docs) == 0:
chat_request_history.answer = return_dict["answer"]
chat_request_history.status = ChatRequestHistory.Statues.success
chat_request_history.response_at = CommonUtils.get_kst_now()
chat_request_history.running_time = CommonUtils.get_running_time(start, time.time())
session.commit()
return return_dict
# Create the embedding model
# If chunk_size is set to anything other than 1 below, the next step raises an error.
embeddings = OpenAIEmbeddings(
model="text-embedding-ada-002", chunk_size=1, openai_api_key=self.azure_openai_key
)  # caution: required when using Azure OpenAI embeddings
# Create the vector store
vector_store = FAISS.from_documents(docs, embeddings)
# if vector_store_name == 'Chroma':
# persist_directory = "db"
# vector_store = Chroma.from_documents(docs, embeddings)
# vector_store = Chroma.from_documents(documents=docs, embedding=embeddings, persist_directory=persist_directory)
# Connect LangChain 🦜 with Azure GPT 🤖
llm = AzureChatOpenAI(
deployment_name=self.chatgpt_deployment,
openai_api_key=self.azure_openai_key,
openai_api_base=self.azure_openai_endpoint,
openai_api_version=self.azure_openai_api_version,
temperature=0.0,
max_tokens=1000,
)
# https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html
qa = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm,
# chain_type='stuff',
chain_type="map_reduce",
retriever=vector_store.as_retriever(),
return_source_documents=True,
)
prompt_prefix = f"""
Find the contents of the query in Source. If you found it, and if you couldn't find it, say you '자료를 찾지 못하였습니다.
Please answer all answers in Korean with honorific like '~입니다'
question: {question}
"""
result = qa({"question": prompt_prefix})
print("질문 :", question)
print("답변 :", result["answer"])
print("📄 참고 자료 :", result["sources"].replace(",", "\n"))
return_dict["answer"] = result["answer"].replace("\n", "")
return_dict["reference_file"] = result["sources"].split(",")
# return_dict['reference_file_link'] = result["sources"]
chat_request_history.answer = result["answer"]
chat_request_history.status = ChatRequestHistory.Statues.success
chat_request_history.response_at = CommonUtils.get_kst_now()
chat_request_history.running_time = CommonUtils.get_running_time(start, time.time())
chat_request_history.reference_file = result["sources"].replace(",", "\n")
session.commit()
return return_dict
async def execute_openai_v2(self, session, question, index_name, vector_store_name):
"""Excute OpenAI"""
return_dict = {"question": question, "answer": "자료를 찾지 못하였습니다.", "reference_file": []}
# Save the request log
start = time.time()
chat_request_history = ChatRequestHistory(selected_index=index_name, query=question, user_id=22, status=ChatRequestHistory.Statues.running)
session.add(chat_request_history)
session.commit()
search_client = SearchClient(endpoint=self.azure_search_endpoint, index_name=index_name, credential=self.cognitive_search_credential)
# Step 1. Query Cognitive Search
response = search_client.search(
question,
# filter=filter,
query_type=QueryType.SEMANTIC,
query_language="en-us",
query_speller="lexicon",
semantic_configuration_name="semantic-config",
top=self.search_top,
query_caption="extractive|highlight-false",
)
# Step2. Parsing
reference_files = [doc[self.sourcepage_field] for doc in response]
results = [doc[self.sourcepage_field] + ": " + self.nonewlines(doc[self.content_field][0]) for doc in response]
content = "\n".join(results)
# Step3. Prompt Engineering
# follow_up_questions_prompt = self.follow_up_questions_prompt_content
prompt = self.prompt_prefix.format(sources=content, query=question)
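# The prompt_prefix above wraps the query and sources in ChatML-style <|im_start|>/<|im_end|> markers,
# which (as this code assumes) older gpt-35-turbo deployments expect when called through the legacy
# Completions endpoint; the same markers are passed as stop sequences in the call below.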
completion = openai.Completion.create(
engine=self.chatgpt_deployment,
prompt=prompt,
temperature=0.7,
max_tokens=1024,
n=1,
stop=["<|im_end|>", "<|im_start|>"],
)
return_dict["answer"] = completion.choices[0].text
return_dict["reference_file"] = reference_files
return return_dict
# return {
# "data_points": results,
# "answer": completion.choices[0].text,
# "thoughts": f"Searched for:<br>{question}<br><br>Prompt:<br>" + prompt.replace("\n", "<br>"),
# }
| [
"Assistant is a large language model trained by OpenAI.",
"\n Find the contents of the query in Source. If you found it, and if you couldn't find it, say you '자료를 찾지 못하였습니다.\n Please answer all answers in Korean with honorific like '~입니다'\n question: PLACEHOLDER\n ",
"Generate three very brief follow-up questions that the user would likely ask next about their healthcare plan and employee handbook. \n Use double angle brackets to reference the questions, e.g. <<Are there exclusions for prescriptions?>>.\n Try not to repeat questions that have already been asked.\n Only generate questions and do not generate any text before or after the questions, such as 'Next Questions'",
"<|im_start|>system\nFind the contents of the query in Source. If you found it, and if you couldn't find it, say you '자료를 찾지 못하였습니다.\nAll answers are in Korean.\nQuery:\n{query}\nSources:\n{sources}\n<|im_end|>\n"
] |
2024-01-10 | fauzaanu/AGF | agf~ArticleGenerator.py | import os
from pprint import pprint
import openai
from dotenv import load_dotenv
class ArticleGenerator:
def __init__(self,text,api_key,filename=None):
self.original_text = text
self.filename = filename
self.openai_key = api_key
def generate_prompt(self, prompt, system, temperature=2.0, model="gpt-3.5-turbo"):
"""
Generate a prompt using the OpenAI API.
"""
# Set the OpenAI API key
openai.api_key = self.openai_key
response = openai.ChatCompletion.create(
model=model,
messages=[
{"role": "system", "content": f"{system}"},
{"role": "user", "content": f"{prompt}"},
],
temperature=temperature,
)
# Return the generated output
response_dict = response.to_dict()
generated_output = response_dict["choices"][0]["message"]["content"]
return generated_output
def text_check(self,text):
if text is None:
return self.original_text
else:
return text
def automate(self) -> str:
"""
This method automates the whole process of re-writing an article, running every step in sequence.
Do not use this method if you need modularity; if you want to skip a step (for example, maintain_clarity), call the individual methods instead (see the usage sketch after this method).
:return: the final re-written article text
:rtype: str
"""
text = self.original_text
pprint("Progress: Summarizing (1/13)")
summary = self.summarize(text)
pprint("Progress: Extracting keypoints (2/13)")
keypoints = self.keypoints(summary)
pprint("Progress: Paraphrasing (3/13)")
paraphrase = self.paraphrase(keypoints)
pprint("Progress: Eliminating redundancy (4/13)")
eliminate_redundancy = self.eliminate_redundancy(paraphrase)
pprint("Progress: Maintaining clarity (5/13)")
maintain_clarity = self.maintain_clarity(eliminate_redundancy)
pprint("Progress: Using own words (6/13)")
use_own_words = self.use_own_words(maintain_clarity)
pprint("Progress: Shortening lengthy explainations (7/13)")
shorten_lenghty_explainations = self.shorten_lenghty_explainations(use_own_words)
pprint("Progress: Organizing text by importance (8/13)")
organize_text_by_importance = self.organize_text_by_importance(shorten_lenghty_explainations)
pprint("Progress: Expanding (9/13)")
expand = self.expand(organize_text_by_importance)
pprint("Progress: Maintaining coherence (10/13)")
maintain_coherence = self.maintain_coherence(expand)
pprint("Progress: Adding transitions (11/13)")
transitions = self.transitions(maintain_coherence)
pprint("Progress: Adding conclusion (12/13)")
conclusion = self.conclusion(transitions)
pprint("Progress: Proofreading and finalizing (13/13)")
proofread_and_finalize = self.proofread_and_finalize(conclusion)
pprint("Progress: Reviewing for consistency (14/13)")
review_for_consistency = self.review_for_consistency(proofread_and_finalize)
pprint("Progress: Done - Writing to file")
if self.filename is not None:
return self.save(review_for_consistency)
else:
return review_for_consistency
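# Usage sketch for the modular path (values are illustrative, not part of the original file):
#   ag = ArticleGenerator(text, api_key)
#   draft = ag.paraphrase(ag.keypoints(ag.summarize()))
#   final = ag.proofread_and_finalize(draft)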
def summarize(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in summarization given a text"
prompt = "Summarize the following text to re-write it as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def keypoints(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in figuring out the keypoints given a text"
prompt = "Extract the keypoints from the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def paraphrase(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in paraphrasing given a text"
prompt = "Paraphrase the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def eliminate_redundancy(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in eliminating redundancy given a text"
prompt = "Eliminate the redundancy in the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def maintain_clarity(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in maintaining clarity given a text"
prompt = "Maintain the clarity in the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def use_own_words(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in using your own words given a text"
prompt = "Use your own words to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def shorten_lenghty_explainations(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in shortening lengthy explainations given a text"
prompt = "Shorten the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def organize_text_by_importance(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in organizing text by importance given a text"
prompt = "Organize the following text by importance to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def expand(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in expanding a text to form an article"
prompt = "Expand the following text to re-write it as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def maintain_coherence(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in maintaining coherence given a text"
prompt = "Maintain the coherence in the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def transitions(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in transitions given a text"
prompt = "Add transitions to the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def conclusion(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in conclusion given a text"
prompt = "Add a conclusion to the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def proofread_and_finalize(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in proofreading and finalizing given a text"
prompt = "Proofread and finalize the following text to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def review_for_consistency(self,text=None)-> str:
text = self.text_check(text)
system = "You are an expert in reviewing for consistency given a text"
prompt = "Review the following text for consistency to re-write the following text as an article:" + text
return self.generate_prompt(prompt, system, temperature=0.5)
def save(self,text):
with open(self.filename, "w") as f:
f.write(text)
print(f"Saved to {self.filename}")
return text
if __name__ == '__main__':
with open("../article.txt") as f:
text = f.read()
load_dotenv()
openai_key = os.environ.get("OPENAI_API_KEY")
article = ArticleGenerator(text, openai_key, "../re_written_article.txt")
result = article.automate()
print(result)
| [
"Use your own words to re-write the following text as an article:PLACEHOLDER",
"Review the following text for consistency to re-write the following text as an article:PLACEHOLDER",
"Maintain the clarity in the following text to re-write the following text as an article:PLACEHOLDER",
"Expand the following text to re-write it as an article:PLACEHOLDER",
"Organize the following text by importance to re-write the following text as an article:PLACEHOLDER",
"Proofread and finalize the following text to re-write the following text as an article:PLACEHOLDER",
"Summarize the following text to re-write it as an article:PLACEHOLDER",
"Paraphrase the following text to re-write the following text as an article:PLACEHOLDER",
"Maintain the coherence in the following text to re-write the following text as an article:PLACEHOLDER",
"Shorten the following text to re-write the following text as an article:PLACEHOLDER",
"Extract the keypoints from the following text to re-write the following text as an article:PLACEHOLDER",
"Add a conclusion to the following text to re-write the following text as an article:PLACEHOLDER",
"Eliminate the redundancy in the following text to re-write the following text as an article:PLACEHOLDER",
"Add transitions to the following text to re-write the following text as an article:PLACEHOLDER"
] |
2024-01-10 | jimmyliao/azure-openai-workshop | fundamentals~langchain~AzureOpenAIUtil~SqlServer.py | import urllib, os
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import AzureOpenAI
from langchain.agents import AgentExecutor
class SqlServer:
def __init__(self, Server, Database, Username, Password, port=1433, odbc_ver=18, topK=10) -> None:
odbc_conn = 'Driver={ODBC Driver '+ str(odbc_ver) + ' for SQL Server};Server=tcp:' + \
Server + f',{port};Database={Database};Uid={Username};Pwd={Password};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
params = urllib.parse.quote_plus(odbc_conn)
self.conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
db = SQLDatabase.from_uri(self.conn_str)
self.toolkit = SQLDatabaseToolkit(db=db)
self.SQL_PREFIX = """You are an agent designed to interact with a Microsoft Azure SQL database.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results using SELECT TOP in SQL Server syntax.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the few relevant columns given the question.
You have access to tools for interacting with the database.
Only use the below tools. Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.
"""
deploy_name = os.getenv('DEPLOYMENT_NAME')
# print(deploy_name)
self.agent_executor = create_sql_agent(
llm=AzureOpenAI(temperature=0, deployment_name='text-davinci-003'),
toolkit=self.toolkit,
verbose=True,
prefix=self.SQL_PREFIX,
topK = topK
)
def run(self, text: str):
return self.agent_executor.run(text)
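# A minimal usage sketch (connection values are hypothetical; assumes the ODBC driver and a
# 'text-davinci-003' Azure OpenAI deployment are available):
#   sql = SqlServer(Server='myserver.database.windows.net', Database='mydb',
#                   Username='user', Password='***')
#   print(sql.run('How many rows does the largest table have?'))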
| [] |
2024-01-10 | crazydonkey200/neural-symbolic-machines | nsm~graph_factory.py | "Implements several tensorflow graphs and encapsulates them as Graph objects."
import abc
import six
import collections
import os
import pprint
import numpy as np
import tensorflow as tf
import tf_utils
import data_utils
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
RNN_CELL_DICT = dict(
rnn=tf.contrib.rnn.RNNCell,
lstm=tf.contrib.rnn.BasicLSTMCell,
layernorm_lstm=tf.contrib.rnn.LayerNormBasicLSTMCell,
gru=tf.contrib.rnn.GRUCell)
OPTIMIZER_DICT = dict(
sgd=tf.train.GradientDescentOptimizer,
adam=tf.train.AdamOptimizer,
adagrad=tf.train.AdagradOptimizer,
rmsprop=tf.train.RMSPropOptimizer)
ACTIVATION_DICT = dict(
relu=tf.nn.relu, sigmoid=tf.nn.sigmoid,
tanh=tf.nn.tanh)
# Bind a variable length tensor with its sequence_length.
SeqTensor = collections.namedtuple('SeqTensor', ['tensor', 'sequence_length'])
def with_graph_variable_scope(func):
def func_wrapper(*args, **kwargs):
self = args[0]
with self._graph.as_default():
pid = os.getpid()
container_name = 'worker{}'.format(pid)
# print(container_name)
with self._graph.container(container_name):
with tf.variable_scope(self.vs):
return func(*args, **kwargs)
return func_wrapper
class Graph(object):
"""A TensorFlow graph with simpler interface to interact with it.
The neural network architecture (basically all the
tensorflow code) should live within this class. A new
architecture (for example, Seq2seq) should implement a new
subclass (Seq2seqGraph).
"""
def __init__(self, name, meta_graph_fn=''):
self.node_dict = {'summaries': []}
self._graph = tf.Graph()
self.vs_name = name
self.use_gpu = False
with tf.variable_scope(name) as vs:
self.vs = vs
@with_graph_variable_scope
def launch(self, init_model_path=''):
"Launch and initialize the graph."
if self.use_gpu:
n_gpu = 1
else:
n_gpu = 0
session_config = tf.ConfigProto(
device_count={'GPU' : n_gpu},
allow_soft_placement=False,
log_device_placement=False,
)
if n_gpu:
session_config.gpu_options.allow_growth = True
tf.logging.info('number of gpu used {}'.format(n_gpu))
self.session = tf.Session(
graph=self._graph, config=session_config)
self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=3)
self.best_saver = tf.train.Saver(tf.global_variables(), max_to_keep=1)
if init_model_path:
self._graph.finalize()
self.saver.restore(self.session, init_model_path)
else:
init = tf.global_variables_initializer()
self._graph.finalize()
self.session.run(init)
return self.session
def restore(self, model_path):
self.saver.restore(self.session, model_path)
def save(self, model_path, global_step):
return self.saver.save(self.session, model_path, global_step)
def save_best(self, model_path, global_step):
return self.best_saver.save(self.session, model_path, global_step)
def run(self, fetch_list, feed_dict, writer=None):
"""Main interface to interact with the tensorflow graph.
Args:
fetch_list: a list of names (strings) indicating
the name of result operations.
feed_dict: a dictionary with the names of the nodes as keys
and the corresponding values that are fed as values.
writer: a tensorflow summary writer
Returns:
outputs: a dictionary with the names in the fetch_list as
keys, and the outputs from the executing graph as values.
"""
fetch_dict = dict([(name, self.node_dict[name])
for name in fetch_list if name in self.node_dict])
if writer is not None:
fetch_dict['summaries'] = self.node_dict['summaries']
fetch_dict['global_step'] = self.node_dict['global_step']
outputs = self.session.run(
fetch_dict, map_dict(self.node_dict, feed_dict))
if writer is not None:
writer.add_summary(
outputs['summaries'], outputs['global_step'])
writer.flush()
return outputs
@with_graph_variable_scope
def add_train(self, aux_loss_list=None, optimizer='adam',
learning_rate=0.01, max_grad_norm=5.0,
decay_after_n_steps=1000, decay_every_n_steps=1000,
lr_decay_factor=1.0, avg_loss_by_n=False, debug=False,
l2_coeff=0.0,
adam_beta1=0.9,
name='Training'):
"Construct part of the graph that controlls training (SGD optimization)."
self.node_dict['max_batch_size'] = tf.placeholder(tf.int32, None)
with tf.variable_scope(name):
all_summaries = []
batch_size = tf.cast(self.node_dict['max_batch_size'],
dtype=tf.float32)
loss = self.node_dict['loss'] / batch_size
all_summaries.append(
tf.summary.scalar(self.vs_name + '/' + 'loss', loss))
total_loss = loss
if aux_loss_list is not None:
for loss_name, w in aux_loss_list:
if w > 0.0:
aux_loss = self.node_dict[loss_name]
total_loss += aux_loss * w / batch_size
aux_loss_summary = tf.summary.scalar(
self.vs_name + '/' + loss_name, aux_loss)
all_summaries.append(aux_loss_summary)
if debug:
total_loss= tf.Print(
total_loss, [self.node_dict['sequence_loss']], message='seq_loss:', summarize=10000)
total_loss= tf.Print(
total_loss, [self.node_dict['weights'].tensor], message='weights:', summarize=10000)
total_loss= tf.Print(
total_loss, [self.node_dict['targets'].tensor], message='targets:', summarize=10000)
total_loss= tf.Print(
total_loss, [self.node_dict['probs'].tensor], message='probs:', summarize=10000)
total_loss= tf.Print(
total_loss, [self.node_dict['step_loss'].tensor],
message='step_loss:', summarize=10000)
total_loss= tf.Print(
total_loss, [self.node_dict['logits'].tensor], message='logits:', summarize=10000)
total_loss_summary = tf.summary.scalar(
self.vs_name + '/' + 'total_loss', total_loss)
all_summaries.append(total_loss_summary)
lr = tf.Variable(
float(learning_rate), trainable=False)
new_lr = tf.placeholder(dtype=tf.float32, shape=None, name='new_lr')
update_lr = lr.assign(new_lr)
global_step = tf.Variable(0, trainable=False)
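# Exponential learning-rate decay: once global_step passes decay_after_n_steps, the rate is
# multiplied by lr_decay_factor every decay_every_n_steps steps (fractional exponents give a smooth decay).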
decay_step = tf.maximum(
0, global_step - decay_after_n_steps)
decay_exponent = (tf.cast(decay_step, tf.float32) /
tf.cast(decay_every_n_steps, tf.float32))
decay = lr_decay_factor ** decay_exponent
decayed_lr = lr * decay
lr_summary = tf.summary.scalar(self.vs_name + '/' + 'learning_rate', decayed_lr)
all_summaries.append(lr_summary)
params = tf.get_collection(
tf.GraphKeys.TRAINABLE_VARIABLES, scope=self.vs_name)
n_params = 0
tf.logging.info('trainable parameters:')
for tv in params:
n_tv_params = np.product(tv.get_shape().as_list())
n_params += n_tv_params
tf.logging.info('{}: {}'.format(tv.name, n_tv_params))
if 'weights' in tv.name or 'kernel' in tv.name:
total_loss += tf.reduce_sum(tf.nn.l2_loss(tv)) * l2_coeff
tf.logging.info('total number of trainable parameters {}'.format(n_params))
if optimizer == 'adam':
tf.logging.info('adam beta1: {}'.format(adam_beta1))
opt = OPTIMIZER_DICT[optimizer](decayed_lr, beta1=adam_beta1)
else:
opt = OPTIMIZER_DICT[optimizer](decayed_lr)
gradients = tf.gradients(total_loss, params)
clipped_gradients, grad_norm = tf.clip_by_global_norm(
gradients, max_grad_norm)
update = opt.apply_gradients(
zip(clipped_gradients, params), global_step=global_step)
grad_norm_summary = tf.summary.scalar(
self.vs_name + '/' + 'grad_norm', grad_norm)
all_summaries.append(grad_norm_summary)
if debug:
_, clipped_grad_norm = tf.clip_by_global_norm(
clipped_gradients, max_grad_norm)
clipped_grad_norm_summary = tf.summary.scalar(
self.vs_name + '/' + 'clipped_grad_norm', clipped_grad_norm)
n_summary = tf.summary.scalar(
self.vs_name + '/' + 'n', self.node_dict['n'])
seq_loss_summary = tf.summary.histogram(
self.vs_name + '/' + 'seq_loss', self.node_dict['sequence_loss'])
step_loss_summary = tf.summary.histogram(
self.vs_name + '/' + 'step_loss', self.node_dict['step_loss'].tensor)
weights_summary = tf.summary.histogram(
self.vs_name + '/' + 'weights', self.node_dict['weights'].tensor)
all_summaries += [
clipped_grad_norm_summary, n_summary,
step_loss_summary, seq_loss_summary, weights_summary]
batch_size_summary = tf.summary.scalar(
self.vs_name + '/' + 'batch_size', self.node_dict['batch_size'])
all_summaries.append(batch_size_summary)
if 'ent_reg' in self.node_dict:
if avg_loss_by_n:
ent_reg = (self.node_dict['ent_reg'] /
tf.cast(self.node_dict['n'],
dtype=tf.float32))
else:
ent_reg = self.node_dict['ent_reg'] / batch_size
ent_reg_summary = tf.summary.scalar(
self.vs_name + '/' + 'policy_entropy',
(self.node_dict['ent_reg'] /
tf.cast(self.node_dict['n'], tf.float32)))
ent_reg_ppl_summary = tf.summary.scalar(
self.vs_name + '/' + 'policy_entropy_ppl',
tf.exp(self.node_dict['ent_reg'] /
tf.cast(self.node_dict['n'], tf.float32)))
all_summaries.append(ent_reg_summary)
all_summaries.append(ent_reg_ppl_summary)
for s in self.node_dict['summaries']:
all_summaries.append(s)
merged = tf.summary.merge(inputs=all_summaries)
self.node_dict.update(
train=update, global_step=global_step,
summaries=merged,
update_lr=update_lr, new_lr=new_lr)
@property
def final_state(self):
return 'final_state'
@property
def outputs(self):
return 'outputs'
@property
def initial_state(self):
return 'initial_state'
@property
def en_outputs(self):
return 'en_outputs'
@property
def n_examples(self):
return 'n_examples'
@property
def prediction_probs(self):
return 'probs'
@property
def samples(self):
return 'samples'
@property
def predictions(self):
return 'predictions'
@property
def en_initial_state(self):
return 'en_initial_state'
def add_outputs(self, output_type, output_config):
"Create part of the graph that compute final outputs from the RNN output."
if output_type == 'softmax':
self.add_softmax_outputs(**output_config)
elif output_type == 'regression':
self.add_regression_outputs(**output_config)
else:
raise NotImplementedError('Output type {} not supported!'.format(
output_type))
@with_graph_variable_scope
def add_softmax_outputs(self, output_vocab_size=None, use_logits=None, name='Softmax'):
"Add softmax layer on top of RNN outputs."
with tf.variable_scope(name):
seq_targets = create_seq_inputs(
shape=tf.TensorShape([None, None]), dtype=tf.int32)
seq_weights = create_seq_inputs(
shape=tf.TensorShape([None, None]), dtype=tf.float32)
if use_logits:
# Feeding logits instead of outputs (thus no linear transformation needed).
logits, probs, predictions, samples, temperature = create_softmax_from_logits(
self.node_dict['outputs'].tensor)
sequence_length = self.node_dict['outputs'].sequence_length
else:
logits, probs, predictions, samples, temperature = create_softmax(
self.node_dict['outputs'].tensor, output_vocab_size=output_vocab_size)
sequence_length = self.node_dict['outputs'].sequence_length
# From openai baselines to avoid numerical issue.
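      # Entropy is computed as sum_i p_i * (log z0 - a_i), with a = logits - max(logits)
      # and z0 = sum(exp(a)); subtracting the max keeps exp() from overflowing.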
a0 = logits - tf.reduce_max(logits, axis=-1, keepdims=True)
ea0 = tf.exp(a0)
z0 = tf.reduce_sum(ea0, axis=-1, keepdims=True)
p0 = ea0 / z0
clipped_entropy = p0 * (tf.log(z0) - a0)
seq_entropy = SeqTensor(
tf.reduce_sum(clipped_entropy, axis=-1) *
tf.sequence_mask(
sequence_length, dtype=tf.float32),
sequence_length)
policy_entropy = tf.reduce_sum(
tf.reduce_sum(clipped_entropy, axis=-1) *
tf.sequence_mask(
sequence_length, dtype=tf.float32))
# Compute sequence cross entropy loss.
seq_logits, seq_probs, seq_predictions, seq_samples = [
SeqTensor(x, sequence_length)
for x in (logits, probs, predictions, samples)]
xent_loss, sequence_loss, step_loss, sequence_probs, step_logprobs = create_seq_xent_loss(
seq_logits.tensor,
seq_targets.tensor,
seq_weights.tensor,
sequence_length)
seq_step_loss = SeqTensor(step_loss, sequence_length)
# Add new nodes to the node_dict.
self.node_dict.update(
targets=seq_targets, weights=seq_weights,
temperature=temperature,
sequence_loss=sequence_loss,
step_loss=seq_step_loss,
loss=xent_loss,
ent_reg=policy_entropy,
seq_entropy=seq_entropy,
probs=seq_probs,
sequence_probs=sequence_probs,
step_logprobs=step_logprobs,
samples=seq_samples,
predictions=seq_predictions, logits=seq_logits)
@with_graph_variable_scope
def add_regression_outputs(self, hidden_sizes=(), activation='relu',
stop_gradient=False, name='Regression'):
"Add regression layer (linear transformation and MSE loss)."
with tf.variable_scope(name):
seq_targets = create_seq_inputs(
shape=tf.TensorShape([None, None]), dtype=tf.float32)
seq_weights = create_seq_inputs(
shape=tf.TensorShape([None, None]), dtype=tf.float32)
dropout = tf.placeholder_with_default(
0.0, shape=None, name='regression_dropout')
# Last dimension is the output, thus only one dimension.
output_tensor = self.node_dict['outputs'].tensor
if stop_gradient:
output_tensor = tf.stop_gradient(output_tensor)
if hidden_sizes:
h = create_multilayer_fnn(
output_tensor, dropout, list(hidden_sizes),
activation=activation)
else:
h = output_tensor
predictions = tf.layers.dense(inputs=h, units=1, name='regression_final')
# Turn shape from (batch_size, timesteps, 1) to (batch_size, timesteps).
predictions = tf.squeeze(predictions, axis=-1)
sequence_length = self.node_dict['outputs'].sequence_length
seq_predictions = SeqTensor(predictions, sequence_length)
mse_loss = create_seq_mse_loss(
seq_predictions.tensor, seq_targets.tensor, seq_weights.tensor, sequence_length)
self.node_dict.update(
targets=seq_targets, weights=seq_weights, regression_dropout=dropout,
loss=mse_loss, predictions=seq_predictions)
class SeqGraph(Graph):
"TensorFlow graph for RNN sequence model."
def __init__(self, graph_config, name='seq_graph'):
super(SeqGraph, self).__init__(name)
self.add_seq(**graph_config['core_config'])
self.add_outputs(graph_config['output_type'], graph_config['output_config'])
self.add_train(**graph_config['train_config'])
@with_graph_variable_scope
def add_seq(self, input_shape, input_vocab_size=None,
hidden_size=128, n_layers=2,
cell_type='lstm', bidirectional=False,
dropout=0.0, use_embeddings=True,
embedding_size=64, name='Sequence'):
with tf.variable_scope(name):
batch_size = tf.placeholder(
dtype=tf.int32, shape=(), name='batch_size')
if use_embeddings:
embeddings = tf.get_variable(
'embeddings', shape=(input_vocab_size, embedding_size),
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
else:
embeddings = None
(seq_inputs, initial_state, seq_outputs, final_state,
input_dropout, rnn_dropout, _) = create_seq_graph(
input_shape, batch_size=batch_size,
hidden_size=hidden_size, n_layers=n_layers,
cell_type=cell_type, bidirectional=bidirectional,
embeddings=embeddings)
n = tf.reduce_sum(seq_inputs.sequence_length)
self.node_dict.update(inputs=seq_inputs,
rnn_dropout=rnn_dropout,
input_dropout=input_dropout,
embeddings=embeddings,
batch_size=batch_size,
final_state=final_state,
outputs=seq_outputs, n=n,
initial_state=initial_state)
class Seq2seqGraph(Graph):
"""TensorFlow graph for seq2seq model.
A basic seq2seq model with attention. The model supports
all the common specifications for a seq2seq model such as
number of layers, whether to use bidirectional encoder,
attention type, etc.
"""
def __init__(self, graph_config, name='seq2seq_graph'):
super(Seq2seqGraph, self).__init__(name)
self.add_seq2seq(**graph_config['core_config'])
self.add_outputs(graph_config['output_type'], graph_config['output_config'])
self.add_train(**graph_config['train_config'])
@with_graph_variable_scope
def add_seq2seq(self, en_input_shape, input_shape,
use_attn=True,
attn_size=128, attn_vec_size=128,
en_input_vocab_size=None,
input_vocab_size=None,
en_hidden_size=128, en_n_layers=2,
hidden_size=128, n_layers=2,
cell_type='lstm',
en_bidirectional=False,
en_use_embeddings=True,
use_embeddings=True,
en_embedding_size=64,
embedding_size=64, name='Seq2seq'):
with tf.variable_scope(name) as scope:
batch_size = tf.placeholder(
dtype=tf.int32, shape=[], name='batch_size')
# Create encoder.
with tf.variable_scope('Encoder'):
if en_use_embeddings:
en_embeddings = tf.get_variable(
'embeddings', shape=(en_input_vocab_size, en_embedding_size),
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
else:
en_embeddings = None
(en_seq_inputs, en_initial_state, en_seq_outputs, en_final_state,
en_input_dropout, en_rnn_dropout, _) = create_seq_graph(
en_input_shape, batch_size=batch_size,
hidden_size=en_hidden_size, n_layers=en_n_layers,
cell_type=cell_type, bidirectional=en_bidirectional,
embeddings=en_embeddings, output_proj_size=en_hidden_size)
if use_attn:
attn_inputs = en_seq_outputs.tensor
else:
attn_inputs = None
if en_bidirectional:
en_final_state = en_final_state[0]
# Create decoder.
with tf.variable_scope('Decoder'):
if use_embeddings:
embeddings = tf.get_variable(
'embeddings', shape=(input_vocab_size, embedding_size),
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
else:
embeddings = None
(seq_inputs, initial_state, seq_outputs, final_state,
input_dropout, rnn_dropout, _) = create_seq_graph(
input_shape, batch_size=batch_size,
hidden_size=hidden_size, n_layers=n_layers,
cell_type=cell_type, bidirectional=False,
embeddings=embeddings,
attn_size=attn_size,
attn_vec_size=attn_vec_size,
attn_inputs=attn_inputs,
initial_state=en_final_state)
# Count number of steps.
n = tf.reduce_sum(seq_inputs.sequence_length)
self.node_dict.update(
en_inputs=en_seq_inputs,
en_rnn_dropout=en_rnn_dropout,
en_input_dropout=en_input_dropout,
en_outputs=en_seq_outputs,
en_initial_state=en_initial_state,
en_final_state=en_final_state,
inputs=seq_inputs,
rnn_dropout=rnn_dropout,
input_dropout=input_dropout,
outputs=seq_outputs,
batch_size=batch_size,
final_state=final_state,
initial_state=initial_state, n=n,
encoded_context=en_seq_outputs, context=en_seq_inputs,
en_embeddings=en_embeddings,
embeddings=embeddings)
if use_attn:
self.node_dict['attn_inputs'] = attn_inputs
class MemorySeq2seqGraph(Graph):
def __init__(self, graph_config, name='memory_seq2seq_graph'):
super(MemorySeq2seqGraph, self).__init__(name)
self.use_gpu = graph_config['use_gpu']
if self.use_gpu:
os.environ["CUDA_VISIBLE_DEVICES"] = graph_config['gpu_id']
else:
os.environ["CUDA_VISIBLE_DEVICES"] = ''
self.add_memory_seq2seq(**graph_config['core_config'])
self.add_outputs(graph_config['output_type'], graph_config['output_config'])
self.add_train(**graph_config['train_config'])
self.config = graph_config
@with_graph_variable_scope
def add_memory_seq2seq(
self, max_n_valid_indices=None,
n_mem=None,
n_builtin=None,
use_attn=True,
attn_size=128, attn_vec_size=128,
en_input_vocab_size=None,
input_vocab_size=None,
en_hidden_size=128, en_n_layers=2,
hidden_size=128, n_layers=2,
cell_type='lstm',
en_bidirectional=False,
en_use_embeddings=True,
en_embedding_size=4,
value_embedding_size=128,
en_pretrained_vocab_size=None,
en_pretrained_embedding_size=-1,
tie_en_embeddings=True,
add_lm_loss=False,
n_en_input_features=1,
n_de_output_features=1,
en_attn_on_constants=False,
name='MemorySeq2seq'):
"""Create seq2seq with key variable memory.
Seq2seq with key variable memory is used for semantic
parsing (generating programs from natural language
instructions/questions).
    A MemorySeq2seq model uses a memory cell in the decoder.
    There are 3 types of tokens in a program:
    1) constants that are provided before the program is
    generated (added before decoding, different for
    different examples); 2) variables that save the results
    of executing past expressions (added during decoding,
    different for different examples); 3) language
    primitives such as built-in functions and reserved
    tokens (for example, "(" and ")"), which are the same
    for different examples.
    There are two kinds of constants: 1) constants from the
    question, whose representation comes from the span where
    the constant is annotated; 2) constants from the context,
    whose representation comes from the constant value
    embeddings, for example, table columns.
So the decoder vocab is organized as
[primitives, constants, variables].
For a constant, its embedding is computed as sum of two
parts: 1) embedding of the span (from encoder) on which
the constant is annotated with, for example the span
"barack obama" in "who is barack obama's wife" or the
span "one" in "what is one plus one"; 2) embedding of
the constant, for example, the embedding of the entity
Obama or the embedding of the number one.
For a variable, its embedding is the decoder RNN output
at the step where the variable is created.
For a primitive, its embedding is initialized randomly
and tuned by SGD.
    Inspired by the code assistance (such as autocompletion)
    in modern IDEs, we also apply semantic and syntactic
    constraints on the decoder vocabulary so that at each
    step, only some of the tokens are valid. So the decoder
    has a dynamic vocabulary that changes across steps.
"""
input_shape = tf_utils.MemoryInputTuple(
tf.TensorShape([None, None]),
tf.TensorShape([None, None]),
tf.TensorShape([None, None, max_n_valid_indices]))
    input_dtype = tf_utils.MemoryInputTuple(
        tf.int32, tf.int32, tf.int32)
    en_input_shape = tf.TensorShape([None, None])
    constant_span_shape = tf.TensorShape([None, n_mem, 2])
    constant_value_embedding_shape = tf.TensorShape(
        [None, n_mem, value_embedding_size])
    builtin_de_embeddings_shape = tf.TensorShape([n_builtin, hidden_size])
with tf.variable_scope('ConstantInput'):
# constant_span_embedding encodes the information
# from the span where the constant is referred to,
# for example the span "obama" in "who is the wife
# of obama".
# constant_value_embedding encodes the information
# from the value of the constant, for example, the
# embedding of the entity Obama.
# constant_span: (B, n_mem, 2)
constant_spans_placeholder = tf.placeholder(tf.int32, constant_span_shape)
constant_spans = constant_spans_placeholder
n_constants_placeholder = tf.placeholder(tf.int32, [None, 1])
n_constants = tf.squeeze(n_constants_placeholder, [-1])
      # constant_span_masks: (B, n_mem, 1)
# 0.0 if the span is [-1, -1], else 1.0.
constant_span_masks = tf.cast(
tf.greater(
tf.reduce_sum(constant_spans, axis=2), 0), tf.float32)
constant_span_masks = tf.expand_dims(constant_span_masks, -1)
# constant_spans: (B, n_mem, 2, 1)
constant_spans = tf.maximum(constant_spans, 0)
constant_spans = tf.expand_dims(constant_spans, axis=-1)
if constant_value_embedding_shape is not None:
constant_value_embeddings_placeholder = tf.placeholder(
tf.float32, shape=constant_value_embedding_shape)
constant_value_embeddings = constant_value_embeddings_placeholder
constant_value_embeddings = tf.layers.dense(
constant_value_embeddings, hidden_size, use_bias=True)
constant_value_masks = tf.squeeze(1 - constant_span_masks, [-1])
if n_en_input_features > 0:
en_input_features_shape = tf.TensorShape([None, None, n_en_input_features])
else:
en_input_features_shape = None
with tf.variable_scope(name) as scope:
batch_size = tf.placeholder(
dtype=tf.int32, shape=[], name='batch_size')
with tf.variable_scope('Encoder'):
if en_use_embeddings:
if en_pretrained_embedding_size < 0:
en_embeddings = tf.get_variable(
'embeddings', shape=(en_input_vocab_size, en_embedding_size),
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
else:
en_embeddings = tf.get_variable(
'embeddings', shape=(
en_input_vocab_size - en_pretrained_vocab_size, en_embedding_size),
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
en_pretrained_embeddings = tf.get_variable(
'pretrained_embeddings', shape=(
en_pretrained_vocab_size, en_pretrained_embedding_size),
trainable=False,
initializer=tf.zeros_initializer())
en_pretrained_embeddings_placeholder = tf.placeholder(
tf.float32, [en_pretrained_vocab_size, en_pretrained_embedding_size])
en_pretrained_embeddings_init = en_pretrained_embeddings.assign(
en_pretrained_embeddings_placeholder)
en_pretrained_embeddings = tf.layers.dense(
inputs=en_pretrained_embeddings, units=en_embedding_size,
use_bias=True)
en_embeddings = tf.concat(
values=[en_embeddings, en_pretrained_embeddings], axis=0)
else:
en_embeddings = None
if en_attn_on_constants:
tf.logging.info('Using attention in encoder!!!')
(en_seq_inputs, en_initial_state, en_seq_outputs, en_final_state,
en_input_dropout, en_rnn_dropout, en_rnn_outputs) = create_seq_graph(
en_input_shape, batch_size=batch_size,
hidden_size=en_hidden_size, n_layers=en_n_layers,
cell_type=cell_type, bidirectional=en_bidirectional,
embeddings=en_embeddings, output_proj_size=en_hidden_size,
input_features_shape=en_input_features_shape,
attn_inputs=constant_value_embeddings,
attn_masks=constant_value_masks,
attn_size=attn_size, attn_vec_size=attn_vec_size)
else:
(en_seq_inputs, en_initial_state, en_seq_outputs, en_final_state,
en_input_dropout, en_rnn_dropout, en_rnn_outputs) = create_seq_graph(
en_input_shape, batch_size=batch_size,
hidden_size=en_hidden_size, n_layers=en_n_layers,
cell_type=cell_type, bidirectional=en_bidirectional,
embeddings=en_embeddings, output_proj_size=en_hidden_size,
input_features_shape=en_input_features_shape)
if n_en_input_features > 0:
en_seq_input_features = SeqTensor(
en_seq_inputs.tensor[1], tf.placeholder(tf.int32, [None]))
en_seq_inputs = SeqTensor(
en_seq_inputs.tensor[0], en_seq_inputs.sequence_length)
if add_lm_loss:
sequence_length = tf.maximum(en_seq_inputs.sequence_length - 1, 0)
en_n = tf.cast(tf.reduce_sum(sequence_length), tf.float32)
mask = tf.sequence_mask(sequence_length, dtype=tf.float32)
if en_bidirectional:
en_fw_outputs = en_rnn_outputs[0]
en_bw_outputs = en_rnn_outputs[1]
if tie_en_embeddings:
en_fw_logits = tf_utils.tensormul(
en_fw_outputs[:, :-1, :], tf.transpose(en_embeddings))
en_bw_logits = tf_utils.tensormul(
en_bw_outputs[:, 1:, :], tf.transpose(en_embeddings))
else:
# Use 0 to n-2 to compute logits.
en_fw_logits = tf.layers.dense(
en_fw_outputs[:, :-1, :], en_input_vocab_size, use_bias=True)
en_bw_logits = tf.layers.dense(
en_bw_outputs[:, 1:, :], en_input_vocab_size, use_bias=True)
# Use 1 to n-1 as targets.
en_fw_lm_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=en_seq_inputs.tensor[:, 1:], logits=en_fw_logits) * mask
en_bw_lm_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=en_seq_inputs.tensor[:, :-1], logits=en_bw_logits) * mask
en_lm_loss = tf.reduce_sum(en_fw_lm_loss + en_bw_lm_loss) / en_n
else:
en_fw_outputs = en_rnn_outputs
if tie_en_embeddings:
en_fw_logits = tf_utils.tensormul(
en_fw_outputs[:, :-1, :], tf.transpose(en_embeddings))
else:
en_fw_logits = tf.layers.dense(
en_fw_outputs[:, :-1, :], en_input_vocab_size, use_bias=True)
en_fw_lm_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=en_seq_inputs.tensor[:, 1:], logits=en_fw_logits) * mask
en_lm_step_loss = en_fw_lm_loss
en_lm_loss = tf.reduce_sum(en_lm_step_loss) / en_n
if use_attn:
attn_inputs = en_seq_outputs.tensor
attn_masks = tf.sequence_mask(
en_seq_outputs.sequence_length, dtype=tf.float32)
else:
attn_inputs = None
attn_masks = None
with tf.variable_scope('ConstantEncoder'):
batch_ind = tf.range(batch_size)
# batch_ind: (B, 1, 1, 1)
for i in range(3):
batch_ind = tf.expand_dims(batch_ind, axis=-1)
# batch_ind: (B, n_mem, 2, 1)
batch_ind = tf.tile(batch_ind, [1, n_mem, 2, 1])
# constant_span: (B, n_mem, 2, 2)
constant_spans = tf.concat([batch_ind, constant_spans], axis=-1)
# constant_span_embedding: (B, n_mem, 2, en_output_size)
constant_span_embeddings = tf.gather_nd(en_seq_outputs.tensor, constant_spans)
# constant_embedding: (B, n_mem, en_output_size)
constant_embeddings = tf.reduce_mean(constant_span_embeddings, axis=2)
constant_embeddings = constant_embeddings * constant_span_masks
if constant_value_embedding_shape is not None:
constant_embeddings = constant_embeddings + constant_value_embeddings
# mask out the bad constants.
# constant mask: (B, n_mem)
constant_masks = tf.sequence_mask(
n_constants, maxlen=n_mem, dtype=tf.float32)
# constant mask: (B, n_mem, 1)
constant_masks = tf.expand_dims(constant_masks, -1)
constant_masks = tf.tile(constant_masks, [1, 1, hidden_size])
# constant_embeddings: (B, n_mem, hidden_size)
constant_embeddings = constant_embeddings * constant_masks
# builtin_de_embeddings: (n_builtin, embed_size)
builtin_de_embeddings = tf.get_variable(
'builtin_de_embeddings', builtin_de_embeddings_shape,
initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
# builtin_de_embeddings: (1, n_builtin, embed_size)
builtin_de_embeddings = tf.expand_dims(builtin_de_embeddings, axis=0)
# builtin_de_embeddings: (B, n_builtin, embed_size)
builtin_de_embeddings = tf.tile(
builtin_de_embeddings, [batch_size] + [1] * 2)
# initial_memory: (B, n_builtin + n_mem, embed_size)
initial_memory = tf.concat(
[builtin_de_embeddings, constant_embeddings], axis=1)
# concatenate static and constant embeddings to form
# new memory to create initial states.
if en_bidirectional:
initial_state = en_final_state[0]
else:
initial_state = en_final_state
with tf.variable_scope('Decoder'):
initial_state = tf_utils.MemoryStateTuple(
initial_memory, initial_state)
seq_inputs = create_seq_inputs(shape=input_shape, dtype=input_dtype)
inputs = seq_inputs.tensor
sequence_length = seq_inputs.sequence_length
rnn_dropout = tf.placeholder_with_default(
0.0, shape=None, name='rnn_dropout')
# Create multilayer attention cell then wrap with memory cell.
cell = multilayer_dropout_cell(
cell_fn=RNN_CELL_DICT[cell_type], hidden_size=hidden_size,
n_layers=n_layers, dropout=rnn_dropout)
if attn_inputs is not None:
cell = tf_utils.SeqAttentionCellWrapper(
cell, attn_inputs=attn_inputs,
attn_size=attn_size, attn_vec_size=attn_vec_size,
output_size=hidden_size, attn_masks=attn_masks)
mem_size = builtin_de_embeddings_shape[0] + constant_span_shape[1]
embed_size = hidden_size
cell = tf_utils.MemoryWrapper(
cell, mem_size, embed_size, max_n_valid_indices)
flat_inputs = data_utils.flatten(inputs)
flat_inputs = [
tf.expand_dims(in_, -1) for in_ in flat_inputs[:2]] + flat_inputs[2:]
inputs = data_utils.pack_sequence_as(inputs, flat_inputs)
outputs, final_state = tf.nn.dynamic_rnn(
cell, inputs, sequence_length=sequence_length,
initial_state=initial_state, dtype=tf.float32)
if n_de_output_features > 0:
de_seq_output_features = create_seq_inputs(
shape=tf.TensorShape(
[None, None, max_n_valid_indices, n_de_output_features]),
dtype=tf.int32, name='de_output_features')
output_feature_weights = tf.get_variable(
'de_output_feature_weights',
shape=tf.TensorShape([n_de_output_features, 1]),
initializer=tf.zeros_initializer())
outputs = outputs + tf.squeeze(tf_utils.tensormul(
tf.cast(de_seq_output_features.tensor, tf.float32),
output_feature_weights), axis=-1)
seq_outputs = SeqTensor(outputs, sequence_length)
n = tf.reduce_sum(seq_inputs.sequence_length)
self.node_dict.update(
en_inputs=en_seq_inputs,
en_rnn_dropout=en_rnn_dropout,
en_input_dropout=en_input_dropout,
en_outputs=en_seq_outputs,
en_initial_state=en_initial_state,
en_final_state=en_final_state,
inputs=seq_inputs,
constant_spans=constant_spans_placeholder,
constant_embeddings=constant_embeddings,
constant_masks=constant_masks,
n_constants=n_constants_placeholder,
rnn_dropout=rnn_dropout,
# input_dropout=input_dropout,
outputs=seq_outputs,
batch_size=batch_size,
final_state=final_state,
initial_state=initial_state, n=n,
encoded_context=en_seq_outputs,
context=en_seq_inputs,
en_embeddings=en_embeddings)
if en_pretrained_embedding_size > 0:
self.node_dict['en_pretrained_embeddings'] = en_pretrained_embeddings_placeholder
self.node_dict['en_pretrained_embeddings_init'] = en_pretrained_embeddings_init
if constant_value_embedding_shape is not None:
self.node_dict['constant_value_embeddings'] = constant_value_embeddings_placeholder
if add_lm_loss:
self.node_dict['en_lm_loss'] = en_lm_loss
# self.node_dict['en_lm_step_loss'] = en_lm_step_loss
if use_attn:
self.node_dict['attn_inputs'] = attn_inputs
if n_en_input_features > 0:
self.node_dict['en_input_features'] = en_seq_input_features
self.node_dict['summaries'].append(
tf.summary.scalar(
self.vs_name + '/' + 'en_input_features_sum',
tf.reduce_sum(en_seq_input_features.tensor)))
if n_de_output_features > 0:
self.node_dict['output_features'] = de_seq_output_features
self.node_dict['output_feature_weights'] = output_feature_weights
self.node_dict['summaries'].append(
tf.summary.scalar(
self.vs_name + '/' + 'output_feature_weights_0',
output_feature_weights[0][0]))
self.node_dict['summaries'].append(
tf.summary.scalar(
self.vs_name + '/' + 'output_features_sum',
tf.reduce_sum(de_seq_output_features.tensor)))
class MonitorGraph(object):
"""A tensorflow graph to monitor some values during training.
Generate tensorflow summaries for the values to monitor
them through tensorboard.
"""
def __init__(self):
self.node_dict = {}
self._graph = tf.Graph()
def launch(self):
with self._graph.as_default():
self.merged = tf.summary.merge_all()
init = tf.global_variables_initializer()
self.session = tf.Session(graph=self._graph)
self.session.run(init)
def add_scalar_monitor(self, name, dtype):
with self._graph.as_default():
x = tf.placeholder(
dtype=dtype, shape=None, name=name)
tf.summary.scalar(name, x)
self.node_dict[name] = x
def generate_summary(self, feed_dict):
summary_str = self.session.run(
self.merged, map_dict(self.node_dict, feed_dict))
return summary_str
# Utility functions for creating TensorFlow graphs.
# FNN
def create_multilayer_fnn(inputs, dropout, hidden_sizes, activation='relu'):
x = inputs
for size in hidden_sizes:
x = tf.nn.dropout(x, 1 - dropout)
x = tf.layers.dense(inputs=x, units=size, activation=ACTIVATION_DICT[activation])
return x
# Loss
def create_seq_mse_loss(outputs, targets, weights, sequence_length):
mask = tf.sequence_mask(
sequence_length, dtype=tf.float32)
loss = tf.reduce_sum(tf.squared_difference(outputs, targets) * weights * mask)
return loss
def create_seq_xent_loss(logits, targets, weights, sequence_length):
mask = tf.sequence_mask(
sequence_length, maxlen=tf.reduce_max(sequence_length),
dtype=tf.float32)
step_neg_logprobs = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=targets, logits=logits) * mask
step_logprobs = -1 * step_neg_logprobs
sequence_probs = tf.exp(tf.reduce_sum(step_logprobs, axis=1))
step_loss = step_neg_logprobs * weights
sequence_loss = tf.reduce_sum(step_loss, axis=1)
xent_loss = tf.reduce_sum(sequence_loss)
return xent_loss, sequence_loss, step_loss, sequence_probs, step_logprobs
def create_softmax(
inputs, softmax_w=None,
output_vocab_size=None, use_bias=False, name='Softmax_layer'):
"Create nodes for linear transformation of inputs and softmax computation."
with tf.name_scope(name):
# inputs = tf.nn.dropout(inputs, 1-dropout)
if softmax_w is None:
logits = tf.layers.dense(
inputs=inputs, units=output_vocab_size,
use_bias=use_bias)
else:
logits = tf_utils.tensormul(inputs, softmax_w)
if use_bias:
softmax_b = tf.Variable(
initial_value=np.zeros(
(1, output_vocab_size), dtype=tf.float32),
name='softmax_bias')
logits += softmax_b
return create_softmax_from_logits(logits)
def create_softmax_from_logits(logits):
"Create nodes for softmax computation from logits."
temperature = tf.placeholder_with_default(
1.0, shape=(), name='temperature')
logits = logits / temperature
logits_shape = tf.shape(logits)
logits_dim = logits_shape[-1]
logits_2d = tf.reshape(logits, [-1, logits_dim])
samples = tf.multinomial(logits_2d, 1)
samples = tf.reshape(samples, logits_shape[:-1])
probs = tf.nn.softmax(logits)
predictions = tf.argmax(probs, axis=2)
return logits, probs, predictions, samples, temperature
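# Note: `temperature` defaults to 1.0; values above 1.0 flatten the sampling
# distribution and values below 1.0 sharpen it, since logits are divided by it.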
# Embedding
def embed_inputs(
inputs, embeddings, name='Embedding_layer'):
with tf.name_scope(name):
embedded_inputs = tf.nn.embedding_lookup(embeddings, inputs)
return embedded_inputs
# RNN
def create_rnn(
cell, initial_state, inputs, sequence_length,
hidden_size, bidirectional, cell_bw=None,
name='RNN'):
with tf.name_scope(name):
if bidirectional:
# Note that you can't use bidirectional RNN if you
# want to do decoding.
initial_state_fw = initial_state[0]
initial_state_bw = initial_state[1]
outputs, final_state = tf.nn.bidirectional_dynamic_rnn(
cell, cell_bw, inputs, sequence_length=sequence_length,
initial_state_fw=initial_state_fw,
initial_state_bw=initial_state_bw,
dtype=tf.float32)
else:
outputs, final_state = tf.nn.dynamic_rnn(
cell, inputs, sequence_length=sequence_length,
initial_state=initial_state, dtype=tf.float32)
return outputs, final_state
# RNN Cell
def multilayer_dropout_cell(
cell_fn, hidden_size, n_layers, dropout,
use_skip_connection=True):
"""Create multilayer RNN cell with dropout."""
cells = []
for i in xrange(n_layers):
cell = cell_fn(hidden_size)
if i > 0 and use_skip_connection:
cell = tf.nn.rnn_cell.ResidualWrapper(cell)
cell = tf.contrib.rnn.DropoutWrapper(
cell, output_keep_prob=1.0-dropout)
# variational_recurrent=True,
# state_keep_prob = 1.0 - dropout,
# dtype=tf.float32)
cells.append(cell)
final_cell = tf.contrib.rnn.MultiRNNCell(cells)
return final_cell
# Input placeholders.
def create_seq_inputs(shape, dtype=tf.float32, name='inputs'):
with tf.name_scope(name):
if isinstance(shape, tuple):
flat_input_shape = data_utils.flatten(shape)
assert isinstance(dtype, tuple)
flat_dtype = data_utils.flatten(dtype)
flat_inputs = [tf.placeholder(
dt, sh, name='inputs') for dt, sh in zip(flat_dtype, flat_input_shape)]
inputs = data_utils.pack_sequence_as(shape, flat_inputs)
else:
inputs = tf.placeholder(dtype, shape)
sequence_length = tf.placeholder(
tf.int32, [None], name='sequence_length')
return SeqTensor(inputs, sequence_length)
def create_tuple_placeholders_with_default(inputs, extra_dims, shape):
if isinstance(shape, int):
result = tf.placeholder_with_default(
inputs, list(extra_dims) + [shape])
else:
subplaceholders = [create_tuple_placeholders_with_default(
subinputs, extra_dims, subshape)
for subinputs, subshape in zip(inputs, shape)]
t = type(shape)
if t == tuple:
result = t(subplaceholders)
else:
result = t(*subplaceholders)
return result
def create_tuple_placeholders(dtype, extra_dims, shape):
if isinstance(shape, int):
result = tf.placeholder(dtype, list(extra_dims) + [shape])
else:
subplaceholders = [create_tuple_placeholders(dtype, extra_dims, subshape)
for subshape in shape]
t = type(shape)
# Handles both tuple and LSTMStateTuple.
if t == tuple:
result = t(subplaceholders)
else:
result = t(*subplaceholders)
return result
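# Example (illustrative): create_tuple_placeholders(tf.float32, (None,),
# tf.nn.rnn_cell.LSTMStateTuple(128, 128)) returns an LSTMStateTuple of two
# placeholders, each with shape [None, 128].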
# Sequence models.
def create_seq_graph(input_shape, batch_size=None,
# input_vocab_size=None,
attn_inputs=None,
attn_size=128, attn_vec_size=128,
# output_size=128,
input_size=None,
hidden_size=128, n_layers=2,
cell_type='lstm', bidirectional=False,
initial_state=None,
#use_embeddings=True,
#embedding_size=64,
embeddings=None,
output_proj_size=None,
input_features_shape=None,
attn_masks=None):
# Create inputs.
seq_inputs = create_seq_inputs(
shape=input_shape, dtype=tf.int32 if embeddings is not None else tf.float32)
rnn_dropout = tf.placeholder_with_default(
0.0, shape=None, name='rnn_dropout')
# Create embedding layer.
if embeddings is not None:
embedded_inputs = embed_inputs(
seq_inputs.tensor, embeddings=embeddings)
else:
embedded_inputs = seq_inputs.tensor
input_dropout = tf.placeholder_with_default(
0.0, shape=None, name='input_dropout')
embedded_inputs = tf.nn.dropout(
embedded_inputs, 1 - input_dropout)
# If we include features in inputs, then add them here.
if input_features_shape is not None:
seq_input_features = create_seq_inputs(
shape=input_features_shape, dtype=tf.int32)
embedded_inputs = tf.concat(
[embedded_inputs, tf.cast(seq_input_features.tensor, tf.float32)],
axis=-1)
seq_inputs = SeqTensor(
(seq_inputs.tensor, seq_input_features.tensor), seq_inputs.sequence_length)
else:
seq_input_features = None
embedded_seq_inputs = SeqTensor(embedded_inputs, seq_inputs.sequence_length)
# Create RNN cell
cell = multilayer_dropout_cell(
RNN_CELL_DICT[cell_type],
hidden_size, n_layers, rnn_dropout)
if bidirectional:
cell_bw = multilayer_dropout_cell(
RNN_CELL_DICT[cell_type],
hidden_size, n_layers, rnn_dropout)
else:
cell_bw = None
# Add attention.
if attn_inputs is not None:
cell = tf_utils.SeqAttentionCellWrapper(
cell, attn_inputs=attn_inputs,
attn_size=attn_size, attn_vec_size=attn_vec_size,
output_size=hidden_size, attn_masks=attn_masks)
if bidirectional:
cell_bw = tf_utils.SeqAttentionCellWrapper(
cell_bw, attn_inputs=attn_inputs,
attn_size=attn_size, attn_vec_size=attn_vec_size,
output_size=hidden_size, attn_masks=attn_masks)
if initial_state is None:
# Create zero state.
zero_state = cell.zero_state(batch_size, tf.float32)
if bidirectional:
zero_state_bw = cell_bw.zero_state(batch_size, tf.float32)
zero_state = (zero_state, zero_state_bw)
initial_state = zero_state
# Create RNN.
outputs, final_state = create_rnn(
cell, initial_state, embedded_seq_inputs.tensor,
embedded_seq_inputs.sequence_length,
hidden_size=hidden_size,
bidirectional=bidirectional, cell_bw=cell_bw)
rnn_outputs = outputs
if bidirectional:
outputs = tf.concat(outputs, axis=2)
hidden_size *= 2
# Whether to add linear transformation to outputs.
if output_proj_size is not None:
outputs = tf.layers.dense(
inputs=outputs, units=output_proj_size, use_bias=True)
seq_outputs = SeqTensor(
outputs,
tf.placeholder_with_default(
seq_inputs.sequence_length, shape=[None]))
return (seq_inputs, initial_state, seq_outputs,
final_state, input_dropout, rnn_dropout,
rnn_outputs)
# General utility functions.
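# map_dict translates a feed dict keyed by node names into one keyed by the
# corresponding placeholders (see MonitorGraph.generate_summary above).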
def map_dict(dict_1, main_dict):
new_dict = {}
for k, v in main_dict.iteritems():
if k in dict_1:
new_dict[dict_1[k]] = main_dict[k]
return new_dict
| [] |
2024-01-10 | iLampard/lamp | scripts~gdelt~step_4_query_chatgpt.py | import os
import openai
import elara
from concurrent.futures import ThreadPoolExecutor
from utils.general import file_uri_reader_processor
from utils.gpt_utils import generate_gdelt_prompt_v2, generate_gdelt_comb_prompt
import dictdatabase as DDB
def read_text_file(fn):
with open(fn, 'r') as file:
return file.read()
class EventQuery:
def __init__(self, api_key, prompt_folder: str, num_prompts: int = 12):
openai.api_key = api_key
self.setup_msgs = []
# process prompts
system_msgs = []
prompt_assistant_msgs = []
prompt_user_msgs = []
help_msgs = []
if not os.path.exists(prompt_folder):
            raise RuntimeError(f'Prompt folder does not exist: {prompt_folder}')
for fn in sorted(os.listdir(prompt_folder)):
file_path = os.path.join(prompt_folder, fn)
if not os.path.isfile(file_path):
continue
msg = read_text_file(file_path)
if fn.startswith('system_msg'):
system_msgs.append(msg)
elif fn.startswith('prompt_assistant'):
prompt_assistant_msgs.append(msg)
elif fn.startswith('prompt_user'):
prompt_user_msgs.append(msg)
elif fn.startswith('help_msg'):
help_msgs.append(msg)
for msg in system_msgs:
self.setup_msgs.append({
'role': 'system',
'content': msg
})
for msg in help_msgs:
self.setup_msgs.append({
'role': 'user',
'content': msg
})
for user_msg, assistant_msg in zip(prompt_user_msgs[:num_prompts], prompt_assistant_msgs[:num_prompts]):
self.setup_msgs.append({
'role': 'user',
'content': user_msg
})
self.setup_msgs.append({
'role': 'assistant',
'content': assistant_msg
})
def query(self, msg):
msg_list = self.setup_msgs + [{
'role': 'user',
'content': msg
}]
completions = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=msg_list,
stream=False
)
print('Usage:', completions['usage']['total_tokens'])
body = completions['choices'][0]['message']['content']
return body
# for i, choice in enumerate(completions['choices']):
# print('---------------- choice', i)
# print(choice['message']['content'])
def gpt_generate_causal_events(
db_base_name,
gpt,
pred_data,
source_data,
inference_type: str = 'relation',
top_k: int = 5,
num_threads: int = 16
):
msg_head = '''Now I give you an effect event, and you give me three to five cause events.\n\n'''
def _process_one_relation_or_object(idx, rel_or_obj, text):
try:
existed_dict = DDB.at(db_base_name, inference_type, key=str(idx)).read()
if existed_dict is not None and str(rel_or_obj) in existed_dict:
return
ret_body = gpt.query(msg_head + text)
with DDB.at(db_base_name, inference_type).session() as (sess, obj):
if str(idx) in obj:
obj[str(idx)][str(rel_or_obj)] = ret_body
else:
obj[str(idx)] = {
str(rel_or_obj): ret_body
}
sess.write()
print(idx, rel_or_obj)
except Exception as e:
print('Error of', str(e))
executor = ThreadPoolExecutor(max_workers=num_threads)
init_db = DDB.at(db_base_name, inference_type).read()
if init_db is None:
DDB.at(db_base_name, inference_type).create()
for one_prediction in pred_data:
idx = one_prediction['original_idx']
if inference_type == 'relation':
msg_content_dict = generate_gdelt_prompt_v2(one_prediction, source_data, top_k=top_k, pred_relation=True)
else:
msg_content_dict = generate_gdelt_prompt_v2(one_prediction, source_data, top_k=top_k, pred_relation=False)
for rel_or_obj, content in msg_content_dict.items():
            # Skip if this relation for the idx already exists.
if str(idx) in init_db and str(rel_or_obj) in init_db[str(idx)]:
continue
executor.submit(_process_one_relation_or_object, str(idx), rel_or_obj, content)
executor.shutdown(wait=True)
def gpt_generate_comb_causal_events(
db_base_name,
gpt,
pred_data,
source_data,
top_k: int = 100,
num_threads: int = 6
):
msg_head = '''Now I give you an effect event, and you give me three to five cause events.\n\n'''
def _process_one_relation_or_object(idx, rel_obj, text):
try:
existed_dict = DDB.at(db_base_name, str(idx), key=str(rel_obj)).read()
if existed_dict is not None:
return
ret_body = gpt.query(msg_head + text)
with DDB.at(db_base_name, str(idx)).session() as (sess, ddb):
ddb[str(rel_obj)] = ret_body
sess.write()
print(idx, rel_obj)
except Exception as e:
print('Error of', str(e))
executor = ThreadPoolExecutor(max_workers=num_threads)
for one_prediction in pred_data:
idx = one_prediction['original_idx']
idx_db = DDB.at(db_base_name, str(idx)).read()
if idx_db is None:
DDB.at(db_base_name, str(idx)).create()
idx_db = {}
msg_content_dict = generate_gdelt_comb_prompt(one_prediction, source_data, top_k=top_k, filter_hit=True)
for rel_obj, content in msg_content_dict.items():
            # Skip if this relation-object key for the idx already exists.
if str(idx) in idx_db and str(rel_obj) in idx_db:
continue
executor.submit(_process_one_relation_or_object, str(idx), rel_obj, content)
executor.shutdown(wait=True)
if __name__ == '__main__':
    # predictions drawn from the base model
base_model_prediction_fn = '../../logs/ke_anhp_gdelt_test.pkl'
# original sequence data
raw_seq_data_fn = '../../data/gdelt/gdelt.pkl'
gpt = EventQuery(
api_key='*',
prompt_folder='gdelt_prompts_simplified',
num_prompts=10
)
pred_data = file_uri_reader_processor(base_model_prediction_fn)
source_data = file_uri_reader_processor(raw_seq_data_fn)['data']
gpt_generate_causal_events(
'gdelt_chatgpt',
gpt,
pred_data,
source_data,
inference_type='relation',
top_k=5,
# inference_type='object',
# top_k=20,
num_threads=6
)
| [
"[]"
] |
2024-01-10 | iLampard/lamp | scripts~amazon~step_4_query_chatgpt.py | import os
import openai
import elara
from concurrent.futures import ThreadPoolExecutor
from utils.general import file_uri_reader_processor
from utils.gpt_utils import generate_gdelt_prompt_amazon
import dictdatabase as DDB
def read_text_file(fn):
with open(fn, 'r') as file:
return file.read()
class EventQuery:
def __init__(self, api_key, prompt_folder: str, num_prompts: int = 12):
openai.api_key = api_key
self.setup_msgs = []
# process prompts
system_msgs = []
prompt_assistant_msgs = []
prompt_user_msgs = []
help_msgs = []
if not os.path.exists(prompt_folder):
            raise RuntimeError(f'Prompt folder does not exist: {prompt_folder}')
for fn in sorted(os.listdir(prompt_folder)):
file_path = os.path.join(prompt_folder, fn)
if not os.path.isfile(file_path):
continue
msg = read_text_file(file_path)
if fn.startswith('system_msg'):
system_msgs.append(msg)
elif fn.startswith('prompt_assistant'):
prompt_assistant_msgs.append(msg)
elif fn.startswith('prompt_user'):
prompt_user_msgs.append(msg)
elif fn.startswith('help_msg'):
help_msgs.append(msg)
for msg in system_msgs:
self.setup_msgs.append({
'role': 'system',
'content': msg
})
for msg in help_msgs:
self.setup_msgs.append({
'role': 'user',
'content': msg
})
for user_msg, assistant_msg in zip(prompt_user_msgs[:num_prompts], prompt_assistant_msgs[:num_prompts]):
self.setup_msgs.append({
'role': 'user',
'content': user_msg
})
self.setup_msgs.append({
'role': 'assistant',
'content': assistant_msg
})
def query(self, msg):
msg_list = self.setup_msgs + [{
'role': 'user',
'content': msg
}]
completions = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=msg_list,
stream=False
)
print('Usage:', completions['usage']['total_tokens'])
body = completions['choices'][0]['message']['content']
return body
# for i, choice in enumerate(completions['choices']):
# print('---------------- choice', i)
# print(choice['message']['content'])
def gpt_generate_causal_events(
db_base_name,
gpt,
pred_data,
source_data,
inference_type: str = 'type',
top_k: int = 5,
num_threads: int = 16
):
msg_head = '''Now I give you an effect event, and you give me three to four cause events.\n\n'''
def _process_one_type_or_time(idx, type_or_date, text):
try:
existed_dict = DDB.at(db_base_name, inference_type, key=str(idx)).read()
if existed_dict is not None and str(type_or_date) in existed_dict:
return
ret_body = gpt.query(msg_head + text)
with DDB.at(db_base_name, inference_type).session() as (sess, obj):
if str(idx) in obj:
obj[str(idx)][str(type_or_date)] = ret_body
else:
obj[str(idx)] = {
str(type_or_date): ret_body
}
sess.write()
print(idx, type_or_date)
except Exception as e:
print('Error of', str(e))
executor = ThreadPoolExecutor(max_workers=num_threads)
init_db = DDB.at(db_base_name, inference_type).read()
if init_db is None:
DDB.at(db_base_name, inference_type).create()
for prediction_seq in pred_data[-10:]:
for one_prediction in prediction_seq:
seq_idx = one_prediction['seq_idx']
original_idx = one_prediction['original_idx']
idx = (str(seq_idx), str(original_idx))
if inference_type == 'type':
msg_content_dict = generate_gdelt_prompt_amazon(one_prediction, source_data, top_k=top_k, pred_type=True)
else:
msg_content_dict = generate_gdelt_prompt_amazon(one_prediction, source_data, top_k=top_k, pred_type=False)
for rel_or_obj, content in msg_content_dict.items():
                # Skip if this key for the idx already exists.
if init_db is not None and str(idx) in init_db and str(rel_or_obj) in init_db[str(idx)]:
continue
executor.submit(_process_one_type_or_time, str(idx), rel_or_obj, content)
executor.shutdown(wait=True)
if __name__ == '__main__':
    # predictions drawn from the base model
base_model_prediction_fn = '../../logs/tpp_amazon_test.pkl'
# original sequence data
raw_seq_data_fn = '../../data/gdelt/amazon.pkl'
gpt = EventQuery(
api_key='*',
prompt_folder='amazon_prompts',
num_prompts=8
)
pred_data = file_uri_reader_processor(base_model_prediction_fn)
source_data = file_uri_reader_processor(raw_seq_data_fn)['user_seqs']
gpt_generate_causal_events(
'amazon_chatgpt',
gpt,
pred_data,
source_data,
inference_type='time',
top_k=5,
num_threads=6
)
| [
"[]",
"self.setup_msgs + [{\n 'role': 'user',\n 'content': msg\n }]"
] |
2024-01-10 | Rutvik2610/test | openagent~finetune~LLMFinetune.py | from abc import ABC, abstractmethod
from logging import Logger
import openai
class LLMFinetune(ABC):
def __init__(self, logger: Logger, openai_key: str):
self.logger = logger
openai.api_key = openai_key
@abstractmethod
def transform_data(self, train_csv_file: str, val_csv_file: str , train_output_file: str, val_output_file: str) -> str:
pass
@abstractmethod
def finetune(self, **kwargs):
pass
| [] |
2024-01-10 | TakeshiKovacs0610/Information_Retrieval_System | api_call.py | import openai
import sys
if __name__ == '__main__':
# python3 <filename> API_KEY num_paragraphs query.txt
if len(sys.argv) < 4:
print("Usage: python3 api_call.py API_KEY num_paragraphs query.txt")
sys.exit(1)
# Read the API key from the command line
openai.api_key = sys.argv[1]
num_paragraphs = int(sys.argv[2])
#print(num_paragraphs)
paragraphs = []
# Read the paragraphs from the files
for i in range(num_paragraphs):
filename = 'paragraph_' + str(i) + '.txt'
#print(filename)
with open(filename, 'r') as f:
paragraphs.append(f.read())
paragraphs.append('\n')
# add query
query_file = sys.argv[3]
with open(query_file, 'r') as f:
query = f.read()
paragraphs.append(query)
paragraphs.append('\n')
# convert paragraphs to a single string
paragraphs = '\n'.join(paragraphs)
choice=input("\nEnter 1 to see paragraphs else press anything: ")
if(choice=="1"):
print(paragraphs)
print("=================================\nChatGPT's Answer:", end=" ")
query = {
"role": "user", "content": paragraphs
}
chat = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[query]
)
reply = chat.choices[0].message.content
print(reply)
print("=================================")
print("Enter new query: ") | [] |
2024-01-10 | kulapard/articles | youtube-summary-openai-gpt~youtube_summarizer.py | import os
import re
import openai
from dotenv import load_dotenv
from youtube_transcript_api import YouTubeTranscriptApi, TranscriptsDisabled
# Load the environment variables from the .env file
load_dotenv()
def extract_youtube_video_id(url: str) -> str | None:
"""
Extract the video ID from the URL
https://www.youtube.com/watch?v=XXX -> XXX
https://youtu.be/XXX -> XXX
"""
found = re.search(r"(?:youtu\.be\/|watch\?v=)([\w-]+)", url)
if found:
return found.group(1)
return None
def get_video_transcript(video_id: str) -> str | None:
"""
Fetch the transcript of the provided YouTube video
"""
try:
transcript = YouTubeTranscriptApi.get_transcript(video_id)
except TranscriptsDisabled:
# The video doesn't have a transcript
return None
text = " ".join([line["text"] for line in transcript])
return text
def generate_summary(text: str) -> str:
"""
Generate a summary of the provided text using OpenAI API
"""
# Initialize the OpenAI API client
openai.api_key = os.environ["OPENAI_API_KEY"]
# Use GPT to generate a summary
instructions = "Please summarize the provided text"
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": instructions},
{"role": "user", "content": text}
],
temperature=0.2,
n=1,
max_tokens=200,
presence_penalty=0,
frequency_penalty=0.1,
)
# Return the generated summary
return response.choices[0].message.content.strip()
def summarize_youtube_video(video_url: str) -> str:
"""
Summarize the provided YouTube video
"""
# Extract the video ID from the URL
video_id = extract_youtube_video_id(video_url)
# Fetch the video transcript
transcript = get_video_transcript(video_id)
# If no transcript is found, return an error message
if not transcript:
return f"No English transcript found " \
f"for this video: {video_url}"
# Generate the summary
summary = generate_summary(transcript)
# Return the summary
return summary
if __name__ == '__main__':
url = "https://www.youtube.com/watch?v=D1R-jKKp3NA"
print(summarize_youtube_video(url))
| [
"Please summarize the provided text"
] |
2024-01-10 | slmatrix/langchain | libs~community~langchain_community~chat_models~anyscale.py | """Anyscale Endpoints chat wrapper. Relies heavily on ChatOpenAI."""
from __future__ import annotations
import logging
import os
import sys
from typing import TYPE_CHECKING, Any, Dict, Optional, Set
import requests
from langchain_core._api.deprecation import suppress_langchain_deprecation_warning
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_community.chat_models.openai import (
ChatOpenAI,
_import_tiktoken,
)
from langchain_community.utils.openai import is_openai_v1
if TYPE_CHECKING:
import tiktoken
logger = logging.getLogger(__name__)
DEFAULT_API_BASE = "https://api.endpoints.anyscale.com/v1"
DEFAULT_MODEL = "meta-llama/Llama-2-7b-chat-hf"
class ChatAnyscale(ChatOpenAI):
"""`Anyscale` Chat large language models.
See https://www.anyscale.com/ for information about Anyscale.
To use, you should have the ``openai`` python package installed, and the
environment variable ``ANYSCALE_API_KEY`` set with your API key.
Alternatively, you can use the anyscale_api_key keyword argument.
Any parameters that are valid to be passed to the `openai.create` call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain_community.chat_models import ChatAnyscale
chat = ChatAnyscale(model_name="meta-llama/Llama-2-7b-chat-hf")
"""
@property
def _llm_type(self) -> str:
"""Return type of chat model."""
return "anyscale-chat"
@property
def lc_secrets(self) -> Dict[str, str]:
return {"anyscale_api_key": "ANYSCALE_API_KEY"}
@classmethod
def is_lc_serializable(cls) -> bool:
return False
anyscale_api_key: SecretStr
"""AnyScale Endpoints API keys."""
model_name: str = Field(default=DEFAULT_MODEL, alias="model")
"""Model name to use."""
anyscale_api_base: str = Field(default=DEFAULT_API_BASE)
"""Base URL path for API requests,
leave blank if not using a proxy or service emulator."""
anyscale_proxy: Optional[str] = None
"""To support explicit proxy for Anyscale."""
available_models: Optional[Set[str]] = None
"""Available models from Anyscale API."""
    def __init__(self, **kwargs: Any) -> None:
        # bypass deprecation warning for ChatOpenAI
        with suppress_langchain_deprecation_warning():
            super().__init__(**kwargs)
@staticmethod
def get_available_models(
anyscale_api_key: Optional[str] = None,
anyscale_api_base: str = DEFAULT_API_BASE,
) -> Set[str]:
"""Get available models from Anyscale API."""
try:
anyscale_api_key = anyscale_api_key or os.environ["ANYSCALE_API_KEY"]
except KeyError as e:
raise ValueError(
"Anyscale API key must be passed as keyword argument or "
"set in environment variable ANYSCALE_API_KEY.",
) from e
models_url = f"{anyscale_api_base}/models"
models_response = requests.get(
models_url,
headers={
"Authorization": f"Bearer {anyscale_api_key}",
},
)
if models_response.status_code != 200:
raise ValueError(
f"Error getting models from {models_url}: "
f"{models_response.status_code}",
)
return {model["id"] for model in models_response.json()["data"]}
@root_validator(pre=True)
def validate_environment_override(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values,
"anyscale_api_key",
"ANYSCALE_API_KEY",
)
values["anyscale_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values,
"anyscale_api_key",
"ANYSCALE_API_KEY",
)
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"anyscale_api_base",
"ANYSCALE_API_BASE",
default=DEFAULT_API_BASE,
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"anyscale_proxy",
"ANYSCALE_PROXY",
default="",
)
try:
import openai
except ImportError as e:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`.",
) from e
try:
if is_openai_v1():
client_params = {
"api_key": values["openai_api_key"],
"base_url": values["openai_api_base"],
# To do: future support
# "organization": values["openai_organization"],
# "timeout": values["request_timeout"],
# "max_retries": values["max_retries"],
# "default_headers": values["default_headers"],
# "default_query": values["default_query"],
# "http_client": values["http_client"],
}
values["client"] = openai.OpenAI(**client_params).chat.completions
else:
values["client"] = openai.ChatCompletion
except AttributeError as exc:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`.",
) from exc
if "model_name" not in values.keys():
values["model_name"] = DEFAULT_MODEL
model_name = values["model_name"]
available_models = cls.get_available_models(
values["openai_api_key"],
values["openai_api_base"],
)
if model_name not in available_models:
raise ValueError(
f"Model name {model_name} not found in available models: "
f"{available_models}.",
)
values["available_models"] = available_models
return values
def _get_encoding_model(self) -> tuple[str, tiktoken.Encoding]:
tiktoken_ = _import_tiktoken()
if self.tiktoken_model_name is not None:
model = self.tiktoken_model_name
else:
model = self.model_name
# Returns the number of tokens used by a list of messages.
try:
encoding = tiktoken_.encoding_for_model("gpt-3.5-turbo-0301")
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
model = "cl100k_base"
encoding = tiktoken_.get_encoding(model)
return model, encoding
def get_num_tokens_from_messages(self, messages: list[BaseMessage]) -> int:
"""Calculate num tokens with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/
main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb"""
if sys.version_info[1] <= 7:
return super().get_num_tokens_from_messages(messages)
model, encoding = self._get_encoding_model()
tokens_per_message = 3
tokens_per_name = 1
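        # Per the OpenAI cookbook's chat-format accounting: each message carries
        # 3 framing tokens, plus 1 extra token when a "name" field is present.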
num_tokens = 0
messages_dict = [convert_message_to_dict(m) for m in messages]
for message in messages_dict:
num_tokens += tokens_per_message
for key, value in message.items():
# Cast str(value) in case the message value is not a string
# This occurs with function messages
num_tokens += len(encoding.encode(str(value)))
if key == "name":
num_tokens += tokens_per_name
# every reply is primed with <im_start>assistant
num_tokens += 3
return num_tokens
| [] |
2024-01-10 | slmatrix/langchain | libs~community~tests~integration_tests~vectorstores~test_xata.py | """Test Xata vector store functionality.
Before running this test, please create a Xata database by following
the instructions from:
https://python.langchain.com/docs/integrations/vectorstores/xata
"""
import os
from langchain_core.documents import Document
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain_community.vectorstores.xata import XataVectorStore
class TestXata:
@classmethod
def setup_class(cls) -> None:
assert os.getenv("XATA_API_KEY"), "XATA_API_KEY environment variable is not set"
assert os.getenv("XATA_DB_URL"), "XATA_DB_URL environment variable is not set"
def test_similarity_search_without_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end constructions and search without metadata."""
texts = ["foo", "bar", "baz"]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
docsearch.delete(delete_all=True)
def test_similarity_search_with_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with a metadata filter.
This test requires a column named "a" of type integer to be present
in the Xata table."""
texts = ["foo", "foo", "foo"]
metadatas = [{"a": i} for i in range(len(texts))]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
metadatas=metadatas,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1, filter={"a": 1})
assert output == [Document(page_content="foo", metadata={"a": 1})]
docsearch.delete(delete_all=True)
| [] |
2024-01-10 | slmatrix/langchain | libs~community~langchain_community~cache.py | """
.. warning::
Beta Feature!
**Cache** provides an optional caching layer for LLMs.
Cache is useful for two reasons:
- It can save you money by reducing the number of API calls you make to the LLM
provider if you're often requesting the same completion multiple times.
- It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
**Class hierarchy:**
.. code-block::
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
"""
from __future__ import annotations
import hashlib
import inspect
import json
import logging
import uuid
import warnings
from datetime import timedelta
from functools import lru_cache
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
List,
Optional,
Tuple,
Type,
Union,
cast,
)
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.engine import Row
from sqlalchemy.engine.base import Engine
from sqlalchemy.orm import Session
try:
from sqlalchemy.orm import declarative_base
except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from langchain_core.caches import RETURN_VAL_TYPE, BaseCache
from langchain_core.embeddings import Embeddings
from langchain_core.language_models.llms import LLM, get_prompts
from langchain_core.load.dump import dumps
from langchain_core.load.load import _loads_suppress_warning
from langchain_core.outputs import ChatGeneration, Generation
from langchain_core.utils import get_from_env
from langchain_community.vectorstores.redis import Redis as RedisVectorstore
logger = logging.getLogger(__file__)
if TYPE_CHECKING:
import momento
from cassandra.cluster import Session as CassandraSession
def _hash(_input: str) -> str:
"""Use a deterministic hashing approach."""
return hashlib.md5(_input.encode()).hexdigest()
def _dump_generations_to_json(generations: RETURN_VAL_TYPE) -> str:
"""Dump generations to json.
Args:
generations (RETURN_VAL_TYPE): A list of language model generations.
Returns:
str: Json representing a list of generations.
Warning: would not work well with arbitrary subclasses of `Generation`
"""
return json.dumps([generation.dict() for generation in generations])
def _load_generations_from_json(generations_json: str) -> RETURN_VAL_TYPE:
"""Load generations from json.
Args:
generations_json (str): A string of json representing a list of generations.
Raises:
ValueError: Could not decode json string to list of generations.
Returns:
RETURN_VAL_TYPE: A list of generations.
Warning: would not work well with arbitrary subclasses of `Generation`
"""
try:
results = json.loads(generations_json)
return [Generation(**generation_dict) for generation_dict in results]
except json.JSONDecodeError:
raise ValueError(
f"Could not decode json to list of generations: {generations_json}"
)
def _dumps_generations(generations: RETURN_VAL_TYPE) -> str:
"""
Serialization for generic RETURN_VAL_TYPE, i.e. sequence of `Generation`
Args:
generations (RETURN_VAL_TYPE): A list of language model generations.
Returns:
str: a single string representing a list of generations.
This function (+ its counterpart `_loads_generations`) rely on
the dumps/loads pair with Reviver, so are able to deal
with all subclasses of Generation.
Each item in the list can be `dumps`ed to a string,
then we make the whole list of strings into a json-dumped.
"""
return json.dumps([dumps(_item) for _item in generations])
def _loads_generations(generations_str: str) -> Union[RETURN_VAL_TYPE, None]:
"""
Deserialization of a string into a generic RETURN_VAL_TYPE
(i.e. a sequence of `Generation`).
See `_dumps_generations`, the inverse of this function.
Args:
generations_str (str): A string representing a list of generations.
Compatible with the legacy cache-blob format.
Does not raise exceptions for malformed entries; it just logs a warning
and returns None: the caller should be prepared for such a cache miss.
Returns:
RETURN_VAL_TYPE: A list of generations.
"""
try:
generations = [
_loads_suppress_warning(_item_str)
for _item_str in json.loads(generations_str)
]
return generations
except (json.JSONDecodeError, TypeError):
# deferring the (soft) handling to after the legacy-format attempt
pass
try:
gen_dicts = json.loads(generations_str)
# not relying on `_load_generations_from_json` (which could disappear):
generations = [Generation(**generation_dict) for generation_dict in gen_dicts]
logger.warning(
f"Legacy 'Generation' cached blob encountered: '{generations_str}'"
)
return generations
except (json.JSONDecodeError, TypeError):
logger.warning(
f"Malformed/unparsable cached blob encountered: '{generations_str}'"
)
return None
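# Illustrative sketch (not part of the original module): round-tripping a list of
# `Generation` objects through the two serialization helpers above.
def _example_generations_roundtrip() -> None:  # pragma: no cover - documentation sketch
    gens = [Generation(text="hello"), Generation(text="world")]
    blob = _dumps_generations(gens)
    restored = _loads_generations(blob)
    assert restored is not None
    assert [g.text for g in restored] == ["hello", "world"]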
class InMemoryCache(BaseCache):
"""Cache that stores things in memory."""
def __init__(self) -> None:
"""Initialize with empty cache."""
self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
return self._cache.get((prompt, llm_string), None)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
self._cache[(prompt, llm_string)] = return_val
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
self._cache = {}
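# Illustrative sketch (not part of the original module): registering the in-memory cache
# globally so repeated identical prompts reuse the cached completion. Assumes
# `langchain_core.globals.set_llm_cache` is available in the installed langchain-core.
def _example_inmemory_cache_usage() -> None:  # pragma: no cover - documentation sketch
    from langchain_core.globals import set_llm_cache

    set_llm_cache(InMemoryCache())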
Base = declarative_base()
class FullLLMCache(Base): # type: ignore
"""SQLite table for full LLM Cache (all generations)."""
__tablename__ = "full_llm_cache"
prompt = Column(String, primary_key=True)
llm = Column(String, primary_key=True)
idx = Column(Integer, primary_key=True)
response = Column(String)
class SQLAlchemyCache(BaseCache):
"""Cache that uses SQAlchemy as a backend."""
def __init__(self, engine: Engine, cache_schema: Type[FullLLMCache] = FullLLMCache):
"""Initialize by creating all tables."""
self.engine = engine
self.cache_schema = cache_schema
self.cache_schema.metadata.create_all(self.engine)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
stmt = (
select(self.cache_schema.response)
.where(self.cache_schema.prompt == prompt) # type: ignore
.where(self.cache_schema.llm == llm_string)
.order_by(self.cache_schema.idx)
)
with Session(self.engine) as session:
rows = session.execute(stmt).fetchall()
if rows:
try:
return [_loads_suppress_warning(row[0]) for row in rows]
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
# In a previous life we stored the raw text directly
# in the table, so assume it's in that format.
return [Generation(text=row[0]) for row in rows]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update based on prompt and llm_string."""
items = [
self.cache_schema(prompt=prompt, llm=llm_string, response=dumps(gen), idx=i)
for i, gen in enumerate(return_val)
]
with Session(self.engine) as session, session.begin():
for item in items:
session.merge(item)
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
with Session(self.engine) as session:
session.query(self.cache_schema).delete()
session.commit()
class SQLiteCache(SQLAlchemyCache):
"""Cache that uses SQLite as a backend."""
def __init__(self, database_path: str = ".langchain.db"):
"""Initialize by creating the engine and all tables."""
engine = create_engine(f"sqlite:///{database_path}")
super().__init__(engine)
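# Illustrative sketch (not part of the original module): a file-backed SQLite cache that
# persists completions across processes. The database path is an arbitrary example value,
# and `langchain_core.globals.set_llm_cache` is assumed to be available.
def _example_sqlite_cache_usage() -> None:  # pragma: no cover - documentation sketch
    from langchain_core.globals import set_llm_cache

    set_llm_cache(SQLiteCache(database_path=".langchain.db"))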
class UpstashRedisCache(BaseCache):
"""Cache that uses Upstash Redis as a backend."""
def __init__(self, redis_: Any, *, ttl: Optional[int] = None):
"""
Initialize an instance of UpstashRedisCache.
This method initializes an object with Upstash Redis caching capabilities.
It takes a `redis_` parameter, which should be an instance of an Upstash Redis
client class, allowing the object to interact with Upstash Redis
server for caching purposes.
Parameters:
redis_: An instance of Upstash Redis client class
(e.g., Redis) used for caching.
This allows the object to communicate with the
Upstash Redis server for caching operations.
ttl (int, optional): Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
"""
try:
from upstash_redis import Redis
except ImportError:
raise ValueError(
"Could not import upstash_redis python package. "
"Please install it with `pip install upstash_redis`."
)
if not isinstance(redis_, Redis):
raise ValueError("Please pass in Upstash Redis object.")
self.redis = redis_
self.ttl = ttl
def _key(self, prompt: str, llm_string: str) -> str:
"""Compute key from prompt and llm_string"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
generations = []
# Read from a HASH
results = self.redis.hgetall(self._key(prompt, llm_string))
if results:
for _, text in results.items():
generations.append(Generation(text=text))
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"UpstashRedisCache supports caching of normal LLM generations, "
f"got {type(gen)}"
)
if isinstance(gen, ChatGeneration):
warnings.warn(
"NOTE: Generation has not been cached. UpstashRedisCache does not"
" support caching ChatModel outputs."
)
return
# Write to a HASH
key = self._key(prompt, llm_string)
mapping = {
str(idx): generation.text for idx, generation in enumerate(return_val)
}
self.redis.hset(key=key, values=mapping)
if self.ttl is not None:
self.redis.expire(key, self.ttl)
def clear(self, **kwargs: Any) -> None:
"""
Clear cache. If `asynchronous` is True, flush asynchronously.
This flushes the *whole* db.
"""
asynchronous = kwargs.get("asynchronous", False)
if asynchronous:
asynchronous = "ASYNC"
else:
asynchronous = "SYNC"
self.redis.flushdb(flush_type=asynchronous)
class RedisCache(BaseCache):
"""Cache that uses Redis as a backend."""
def __init__(self, redis_: Any, *, ttl: Optional[int] = None):
"""
Initialize an instance of RedisCache.
This method initializes an object with Redis caching capabilities.
It takes a `redis_` parameter, which should be an instance of a Redis
client class, allowing the object to interact with a Redis
server for caching purposes.
Parameters:
redis_ (Any): An instance of a Redis client class
(e.g., redis.Redis) used for caching.
This allows the object to communicate with a
Redis server for caching operations.
ttl (int, optional): Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
"""
try:
from redis import Redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
if not isinstance(redis_, Redis):
raise ValueError("Please pass in Redis object.")
self.redis = redis_
self.ttl = ttl
def _key(self, prompt: str, llm_string: str) -> str:
"""Compute key from prompt and llm_string"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
generations = []
# Read from a Redis HASH
results = self.redis.hgetall(self._key(prompt, llm_string))
if results:
for _, text in results.items():
try:
generations.append(_loads_suppress_warning(text))
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
# In a previous life we stored the raw text directly
# in the table, so assume it's in that format.
generations.append(Generation(text=text))
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"RedisCache only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
# Write to a Redis HASH
key = self._key(prompt, llm_string)
with self.redis.pipeline() as pipe:
pipe.hset(
key,
mapping={
str(idx): dumps(generation)
for idx, generation in enumerate(return_val)
},
)
if self.ttl is not None:
pipe.expire(key, self.ttl)
pipe.execute()
def clear(self, **kwargs: Any) -> None:
"""Clear cache. If `asynchronous` is True, flush asynchronously."""
asynchronous = kwargs.get("asynchronous", False)
self.redis.flushdb(asynchronous=asynchronous, **kwargs)
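# Illustrative sketch (not part of the original module): wiring RedisCache to a local
# Redis server with a one-hour TTL. Assumes the `redis` package is installed, a server is
# reachable at the (hypothetical) URL below, and `langchain_core.globals.set_llm_cache` exists.
def _example_redis_cache_usage() -> None:  # pragma: no cover - documentation sketch
    from langchain_core.globals import set_llm_cache
    from redis import Redis

    client = Redis.from_url("redis://localhost:6379")
    set_llm_cache(RedisCache(client, ttl=3600))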
class RedisSemanticCache(BaseCache):
"""Cache that uses Redis as a vector-store backend."""
# TODO - implement a TTL policy in Redis
DEFAULT_SCHEMA = {
"content_key": "prompt",
"text": [
{"name": "prompt"},
],
"extra": [{"name": "return_val"}, {"name": "llm_string"}],
}
def __init__(
self, redis_url: str, embedding: Embeddings, score_threshold: float = 0.2
):
"""Initialize by passing in the `init` GPTCache func
Args:
redis_url (str): URL to connect to Redis.
embedding (Embedding): Embedding provider for semantic encoding and search.
score_threshold (float, 0.2): maximum vector distance for a semantically similar prompt to count as a cache hit.
Example:
.. code-block:: python
from langchain_core.globals import set_llm_cache
from langchain_community.cache import RedisSemanticCache
from langchain_community.embeddings import OpenAIEmbeddings
set_llm_cache(RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
))
"""
self._cache_dict: Dict[str, RedisVectorstore] = {}
self.redis_url = redis_url
self.embedding = embedding
self.score_threshold = score_threshold
def _index_name(self, llm_string: str) -> str:
hashed_index = _hash(llm_string)
return f"cache:{hashed_index}"
def _get_llm_cache(self, llm_string: str) -> RedisVectorstore:
index_name = self._index_name(llm_string)
# return vectorstore client for the specific llm string
if index_name in self._cache_dict:
return self._cache_dict[index_name]
# create new vectorstore client for the specific llm string
try:
self._cache_dict[index_name] = RedisVectorstore.from_existing_index(
embedding=self.embedding,
index_name=index_name,
redis_url=self.redis_url,
schema=cast(Dict, self.DEFAULT_SCHEMA),
)
except ValueError:
redis = RedisVectorstore(
embedding=self.embedding,
index_name=index_name,
redis_url=self.redis_url,
index_schema=cast(Dict, self.DEFAULT_SCHEMA),
)
_embedding = self.embedding.embed_query(text="test")
redis._create_index_if_not_exist(dim=len(_embedding))
self._cache_dict[index_name] = redis
return self._cache_dict[index_name]
def clear(self, **kwargs: Any) -> None:
"""Clear semantic cache for a given llm_string."""
index_name = self._index_name(kwargs["llm_string"])
if index_name in self._cache_dict:
self._cache_dict[index_name].drop_index(
index_name=index_name, delete_documents=True, redis_url=self.redis_url
)
del self._cache_dict[index_name]
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
llm_cache = self._get_llm_cache(llm_string)
generations: List = []
# Read from a Hash
results = llm_cache.similarity_search(
query=prompt,
k=1,
distance_threshold=self.score_threshold,
)
if results:
for document in results:
try:
generations.extend(
_loads_suppress_warning(document.metadata["return_val"])
)
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
# In a previous life we stored the raw text directly
# in the table, so assume it's in that format.
generations.extend(
_load_generations_from_json(document.metadata["return_val"])
)
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"RedisSemanticCache only supports caching of "
f"normal LLM generations, got {type(gen)}"
)
llm_cache = self._get_llm_cache(llm_string)
metadata = {
"llm_string": llm_string,
"prompt": prompt,
"return_val": dumps([g for g in return_val]),
}
llm_cache.add_texts(texts=[prompt], metadatas=[metadata])
class GPTCache(BaseCache):
"""Cache that uses GPTCache as a backend."""
def __init__(
self,
init_func: Union[
Callable[[Any, str], None], Callable[[Any], None], None
] = None,
):
"""Initialize by passing in init function (default: `None`).
Args:
init_func (Optional[Callable[[Any], None]]): init `GPTCache` function
(default: `None`)
Example:
.. code-block:: python
# Initialize GPTCache with a custom init function
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import manager_factory
from langchain_core.globals import set_llm_cache
# Avoid multiple caches using the same file,
# causing different llm model caches to affect each other
def init_gptcache(cache_obj: gptcache.Cache, llm: str):
cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(
manager="map",
data_dir=f"map_cache_{llm}"
),
)
set_llm_cache(GPTCache(init_gptcache))
"""
try:
import gptcache # noqa: F401
except ImportError:
raise ImportError(
"Could not import gptcache python package. "
"Please install it with `pip install gptcache`."
)
self.init_gptcache_func: Union[
Callable[[Any, str], None], Callable[[Any], None], None
] = init_func
self.gptcache_dict: Dict[str, Any] = {}
def _new_gptcache(self, llm_string: str) -> Any:
"""New gptcache object"""
from gptcache import Cache
from gptcache.manager.factory import get_data_manager
from gptcache.processor.pre import get_prompt
_gptcache = Cache()
if self.init_gptcache_func is not None:
sig = inspect.signature(self.init_gptcache_func)
if len(sig.parameters) == 2:
self.init_gptcache_func(_gptcache, llm_string) # type: ignore[call-arg]
else:
self.init_gptcache_func(_gptcache) # type: ignore[call-arg]
else:
_gptcache.init(
pre_embedding_func=get_prompt,
data_manager=get_data_manager(data_path=llm_string),
)
self.gptcache_dict[llm_string] = _gptcache
return _gptcache
def _get_gptcache(self, llm_string: str) -> Any:
"""Get a cache object.
When the corresponding llm model cache does not exist, it will be created."""
_gptcache = self.gptcache_dict.get(llm_string, None)
if not _gptcache:
_gptcache = self._new_gptcache(llm_string)
return _gptcache
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up the cache data.
First, retrieve the corresponding cache object using the `llm_string` parameter,
and then retrieve the data from the cache based on the `prompt`.
"""
from gptcache.adapter.api import get
_gptcache = self._get_gptcache(llm_string)
res = get(prompt, cache_obj=_gptcache)
if res:
return [
Generation(**generation_dict) for generation_dict in json.loads(res)
]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache.
First, retrieve the corresponding cache object using the `llm_string` parameter,
and then store the `prompt` and `return_val` in the cache object.
"""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"GPTCache only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
from gptcache.adapter.api import put
_gptcache = self._get_gptcache(llm_string)
handled_data = json.dumps([generation.dict() for generation in return_val])
put(prompt, handled_data, cache_obj=_gptcache)
return None
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
from gptcache import Cache
for gptcache_instance in self.gptcache_dict.values():
gptcache_instance = cast(Cache, gptcache_instance)
gptcache_instance.flush()
self.gptcache_dict.clear()
def _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:
"""Create cache if it doesn't exist.
Raises:
SdkException: Momento service or network error
Exception: Unexpected response
"""
from momento.responses import CreateCache
create_cache_response = cache_client.create_cache(cache_name)
if isinstance(create_cache_response, CreateCache.Success) or isinstance(
create_cache_response, CreateCache.CacheAlreadyExists
):
return None
elif isinstance(create_cache_response, CreateCache.Error):
raise create_cache_response.inner_exception
else:
raise Exception(f"Unexpected response cache creation: {create_cache_response}")
def _validate_ttl(ttl: Optional[timedelta]) -> None:
if ttl is not None and ttl <= timedelta(seconds=0):
raise ValueError(f"ttl must be positive but was {ttl}.")
class MomentoCache(BaseCache):
"""Cache that uses Momento as a backend. See https://gomomento.com/"""
def __init__(
self,
cache_client: momento.CacheClient,
cache_name: str,
*,
ttl: Optional[timedelta] = None,
ensure_cache_exists: bool = True,
):
"""Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache,
you must have a Momento account. See https://gomomento.com/.
Args:
cache_client (CacheClient): The Momento cache client.
cache_name (str): The name of the cache to use to store the data.
ttl (Optional[timedelta], optional): The time to live for the cache items.
Defaults to None, i.e. use the client default TTL.
ensure_cache_exists (bool, optional): Create the cache if it doesn't
exist. Defaults to True.
Raises:
ImportError: Momento python package is not installed.
TypeError: cache_client is not of type momento.CacheClientObject
ValueError: ttl is non-null and non-positive
"""
try:
from momento import CacheClient
except ImportError:
raise ImportError(
"Could not import momento python package. "
"Please install it with `pip install momento`."
)
if not isinstance(cache_client, CacheClient):
raise TypeError("cache_client must be a momento.CacheClient object.")
_validate_ttl(ttl)
if ensure_cache_exists:
_ensure_cache_exists(cache_client, cache_name)
self.cache_client = cache_client
self.cache_name = cache_name
self.ttl = ttl
@classmethod
def from_client_params(
cls,
cache_name: str,
ttl: timedelta,
*,
configuration: Optional[momento.config.Configuration] = None,
api_key: Optional[str] = None,
auth_token: Optional[str] = None, # for backwards compatibility
**kwargs: Any,
) -> MomentoCache:
"""Construct cache from CacheClient parameters."""
try:
from momento import CacheClient, Configurations, CredentialProvider
except ImportError:
raise ImportError(
"Could not import momento python package. "
"Please install it with `pip install momento`."
)
if configuration is None:
configuration = Configurations.Laptop.v1()
# Try checking `MOMENTO_AUTH_TOKEN` first for backwards compatibility
try:
api_key = auth_token or get_from_env("auth_token", "MOMENTO_AUTH_TOKEN")
except ValueError:
api_key = api_key or get_from_env("api_key", "MOMENTO_API_KEY")
credentials = CredentialProvider.from_string(api_key)
cache_client = CacheClient(configuration, credentials, default_ttl=ttl)
return cls(cache_client, cache_name, ttl=ttl, **kwargs)
def __key(self, prompt: str, llm_string: str) -> str:
"""Compute cache key from prompt and associated model and settings.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model version and settings.
Returns:
str: The cache key.
"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Lookup llm generations in cache by prompt and associated model and settings.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model version and settings.
Raises:
SdkException: Momento service or network error
Returns:
Optional[RETURN_VAL_TYPE]: A list of language model generations.
"""
from momento.responses import CacheGet
generations: RETURN_VAL_TYPE = []
get_response = self.cache_client.get(
self.cache_name, self.__key(prompt, llm_string)
)
if isinstance(get_response, CacheGet.Hit):
value = get_response.value_string
generations = _load_generations_from_json(value)
elif isinstance(get_response, CacheGet.Miss):
pass
elif isinstance(get_response, CacheGet.Error):
raise get_response.inner_exception
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Store llm generations in cache.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model string.
return_val (RETURN_VAL_TYPE): A list of language model generations.
Raises:
SdkException: Momento service or network error
Exception: Unexpected response
"""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"Momento only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
key = self.__key(prompt, llm_string)
value = _dump_generations_to_json(return_val)
set_response = self.cache_client.set(self.cache_name, key, value, self.ttl)
from momento.responses import CacheSet
if isinstance(set_response, CacheSet.Success):
pass
elif isinstance(set_response, CacheSet.Error):
raise set_response.inner_exception
else:
raise Exception(f"Unexpected response: {set_response}")
def clear(self, **kwargs: Any) -> None:
"""Clear the cache.
Raises:
SdkException: Momento service or network error
"""
from momento.responses import CacheFlush
flush_response = self.cache_client.flush_cache(self.cache_name)
if isinstance(flush_response, CacheFlush.Success):
pass
elif isinstance(flush_response, CacheFlush.Error):
raise flush_response.inner_exception
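# Illustrative sketch (not part of the original module): building a MomentoCache from
# client parameters. The cache name is hypothetical, and a MOMENTO_API_KEY (or legacy
# MOMENTO_AUTH_TOKEN) environment variable is assumed to be set.
def _example_momento_cache_usage() -> None:  # pragma: no cover - documentation sketch
    from langchain_core.globals import set_llm_cache

    set_llm_cache(MomentoCache.from_client_params("langchain", ttl=timedelta(days=1)))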
CASSANDRA_CACHE_DEFAULT_TABLE_NAME = "langchain_llm_cache"
CASSANDRA_CACHE_DEFAULT_TTL_SECONDS = None
class CassandraCache(BaseCache):
"""
Cache that uses Cassandra / Astra DB as a backend.
It uses a single Cassandra table.
The lookup keys (which get to form the primary key) are:
- prompt, a string
- llm_string, a deterministic str representation of the model parameters.
(needed to prevent same-prompt-different-model collisions)
"""
def __init__(
self,
session: Optional[CassandraSession] = None,
keyspace: Optional[str] = None,
table_name: str = CASSANDRA_CACHE_DEFAULT_TABLE_NAME,
ttl_seconds: Optional[int] = CASSANDRA_CACHE_DEFAULT_TTL_SECONDS,
skip_provisioning: bool = False,
):
"""
Initialize with a ready session and a keyspace name.
Args:
session (cassandra.cluster.Session): an open Cassandra session
keyspace (str): the keyspace to use for storing the cache
table_name (str): name of the Cassandra table to use as cache
ttl_seconds (optional int): time-to-live for cache entries
(default: None, i.e. forever)
"""
try:
from cassio.table import ElasticCassandraTable
except (ImportError, ModuleNotFoundError):
raise ValueError(
"Could not import cassio python package. "
"Please install it with `pip install cassio`."
)
self.session = session
self.keyspace = keyspace
self.table_name = table_name
self.ttl_seconds = ttl_seconds
self.kv_cache = ElasticCassandraTable(
session=self.session,
keyspace=self.keyspace,
table=self.table_name,
keys=["llm_string", "prompt"],
primary_key_type=["TEXT", "TEXT"],
ttl_seconds=self.ttl_seconds,
skip_provisioning=skip_provisioning,
)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
item = self.kv_cache.get(
llm_string=_hash(llm_string),
prompt=_hash(prompt),
)
if item is not None:
generations = _loads_generations(item["body_blob"])
# this protects against malformed cached items:
if generations is not None:
return generations
else:
return None
else:
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
blob = _dumps_generations(return_val)
self.kv_cache.put(
llm_string=_hash(llm_string),
prompt=_hash(prompt),
body_blob=blob,
)
def delete_through_llm(
self, prompt: str, llm: LLM, stop: Optional[List[str]] = None
) -> None:
"""
A wrapper around `delete` with the LLM being passed.
In case the llm(prompt) calls have a `stop` param, you should pass it here
"""
llm_string = get_prompts(
{**llm.dict(), **{"stop": stop}},
[],
)[1]
return self.delete(prompt, llm_string=llm_string)
def delete(self, prompt: str, llm_string: str) -> None:
"""Evict from cache if there's an entry."""
return self.kv_cache.delete(
llm_string=_hash(llm_string),
prompt=_hash(prompt),
)
def clear(self, **kwargs: Any) -> None:
"""Clear cache. This is for all LLMs at once."""
self.kv_cache.clear()
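# Illustrative sketch (not part of the original module): a Cassandra-backed cache built
# from an already-open session and keyspace (both assumed to exist; the cassio package
# is required, and `langchain_core.globals.set_llm_cache` is assumed to be available).
def _example_cassandra_cache_usage(session: Any, keyspace: str) -> None:  # pragma: no cover
    from langchain_core.globals import set_llm_cache

    set_llm_cache(CassandraCache(session=session, keyspace=keyspace))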
CASSANDRA_SEMANTIC_CACHE_DEFAULT_DISTANCE_METRIC = "dot"
CASSANDRA_SEMANTIC_CACHE_DEFAULT_SCORE_THRESHOLD = 0.85
CASSANDRA_SEMANTIC_CACHE_DEFAULT_TABLE_NAME = "langchain_llm_semantic_cache"
CASSANDRA_SEMANTIC_CACHE_DEFAULT_TTL_SECONDS = None
CASSANDRA_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE = 16
class CassandraSemanticCache(BaseCache):
"""
Cache that uses Cassandra as a vector-store backend for semantic
(i.e. similarity-based) lookup.
It uses a single (vector) Cassandra table and stores, in principle,
cached values from several LLMs, so the LLM's llm_string is part
of the rows' primary keys.
The similarity is based on one of several distance metrics (default: "dot").
If another metric is chosen, the default threshold should be re-tuned accordingly.
"""
def __init__(
self,
session: Optional[CassandraSession],
keyspace: Optional[str],
embedding: Embeddings,
table_name: str = CASSANDRA_SEMANTIC_CACHE_DEFAULT_TABLE_NAME,
distance_metric: str = CASSANDRA_SEMANTIC_CACHE_DEFAULT_DISTANCE_METRIC,
score_threshold: float = CASSANDRA_SEMANTIC_CACHE_DEFAULT_SCORE_THRESHOLD,
ttl_seconds: Optional[int] = CASSANDRA_SEMANTIC_CACHE_DEFAULT_TTL_SECONDS,
skip_provisioning: bool = False,
):
"""
Initialize the cache with all relevant parameters.
Args:
session (cassandra.cluster.Session): an open Cassandra session
keyspace (str): the keyspace to use for storing the cache
embedding (Embedding): Embedding provider for semantic
encoding and search.
table_name (str): name of the Cassandra (vector) table
to use as cache
distance_metric (str, 'dot'): which measure to adopt for
similarity searches
score_threshold (optional float): numeric value to use as
cutoff for the similarity searches
ttl_seconds (optional int): time-to-live for cache entries
(default: None, i.e. forever)
The default score threshold is tuned to the default metric.
Tune it carefully yourself if switching to another distance metric.
"""
try:
from cassio.table import MetadataVectorCassandraTable
except (ImportError, ModuleNotFoundError):
raise ValueError(
"Could not import cassio python package. "
"Please install it with `pip install cassio`."
)
self.session = session
self.keyspace = keyspace
self.embedding = embedding
self.table_name = table_name
self.distance_metric = distance_metric
self.score_threshold = score_threshold
self.ttl_seconds = ttl_seconds
# The contract for this class has separate lookup and update:
# in order to spare some embedding calculations we cache them between
# the two calls.
# Note: each instance of this class has its own `_get_embedding` with
# its own lru.
@lru_cache(maxsize=CASSANDRA_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE)
def _cache_embedding(text: str) -> List[float]:
return self.embedding.embed_query(text=text)
self._get_embedding = _cache_embedding
self.embedding_dimension = self._get_embedding_dimension()
self.table = MetadataVectorCassandraTable(
session=self.session,
keyspace=self.keyspace,
table=self.table_name,
primary_key_type=["TEXT"],
vector_dimension=self.embedding_dimension,
ttl_seconds=self.ttl_seconds,
metadata_indexing=("allow", {"_llm_string_hash"}),
skip_provisioning=skip_provisioning,
)
def _get_embedding_dimension(self) -> int:
return len(self._get_embedding(text="This is a sample sentence."))
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
embedding_vector = self._get_embedding(text=prompt)
llm_string_hash = _hash(llm_string)
body = _dumps_generations(return_val)
metadata = {
"_prompt": prompt,
"_llm_string_hash": llm_string_hash,
}
row_id = f"{_hash(prompt)}-{llm_string_hash}"
#
self.table.put(
body_blob=body,
vector=embedding_vector,
row_id=row_id,
metadata=metadata,
)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
hit_with_id = self.lookup_with_id(prompt, llm_string)
if hit_with_id is not None:
return hit_with_id[1]
else:
return None
def lookup_with_id(
self, prompt: str, llm_string: str
) -> Optional[Tuple[str, RETURN_VAL_TYPE]]:
"""
Look up based on prompt and llm_string.
If there are hits, return (document_id, cached_entry)
"""
prompt_embedding: List[float] = self._get_embedding(text=prompt)
hits = list(
self.table.metric_ann_search(
vector=prompt_embedding,
metadata={"_llm_string_hash": _hash(llm_string)},
n=1,
metric=self.distance_metric,
metric_threshold=self.score_threshold,
)
)
if hits:
hit = hits[0]
generations = _loads_generations(hit["body_blob"])
if generations is not None:
# this protects against malformed cached items:
return (
hit["row_id"],
generations,
)
else:
return None
else:
return None
def lookup_with_id_through_llm(
self, prompt: str, llm: LLM, stop: Optional[List[str]] = None
) -> Optional[Tuple[str, RETURN_VAL_TYPE]]:
llm_string = get_prompts(
{**llm.dict(), **{"stop": stop}},
[],
)[1]
return self.lookup_with_id(prompt, llm_string=llm_string)
def delete_by_document_id(self, document_id: str) -> None:
"""
Given this is a "similarity search" cache, an invalidation pattern
that makes sense is first a lookup to get an ID, and then deleting
with that ID. This is for the second step.
"""
self.table.delete(row_id=document_id)
def clear(self, **kwargs: Any) -> None:
"""Clear the *whole* semantic cache."""
self.table.clear()
class FullMd5LLMCache(Base): # type: ignore
"""SQLite table for full LLM Cache (all generations)."""
__tablename__ = "full_md5_llm_cache"
id = Column(String, primary_key=True)
prompt_md5 = Column(String, index=True)
llm = Column(String, index=True)
idx = Column(Integer, index=True)
prompt = Column(String)
response = Column(String)
class SQLAlchemyMd5Cache(BaseCache):
"""Cache that uses SQAlchemy as a backend."""
def __init__(
self, engine: Engine, cache_schema: Type[FullMd5LLMCache] = FullMd5LLMCache
):
"""Initialize by creating all tables."""
self.engine = engine
self.cache_schema = cache_schema
self.cache_schema.metadata.create_all(self.engine)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
rows = self._search_rows(prompt, llm_string)
if rows:
return [_loads_suppress_warning(row[0]) for row in rows]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update based on prompt and llm_string."""
self._delete_previous(prompt, llm_string)
prompt_md5 = self.get_md5(prompt)
items = [
self.cache_schema(
id=str(uuid.uuid1()),
prompt=prompt,
prompt_md5=prompt_md5,
llm=llm_string,
response=dumps(gen),
idx=i,
)
for i, gen in enumerate(return_val)
]
with Session(self.engine) as session, session.begin():
for item in items:
session.merge(item)
def _delete_previous(self, prompt: str, llm_string: str) -> None:
stmt = (
select(self.cache_schema)
.where(self.cache_schema.prompt_md5 == self.get_md5(prompt))  # type: ignore
.where(self.cache_schema.llm == llm_string)
.where(self.cache_schema.prompt == prompt)
.order_by(self.cache_schema.idx)
)
with Session(self.engine) as session, session.begin():
# select full ORM rows (not just the response column) so they can be deleted
for item in session.execute(stmt).scalars().all():
session.delete(item)
def _search_rows(self, prompt: str, llm_string: str) -> List[Row]:
prompt_md5 = self.get_md5(prompt)
stmt = (
select(self.cache_schema.response)
.where(self.cache_schema.prompt_md5 == prompt_md5)  # type: ignore
.where(self.cache_schema.llm == llm_string)
.where(self.cache_schema.prompt == prompt)
.order_by(self.cache_schema.idx)
)
with Session(self.engine) as session:
return session.execute(stmt).fetchall()
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
with Session(self.engine) as session:
session.query(self.cache_schema).delete()
session.commit()
@staticmethod
def get_md5(input_string: str) -> str:
return hashlib.md5(input_string.encode()).hexdigest()
ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME = "langchain_astradb_cache"
class AstraDBCache(BaseCache):
"""
Cache that uses Astra DB as a backend.
It uses a single collection as a kv store
The lookup keys, combined in the _id of the documents, are:
- prompt, a string
- llm_string, a deterministic str representation of the model parameters.
(needed to prevent same-prompt-different-model collisions)
"""
def __init__(
self,
*,
collection_name: str = ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME,
token: Optional[str] = None,
api_endpoint: Optional[str] = None,
astra_db_client: Optional[Any] = None, # 'astrapy.db.AstraDB' if passed
namespace: Optional[str] = None,
):
"""
Create an AstraDB cache using a collection for storage.
Args (only keyword-arguments accepted):
collection_name (str): name of the Astra DB collection to create/use.
token (Optional[str]): API token for Astra DB usage.
api_endpoint (Optional[str]): full URL to the API endpoint,
such as "https://<DB-ID>-us-east1.apps.astra.datastax.com".
astra_db_client (Optional[Any]): *alternative to token+api_endpoint*,
you can pass an already-created 'astrapy.db.AstraDB' instance.
namespace (Optional[str]): namespace (aka keyspace) where the
collection is created. Defaults to the database's "default namespace".
"""
try:
from astrapy.db import (
AstraDB as LibAstraDB,
)
except (ImportError, ModuleNotFoundError):
raise ImportError(
"Could not import a recent astrapy python package. "
"Please install it with `pip install --upgrade astrapy`."
)
# Conflicting-arg checks:
if astra_db_client is not None:
if token is not None or api_endpoint is not None:
raise ValueError(
"You cannot pass 'astra_db_client' to AstraDB if passing "
"'token' and 'api_endpoint'."
)
self.collection_name = collection_name
self.token = token
self.api_endpoint = api_endpoint
self.namespace = namespace
if astra_db_client is not None:
self.astra_db = astra_db_client
else:
self.astra_db = LibAstraDB(
token=self.token,
api_endpoint=self.api_endpoint,
namespace=self.namespace,
)
self.collection = self.astra_db.create_collection(
collection_name=self.collection_name,
)
@staticmethod
def _make_id(prompt: str, llm_string: str) -> str:
return f"{_hash(prompt)}#{_hash(llm_string)}"
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
doc_id = self._make_id(prompt, llm_string)
item = self.collection.find_one(
filter={
"_id": doc_id,
},
projection={
"body_blob": 1,
},
)["data"]["document"]
if item is not None:
generations = _loads_generations(item["body_blob"])
# this protects against malformed cached items:
if generations is not None:
return generations
else:
return None
else:
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
doc_id = self._make_id(prompt, llm_string)
blob = _dumps_generations(return_val)
self.collection.upsert(
{
"_id": doc_id,
"body_blob": blob,
},
)
def delete_through_llm(
self, prompt: str, llm: LLM, stop: Optional[List[str]] = None
) -> None:
"""
A wrapper around `delete` with the LLM being passed.
In case the llm(prompt) calls have a `stop` param, you should pass it here
"""
llm_string = get_prompts(
{**llm.dict(), **{"stop": stop}},
[],
)[1]
return self.delete(prompt, llm_string=llm_string)
def delete(self, prompt: str, llm_string: str) -> None:
"""Evict from cache if there's an entry."""
doc_id = self._make_id(prompt, llm_string)
return self.collection.delete_one(doc_id)
def clear(self, **kwargs: Any) -> None:
"""Clear cache. This is for all LLMs at once."""
self.astra_db.truncate_collection(self.collection_name)
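# Illustrative sketch (not part of the original module): an Astra DB cache configured
# with explicit credentials. The token and endpoint values below are placeholders, and
# `langchain_core.globals.set_llm_cache` is assumed to be available.
def _example_astra_db_cache_usage() -> None:  # pragma: no cover - documentation sketch
    from langchain_core.globals import set_llm_cache

    set_llm_cache(
        AstraDBCache(
            token="AstraCS:<application-token>",
            api_endpoint="https://<DB-ID>-us-east1.apps.astra.datastax.com",
        )
    )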
ASTRA_DB_SEMANTIC_CACHE_DEFAULT_THRESHOLD = 0.85
ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME = "langchain_astradb_semantic_cache"
ASTRA_DB_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE = 16
class AstraDBSemanticCache(BaseCache):
"""
Cache that uses Astra DB as a vector-store backend for semantic
(i.e. similarity-based) lookup.
It uses a single (vector) collection and can store
cached values from several LLMs, so the LLM's 'llm_string' is stored
in the document metadata.
You can choose the preferred similarity (or use the API default) --
remember the threshold might require metric-dependent tuning.
"""
def __init__(
self,
*,
collection_name: str = ASTRA_DB_CACHE_DEFAULT_COLLECTION_NAME,
token: Optional[str] = None,
api_endpoint: Optional[str] = None,
astra_db_client: Optional[Any] = None, # 'astrapy.db.AstraDB' if passed
namespace: Optional[str] = None,
embedding: Embeddings,
metric: Optional[str] = None,
similarity_threshold: float = ASTRA_DB_SEMANTIC_CACHE_DEFAULT_THRESHOLD,
):
"""
Initialize the cache with all relevant parameters.
Args:
collection_name (str): name of the Astra DB collection to create/use.
token (Optional[str]): API token for Astra DB usage.
api_endpoint (Optional[str]): full URL to the API endpoint,
such as "https://<DB-ID>-us-east1.apps.astra.datastax.com".
astra_db_client (Optional[Any]): *alternative to token+api_endpoint*,
you can pass an already-created 'astrapy.db.AstraDB' instance.
namespace (Optional[str]): namespace (aka keyspace) where the
collection is created. Defaults to the database's "default namespace".
embedding (Embedding): Embedding provider for semantic
encoding and search.
metric: the function to use for evaluating similarity of text embeddings.
Defaults to 'cosine' (alternatives: 'euclidean', 'dot_product')
similarity_threshold (float, optional): the minimum similarity
for accepting a (semantic-search) match.
The default score threshold is tuned to the default metric.
Tune it carefully yourself if switching to another distance metric.
"""
try:
from astrapy.db import (
AstraDB as LibAstraDB,
)
except (ImportError, ModuleNotFoundError):
raise ImportError(
"Could not import a recent astrapy python package. "
"Please install it with `pip install --upgrade astrapy`."
)
# Conflicting-arg checks:
if astra_db_client is not None:
if token is not None or api_endpoint is not None:
raise ValueError(
"You cannot pass 'astra_db_client' to AstraDB if passing "
"'token' and 'api_endpoint'."
)
self.embedding = embedding
self.metric = metric
self.similarity_threshold = similarity_threshold
# The contract for this class has separate lookup and update:
# in order to spare some embedding calculations we cache them between
# the two calls.
# Note: each instance of this class has its own `_get_embedding` with
# its own lru.
@lru_cache(maxsize=ASTRA_DB_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE)
def _cache_embedding(text: str) -> List[float]:
return self.embedding.embed_query(text=text)
self._get_embedding = _cache_embedding
self.embedding_dimension = self._get_embedding_dimension()
self.collection_name = collection_name
self.token = token
self.api_endpoint = api_endpoint
self.namespace = namespace
if astra_db_client is not None:
self.astra_db = astra_db_client
else:
self.astra_db = LibAstraDB(
token=self.token,
api_endpoint=self.api_endpoint,
namespace=self.namespace,
)
self.collection = self.astra_db.create_collection(
collection_name=self.collection_name,
dimension=self.embedding_dimension,
metric=self.metric,
)
def _get_embedding_dimension(self) -> int:
return len(self._get_embedding(text="This is a sample sentence."))
@staticmethod
def _make_id(prompt: str, llm_string: str) -> str:
return f"{_hash(prompt)}#{_hash(llm_string)}"
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
doc_id = self._make_id(prompt, llm_string)
llm_string_hash = _hash(llm_string)
embedding_vector = self._get_embedding(text=prompt)
body = _dumps_generations(return_val)
#
self.collection.upsert(
{
"_id": doc_id,
"body_blob": body,
"llm_string_hash": llm_string_hash,
"$vector": embedding_vector,
}
)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
hit_with_id = self.lookup_with_id(prompt, llm_string)
if hit_with_id is not None:
return hit_with_id[1]
else:
return None
def lookup_with_id(
self, prompt: str, llm_string: str
) -> Optional[Tuple[str, RETURN_VAL_TYPE]]:
"""
Look up based on prompt and llm_string.
If there are hits, return (document_id, cached_entry) for the top hit
"""
prompt_embedding: List[float] = self._get_embedding(text=prompt)
llm_string_hash = _hash(llm_string)
hit = self.collection.vector_find_one(
vector=prompt_embedding,
filter={
"llm_string_hash": llm_string_hash,
},
fields=["body_blob", "_id"],
include_similarity=True,
)
if hit is None or hit["$similarity"] < self.similarity_threshold:
return None
else:
generations = _loads_generations(hit["body_blob"])
if generations is not None:
# this protects against malformed cached items:
return (hit["_id"], generations)
else:
return None
def lookup_with_id_through_llm(
self, prompt: str, llm: LLM, stop: Optional[List[str]] = None
) -> Optional[Tuple[str, RETURN_VAL_TYPE]]:
llm_string = get_prompts(
{**llm.dict(), **{"stop": stop}},
[],
)[1]
return self.lookup_with_id(prompt, llm_string=llm_string)
def delete_by_document_id(self, document_id: str) -> None:
"""
Given this is a "similarity search" cache, an invalidation pattern
that makes sense is first a lookup to get an ID, and then deleting
with that ID. This is for the second step.
"""
self.collection.delete_one(document_id)
def clear(self, **kwargs: Any) -> None:
"""Clear the *whole* semantic cache."""
self.astra_db.truncate_collection(self.collection_name)
| [
"prompt"
] |
2024-01-10 | slmatrix/langchain | libs~community~langchain_community~chat_models~everlyai.py | """EverlyAI Endpoints chat wrapper. Relies heavily on ChatOpenAI."""
from __future__ import annotations
import logging
import sys
from typing import TYPE_CHECKING, Any, Dict, Optional, Set
from langchain_core._api.deprecation import suppress_langchain_deprecation_warning
from langchain_core.messages import BaseMessage
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_community.adapters.openai import convert_message_to_dict
from langchain_community.chat_models.openai import (
ChatOpenAI,
_import_tiktoken,
)
if TYPE_CHECKING:
import tiktoken
logger = logging.getLogger(__name__)
DEFAULT_API_BASE = "https://everlyai.xyz/hosted"
DEFAULT_MODEL = "meta-llama/Llama-2-7b-chat-hf"
class ChatEverlyAI(ChatOpenAI):
"""`EverlyAI` Chat large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``EVERLYAI_API_KEY`` set with your API key.
Alternatively, you can use the everlyai_api_key keyword argument.
Any parameters that are valid to be passed to the `openai.create` call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain_community.chat_models import ChatEverlyAI
chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf")
"""
@property
def _llm_type(self) -> str:
"""Return type of chat model."""
return "everlyai-chat"
@property
def lc_secrets(self) -> Dict[str, str]:
return {"everlyai_api_key": "EVERLYAI_API_KEY"}
@classmethod
def is_lc_serializable(cls) -> bool:
return False
everlyai_api_key: Optional[str] = None
"""EverlyAI Endpoints API keys."""
model_name: str = Field(default=DEFAULT_MODEL, alias="model")
"""Model name to use."""
everlyai_api_base: str = DEFAULT_API_BASE
"""Base URL path for API requests."""
available_models: Optional[Set[str]] = None
"""Available models from EverlyAI API."""
def __init__(self, **kwargs: Any) -> None:
# bypass deprecation warning for ChatOpenAI
with suppress_langchain_deprecation_warning():
super().__init__(**kwargs)
@staticmethod
def get_available_models() -> Set[str]:
"""Get available models from EverlyAI API."""
# EverlyAI doesn't yet support dynamically querying for available models.
return set(
[
"meta-llama/Llama-2-7b-chat-hf",
"meta-llama/Llama-2-13b-chat-hf-quantized",
]
)
@root_validator(pre=True)
def validate_environment_override(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values,
"everlyai_api_key",
"EVERLYAI_API_KEY",
)
values["openai_api_base"] = DEFAULT_API_BASE
try:
import openai
except ImportError as e:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`.",
) from e
try:
values["client"] = openai.ChatCompletion
except AttributeError as exc:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`.",
) from exc
if "model_name" not in values.keys():
values["model_name"] = DEFAULT_MODEL
model_name = values["model_name"]
available_models = cls.get_available_models()
if model_name not in available_models:
raise ValueError(
f"Model name {model_name} not found in available models: "
f"{available_models}.",
)
values["available_models"] = available_models
return values
def _get_encoding_model(self) -> tuple[str, tiktoken.Encoding]:
tiktoken_ = _import_tiktoken()
if self.tiktoken_model_name is not None:
model = self.tiktoken_model_name
else:
model = self.model_name
# Determine the encoding used to count tokens for a list of messages.
try:
encoding = tiktoken_.encoding_for_model("gpt-3.5-turbo-0301")
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
model = "cl100k_base"
encoding = tiktoken_.get_encoding(model)
return model, encoding
def get_num_tokens_from_messages(self, messages: list[BaseMessage]) -> int:
"""Calculate num tokens with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/
main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb"""
if sys.version_info[1] <= 7:
return super().get_num_tokens_from_messages(messages)
model, encoding = self._get_encoding_model()
tokens_per_message = 3
tokens_per_name = 1
num_tokens = 0
messages_dict = [convert_message_to_dict(m) for m in messages]
for message in messages_dict:
num_tokens += tokens_per_message
for key, value in message.items():
# Cast str(value) in case the message value is not a string
# This occurs with function messages
num_tokens += len(encoding.encode(str(value)))
if key == "name":
num_tokens += tokens_per_name
# every reply is primed with <im_start>assistant
num_tokens += 3
return num_tokens
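# Illustrative sketch (not part of the original module): counting prompt tokens for a
# single message. Requires EVERLYAI_API_KEY plus the openai and tiktoken packages.
def _example_token_count() -> None:  # pragma: no cover - documentation sketch
    from langchain_core.messages import HumanMessage

    chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf")
    # 3 tokens of per-message overhead + encoded content + 3 tokens of reply priming
    n = chat.get_num_tokens_from_messages([HumanMessage(content="Hello!")])
    print(n)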
| [] |
2024-01-10 | slmatrix/langchain | templates~propositional-retrieval~propositional_retrieval~storage.py | import logging
from pathlib import Path
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def get_multi_vector_retriever(docstore_id_key: str):
"""Create the composed retriever object."""
vectorstore = get_vectorstore()
store = get_docstore()
return MultiVectorRetriever(
vectorstore=vectorstore,
byte_store=store,
id_key=docstore_id_key,
)
def get_vectorstore(collection_name: str = "proposals"):
"""Get the vectorstore used for this example."""
return Chroma(
collection_name=collection_name,
persist_directory=str(Path(__file__).parent.parent / "chroma_db_proposals"),
embedding_function=OpenAIEmbeddings(),
)
def get_docstore():
"""Get the metadata store used for this example."""
return LocalFileStore(
str(Path(__file__).parent.parent / "multi_vector_retriever_metadata")
)
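# Illustrative sketch (not part of the original module): composing the retriever from the
# two stores above. The id key is an arbitrary example value.
def _example_build_retriever():  # pragma: no cover - documentation sketch
    retriever = get_multi_vector_retriever(docstore_id_key="doc_id")
    # Proposition embeddings live in retriever.vectorstore; the full source documents
    # are kept in the byte store, keyed by "doc_id".
    return retriever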
| [] |
2024-01-10 | slmatrix/langchain | libs~community~langchain_community~chat_loaders~langsmith.py | from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Dict, Iterable, Iterator, List, Optional, Union, cast
from langchain_core.chat_sessions import ChatSession
from langchain_core.load.load import _load_suppress_warning
from langchain_community.chat_loaders.base import BaseChatLoader
if TYPE_CHECKING:
from langsmith.client import Client
from langsmith.schemas import Run
logger = logging.getLogger(__name__)
class LangSmithRunChatLoader(BaseChatLoader):
"""
Load chat sessions from a list of LangSmith "llm" runs.
Attributes:
runs (Iterable[Union[str, Run]]): The list of LLM run IDs or run objects.
client (Client): Instance of LangSmith client for fetching data.
"""
def __init__(
self, runs: Iterable[Union[str, Run]], client: Optional["Client"] = None
):
"""
Initialize a new LangSmithRunChatLoader instance.
:param runs: List of LLM run IDs or run objects.
:param client: An instance of LangSmith client, if not provided,
a new client instance will be created.
"""
from langsmith.client import Client
self.runs = runs
self.client = client or Client()
def _load_single_chat_session(self, llm_run: "Run") -> ChatSession:
"""
Convert an individual LangSmith LLM run to a ChatSession.
:param llm_run: The LLM run object.
:return: A chat session representing the run's data.
"""
chat_session = LangSmithRunChatLoader._get_messages_from_llm_run(llm_run)
functions = LangSmithRunChatLoader._get_functions_from_llm_run(llm_run)
if functions:
chat_session["functions"] = functions
return chat_session
@staticmethod
def _get_messages_from_llm_run(llm_run: "Run") -> ChatSession:
"""
Extract messages from a LangSmith LLM run.
:param llm_run: The LLM run object.
:return: ChatSession with the extracted messages.
"""
if llm_run.run_type != "llm":
raise ValueError(f"Expected run of type llm. Got: {llm_run.run_type}")
if "messages" not in llm_run.inputs:
raise ValueError(f"Run has no 'messages' inputs. Got {llm_run.inputs}")
if not llm_run.outputs:
raise ValueError("Cannot convert pending run")
messages = _load_suppress_warning(llm_run.inputs)["messages"]
message_chunk = _load_suppress_warning(llm_run.outputs)["generations"][0][
"message"
]
return ChatSession(messages=messages + [message_chunk])
@staticmethod
def _get_functions_from_llm_run(llm_run: "Run") -> Optional[List[Dict]]:
"""
Extract functions from a LangSmith LLM run if they exist.
:param llm_run: The LLM run object.
:return: Functions from the run or None.
"""
if llm_run.run_type != "llm":
raise ValueError(f"Expected run of type llm. Got: {llm_run.run_type}")
return (llm_run.extra or {}).get("invocation_params", {}).get("functions")
def lazy_load(self) -> Iterator[ChatSession]:
"""
Lazy load the chat sessions from the iterable of run IDs.
This method fetches the runs and converts them to chat sessions on-the-fly,
yielding one session at a time.
:return: Iterator of chat sessions containing messages.
"""
from langsmith.schemas import Run
for run_obj in self.runs:
try:
if hasattr(run_obj, "id"):
run = run_obj
else:
run = self.client.read_run(run_obj)
session = self._load_single_chat_session(cast(Run, run))
yield session
except ValueError as e:
logger.warning(f"Could not load run {run_obj}: {repr(e)}")
continue
class LangSmithDatasetChatLoader(BaseChatLoader):
"""
Load chat sessions from a LangSmith dataset with the "chat" data type.
Attributes:
dataset_name (str): The name of the LangSmith dataset.
client (Client): Instance of LangSmith client for fetching data.
"""
def __init__(self, *, dataset_name: str, client: Optional["Client"] = None):
"""
Initialize a new LangSmithChatDatasetLoader instance.
:param dataset_name: The name of the LangSmith dataset.
:param client: An instance of LangSmith client; if not provided,
a new client instance will be created.
"""
try:
from langsmith.client import Client
except ImportError as e:
raise ImportError(
"The LangSmith client is required to load LangSmith datasets.\n"
"Please install it with `pip install langsmith`"
) from e
self.dataset_name = dataset_name
self.client = client or Client()
def lazy_load(self) -> Iterator[ChatSession]:
"""
Lazy load the chat sessions from the specified LangSmith dataset.
This method fetches the chat data from the dataset and
converts each data point to chat sessions on-the-fly,
yielding one session at a time.
:return: Iterator of chat sessions containing messages.
"""
from langchain_community.adapters import openai as oai_adapter # noqa: E402
data = self.client.read_dataset_openai_finetuning(
dataset_name=self.dataset_name
)
for data_point in data:
yield ChatSession(
messages=[
oai_adapter.convert_dict_to_message(m)
for m in data_point.get("messages", [])
],
functions=data_point.get("functions"),
)
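# Illustrative sketch (not part of the original module): loading chat sessions from a
# LangSmith dataset. The dataset name is hypothetical, and LangSmith credentials are
# assumed to be configured for the client.
def _example_load_dataset_sessions() -> List[ChatSession]:  # pragma: no cover
    loader = LangSmithDatasetChatLoader(dataset_name="my-chat-dataset")
    return list(loader.lazy_load())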
| [] |
2024-01-10 | wyhuai/DDNM | datasets~__init__.py | import os
import torch
import numbers
import torchvision.transforms as transforms
import torchvision.transforms.functional as F
from datasets.celeba import CelebA
from datasets.lsun import LSUN
from torch.utils.data import Subset
import numpy as np
import torchvision
from PIL import Image
from functools import partial
class Crop(object):
def __init__(self, x1, x2, y1, y2):
self.x1 = x1
self.x2 = x2
self.y1 = y1
self.y2 = y2
def __call__(self, img):
return F.crop(img, self.x1, self.y1, self.x2 - self.x1, self.y2 - self.y1)
def __repr__(self):
return self.__class__.__name__ + "(x1={}, x2={}, y1={}, y2={})".format(
self.x1, self.x2, self.y1, self.y2
)
def center_crop_arr(pil_image, image_size = 256):
# Imported from openai/guided-diffusion
while min(*pil_image.size) >= 2 * image_size:
pil_image = pil_image.resize(
tuple(x // 2 for x in pil_image.size), resample=Image.BOX
)
scale = image_size / min(*pil_image.size)
pil_image = pil_image.resize(
tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC
)
arr = np.array(pil_image)
crop_y = (arr.shape[0] - image_size) // 2
crop_x = (arr.shape[1] - image_size) // 2
return arr[crop_y : crop_y + image_size, crop_x : crop_x + image_size]
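# Illustrative sketch (not part of the original file): center-cropping an image loaded
# with PIL to the default 256x256 array. The file name below is a placeholder.
def _example_center_crop():  # pragma: no cover - documentation sketch
    pil_image = Image.open("example.png").convert("RGB")
    arr = center_crop_arr(pil_image, image_size=256)
    return arr  # numpy array with shape (256, 256, 3)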
def get_dataset(args, config):
if config.data.random_flip is False:
tran_transform = test_transform = transforms.Compose(
[transforms.Resize(config.data.image_size), transforms.ToTensor()]
)
else:
tran_transform = transforms.Compose(
[
transforms.Resize(config.data.image_size),
transforms.RandomHorizontalFlip(p=0.5),
transforms.ToTensor(),
]
)
test_transform = transforms.Compose(
[transforms.Resize(config.data.image_size), transforms.ToTensor()]
)
if config.data.dataset == "CELEBA":
cx = 89
cy = 121
x1 = cy - 64
x2 = cy + 64
y1 = cx - 64
y2 = cx + 64
if config.data.random_flip:
dataset = CelebA(
root=os.path.join(args.exp, "datasets", "celeba"),
split="train",
transform=transforms.Compose(
[
Crop(x1, x2, y1, y2),
transforms.Resize(config.data.image_size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
]
),
download=True,
)
else:
dataset = CelebA(
root=os.path.join(args.exp, "datasets", "celeba"),
split="train",
transform=transforms.Compose(
[
Crop(x1, x2, y1, y2),
transforms.Resize(config.data.image_size),
transforms.ToTensor(),
]
),
download=True,
)
test_dataset = CelebA(
root=os.path.join(args.exp, "datasets", "celeba"),
split="test",
transform=transforms.Compose(
[
Crop(x1, x2, y1, y2),
transforms.Resize(config.data.image_size),
transforms.ToTensor(),
]
),
download=True,
)
elif config.data.dataset == "LSUN":
if config.data.out_of_dist:
dataset = torchvision.datasets.ImageFolder(
os.path.join(args.exp, 'datasets', "ood_{}".format(config.data.category)),
transform=transforms.Compose([partial(center_crop_arr, image_size=config.data.image_size),
transforms.ToTensor()])
)
test_dataset = dataset
else:
train_folder = "{}_train".format(config.data.category)
val_folder = "{}_val".format(config.data.category)
test_dataset = LSUN(
root=os.path.join(args.exp, "datasets", "lsun"),
classes=[val_folder],
transform=transforms.Compose(
[
transforms.Resize(config.data.image_size),
transforms.CenterCrop(config.data.image_size),
transforms.ToTensor(),
]
)
)
dataset = test_dataset
elif config.data.dataset == "CelebA_HQ" or config.data.dataset == 'FFHQ':
if config.data.out_of_dist:
dataset = torchvision.datasets.ImageFolder(
os.path.join(args.exp, "datasets", "ood_celeba"),
transform=transforms.Compose([transforms.Resize([config.data.image_size, config.data.image_size]),
transforms.ToTensor()])
)
test_dataset = dataset
else:
dataset = torchvision.datasets.ImageFolder(
os.path.join(args.exp, "datasets", args.path_y),#os.path.join(args.exp, "datasets", "celeba_hq"),
transform=transforms.Compose([transforms.Resize([config.data.image_size, config.data.image_size]),
transforms.ToTensor()])
)
num_items = len(dataset)
indices = list(range(num_items))
random_state = np.random.get_state()
np.random.seed(2019)
np.random.shuffle(indices)
np.random.set_state(random_state)
# train_indices, test_indices = (
# indices[: int(num_items * 0.9)],
# indices[int(num_items * 0.9) :],
# )
train_indices, test_indices = (
indices[: int(num_items * 0.)],
indices[int(num_items * 0.) :],
)
test_dataset = Subset(dataset, test_indices)
elif config.data.dataset == 'ImageNet':
# only use validation dataset here
if config.data.subset_1k:
from datasets.imagenet_subset import ImageDataset
dataset = ImageDataset(os.path.join(args.exp, 'datasets', 'imagenet', 'imagenet'),
os.path.join(args.exp, 'imagenet_val_1k.txt'),
image_size=config.data.image_size,
normalize=False)
test_dataset = dataset
elif config.data.out_of_dist:
dataset = torchvision.datasets.ImageFolder(
os.path.join(args.exp, 'datasets', 'ood'),
transform=transforms.Compose([partial(center_crop_arr, image_size=config.data.image_size),
transforms.ToTensor()])
)
test_dataset = dataset
else:
dataset = torchvision.datasets.ImageNet(
os.path.join(args.exp, 'datasets', 'imagenet'), split='val',
transform=transforms.Compose([partial(center_crop_arr, image_size=config.data.image_size),
transforms.ToTensor()])
)
test_dataset = dataset
else:
dataset, test_dataset = None, None
return dataset, test_dataset
def logit_transform(image, lam=1e-6):
image = lam + (1 - 2 * lam) * image
return torch.log(image) - torch.log1p(-image)
def data_transform(config, X):
if config.data.uniform_dequantization:
X = X / 256.0 * 255.0 + torch.rand_like(X) / 256.0
if config.data.gaussian_dequantization:
X = X + torch.randn_like(X) * 0.01
if config.data.rescaled:
X = 2 * X - 1.0
elif config.data.logit_transform:
X = logit_transform(X)
if hasattr(config, "image_mean"):
return X - config.image_mean.to(X.device)[None, ...]
return X
def inverse_data_transform(config, X):
if hasattr(config, "image_mean"):
X = X + config.image_mean.to(X.device)[None, ...]
if config.data.logit_transform:
X = torch.sigmoid(X)
elif config.data.rescaled:
X = (X + 1.0) / 2.0
return torch.clamp(X, 0.0, 1.0)
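# Round-trip sketch for data_transform/inverse_data_transform. The bare
# SimpleNamespace below is a stand-in for the real config object; only the
# flags read by the two functions are provided, with rescaled=True assumed.
def example_transform_roundtrip():
    import types
    cfg = types.SimpleNamespace(
        data=types.SimpleNamespace(
            uniform_dequantization=False,
            gaussian_dequantization=False,
            rescaled=True,
            logit_transform=False,
        )
    )
    x = torch.rand(1, 3, 8, 8)
    y = data_transform(cfg, x)  # rescaled=True maps [0, 1] to [-1, 1]
    return torch.allclose(inverse_data_transform(cfg, y), x, atol=1e-6)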
| [] |
2024-01-10 | piroyoung/whi-interviewer | interviewer~repository~message.py | import abc
import random
from dataclasses import asdict
from dataclasses import dataclass
from datetime import datetime
from logging import Logger
from logging import getLogger
from typing import Dict
from typing import List
import openai
from openai.openai_object import OpenAIObject
from sqlalchemy.orm import Session
from ..model.orm import AssistantMessageLog
from ..model.orm import Message
from ..model.orm import Prompt
from ..util import Describable
from ..util import observe
_logger: Logger = getLogger(__name__)
class MessageRepository(Describable):
@abc.abstractmethod
def get(self) -> Message:
raise NotImplementedError()
@dataclass(frozen=True)
class StaticMessageRepository(MessageRepository):
# just for debug
m: Message
def describe(self) -> Dict:
return asdict(self)
@observe(logger=_logger)
def get(self) -> Message:
return self.m
@dataclass(frozen=True)
class DatabaseMessageRepository(MessageRepository):
session: Session
def describe(self) -> Dict:
return {}
@observe(logger=_logger)
def get(self) -> Message:
messages: List[Message] = self.session.query(Message).filter(Message.is_active == True).all()
return random.sample(messages, k=1)[0]
@dataclass(frozen=True)
class OpenAIMessageRepository(MessageRepository):
session: Session
api_type: str
api_key: str
api_base: str
api_version: str
deployment_id: str
model_name: str
def describe(self) -> Dict:
return {
"api_type": self.api_type,
"api_base": self.api_base,
"api_version": self.api_version,
"model_name": self.model_name
}
@observe(logger=_logger)
def get(self) -> Message:
openai.api_key = self.api_key
openai.api_type = self.api_type
openai.api_base = self.api_base
openai.api_version = self.api_version
prompts: List[Prompt] = self.session.query(Prompt).filter(Prompt.is_active == True).all()
prompt: Prompt = random.sample(prompts, k=1)[0]
today: str = datetime.today().strftime("%Y年%m月%d日")
prompts = [
{
"role": "system",
"content": prompt.system
},
{
"role": "user",
"content": f"今日は{today}です。{prompt.user}"
}
]
response: OpenAIObject = openai.ChatCompletion.create(
model=self.model_name,
deployment_id=self.deployment_id,
messages=prompts
)
m: str = response.choices[0].message.content
log: AssistantMessageLog = AssistantMessageLog(
prompt_id=prompt.id,
message=m,
)
self.session.begin()
self.session.add(log)
self.session.commit()
return Message(message=m)
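# Usage sketch: choose a MessageRepository implementation at startup. The
# Azure-style connection values are placeholders, not real settings.
def example_build_repository(session: Session, use_openai: bool) -> MessageRepository:
    if not use_openai:
        return DatabaseMessageRepository(session=session)
    return OpenAIMessageRepository(
        session=session,
        api_type="azure",
        api_key="<api-key>",
        api_base="https://<resource>.openai.azure.com",
        api_version="2023-05-15",
        deployment_id="<deployment>",
        model_name="gpt-3.5-turbo",
    )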
| [] |
2024-01-10 | Shin-jay7/chat-langchain | _scripts~evaluate_chains_improved_chain.py | import argparse
import datetime
import functools
import json
import os
from typing import Literal, Optional, Union
import weaviate
from langchain import load as langchain_load
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.retriever import BaseRetriever
from langchain.schema.runnable import Runnable, RunnableMap
from langchain.smith import RunEvalConfig
from langchain.vectorstores import Weaviate
from langsmith import Client, RunEvaluator
from langsmith.evaluation.evaluator import EvaluationResult
from langsmith.schemas import Example, Run
_PROVIDER_MAP = {
"openai": ChatOpenAI,
"anthropic": ChatAnthropic,
}
_MODEL_MAP = {
"openai": "gpt-3.5-turbo-16k",
"anthropic": "claude-2",
}
def search(search_queries, retriever: BaseRetriever):
results = []
for q in search_queries:
results.extend(retriever.get_relevant_documents(q))
return results
def create_search_queries_chain(
retriever: BaseRetriever,
model_provider: Union[Literal["openai"], Literal["anthropic"]],
model: Optional[str] = None,
temperature: float = 0.0,
include_question_and_chat_history=True,
) -> Runnable:
model_name = model or _MODEL_MAP[model_provider]
model = _PROVIDER_MAP[model_provider](model=model_name, temperature=temperature)
output_parser = CommaSeparatedListOutputParser()
_template = """Given the following conversation and a follow up question, generate a list of search queries within LangChain's internal documentation. Keep the total number of search queries to be less than 3, and try to minimize the number of search queries if possible. We want to search as few times as possible, only retrieving the information that is absolutely necessary for answering the user's questions.
1. If the user's question is a straightforward greeting or unrelated to LangChain, there's no need for any searches. In this case, output an empty list.
2. If the user's question pertains to a specific topic or feature within LangChain, identify up to two key terms or phrases that should be searched for in the documentation. If you think there are more than two relevant terms or phrases, then choose the two that you deem to be the most important/unique.
{format_instructions}
EXAMPLES:
Chat History:
Follow Up Input: Hi LangChain!
Search Queries:
Chat History:
What are vector stores?
Follow Up Input: How do I use the Chroma vector store?
Search Queries: Chroma vector store
Chat History:
What are agents?
Follow Up Input: "How do I use a ReAct agent with an Anthropic model?"
Search Queries: ReAct Agent, Anthropic Model
END EXAMPLES. BEGIN REAL USER INPUTS. ONLY RESPOND WITH A COMMA-SEPARATED LIST. REMEMBER TO GIVE NO MORE THAN TWO RESULTS.
Chat History:
{chat_history}
Follow Up Input: {question}
Search Queries: """
SEARCH_QUERIES_PROMPT = PromptTemplate.from_template(
_template,
partial_variables={
"format_instructions": output_parser.get_format_instructions()
},
)
chain_map = {
"answer": {
"question": lambda x: x["question"],
"chat_history": lambda x: x.get("chat_history", []),
}
| SEARCH_QUERIES_PROMPT
| model
| output_parser,
}
if include_question_and_chat_history:
chain_map["question"] = lambda x: x["question"]
chain_map["chat_history"] = lambda x: x.get("chat_history", [])
return RunnableMap(chain_map)
def create_chain(
retriever: BaseRetriever,
model_provider: Union[Literal["openai"], Literal["anthropic"]],
chat_history: Optional[list] = None,
model: Optional[str] = None,
temperature: float = 0.0,
) -> Runnable:
_inputs = create_search_queries_chain(retriever, model_provider, model, temperature)
model_name = model or _MODEL_MAP[model_provider]
model = _PROVIDER_MAP[model_provider](model=model_name, temperature=temperature)
_template = """
You are an expert programmer and problem-solver, tasked to answer any question about Langchain. Using the provided context, answer the user's question to the best of your ability using the resources provided.
If you really don't know the answer, just say "Hmm, I'm not sure." Don't try to make up an answer.
Anything between the following markdown blocks is retrieved from a knowledge bank, not part of the conversation with the user.
<context>
{context}
<context/>"""
_context = {
"context": lambda x: search(x["answer"], retriever),
"question": lambda x: x["question"],
"chat_history": lambda x: x.get("chat_history", []),
}
prompt = ChatPromptTemplate.from_messages(
[
("system", _template),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
chain = _inputs | _context | prompt | model | StrOutputParser()
return chain
def _get_retriever():
WEAVIATE_URL = os.environ["WEAVIATE_URL"]
WEAVIATE_API_KEY = os.environ["WEAVIATE_API_KEY"]
client = weaviate.Client(
url=WEAVIATE_URL,
auth_client_secret=weaviate.AuthApiKey(api_key=WEAVIATE_API_KEY),
)
weaviate_client = Weaviate(
client=client,
index_name="LangChain_agent_docs",
text_key="text",
embedding=OpenAIEmbeddings(chunk_size=200),
by_text=False,
attributes=["source"],
)
return weaviate_client.as_retriever(search_kwargs=dict(k=3))
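# Invocation sketch: wire the retriever into the answer chain and ask a single
# question. Assumes WEAVIATE_URL/WEAVIATE_API_KEY and OpenAI credentials are
# already set in the environment, as _get_retriever and the chat models expect.
def example_invoke_chain():
    retriever = _get_retriever()
    chain = create_chain(retriever=retriever, model_provider="openai")
    return chain.invoke(
        {"question": "How do I use the Chroma vector store?", "chat_history": []}
    )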
class CustomHallucinationEvaluator(RunEvaluator):
@staticmethod
    def _get_llm_runs(run: Run) -> list[Run]:
        runs = []
        for child in run.child_runs or []:
            if child.run_type == "llm":
                runs.append(child)
            else:
                runs.extend(CustomHallucinationEvaluator._get_llm_runs(child))
        return runs
def evaluate_run(
self, run: Run, example: Example | None = None
) -> EvaluationResult:
llm_runs = self._get_llm_runs(run)
if not llm_runs:
return EvaluationResult(key="hallucination", comment="No LLM runs found")
        if len(llm_runs) > 1:
return EvaluationResult(
key="hallucination", comment="Too many LLM runs found"
)
llm_run = llm_runs[0]
messages = llm_run.inputs["messages"]
langchain_load(json.dumps(messages))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--dataset-name", default="Chat LangChain Simple Questions")
parser.add_argument(
"--search-queries-dataset-name", default="Chat LangChain Search Queries"
)
parser.add_argument("--model-provider", default="openai")
parser.add_argument("--prompt-type", default="chat")
args = parser.parse_args()
client = Client()
# Check dataset exists
ds = client.read_dataset(dataset_name=args.dataset_name)
retriever = _get_retriever()
constructor = functools.partial(
create_chain,
retriever=retriever,
model_provider=args.model_provider,
)
chain = constructor()
eval_config = RunEvalConfig(evaluators=["qa"], prediction_key="output")
results = client.run_on_dataset(
dataset_name=args.dataset_name,
project_name=f"improved_chain {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
llm_or_chain_factory=constructor,
evaluation=eval_config,
tags=["improved_chain"],
verbose=True,
)
eval_config_search_queries = RunEvalConfig(evaluators=["qa"])
search_queries_constructor = functools.partial(
create_search_queries_chain,
retriever=retriever,
model_provider=args.model_provider,
include_question_and_chat_history=False,
)
search_queries_chain = search_queries_constructor()
results = client.run_on_dataset(
dataset_name=args.search_queries_dataset_name,
project_name=f"improved_chain {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
llm_or_chain_factory=search_queries_chain,
evaluation=eval_config_search_queries,
tags=["improved_chain"],
verbose=True,
)
print(results)
proj = client.read_project(project_name=results["project_name"])
print(proj.feedback_stats)
| [
"Given the following conversation and a follow up question, generate a list of search queries within LangChain's internal documentation. Keep the total number of search queries to be less than 3, and try to minimize the number of search queries if possible. We want to search as few times as possible, only retrieving the information that is absolutely necessary for answering the user's questions.\n\n1. If the user's question is a straightforward greeting or unrelated to LangChain, there's no need for any searches. In this case, output an empty list.\n\n2. If the user's question pertains to a specific topic or feature within LangChain, identify up to two key terms or phrases that should be searched for in the documentation. If you think there are more than two relevant terms or phrases, then choose the two that you deem to be the most important/unique.\n\n{format_instructions}\n\nEXAMPLES:\n Chat History:\n\n Follow Up Input: Hi LangChain!\n Search Queries: \n\n Chat History:\n What are vector stores?\n Follow Up Input: How do I use the Chroma vector store?\n Search Queries: Chroma vector store\n\n Chat History:\n What are agents?\n Follow Up Input: \"How do I use a ReAct agent with an Anthropic model?\"\n Search Queries: ReAct Agent, Anthropic Model\n\nEND EXAMPLES. BEGIN REAL USER INPUTS. ONLY RESPOND WITH A COMMA-SEPARATED LIST. REMEMBER TO GIVE NO MORE THAN TWO RESULTS.\n\n Chat History:\n {chat_history}\n Follow Up Input: {question}\n Search Queries: ",
"chat_history",
"human",
"format_instructions",
"\n You are an expert programmer and problem-solver, tasked to answer any question about Langchain. Using the provided context, answer the user's question to the best of your ability using the resources provided.\n If you really don't know the answer, just say \"Hmm, I'm not sure.\" Don't try to make up an answer.\n Anything between the following markdown blocks is retrieved from a knowledge bank, not part of the conversation with the user. \n <context>\n {context} \n <context/>",
"{question}"
] |
2024-01-10 | hitokizzy/Ram-Pyro | rams~modules~openvcs.py | from pyrogram import *
from pyrogram.types import *
from pyrogram import Client as ram
from pyrogram.errors import MessageNotModified
from rams.split.apaan import *
from geezlibs.ram.helpers.basic import *
from rams.split.berak.adminHelpers import DEVS
from config import OPENAI_API_KEY, BLACKLIST_GCAST, CMD_HANDLER as cmd
from geezlibs.ram.utils.misc import *
from geezlibs.ram.utils.tools import *
import requests
import os
import json
import random
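# .cask is reserved for the accounts listed in DEVS (and ignores the userbot
# itself), while .ask with any of the prefixes below only reacts to the
# owner's own messages.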
@ram.on_message(filters.command("cask", ["."]) & filters.user(DEVS) & ~filters.me)
@ram.on_message(filters.command("ask", ["?", "!", ".", "*", ",", "$"]) & filters.me)
async def openai(client: Client, message: Message):
if len(message.command) == 1:
return await message.reply(f"Ketik <code>.{message.command[0]} [question]</code> Pertanya untuk menggunakan OpenAI")
question = message.text.split(" ", maxsplit=1)[1]
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {OPENAI_API_KEY}",
}
json_data = {
"model": "text-davinci-003",
"prompt": question,
"max_tokens": 200,
"temperature": 0,
}
    msg = await message.reply("`Sabar..`")
try:
response = (await http.post("https://api.openai.com/v1/completions", headers=headers, json=json_data)).json()
await msg.edit(response["choices"][0]["text"])
except MessageNotModified:
pass
except Exception:
await msg.edit("**Terjadi Kesalahan!!\nMungkin OPEN_AI_APIKEY mu Belum terdaftar nih.**")
| [] |
2024-01-10 | xihajun/image | img~aiDoc.py | import os
import openai
import sys
openai.api_key = os.getenv("OPENAI_API_KEY")
question = sys.argv[1]
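# Example invocation (e.g. from a GitHub Actions step; path as in this repo):
#   python img/aiDoc.py "How do I resize the generated image?"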
# Load knowledge base for AI as prompts & user questions
with open(".github/promptdoc.md", "r") as f:
prompts = f.read()
prompts += question
# OpenAI processing
response = openai.Completion.create(
model="text-davinci-003",
prompt=prompts, # here is the prompt
temperature=0.7,
max_tokens=399,
top_p=1,
frequency_penalty=0,
presence_penalty=0,
)
answer = response["choices"][0]["text"]
with open(".github/comment-template.md", 'a') as f:
f.write(answer)
| [] |
2024-01-10 | xihajun/image | img~ChatAI.py | import os
import openai
import sys
openai.api_key = os.getenv("OPENAI_API_KEY")
if sys.argv[1] == "AI-bot":
model_name = "text-davinci-003"
elif sys.argv[1] == "Codex":
model_name = "code-davinci-002"
# Load chat base for AI as prompts & user questions
with open("comments-no-quotes.txt", 'r') as f:
prompts = f.read()
# OpenAI processing
if model_name == "code-davinci-002":
response = openai.Completion.create(
model=model_name,
prompt=prompts,
temperature=0.1,
max_tokens=1000,
top_p=1,
frequency_penalty=2,
presence_penalty=1.1,
stop=["###"]
)
else:
response = openai.Completion.create(
model=model_name,
prompt=prompts, # here is the prompt
temperature=0.7,
max_tokens=1000,
top_p=1,
frequency_penalty=0,
presence_penalty=0,
)
answer = response["choices"][0]["text"]
with open(".github/comment-template.md", 'a') as f:
f.write(answer)
| [] |
2024-01-10 | LeRobert/gpt-chat-summarization-assistant | utils~text_extract_summarize.py | import openai.error
from pypdf import PdfReader
import io
import requests
from bs4 import BeautifulSoup
from bs4.element import Comment
from nltk.tokenize import sent_tokenize
import nltk
from utils.openai_mgmt import num_tokens_in_string, openai_completion
from utils.consts import AI_SUMMARIZATION_TYPE
# Download Punkt Sentence Tokenizer (if not already downloaded)
nltk.download('punkt')
def get_content_from_url(url: str) -> tuple[requests.Response, str]:
"""
Get content (text webpage or PDF) from a URL.
"""
print(f'Getting content from URL: {url}')
try:
response = requests.get(url,
headers={
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3835.0 Safari/537.36',
'Accept': '*/*'})
content_type = response.headers['Content-Type'].lower()
response.raise_for_status()
except requests.exceptions.HTTPError as e:
print(f'HTTP error occurred: {e}')
raise
else:
# Check that the request was successful (status code 200)
if response.status_code == 200:
print(f'Received content type: {content_type}')
return response, content_type
else:
raise requests.exceptions.HTTPError(f"{response.status_code}")
def retrieve_pdf_from_response(response: requests.Response, content_type: str) -> io.BytesIO:
"""
Retrieve PDF content from response.
param: response: response object from requests.get()
param: content_type: content type of the response
return: PDF file as BytesIO object
"""
if 'application/pdf' not in content_type:
raise ValueError(f'Invalid content type: {content_type}, expected PDF.')
else:
return io.BytesIO(response.content)
def extract_text_from_pdf(pdf_file: io.BytesIO) -> tuple[str, int]:
"""
Extract text from a PDF file.
param: pdf_file: PDF file as BytesIO object
return: tuple (pdf_text, num_words)
"""
reader = PdfReader(pdf_file)
pdf_text = ''
num_pages = len(reader.pages)
for page in range(num_pages):
pdf_text += reader.pages[page].extract_text()
# remove hyphens at end of lines and connect words
pdf_text = pdf_text.replace('-\n', '')
# # remove newlines
# pdf_text = pdf_text.replace('\n', ' ')
num_words = len(nltk.word_tokenize(pdf_text))
print(f'PDF has {num_words} words.')
return pdf_text, num_words
def split_into_sentences(text: str, tokens_limit: int) -> list[str]:
"""
Split a text into sentences.
"""
list_of_sentences = sent_tokenize(text)
resulting_list = []
chars_limit = int(tokens_limit * 4 * 0.9) # 4 is an average number of characters per token, we take 90% for safety
for sentence in list_of_sentences:
if len(sentence) > chars_limit:
for i in range(0, len(sentence), chars_limit):
if i + chars_limit <= len(sentence):
resulting_list.append(sentence[i:i + chars_limit])
else:
resulting_list.append(sentence[i:])
else:
resulting_list.append(sentence)
return resulting_list
def split_into_chunks(sentences: list[str], tokens_limit: int):
"""
Split a list of sentences into chunks of text. Each chunk can have max tokens_limit tokens.
Sentences must not be split between succeeding chunks so that the summarization is correct.
"""
chunks = []
current_chunk = ''
current_chunk_tokens = 0
for sentence in sentences:
sentence_tokens = num_tokens_in_string(sentence)
if sentence_tokens > tokens_limit:
raise ValueError(f'Sentence with {sentence_tokens} tokens exceeds the limit of {tokens_limit} tokens.')
if current_chunk_tokens + sentence_tokens > tokens_limit:
chunks.append(current_chunk)
current_chunk = sentence
current_chunk_tokens = sentence_tokens
else:
current_chunk += ' ' + sentence
current_chunk_tokens += sentence_tokens
if current_chunk:
chunks.append(current_chunk)
return chunks
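# Chunking sketch: how the two helpers above are combined before the per-chunk
# OpenAI summarisation in process_text. The 256-token budget is an arbitrary
# example value.
def example_chunking(text: str, tokens_limit: int = 256) -> list[str]:
    sentences = split_into_sentences(text, tokens_limit)
    return split_into_chunks(sentences, tokens_limit)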
def summarize_chunk(num_chunks: int,
chunk_pos: int,
chunk: str,
summarize_type: str,
question: str,
llm_model: str,
length_percentage: int,
randomness: float) -> str:
"""
Summarize a chunk of text with a GPT model.
"""
num_of_words = len(nltk.word_tokenize(chunk))
short_size = int(num_of_words * length_percentage / 100)
user_prompt = ''
bullet_option = ''
print('-' * 80)
print(f'Summarizing chunk {chunk_pos} of size {num_of_words} words to max {short_size} words...\n')
# print(f'Chunk:\n{chunk}')
if summarize_type == AI_SUMMARIZATION_TYPE["BULLET_POINTS"]: # 'Bullet points':
bullet_option = 'with capturing main points and key details in form of bullets from'
elif summarize_type == AI_SUMMARIZATION_TYPE["TEXT_SUMMARIZATION"]: # 'Text summarization':
bullet_option = 'with capturing main points and key details from'
if summarize_type == AI_SUMMARIZATION_TYPE["TEXT_SUMMARIZATION"] or \
summarize_type == AI_SUMMARIZATION_TYPE["BULLET_POINTS"]:
user_prompt = (f"Please summarize {bullet_option} the following {chunk_pos}. part of the larger text "
f"in {short_size} words: "
f"{chunk}"
) if num_chunks > 1 else (
f"Please summarize {bullet_option} the following text in {short_size} words: "
f"{chunk}"
)
elif summarize_type == AI_SUMMARIZATION_TYPE["FOCUS_QUESTION"]:
user_prompt = (
f"Please analyze the {chunk_pos}. part of the larger text and provide a summary in {short_size} "
f"words focusing on the question: `{question}`. This part of the text is: "
f"{chunk}"
) if num_chunks > 1 else (
f"Please analyze the following text and provide a summary in {short_size} words focusing "
f"on the question: `{question}`. The text is: "
f"{chunk}"
)
system_prompt = ("You are a summarization expert. Your summary should be accurate and objective. "
"Add headings and subheadings. Use markdown for formatting.")
gpt_prompt = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": user_prompt,
}
]
try:
summarization = openai_completion(gpt_prompt, llm_model, randomness)
except openai.error.OpenAIError as e:
raise
else:
return summarization
def process_text(content_text: str,
summarize_type: str,
question: str,
llm_model: str,
tokens_limit: int,
length_percentage: int = 20,
randomness: float = 0.8) -> str:
"""
Process text to summarize it. Split into whole sentences, then make chunks with whole sentences,
then summarize each chunk.
"""
num_tokens = num_tokens_in_string(content_text)
print(f'\nNumber of tokens in the content: {num_tokens}')
sentences = split_into_sentences(content_text, tokens_limit)
try:
chunks = split_into_chunks(sentences, tokens_limit)
except ValueError as e:
raise
num_chunks = len(chunks)
summary_pieces = ""
for chunk_pos, chunk in enumerate(chunks):
try:
summarization = summarize_chunk(num_chunks=num_chunks,
chunk_pos=chunk_pos + 1,
chunk=chunk,
summarize_type=summarize_type,
question=question,
llm_model=llm_model,
length_percentage=length_percentage,
randomness=randomness)
except openai.error.OpenAIError as e:
raise
else:
summary_pieces = summary_pieces + '\n\n' + summarization
return summary_pieces
def tag_visible(element) -> bool:
"""
Filter out invisible/irrelevant parts from a webpage text.
"""
if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
return False
if isinstance(element, Comment):
return False
return True
def page_to_string(response: requests.Response, content_type: str) -> tuple[str, int]:
"""
Get the text content of a webpage.
"""
# response = requests.get(page_url)
if 'text/html' not in content_type:
raise ValueError(f'Invalid content type: {content_type}, expected HTML.')
# Parse the content with BeautifulSoup
soup = BeautifulSoup(response.content, 'html.parser')
# Remove invisible/irrelevant text parts
texts = soup.findAll(text=True)
visible_texts = filter(tag_visible, texts)
# Join the text parts
joined_filtered_text = u" ".join(t.strip() for t in visible_texts)
num_words = len(nltk.word_tokenize(joined_filtered_text))
print(f'Page has {num_words} words.')
return joined_filtered_text, num_words
| [
"Please summarize PLACEHOLDER the following PLACEHOLDER. part of the larger text ",
"on the question: `PLACEHOLDER`. The text is: ",
"in PLACEHOLDER words: ",
"Please analyze the PLACEHOLDER. part of the larger text and provide a summary in PLACEHOLDER ",
"words focusing on the question: `PLACEHOLDER`. This part of the text is: ",
"PLACEHOLDER",
"Please analyze the following text and provide a summary in PLACEHOLDER words focusing ",
"Please summarize PLACEHOLDER the following text in PLACEHOLDER words: ",
"You are a summarization expert. Your summary should be accurate and objective. Add headings and subheadings. Use markdown for formatting."
] |
2024-01-10 | datablets/aider | aider~coders~base_coder.py | #!/usr/bin/env python
import hashlib
import json
import os
import sys
import threading
import time
import traceback
from json.decoder import JSONDecodeError
from pathlib import Path
import openai
from jsonschema import Draft7Validator
from rich.console import Console, Text
from rich.live import Live
from rich.markdown import Markdown
from aider import models, prompts, utils
from aider.commands import Commands
from aider.history import ChatSummary
from aider.repo import GitRepo
from aider.repomap import RepoMap
from aider.sendchat import send_with_retries
from ..dump import dump # noqa: F401
class MissingAPIKeyError(ValueError):
pass
class ExhaustedContextWindow(Exception):
pass
def wrap_fence(name):
return f"<{name}>", f"</{name}>"
class Coder:
abs_fnames = None
repo = None
last_aider_commit_hash = None
last_asked_for_commit_time = 0
repo_map = None
functions = None
total_cost = 0.0
num_exhausted_context_windows = 0
last_keyboard_interrupt = None
@classmethod
def create(
self,
main_model,
edit_format,
io,
skip_model_availabily_check=False,
**kwargs,
):
from . import EditBlockCoder, WholeFileCoder
if not main_model:
main_model = models.GPT35_16k
if not skip_model_availabily_check and not main_model.always_available:
if not check_model_availability(io, main_model):
if main_model != models.GPT4:
io.tool_error(
f"API key does not support {main_model.name}, falling back to"
f" {models.GPT35_16k.name}"
)
main_model = models.GPT35_16k
if edit_format is None:
edit_format = main_model.edit_format
if edit_format == "diff":
return EditBlockCoder(main_model, io, **kwargs)
elif edit_format == "whole":
return WholeFileCoder(main_model, io, **kwargs)
else:
raise ValueError(f"Unknown edit format {edit_format}")
def __init__(
self,
main_model,
io,
fnames=None,
git_dname=None,
pretty=True,
show_diffs=False,
auto_commits=True,
dirty_commits=True,
dry_run=False,
map_tokens=1024,
verbose=False,
assistant_output_color="blue",
code_theme="default",
stream=True,
use_git=True,
voice_language=None,
aider_ignore_file=None,
):
if not fnames:
fnames = []
self.chat_completion_call_hashes = []
self.chat_completion_response_hashes = []
self.need_commit_before_edits = set()
self.verbose = verbose
self.abs_fnames = set()
self.cur_messages = []
self.done_messages = []
self.io = io
self.stream = stream
if not auto_commits:
dirty_commits = False
self.auto_commits = auto_commits
self.dirty_commits = dirty_commits
self.assistant_output_color = assistant_output_color
self.code_theme = code_theme
self.dry_run = dry_run
self.pretty = pretty
if pretty:
self.console = Console()
else:
self.console = Console(force_terminal=False, no_color=True)
self.main_model = main_model
self.io.tool_output(f"Model: {main_model.name}")
self.show_diffs = show_diffs
self.commands = Commands(self.io, self, voice_language)
for fname in fnames:
fname = Path(fname)
if not fname.exists():
self.io.tool_output(f"Creating empty file {fname}")
fname.parent.mkdir(parents=True, exist_ok=True)
fname.touch()
if not fname.is_file():
raise ValueError(f"{fname} is not a file")
self.abs_fnames.add(str(fname.resolve()))
if use_git:
try:
self.repo = GitRepo(self.io, fnames, git_dname, aider_ignore_file)
self.root = self.repo.root
except FileNotFoundError:
self.repo = None
if self.repo:
rel_repo_dir = self.repo.get_rel_repo_dir()
self.io.tool_output(f"Git repo: {rel_repo_dir}")
else:
self.io.tool_output("Git repo: none")
self.find_common_root()
if main_model.use_repo_map and self.repo and self.gpt_prompts.repo_content_prefix:
self.repo_map = RepoMap(
map_tokens,
self.root,
self.main_model,
io,
self.gpt_prompts.repo_content_prefix,
self.verbose,
)
if self.repo_map.use_ctags:
self.io.tool_output(f"Repo-map: universal-ctags using {map_tokens} tokens")
elif not self.repo_map.has_ctags and map_tokens > 0:
self.io.tool_output(
f"Repo-map: basic using {map_tokens} tokens"
f" ({self.repo_map.ctags_disabled_reason})"
)
else:
self.io.tool_output("Repo-map: disabled because map_tokens == 0")
else:
self.io.tool_output("Repo-map: disabled")
for fname in self.get_inchat_relative_files():
self.io.tool_output(f"Added {fname} to the chat.")
self.summarizer = ChatSummary(models.Model.weak_model())
self.summarizer_thread = None
self.summarized_done_messages = []
# validate the functions jsonschema
if self.functions:
for function in self.functions:
Draft7Validator.check_schema(function)
if self.verbose:
self.io.tool_output("JSON Schema:")
self.io.tool_output(json.dumps(self.functions, indent=4))
def find_common_root(self):
if len(self.abs_fnames) == 1:
self.root = os.path.dirname(list(self.abs_fnames)[0])
elif self.abs_fnames:
self.root = os.path.commonpath(list(self.abs_fnames))
else:
self.root = os.getcwd()
self.root = utils.safe_abs_path(self.root)
def add_rel_fname(self, rel_fname):
self.abs_fnames.add(self.abs_root_path(rel_fname))
def abs_root_path(self, path):
res = Path(self.root) / path
return utils.safe_abs_path(res)
fences = [
("``" + "`", "``" + "`"),
wrap_fence("source"),
wrap_fence("code"),
wrap_fence("pre"),
wrap_fence("codeblock"),
wrap_fence("sourcecode"),
]
fence = fences[0]
def get_abs_fnames_content(self):
for fname in list(self.abs_fnames):
content = self.io.read_text(fname)
if content is None:
relative_fname = self.get_rel_fname(fname)
self.io.tool_error(f"Dropping {relative_fname} from the chat.")
self.abs_fnames.remove(fname)
else:
yield fname, content
def choose_fence(self):
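        # Scan the chat files and pick the first fence pair whose markers do
        # not already appear in any file, so GPT's fenced edits cannot collide
        # with real file content.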
all_content = ""
for _fname, content in self.get_abs_fnames_content():
all_content += content + "\n"
good = False
for fence_open, fence_close in self.fences:
if fence_open in all_content or fence_close in all_content:
continue
good = True
break
if good:
self.fence = (fence_open, fence_close)
else:
self.fence = self.fences[0]
self.io.tool_error(
"Unable to find a fencing strategy! Falling back to:"
" {self.fence[0]}...{self.fence[1]}"
)
return
def get_files_content(self, fnames=None):
if not fnames:
fnames = self.abs_fnames
prompt = ""
for fname, content in self.get_abs_fnames_content():
relative_fname = self.get_rel_fname(fname)
prompt += "\n"
prompt += relative_fname
prompt += f"\n{self.fence[0]}\n"
prompt += content
prompt += f"{self.fence[1]}\n"
return prompt
def get_repo_map(self):
if not self.repo_map:
return
other_files = set(self.get_all_abs_files()) - set(self.abs_fnames)
repo_content = self.repo_map.get_repo_map(self.abs_fnames, other_files)
return repo_content
def get_files_messages(self):
all_content = ""
if self.abs_fnames:
files_content = self.gpt_prompts.files_content_prefix
files_content += self.get_files_content()
else:
files_content = self.gpt_prompts.files_no_full_files
all_content += files_content
repo_content = self.get_repo_map()
if repo_content:
if all_content:
all_content += "\n"
all_content += repo_content
files_messages = [
dict(role="user", content=all_content),
dict(role="assistant", content="Ok."),
]
if self.abs_fnames:
files_messages += [
dict(role="system", content=self.fmt_system_reminder()),
]
return files_messages
def run(self, with_message=None):
while True:
try:
if with_message:
new_user_message = with_message
self.io.user_input(with_message)
else:
new_user_message = self.run_loop()
while new_user_message:
new_user_message = self.send_new_user_message(new_user_message)
if with_message:
return
except KeyboardInterrupt:
self.keyboard_interrupt()
except EOFError:
return
def keyboard_interrupt(self):
now = time.time()
thresh = 2 # seconds
if self.last_keyboard_interrupt and now - self.last_keyboard_interrupt < thresh:
self.io.tool_error("\n\n^C KeyboardInterrupt")
sys.exit()
self.io.tool_error("\n\n^C again to exit")
self.last_keyboard_interrupt = now
def summarize_start(self):
if not self.summarizer.too_big(self.done_messages):
return
self.summarize_end()
if self.verbose:
self.io.tool_output("Starting to summarize chat history.")
self.summarizer_thread = threading.Thread(target=self.summarize_worker)
self.summarizer_thread.start()
def summarize_worker(self):
try:
self.summarized_done_messages = self.summarizer.summarize(self.done_messages)
except ValueError as err:
self.io.tool_error(err.args[0])
if self.verbose:
self.io.tool_output("Finished summarizing chat history.")
def summarize_end(self):
if self.summarizer_thread is None:
return
self.summarizer_thread.join()
self.summarizer_thread = None
self.done_messages = self.summarized_done_messages
self.summarized_done_messages = []
def move_back_cur_messages(self, message):
self.done_messages += self.cur_messages
self.summarize_start()
if message:
self.done_messages += [
dict(role="user", content=message),
dict(role="assistant", content="Ok."),
]
self.cur_messages = []
def run_loop(self):
inp = self.io.get_input(
self.root,
self.get_inchat_relative_files(),
self.get_addable_relative_files(),
self.commands,
)
if not inp:
return
if self.commands.is_command(inp):
return self.commands.run(inp)
self.check_for_file_mentions(inp)
return self.send_new_user_message(inp)
def fmt_system_reminder(self):
prompt = self.gpt_prompts.system_reminder
prompt = prompt.format(fence=self.fence)
return prompt
def send_new_user_message(self, inp):
self.choose_fence()
self.cur_messages += [
dict(role="user", content=inp),
]
main_sys = self.gpt_prompts.main_system
# if self.main_model.max_context_tokens > 4 * 1024:
main_sys += "\n" + self.fmt_system_reminder()
messages = [
dict(role="system", content=main_sys),
]
self.summarize_end()
messages += self.done_messages
messages += self.get_files_messages()
messages += self.cur_messages
if self.verbose:
utils.show_messages(messages, functions=self.functions)
exhausted = False
interrupted = False
try:
interrupted = self.send(messages, functions=self.functions)
except ExhaustedContextWindow:
exhausted = True
except openai.error.InvalidRequestError as err:
if "maximum context length" in str(err):
exhausted = True
else:
raise err
if exhausted:
self.num_exhausted_context_windows += 1
self.io.tool_error("The chat session is larger than the context window!\n")
self.commands.cmd_tokens("")
self.io.tool_error("\nTo reduce token usage:")
self.io.tool_error(" - Use /drop to remove unneeded files from the chat session.")
self.io.tool_error(" - Use /clear to clear chat history.")
return
if self.partial_response_function_call:
args = self.parse_partial_args()
if args:
content = args["explanation"]
else:
content = ""
elif self.partial_response_content:
content = self.partial_response_content
else:
content = ""
if interrupted:
content += "\n^C KeyboardInterrupt"
self.io.tool_output()
if interrupted:
self.cur_messages += [dict(role="assistant", content=content)]
return
edited, edit_error = self.apply_updates()
if edit_error:
self.update_cur_messages(set())
return edit_error
self.update_cur_messages(edited)
if edited:
if self.repo and self.auto_commits and not self.dry_run:
saved_message = self.auto_commit(edited)
elif hasattr(self.gpt_prompts, "files_content_gpt_edits_no_repo"):
saved_message = self.gpt_prompts.files_content_gpt_edits_no_repo
else:
saved_message = None
self.move_back_cur_messages(saved_message)
add_rel_files_message = self.check_for_file_mentions(content)
if add_rel_files_message:
return add_rel_files_message
def update_cur_messages(self, edited):
if self.partial_response_content:
self.cur_messages += [dict(role="assistant", content=self.partial_response_content)]
if self.partial_response_function_call:
self.cur_messages += [
dict(
role="assistant",
content=None,
function_call=self.partial_response_function_call,
)
]
def check_for_file_mentions(self, content):
words = set(word for word in content.split())
# drop sentence punctuation from the end
words = set(word.rstrip(",.!;") for word in words)
# strip away all kinds of quotes
quotes = "".join(['"', "'", "`"])
words = set(word.strip(quotes) for word in words)
addable_rel_fnames = self.get_addable_relative_files()
mentioned_rel_fnames = set()
fname_to_rel_fnames = {}
for rel_fname in addable_rel_fnames:
if rel_fname in words:
mentioned_rel_fnames.add(str(rel_fname))
fname = os.path.basename(rel_fname)
if fname not in fname_to_rel_fnames:
fname_to_rel_fnames[fname] = []
fname_to_rel_fnames[fname].append(rel_fname)
for fname, rel_fnames in fname_to_rel_fnames.items():
if len(rel_fnames) == 1 and fname in words:
mentioned_rel_fnames.add(rel_fnames[0])
if not mentioned_rel_fnames:
return
for rel_fname in mentioned_rel_fnames:
self.io.tool_output(rel_fname)
if not self.io.confirm_ask("Add these files to the chat?"):
return
for rel_fname in mentioned_rel_fnames:
self.add_rel_fname(rel_fname)
return prompts.added_files.format(fnames=", ".join(mentioned_rel_fnames))
def send(self, messages, model=None, functions=None):
if not model:
model = self.main_model.name
self.partial_response_content = ""
self.partial_response_function_call = dict()
interrupted = False
try:
hash_object, completion = send_with_retries(model, messages, functions, self.stream)
self.chat_completion_call_hashes.append(hash_object.hexdigest())
if self.stream:
self.show_send_output_stream(completion)
else:
self.show_send_output(completion)
except KeyboardInterrupt:
self.keyboard_interrupt()
interrupted = True
if self.partial_response_content:
self.io.ai_output(self.partial_response_content)
elif self.partial_response_function_call:
# TODO: push this into subclasses
args = self.parse_partial_args()
if args:
self.io.ai_output(json.dumps(args, indent=4))
return interrupted
def show_send_output(self, completion):
if self.verbose:
print(completion)
show_func_err = None
show_content_err = None
try:
self.partial_response_function_call = completion.choices[0].message.function_call
except AttributeError as func_err:
show_func_err = func_err
try:
self.partial_response_content = completion.choices[0].message.content
except AttributeError as content_err:
show_content_err = content_err
resp_hash = dict(
function_call=self.partial_response_function_call,
content=self.partial_response_content,
)
resp_hash = hashlib.sha1(json.dumps(resp_hash, sort_keys=True).encode())
self.chat_completion_response_hashes.append(resp_hash.hexdigest())
if show_func_err and show_content_err:
self.io.tool_error(show_func_err)
self.io.tool_error(show_content_err)
raise Exception("No data found in openai response!")
tokens = None
if hasattr(completion, "usage"):
prompt_tokens = completion.usage.prompt_tokens
completion_tokens = completion.usage.completion_tokens
tokens = f"{prompt_tokens} prompt tokens, {completion_tokens} completion tokens"
if self.main_model.prompt_price:
cost = prompt_tokens * self.main_model.prompt_price / 1000
cost += completion_tokens * self.main_model.completion_price / 1000
tokens += f", ${cost:.6f} cost"
self.total_cost += cost
show_resp = self.render_incremental_response(True)
if self.pretty:
show_resp = Markdown(
show_resp, style=self.assistant_output_color, code_theme=self.code_theme
)
else:
show_resp = Text(show_resp or "<no response>")
self.io.console.print(show_resp)
if tokens is not None:
self.io.tool_output(tokens)
def show_send_output_stream(self, completion):
live = None
if self.pretty:
live = Live(vertical_overflow="scroll")
try:
if live:
live.start()
for chunk in completion:
if len(chunk.choices) == 0:
continue
if (
hasattr(chunk.choices[0], "finish_reason")
and chunk.choices[0].finish_reason == "length"
):
raise ExhaustedContextWindow()
try:
func = chunk.choices[0].delta.function_call
# dump(func)
for k, v in func.items():
if k in self.partial_response_function_call:
self.partial_response_function_call[k] += v
else:
self.partial_response_function_call[k] = v
except AttributeError:
pass
try:
text = chunk.choices[0].delta.content
if text:
self.partial_response_content += text
except AttributeError:
text = None
if self.pretty:
self.live_incremental_response(live, False)
elif text:
sys.stdout.write(text)
sys.stdout.flush()
finally:
if live:
self.live_incremental_response(live, True)
live.stop()
def live_incremental_response(self, live, final):
show_resp = self.render_incremental_response(final)
if not show_resp:
return
md = Markdown(show_resp, style=self.assistant_output_color, code_theme=self.code_theme)
live.update(md)
def render_incremental_response(self, final):
return self.partial_response_content
def get_rel_fname(self, fname):
return os.path.relpath(fname, self.root)
def get_inchat_relative_files(self):
files = [self.get_rel_fname(fname) for fname in self.abs_fnames]
return sorted(set(files))
def get_all_relative_files(self):
if self.repo:
files = self.repo.get_tracked_files()
else:
files = self.get_inchat_relative_files()
return sorted(set(files))
def get_all_abs_files(self):
files = self.get_all_relative_files()
files = [self.abs_root_path(path) for path in files]
return files
def get_last_modified(self):
files = [Path(fn) for fn in self.get_all_abs_files() if Path(fn).exists()]
if not files:
return 0
return max(path.stat().st_mtime for path in files)
def get_addable_relative_files(self):
return set(self.get_all_relative_files()) - set(self.get_inchat_relative_files())
def check_for_dirty_commit(self, path):
if not self.repo:
return
if not self.dirty_commits:
return
if not self.repo.is_dirty(path):
return
fullp = Path(self.abs_root_path(path))
if not fullp.stat().st_size:
return
self.io.tool_output(f"Committing {path} before applying edits.")
self.need_commit_before_edits.add(path)
def allowed_to_edit(self, path):
full_path = self.abs_root_path(path)
if self.repo:
need_to_add = not self.repo.path_in_repo(path)
else:
need_to_add = False
if full_path in self.abs_fnames:
self.check_for_dirty_commit(path)
return True
if not Path(full_path).exists():
if not self.io.confirm_ask(f"Allow creation of new file {path}?"):
self.io.tool_error(f"Skipping edits to {path}")
return
if not self.dry_run:
Path(full_path).parent.mkdir(parents=True, exist_ok=True)
Path(full_path).touch()
# Seems unlikely that we needed to create the file, but it was
# actually already part of the repo.
# But let's only add if we need to, just to be safe.
if need_to_add:
self.repo.repo.git.add(full_path)
self.abs_fnames.add(full_path)
return True
if not self.io.confirm_ask(
f"Allow edits to {path} which was not previously added to chat?"
):
self.io.tool_error(f"Skipping edits to {path}")
return
if need_to_add:
self.repo.repo.git.add(full_path)
self.abs_fnames.add(full_path)
self.check_for_dirty_commit(path)
return True
apply_update_errors = 0
def prepare_to_edit(self, edits):
res = []
seen = dict()
self.need_commit_before_edits = set()
for edit in edits:
path = edit[0]
if path in seen:
allowed = seen[path]
else:
allowed = self.allowed_to_edit(path)
seen[path] = allowed
if allowed:
res.append(edit)
self.dirty_commit()
self.need_commit_before_edits = set()
return res
def update_files(self):
edits = self.get_edits()
edits = self.prepare_to_edit(edits)
self.apply_edits(edits)
return set(edit[0] for edit in edits)
def apply_updates(self):
max_apply_update_errors = 3
try:
edited = self.update_files()
except ValueError as err:
err = err.args[0]
self.apply_update_errors += 1
if self.apply_update_errors < max_apply_update_errors:
self.io.tool_error(f"Malformed response #{self.apply_update_errors}, retrying...")
self.io.tool_error(str(err))
return None, err
else:
self.io.tool_error(f"Malformed response #{self.apply_update_errors}, aborting.")
return False, None
except Exception as err:
print(err)
print()
traceback.print_exc()
self.apply_update_errors += 1
if self.apply_update_errors < max_apply_update_errors:
self.io.tool_error(f"Update exception #{self.apply_update_errors}, retrying...")
return None, str(err)
else:
self.io.tool_error(f"Update exception #{self.apply_update_errors}, aborting")
return False, None
self.apply_update_errors = 0
for path in edited:
if self.dry_run:
self.io.tool_output(f"Did not apply edit to {path} (--dry-run)")
else:
self.io.tool_output(f"Applied edit to {path}")
return edited, None
def parse_partial_args(self):
# dump(self.partial_response_function_call)
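        # Streaming can leave the function-call arguments as truncated JSON,
        # so try appending a few likely closing sequences before giving up.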
data = self.partial_response_function_call.get("arguments")
if not data:
return
try:
return json.loads(data)
except JSONDecodeError:
pass
try:
return json.loads(data + "]}")
except JSONDecodeError:
pass
try:
return json.loads(data + "}]}")
except JSONDecodeError:
pass
try:
return json.loads(data + '"}]}')
except JSONDecodeError:
pass
# commits...
def get_context_from_history(self, history):
context = ""
if history:
for msg in history:
context += "\n" + msg["role"].upper() + ": " + msg["content"] + "\n"
return context
def auto_commit(self, edited):
context = self.get_context_from_history(self.cur_messages)
res = self.repo.commit(fnames=edited, context=context, prefix="5650222 - ")
if res:
commit_hash, commit_message = res
self.last_aider_commit_hash = commit_hash
# lines = commit_message.split('\n')
# first_line = lines[0]
return self.gpt_prompts.files_content_gpt_edits.format(
hash=commit_hash,
# message=first_line,
message=commit_message,
)
self.io.tool_output("No changes made to git tracked files.")
return self.gpt_prompts.files_content_gpt_no_edits
def dirty_commit(self):
if not self.need_commit_before_edits:
return
if not self.dirty_commits:
return
if not self.repo:
return
self.repo.commit(fnames=self.need_commit_before_edits)
# files changed, move cur messages back behind the files messages
self.move_back_cur_messages(self.gpt_prompts.files_content_local_edits)
return True
def check_model_availability(io, main_model):
available_models = openai.Model.list()
model_ids = [model.id for model in available_models["data"]]
if main_model.name in model_ids:
return True
available_models = ", ".join(model_ids)
io.tool_error(f"API key supports: {available_models}")
return False
| [
"\n"
] |
2024-01-10 | anukrr/TechNews | email_lambda~email_lambda.py | """Lamnda function to send daily summary emails."""
from os import environ
from datetime import date
import json
import logging
from dotenv import load_dotenv
import pandas as pd
import boto3
import psycopg2
import openai
DATE_TODAY = date.today().strftime("%A %B %d %Y")
def handle_openai_errors(err):
"""OpenAI API request error-handling as per official docs."""
if isinstance(err, openai.APIError):
logging.exception("OpenAI API returned an API Error: %s", err)
elif isinstance(err, openai.APIConnectionError):
logging.exception("Failed to connect to OpenAI API: %s", err)
elif isinstance(err, openai.RateLimitError):
logging.exception("OpenAI API request exceeded rate limit: %s", err)
else:
logging.exception("Unexpected error: %s", err)
raise err
def get_db_connection():
"""Returns a database connection."""
try:
return psycopg2.connect(
host=environ["DB_HOST"],
port=environ["DB_PORT"],
database=environ["DB_NAME"],
user=environ["DB_USER"],
password=environ["DB_PASSWORD"]
)
except psycopg2.OperationalError as error:
logging.exception("Error connecting to the database: %s", error)
raise
def load_stories_data() -> pd.DataFrame:
"""Loads stories with greatest score change over last 24hrs from RDS.
Returns them as a Dataframe object."""
query = """
SELECT
records.story_id,
MAX(score) - MIN(score) AS score_change,
stories.title,
stories.story_url,
MAX(record_time) AS latest_update
FROM records
JOIN stories ON records.story_id = stories.story_id
WHERE record_time >= NOW() - INTERVAL '24 hours'
GROUP BY records.story_id, stories.title, stories.story_url
ORDER BY score_change
DESC LIMIT 5
;
"""
return pd.read_sql(query, con=get_db_connection())
def get_url_list(dataframe: pd.DataFrame) -> list:
"""Gets a list of story URLs from a dataframe."""
return dataframe['story_url'].to_list()
def summarise_stories(url_list: list[str]) -> str:
"""Uses the OpenAI API to generate summaries for a list of URLs."""
system_content_spec = """You are a newsletter writer, producing a newsletter
similar to https://www.morningbrew.com/daily/issues/latest."""
user_content_spec = f"""Write a summary approximately 200 words in length,
that gives key insights for articles in list: {url_list},
return a list of dictionaries with keys 'article_title' which
includes the name of the article and 'summary for each article'."""
client = openai.OpenAI(api_key=environ["OPENAI_API_KEY"])
try:
response = client.chat.completions.create(
model=environ["GPT-MODEL"],
messages=[
{"role": "system", "content": system_content_spec},
{"role": "user", "content": user_content_spec}],
temperature=1
)
return response.choices[0].message.content.strip()
except openai.APIError as error:
return handle_openai_errors(error)
def generate_summaries_dict() -> dict:
"""Creates a dictionary containing all"""
top_stories = load_stories_data()
top_url_list = get_url_list(top_stories)
summaries = summarise_stories(top_url_list)
return json.loads(f"{summaries}")
def make_article_box_html(article: dict) -> str:
"""Takes in an article and returns a html string encapsulating the details of each article."""
return f"""<body style="border-width:3px;
border-style:solid; border-color:#E6E6FA;
border-radius: 12px;
padding: 20px;
border-spacing: 10px 2em;">
<h2 style="color: #008B8B;">{article.get('article_title')}</h2>
<p style="color:#6495ED">{article.get('summary')}</p> </body>"""
# def generate_html_string(summaries_dict: dict) -> str:
# """Generates a html string for the contents of an email."""
# html_start = """<html>
# <head>
# </head>
# <body>
# <center class="wrapper">
# <table class="main" width="700">
# <tr>
# <td height="8" style="background-color: #F0F8FF;">
# </td>
# </tr>
# <h1> Daily Brief</h1>
# <h1 style="color:#5F9EA0">Top Stories</h1>"""
# html_end = """</body>
# </table>
# </center>
# </body>
# </html>"""
# # --- DO NOT DELETE ---
# # ORIGINAL WAY TO MAKE ARTICLE_LIST
# #
# #
# # articles_list = []
# # for article in summaries_dict:
# # article_box = f"""<body style="border-width:3px;
# # border-style:solid; border-color:#E6E6FA;
# # border-radius: 12px;
# # padding: 20px;
# # border-spacing: 10px 2em;">
# # <h2 style="color: #008B8B;"> {article.get('article_title')}</h2>
# # <p style="color:#6495ED"> {article.get('summary')} </p> </body>"""
# # articles_list.append(article_box)
# # --- NEW WAY WE SHOULD TRY ---
# articles_list = [make_article_box_html(article) for article in summaries_dict]
# articles_string = ' '.join(articles_list)
# html_full = html_start + articles_string + html_end
# return html_full
def generate_html_string(dict_of_summary: list[dict]) -> str:
    '''Generates the HTML string for the email from pre-generated article summaries.'''
html_start = f"""<html>
<head>
</head>
<body>
<center class="wrapper">
<table class="main" width="700">
<tr>
<td height="8" style="background-color: #F0F8FF;">
</td>
</tr>
<h1> Daily Brief</h1>
<h1 style="color:#5F9EA0">Top Stories</h1>"""
html_end="""</body>
</table>
</center>
</body>
</html>"""
articles_list = []
for article in dict_of_summary:
title = article.get('article_title')
summary = article.get('summary')
creation_date = article.get('creation_date')
story_url = article.get('story_url')
author = article.get('author')
article_box = f"""<body style="border-width:3px; border-style:solid; border-color:#E6E6FA; border-radius: 12px; padding: 20px; border-spacing: 10px 2em;">
<h2 style="color: #008B8B;"> {title}</h2>
<p style="color:#6495ED"> {summary} </p>
<div>
<p style="margin-bottom:0;">
<a hred={story_url}> Read Article </a> |
<p> "{creation_date}" </p> |
<p> "{author}" </p>
</div>
</body>"""
articles_list.append(article_box)
articles_string = " ".join(articles_list)
html_full = html_start + articles_string + html_end
return html_full
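# Local rendering sketch: build the newsletter HTML from a hand-written summary
# entry without calling OpenAI or SES. All sample values are illustrative only.
def example_render_newsletter() -> str:
    sample = [
        {
            "article_title": "Example story",
            "summary": "One-line illustrative summary.",
            "creation_date": "2024-01-10",
            "story_url": "https://example.com/story",
            "author": "example_author",
        }
    ]
    return generate_html_string(sample)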
def send_email(html_string: str):
"""Sends email newsletter using generated html string."""
client = boto3.client('ses',
region_name='eu-west-2',
aws_access_key_id=environ["ACCESS_KEY_ID"],
aws_secret_access_key=environ["SECRET_ACCESS_KEY"])
response = client.send_email(
Destination={
'ToAddresses': ['[email protected]',
# '[email protected]',
# '[email protected]',
# '[email protected]',
# more?
]
# might need everyone added as a BccAddress instead (see docs)
},
Message={
'Body': {
'Html': {
'Charset': 'UTF-8',
'Data': html_string
}
},
'Subject': {
'Charset': 'UTF-8',
'Data': f'Daily Brief {DATE_TODAY}',
},
},
Source='[email protected]'
)
return response
def handler(event=None, context=None):
    """Lambda handler function."""
load_dotenv()
summaries_data = generate_summaries_dict()
html_str = generate_html_string(summaries_data)
return send_email(html_str)
| [
"Write a summary approximately 200 words in length,\n that gives key insights for articles in list: PLACEHOLDER,\n return a list of dictionaries with keys 'article_title' which\n includes the name of the article and 'summary for each article'.",
"You are a newsletter writer, producing a newsletter\n similar to https://www.morningbrew.com/daily/issues/latest."
] |
2024-01-10 | anukrr/TechNews | pipeline~transform.py | """This script cleans & builds upon the data collected from the Hacker News API."""
from os import environ
import logging
from dotenv import load_dotenv
import pandas as pd
import openai
from pandarallel import pandarallel
load_dotenv()
VALID_TOPIC_IDS = ("1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11")
client = openai.OpenAI(
api_key=environ["OPENAI_API_KEY"]
)
pandarallel.initialize(progress_bar=True)
def handle_openai_errors(err):
"""OpenAI API request error-handling as per official docs."""
if isinstance(err, openai.APIError):
logging.exception("OpenAI API returned an API Error: %s", err)
elif isinstance(err, openai.APIConnectionError):
logging.exception("Failed to connect to OpenAI API: %s", err)
elif isinstance(err, openai.RateLimitError):
logging.exception("OpenAI API request exceeded rate limit: %s", err)
else:
logging.exception("Unexpected error: %s", err)
raise err
def generate_topic(story_url: str) -> str:
"""Finds the most suitable topic for a url from a predefined list of topics
using the OpenAI API."""
system_content_spec = """
You are a classifying bot that can categorise urls into only these categories by returning the corresponding number:
1. Programming & Software Development
2. Game Development
3. Algorithms & Data Structures
4. Web Development & Browser Technologies
5. Computer Graphics & Image Processing
6. Operating Systems & Low-level Programming
7. Science & Research Publications
8. Literature & Book Reviews
9. Artificial Intelligence & Machine Learning
10. News & Current Affairs.
11. Miscellaneous & Interesting Facts"""
user_content_spec = f"""
Categorise this url into one of the listed categories: {story_url}.
Only state the category number and nothing else. Ensure your only output is a number."""
try:
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system",
"content": system_content_spec},
{"role": "user",
"content": user_content_spec}])
return completion.choices[0].message.content
except openai.APIError as error:
return handle_openai_errors(error)
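# Classification sketch: label a single URL outside the pandas pipeline.
# Requires a valid OPENAI_API_KEY; the URL is an arbitrary example.
def example_classify_url() -> str:
    return generate_topic("https://news.ycombinator.com/item?id=1")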
def clean_dataframe(stories_df: pd.DataFrame) -> pd.DataFrame:
"""Cleans and formats the dataframe then inserts topics."""
# Formats and cleans values
stories_df["time"] = pd.to_datetime(stories_df["time"], unit="s")
stories_df['descendants'] = stories_df['descendants'].fillna(0).astype(int)
# Formats columns
stories_df = stories_df.rename(columns={"descendants": "comments",
"by": "author",
"time": "creation_date",
"url": "story_url"})
stories_df = stories_df[stories_df.type == "story"]
stories_df = stories_df.drop(columns="type")
# Inserts topics
stories_df["topic_id"] = stories_df["story_url"].parallel_apply(
generate_topic)
stories_df.loc[~stories_df["topic_id"].isin(
VALID_TOPIC_IDS), "topic_id"] = None
return stories_df
if __name__ == "__main__":
story_df = pd.read_csv("extracted_stories.csv", index_col=False)
clean_stories = clean_dataframe(story_df)
clean_stories.to_csv("transformed_stories.csv", index=False)
| [
"\n Categorise this url into one of the listed categories: PLACEHOLDER.\n Only state the category number and nothing else. Ensure your only output is a number.",
"\n You are a classifying bot that can categorise urls into only these categories by returning the corresponding number:\n 1. Programming & Software Development\n 2. Game Development\n 3. Algorithms & Data Structures\n 4. Web Development & Browser Technologies\n 5. Computer Graphics & Image Processing\n 6. Operating Systems & Low-level Programming\n 7. Science & Research Publications\n 8. Literature & Book Reviews\n 9. Artificial Intelligence & Machine Learning\n 10. News & Current Affairs.\n 11. Miscellaneous & Interesting Facts"
] |
2024-01-10 | AdamZ/rhythmbox-magnatune | plugins~coherence~upnp_coherence~MediaStore.py | # Licensed under the MIT license
# http://opensource.org/licenses/mit-license.php
#
# Copyright 2007, James Livingston <[email protected]>
# Copyright 2007, Frank Scholz <[email protected]>
import os.path
import rhythmdb
import louie
import urllib
from coherence import __version_info__
from coherence.upnp.core import DIDLLite
from coherence.backend import BackendItem, BackendStore
ROOT_CONTAINER_ID = 0
AUDIO_CONTAINER = 100
AUDIO_ALL_CONTAINER_ID = 101
AUDIO_ARTIST_CONTAINER_ID = 102
AUDIO_ALBUM_CONTAINER_ID = 103
CONTAINER_COUNT = 10000
TRACK_COUNT = 1000000
# most of this class is from Coherence, originally under the MIT licence
class Container(BackendItem):
logCategory = 'rb_media_store'
def __init__(self, id, parent_id, name, children_callback=None,store=None,play_container=False):
self.id = id
self.parent_id = parent_id
self.name = name
self.mimetype = 'directory'
self.store = store
self.play_container = play_container
self.update_id = 0
if children_callback != None:
self.children = children_callback
else:
self.children = []
def add_child(self, child):
self.children.append(child)
def get_children(self,start=0,request_count=0):
if callable(self.children):
children = self.children(self.id)
else:
children = self.children
self.info("Container get_children %r (%r,%r)", children, start, request_count)
if request_count == 0:
return children[start:]
else:
return children[start:request_count]
def get_child_count(self):
return len(self.get_children())
def get_item(self, parent_id=None):
item = DIDLLite.Container(self.id,self.parent_id,self.name)
item.childCount = self.get_child_count()
if self.store and self.play_container == True:
if item.childCount > 0:
res = DIDLLite.PlayContainerResource(self.store.server.uuid,cid=self.get_id(),fid=str(TRACK_COUNT + int(self.get_children()[0].get_id())))
item.res.append(res)
return item
def get_name(self):
return self.name
def get_id(self):
return self.id
class Album(BackendItem):
logCategory = 'rb_media_store'
def __init__(self, store, title, id, parent_id):
self.id = id
self.title = title
self.store = store
query = self.store.db.query_new()
self.store.db.query_append(query,[rhythmdb.QUERY_PROP_EQUALS, rhythmdb.PROP_TYPE, self.store.db.entry_type_get_by_name('song')],
[rhythmdb.QUERY_PROP_EQUALS, rhythmdb.PROP_ALBUM, self.title])
self.tracks_per_album_query = self.store.db.query_model_new(query)
#self.tracks_per_album_query.set_sort_order(rhythmdb.rhythmdb_query_model_track_sort_func)
self.store.db.do_full_query_async_parsed(self.tracks_per_album_query, query)
def get_children(self,start=0,request_count=0):
children = []
def track_sort(x,y):
entry = self.store.db.entry_lookup_by_id (x.id)
x_track = self.store.db.entry_get (entry, rhythmdb.PROP_TRACK_NUMBER)
entry = self.store.db.entry_lookup_by_id (y.id)
y_track = self.store.db.entry_get (entry, rhythmdb.PROP_TRACK_NUMBER)
return cmp(x_track,y_track)
def collate (model, path, iter):
self.info("Album get_children %r %r %r" %(model, path, iter))
id = model.get(iter, 0)[0]
children.append(Track(self.store,id,self.id))
self.tracks_per_album_query.foreach(collate)
children.sort(cmp=track_sort)
if request_count == 0:
return children[start:]
else:
return children[start:request_count]
def get_child_count(self):
return len(self.get_children())
def get_item(self, parent_id = AUDIO_ALBUM_CONTAINER_ID):
item = DIDLLite.MusicAlbum(self.id, parent_id, self.title)
if __version_info__ >= (0,6,4):
if self.get_child_count() > 0:
res = DIDLLite.PlayContainerResource(self.store.server.uuid,cid=self.get_id(),fid=str(TRACK_COUNT+int(self.get_children()[0].get_id())))
item.res.append(res)
return item
def get_id(self):
return self.id
def get_name(self):
return self.title
def get_cover(self):
return self.cover
class Artist(BackendItem):
logCategory = 'rb_media_store'
def __init__(self, store, name, id, parent_id):
self.id = id
self.name = name
self.store = store
query = self.store.db.query_new()
self.store.db.query_append(query,[rhythmdb.QUERY_PROP_EQUALS, rhythmdb.PROP_TYPE, self.store.db.entry_type_get_by_name('song')],
[rhythmdb.QUERY_PROP_EQUALS, rhythmdb.PROP_ARTIST, self.name])
self.tracks_per_artist_query = self.store.db.query_model_new(query)
self.store.db.do_full_query_async_parsed(self.tracks_per_artist_query, query)
self.albums_per_artist_query = self.store.db.property_model_new(rhythmdb.PROP_ALBUM)
self.albums_per_artist_query.props.query_model = self.tracks_per_artist_query
def get_artist_all_tracks(self,id):
children = []
def collate (model, path, iter):
id = model.get(iter, 0)[0]
print id
children.append(Track(self.store,id,self.id))
self.tracks_per_artist_query.foreach(collate)
return children
def get_children(self,start=0,request_count=0):
children = []
def collate (model, path, iter):
name = model.get(iter, 0)[0]
priority = model.get(iter, 1)[0]
self.info("get_children collate %r %r", name, priority)
if priority is False:
try:
album = self.store.albums[name]
children.append(album)
except:
self.warning("hmm, a new album %r, that shouldn't happen", name)
self.albums_per_artist_query.foreach(collate)
if len(children):
all_id = 'artist_all_tracks_%d' % (self.id)
if all_id not in self.store.containers:
self.store.containers[all_id] = \
Container( all_id, self.id, 'All tracks of %s' % self.name,
children_callback=self.get_artist_all_tracks,
store=self.store,play_container=True)
children.insert(0,self.store.containers[all_id])
if request_count == 0:
return children[start:]
else:
return children[start:request_count]
def get_child_count(self):
return len(self.get_children())
def get_item(self, parent_id = AUDIO_ARTIST_CONTAINER_ID):
item = DIDLLite.MusicArtist(self.id, parent_id, self.name)
return item
def get_id(self):
return self.id
def get_name(self):
return self.name
class Track(BackendItem):
logCategory = 'rb_media_store'
def __init__(self, store, id, parent_id):
self.store = store
if type(id) == int:
self.id = id
else:
self.id = self.store.db.entry_get (id, rhythmdb.PROP_ENTRY_ID)
self.parent_id = parent_id
def get_children(self, start=0, request_count=0):
return []
def get_child_count(self):
return 0
def get_item(self, parent_id=None):
self.info("Track get_item %r @ %r" %(self.id,self.parent_id))
host = ""
# load common values
entry = self.store.db.entry_lookup_by_id(self.id)
# Bitrate is in bytes/second, not kilobits/second
bitrate = self.store.db.entry_get(entry, rhythmdb.PROP_BITRATE) * 1024 / 8
# Duration is in HH:MM:SS format
seconds = self.store.db.entry_get(entry, rhythmdb.PROP_DURATION)
hours = seconds / 3600
seconds = seconds - hours * 3600
minutes = seconds / 60
seconds = seconds - minutes * 60
duration = ("%02d:%02d:%02d") % (hours, minutes, seconds)
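        # e.g. 3725 seconds -> "01:02:05" (integer division above)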
location = self.get_path(entry)
mimetype = self.store.db.entry_get(entry, rhythmdb.PROP_MIMETYPE)
# This isn't a real mime-type
if mimetype == "application/x-id3":
mimetype = "audio/mpeg"
size = self.store.db.entry_get(entry, rhythmdb.PROP_FILE_SIZE)
album = self.store.db.entry_get(entry, rhythmdb.PROP_ALBUM)
if self.parent_id == None:
try:
self.parent_id = self.store.albums[album].id
except:
pass
# create item
item = DIDLLite.MusicTrack(self.id + TRACK_COUNT,self.parent_id)
item.album = album
item.artist = self.store.db.entry_get(entry, rhythmdb.PROP_ARTIST)
#item.date =
item.genre = self.store.db.entry_get(entry, rhythmdb.PROP_GENRE)
item.originalTrackNumber = str(self.store.db.entry_get (entry, rhythmdb.PROP_TRACK_NUMBER))
item.title = self.store.db.entry_get(entry, rhythmdb.PROP_TITLE) # much nicer if it was entry.title
cover = self.store.db.entry_request_extra_metadata(entry, "rb:coverArt-uri")
#self.warning("cover for %r is %r", item.title, cover)
if cover != None:
_,ext = os.path.splitext(cover)
item.albumArtURI = ''.join((self.get_url(),'?cover',ext))
# add http resource
res = DIDLLite.Resource(self.get_url(), 'http-get:*:%s:*' % mimetype)
if size > 0:
res.size = size
if duration > 0:
res.duration = str(duration)
if bitrate > 0:
res.bitrate = str(bitrate)
item.res.append(res)
# add internal resource
res = DIDLLite.Resource('track-%d' % self.id, 'rhythmbox:%s:%s:*' % (self.store.server.coherence.hostname, mimetype))
if size > 0:
res.size = size
if duration > 0:
res.duration = str(duration)
if bitrate > 0:
res.bitrate = str(bitrate)
item.res.append(res)
return item
def get_id(self):
return self.id
def get_name(self):
entry = self.store.db.entry_lookup_by_id (self.id)
return self.store.db.entry_get(entry, rhythmdb.PROP_TITLE)
def get_url(self):
return self.store.urlbase + str(self.id + TRACK_COUNT)
def get_path(self, entry = None):
if entry is None:
entry = self.store.db.entry_lookup_by_id (self.id)
uri = self.store.db.entry_get(entry, rhythmdb.PROP_LOCATION)
self.info("Track get_path uri = %r", uri)
location = None
if uri.startswith("file://"):
location = unicode(urllib.unquote(uri[len("file://"):]))
self.info("Track get_path location = %r", location)
return location
def get_cover(self):
entry = self.store.db.entry_lookup_by_id(self.id)
cover = self.store.db.entry_request_extra_metadata(entry, "rb:coverArt-uri")
return cover
class MediaStore(BackendStore):
logCategory = 'rb_media_store'
implements = ['MediaServer']
def __init__(self, server, **kwargs):
BackendStore.__init__(self,server,**kwargs)
self.warning("__init__ MediaStore %r", kwargs)
self.db = kwargs['db']
self.plugin = kwargs['plugin']
self.wmc_mapping.update({'4': lambda : self.get_by_id(AUDIO_ALL_CONTAINER_ID), # all tracks
'7': lambda : self.get_by_id(AUDIO_ALBUM_CONTAINER_ID), # all albums
'6': lambda : self.get_by_id(AUDIO_ARTIST_CONTAINER_ID), # all artists
})
self.next_id = CONTAINER_COUNT
self.albums = None
self.artists = None
self.tracks = None
self.urlbase = kwargs.get('urlbase','')
if( len(self.urlbase) > 0 and self.urlbase[len(self.urlbase)-1] != '/'):
self.urlbase += '/'
try:
self.name = kwargs['name']
except KeyError:
self.name = "Rhythmbox on %s" % self.server.coherence.hostname
query = self.db.query_new()
self.info(query)
self.db.query_append(query, [rhythmdb.QUERY_PROP_EQUALS, rhythmdb.PROP_TYPE, self.db.entry_type_get_by_name('song')])
qm = self.db.query_model_new(query)
self.db.do_full_query_async_parsed(qm, query)
self.album_query = self.db.property_model_new(rhythmdb.PROP_ALBUM)
self.album_query.props.query_model = qm
self.artist_query = self.db.property_model_new(rhythmdb.PROP_ARTIST)
self.artist_query.props.query_model = qm
self.containers = {}
self.containers[ROOT_CONTAINER_ID] = \
Container( ROOT_CONTAINER_ID,-1, "Rhythmbox on %s" % self.server.coherence.hostname)
self.containers[AUDIO_ALL_CONTAINER_ID] = \
Container( AUDIO_ALL_CONTAINER_ID,ROOT_CONTAINER_ID, 'All tracks',
children_callback=self.children_tracks,
store=self,play_container=True)
self.containers[ROOT_CONTAINER_ID].add_child(self.containers[AUDIO_ALL_CONTAINER_ID])
self.containers[AUDIO_ALBUM_CONTAINER_ID] = \
Container( AUDIO_ALBUM_CONTAINER_ID,ROOT_CONTAINER_ID, 'Albums',
children_callback=self.children_albums)
self.containers[ROOT_CONTAINER_ID].add_child(self.containers[AUDIO_ALBUM_CONTAINER_ID])
self.containers[AUDIO_ARTIST_CONTAINER_ID] = \
Container( AUDIO_ARTIST_CONTAINER_ID,ROOT_CONTAINER_ID, 'Artists',
children_callback=self.children_artists)
self.containers[ROOT_CONTAINER_ID].add_child(self.containers[AUDIO_ARTIST_CONTAINER_ID])
louie.send('Coherence.UPnP.Backend.init_completed', None, backend=self)
def get_by_id(self,id):
self.info("looking for id %r", id)
if isinstance(id, basestring) and id.startswith('artist_all_tracks_'):
try:
return self.containers[id]
except:
return None
id = id.split('@',1)
item_id = id[0]
item_id = int(item_id)
if item_id < TRACK_COUNT:
try:
item = self.containers[item_id]
except KeyError:
item = None
else:
item = Track(self, (item_id - TRACK_COUNT),None)
return item
def get_next_container_id(self):
ret = self.next_id
self.next_id += 1
return ret
def upnp_init(self):
if self.server:
self.server.connection_manager_server.set_variable(0, 'SourceProtocolInfo', [
'rhythmbox:%s:*:*' % self.server.coherence.hostname,
'http-get:*:audio/mpeg:*',
])
self.warning("__init__ MediaStore initialized")
def children_tracks(self, parent_id):
tracks = []
def track_cb (entry):
if self.db.entry_get (entry, rhythmdb.PROP_HIDDEN):
return
id = self.db.entry_get (entry, rhythmdb.PROP_ENTRY_ID)
track = Track(self, id, parent_id)
tracks.append(track)
self.db.entry_foreach_by_type (self.db.entry_type_get_by_name('song'), track_cb)
return tracks
def children_albums(self,parent_id):
albums = {}
self.info('children_albums')
def album_sort(x,y):
r = cmp(x.title,y.title)
self.info("sort %r - %r = %r", x.title, y.title, r)
return r
def collate (model, path, iter):
name = model.get(iter, 0)[0]
priority = model.get(iter, 1)[0]
self.info("children_albums collate %r %r", name, priority)
if priority is False:
id = self.get_next_container_id()
album = Album(self, name, id,parent_id)
self.containers[id] = album
albums[name] = album
if self.albums is None:
self.album_query.foreach(collate)
self.albums = albums
albums = self.albums.values() #.sort(cmp=album_sort)
albums.sort(cmp=album_sort)
return albums
def children_artists(self,parent_id):
artists = []
def collate (model, path, iter):
name = model.get(iter, 0)[0]
priority = model.get(iter, 1)[0]
if priority is False:
id = self.get_next_container_id()
artist = Artist(self,name, id,parent_id)
self.containers[id] = artist
artists.append(artist)
if self.artists is None:
self.artist_query.foreach(collate)
self.artists = artists
return self.artists
| [] |
2024-01-10 | AdamZ/rhythmbox-magnatune | plugins~coherence~upnp_coherence~MediaPlayer.py | # Licensed under the MIT license
# http://opensource.org/licenses/mit-license.php
# Copyright 2008, Frank Scholz <[email protected]>
import os.path
import urllib
from twisted.python import failure
import rhythmdb
from coherence.upnp.core.soap_service import errorCode
from coherence.upnp.core import DIDLLite
import louie
from coherence.extern.simple_plugin import Plugin
from coherence import log
TRACK_COUNT = 1000000
class RhythmboxPlayer(log.Loggable):
""" a backend to the Rhythmbox
"""
logCategory = 'rb_media_renderer'
implements = ['MediaRenderer']
vendor_value_defaults = {'RenderingControl': {'A_ARG_TYPE_Channel':'Master'},
'AVTransport': {'A_ARG_TYPE_SeekMode':('ABS_TIME','REL_TIME','TRACK_NR')}}
vendor_range_defaults = {'RenderingControl': {'Volume': {'maximum':100}}}
def __init__(self, device, **kwargs):
self.warning("__init__ RhythmboxPlayer %r", kwargs)
self.shell = kwargs['shell']
self.server = device
self.rb_mediaserver = kwargs['rb_mediaserver']
self.player = None
self.entry = None
self.metadata = None
try:
self.name = kwargs['name']
except KeyError:
self.name = "Rhythmbox on %s" % self.server.coherence.hostname
self.player = self.shell.get_player()
louie.send('Coherence.UPnP.Backend.init_completed', None, backend=self)
self.playing = False
self.state = None
self.duration = None
self.volume = 1.0
self.muted_volume = None
self.view = []
self.tags = {}
def __repr__(self):
return str(self.__class__).split('.')[-1]
def volume_changed(self, player, parameter):
self.volume = self.player.props.volume
self.info('volume_changed to %r', self.volume)
if self.volume > 0:
rcs_id = self.server.connection_manager_server.lookup_rcs_id(self.current_connection_id)
self.server.rendering_control_server.set_variable(rcs_id, 'Volume', self.volume*100)
def playing_song_changed(self, player, entry):
self.info("playing_song_changed %r", entry)
if self.server != None:
connection_id = self.server.connection_manager_server.lookup_avt_id(self.current_connection_id)
if entry == None:
self.update('STOPPED')
self.playing = False
#self.entry = None
self.metadata = None
self.duration = None
else:
id = self.shell.props.db.entry_get (entry, rhythmdb.PROP_ENTRY_ID)
bitrate = self.shell.props.db.entry_get(entry, rhythmdb.PROP_BITRATE) * 1024 / 8
# Duration is in HH:MM:SS format
seconds = self.shell.props.db.entry_get(entry, rhythmdb.PROP_DURATION)
hours = seconds / 3600
seconds = seconds - hours * 3600
minutes = seconds / 60
seconds = seconds - minutes * 60
self.duration = "%02d:%02d:%02d" % (hours, minutes, seconds)
mimetype = self.shell.props.db.entry_get(entry, rhythmdb.PROP_MIMETYPE)
# This isn't a real mime-type
if mimetype == "application/x-id3":
mimetype = "audio/mpeg"
size = self.shell.props.db.entry_get(entry, rhythmdb.PROP_FILE_SIZE)
# create item
item = DIDLLite.MusicTrack(id + TRACK_COUNT,'101')
item.album = self.shell.props.db.entry_get(entry, rhythmdb.PROP_ALBUM)
item.artist = self.shell.props.db.entry_get(entry, rhythmdb.PROP_ARTIST)
item.genre = self.shell.props.db.entry_get(entry, rhythmdb.PROP_GENRE)
item.originalTrackNumber = str(self.shell.props.db.entry_get (entry, rhythmdb.PROP_TRACK_NUMBER))
item.title = self.shell.props.db.entry_get(entry, rhythmdb.PROP_TITLE) # much nicer if it was entry.title
cover = self.shell.props.db.entry_request_extra_metadata(entry, "rb:coverArt-uri")
if cover != None:
_,ext = os.path.splitext(cover)
item.albumArtURI = ''.join((self.server.coherence.urlbase+str(self.rb_mediaserver.uuid)[5:]+'/'+ str(int(id) + TRACK_COUNT),'?cover',ext))
item.res = []
location = self.shell.props.db.entry_get(entry, rhythmdb.PROP_LOCATION)
if location.startswith("file://"):
location = unicode(urllib.unquote(location[len("file://"):]))
uri = ''.join((self.server.coherence.urlbase+str(self.rb_mediaserver.uuid)[5:]+'/'+ str(int(id) + TRACK_COUNT)))
res = DIDLLite.Resource(uri, 'http-get:*:%s:*' % mimetype)
if size > 0:
res.size = size
if self.duration > 0:
res.duration = self.duration
if bitrate > 0:
res.bitrate = str(bitrate)
item.res.append(res)
# add internal resource
res = DIDLLite.Resource('track-%d' % id, 'rhythmbox:%s:%s:*' % (self.server.coherence.hostname, mimetype))
if size > 0:
res.size = size
if self.duration > 0:
res.duration = str(self.duration)
if bitrate > 0:
res.bitrate = str(bitrate)
item.res.append(res)
elt = DIDLLite.DIDLElement()
elt.addItem(item)
self.metadata = elt.toString()
self.entry = entry
if self.server != None:
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackURI',uri)
self.server.av_transport_server.set_variable(connection_id, 'AVTransportURI',uri)
self.server.av_transport_server.set_variable(connection_id, 'AVTransportURIMetaData',self.metadata)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackMetaData',self.metadata)
self.info("playing_song_changed %r", self.metadata)
if self.server != None:
self.server.av_transport_server.set_variable(connection_id, 'CurrentTransportActions','PLAY,STOP,PAUSE,SEEK,NEXT,PREVIOUS')
self.server.av_transport_server.set_variable(connection_id, 'RelativeTimePosition', '00:00:00')
self.server.av_transport_server.set_variable(connection_id, 'AbsoluteTimePosition', '00:00:00')
def playing_changed(self, player, state):
self.info("playing_changed", state)
if state is True:
transport_state = 'PLAYING'
else:
if self.playing is False:
transport_state = 'STOPPED'
else:
transport_state = 'PAUSED_PLAYBACK'
self.update(transport_state)
try:
position = player.get_playing_time()
except:
position = None
try:
duration = player.get_playing_song_duration()
except:
duration = None
self.update_position(position,duration)
self.info("playing_changed %r %r ", position, duration)
def elapsed_changed(self, player, time):
self.info("elapsed_changed %r %r", player, time)
try:
duration = player.get_playing_song_duration()
except:
duration = None
self.update_position(time,duration)
def update(self, state):
self.info("update %r", state)
if state in ('STOPPED','READY'):
transport_state = 'STOPPED'
if state == 'PLAYING':
transport_state = 'PLAYING'
if state == 'PAUSED_PLAYBACK':
transport_state = 'PAUSED_PLAYBACK'
if self.state != transport_state:
self.state = transport_state
if self.server != None:
connection_id = self.server.connection_manager_server.lookup_avt_id(self.current_connection_id)
self.server.av_transport_server.set_variable(connection_id,
'TransportState',
transport_state)
def update_position(self, position,duration):
self.info("update_position %r %r", position,duration)
if self.server != None:
connection_id = self.server.connection_manager_server.lookup_avt_id(self.current_connection_id)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrack', 1)
if position is not None:
m,s = divmod( position, 60)
h,m = divmod(m,60)
if self.server != None:
self.server.av_transport_server.set_variable(connection_id, 'RelativeTimePosition', '%02d:%02d:%02d' % (h,m,s))
self.server.av_transport_server.set_variable(connection_id, 'AbsoluteTimePosition', '%02d:%02d:%02d' % (h,m,s))
if duration <= 0:
duration = None
if duration is not None:
m,s = divmod( duration, 60)
h,m = divmod(m,60)
if self.server != None:
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackDuration', '%02d:%02d:%02d' % (h,m,s))
self.server.av_transport_server.set_variable(connection_id, 'CurrentMediaDuration', '%02d:%02d:%02d' % (h,m,s))
if self.duration is None:
if self.metadata is not None:
self.info("update_position %r", self.metadata)
elt = DIDLLite.DIDLElement.fromString(self.metadata)
for item in elt:
for res in item.findall('res'):
res.attrib['duration'] = "%d:%02d:%02d" % (h,m,s)
self.metadata = elt.toString()
if self.server != None:
self.server.av_transport_server.set_variable(connection_id, 'AVTransportURIMetaData',self.metadata)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackMetaData',self.metadata)
self.duration = duration
def load( self, uri, metadata):
self.info("player load %r %r", uri, metadata)
#self.shell.load_uri(uri,play=False)
self.duration = None
self.metadata = metadata
self.tags = {}
was_playing = self.playing
if was_playing == True:
self.stop()
if len(metadata)>0:
elt = DIDLLite.DIDLElement.fromString(metadata)
if elt.numItems() == 1:
item = elt.getItems()[0]
if uri.startswith('track-'):
self.entry = self.shell.props.db.entry_lookup_by_id(int(uri[6:]))
else:
self.entry = self.shell.props.db.entry_lookup_by_location(uri)
self.info("check for entry %r %r %r", self.entry,item.server_uuid,uri)
if self.entry == None:
if item.server_uuid is not None:
entry_type = self.shell.props.db.entry_register_type("CoherenceUpnp:" + item.server_uuid)
self.entry = self.shell.props.db.entry_new(entry_type, uri)
self.info("create new entry %r", self.entry)
else:
entry_type = self.shell.props.db.entry_register_type("CoherencePlayer")
self.entry = self.shell.props.db.entry_new(entry_type, uri)
self.info("load and check for entry %r", self.entry)
duration = None
size = None
bitrate = None
for res in item.res:
if res.data == uri:
duration = res.duration
size = res.size
bitrate = res.bitrate
break
self.shell.props.db.set(self.entry, rhythmdb.PROP_TITLE, item.title)
try:
if item.artist is not None:
self.shell.props.db.set(self.entry, rhythmdb.PROP_ARTIST, item.artist)
except AttributeError:
pass
try:
if item.album is not None:
self.shell.props.db.set(self.entry, rhythmdb.PROP_ALBUM, item.album)
except AttributeError:
pass
try:
self.info("%r %r", item.title,item.originalTrackNumber)
if item.originalTrackNumber is not None:
self.shell.props.db.set(self.entry, rhythmdb.PROP_TRACK_NUMBER, int(item.originalTrackNumber))
except AttributeError:
pass
if duration is not None:
h,m,s = duration.split(':')
seconds = int(h)*3600 + int(m)*60 + int(s)
self.info("%r %r:%r:%r %r", duration, h, m , s, seconds)
self.shell.props.db.set(self.entry, rhythmdb.PROP_DURATION, seconds)
if size is not None:
self.shell.props.db.set(self.entry, rhythmdb.PROP_FILE_SIZE,int(size))
else:
if uri.startswith('track-'):
self.entry = self.shell.props.db.entry_lookup_by_id(int(uri[6:]))
else:
#self.shell.load_uri(uri,play=False)
#self.entry = self.shell.props.db.entry_lookup_by_location(uri)
entry_type = self.shell.props.db.entry_register_type("CoherencePlayer")
self.entry = self.shell.props.db.entry_new(entry_type, uri)
self.playing = False
self.metadata = metadata
connection_id = self.server.connection_manager_server.lookup_avt_id(self.current_connection_id)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTransportActions','PLAY,STOP,PAUSE,SEEK,NEXT,PREVIOUS')
self.server.av_transport_server.set_variable(connection_id, 'NumberOfTracks',1)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackURI',uri)
self.server.av_transport_server.set_variable(connection_id, 'AVTransportURI',uri)
self.server.av_transport_server.set_variable(connection_id, 'AVTransportURIMetaData',metadata)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackURI',uri)
self.server.av_transport_server.set_variable(connection_id, 'CurrentTrackMetaData',metadata)
if was_playing == True:
self.play()
def start(self, uri):
self.load(uri)
self.play()
def stop(self):
self.info("player stop")
self.player.stop()
self.playing = False
#self.server.av_transport_server.set_variable( \
# self.server.connection_manager_server.lookup_avt_id(self.current_connection_id),\
# 'TransportState', 'STOPPED')
def play(self):
self.info("player play")
if self.playing == False:
if self.entry:
self.player.play_entry(self.entry)
else:
self.player.playpause()
self.playing = True
else:
self.player.playpause()
#self.server.av_transport_server.set_variable( \
# self.server.connection_manager_server.lookup_avt_id(self.current_connection_id),\
# 'TransportState', 'PLAYING')
def pause(self):
self.player.pause()
#self.server.av_transport_server.set_variable( \
# self.server.connection_manager_server.lookup_avt_id(self.current_connection_id),\
# 'TransportState', 'PAUSED_PLAYBACK')
def seek(self, location, old_state):
"""
@param location: +nL = relative seek forward n seconds
-nL = relative seek backwards n seconds
"""
self.info("player seek %r", location)
self.player.seek(location)
self.server.av_transport_server.set_variable(0, 'TransportState', old_state)
def mute(self):
self.muted_volume = self.volume
self.player.set_volume(0)
rcs_id = self.server.connection_manager_server.lookup_rcs_id(self.current_connection_id)
self.server.rendering_control_server.set_variable(rcs_id, 'Mute', 'True')
def unmute(self):
if self.muted_volume is not None:
self.player.set_volume(self.muted_volume)
self.muted_volume = None
self.player.set_mute(False)
rcs_id = self.server.connection_manager_server.lookup_rcs_id(self.current_connection_id)
self.server.rendering_control_server.set_variable(rcs_id, 'Mute', 'False')
def get_mute(self):
return self.player.get_mute()
def get_volume(self):
self.volume = self.player.get_volume()
self.info("get_volume %r", self.volume)
return self.volume * 100
def set_volume(self, volume):
self.info("set_volume %r", volume)
volume = int(volume)
if volume < 0:
volume=0
if volume > 100:
volume=100
self.player.set_volume(float(volume/100.0))
def upnp_init(self):
self.player.connect ('playing-song-changed',
self.playing_song_changed),
self.player.connect ('playing-changed',
self.playing_changed)
self.player.connect ('elapsed-changed',
self.elapsed_changed)
self.player.connect("notify::volume", self.volume_changed)
self.current_connection_id = None
self.server.connection_manager_server.set_variable(0, 'SinkProtocolInfo',
['rhythmbox:%s:audio/mpeg:*' % self.server.coherence.hostname,
'http-get:*:audio/mpeg:*'],
default=True)
self.server.av_transport_server.set_variable(0, 'TransportState', 'NO_MEDIA_PRESENT', default=True)
self.server.av_transport_server.set_variable(0, 'TransportStatus', 'OK', default=True)
self.server.av_transport_server.set_variable(0, 'CurrentPlayMode', 'NORMAL', default=True)
self.server.av_transport_server.set_variable(0, 'CurrentTransportActions', '', default=True)
self.server.rendering_control_server.set_variable(0, 'Volume', self.get_volume())
self.server.rendering_control_server.set_variable(0, 'Mute', self.get_mute())
def upnp_Play(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
Speed = int(kwargs['Speed'])
self.play()
return {}
def upnp_Previous(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
self.player.do_previous()
return {}
def upnp_Next(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
self.player.do_next()
return {}
def upnp_Pause(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
self.pause()
return {}
def upnp_Stop(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
self.stop()
return {}
def upnp_Seek(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
Unit = kwargs['Unit']
Target = kwargs['Target']
if Unit in ['ABS_TIME','REL_TIME']:
old_state = self.server.av_transport_server.get_variable('TransportState').value
self.server.av_transport_server.set_variable(0, 'TransportState', 'TRANSITIONING')
sign = 1
if Target[0] == '+':
Target = Target[1:]
if Target[0] == '-':
Target = Target[1:]
sign = -1
h,m,s = Target.split(':')
seconds = int(h)*3600 + int(m)*60 + int(s)
if Unit == 'ABS_TIME':
position = self.player.get_playing_time()
self.seek(seconds-position, old_state)
elif Unit == 'REL_TIME':
self.seek(seconds*sign, old_state)
return {}
def upnp_Next(self,*args,**kwargs):
self.player.do_next()
return {}
def upnp_Previous(self,*args,**kwargs):
self.player.do_previous()
return {}
def upnp_SetAVTransportURI(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
CurrentURI = kwargs['CurrentURI']
CurrentURIMetaData = kwargs['CurrentURIMetaData']
local_protocol_infos=self.server.connection_manager_server.get_variable('SinkProtocolInfo').value.split(',')
#print '>>>', local_protocol_infos
if len(CurrentURIMetaData)==0:
self.load(CurrentURI,CurrentURIMetaData)
return {}
else:
elt = DIDLLite.DIDLElement.fromString(CurrentURIMetaData)
#import pdb; pdb.set_trace()
if elt.numItems() == 1:
item = elt.getItems()[0]
res = item.res.get_matching(local_protocol_infos, protocol_type='rhythmbox')
if len(res) == 0:
res = item.res.get_matching(local_protocol_infos)
if len(res) > 0:
res = res[0]
remote_protocol,remote_network,remote_content_format,_ = res.protocolInfo.split(':')
self.load(res.data,CurrentURIMetaData)
return {}
return failure.Failure(errorCode(714))
def upnp_SetMute(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
Channel = kwargs['Channel']
DesiredMute = kwargs['DesiredMute']
if DesiredMute in ['TRUE', 'True', 'true', '1','Yes','yes']:
self.mute()
else:
self.unmute()
return {}
def upnp_SetVolume(self, *args, **kwargs):
InstanceID = int(kwargs['InstanceID'])
Channel = kwargs['Channel']
DesiredVolume = int(kwargs['DesiredVolume'])
self.set_volume(DesiredVolume)
return {}
| [] |
2024-01-10 | AdamZ/rhythmbox-magnatune | plugins~coherence~upnp_coherence~UpnpSource.py | # Licensed under the MIT license
# http://opensource.org/licenses/mit-license.php
#
# Copyright 2007, James Livingston <[email protected]>
# Copyright 2007,2008 Frank Scholz <[email protected]>
import rb, rhythmdb
import gobject, gtk
from coherence import __version_info__ as coherence_version
from coherence import log
from coherence.upnp.core import DIDLLite
class UpnpSource(rb.BrowserSource,log.Loggable):
logCategory = 'rb_media_store'
__gproperties__ = {
'plugin': (rb.Plugin, 'plugin', 'plugin', gobject.PARAM_WRITABLE|gobject.PARAM_CONSTRUCT_ONLY),
'client': (gobject.TYPE_PYOBJECT, 'client', 'client', gobject.PARAM_WRITABLE|gobject.PARAM_CONSTRUCT_ONLY),
'udn': (gobject.TYPE_PYOBJECT, 'udn', 'udn', gobject.PARAM_WRITABLE|gobject.PARAM_CONSTRUCT_ONLY),
}
def __init__(self):
rb.BrowserSource.__init__(self)
self.__db = None
self.__activated = False
self.container_watch = []
if coherence_version < (0,5,1):
self.process_media_server_browse = self.old_process_media_server_browse
else:
self.process_media_server_browse = self.new_process_media_server_browse
def do_set_property(self, property, value):
if property.name == 'plugin':
self.__plugin = value
elif property.name == 'client':
self.__client = value
self.props.name = self.__client.device.get_friendly_name()
elif property.name == 'udn':
self.__udn = value
else:
raise AttributeError, 'unknown property %s' % property.name
def do_impl_activate(self):
if not self.__activated:
print "activating upnp source"
self.__activated = True
shell = self.get_property('shell')
self.__db = shell.get_property('db')
self.__entry_type = self.get_property('entry-type')
# load upnp db
self.load_db(0)
self.__client.content_directory.subscribe_for_variable('ContainerUpdateIDs', self.state_variable_change)
self.__client.content_directory.subscribe_for_variable('SystemUpdateID', self.state_variable_change)
def load_db(self, id):
d = self.__client.content_directory.browse(id, browse_flag='BrowseDirectChildren', process_result=False, backward_compatibility=False)
d.addCallback(self.process_media_server_browse, self.__udn)
def state_variable_change(self, variable, udn=None):
self.info("%s changed from >%s< to >%s<", variable.name, variable.old_value, variable.value)
if variable.old_value == '':
return
if variable.name == 'SystemUpdateID':
self.load_db(0)
elif variable.name == 'ContainerUpdateIDs':
changes = variable.value.split(',')
while len(changes) > 1:
container = changes.pop(0).strip()
update_id = changes.pop(0).strip()
if container in self.container_watch:
self.info("we have a change in %r, container needs a reload", container)
self.load_db(container)
def new_process_media_server_browse(self, results, udn):
didl = DIDLLite.DIDLElement.fromString(results['Result'])
for item in didl.getItems():
self.info("process_media_server_browse %r %r", item.id, item)
if item.upnp_class.startswith('object.container'):
self.load_db(item.id)
if item.upnp_class.startswith('object.item.audioItem'):
url = None
duration = None
size = None
bitrate = None
for res in item.res:
remote_protocol,remote_network,remote_content_format,remote_flags = res.protocolInfo.split(':')
self.info("%r %r %r %r",remote_protocol,remote_network,remote_content_format,remote_flags)
if remote_protocol == 'http-get':
url = res.data
duration = res.duration
size = res.size
bitrate = res.bitrate
break
if url is not None:
self.info("url %r %r",url,item.title)
entry = self.__db.entry_lookup_by_location (url)
if entry == None:
entry = self.__db.entry_new(self.__entry_type, url)
self.__db.set(entry, rhythmdb.PROP_TITLE, item.title)
try:
if item.artist is not None:
self.__db.set(entry, rhythmdb.PROP_ARTIST, item.artist)
except AttributeError:
pass
try:
if item.album is not None:
self.__db.set(entry, rhythmdb.PROP_ALBUM, item.album)
except AttributeError:
pass
try:
self.info("%r %r", item.title,item.originalTrackNumber)
if item.originalTrackNumber is not None:
self.__db.set(entry, rhythmdb.PROP_TRACK_NUMBER, int(item.originalTrackNumber))
except AttributeError:
pass
if duration is not None:
h,m,s = duration.split(':')
seconds = int(h)*3600 + int(m)*60 + int(s)
self.info("%r %r:%r:%r %r", duration, h, m , s, seconds)
self.__db.set(entry, rhythmdb.PROP_DURATION, seconds)
if size is not None:
self.__db.set(entry, rhythmdb.PROP_FILE_SIZE,int(size))
self.__db.commit()
def do_impl_can_delete(self):
return False
gobject.type_register(UpnpSource)
| [] |
2024-01-10 | smart-meal/calorie-compass | api~user~service.py | from typing import Optional
from api import config
from api.user.model import User
from api.util.log import logger
from openai import OpenAI
import json
def get_user_by_username(username: str) -> Optional[User]:
"""
Return the user model by its username
If no user found, return None
"""
# pylint: disable=no-member
users_result = User.objects(username=username)
count = users_result.count()
if count > 1:
logger.error("%s users matched by username '%s'", count, username)
raise RuntimeError("Something went wrong")
if count == 0:
return None
return users_result.get()
def get_user_by_id(uid: str) -> Optional[User]:
"""
Return the user model by its id
If no user found, return None
"""
# pylint: disable=no-member
users_result = User.objects(id=uid)
print(users_result)
count = users_result.count()
if count > 1:
logger.error("%s users matched by user id '%s'", count, uid)
raise RuntimeError("Something went wrong")
if count == 0:
return None
return users_result.get()
def calculate_bmi(height, weight):
# Ensure that height is in meters
height_in_meters = height / 100
bmi = weight / (height_in_meters * height_in_meters)
return bmi
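# Illustrative example (made-up values): calculate_bmi(175, 70) == 70 / (1.75 ** 2) ~= 22.9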
def get_image_info(image_url):
client = OpenAI(api_key=config.VISION_API_KEY)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=[
{
"role": "user",
"content": [
{"type": "text", "text": "Return 5 numbers seperated by commas only, without additional text. I understand that it won't be accurate, but use the image provided to provide your estimates of the following. The first number is the number of grams you think this meal is, \
the second is estimated calories, the third is estimated fat, the fourth is estimated protien, and the fifth is estimated carbs."},
{
"type": "image_url",
"image_url": {
"url": image_url,
# "https://www.watscooking.com/images/dish/large/DSC_1009.JPG"
"detail": "low"
},
},
],
}
],
max_tokens=300,
)
response_content = response.choices[0].message.content
response_content = response_content.replace(" ", "")
numbers = response_content.split(',')
# Create a JSON object with keys 'mass', 'cal', 'fat', 'protein', and 'carbs'
json_object = {
'weight': numbers[0],
'calories': numbers[1],
'fat': numbers[2],
'proteins': numbers[3],
'carbs': numbers[4]
}
json_string = json.dumps(json_object, indent=2)
print(json_string)
return json_string
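# Sketch of expected usage (hypothetical URL and values): get_image_info("https://example.com/meal.jpg")
# returns a JSON string with keys "weight", "calories", "fat", "proteins" and "carbs",
# e.g. '{"weight": "350", "calories": "500", ...}'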
| [
"[{'type': 'text', 'text': \"Return 5 numbers seperated by commas only, without additional text. I understand that it won't be accurate, but use the image provided to provide your estimates of the following. The first number is the number of grams you think this meal is, the second is estimated calories, the third is estimated fat, the fourth is estimated protien, and the fifth is estimated carbs.\"}, {'type': 'image_url', 'image_url': {'url': PLACEHOLDER, 'detail': 'low'}}]"
] |
2024-01-10 | TToTMooN/alf | alf~examples~ac_breakout_conf.py | # Copyright (c) 2021 Horizon Robotics and ALF Contributors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import alf
from alf.algorithms.actor_critic_algorithm import ActorCriticAlgorithm
from alf.algorithms.agent import Agent
from alf.networks import ActorDistributionNetwork, CategoricalProjectionNetwork, ValueNetwork
from alf.examples import atari_conf
# From OpenAI gym wiki:
# "v0 vs v4: v0 has repeat_action_probability of 0.25
# (meaning 25% of the time the previous action will be used instead of the new action),
# while v4 has 0 (always follow your issued action)
# Because we already implements frame_skip in AtariPreprocessing, we should always
# use 'NoFrameSkip' Atari environments from OpenAI gym
alf.config(
'create_environment',
env_name='BreakoutNoFrameskip-v4',
num_parallel_environments=64)
# Neural Network Configuration
CONV_LAYER_PARAMS = ((32, 8, 4), (64, 4, 2), (64, 3, 1))
actor_network_cls = functools.partial(
ActorDistributionNetwork,
fc_layer_params=(512, ),
conv_layer_params=CONV_LAYER_PARAMS)
value_network_cls = functools.partial(
ValueNetwork, fc_layer_params=(512, ), conv_layer_params=CONV_LAYER_PARAMS)
alf.config('CategoricalProjectionNetwork', logits_init_output_factor=1e-10)
# Algorithm Configuration
alf.config(
'ActorCriticLoss',
entropy_regularization=0.01,
use_gae=True,
use_td_lambda_return=True,
td_lambda=0.95,
td_loss_weight=0.5,
advantage_clip=None)
alf.config(
'ActorCriticAlgorithm',
actor_network_ctor=actor_network_cls,
value_network_ctor=value_network_cls,
optimizer=alf.optimizers.Adam(lr=1e-3))
alf.config('Agent', rl_algorithm_cls=ActorCriticAlgorithm)
alf.config(
'TrainerConfig',
unroll_length=8,
algorithm_ctor=Agent,
num_iterations=0,
num_env_steps=5000000,
evaluate=False,
debug_summaries=1,
summarize_grads_and_vars=1,
summary_interval=10)
| [] |
2024-01-10 | TToTMooN/alf | alf~examples~ppg_procgen_bossfight_conf.py | # Copyright (c) 2021 Horizon Robotics and ALF Contributors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import alf
from alf.examples import ppg_conf
from alf.examples import procgen_conf
from alf.examples.networks import impala_cnn_encoder
from alf.utils.losses import element_wise_squared_loss
from alf.algorithms.ppg_algorithm import PPGAuxOptions
# Environment Configuration
alf.config(
'create_environment', env_name='bossfight', num_parallel_environments=96)
def encoding_network_ctor(input_tensor_spec):
encoder_output_size = 256
return impala_cnn_encoder.create(
input_tensor_spec=input_tensor_spec,
cnn_channel_list=(16, 32, 32),
num_blocks_per_stack=2,
flatten_output_size=encoder_output_size)
# The PPG auxiliary replay buffer is typically large and does not fit in the GPU
# memory. As a result, for ``gather all()`` we set ``convert to default device``
# to ``False`` so that it does not have to put everything directly into GPU
# memory. Because of this, all data transformers should be created on "cpu" as
# they will be used while the experience is still in CPU memory.
alf.config('ReplayBuffer.gather_all', convert_to_default_device=False)
alf.config('data_transformer.create_data_transformer', device="cpu")
# The policy network and aux network is going to share the same
# encoder to save GPU memory. See
# https://github.com/HorizonRobotics/alf/issues/965#issuecomment-897950209
alf.config('DisjointPolicyValueNetwork', is_sharing_encoder=True)
alf.config(
'PPGAlgorithm',
encoding_network_ctor=encoding_network_ctor,
policy_optimizer=alf.optimizers.AdamTF(lr=2e-4),
aux_optimizer=alf.optimizers.AdamTF(lr=2e-4),
aux_options=PPGAuxOptions(
enabled=True,
interval=32,
mini_batch_length=None, # None means use unroll_length as
# mini_batch_length for aux phase
mini_batch_size=8,
num_updates_per_train_iter=6,
))
alf.config(
'PPOLoss',
compute_advantages_internally=True,
entropy_regularization=0.01,
gamma=0.999,
td_lambda=0.95,
td_loss_weight=0.5)
# Sample loss components from OpenAI's training:
#
# aux loss component: [pol_distance], weight: 1.0, unscaled: 0.0007583469850942492
# aux loss component: [vf_aux], weight: 1, unscaled: 0.44967320561408997
# aux loss component: [vf_true], weight: 1.0, unscaled: 0.46082180738449097
alf.config(
'PPGAuxPhaseLoss',
td_error_loss_fn=element_wise_squared_loss,
policy_kl_loss_weight=1.0,
gamma=0.999,
td_lambda=0.95)
# training config
alf.config(
'TrainerConfig',
unroll_length=256,
# This means that mini_batch_length will set to equal to the
# length of the batches taken from the replay buffer, and in this
# case it will be adjusted unroll_length.
mini_batch_length=None,
mini_batch_size=16,
num_updates_per_train_iter=3,
# Note that here 1000 iterations should already have a good
# performance (reward = 10), while 6000 iterations brings it to
# 12.
num_iterations=6000,
num_checkpoints=5,
evaluate=True,
eval_interval=50,
debug_summaries=True,
summarize_grads_and_vars=True,
summary_interval=10)
| [] |
2024-01-10 | Zaniiiii/NERAKA | Detector.py | import os
from openai import OpenAI
client = OpenAI(
    # pass the API key directly (os.environ.get() expects an environment-variable name, not the key itself)
    api_key="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",
)
chat_completion = client.chat.completions.create(
messages=[
{"role": "system", "content": "Anda adalah sistem pendeteksi hoax, jawaban anda cukup menyatakan HOAX atau TIDAK HOAX"},
{"role": "user", "content": "Jokowi adalah kodok"}
],
model="gpt-3.5-turbo",
)
print(chat_completion.choices[0].message.content) | [
"Anda adalah sistem pendeteksi hoax, jawaban anda cukup menyatakan HOAX atau TIDAK HOAX",
"Jokowi adalah kodok"
] |
2024-01-10 | Zaniiiii/NERAKA | youtube_scraper.py | from pytube import YouTube
from openai import OpenAI
def download_video(video_url, output_path="."):
try:
# Create a YouTube object
yt = YouTube(video_url)
# Get the highest resolution stream
video_stream = yt.streams.get_highest_resolution()
        # Download the video; Stream.download() returns the saved file's path
        file_path = video_stream.download(output_path)
        print("Download successful!")
        # Return the path of the downloaded file
        return file_path
except Exception as e:
print(f"Error: {e}")
# Example usage
video_url = input("Input YouTube video URL: ")
output_path = r".\Downloaded Videos"
video_path = download_video(video_url, output_path)
if video_path is None:
    raise SystemExit("Download failed, nothing to transcribe.")
print("Getting text from video...")
client = OpenAI(api_key="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")
audio_file = open(video_path, "rb")
transcript = client.audio.transcriptions.create(
model="whisper-1",
file=audio_file
)
print(transcript) | [] |
2024-01-10 | YangyangU/CodeSpace | .history~BATJTMD~tencent~1_20231026204013.py | import openai
openai.api_key ="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" # openai 的sdk
COMPLETION_MODEL="text-davinci-003" #模型常量 达芬奇 完成
#字符串模板
#换行
#描述细节需求
#分布去执行
#输出的格式
prompt ="""
Consideration product:工厂现货PVC
充气青蛙夜市地摊热卖充气玩具发光
1. Compose human readable product title
used on Amazon in english within 20 words
2.Write 5 selling points for the products in Amazon
3.Evaluat a price range for this product in U.S.
Output the result in json format with three properties:
called title,selling_points and price_range proce_range
"""
# Define the function
def get_response(prompt):
    completions = openai.Completion.create(
        # calls to the large model are billed, so keep usage modest
        engine=COMPLETION_MODEL,  # model
        prompt=prompt,  # prompt text
        max_tokens=512,  # cap the output length
        n=1,  # a single completion
        stop=None,
        temperature=0.0,  # degree of creative freedom, 0-2
)
#print(completions)
message = completions.choices[0].text
return message
print(get_response(prompt))
# {
# title: "",
# selling_points: "",
# price_range: "",
# }
| [
"\nConsideration product:工厂现货PVC\n充气青蛙夜市地摊热卖充气玩具发光\n\n1. Compose human readable product title \nused on Amazon in english within 20 words\n\n2.Write 5 selling points for the products in Amazon\n\n3.Evaluat a price range for this product in U.S.\n\nOutput the result in json format with three properties:\n called title,selling_points and price_range proce_range\n\n"
] |
2024-01-10 | YangyangU/CodeSpace | .history~BATJTMD~tencent~1_20231026195425.py | import openai
openai.api_key ="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" # the openai SDK key
COMPLETION_MODEL="text-davinci-003"
# String template
# Use line breaks
# Describe the detailed requirements
# Break the task into steps
# Specify the output format
prompt ="""
Consideration product:工厂现货PVC
充气青蛙夜市地摊热卖充气玩具发光
1. Compose human readable product title
used on Amazon in english within 20 words
2.Write 5 selling points for the products in Amazon
3.Evaluat a price range for this product in U.S.
Output the result in json format with three properties:
called title,selling_points and price_range proce_range
"""
# expected output shape:
# {
#     title: "",
#     selling_points: "",
#     price_range: "",
# }
| [
"\nConsideration product:工厂现货PVC\n充气青蛙夜市地摊热卖充气玩具发光\n\n1. Compose human readable product title \nused on Amazon in english within 20 words\n\n2.Write 5 selling points for the products in Amazon\n\n3.Evaluat a price range for this product in U.S.\n\nOutput the result in json format with three properties:\n called title,selling_points and price_range proce_range\n\n\n"
] |
2024-01-10 | YangyangU/CodeSpace | .history~BATJTMD~tencent~1_20231026204014.py | import openai
openai.api_key ="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" # the openai SDK key
COMPLETION_MODEL="text-davinci-003" # model constant: Davinci text-completion model
# String template
# Use line breaks
# Describe the detailed requirements
# Break the task into steps
# Specify the output format
prompt ="""
Consideration product:工厂现货PVC
充气青蛙夜市地摊热卖充气玩具发光
1. Compose human readable product title
used on Amazon in english within 20 words
2.Write 5 selling points for the products in Amazon
3.Evaluat a price range for this product in U.S.
Output the result in json format with three properties:
called title,selling_points and price_range proce_range
"""
# Define the function
def get_response(prompt):
    completions = openai.Completion.create(
        # calls to the large model are billed, so keep usage modest
        engine=COMPLETION_MODEL,  # model
        prompt=prompt,  # prompt text
        max_tokens=512,  # cap the output length
        n=1,  # a single completion
        stop=None,
        temperature=0.0,  # degree of creative freedom, 0-2
)
#print(completions)
message = completions.choices[0].text
return message
print(get_response(prompt))
# {
# title: "",
# selling_points: "",
# price_range: "",
# }
| [
"\nConsideration product:工厂现货PVC\n充气青蛙夜市地摊热卖充气玩具发光\n\n1. Compose human readable product title \nused on Amazon in english within 20 words\n\n2.Write 5 selling points for the products in Amazon\n\n3.Evaluat a price range for this product in U.S.\n\nOutput the result in json format with three properties:\n called title,selling_points and price_range proce_range\n\n"
] |
2024-01-10 | cwijayasundara/self_query_retreaver | rag_plain~chroma_loader.py | from dotenv import load_dotenv
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import OpenAIEmbeddings
load_dotenv()
loader = DirectoryLoader('data', glob="**/*.txt")
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory="./vectorstore")
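# Sketch (assumes the same persist_directory and embeddings): the persisted index can later be
# reloaded without re-ingesting the documents, e.g.
#   store = Chroma(persist_directory="./vectorstore", embedding_function=OpenAIEmbeddings())
#   results = store.similarity_search("some query")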
| [] |
2024-01-10 | mlynckat/hallucinations_in_rag | models_and_prompts.py | from langchain.output_parsers import ResponseSchema
SETTINGS = {
"summarize_section": {
"template": """
Summarize the following section of the paper: {section}. Pay attention to the following points:
- Experiments
- Novelty of the proposed approaches
- Results
Metrics to detect and / or mitigate the hallucinations of the large language models.
If there are any metrics to measure or detect the hallucinations, mention them separately. Describe how they can be calculated and how they were used in the paper.
Also mention the mitigation methods if there are any. Describe them.
The summary of the previous section is: {prev_summary}.
{format_instructions}
""",
"response_schemas": [ResponseSchema(name="summary",
description="A rather short summary of the current section of the paper"),
ResponseSchema(name="metrics",
description="The name of metrics mentioned to detect and measure the hallucinations of the large language models as well as the description. If there were no ,metrics mentioned, return empty string."),
ResponseSchema(name="metrics_description",
description="The description of the metrics mentioned to detect and measure the hallucinations of the large language models. If there were no metrics mentioned, return empty string."),
ResponseSchema(name="mitigation",
description="The name of mitigation methods mentioned to mitigate the hallucinations of the large language models as well as the description. If there were no mitigation methods mentioned, return empty string."),
ResponseSchema(name="mitigation_description",
description="The description of the mitigation methods mentioned to mitigate the hallucinations of the large language models. If there were no mitigation methods mentioned, return empty string."),
],
"input_variables": ["section", "prev_summary"],
"output_variables": ["summary", "metrics", "metrics_description", "mitigation", "mitigation_description"]
},
"summarize_paper": {
"template": """
Here are the summaries of the sections of the paper.
Summaries: {summaries}
Describe the paper in one sentence.
What are the main contributions of the paper?
Does the paper propose further research directions?
{format_instructions}
""",
"response_schemas": [ResponseSchema(name="overall_summary",
description="The summary of the paper in one sentence"),
ResponseSchema(name="contributions",
description="The main contributions of the paper"),
ResponseSchema(name="further_research",
description="Further research directions proposed by the paper")],
"input_variables": ["summaries"],
"output_variables": ["overall_summary", "contributions", "further_research"]
},
"aggregate_metrics": {
"template": """
Here are all the metrics and the corresponding descriptions that were described in the paper in different sections.
Metrics: {metrics_description}
Aggregate all the metrics and descriptions. If some metrics are mentioned multiple times, mention them only once but pay attention to all descriptions.
{format_instructions}
""",
"response_schemas": [ResponseSchema(name="metrics_aggregated",
description="The aggregated metrics and descriptions")],
"input_variables": ["metrics_description"],
"output_variables": ["metrics_aggregated"]
},
"aggregate_mitigation": {
"template": """
Here are all the mitigation techniques and the corresponding descriptions that were described in the paper in the different sections.
Mitigation techniques: {mitigations_description}
Aggregate all the techniques and descriptions. If some mitigation techniques are mentioned multiple times, mention them only once but pay attention to all descriptions.
{format_instructions}
""",
"response_schemas": [ResponseSchema(name="mitigations_aggregated",
description="The aggregated mitigation techniques and descriptions")],
"input_variables": ["mitigations_description"],
"output_variables": ["metrics_aggregated"]
}
}
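
# Minimal usage sketch (an assumption about how SETTINGS is consumed; the real consumer code
# lives elsewhere in the project): build a prompt and output parser from one entry.
if __name__ == "__main__":
    from langchain.output_parsers import StructuredOutputParser
    from langchain.prompts import PromptTemplate

    cfg = SETTINGS["summarize_section"]
    parser = StructuredOutputParser.from_response_schemas(cfg["response_schemas"])
    prompt = PromptTemplate(
        template=cfg["template"],
        input_variables=cfg["input_variables"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )
    print(prompt.format(section="<section text>", prev_summary=""))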
| [] |
2024-01-10 | georgexiong/langchain | libs~experimental~langchain_experimental~comprehend_moderation~pii.py | import asyncio
from typing import Any, Dict, Optional
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
ModerationPiiError,
)
class ComprehendPII:
def __init__(
self,
client: Any,
callback: Optional[Any] = None,
unique_id: Optional[str] = None,
chain_id: Optional[str] = None,
) -> None:
self.client = client
self.moderation_beacon = {
"moderation_chain_id": chain_id,
"moderation_type": "PII",
"moderation_status": "LABELS_NOT_FOUND",
}
self.callback = callback
self.unique_id = unique_id
def validate(self, prompt_value: str, config: Any = None) -> str:
redact = config.get("redact")
return (
self._detect_pii(prompt_value=prompt_value, config=config)
if redact
else self._contains_pii(prompt_value=prompt_value, config=config)
)
def _contains_pii(self, prompt_value: str, config: Any = None) -> str:
"""
Checks for Personally Identifiable Information (PII) labels above a
specified threshold. Uses Amazon Comprehend Contains PII Entities API. See -
https://docs.aws.amazon.com/comprehend/latest/APIReference/API_ContainsPiiEntities.html
Args:
prompt_value (str): The input text to be checked for PII labels.
config (Dict[str, Any]): Configuration for PII check and actions.
Returns:
str: the original prompt
Note:
- The provided client should be initialized with valid AWS credentials.
"""
pii_identified = self.client.contains_pii_entities(
Text=prompt_value, LanguageCode="en"
)
if self.callback and self.callback.pii_callback:
self.moderation_beacon["moderation_input"] = prompt_value
self.moderation_beacon["moderation_output"] = pii_identified
threshold = config.get("threshold")
pii_labels = config.get("labels")
pii_found = False
for entity in pii_identified["Labels"]:
if (entity["Score"] >= threshold and entity["Name"] in pii_labels) or (
entity["Score"] >= threshold and not pii_labels
):
pii_found = True
break
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
if pii_found:
raise ModerationPiiError
return prompt_value
def _detect_pii(self, prompt_value: str, config: Optional[Dict[str, Any]]) -> str:
"""
Detects and handles Personally Identifiable Information (PII) entities in the
given prompt text using Amazon Comprehend's detect_pii_entities API. The
function provides options to redact or stop processing based on the identified
PII entities and a provided configuration. Uses Amazon Comprehend Detect PII
Entities API.
Args:
prompt_value (str): The input text to be checked for PII entities.
config (Dict[str, Any]): A configuration specifying how to handle
PII entities.
Returns:
str: The processed prompt text with redacted PII entities or raised
exceptions.
Raises:
ValueError: If the prompt contains configured PII entities for
stopping processing.
Note:
- If PII is not found in the prompt, the original prompt is returned.
- The client should be initialized with valid AWS credentials.
"""
pii_identified = self.client.detect_pii_entities(
Text=prompt_value, LanguageCode="en"
)
if self.callback and self.callback.pii_callback:
self.moderation_beacon["moderation_input"] = prompt_value
self.moderation_beacon["moderation_output"] = pii_identified
if (pii_identified["Entities"]) == []:
if self.callback and self.callback.pii_callback:
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
return prompt_value
pii_found = False
if not config and pii_identified["Entities"]:
for entity in pii_identified["Entities"]:
if entity["Score"] >= 0.5:
pii_found = True
break
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
if pii_found:
raise ModerationPiiError
else:
threshold = config.get("threshold") # type: ignore
pii_labels = config.get("labels") # type: ignore
mask_marker = config.get("mask_character") # type: ignore
pii_found = False
for entity in pii_identified["Entities"]:
if (
pii_labels
and entity["Type"] in pii_labels
and entity["Score"] >= threshold
) or (not pii_labels and entity["Score"] >= threshold):
pii_found = True
char_offset_begin = entity["BeginOffset"]
char_offset_end = entity["EndOffset"]
mask_length = char_offset_end - char_offset_begin + 1
masked_part = mask_marker * mask_length
prompt_value = (
prompt_value[:char_offset_begin]
+ masked_part
+ prompt_value[char_offset_end + 1 :]
)
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
return prompt_value
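
# Illustrative usage sketch; it assumes boto3 is installed and AWS credentials
# with Amazon Comprehend access are configured. The config keys mirror the ones
# read by validate() above (redact, threshold, labels, mask_character).
if __name__ == "__main__":
    import boto3

    comprehend_client = boto3.client("comprehend", region_name="us-east-1")
    pii_redactor = ComprehendPII(client=comprehend_client)
    config = {
        "redact": True,                # route to _detect_pii and mask findings
        "threshold": 0.5,              # minimum confidence score to act on
        "labels": ["EMAIL", "PHONE"],  # an empty list would mean "any PII type"
        "mask_character": "*",
    }
    print(pii_redactor.validate("Reach me at jane.doe@example.com", config=config))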
| [] |
2024-01-10 | georgexiong/langchain | libs~langchain~tests~integration_tests~vectorstores~test_xata.py | """Test Xata vector store functionality.
Before running this test, please create a Xata database by following
the instructions from:
https://python.langchain.com/docs/integrations/vectorstores/xata
"""
import os
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.xata import XataVectorStore
class TestXata:
@classmethod
def setup_class(cls) -> None:
assert os.getenv("XATA_API_KEY"), "XATA_API_KEY environment variable is not set"
assert os.getenv("XATA_DB_URL"), "XATA_DB_URL environment variable is not set"
def test_similarity_search_without_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end constructions and search without metadata."""
texts = ["foo", "bar", "baz"]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
docsearch.delete(delete_all=True)
def test_similarity_search_with_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with a metadata filter.
This test requires a column named "a" of type integer to be present
in the Xata table."""
texts = ["foo", "foo", "foo"]
metadatas = [{"a": i} for i in range(len(texts))]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
metadatas=metadatas,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1, filter={"a": 1})
assert output == [Document(page_content="foo", metadata={"a": 1})]
docsearch.delete(delete_all=True)
| [] |
2024-01-10 | georgexiong/langchain | libs~experimental~langchain_experimental~comprehend_moderation~toxicity.py | import asyncio
import importlib
from typing import Any, List, Optional
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
ModerationToxicityError,
)
class ComprehendToxicity:
def __init__(
self,
client: Any,
callback: Optional[Any] = None,
unique_id: Optional[str] = None,
chain_id: Optional[str] = None,
) -> None:
self.client = client
self.moderation_beacon = {
"moderation_chain_id": chain_id,
"moderation_type": "Toxicity",
"moderation_status": "LABELS_NOT_FOUND",
}
self.callback = callback
self.unique_id = unique_id
def _toxicity_init_validate(self, max_size: int) -> Any:
"""
Validate and initialize toxicity processing configuration.
Args:
max_size (int): Maximum sentence size defined in the
configuration object.
Raises:
Exception: If the maximum sentence size exceeds the 5KB limit.
Note:
This function ensures that the NLTK punkt tokenizer is downloaded
if not already present.
Returns:
            The imported nltk module, with the punkt tokenizer available.
"""
if max_size > 1024 * 5:
raise Exception("The sentence length should not exceed 5KB.")
try:
nltk = importlib.import_module("nltk")
nltk.data.find("tokenizers/punkt")
return nltk
except ImportError:
raise ModuleNotFoundError(
"Could not import nltk python package. "
"Please install it with `pip install nltk`."
)
        except LookupError:
            nltk.download("punkt")
            return nltk
def _split_paragraph(
self, prompt_value: str, max_size: int = 1024 * 4
) -> List[List[str]]:
"""
Split a paragraph into chunks of sentences, respecting the maximum size limit.
        Args:
            prompt_value (str): The input paragraph to be split into chunks.
            max_size (int, optional): The maximum size limit in bytes for
                each chunk. Defaults to 1024 * 4.
Returns:
List[List[str]]: A list of chunks, where each chunk is a list
of sentences.
Note:
This function validates the maximum sentence size based on service
limits using the 'toxicity_init_validate' function. It uses the NLTK
sentence tokenizer to split the paragraph into sentences.
Example:
paragraph = "This is a sample paragraph. It
contains multiple sentences. ..."
chunks = split_paragraph(paragraph, max_size=2048)
"""
# validate max. sentence size based on Service limits
nltk = self._toxicity_init_validate(max_size)
sentences = nltk.sent_tokenize(prompt_value)
chunks = list() # type: ignore
current_chunk = list() # type: ignore
current_size = 0
for sentence in sentences:
sentence_size = len(sentence.encode("utf-8"))
# If adding a new sentence exceeds max_size
# or current_chunk has 10 sentences, start a new chunk
if (current_size + sentence_size > max_size) or (len(current_chunk) >= 10):
if current_chunk: # Avoid appending empty chunks
chunks.append(current_chunk)
current_chunk = []
current_size = 0
current_chunk.append(sentence)
current_size += sentence_size
# Add any remaining sentences
if current_chunk:
chunks.append(current_chunk)
return chunks
def validate(self, prompt_value: str, config: Any = None) -> str:
"""
Check the toxicity of a given text prompt using AWS
Comprehend service and apply actions based on configuration.
Args:
prompt_value (str): The text content to be checked for toxicity.
config (Dict[str, Any]): Configuration for toxicity checks and actions.
Returns:
str: The original prompt_value if allowed or no toxicity found.
Raises:
ValueError: If the prompt contains toxic labels and cannot be
processed based on the configuration.
"""
chunks = self._split_paragraph(prompt_value=prompt_value)
for sentence_list in chunks:
segments = [{"Text": sentence} for sentence in sentence_list]
response = self.client.detect_toxic_content(
TextSegments=segments, LanguageCode="en"
)
if self.callback and self.callback.toxicity_callback:
self.moderation_beacon["moderation_input"] = segments # type: ignore
self.moderation_beacon["moderation_output"] = response
toxicity_found = False
threshold = config.get("threshold")
toxicity_labels = config.get("labels")
if not toxicity_labels:
for item in response["ResultList"]:
for label in item["Labels"]:
if label["Score"] >= threshold:
toxicity_found = True
break
else:
for item in response["ResultList"]:
for label in item["Labels"]:
if (
label["Name"] in toxicity_labels
and label["Score"] >= threshold
):
toxicity_found = True
break
if self.callback and self.callback.toxicity_callback:
if toxicity_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_toxicity(
self.moderation_beacon, self.unique_id
)
)
if toxicity_found:
raise ModerationToxicityError
return prompt_value
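
# Illustrative usage sketch; it assumes boto3 and nltk are installed and that
# AWS credentials with access to Comprehend's detect_toxic_content API are
# configured. An empty "labels" list means any toxic label above the threshold
# raises ModerationToxicityError.
if __name__ == "__main__":
    import boto3

    comprehend_client = boto3.client("comprehend", region_name="us-east-1")
    toxicity_checker = ComprehendToxicity(client=comprehend_client)
    try:
        toxicity_checker.validate(
            "This is a perfectly polite sentence.",
            config={"threshold": 0.7, "labels": []},
        )
        print("prompt passed the toxicity check")
    except ModerationToxicityError:
        print("prompt was flagged as toxic")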
| [] |
2024-01-10 | georgexiong/langchain | libs~langchain~langchain~memory~readonly.py | from typing import Any, Dict, List
from langchain.schema import BaseMemory
class ReadOnlySharedMemory(BaseMemory):
"""A memory wrapper that is read-only and cannot be changed."""
memory: BaseMemory
@property
def memory_variables(self) -> List[str]:
"""Return memory variables."""
return self.memory.memory_variables
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Load memory variables from memory."""
return self.memory.load_memory_variables(inputs)
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Nothing should be saved or changed"""
pass
def clear(self) -> None:
"""Nothing to clear, got a memory like a vault."""
pass
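
# Minimal sketch of the intended use: several chains can read the same history
# while only the owning chain writes to it. Assumes ConversationBufferMemory
# from this package is available.
if __name__ == "__main__":
    from langchain.memory import ConversationBufferMemory

    shared_memory = ConversationBufferMemory(memory_key="chat_history")
    shared_memory.save_context({"input": "hi"}, {"output": "hello"})

    readonly = ReadOnlySharedMemory(memory=shared_memory)
    print(readonly.load_memory_variables({}))                # reads are delegated
    readonly.save_context({"input": "x"}, {"output": "y"})   # writes are ignored
    print(shared_memory.load_memory_variables({}))           # unchanged by the wrapper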
| [] |
2024-01-10 | baejaeho18/VIChecker | 04_gpt_responser.py | #!/usr/bin/env python
# coding: utf-8
# In[113]:
import os
import json
import time
import openai
import subprocess
def get_api_key():
# Read API key from file
with open('api_key.txt', 'r', encoding='utf-8') as file:
api_key = file.read().strip()
    # never commit the API key publicly!
with open(".gitignore", "a") as gitignore:
gitignore.write("api_key.txt")
return api_key
def get_response(prompt, gpt_api_model):
# Make a question using the API
response = openai.ChatCompletion.create(
model=gpt_api_model,
messages=[
{"role": "user", "content": prompt}
],
)
# generated answer
answer = response['choices'][0]['message']['content'].strip()
return answer
def record_response(file_path, answer):
# Record the answer
try:
with open(file_path, 'w', encoding='utf-8') as f:
f.write(answer)
except:
print(f"Answer Write 오류")
def ask_to_gpt(file_path, prompt, gpt_api_model):
response_file_path = file_path.replace(".java", "_response.txt")
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
print(file_path)
try:
answer = get_response(prompt+content, gpt_api_model)
record_response(response_file_path, answer)
except Exception as e:
if isinstance(e, openai.error.RateLimitError):
print(f"Rate Limit Error: {str(e)}")
            # wait, then retry the request
            time.sleep(30)  # adjust this wait time as needed
            ask_to_gpt(file_path, prompt, gpt_api_model)
        elif "8192" in str(e):  # maximum context tokens (file size) exceeded
print(f"File Size Exceeds: {str(e)}")
with open("blackList.txt", 'a', encoding='utf-8') as f:
f.write(file_path+"\n")
else:
print(f"Response Error: {str(e)}")
def get_response_java_files(gpt_api_model):
blackListFile = "blackList.txt"
if os.path.exists(blackListFile):
with open(blackListFile, 'r', encoding='utf-8') as b:
blackList = b.read()
else:
blackList = ""
for root, dirs, files in os.walk("."):
for file in files:
if "_after_" in file and file.endswith(".java"):
file_path = os.path.join(root, file)
response_file_path = file_path.replace(".java", "_response.txt")
                # skip files that exceed the available token limit
if file_path in blackList:
continue
                # skip test files
if "_test_" in file_path:
continue
                # skip files that already have a response
if not os.path.exists(response_file_path):
file_size = os.path.getsize(file_path)
# 30Kb - gpt4.0 / 80kb - gpt3.5
if file_size <= 30 * 1024:
prompt = "Can you check the following code and if there is any CWE or CVE related vulnerability, can you point it out the number of CWE or CVE and describe it?\n"
ask_to_gpt(file_path, prompt, gpt_api_model)
else:
print(f"Ignored {file_path} - File size exceeds 30KB")
def get_response_diff_files(gpt_api_model):
blackListFile = "blackList.txt"
if os.path.exists(blackListFile):
with open(blackListFile, 'r', encoding='utf-8') as b:
blackList = b.read()
else:
blackList = ""
for root, dirs, files in os.walk("."):
for file in files:
if "_diff_" in file and file.endswith(".java"):
file_path = os.path.join(root, file)
response_file_path = file_path.replace(".java", "_response.txt")
                # skip files that exceed the available token limit
if file_path in blackList:
continue
                # skip test files
if "_test_" in file_path:
continue
                # skip files that already have a response
if not os.path.exists(response_file_path):
file_size = os.path.getsize(file_path)
# 30Kb - gpt4.0 / 80kb - gpt3.5
if file_size <= 30 * 1024:
prompt = "Could you read the following diff file and, if there are any security vulnerabilities in changes, point out the related CWE or CVE numbers along with the reason they occurred?\n"
ask_to_gpt(file_path, prompt, gpt_api_model)
else:
print(f"Ignored {file_path} - File size exceeds 30KB")
# In[114]:
if __name__ == "__main__":
directories = ["guava"] # ["h2database", "bc-java", "pgjdbc", "junit4", "gson", "guava"]
working_directory = "commit-files"
gpt_api_model = "gpt-4" # gpt-3.5-turbo-16k
# commit_logger(directories)
openai.api_key = get_api_key()
    # assume this directory was already created by script 03
os.chdir(working_directory)
for directory in directories:
os.chdir(directory)
        # choose which task to run
# get_response_java_files(gpt_api_model)
get_response_diff_files(gpt_api_model)
os.chdir("..")
os.chdir("..")
# In[115]:
os.getcwd()
# In[111]:
os.chdir("..")
# In[32]:
os.chdir("VIChecker")
# In[ ]:
| [
"Could you read the following diff file and, if there are any security vulnerabilities in changes, point out the related CWE or CVE numbers along with the reason they occurred?\n",
"Can you check the following code and if there is any CWE or CVE related vulnerability, can you point it out the number of CWE or CVE and describe it?\n"
] |
2024-01-10 | baejaeho18/VIChecker | gpt-response-logger.py | import os
import subprocess
import json
import sys
import openai
import time
import re
# Read API key from file
with open('../../api_key.txt', 'r', encoding='utf-8') as file:
api_key = file.read().strip()
# Set the OpenAI API key
openai.api_key = api_key
def ask_to_gpt(file_path, content):
# Make a question using the API
# question = "Can you check the following code and if there is any CWE or CVE related vulnerability, can you point it out the number of CWE or CVE and describe it?\n" + content
question = "Could you read the following diff file and, if there are any security vulnerabilities in changes, point out the related CWE or CVE numbers along with the reason they occurred?\n" + content
response = openai.ChatCompletion.create(
# model="gpt-3.5-turbo-16k",
model="gpt-4",
messages=[
{"role": "user", "content": question}
],
)
# generated answer
answer = response['choices'][0]['message']['content'].strip()
# Record the answer
try:
with open(file_path, 'w', encoding='utf-8') as f:
f.write(answer)
except:
print(f"Answer Write 오류")
def tchecker(errmsg, file):
if "10000" in errmsg and "min" in errmsg: # 780000?
time.sleep(20) # 20 / 30 / 40
if "8192" in errmsg: # 16385
with open("blackList.txt", 'a', encoding='utf-8') as f:
f.write(file)
def read_java_files(directory):
for root, dirs, files in os.walk(directory):
for file in files:
if "_after" in file and file.endswith(".java"):
file_path = os.path.join(root, file)
response_file_path = file_path.replace(".java", "_response.txt")
if not os.path.exists(response_file_path):
file_size = os.path.getsize(file_path)
if file_size <= 80 * 1024: # 80KB in bytes
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
print(file_path)
try:
ask_to_gpt(response_file_path, content)
except Exception as e:
print(f"Response Error: {str(e)}")
else:
print(f"Ignored {file_path} - File size exceeds 80KB")
def read_diff_files(directory):
with open("blackList.txt", 'r', encoding='utf-8') as b:
blackList = b.read()
for root, dirs, files in os.walk(directory):
for file in files:
if "_diff_" in file and file.endswith(".java"):
file_path = os.path.join(root, file)
if file_path in blackList:
continue
response_file_path = file_path.replace(".java", "_response.txt")
if not os.path.exists(response_file_path):
file_size = os.path.getsize(file_path)
if file_size <= 23 * 1024: # 80KB in bytes
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
print(file_path)
try:
ask_to_gpt(response_file_path, content)
except Exception as e:
print(f"Response Error: {str(e)}")
tchecker(str(e), file_path)
else:
print(f"Ignored {file_path} - File size exceeds 23KB")
# read Java files from the current directory
dir_path = os.getcwd()
read_diff_files(dir_path)
# read_java_files(dir_path) | [
"Could you read the following diff file and, if there are any security vulnerabilities in changes, point out the related CWE or CVE numbers along with the reason they occurred?\nPLACEHOLDER"
] |
2024-01-10 | baejaeho18/VIChecker | %23%23responses~pgjdbc-commits~gpt-response-logger.py | import os
import subprocess
import json
import sys
import openai
import re
# Read API key from file
with open('../../api_key.txt', 'r', encoding='utf-8') as file:
api_key = file.read().strip()
# Set the OpenAI API key
openai.api_key = api_key
def ask_to_gpt(file_path, content):
# Make a question using the API
question = "Can you check the following code and if there is any CWE or CVE related vulnerability, can you point it out the number of CWE or CVE and describe it?\n" + content
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-16k",
messages=[
{"role": "user", "content": question}
],
)
# generated answer
answer = response['choices'][0]['message']['content'].strip()
# Record the answer
try:
with open(file_path, 'w', encoding='utf-8') as f:
f.write(answer)
except:
print(f"Answer Write 오류")
def remove_comments(code):
# Remove /* ... */ style comments
code = re.sub(r'/\*.*?\*/', '', code, flags=re.DOTALL)
# Remove // style comments
code = re.sub(r'//.*', '', code)
return code
def read_java_files(directory):
for root, dirs, files in os.walk(directory):
for file in files:
if "_before" in file and file.endswith(".java"):
file_path = os.path.join(root, file)
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
content = remove_comments(content)
print(file_path)
# ask_to_gpt(file_path.replace(".java", "_response.txt"), content)
try:
ask_to_gpt(file_path.replace(".java", "_response.txt"), content)
except:
print("Response Error")
# read Java files from the current directory
dir_path = os.getcwd()
read_java_files(dir_path) | [
"Can you check the following code and if there is any CWE or CVE related vulnerability, can you point it out the number of CWE or CVE and describe it?\nPLACEHOLDER"
] |
2024-01-10 | baejaeho18/VIChecker | ask-gpt.py | #!/usr/bin/env python
# coding: utf-8
# In[6]:
import subprocess
import openai
# Read API key from file
with open('../../api_key.txt', 'r', encoding='utf-8') as file:
api_key = file.read().strip()
# Set the OpenAI API key
openai.api_key = api_key
# In[2]:
def ask_to_gpt(question, content):
# Make a question using the API
question = question + content
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-16k",
messages=[
{"role": "user", "content": question}
],
)
# generated answer
answer = response['choices'][0]['message']['content'].strip()
# Record the answer
try:
with open("response.txt", 'a', encoding='utf-8') as f:
f.write(answer)
f.write("\n====================\n")
except:
print(f"Answer Write 오류")
def read_files(file_path):
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
return content
# In[3]:
# read Java files from the current directory
file_path = "73_after_BCECPrivateKey.java"
content = read_files(file_path)
question = "Can you check the following code and if there is any CWE or CVE related vulnerability, can you point it out the number of CWE or CVE and describe it?\n"
ask_to_gpt(question, content)
# In[4]:
question = "Doesn't the following code have potential CWE vulnerability, which isCWE-497: Exposed Public Static Mutable Field?"
ask_to_gpt(question, content)
content = content + read_files("response.txt")
question = "Can you pointed out the line where those vulnerability occurs?\n"
ask_to_gpt(question, content)
# In[ ]:
| [] |
2024-01-10 | baejaeho18/VIChecker | 06_VIChecker.py | #!/usr/bin/env python
# coding: utf-8
# In[2]:
import os
import nbimporter
def back_to_root():
root = os.getcwd()
os.chdir(root)
# In[3]:
import _1_git_repo_cloner as cloner
back_to_root()
# clone every git repo path listed in projectList.txt
project_list_file = "projectLIst.txt"
project_urls = cloner.get_repo_path(project_list_file)
# store the cloned project directory names in project_list
project_list = cloner.clone_project_and_update_gitignore(project_urls)
# In[ ]:
import _2_git_commit_logger as logger
back_to_root()
# save commit information as JSON in the commit-logs folder
output_directory = "commit-logs"
if not os.path.exists(output_directory):
os.makedirs(output_directory)
# commit_logger(project_list)  # currently done manually
logger.add_changed_file_list(project_list, output_directory)
# In[ ]:
import _3_git_file_tracker as tracker
back_to_root()
# save each commit's modified Java files and their diff files into the commit-files folder
output_directory = "commit-files"
if not os.path.exists(output_directory):
os.makedirs(output_directory)
tracker.file_tracker(project_list, output_directory)
# In[ ]:
import openai
import _4_gpt_responser as responser
back_to_root()
# query the gpt-4 model with each java/diff file and save the response to {path}_response.txt
gpt_api_model = "gpt-4-0613"
openai.api_key = responser.get_api_key()
working_directory = "commit-files"
os.chdir(working_directory)
for directory in project_list:
os.chdir(directory)
responser.get_response_java_files(gpt_api_model)
# responser.get_response_diff_files(gpt_api_model)
os.chdir("..")
os.chdir("..")
# In[27]:
import _5_save2sheet as storer
back_to_root()
# save the query-file and response pairs to an Excel file in the commit-sheets folder
# TODO: it would be nice to push these to a Google Sheet automatically later
# be careful to verify that the data is saved correctly!
output_directory = "commit-sheets"
if not os.path.exists(output_directory):
os.makedirs(output_directory)
working_directory = "commit-files"
os.chdir(working_directory)
for directory in project_list:
os.chdir(directory)
storer.commits_to_sheet(directory, output_directory, "after")
os.chdir("..")
print("모든 작업이 완료되었습니다.")
os.chdir("..")
# In[14]:
back_to_root()
# In[26]:
os.getcwd()
# In[22]:
print(project_list)
# In[25]:
os.chdir("..")
# In[24]:
project_list = ["pgjdbc"]
# In[ ]:
| [] |
2024-01-10 | notbogdan/Voyager-Autobrowser | voyager~agents~critic.py | from typing import List
from voyager.prompts import load_prompt
from voyager.utils.json_utils import fix_and_parse_json
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
from ..types import BrowserEvent
class CriticAgent:
def __init__(
self,
model_name: str = "gpt-3.5-turbo",
temperature: float = 0,
request_timout: int = 120,
mode: str = "auto",
):
self.llm = ChatOpenAI(
model_name=model_name,
temperature=temperature,
request_timeout=request_timout,
)
assert mode in ["auto", "manual"]
self.mode = mode
def render_system_message(self):
system_message = SystemMessage(content=load_prompt("critic"))
return system_message
def render_human_message(self, *, events: List[BrowserEvent], task, context):
if not events[-1]["log"] == "observe":
raise ValueError("Last event must be an observe event")
event = events[-1]
current_url = event["currentUrl"]
workspace = event["workspace"]
clickables = event["clickables"]
text = event["text"]
for i, (event) in enumerate(events):
if event["log"] == "error":
print(f"\033[31mCritic Agent: Error occurs {event['error']}\033[0m")
return None
observation = ""
if not current_url or not workspace or not clickables or not text:
raise ValueError("Missing required fields")
observation += f"URL: {current_url}\n\n"
observation += f"Workspace: {workspace}\n\n"
observation += f"Clickables: {clickables}\n\n"
observation += f"Text: {text}\n\n"
observation += f"Task: {task}\n\n"
if context:
observation += f"Context: {context}\n\n"
else:
observation += f"Context: None\n\n"
print(f"\033[31m****Critic Agent human message****\n{observation}\033[0m")
return HumanMessage(content=observation)
def human_check_task_success(self):
confirmed = False
success = False
critique = ""
while not confirmed:
success = input("Success? (y/n)")
success = success.lower() == "y"
critique = input("Enter your critique:")
print(f"Success: {success}\nCritique: {critique}")
confirmed = input("Confirm? (y/n)") in ["y", ""]
return success, critique
def ai_check_task_success(self, messages, max_retries=5):
if max_retries == 0:
print(
"\033[31mFailed to parse Critic Agent response. Consider updating your prompt.\033[0m"
)
return False, ""
if messages[1] is None:
return False, ""
critic = self.llm(messages).content
print(f"\033[31m****Critic Agent ai message****\n{critic}\033[0m")
try:
response = fix_and_parse_json(critic)
assert response["success"] in [True, False]
if "critique" not in response:
response["critique"] = ""
return response["success"], response["critique"]
except Exception as e:
print(f"\033[31mError parsing critic response: {e} Trying again!\033[0m")
return self.ai_check_task_success(
messages=messages,
max_retries=max_retries - 1,
)
def check_task_success(
self, *, events, task, context, max_retries=5
):
human_message = self.render_human_message(
events=events,
task=task,
context=context,
)
messages = [
self.render_system_message(),
human_message,
]
if self.mode == "manual":
return self.human_check_task_success()
elif self.mode == "auto":
return self.ai_check_task_success(
messages=messages, max_retries=max_retries
)
else:
raise ValueError(f"Invalid critic agent mode: {self.mode}")
| [] |
2024-01-10 | notbogdan/Voyager-Autobrowser | voyager~agents~skill.py | import os
import voyager.utils as U
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import HumanMessage, SystemMessage
from langchain.vectorstores import Chroma
from voyager.prompts import load_prompt
from voyager.control_primitives import load_control_primitives
class SkillManager:
def __init__(
self,
model_name: str = "gpt-3.5-turbo",
temperature: float = 0,
retrieval_top_k: int = 5,
request_timout: int = 120,
ckpt_dir: str = "ckpt",
resume: bool = False,
):
self.llm = ChatOpenAI(
model_name=model_name,
temperature=temperature,
request_timeout=request_timout,
)
U.f_mkdir(f"{ckpt_dir}/skill/code")
U.f_mkdir(f"{ckpt_dir}/skill/description")
U.f_mkdir(f"{ckpt_dir}/skill/vectordb")
# programs for env execution
self.control_primitives = load_control_primitives()
if resume:
print(f"\033[33mLoading Skill Manager from {ckpt_dir}/skill\033[0m")
self.skills = U.load_json(f"{ckpt_dir}/skill/skills.json")
else:
self.skills = {}
self.retrieval_top_k = retrieval_top_k
self.ckpt_dir = ckpt_dir
self.vectordb = Chroma(
collection_name="skill_vectordb",
embedding_function=OpenAIEmbeddings(),
persist_directory=f"{ckpt_dir}/skill/vectordb",
)
assert self.vectordb._collection.count() == len(self.skills), (
f"Skill Manager's vectordb is not synced with skills.json.\n"
f"There are {self.vectordb._collection.count()} skills in vectordb but {len(self.skills)} skills in skills.json.\n"
f"Did you set resume=False when initializing the manager?\n"
f"You may need to manually delete the vectordb directory for running from scratch."
)
@property
def programs(self):
programs = ""
for skill_name, entry in self.skills.items():
programs += f"{entry['code']}\n\n"
for primitives in self.control_primitives:
programs += f"{primitives}\n\n"
return programs
def add_new_skill(self, info):
program_name = info["program_name"]
program_code = info["program_code"]
skill_description = self.generate_skill_description(program_name, program_code)
print(
f"\033[33mSkill Manager generated description for {program_name}:\n{skill_description}\033[0m"
)
if program_name in self.skills:
print(f"\033[33mSkill {program_name} already exists. Rewriting!\033[0m")
self.vectordb._collection.delete(ids=[program_name])
i = 2
while f"{program_name}V{i}.js" in os.listdir(f"{self.ckpt_dir}/skill/code"):
i += 1
dumped_program_name = f"{program_name}V{i}"
else:
dumped_program_name = program_name
self.vectordb.add_texts(
texts=[skill_description],
ids=[program_name],
metadatas=[{"name": program_name}],
)
self.skills[program_name] = {
"code": program_code,
"description": skill_description,
}
assert self.vectordb._collection.count() == len(
self.skills
), "vectordb is not synced with skills.json"
U.dump_text(
program_code, f"{self.ckpt_dir}/skill/code/{dumped_program_name}.js"
)
U.dump_text(
skill_description,
f"{self.ckpt_dir}/skill/description/{dumped_program_name}.txt",
)
U.dump_json(self.skills, f"{self.ckpt_dir}/skill/skills.json")
self.vectordb.persist()
def generate_skill_description(self, program_name, program_code):
messages = [
SystemMessage(content=load_prompt("skill")),
HumanMessage(
content=program_code
+ "\n\n"
+ f"The main function is `{program_name}`."
),
]
skill_description = f" // { self.llm(messages).content}"
return f"async function {program_name}() {{\n{skill_description}\n}}"
def retrieve_skills(self, query):
k = min(self.vectordb._collection.count(), self.retrieval_top_k)
if k == 0:
return []
print(f"\033[33mSkill Manager retrieving for {k} skills\033[0m")
docs_and_scores = self.vectordb.similarity_search_with_score(query, k=k)
print(
f"\033[33mSkill Manager retrieved skills: "
f"{', '.join([doc.metadata['name'] for doc, _ in docs_and_scores])}\033[0m"
)
skills = []
for doc, _ in docs_and_scores:
skills.append(self.skills[doc.metadata["name"]]["code"])
return skills
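
# Illustrative usage sketch; it assumes OPENAI_API_KEY is set, since both the
# description LLM and the Chroma embeddings call OpenAI. The browser snippet
# passed as program_code is a hypothetical example skill.
if __name__ == "__main__":
    manager = SkillManager(ckpt_dir="ckpt", resume=False)
    manager.add_new_skill(
        {
            "program_name": "openExampleSite",
            "program_code": (
                "async function openExampleSite(page) {\n"
                "  await page.goto('https://example.com');\n"
                "}"
            ),
        }
    )
    print(manager.retrieve_skills("open a website in the browser"))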
| [
"PLACEHOLDER\n\nThe main function is `PLACEHOLDER`."
] |
2024-01-10 | notbogdan/Voyager-Autobrowser | voyager~voyager.py | import copy
import json
import os
import time
from typing import Dict, List, Optional, Union
from langchain.schema import (
BaseMessage
)
from voyager.types import BrowserEvent, ProposedProgram
import voyager.utils as U
from .env import VoyagerEnv
from .agents import ActionAgent
from .agents import CriticAgent
from .agents import CurriculumAgent
from .agents import SkillManager
Conversation = tuple[str, str, str]
Conversations = List[Conversation]
Messages = List[BaseMessage]
class Voyager:
def __init__(
self,
server_port: int = 3000,
openai_api_key: str or None = "None",
max_iterations: int = 160,
env_request_timeout: int = 600,
reset_placed_if_failed: bool = False,
action_agent_model_name: str = "gpt-4",
action_agent_temperature: float = 0,
action_agent_task_max_retries: int = 4,
action_agent_show_chat_log: bool = True,
action_agent_show_execution_error: bool = True,
curriculum_agent_model_name: str = "gpt-4",
curriculum_agent_temperature: float = 0,
curriculum_agent_qa_model_name: str = "gpt-4",
curriculum_agent_qa_temperature: float = 0,
curriculum_agent_mode: str = "auto",
critic_agent_model_name: str = "gpt-4",
critic_agent_temperature: float = 0,
critic_agent_mode: str = "auto",
skill_manager_model_name: str = "gpt-4",
skill_manager_temperature: float = 0,
skill_manager_retrieval_top_k: int = 5,
openai_api_request_timeout: int = 240,
ckpt_dir: str = "ckpt",
skill_library_dir: str = "./skill_library",
resume: bool = False,
):
"""
The main class for Voyager.
Action agent is the iterative prompting mechanism in paper.
Curriculum agent is the automatic curriculum in paper.
Critic agent is the self-verification in paper.
Skill manager is the skill library in paper.
        :param openai_api_key: openai api key
        :param max_iterations: maximum number of learning iterations; increase
            this value for longer runs
:param env_request_timeout: how many seconds to wait for each step, if the code execution exceeds this time,
python side will terminate the connection and need to be resumed
:param action_agent_model_name: action agent model name
:param action_agent_temperature: action agent temperature
:param action_agent_task_max_retries: how many times to retry if failed
:param curriculum_agent_model_name: curriculum agent model name
:param curriculum_agent_temperature: curriculum agent temperature
:param curriculum_agent_qa_model_name: curriculum agent qa model name
:param curriculum_agent_qa_temperature: curriculum agent qa temperature
:param curriculum_agent_mode: "auto" for automatic curriculum, "manual" for human curriculum
:param critic_agent_model_name: critic agent model name
:param critic_agent_temperature: critic agent temperature
:param critic_agent_mode: "auto" for automatic critic ,"manual" for human critic
:param skill_manager_model_name: skill manager model name
:param skill_manager_temperature: skill manager temperature
:param skill_manager_retrieval_top_k: how many skills to retrieve for each task
:param openai_api_request_timeout: how many seconds to wait for openai api
:param ckpt_dir: checkpoint dir
:param skill_library_dir: skill library dir
:param resume: whether to resume from checkpoint
"""
# init env
self.env = VoyagerEnv(
server_port=server_port,
request_timeout=env_request_timeout,
)
self.reset_placed_if_failed = reset_placed_if_failed
self.max_iterations = max_iterations
# set openai api key
os.environ["OPENAI_API_KEY"] = openai_api_key
# init agents
self.action_agent = ActionAgent(
model_name=action_agent_model_name,
temperature=action_agent_temperature,
request_timout=openai_api_request_timeout,
ckpt_dir=ckpt_dir,
resume=resume,
chat_log=action_agent_show_chat_log,
execution_error=action_agent_show_execution_error,
)
self.action_agent_task_max_retries = action_agent_task_max_retries
self.curriculum_agent = CurriculumAgent(
model_name=curriculum_agent_model_name,
temperature=curriculum_agent_temperature,
qa_model_name=curriculum_agent_qa_model_name,
qa_temperature=curriculum_agent_qa_temperature,
request_timout=openai_api_request_timeout,
ckpt_dir=ckpt_dir,
resume=resume,
mode=curriculum_agent_mode,
)
self.critic_agent = CriticAgent(
model_name=critic_agent_model_name,
temperature=critic_agent_temperature,
request_timout=openai_api_request_timeout,
mode=critic_agent_mode,
)
self.skill_manager = SkillManager(
model_name=skill_manager_model_name,
temperature=skill_manager_temperature,
retrieval_top_k=skill_manager_retrieval_top_k,
request_timout=openai_api_request_timeout,
ckpt_dir=skill_library_dir if skill_library_dir else ckpt_dir,
resume=True if resume else False,
)
self.recorder = U.EventRecorder(ckpt_dir=ckpt_dir, resume=resume)
self.resume = resume
self.action_agent_rollout_num_iter = -1
self.task = None
self.context = ""
self.messages: Messages = []
self.conversations: Conversations = []
self.last_events: List[BrowserEvent] = []
def reset(self, task, context="", reset_env=True):
self.action_agent_rollout_num_iter = 0
self.task = task
self.context = context
if reset_env:
self.env.reset()
events = self.env.step("")
skills = self.skill_manager.retrieve_skills(query=self.context)
print(
f"\033[33mRender Action Agent system message with {len(skills)} skills\033[0m"
)
system_message = self.action_agent.render_system_message(skills=skills)
human_message = self.action_agent.render_human_message(
events=events, code="", task=self.task, context=context, critique=""
)
self.messages = [system_message, human_message]
print(
f"\033[32m****Action Agent human message****\n{human_message.content}\033[0m"
)
assert len(self.messages) == 2
self.conversations = []
return self.messages
def close(self):
self.env.close()
def step(self):
if self.action_agent_rollout_num_iter < 0:
raise ValueError("Agent must be reset before stepping")
ai_message = self.action_agent.llm(self.messages)
print(f"\033[34m****Action Agent ai message****\n{ai_message.content}\033[0m")
self.conversations.append(
(self.messages[0].content, self.messages[1].content, ai_message.content)
)
parsed_result = self.action_agent.process_ai_message(message=ai_message)
success = False
if self.task is None:
raise ValueError("Task must be set before stepping")
if isinstance(parsed_result, dict):
code = parsed_result["program_code"] + ";" + parsed_result["exec_code"]
events = self.env.step(
code,
programs=self.skill_manager.programs,
)
self.recorder.record(events, self.task)
success, critique = self.critic_agent.check_task_success(
events=events,
task=self.task,
context=self.context,
max_retries=5,
)
new_skills = self.skill_manager.retrieve_skills(
query=self.context
)
system_message = self.action_agent.render_system_message(skills=new_skills)
human_message = self.action_agent.render_human_message(
events=events,
code=parsed_result["program_code"],
task=self.task,
context=self.context,
critique=critique,
)
self.last_events = copy.deepcopy(events)
self.messages = [system_message, human_message]
else:
assert isinstance(parsed_result, str)
self.recorder.record([], self.task)
print(f"\033[34m{parsed_result} Trying again!\033[0m")
assert len(self.messages) == 2
self.action_agent_rollout_num_iter += 1
done = (
self.action_agent_rollout_num_iter >= self.action_agent_task_max_retries
or success
)
info: Dict[str, Union[str, bool, Conversations]] = {
"task": self.task,
"success": success,
"conversations": self.conversations,
}
if success:
if not isinstance(parsed_result, dict):
raise ValueError(
f"parsed_result must be a proposed program on success, got {type(parsed_result)}"
)
info["program_code"] = parsed_result["program_code"]
info["program_name"] = parsed_result["program_name"]
else:
print(
f"\033[32m****Action Agent human message****\n{self.messages[-1].content}\033[0m"
)
return self.messages, 0, done, info
def rollout(self, *, task, context, reset_env=True):
self.reset(task=task, context=context, reset_env=reset_env)
while True:
messages, reward, done, info = self.step()
if done:
break
return messages, reward, done, info
def learn(self, reset_env=True):
self.env.reset()
self.last_events = self.env.step("")
while True:
if self.recorder.iteration > self.max_iterations:
print("Iteration limit reached")
break
task, context = self.curriculum_agent.propose_next_task(
events=self.last_events,
max_retries=5,
)
print(
f"\033[35mStarting task {task} for at most {self.action_agent_task_max_retries} times\033[0m"
)
try:
messages, reward, done, info = self.rollout(
task=task,
context=context,
reset_env=reset_env,
)
except Exception as e:
time.sleep(3)
info = {
"task": task,
"success": False,
}
last_event = self.last_events[-1]
if not isinstance(last_event, dict):
raise ValueError(f"There must be browser events present")
self.env.reset(
options={
"clickables": last_event["clickables"],
"workspace": last_event["workspace"],
"currentUrl": last_event["currentUrl"],
"text": last_event["text"],
}
)
print("Your last round rollout terminated due to error:")
print(f"\033[41m{e}\033[0m")
if info["success"]:
self.skill_manager.add_new_skill(info)
self.curriculum_agent.update_exploration_progress(info)
print(
f"\033[35mCompleted tasks: {', '.join(self.curriculum_agent.completed_tasks)}\033[0m"
)
print(
f"\033[35mFailed tasks: {', '.join(self.curriculum_agent.failed_tasks)}\033[0m"
)
return {
"completed_tasks": self.curriculum_agent.completed_tasks,
"failed_tasks": self.curriculum_agent.failed_tasks,
"skills": self.skill_manager.skills,
}
def decompose_task(self, task):
# if not self.last_events:
# self.last_events = self.env.reset(
# options={
# "mode": "hard",
# "wait_ticks": self.env_wait_ticks,
# }
# )
# return self.curriculum_agent.decompose_task(task, self.last_events)
print("Not implemented - task decomposition")
def inference(self, task=None, sub_goals=[], reset_mode="hard", reset_env=True):
# if not task and not sub_goals:
# raise ValueError("Either task or sub_goals must be provided")
# if not sub_goals:
# sub_goals = self.decompose_task(task)
# self.env.reset(
# options={
# "mode": reset_mode,
# "wait_ticks": self.env_wait_ticks,
# }
# )
# self.curriculum_agent.completed_tasks = []
# self.curriculum_agent.failed_tasks = []
# self.last_events = self.env.step("")
# while self.curriculum_agent.progress < len(sub_goals):
# next_task = sub_goals[self.curriculum_agent.progress]
# context = self.curriculum_agent.get_task_context(next_task)
# print(
# f"\033[35mStarting task {next_task} for at most {self.action_agent_task_max_retries} times\033[0m"
# )
# messages, reward, done, info = self.rollout(
# task=next_task,
# context=context,
# reset_env=reset_env,
# )
# self.curriculum_agent.update_exploration_progress(info)
# print(
# f"\033[35mCompleted tasks: {', '.join(self.curriculum_agent.completed_tasks)}\033[0m"
# )
# print(
# f"\033[35mFailed tasks: {', '.join(self.curriculum_agent.failed_tasks)}\033[0m"
# )
print("Not implemented - task inference")
| [] |
2024-01-10 | notbogdan/Voyager-Autobrowser | voyager~agents~action.py | import re
import time
from typing import List, Optional, Union, TypedDict
from ..types import BrowserEvent, FunctionDescription, ProposedProgram
import voyager.utils as U
from javascript import require
from langchain.chat_models import ChatOpenAI
from langchain.prompts import SystemMessagePromptTemplate
from langchain.schema import AIMessage, HumanMessage, SystemMessage
from voyager.prompts import load_prompt
from voyager.control_primitives_context import load_control_primitives_context
class ActionAgent:
def __init__(
self,
model_name: str = "gpt-3.5-turbo",
temperature: float = 0,
request_timout: int = 120,
ckpt_dir: str = "ckpt",
resume: bool = False,
chat_log: bool = True,
execution_error: bool = True,
):
self.ckpt_dir = ckpt_dir
self.chat_log = chat_log
self.execution_error = execution_error
U.f_mkdir(f"{ckpt_dir}/action")
# TODO - resume with workspace?
# if resume:
# print(f"\033[32mLoading Action Agent from {ckpt_dir}/action\033[0m")
# self.chest_memory = U.load_json(f"{ckpt_dir}/action/chest_memory.json")
# else:
# self.chest_memory = {}
self.llm = ChatOpenAI(
model_name=model_name,
temperature=temperature,
request_timeout=request_timout,
)
def render_system_message(self, skills=[]):
system_template = load_prompt("action_template")
if not isinstance(system_template, str):
raise ValueError("system_template must be a string")
# FIXME: Hardcoded control_primitives
base_skills = [
"searchGoogle",
"writeFile",
"readFile",
]
programs = "\n\n".join(load_control_primitives_context(base_skills) + skills)
response_format = load_prompt("action_response_format")
system_message_prompt = SystemMessagePromptTemplate.from_template(
system_template
)
system_message = system_message_prompt.format(
programs=programs, response_format=response_format
)
assert isinstance(system_message, SystemMessage)
return system_message
def render_human_message(
self, *, events: List[BrowserEvent], code="", task="", context="", critique=""
):
chat_messages = []
error_messages = []
currentUrl = None
workspace = None
clickables = None
text = None
if not events[-1]["log"] == "observe":
raise ValueError("Last event must be observe")
for i, (event) in enumerate(events):
if event["log"] == "observe":
currentUrl = event["currentUrl"]
workspace = event["workspace"]
clickables = event["clickables"]
text = event["text"]
elif event["log"] == "error":
error_messages.append(event["error"])
else:
chat_messages.append(event["log"])
if not currentUrl or not workspace or not clickables or not text:
raise ValueError("Missing information in events")
observation = ""
if code:
observation += f"Code from the last round:\n{code}\n\n"
else:
observation += f"Code from the last round: No code in the first round\n\n"
if self.execution_error:
if error_messages:
error = "\n".join(error_messages)
observation += f"Execution error:\n{error}\n\n"
else:
observation += f"Execution error: No error\n\n"
if self.chat_log:
if chat_messages:
chat_log = "\n".join(chat_messages)
observation += f"Chat log: {chat_log}\n\n"
else:
observation += f"Chat log: None\n\n"
observation += f"Current URL: {currentUrl}\n\n"
observation += f"Workspace: {', '.join(workspace)}\n\n"
observation += f"Clickables: {clickables}\n\n"
observation += f"Text: {text}\n\n"
observation += f"Task: {task}\n\n"
if context:
observation += f"Context: {context}\n\n"
else:
observation += f"Context: None\n\n"
if critique:
observation += f"Critique: {critique}\n\n"
else:
observation += f"Critique: None\n\n"
return HumanMessage(content=observation)
def process_ai_message(self, message) -> Union[ProposedProgram, Exception, str] :
assert isinstance(message, AIMessage)
retry = 3
error = None
while retry > 0:
try:
babel = require("@babel/core")
babel_generator = require("@babel/generator").default
code_pattern = re.compile(r"```(?:javascript|js)(.*?)```", re.DOTALL)
code = "\n".join(code_pattern.findall(message.content))
parsed = babel.parse(code)
functions: List[FunctionDescription] = []
assert len(list(parsed.program.body)) > 0, "No functions found"
for i, node in enumerate(parsed.program.body):
if node.type != "FunctionDeclaration":
continue
node_type = (
"AsyncFunctionDeclaration"
if node["async"]
else "FunctionDeclaration"
)
functions.append(
{
"name": node.id.name,
"type": node_type,
"body": babel_generator(node).code,
"params": list(node["params"]),
}
)
# find the last async function
main_function: Union[None, FunctionDescription] = None
for function in reversed(functions):
if function["type"] == "AsyncFunctionDeclaration":
main_function = function
break
if not main_function:
raise Exception("No async function found. Your main function must be async.")
program_code = "\n\n".join(function["body"] for function in functions)
exec_code = f"await {main_function['name']}();"
return {
"program_code": program_code,
"program_name": main_function["name"],
"exec_code": exec_code,
}
except Exception as e:
retry -= 1
error = e
time.sleep(1)
return f"Error parsing action response (before program execution): {error}"
# def summarize_chatlog(self, events):
# def filter_item(message: str):
# craft_pattern = r"I cannot make \w+ because I need: (.*)"
# craft_pattern2 = (
# r"I cannot make \w+ because there is no crafting table nearby"
# )
# mine_pattern = r"I need at least a (.*) to mine \w+!"
# if re.match(craft_pattern, message):
# return re.match(craft_pattern, message).groups()[0]
# elif re.match(craft_pattern2, message):
# return "a nearby crafting table"
# elif re.match(mine_pattern, message):
# return re.match(mine_pattern, message).groups()[0]
# else:
# return ""
# chatlog = set()
# for event_type, event in events:
# if event_type == "onChat":
# item = filter_item(event["onChat"])
# if item:
# chatlog.add(item)
# return "I also need " + ", ".join(chatlog) + "." if chatlog else ""
| [
"action_template"
] |
2024-01-10 | notbogdan/Voyager-Autobrowser | voyager~agents~curriculum.py | from __future__ import annotations
import random
import re
from typing import List
import voyager.utils as U
from voyager.prompts import load_prompt
from voyager.utils.json_utils import fix_and_parse_json
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import HumanMessage, SystemMessage
from langchain.vectorstores import Chroma
from ..types import BrowserEvent
class CurriculumAgent:
def __init__(
self,
model_name: str = "gpt-4",
temperature: float = 0,
qa_model_name: str = "gpt-4",
qa_temperature: float = 0,
request_timout: int = 120,
ckpt_dir: str = "ckpt",
resume: bool = False,
mode: str = "auto",
):
self.llm = ChatOpenAI(
model_name=model_name,
temperature=temperature,
request_timeout=request_timout,
)
self.qa_llm = ChatOpenAI(
model_name=qa_model_name,
temperature=qa_temperature,
request_timeout=request_timout,
)
assert mode in [
"auto",
"manual",
], f"mode {mode} not supported"
self.mode = mode
self.ckpt_dir = ckpt_dir
U.f_mkdir(f"{ckpt_dir}/curriculum/vectordb")
if resume:
print(f"\033[35mLoading Curriculum Agent from {ckpt_dir}/curriculum\033[0m")
self.completed_tasks = U.load_json(
f"{ckpt_dir}/curriculum/completed_tasks.json"
)
self.failed_tasks = U.load_json(f"{ckpt_dir}/curriculum/failed_tasks.json")
self.qa_cache = U.load_json(f"{ckpt_dir}/curriculum/qa_cache.json")
else:
self.completed_tasks = []
self.failed_tasks = []
self.qa_cache = {}
# vectordb for qa cache
self.qa_cache_questions_vectordb = Chroma(
collection_name="qa_cache_questions_vectordb",
embedding_function=OpenAIEmbeddings(),
persist_directory=f"{ckpt_dir}/curriculum/vectordb",
)
assert self.qa_cache_questions_vectordb._collection.count() == len(
self.qa_cache
), (
f"Curriculum Agent's qa cache question vectordb is not synced with qa_cache.json.\n"
f"There are {self.qa_cache_questions_vectordb._collection.count()} questions in vectordb "
f"but {len(self.qa_cache)} questions in qa_cache.json.\n"
f"Did you set resume=False when initializing the agent?\n"
f"You may need to manually delete the qa cache question vectordb directory for running from scratch.\n"
)
@property
def curriculum_observations(self):
return [
"context",
"current_url",
"clickables",
"workspace",
"text",
"completed_tasks",
"failed_tasks",
]
@property
def progress(self):
return len(self.completed_tasks)
def render_system_message(self):
system_message = SystemMessage(content=load_prompt("curriculum"))
assert isinstance(system_message, SystemMessage)
return system_message
def render_observation(self, *, events: List[BrowserEvent]):
if events[-1]["log"] != "observe":
raise AssertionError("Last event must be observe")
event = events[-1]
current_url = event["currentUrl"]
clickables = event["clickables"]
workspace = event["workspace"]
text = event["text"]
completed_tasks = (
", ".join(self.completed_tasks) if self.completed_tasks else "None"
)
failed_tasks = ", ".join(self.failed_tasks) if self.failed_tasks else "None"
observation = {
"context": "",
"current_url": f"Current URL: {current_url}\n\n",
"clickables": f"Clickables: {clickables}\n\n",
"workspace": f"Workspace: {workspace}\n\n",
"text": f"Text: {text}\n\n",
"completed_tasks": f"Completed tasks so far: {completed_tasks}\n\n",
"failed_tasks": f"Failed tasks that are too hard: {failed_tasks}\n\n",
}
return observation
def render_human_message(self, *, events):
content = ""
observation = self.render_observation(
events=events
)
if self.progress >= 0:
questions, answers = self.run_qa(
events=events
)
i = 1
for question, answer in zip(questions, answers):
if "Answer: Unknown" in answer or "language model" in answer:
continue
observation["context"] += f"Question {i}: {question}\n"
observation["context"] += f"{answer}\n\n"
i += 1
if i > 5:
break
for key in self.curriculum_observations:
content += observation[key]
print(f"\033[35m****Curriculum Agent human message****\n{content}\033[0m")
return HumanMessage(content=content)
def propose_next_task(self, *, events, max_retries=5):
if self.progress == 0 and self.mode == "auto":
task = "Navigate to google"
context = "We want to go to google to being our journey."
return task, context
messages = [
self.render_system_message(),
self.render_human_message(
events=events
),
]
if self.mode == "auto":
return self.propose_next_ai_task(messages=messages, max_retries=max_retries)
elif self.mode == "manual":
return self.propose_next_manual_task()
else:
raise ValueError(f"Invalid curriculum agent mode: {self.mode}")
def propose_next_ai_task(self, *, messages, max_retries=5):
if max_retries == 0:
raise RuntimeError("Max retries reached, failed to propose ai task.")
curriculum = self.llm(messages).content
print(f"\033[31m****Curriculum Agent ai message****\n{curriculum}\033[0m")
try:
response = self.parse_ai_message(curriculum)
assert "next_task" in response
context = self.get_task_context(response["next_task"])
return response["next_task"], context
except Exception as e:
print(
f"\033[35mError parsing curriculum response: {e}. Trying again!\033[0m"
)
return self.propose_next_ai_task(
messages=messages,
max_retries=max_retries - 1,
)
def parse_ai_message(self, message):
task = ""
for line in message.split("\n"):
if line.startswith("Task:"):
task = line[5:].replace(".", "").strip()
assert task, "Task not found in Curriculum Agent response"
return {"next_task": task}
def propose_next_manual_task(self):
confirmed = False
task, context = "", ""
while not confirmed:
task = input("Enter task: ")
context = input("Enter context: ")
print(f"Task: {task}\nContext: {context}")
confirmed = input("Confirm? (y/n)").lower() in ["y", ""]
return task, context
def update_exploration_progress(self, info):
task = info["task"]
if info["success"]:
print(f"\033[35mCompleted task {task}.\033[0m")
self.completed_tasks.append(task)
else:
print(
f"\033[35mFailed to complete task {task}. Skipping to next task.\033[0m"
)
self.failed_tasks.append(task)
# clean up tasks and dump to disk
self.clean_up_tasks()
def clean_up_tasks(self):
updated_completed_tasks = []
# record repeated failed tasks
updated_failed_tasks = self.failed_tasks
# dedup but keep order
for task in self.completed_tasks:
if task not in updated_completed_tasks:
updated_completed_tasks.append(task)
# remove completed tasks from failed tasks
for task in updated_completed_tasks:
while task in updated_failed_tasks:
updated_failed_tasks.remove(task)
self.completed_tasks = updated_completed_tasks
self.failed_tasks = updated_failed_tasks
# dump to json
U.dump_json(
self.completed_tasks, f"{self.ckpt_dir}/curriculum/completed_tasks.json"
)
U.dump_json(self.failed_tasks, f"{self.ckpt_dir}/curriculum/failed_tasks.json")
def decompose_task(self, task, events):
messages = [
SystemMessage(
content=load_prompt("curriculum_task_decomposition"),
),
self.render_human_message(events=events),
HumanMessage(content=f"Final task: {task}"),
]
print(
f"\033[31m****Curriculum Agent task decomposition****\nFinal task: {task}\033[0m"
)
response = self.llm(messages).content
print(f"\033[31m****Curriculum Agent task decomposition****\n{response}\033[0m")
return fix_and_parse_json(response)
def run_qa(self, *, events):
questions_new, _ = self.run_qa_step1_ask_questions(
events=events
)
questions = []
answers = []
for question in questions_new:
if self.qa_cache_questions_vectordb._collection.count() > 0:
docs_and_scores = (
self.qa_cache_questions_vectordb.similarity_search_with_score(
question, k=1
)
)
if docs_and_scores and docs_and_scores[0][1] < 0.05:
question_cached = docs_and_scores[0][0].page_content
assert question_cached in self.qa_cache
answer_cached = self.qa_cache[question_cached]
questions.append(question_cached)
answers.append(answer_cached)
continue
answer = self.run_qa_step2_answer_questions(question=question)
assert question not in self.qa_cache
self.qa_cache[question] = answer
self.qa_cache_questions_vectordb.add_texts(
texts=[question],
)
U.dump_json(self.qa_cache, f"{self.ckpt_dir}/curriculum/qa_cache.json")
self.qa_cache_questions_vectordb.persist()
questions.append(question)
answers.append(answer)
assert len(questions_new) == len(questions) == len(answers)
return questions, answers
def get_task_context(self, task):
question = (
f"How to {task}?"
)
if question in self.qa_cache:
answer = self.qa_cache[question]
else:
answer = self.run_qa_step2_answer_questions(question=question)
self.qa_cache[question] = answer
self.qa_cache_questions_vectordb.add_texts(
texts=[question],
)
U.dump_json(self.qa_cache, f"{self.ckpt_dir}/curriculum/qa_cache.json")
self.qa_cache_questions_vectordb.persist()
context = f"Question: {question}\n{answer}"
return context
def render_system_message_qa_step1_ask_questions(self):
return SystemMessage(content=load_prompt("curriculum_qa_step1_ask_questions"))
def render_human_message_qa_step1_ask_questions(self, *, events):
observation = self.render_observation(
events=events
)
content = ""
for key in self.curriculum_observations:
content += observation[key]
return HumanMessage(content=content)
def run_qa_step1_ask_questions(self, *, events):
current_url = events[-1]["currentUrl"]
questions = [
f"What can I do with {current_url}?",
]
concepts = [current_url]
messages = [
self.render_system_message_qa_step1_ask_questions(),
self.render_human_message_qa_step1_ask_questions(
events=events
),
]
qa_response = self.qa_llm(messages).content
try:
# Regex pattern to extract question and concept pairs
pattern = r"Question \d+: (.+)\nConcept \d+: (.+)"
# Extracting all question and concept pairs from the text
pairs = re.findall(pattern, qa_response)
# Storing each question and concept in separate lists
questions_new = [pair[0] for pair in pairs]
concepts_new = [pair[1] for pair in pairs]
assert len(questions_new) == len(concepts_new)
questions.extend(questions_new)
concepts.extend(concepts_new)
except Exception as e:
print(
f"\033[35mError parsing curriculum response for "
f"QA step 1 ask questions: {e}.\033[0m"
)
return questions, concepts
def render_system_message_qa_step2_answer_questions(self):
return SystemMessage(
content=load_prompt("curriculum_qa_step2_answer_questions")
)
def render_human_message_qa_step2_answer_questions(self, question):
content = f"Question: {question}"
return HumanMessage(content=content)
def run_qa_step2_answer_questions(self, question):
messages = [
self.render_system_message_qa_step2_answer_questions(),
self.render_human_message_qa_step2_answer_questions(question=question),
]
print(f"\033[35mCurriculum Agent Question: {question}\033[0m")
qa_answer = self.qa_llm(messages).content
print(f"\033[31mCurriculum Agent {qa_answer}\033[0m")
return qa_answer
| [
"Final task: PLACEHOLDER",
"curriculum_qa_step2_answer_questions",
"curriculum",
"curriculum_qa_step1_ask_questions",
"curriculum_task_decomposition"
] |
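A minimal sketch of the answer-caching pattern used in run_qa above, with difflib standing in for the vector-store similarity search (the original matches questions through a Chroma collection with a 0.05 distance threshold); the QACache class and answer_fn below are illustrative names, not part of the original code.

from difflib import SequenceMatcher

class QACache:
    """Toy stand-in for the vectordb-backed question cache in run_qa."""
    def __init__(self, answer_fn, min_similarity=0.95):
        self.cache = {}              # question -> answer
        self.answer_fn = answer_fn   # the expensive call, e.g. an LLM
        self.min_similarity = min_similarity

    def ask(self, question):
        # Serve a previously answered, near-identical question from the cache.
        for cached_q, cached_a in self.cache.items():
            if SequenceMatcher(None, question, cached_q).ratio() >= self.min_similarity:
                return cached_q, cached_a
        # Cache miss: answer once and remember it.
        answer = self.answer_fn(question)
        self.cache[question] = answer
        return question, answer

cache = QACache(answer_fn=lambda q: f"(answer to: {q})")
print(cache.ask("What can I do with https://example.com?"))
print(cache.ask("What can I do with https://example.com ?"))  # near-duplicate, served from cache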
2024-01-10 | Siris2314/Llama2YT | llama-2-setup.py |
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS, Chroma
from langchain.document_loaders import TextLoader, DirectoryLoader
from langchain.memory import ConversationSummaryBufferMemory
from langchain import PromptTemplate
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceBgeEmbeddings
from tqdm import tqdm
from langchain.chains.summarize import load_summarize_chain
from utils import get_prompt, delete_folder_contents, wrap_text_preserve_newlines, process_llm_response
import chromadb
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from experimentyt import get_text
import os
import sys
if(len(sys.argv) < 4):
raise Exception("Usage: python3 llama-2-setup.py <youtube-url> <output-file> <query>")
folder_to_clear = "db/chroma_db"
delete_folder_contents(folder_to_clear)
#Check if the text file exists
if not os.path.exists(f"{sys.argv[2]}.txt"):
get_text(sys.argv[1], sys.argv[2])
else:
print("Text file already exists. Skipping download.")
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path="./llama-2-7b-chat.Q5_K_S.gguf", temperature=0.01, n_ctx=4000, verbose=True)
instruction = "Given the context that has been provided. \n {context}, Answer the following question: \n{question}"
sys_prompt = """You are an expert in YouTube video question and answering.
You will be given context to answer from. Answer the questions with as much detail as possible and only in paragraphs.
In case you do not know the answer, you can say "I don't know" or "I don't understand".
In all other cases provide an answer to the best of your ability."""
prompt_sys = get_prompt(instruction, sys_prompt)
template = PromptTemplate(template=prompt_sys, input_variables=['context', 'question'])
# Retrieval Augmented Generation Example Below
def data_loader():
loader = TextLoader(f'{sys.argv[2]}.txt')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512,
chunk_overlap=20)
texts = text_splitter.split_documents(documents)
return texts
data = data_loader()
full_response = {}
rel_docs = []
def build_query(query):
texts = data_loader()
embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-base-en",
model_kwargs={'device': 'cpu'}, encode_kwargs={'normalize_embeddings': True})
vectordb = Chroma.from_documents(texts,embeddings, persist_directory='db/chroma_db',collection_name='yt_summarization')
vectordb.persist()
retriever = vectordb.as_retriever(search_kwargs={"k": 7})
rel_docs.append(retriever.get_relevant_documents(query, k=7))
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={'prompt': template})
query = query
llm_response = qa_chain(query)
full_response = llm_response
return full_response
ans = build_query(' '.join(sys.argv[3:]))
print(rel_docs)
process_llm_response(ans)
| [
"question",
"context",
"You are an expert in YouTube video question and answering.\nYou will be given context to answer from. Answer the questions with as much detail as possible and only in paragraphs.\nIn case you do not know the answer, you can say \"I don't know\" or \"I don't understand\".\nIn all other cases provide an answer to the best of your ability.",
"I don't know",
"I don't understand"
] |
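A quick standalone illustration of the chunking step in data_loader above, assuming the same langchain RecursiveCharacterTextSplitter API as the imports in that file; the transcript string is a made-up placeholder, not real YouTube output.

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=20)
transcript = "word " * 400  # stand-in for the text written by get_text(url, output_file)
chunks = splitter.split_text(transcript)  # split_documents() is the Document-based variant used above
print(f"{len(chunks)} chunks, first chunk has {len(chunks[0])} characters")
# Each chunk stays under 512 characters and overlaps its neighbour by up to 20 characters,
# which keeps retrieved passages self-contained for the RetrievalQA chain.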
2024-01-10 | Siris2314/Llama2YT | experimental~pineconetest.py | import pinecone
import time
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
embed_model = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2",
model_kwargs={'device': 'cpu'},
encode_kwargs={'device': 'cpu', 'batch_size': 32}
)
def data_loader():
loader = TextLoader('../test.txt')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=400,
chunk_overlap=20, separators=["\n\n", "\n", " ",""])
texts = text_splitter.split_documents(documents)
return texts
texts = data_loader()
pinecone.init(
)
index_name = 'llama2test'
if index_name not in pinecone.list_indexes():
    pinecone.create_index(name=index_name, dimension=384, metric='cosine')
while not pinecone.describe_index(index_name).status['ready']:
time.sleep(1)
index = pinecone.Index(index_name=index_name)
print(index.describe_index_stats()) | [] |
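The pinecone.init(...) call above is left with no arguments, presumably because the credentials were stripped; with the classic pinecone-client API used in this file it would normally be configured roughly as below. The key and environment values are placeholders, not taken from the original.

import os
import pinecone

# Placeholder credentials -- substitute your own project's values.
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ.get("PINECONE_ENVIRONMENT", "us-west1-gcp"),
)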
2024-01-10 | Siris2314/Llama2YT | topic-summarization.py |
from langchain.llms import LlamaCpp, CTransformers
from langchain import PromptTemplate, LLMChain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS, Chroma
from langchain.document_loaders import TextLoader, DirectoryLoader
from langchain.memory import ConversationSummaryBufferMemory
from langchain import PromptTemplate
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceBgeEmbeddings
from tqdm import tqdm
from langchain.chains.summarize import load_summarize_chain
from utils import get_prompt, delete_folder_contents, wrap_text_preserve_newlines, process_llm_response
import chromadb
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from experimentyt import get_text
import os
import sys
from transcript_data import generate_topic_transcript
folder_to_clear = "db/chroma_db"
delete_folder_contents(folder_to_clear)
generate_topic_transcript(sys.argv[1])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path="./llama-2-7b-chat.ggmlv3.q8_0.bin", temperature=0.01, n_ctx=4000, verbose=True)
instruction = "Given the context that has been provided. \n {context}, Answer the following question: \n{question}"
sys_prompt = """You are an expert in YouTube topic summarization.
You will be given context to answer from. Answer the questions with as much detail as possible.
In case you do not know the answer, you can say "I don't know" or "I don't understand".
"""
prompt_sys = get_prompt(instruction, sys_prompt)
template = PromptTemplate(template=prompt_sys, input_variables=['context', 'question'])
# Retrieval Augmented Generation Example Below
def data_loader():
loader = DirectoryLoader('experimental_topic', glob="**/*.txt",loader_cls=TextLoader, show_progress=True)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512,
chunk_overlap=20)
texts = text_splitter.split_documents(documents)
return texts
data = data_loader()
print(data)
full_response = {}
rel_docs = []
def build_query(query):
texts = data_loader()
embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en",
model_kwargs={'device': 'cpu'}, encode_kwargs={'normalize_embeddings': True})
vectordb = Chroma.from_documents(texts,embeddings, persist_directory='db/chroma_db',collection_name='yt_summarization')
vectordb.persist()
retriever = vectordb.as_retriever(search_kwargs={"k": 5})
rel_docs.append(retriever.get_relevant_documents(query, k=5))
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={'prompt': template})
query = query
llm_response = qa_chain(query)
full_response = llm_response
return full_response
ans = build_query(' '.join(sys.argv[2:]))
print(rel_docs)
process_llm_response(ans)
| [
"question",
"You are an expert in YouTube topic summarization.\nYou will be given context to answer from. Answer the questions with as much detail as possible.\nIn case you do not know the answer, you can say \"I don't know\" or \"I don't understand\".\n",
"context",
"I don't know",
"I don't understand"
] |
2024-01-10 | hien-p/HealAI-bot | components~imagePage.py | from pickle import bytes_types
import streamlit as st
import streamlit.components.v1 as components
from PIL import Image
import pytesseract
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
from langchain import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.llms import OpenAI
import os
from dotenv import load_dotenv
load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv("OPENAI_API_KEY")
def gptchain(inputs,transcript):
# define the output
response_schemas = [
ResponseSchema(name="sentiment", description="a sentiment label based on the past content. It should be either Negative or Positive"),
ResponseSchema(name="idea", description="""
If the sentiment is Negative then return the reason why the user shouldn't be interested in this content, along with its danger.
If the sentiment is Positive then return the encouragement to make the user interest in this content even more, along with other relevant content.
For Neutral then return a instruct for a better content.
"""),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
# prompt template
template = """You are good at detecting harmful, content and also encouraging good and useful one. All labels that you know are Negative and Positive.
Given a past content, your job is to answer as best as possible.
The past content:
{chat_history}
Instructions:
{format_instructions}
The question: {question}."""
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(template=template, input_variables=["chat_history", "question"],
partial_variables={"format_instructions": format_instructions})
memory = ConversationBufferMemory(memory_key="chat_history")
memory.chat_memory.add_user_message("This is what I have read recently")
memory.chat_memory.add_user_message(transcript)
# Build chain
model = OpenAI(verbose=True, temperature=0.0)
sentiment_chain = LLMChain(llm=model, prompt=prompt, verbose=True, memory=memory, output_key='result')
ans = sentiment_chain({"question": inputs})
return ans['result']
def uploadfile():
uploaded_file = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
print("Uploaded File :", uploaded_file)
if uploaded_file is not None:
#content = Image.open(uploaded_file)
data = pytesseract.image_to_string(Image.open(uploaded_file))
st.write(data)
st.write(gptchain("What do you think about the past content",data))
def renderimagePage():
st.title("Sentiment Analysis your image")
components.html("""<hr style="height:3px;border:none;color:#333;background-color:#333; margin-bottom: 10px" /> """)
# st.markdown("### User Input Text Analysis")
st.subheader("Image Analysis")
st.text("Input an image you want:")
uploadfile()
| [
"question",
"chat_history",
"You are good at detecting harmful, content and also encouraging good and useful one. All labels that you know are Negative and Positive.\n Given a past content, your job is to answer as best as possible.\n The past content:\n {chat_history}\n\n Instructions:\n {format_instructions}\n The question: {question}.",
"format_instructions"
] |
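gptchain above builds a StructuredOutputParser but returns the raw LLM string; if the model follows the format instructions, the same parser can turn that string into a dict. A small sketch, where example_output is an invented model response rather than real output:

from langchain.output_parsers import StructuredOutputParser, ResponseSchema

schemas = [
    ResponseSchema(name="sentiment", description="Negative or Positive"),
    ResponseSchema(name="idea", description="advice that follows from the sentiment"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

example_output = """```json
{
    "sentiment": "Positive",
    "idea": "Keep reading content like this and share it with friends."
}
```"""
print(parser.parse(example_output))
# -> {'sentiment': 'Positive', 'idea': 'Keep reading content like this and share it with friends.'}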
2024-01-10 | hien-p/HealAI-bot | botchat.py |
import streamlit as st
from streamlit_chat import message
from dotenv import load_dotenv
import os
import datetime
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
SystemMessage,
HumanMessage,
AIMessage
)
from streamlit_extras.colored_header import colored_header
from apscheduler.schedulers.background import BackgroundScheduler
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
from langchain import LLMChain
st.set_page_config(
page_title="Healing",
page_icon="🤖"
)
def robo_avatar_component():
robo_html = "<div style='display: flex; flex-wrap: wrap; justify-content: center;'>"
# replace with your own array of strings to seed the DiceBear Avatars API
robo_avatar_seed = list(range(20, 25))
for i in robo_avatar_seed:
avatar_url = f"https://api.dicebear.com/5.x/bottts-neutral/svg?seed={i}"
robo_html += f"<img src='{avatar_url}' style='width: 50px; height: 50px; margin: 30px;'>"
robo_html += "</div>"
robo_html = """
<style>
@media (max-width: 800px) {
img {
max-width: calc((100% - 60px) / 6);
height: auto;
margin: 0 10px 10px 0;
}
}
</style>
""" + robo_html
st.markdown(robo_html, unsafe_allow_html=True)
# test user_input
def send_message():
now = datetime.datetime.now()
print( now.hour, now.minute)
return "who are you "
def init():
# Load the OpenAI API key from the environment variable
load_dotenv()
# test that the API key exists
if os.getenv("OPENAI_API_KEY") is None or os.getenv("OPENAI_API_KEY") == "":
print("OPENAI_API_KEY is not set")
exit(1)
else:
print("OPENAI_API_KEY is set")
# setup streamlit page
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
from langchain import LLMChain
response_schemas = [
ResponseSchema(name="options", description="a json array with {n_options} best option to solve the user question"),
ResponseSchema(name="reason", description="the reason sentiment why it good behind user answer"),
ResponseSchema(name="source", description="give lists the source used to answer the user's question, should be a website.")
]
template = """Answer the user question as best as possible, and you must speak in english.
{format_instructions}
The question: {question} """
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(template=template, input_variables=["question"],partial_variables={"format_instructions": format_instructions})
def play_chat():
init()
chat = ChatOpenAI(temperature=0)
solution_chain = LLMChain(llm=chat, prompt=prompt, output_key='result')
# Initialize message history
if "messages" not in st.session_state:
st.session_state.messages = [
SystemMessage(content="You are a friendly chatbot decrease mental health in social media!!!. "),
AIMessage(content="Give a note for user when their asking what you can do. You give functions that you can give a sentiment analysis for sentence and give your opinions to help user mental health")
]
st.header("HEAl.AI 🤟")
st.subheader("From Banh gio Team with love 🇻🇳")
# Update the message history
messages = st.session_state.get('messages', [])
# Display the message history in reverse order (bottom-up)
for i, msg in enumerate(messages):
if isinstance(msg, SystemMessage):
message(msg.content, is_user=False, key=str(i) + '_system')
elif isinstance(msg, AIMessage):
message(msg.content, is_user=False, key=str(i) + '_ai')
elif isinstance(msg, HumanMessage):
message(msg.content, is_user=True, key=str(i) + '_user')
# build conservation bot
# Create a placeholder to display the response
response_placeholder = st.empty()
# Handle user input
user_input = st.text_input("Your message: ")
if st.button("Send"):
user_input = user_input.strip()
if user_input:
with st.spinner("Thinking..."):
# Use the sentiment mode
#response = solution_chain({"question": user_input, "n_options": "3"})
#sentiment = response["result"]
#sentiment = response['result'].strip().replace('```json', '').strip()
#sentiment = sentiment.strip().replace('``', '').strip()
# Generate the chatbot response
response = chat(st.session_state.messages)
st.session_state.messages.append(HumanMessage(content=user_input))
st.session_state.messages.append(AIMessage(content=response.content))
#st.session_state.messages.append(AIMessage(content=sentiment))
# Update the response_placeholder with the new response
#response_placeholder.text(sentiment)
# Clear user_input after sending the message
st.session_state.user_input = ""
else:
# If the user input is empty, keep the previous input in the text input field
st.session_state.user_input = user_input
robo_avatar_component()
hide_streamlit_style = """
<style>
#MainMenu {visibility: hidden;}
footer {visibility: hidden;}
</style>
"""
st.markdown(hide_streamlit_style, unsafe_allow_html=True) | [
"question",
"Give a note for user when their asking what you can do. You give functions that you can give a sentiment analysis for sentence and give your opinions to help user mental health",
"Answer the user question as best as possible, and you must speak in english.\n{format_instructions}\nThe question: {question} ",
"format_instructions",
"You are a friendly chatbot decrease mental health in social media!!!. "
] |
2024-01-10 | hien-p/HealAI-bot | components~testimage.py | from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
from langchain import LLMChain
from langchain.llms import OpenAI
import os
from dotenv import load_dotenv
from textblob import TextBlob
load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv("OPENAI_API_KEY")
# define the output
response_schemas = [
ResponseSchema(name="sentiment", description="a sentiment label based on the user text. It should be either Negative, Positive or Neutral"),
ResponseSchema(name="reason", description="""
If the sentiment is Negative then return the reason why the user shouldn't have said that.
If the sentiment is Positive then return a compliment.
For Neutral then return a instruct for a better message.
"""),
ResponseSchema(name="reply", description="the best and friendliest replacement to the given user text")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
# prompt template
template = """You are good at detecting human emotion. All emotions you know are Negative, Positive and Neutral.
Given a human text, subjectivity and polarity, your job is to answer as best as possible.
Know that subjectivity is a measure of how much of a text is based on personal opinions or beliefs, rather than on facts.
It is a float value between 0 and 1, where 0 represents an objective text and 1 represents a highly subjective text.
Also know that polarity is a indicator for the sentiment of the given user text, negative value means Negative, positive value means Positive and 0 means Neutral.
{format_instructions}
User text: {text}
Subjectivity: {subjectivity}
Polarity: {polarity}"""
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(template=template, input_variables=["text","subjectivity","polarity"],
partial_variables={"format_instructions": format_instructions})
model = OpenAI(verbose=True, temperature=0.0)
# Build chain
sentiment_chain = LLMChain(llm=model, prompt=prompt, output_key='result')
userText = "You are idiot"
tb = TextBlob(userText)
subjectivity = tb.subjectivity
polarity = round(tb.polarity, 2)
ans = sentiment_chain({"text": userText, "polarity": polarity, "subjectivity": subjectivity})
print(ans)
# test emotions
emotions = ['Happy 😊','Sad 😔','Angry 😠','Surprise 😲','Fear 😨']
for emo in emotions:
emos = emo.split(" ")
des = f"""a js object contains two properties.
The first one is label: str // always return '{emos[1]}' """
schema = ResponseSchema(name=emos[0], description=des)
# define the output
response_schemas = [
ResponseSchema(name="sentiment", description="a sentiment label based on the user text. It should be either Negative, Positive or Neutral"),
ResponseSchema(name="reason", description="the reason behind your answer"),
ResponseSchema(name="reply", description="the best reply to the given user text")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
# prompt template
template = """You are good at detecting human emotion. All emotions you know are Negative, Positive and Neutral.
Given a human text, subjectivity and polarity, your job is to answer as best as possible.
Know that subjectivity is a measure of how much of a text is based on personal opinions or beliefs, rather than on facts.
It is a float value between 0 and 1, where 0 represents an objective text and 1 represents a highly subjective text.
Also know that polarity is a indicator for the sentiment of the given user text, negative value means Negative, positive value means Positive and 0 means Neutral.
{format_instructions}
User text: {text}
Subjectivity: {subjectivity}
Polarity: {polarity}"""
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(template=template, input_variables=["text","subjectivity","polarity"],
partial_variables={"format_instructions": format_instructions})
# Build chain
sentiment_chain = LLMChain(llm=model, prompt=prompt, output_key='result')
# print(sentiment_chain.)
ans = sentiment_chain({"text": userText, "polarity": polarity, "subjectivity": subjectivity}) | [
"subjectivity",
"format_instructions",
"You are good at detecting human emotion. All emotions you know are Negative, Positive and Neutral.\nGiven a human text, subjectivity and polarity, your job is to answer as best as possible.\nKnow that subjectivity is a measure of how much of a text is based on personal opinions or beliefs, rather than on facts. \nIt is a float value between 0 and 1, where 0 represents an objective text and 1 represents a highly subjective text.\nAlso know that polarity is a indicator for the sentiment of the given user text, negative value means Negative, positive value means Positive and 0 means Neutral.\n{format_instructions}\nUser text: {text}\nSubjectivity: {subjectivity}\nPolarity: {polarity}"
] |
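For reference, the subjectivity and polarity numbers fed into the prompts above come straight from TextBlob and can be inspected on their own; a minimal check (exact values depend on the TextBlob lexicon):

from textblob import TextBlob

tb = TextBlob("You are idiot")
print(round(tb.polarity, 2), round(tb.subjectivity, 2))
# polarity < 0 is read as Negative, == 0 as Neutral, > 0 as Positive,
# which is exactly the mapping the prompt template explains to the model.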
2024-01-10 | hien-p/HealAI-bot | components~TextPage.py | import streamlit as st
import streamlit.components.v1 as components
from textblob import TextBlob
from PIL import Image
import text2emotion as te
import plotly.graph_objects as go
import json
import re
# langchain
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
from langchain import LLMChain
from langchain.llms import OpenAI
# load model
import os
from dotenv import load_dotenv
load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv("OPENAI_API_KEY")
def plotPie(labels, values):
fig = go.Figure(
go.Pie(
labels = labels,
values = values,
hoverinfo = "label+percent",
textinfo = "value"
))
st.plotly_chart(fig)
def getPolarity(userText):
tb = TextBlob(userText)
polarity = round(tb.polarity, 2)
subjectivity = round(tb.subjectivity, 2)
if polarity>0:
return polarity, subjectivity, "Positive"
elif polarity==0:
return polarity, subjectivity, "Neutral"
else:
return polarity, subjectivity, "Negative"
def gpttoTextBlod(userText, polarity, subjectivity):
# define the output
response_schemas = [
ResponseSchema(name="sentiment", description="a sentiment label based on the user text. It should be either Negative, Positive or Neutral"),
ResponseSchema(name="reason", description="""
If the sentiment is Negative then return the reason why the user shouldn't have said that.
If the sentiment is Positive then return a compliment.
For Neutral then return a instruct for a better message.
"""),
ResponseSchema(name="Botideas", description="the best and friendliest replacement to the given user text")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
# prompt template
template = """You are good at detecting human emotion. All emotions you know are Negative, Positive and Neutral.
Given a human text, subjectivity and polarity, your job is to answer as best as possible.
Know that subjectivity is a measure of how much of a text is based on personal opinions or beliefs, rather than on facts.
It is a float value between 0 and 1, where 0 represents an objective text and 1 represents a highly subjective text.
Also know that polarity is a indicator for the sentiment of the given user text, negative value means Negative, positive value means Positive and 0 means Neutral.
{format_instructions}
User text: {text}
Subjectivity: {subjectivity}
Polarity: {polarity}"""
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(template=template, input_variables=["text","subjectivity","polarity"],
partial_variables={"format_instructions": format_instructions})
model = OpenAI(verbose=True, temperature=0.0)
# Build chain
sentiment_chain = LLMChain(llm=model, prompt=prompt, output_key='result')
ans = sentiment_chain({"text": userText, "polarity": polarity, "subjectivity": subjectivity})
return ans
# process data json
def parse_nested_json(text):
a = text.strip()
json_data = a.strip().replace('```json', '').strip()
json_data = json_data.strip().replace('```', '').strip()
data = json.loads(json_data)
return data
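# Example of what parse_nested_json handles: the model usually wraps its JSON in a
# ```json ... ``` fence, so the helper strips the fence markers before json.loads, e.g.
#   parse_nested_json('```json\n{"Happy": {"label": "😊", "score": 40}}\n```')
#   returns {'Happy': {'label': '😊', 'score': 40}}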
# text to emotion gpt
def emotiongpt(user_input):
response_schemas = []
emos = ['Happy 😊','Sad 😔','Angry 😠','Surprise 😲','Fear 😨']
for emo in emos:
emos = emo.split(" ")
des = f"""a js object contains 3 properties:
"label": str // always return '{emos[1]}'
"score": int // an emotional score from 1 to 100 for the {emos[0]}ness based the given user text
"reason": str// a reason for the score"""
#print(des)
schema = ResponseSchema(name=emos[0], description=des)
response_schemas.append(schema)
output_icon_parser = StructuredOutputParser.from_response_schemas(response_schemas)
# prompt template
template = """You are good at detecting human emotion, so good that you can measure the percentages of each emotion in Happy, Sad, Angry, Surprise, Fear from a given text.
The sum of the percentages of each emotion in Happy, Sad, Angry, Surprise, Fear must be 100.
Given an user text, your job is to answer as best as possible.
{format_instructions}
User text: {text}. This is the end of the user text."""
format_instructions = output_icon_parser.get_format_instructions()
prompt = PromptTemplate(template=template, input_variables=["text"],
partial_variables={"format_instructions": format_instructions})
# Build chain
model = OpenAI(verbose=True, temperature=0.0)
sentiment_icon_chain = LLMChain(llm=model, prompt=prompt, output_key='result')
ans = sentiment_icon_chain({"text": user_input})
return parse_nested_json(ans['result'])
def getSentiments(userText, type):
if(type == 'Positive/Negative/Neutral-TextBlob'):
polarity, subjectivity, status = getPolarity(userText)
folder = os.path.dirname(__file__)+"/images"
if(status=='Positive'):
image = Image.open(f'{folder}/positive.PNG')
elif(status == "Negative"):
image = Image.open(f'{folder}/negative.PNG')
else:
image = Image.open(f'{folder}/neutral.PNG')
col1, col2, col3 = st.columns(3)
col1.metric("Polarity", polarity, None)
col2.metric("Subjectivity", subjectivity, None)
col3.metric("Result", status, None)
# next
gptcomment = gpttoTextBlod(userText, polarity=polarity, subjectivity=subjectivity)
imgcol , commentcol = st.columns([1, 3])
commentcol.write(gptcomment)
imgcol.image(image, caption=status)
elif(type == 'text2emotion'):
data = emotiongpt(userText)
st.write(data)
emotion = dict(te.get_emotion(userText))
print(emotion)
# # print(emotion)
col1, col2, col3, col4, col5 = st.columns(5)
col1.metric("Happy 😊", data['Happy']['score'], None)
col2.metric("Sad 😔", data['Sad']['score'], None)
col3.metric("Angry 😠", data['Angry']['score'], None)
col4.metric("Fear 😨", data['Fear']['score'], None)
col5.metric("Surprise 😲", data['Surprise']['score'], None)
emos_list = ['Happy 😊','Sad 😔','Angry 😠','Surprise 😲','Fear 😨']
emos_values = [data['Happy']['score'], data['Sad']['score'], data['Angry']['score'],data['Fear']['score'],data['Surprise']['score']]
plotPie(emos_list,emos_values)
def renderPage():
st.title("Heal.AI")
folder = os.path.dirname(__file__)+"/images"
image = Image.open(f'{folder}/textSEN.png')
st.image(image)
components.html("""<hr style="height:3px;border:none;color:#333;background-color:#333; margin-bottom: 10px" /> """)
# st.markdown("### User Input Text Analysis")
st.subheader("User Input Text Analysis")
st.text("Analyzing text data given by the user and find sentiments.")
st.text("")
userText = st.text_input('User Input', placeholder='Input text HERE')
st.text("")
type = st.selectbox(
'Type of analysis',
('Positive/Negative/Neutral-TextBlob', 'text2emotion'))
st.text("")
if st.button('Predict'):
if(userText!="" and type!=None):
st.text("")
st.components.v1.html("""
<h3 style="color: #0284c7; font-family: Source Sans Pro, sans-serif; font-size: 28px; margin-bottom: 10px; margin-top: 50px;">Result</h3>
""", height=100)
getSentiments(userText, type)
hide_streamlit_style = """
<style>
#MainMenu {visibility: hidden;}
footer {visibility: hidden;}
</style>
"""
st.markdown(hide_streamlit_style, unsafe_allow_html=True) | [
"subjectivity",
"You are good at detecting human emotion, so good that you can measure the percentages of each emotion in Happy, Sad, Angry, Surprise, Fear from a given text. \n The sum of the percentages of each emotion in Happy, Sad, Angry, Surprise, Fear must be 100.\n Given an user text, your job is to answer as best as possible.\n {format_instructions}\n User text: {text}. This is the end of the user text.",
"format_instructions",
"You are good at detecting human emotion. All emotions you know are Negative, Positive and Neutral.\n Given a human text, subjectivity and polarity, your job is to answer as best as possible.\n Know that subjectivity is a measure of how much of a text is based on personal opinions or beliefs, rather than on facts. \n It is a float value between 0 and 1, where 0 represents an objective text and 1 represents a highly subjective text.\n Also know that polarity is a indicator for the sentiment of the given user text, negative value means Negative, positive value means Positive and 0 means Neutral.\n {format_instructions}\n User text: {text}\n Subjectivity: {subjectivity}\n Polarity: {polarity}"
] |
2024-01-10 | msull/personal-streamlit-dash | shared-src~chat_session.py | from dataclasses import dataclass, field
from typing import Optional
from typing import TypedDict
from logzero import logger
import openai
@dataclass
class ChatSession:
    initial_system_message: Optional[str] = None
reinforcement_system_msg: Optional[str] = None
history: list = field(default_factory=list)
model: str = "gpt-3.5-turbo-0613"
def user_says(self, message):
self.history.append({"role": "user", "content": message})
def system_says(self, message):
self.history.append({"role": "system", "content": message})
def assistant_says(self, message):
self.history.append({"role": "assistant", "content": message})
def get_ai_response(self, initial_system_msg: Optional[str] = None, reinforcement_system_msg: Optional[str] = None):
initial_system_msg = initial_system_msg or self.initial_system_message
reinforcement_system_msg = reinforcement_system_msg or self.reinforcement_system_msg
chat_history = self.history[:]
# add the initial system message describing the AI's role
if initial_system_msg:
chat_history.insert(0, {"role": "system", "content": initial_system_msg})
if reinforcement_system_msg:
chat_history.append({"role": "system", "content": reinforcement_system_msg})
logger.info('Generating AI ChatCompletion')
logger.debug(chat_history)
response = openai.ChatCompletion.create(model=self.model, messages=chat_history)
logger.debug(response)
return response
class FlaggedInputError(RuntimeError):
pass
def check_for_flagged_content(msg: str):
response = openai.Moderation.create(msg)
if response.results[0].flagged:
raise FlaggedInputError()
| [] |
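A short usage sketch for the ChatSession helper above; it assumes the legacy openai.ChatCompletion API that this module imports and an OPENAI_API_KEY already configured for the openai package.

session = ChatSession(
    initial_system_message="You are a terse assistant for a personal dashboard.",
    reinforcement_system_msg="Answer in one sentence.",
)
check_for_flagged_content("What can this dashboard do?")  # raises FlaggedInputError if moderation flags it
session.user_says("What can this dashboard do?")
response = session.get_ai_response()
session.assistant_says(response["choices"][0]["message"]["content"])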
2024-01-10 | quantumlib/Qualtran | qualtran~bloqs~chemistry~thc~prepare.py | # Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PREPARE for the molecular tensor hypercontraction (THC) hamiltonian"""
from functools import cached_property
from typing import Dict, Set, Tuple, TYPE_CHECKING
import cirq
import numpy as np
from attrs import field, frozen
from numpy.typing import NDArray
from qualtran import (
Bloq,
bloq_example,
BloqBuilder,
BloqDocSpec,
Register,
SelectionRegister,
Signature,
SoquetT,
)
from qualtran.bloqs.arithmetic import (
EqualsAConstant,
GreaterThanConstant,
LessThanConstant,
LessThanEqual,
ToContiguousIndex,
)
from qualtran.bloqs.basic_gates import Hadamard, Ry, Toffoli, XGate
from qualtran.bloqs.basic_gates.swap import CSwap
from qualtran.bloqs.chemistry.black_boxes import QROAM
from qualtran.bloqs.multi_control_multi_target_pauli import MultiControlPauli
from qualtran.bloqs.on_each import OnEach
from qualtran.bloqs.select_and_prepare import PrepareOracle
from qualtran.bloqs.select_swap_qrom import SelectSwapQROM
from qualtran.bloqs.swap_network import CSwap
from qualtran.cirq_interop import CirqGateAsBloq
from qualtran.linalg.lcu_util import preprocess_lcu_coefficients_for_reversible_sampling
if TYPE_CHECKING:
from qualtran.resource_counting import BloqCountT, SympySymbolAllocator
@frozen
class UniformSuperpositionTHC(Bloq):
r"""Prepare uniform superposition state for THC.
$$
|0\rangle^{\otimes 2\log(M+1)} \rightarrow \sum_{\mu\le\nu}^{M} |\mu\rangle|\nu\rangle
+ \sum_{\mu}^{N/2}|\mu\rangle|\nu=M+1\rangle,
$$
where $M$ is the THC auxiliary dimension, and $N$ is the number of spin orbitals.
The toffoli complexity of this gate should be $10 \log(M+1) + 2 b_r - 9$.
Currently it is a good deal larger due to:
1. inverting inequality tests should not need more toffolis.
2. We are not using phase-gradient gate toffoli cost for Ry rotations.
3. Small differences in quoted vs implemented comparator costs.
See: https://github.com/quantumlib/Qualtran/issues/390
Args:
num_mu: THC auxiliary index dimension $M$
num_spin_orb: number of spin orbitals $N$
Registers:
mu: $\mu$ register.
nu: $\nu$ register.
succ: ancilla flagging success of amplitude amplification.
nu_eq_mp1: ancillas for flagging if $\nu = M+1$.
rot: The ancilla to be rotated for amplitude amplification.
References:
[Even more efficient quantum computations of chemistry through
tensor hypercontraction](https://arxiv.org/pdf/2011.03494.pdf). Eq. 29.
"""
num_mu: int
num_spin_orb: int
adjoint: bool = False
@cached_property
def signature(self) -> Signature:
return Signature(
[
Register("mu", bitsize=self.num_mu.bit_length()),
Register("nu", bitsize=self.num_mu.bit_length()),
Register("nu_eq_mp1", bitsize=1),
Register("succ", bitsize=1),
Register("rot", bitsize=1),
]
)
def build_composite_bloq(
self,
bb: 'BloqBuilder',
mu: SoquetT,
nu: SoquetT,
succ: SoquetT,
nu_eq_mp1: SoquetT,
rot: SoquetT,
) -> Dict[str, 'SoquetT']:
# If we introduce comparators using out of place adders these will be left/right registers.
# See: https://github.com/quantumlib/Qualtran/issues/390
lte_mu_nu, lte_nu_mp1, gt_mu_n, junk = bb.split(bb.allocate(4))
num_bits_mu = self.num_mu.bit_length()
# 1. Prepare uniform superposition over all mu and nu
mu = bb.add(OnEach(num_bits_mu, Hadamard()), q=mu)
nu = bb.add(OnEach(num_bits_mu, Hadamard()), q=nu)
# 2. Rotate an ancilla by `an angle`, appropriately chosen to be related
# to the amount of data we actually want to load (upper triangle +
# one-body)
data_size = self.num_mu * (self.num_mu + 1) // 2 + self.num_spin_orb // 2
angle = np.arccos(1 - 2 ** np.floor(np.log2(data_size)) / data_size)
rot = bb.add(Ry(angle), q=rot)
        # 3. nu <= mu + 1 (zero indexing we use mu)
lt_gate = LessThanConstant(num_bits_mu, self.num_mu)
nu, lte_nu_mp1 = bb.add(lt_gate, x=nu, target=lte_nu_mp1)
# 4. mu <= nu (upper triangular)
lte_gate = LessThanEqual(num_bits_mu, num_bits_mu)
mu, nu, lte_mu_nu = bb.add(lte_gate, x=mu, y=nu, target=lte_mu_nu)
# 5. nu == M (i.e. flag one-body contribution)
nu, nu_eq_mp1 = bb.add(
EqualsAConstant(num_bits_mu, self.num_mu + 1), x=nu, target=nu_eq_mp1
)
# 6. nu > N / 2 (flag out of range for one-body bits)
mu, gt_mu_n = bb.add(
GreaterThanConstant(num_bits_mu, self.num_spin_orb // 2), x=mu, target=gt_mu_n
)
# 7. Control off of 5 and 6 to not prepare if these conditions are met
(nu_eq_mp1, gt_mu_n), junk = bb.add(Toffoli(), ctrl=[nu_eq_mp1, gt_mu_n], target=junk)
        # 6. Reflect on comparators, rotated qubit and |+>.
ctrls = bb.join(np.array([rot, lte_nu_mp1, lte_mu_nu]))
ctrls, junk = bb.add(
MultiControlPauli(cvs=(1, 1, 1), target_gate=cirq.Z), controls=ctrls, target=junk
)
(rot, lte_nu_mp1, lte_mu_nu) = bb.split(ctrls)
        # We now undo comparators and rotations and repeat the steps
nu, lte_nu_mp1 = bb.add(lt_gate, x=nu, target=lte_nu_mp1)
mu, nu, lte_mu_nu = bb.add(lte_gate, x=mu, y=nu, target=lte_mu_nu)
nu, nu_eq_mp1 = bb.add(
EqualsAConstant(num_bits_mu, self.num_mu + 1), x=nu, target=nu_eq_mp1
)
mu, gt_mu_n = bb.add(
GreaterThanConstant(num_bits_mu, self.num_spin_orb // 2), x=mu, target=gt_mu_n
)
(nu_eq_mp1, gt_mu_n), junk = bb.add(Toffoli(), ctrl=[nu_eq_mp1, gt_mu_n], target=junk)
rot = bb.add(Ry(-angle), q=rot)
mu = bb.add(OnEach(num_bits_mu, Hadamard()), q=mu)
nu = bb.add(OnEach(num_bits_mu, Hadamard()), q=nu)
ctrls, rot = bb.add(
MultiControlPauli(((1,) * num_bits_mu + (1,) * num_bits_mu), cirq.Z),
controls=bb.join(np.concatenate([bb.split(mu), bb.split(nu)])),
target=rot,
)
# amp = trg[0]
mu_nu = bb.split(ctrls)
mu = bb.join(mu_nu[:num_bits_mu])
nu = bb.join(mu_nu[num_bits_mu:])
mu = bb.add(OnEach(num_bits_mu, Hadamard()), q=mu)
nu = bb.add(OnEach(num_bits_mu, Hadamard()), q=nu)
nu, lte_nu_mp1 = bb.add(lt_gate, x=nu, target=lte_nu_mp1)
mu, nu, lte_mu_nu = bb.add(lte_gate, x=mu, y=nu, target=lte_mu_nu)
nu, nu_eq_mp1 = bb.add(
EqualsAConstant(num_bits_mu, self.num_mu + 1), x=nu, target=nu_eq_mp1
)
mu, gt_mu_n = bb.add(
GreaterThanConstant(num_bits_mu, self.num_spin_orb // 2), x=mu, target=gt_mu_n
)
(nu_eq_mp1, gt_mu_n), junk = bb.add(Toffoli(), ctrl=[nu_eq_mp1, gt_mu_n], target=junk)
ctrls = bb.join(np.array([lte_nu_mp1, lte_mu_nu, junk]))
ctrls, succ = bb.add(
MultiControlPauli(cvs=(1, 1, 1), target_gate=cirq.X), controls=ctrls, target=succ
)
lte_nu_mp1, lte_mu_nu, junk = bb.split(ctrls)
(nu_eq_mp1, gt_mu_n), junk = bb.add(Toffoli(), ctrl=[nu_eq_mp1, gt_mu_n], target=junk)
nu, lte_nu_mp1 = bb.add(lt_gate, x=nu, target=lte_nu_mp1)
mu, nu, lte_mu_nu = bb.add(lte_gate, x=mu, y=nu, target=lte_mu_nu)
mu, gt_mu_n = bb.add(
GreaterThanConstant(num_bits_mu, self.num_spin_orb // 2), x=mu, target=gt_mu_n
)
junk = bb.add(XGate(), q=junk)
bb.free(bb.join(np.array([lte_mu_nu, lte_nu_mp1, gt_mu_n, junk])))
out_regs = {'mu': mu, 'nu': nu, 'succ': succ, 'nu_eq_mp1': nu_eq_mp1, 'rot': rot}
return out_regs
@frozen
class PrepareTHC(PrepareOracle):
    r"""State Preparation for THC Hamiltonian.
Prepares the state
$$
\frac{1}{\sqrt{\lambda}}|+\rangle|+\rangle\left[
\sum_\ell^{N/2} \sqrt{t_\ell}|\ell\rangle|M+1\rangle
+ \frac{1}{\sqrt{2}} \sum_{\mu\le\nu}^M \sqrt{\zeta_{\mu\nu}} |\mu\rangle|\nu\rangle
\right].
$$
Note we use UniformSuperpositionTHC as a subroutine as part of this bloq in
contrast to the reference which keeps them separate.
Args:
num_mu: THC auxiliary index dimension $M$
num_spin_orb: number of spin orbitals $N$
alt_mu: Alternate values for mu indices.
alt_nu: Alternate values for nu indices.
alt_theta: Alternate values for theta indices.
theta: Signs of lcu coefficients.
keep: keep values.
keep_bitsize: number of bits for keep register for coherent alias sampling.
Registers:
mu: $\mu$ register.
nu: $\nu$ register.
plus_mn: plus state for controlled swaps on mu/nu.
plus_a / plus_b: plus state for controlled swaps on spins.
sigma: ancilla register for alias sampling.
rot: ancilla register for rotation for uniform superposition state.
succ: success flag qubit from uniform state preparation
nu_eq_mp1: flag for if $nu = M+1$
theta: sign register.
s: Contiguous index register.
alt_mn: Register to store alt mu and nu values.
alt_theta: Register for alternate theta values.
keep: keep_bitsize-sized register for the keep values from coherent alias sampling.
less_than: Single qubit ancilla for alias sampling.
extra_ctrl: An extra control register for producing a multi-controlled CSwap.
References:
[Even more efficient quantum computations of chemistry through
tensor hypercontraction](https://arxiv.org/pdf/2011.03494.pdf) Fig. 2 and Fig. 3.
"""
num_mu: int
num_spin_orb: int
alt_mu: Tuple[int, ...] = field(repr=False)
alt_nu: Tuple[int, ...] = field(repr=False)
alt_theta: Tuple[int, ...] = field(repr=False)
theta: Tuple[int, ...] = field(repr=False)
keep: Tuple[int, ...] = field(repr=False)
keep_bitsize: int
adjoint: bool = False
@classmethod
def from_hamiltonian_coeffs(
cls, t_l: NDArray[np.float64], zeta: NDArray[np.float64], num_bits_state_prep: int = 8
) -> 'PrepareTHC':
"""Factory method to build PrepareTHC from Hamiltonian coefficients.
Args:
t_l: One body hamiltonian eigenvalues.
zeta: THC central tensor.
num_bits_state_prep: The number of bits for the state prepared during alias sampling.
Returns:
Constructed PrepareTHC object.
"""
assert len(t_l.shape) == 1
assert len(zeta.shape) == 2
num_mu = zeta.shape[0]
num_spat = t_l.shape[0]
triu_indices = np.triu_indices(num_mu)
num_ut = len(triu_indices[0])
flat_data = np.abs(np.concatenate([zeta[triu_indices], t_l]))
thetas = [int(t) for t in (1 - np.sign(flat_data)) // 2]
alt, keep, mu = preprocess_lcu_coefficients_for_reversible_sampling(
flat_data, epsilon=2**-num_bits_state_prep / len(flat_data)
)
num_up_t = len(triu_indices[0])
alt_mu = []
alt_nu = []
alt_theta = []
for k in alt:
if k < num_up_t:
# if k < n * (n + 1) / 2 we are dealing with mu / nu indices
alt_mu.append(int(triu_indices[0][k]))
alt_nu.append(int(triu_indices[1][k]))
else:
# else we are dealing with the one-body bit
alt_mu.append(int(k - num_ut))
alt_nu.append(int(num_mu))
alt_theta.append(thetas[k])
return PrepareTHC(
num_mu,
2 * num_spat,
alt_mu=tuple(alt_mu),
alt_nu=tuple(alt_nu),
alt_theta=tuple(alt_theta),
theta=tuple(thetas),
keep=tuple(keep),
keep_bitsize=mu,
)
@cached_property
def selection_registers(self) -> Tuple[SelectionRegister, ...]:
return (
SelectionRegister(
"mu", bitsize=(self.num_mu).bit_length(), iteration_length=self.num_mu + 1
),
SelectionRegister(
"nu", bitsize=(self.num_mu).bit_length(), iteration_length=self.num_mu + 1
),
SelectionRegister("plus_mn", bitsize=1),
SelectionRegister("plus_a", bitsize=1),
SelectionRegister("plus_b", bitsize=1),
SelectionRegister("sigma", bitsize=self.keep_bitsize),
SelectionRegister("rot", bitsize=1),
)
@cached_property
def junk_registers(self) -> Tuple[SelectionRegister, ...]:
data_size = self.num_spin_orb // 2 + self.num_mu * (self.num_mu + 1) // 2
log_mu = self.num_mu.bit_length()
return (
Register('succ', bitsize=1),
Register('nu_eq_mp1', bitsize=1),
Register('theta', bitsize=1),
Register('s', bitsize=(data_size - 1).bit_length()),
Register('alt_mn', bitsize=log_mu, shape=(2,)),
Register('alt_theta', bitsize=1),
Register('keep', bitsize=self.keep_bitsize),
Register('less_than', bitsize=1),
Register('extra_ctrl', bitsize=1),
)
def build_composite_bloq(
self,
bb: 'BloqBuilder',
mu: SoquetT,
nu: SoquetT,
plus_mn: SoquetT,
plus_a: SoquetT,
plus_b: SoquetT,
sigma: SoquetT,
rot: SoquetT,
succ: SoquetT,
nu_eq_mp1: SoquetT,
theta: SoquetT,
s: SoquetT,
alt_mn: SoquetT,
alt_theta: SoquetT,
keep: SoquetT,
less_than: SoquetT,
extra_ctrl: SoquetT,
) -> Dict[str, 'SoquetT']:
# 1. Prepare THC uniform superposition over mu, nu. succ flags success.
mu, nu, succ, nu_eq_mp1, rot = bb.add(
UniformSuperpositionTHC(num_mu=self.num_mu, num_spin_orb=self.num_spin_orb),
mu=mu,
nu=nu,
succ=succ,
nu_eq_mp1=nu_eq_mp1,
rot=rot,
)
data_size = self.num_spin_orb // 2 + self.num_mu * (self.num_mu + 1) // 2
log_mu = self.num_mu.bit_length()
log_d = (data_size - 1).bit_length()
# 2. Make contiguous register from mu and nu and store in register `s`.
mu, nu, s = bb.add(ToContiguousIndex(log_mu, log_d), mu=mu, nu=nu, s=s)
# 3. Load alt / keep values
qroam = SelectSwapQROM(
*(self.theta, self.alt_theta, self.alt_mu, self.alt_nu, self.keep),
target_bitsizes=(1, 1, log_mu, log_mu, self.keep_bitsize),
)
alt_mu, alt_nu = alt_mn
s, theta, alt_theta, alt_mu, alt_nu, keep = bb.add(
qroam,
selection=s,
target0_=theta,
target1_=alt_theta,
target2_=alt_mu,
target3_=alt_nu,
target4_=keep,
)
sigma = bb.add(OnEach(self.keep_bitsize, Hadamard()), q=sigma)
lte_gate = LessThanEqual(self.keep_bitsize, self.keep_bitsize)
keep, sigma, less_than = bb.add(lte_gate, x=keep, y=sigma, target=less_than)
cz = CirqGateAsBloq(cirq.ControlledGate(cirq.Z))
alt_theta, less_than = bb.add(cz, q=[alt_theta, less_than])
cz = CirqGateAsBloq(cirq.ControlledGate(cirq.Z, control_values=(0,)))
# negative control on the less_than register
less_than, theta = bb.add(cz, q=[less_than, theta])
less_than, alt_mu, mu = bb.add(CSwap(bitsize=log_mu), ctrl=less_than, x=alt_mu, y=mu)
less_than, alt_nu, nu = bb.add(CSwap(bitsize=log_mu), ctrl=less_than, x=alt_nu, y=nu)
keep, sigma, less_than = bb.add(lte_gate, x=keep, y=sigma, target=less_than)
# delete the QROM
# Select expects three plus states so set them up here.
plus_a = bb.add(Hadamard(), q=plus_a)
plus_b = bb.add(Hadamard(), q=plus_b)
plus_mn = bb.add(Hadamard(), q=plus_mn)
ctrls = bb.join(np.array([nu_eq_mp1, plus_a]))
ctrls, extra_ctrl = bb.add(
MultiControlPauli(cvs=(0, 1), target_gate=cirq.X), controls=ctrls, target=extra_ctrl
)
nu_eq_mp1, plus_a = bb.split(ctrls)
extra_ctrl, mu, nu = bb.add(CSwap(bitsize=log_mu), ctrl=extra_ctrl, x=mu, y=nu)
out_regs = {
'mu': mu,
'nu': nu,
'plus_mn': plus_mn,
'plus_a': plus_a,
'plus_b': plus_b,
'sigma': sigma,
'rot': rot,
'succ': succ,
'nu_eq_mp1': nu_eq_mp1,
'theta': theta,
's': s,
'alt_mn': [alt_mu, alt_nu],
'alt_theta': alt_theta,
'keep': keep,
'less_than': less_than,
'extra_ctrl': extra_ctrl,
}
return out_regs
def build_call_graph(self, ssa: 'SympySymbolAllocator') -> Set['BloqCountT']:
cost_1 = (UniformSuperpositionTHC(self.num_mu, self.num_spin_orb), 1)
nmu = self.num_mu.bit_length()
data_size = self.num_spin_orb // 2 + self.num_mu * (self.num_mu + 1) // 2
nd = (data_size - 1).bit_length()
cost_2 = (ToContiguousIndex(nmu, nd), 1)
m = 2 * nmu + 2 + self.keep_bitsize
cost_3 = (QROAM(data_size, m, self.adjoint), 1)
cost_4 = (OnEach(self.keep_bitsize, Hadamard()), 1)
cost_5 = (LessThanEqual(self.keep_bitsize, self.keep_bitsize), 2)
cost_6 = (CSwap(nmu), 3)
cost_7 = (Toffoli(), 1)
return {cost_1, cost_2, cost_3, cost_4, cost_5, cost_6, cost_7}
@bloq_example
def _thc_uni() -> UniformSuperpositionTHC:
num_mu = 10
num_spin_orb = 4
thc_uni = UniformSuperpositionTHC(num_mu=num_mu, num_spin_orb=num_spin_orb)
return thc_uni
@bloq_example
def _thc_prep() -> PrepareTHC:
num_spat = 4
num_mu = 8
t_l = np.random.normal(0, 1, size=num_spat)
zeta = np.random.normal(0, 1, size=(num_mu, num_mu))
zeta = 0.5 * (zeta + zeta.T)
thc_prep = PrepareTHC.from_hamiltonian_coeffs(t_l, zeta, num_bits_state_prep=8)
return thc_prep
_THC_UNI_PREP = BloqDocSpec(
bloq_cls=UniformSuperpositionTHC,
import_line='from qualtran.bloqs.chemistry.thc.prepare import UniformSuperpositionTHC',
examples=(_thc_uni,),
)
_THC_PREPARE = BloqDocSpec(
bloq_cls=PrepareTHC,
import_line='from qualtran.bloqs.chemistry.thc.prepare import PrepareTHC',
examples=(_thc_prep,),
)
| [] |
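As a concrete check of the amplitude-amplification setup in UniformSuperpositionTHC above, the _thc_uni example parameters (num_mu=10, num_spin_orb=4) give data_size = 10*11/2 + 4/2 = 57 and a rotation angle of about 1.12 rad; the arithmetic below simply mirrors the data_size/angle lines in build_composite_bloq.

import numpy as np

num_mu, num_spin_orb = 10, 4
data_size = num_mu * (num_mu + 1) // 2 + num_spin_orb // 2        # 55 + 2 = 57
angle = np.arccos(1 - 2 ** np.floor(np.log2(data_size)) / data_size)
print(data_size, angle)                                           # 57, ~1.117 radians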
2024-01-10 | chrisyang922/CS | CASCS411~Project-ChatGPTToYouTube~gradio_interface.py | from openai import OpenAI
from googleapiclient.discovery import build
import gradio as gr
openai_args = {"api_key": ""}
sample_input = "What is the relationship between global warming and recycle?"
sample_output = "relatioship,global warming,recycle"
def get_completion(question, model="gpt-4-1106-preview"):
prompt1 = f""" \
The user is going to ask a question. \
Your job is to analyze the user's question, and answer with your own knowledge. \
If you do not know the answer to the question, you should be honest and say "I am not sure. Can you please restate the question more clearly?" \
The question is delimited by triple backticks
```{question}```
"""
try:
client = OpenAI(api_key=openai_args["api_key"])
messages1 = [{"role": "user", "content": prompt1}]
response1 = client.chat.completions.create(
model=model,
messages=messages1,
temperature=0, # this is the degree of randomness of the model's output
)
textual_explanation = response1.choices[0].message.content
except BaseException:
raise gr.Error("Can't connect to GPT. Please check your OpenAI's API key.")
return textual_explanation
def set_key(key):
openai_args["api_key"] = key
return
def search_youtube(question, model="gpt-4-1106-preview"):
prompt = f""" \
We are trying to extract key terms from the user's question, and use the extracted key terms for the input for Youtube API. \
Your job is to analyze the user's question, and extract key terms from the user's question. \
A key term does not have to be one word. For example, "generative AI" can be considered as one key term. \
You should extract at most five key terms in a list form, where each key term is separated by a comma. However, if you think that the question is too ambiguous, and it does not contain any key term, \
you should output "The question is unclear. Can you please restate the question more clearly?".
The question is delimited by triple backticks
```{question}```
Here is a sample input and output:
sample input: {sample_input}
sample output: {sample_output}
"""
try:
client = OpenAI(api_key=openai_args["api_key"])
messages = [{"role": "user", "content": prompt}]
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=0, # this is the degree of randomness of the model's output
)
key_terms = response.choices[0].message.content
except BaseException:
raise gr.Error("Can't connect to GPT. Please check your OpenAI's API key.")
    # The YouTube Data API expects a single query string, so join the extracted key terms.
    query = " ".join(term.strip() for term in key_terms.split(","))
youtube = build('youtube', 'v3', developerKey="AIzaSyAeMxumTanwSI19nKY4KXcfp3lJIcrdQWk")
request = youtube.search().list(q=query, part="snippet", type="video", maxResults=5)
response = request.execute()
# Process and format the response
formatted_results = []
for item in response.get("items", []):
video_title = item["snippet"]["title"]
video_description = item["snippet"]["description"]
video_url = f"https://www.youtube.com/watch?v={item['id']['videoId']}"
formatted_result = f"Title: {video_title}\nURL: {video_url}\nDescription: {video_description}\n\n"
formatted_results.append(formatted_result)
return "\n".join(formatted_results)
def run_gradio_interface():
with gr.Blocks() as gpt_to_youtube:
gr.Markdown("GPT to Youtube Learning")
with gr.Row():
with gr.Column(scale=3):
gr.Markdown(
"""
ChatGPT to Youtube Learning is a learning platform where \
users can ask questions by typing. This dual-source approach \
not only provides the learning experience for their query, \
but also offers both textual explanations and video content for a \
comprehensive understanding.
We employ the newest model available from OpenAI, 'gpt-4-1106-preview'. \
One disadvantage for this model is that it is free. \
With 'gpt-4-1106-preview' model, the pricing is $0.01/1,000 input tokens and \
$0.03/1,000 output tokens.
                    Before running the model, please submit your own OpenAI API key first. \
                    Your OpenAI API key can be generated from OpenAI's official site using your own account. \
                    We won't be able to access your key in any way.
"""
)
with gr.Column(scale=1):
api_key = gr.Textbox(
label = "api_key",
lines = 1,
type = "password",
placeholder = "Enter your API key here",
)
btn_key = gr.Button(value="Submit Key")
btn_key.click(set_key, inputs=api_key)
gr.Markdown(
"""
# Textual Explanation
Get started by inputting the question into the text box below.
<b>For example:</b><br/>
What is generative AI?
"""
)
with gr.Row():
with gr.Column():
question_box = gr.Textbox(
label="question",
placeholder="Please put your question here",
lines=15,
max_lines=15,
)
with gr.Column():
output_box = gr.Textbox(
value="",
label="Textual explanation",
lines=15,
max_lines=15,
interactive=False,
)
btn = gr.Button(value="Submit")
btn.click(
get_completion,
inputs=question_box,
outputs=[output_box],
)
with gr.Row():
with gr.Column(scale=1):
gr.Markdown(
"""
# Related Youtube Contents
Using Youtube's official API, we provide at most 5 related video contents from Youtube.
For each video content, we provide three details: a title, URL, and description.
You can directly put the generated url as the address, and you will be able to see the content.
Please click the button below to see related Youtube Contents.
"""
)
with gr.Column(scale=3):
youtube_box = gr.Textbox(
value="",
label="Youtube contents",
lines=15,
max_lines=15,
interactive=False,
)
btn_youtube = gr.Button(value="Retrieve Youtube Contents")
btn_youtube.click(
search_youtube,
inputs=question_box,
outputs=[youtube_box],
)
return gpt_to_youtube
| [
" We are trying to extract key terms from the user's question, and use the extracted key terms for the input for Youtube API. Your job is to analyze the user's question, and extract key terms from the user's question. A key term does not have to be one word. For example, \"generative AI\" can be considered as one key term. You should extract at most five key terms in a list form, where each key term is separated by a comma. However, if you think that the question is too ambiguous, and it does not contain any key term, you should output \"The question is unclear. Can you please restate the question more clearly?\".\n\n The question is delimited by triple backticks\n ```PLACEHOLDER```\n\n Here is a sample input and output:\n sample input: What is the relationship between global warming and recycle?\n sample output: relatioship,global warming,recycle\n ",
" We are trying to extract key terms from the user's question, and use the extracted key terms for the input for Youtube API. Your job is to analyze the user's question, and extract key terms from the user's question. A key term does not have to be one word. For example, \"generative AI\" can be considered as one key term. You should extract at most five key terms in a list form, where each key term is separated by a comma. However, if you think that the question is too ambiguous, and it does not contain any key term, you should output \"The question is unclear. Can you please restate the question more clearly?\".\n\n The question is delimited by triple backticks\n ```PLACEHOLDER```\n\n Here is a sample input and output:\n sample input: sample_input0ce5f985-22d8-4971-97ac-3b3fd99537a1\n sample output: relatioship,global warming,recycle\n ",
" The user is going to ask a question. Your job is to analyze the user's question, and answer with your own knowledge. If you do not know the answer to the question, you should be honest and say \"I am not sure. Can you please restate the question more clearly?\" \n The question is delimited by triple backticks\n ```PLACEHOLDER```\n "
] |
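The formatting loop in search_youtube above only needs the snippet title, snippet description and videoId of each result; a small offline sketch with a fabricated response shows the same shaping without calling the API (the video data here is invented).

fake_response = {
    "items": [
        {
            "id": {"videoId": "abc123"},
            "snippet": {"title": "Intro to Generative AI", "description": "A short overview."},
        }
    ]
}
formatted = [
    f"Title: {item['snippet']['title']}\n"
    f"URL: https://www.youtube.com/watch?v={item['id']['videoId']}\n"
    f"Description: {item['snippet']['description']}\n"
    for item in fake_response.get("items", [])
]
print("\n".join(formatted))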
2024-01-10 | JonathanCai/langchain | libs~experimental~langchain_experimental~sql~vector_sql.py | """Vector SQL Database Chain Retriever"""
from __future__ import annotations
from typing import Any, Dict, List, Optional, Union
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.llm import LLMChain
from langchain.chains.sql_database.prompt import PROMPT, SQL_PROMPTS
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import BaseOutputParser, BasePromptTemplate
from langchain.schema.embeddings import Embeddings
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.sql_database.prompt import QUERY_CHECKER
from langchain.utilities.sql_database import SQLDatabase
from langchain_experimental.sql.base import INTERMEDIATE_STEPS_KEY, SQLDatabaseChain
class VectorSQLOutputParser(BaseOutputParser[str]):
"""Output Parser for Vector SQL
1. finds for `NeuralArray()` and replace it with the embedding
2. finds for `DISTANCE()` and replace it with the distance name in backend SQL
"""
model: Embeddings
"""Embedding model to extract embedding for entity"""
distance_func_name: str = "distance"
"""Distance name for Vector SQL"""
class Config:
        arbitrary_types_allowed = True
@property
def _type(self) -> str:
return "vector_sql_parser"
@classmethod
def from_embeddings(
cls, model: Embeddings, distance_func_name: str = "distance", **kwargs: Any
) -> BaseOutputParser:
return cls(model=model, distance_func_name=distance_func_name, **kwargs)
def parse(self, text: str) -> str:
text = text.strip()
start = text.find("NeuralArray(")
_sql_str_compl = text
if start > 0:
_matched = text[text.find("NeuralArray(") + len("NeuralArray(") :]
end = _matched.find(")") + start + len("NeuralArray(") + 1
entity = _matched[: _matched.find(")")]
vecs = self.model.embed_query(entity)
vecs_str = "[" + ",".join(map(str, vecs)) + "]"
_sql_str_compl = text.replace("DISTANCE", self.distance_func_name).replace(
text[start:end], vecs_str
)
if _sql_str_compl[-1] == ";":
_sql_str_compl = _sql_str_compl[:-1]
return _sql_str_compl
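# Hedged usage sketch (added for illustration; not part of the original module).
# It shows the substitution described in the class docstring above. `model` is
# assumed to be any `Embeddings` implementation; the query text is hypothetical.
def _example_vector_sql_parse(model: Embeddings) -> str:
    parser = VectorSQLOutputParser.from_embeddings(model=model)
    # "NeuralArray(cat food)" is replaced by the embedding of "cat food", and
    # "DISTANCE" by the parser's distance function name; a trailing ";" is dropped.
    return parser.parse(
        "SELECT id FROM docs ORDER BY DISTANCE(vector, NeuralArray(cat food)) LIMIT 3;"
    )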
class VectorSQLRetrieveAllOutputParser(VectorSQLOutputParser):
"""Based on VectorSQLOutputParser
It also modify the SQL to get all columns
"""
@property
def _type(self) -> str:
return "vector_sql_retrieve_all_parser"
def parse(self, text: str) -> str:
text = text.strip()
start = text.upper().find("SELECT")
if start >= 0:
end = text.upper().find("FROM")
text = text.replace(text[start + len("SELECT") + 1 : end - 1], "*")
return super().parse(text)
def get_result_from_sqldb(
db: SQLDatabase, cmd: str
) -> Union[str, List[Dict[str, Any]], Dict[str, Any]]:
result = db._execute(cmd, fetch="all") # type: ignore
return result
class VectorSQLDatabaseChain(SQLDatabaseChain):
"""Chain for interacting with Vector SQL Database.
Example:
.. code-block:: python
            from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain, VectorSQLOutputParser
            from langchain.llms import OpenAI
            from langchain.utilities.sql_database import SQLDatabase
            from langchain.embeddings import OpenAIEmbeddings
            db = SQLDatabase(...)
            parser = VectorSQLOutputParser.from_embeddings(OpenAIEmbeddings())
            db_chain = VectorSQLDatabaseChain.from_llm(OpenAI(), db, sql_cmd_parser=parser)
*Security note*: Make sure that the database connection uses credentials
that are narrowly-scoped to only include the permissions this chain needs.
Failure to do so may result in data corruption or loss, since this chain may
attempt commands like `DROP TABLE` or `INSERT` if appropriately prompted.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this chain.
This issue shows an example negative outcome if these steps are not taken:
https://github.com/langchain-ai/langchain/issues/5923
"""
sql_cmd_parser: VectorSQLOutputParser
"""Parser for Vector SQL"""
native_format: bool = False
"""If return_direct, controls whether to return in python native format"""
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
input_text = f"{inputs[self.input_key]}\nSQLQuery:"
_run_manager.on_text(input_text, verbose=self.verbose)
# If not present, then defaults to None which is all tables.
table_names_to_use = inputs.get("table_names_to_use")
table_info = self.database.get_table_info(table_names=table_names_to_use)
llm_inputs = {
"input": input_text,
"top_k": str(self.top_k),
"dialect": self.database.dialect,
"table_info": table_info,
"stop": ["\nSQLResult:"],
}
intermediate_steps: List = []
try:
intermediate_steps.append(llm_inputs) # input: sql generation
llm_out = self.llm_chain.predict(
callbacks=_run_manager.get_child(),
**llm_inputs,
)
sql_cmd = self.sql_cmd_parser.parse(llm_out)
if self.return_sql:
return {self.output_key: sql_cmd}
if not self.use_query_checker:
_run_manager.on_text(llm_out, color="green", verbose=self.verbose)
intermediate_steps.append(
llm_out
) # output: sql generation (no checker)
intermediate_steps.append({"sql_cmd": llm_out}) # input: sql exec
result = get_result_from_sqldb(self.database, sql_cmd)
intermediate_steps.append(str(result)) # output: sql exec
else:
query_checker_prompt = self.query_checker_prompt or PromptTemplate(
template=QUERY_CHECKER, input_variables=["query", "dialect"]
)
query_checker_chain = LLMChain(
llm=self.llm_chain.llm,
prompt=query_checker_prompt,
output_parser=self.llm_chain.output_parser,
)
query_checker_inputs = {
"query": llm_out,
"dialect": self.database.dialect,
}
checked_llm_out = query_checker_chain.predict(
callbacks=_run_manager.get_child(), **query_checker_inputs
)
checked_sql_command = self.sql_cmd_parser.parse(checked_llm_out)
intermediate_steps.append(
checked_llm_out
) # output: sql generation (checker)
_run_manager.on_text(
checked_llm_out, color="green", verbose=self.verbose
)
intermediate_steps.append(
{"sql_cmd": checked_llm_out}
) # input: sql exec
result = get_result_from_sqldb(self.database, checked_sql_command)
intermediate_steps.append(str(result)) # output: sql exec
llm_out = checked_llm_out
sql_cmd = checked_sql_command
_run_manager.on_text("\nSQLResult: ", verbose=self.verbose)
_run_manager.on_text(str(result), color="yellow", verbose=self.verbose)
# If return direct, we just set the final result equal to
# the result of the sql query result, otherwise try to get a human readable
# final answer
if self.return_direct:
final_result = result
else:
_run_manager.on_text("\nAnswer:", verbose=self.verbose)
input_text += f"{llm_out}\nSQLResult: {result}\nAnswer:"
llm_inputs["input"] = input_text
intermediate_steps.append(llm_inputs) # input: final answer
final_result = self.llm_chain.predict(
callbacks=_run_manager.get_child(),
**llm_inputs,
).strip()
intermediate_steps.append(final_result) # output: final answer
_run_manager.on_text(final_result, color="green", verbose=self.verbose)
chain_result: Dict[str, Any] = {self.output_key: final_result}
if self.return_intermediate_steps:
chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps
return chain_result
except Exception as exc:
# Append intermediate steps to exception, to aid in logging and later
# improvement of few shot prompt seeds
exc.intermediate_steps = intermediate_steps # type: ignore
raise exc
@property
def _chain_type(self) -> str:
return "vector_sql_database_chain"
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
db: SQLDatabase,
prompt: Optional[BasePromptTemplate] = None,
sql_cmd_parser: Optional[VectorSQLOutputParser] = None,
**kwargs: Any,
) -> VectorSQLDatabaseChain:
assert sql_cmd_parser, "`sql_cmd_parser` must be set in VectorSQLDatabaseChain."
prompt = prompt or SQL_PROMPTS.get(db.dialect, PROMPT)
llm_chain = LLMChain(llm=llm, prompt=prompt)
return cls(
llm_chain=llm_chain, database=db, sql_cmd_parser=sql_cmd_parser, **kwargs
)
| [] |
2024-01-10 | JonathanCai/langchain | libs~experimental~langchain_experimental~comprehend_moderation~pii.py | import asyncio
from typing import Any, Dict, Optional
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
ModerationPiiError,
)
class ComprehendPII:
def __init__(
self,
client: Any,
callback: Optional[Any] = None,
unique_id: Optional[str] = None,
chain_id: Optional[str] = None,
) -> None:
self.client = client
self.moderation_beacon = {
"moderation_chain_id": chain_id,
"moderation_type": "PII",
"moderation_status": "LABELS_NOT_FOUND",
}
self.callback = callback
self.unique_id = unique_id
def validate(self, prompt_value: str, config: Any = None) -> str:
redact = config.get("redact")
return (
self._detect_pii(prompt_value=prompt_value, config=config)
if redact
else self._contains_pii(prompt_value=prompt_value, config=config)
)
def _contains_pii(self, prompt_value: str, config: Any = None) -> str:
"""
Checks for Personally Identifiable Information (PII) labels above a
specified threshold. Uses Amazon Comprehend Contains PII Entities API. See -
https://docs.aws.amazon.com/comprehend/latest/APIReference/API_ContainsPiiEntities.html
Args:
prompt_value (str): The input text to be checked for PII labels.
config (Dict[str, Any]): Configuration for PII check and actions.
Returns:
str: the original prompt
Note:
- The provided client should be initialized with valid AWS credentials.
"""
pii_identified = self.client.contains_pii_entities(
Text=prompt_value, LanguageCode="en"
)
if self.callback and self.callback.pii_callback:
self.moderation_beacon["moderation_input"] = prompt_value
self.moderation_beacon["moderation_output"] = pii_identified
threshold = config.get("threshold")
pii_labels = config.get("labels")
pii_found = False
for entity in pii_identified["Labels"]:
if (entity["Score"] >= threshold and entity["Name"] in pii_labels) or (
entity["Score"] >= threshold and not pii_labels
):
pii_found = True
break
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
if pii_found:
raise ModerationPiiError
return prompt_value
def _detect_pii(self, prompt_value: str, config: Optional[Dict[str, Any]]) -> str:
"""
Detects and handles Personally Identifiable Information (PII) entities in the
given prompt text using Amazon Comprehend's detect_pii_entities API. The
function provides options to redact or stop processing based on the identified
PII entities and a provided configuration. Uses Amazon Comprehend Detect PII
Entities API.
Args:
prompt_value (str): The input text to be checked for PII entities.
config (Dict[str, Any]): A configuration specifying how to handle
PII entities.
Returns:
str: The processed prompt text with redacted PII entities or raised
exceptions.
Raises:
ValueError: If the prompt contains configured PII entities for
stopping processing.
Note:
- If PII is not found in the prompt, the original prompt is returned.
- The client should be initialized with valid AWS credentials.
"""
pii_identified = self.client.detect_pii_entities(
Text=prompt_value, LanguageCode="en"
)
if self.callback and self.callback.pii_callback:
self.moderation_beacon["moderation_input"] = prompt_value
self.moderation_beacon["moderation_output"] = pii_identified
if (pii_identified["Entities"]) == []:
if self.callback and self.callback.pii_callback:
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
return prompt_value
pii_found = False
if not config and pii_identified["Entities"]:
for entity in pii_identified["Entities"]:
if entity["Score"] >= 0.5:
pii_found = True
break
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
if pii_found:
raise ModerationPiiError
else:
threshold = config.get("threshold") # type: ignore
pii_labels = config.get("labels") # type: ignore
mask_marker = config.get("mask_character") # type: ignore
pii_found = False
for entity in pii_identified["Entities"]:
if (
pii_labels
and entity["Type"] in pii_labels
and entity["Score"] >= threshold
) or (not pii_labels and entity["Score"] >= threshold):
pii_found = True
char_offset_begin = entity["BeginOffset"]
char_offset_end = entity["EndOffset"]
mask_length = char_offset_end - char_offset_begin + 1
masked_part = mask_marker * mask_length
prompt_value = (
prompt_value[:char_offset_begin]
+ masked_part
+ prompt_value[char_offset_end + 1 :]
)
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
return prompt_value
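# Hedged configuration sketch (added for illustration; not part of the original
# module). The keys mirror what `validate` / `_detect_pii` read; `comprehend_client`
# is assumed to be a boto3 "comprehend" client, and the entity label is an assumption.
def _example_redact_emails(comprehend_client: Any, text: str) -> str:
    pii = ComprehendPII(client=comprehend_client)
    config = {
        "redact": True,           # route validate() to the redaction path
        "threshold": 0.5,         # minimum entity confidence score
        "labels": ["EMAIL"],      # empty/None means "any PII entity type"
        "mask_character": "*",    # character used to mask matched spans
    }
    return pii.validate(prompt_value=text, config=config)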
| [] |
2024-01-10 | JonathanCai/langchain | libs~langchain~langchain~vectorstores~opensearch_vector_search.py | from __future__ import annotations
import uuid
import warnings
from typing import Any, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain.schema import Document
from langchain.schema.embeddings import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
IMPORT_OPENSEARCH_PY_ERROR = (
"Could not import OpenSearch. Please install it with `pip install opensearch-py`."
)
SCRIPT_SCORING_SEARCH = "script_scoring"
PAINLESS_SCRIPTING_SEARCH = "painless_scripting"
MATCH_ALL_QUERY = {"match_all": {}} # type: Dict
def _import_opensearch() -> Any:
"""Import OpenSearch if available, otherwise raise error."""
try:
from opensearchpy import OpenSearch
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
return OpenSearch
def _import_bulk() -> Any:
"""Import bulk if available, otherwise raise error."""
try:
from opensearchpy.helpers import bulk
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
return bulk
def _import_not_found_error() -> Any:
"""Import not found error if available, otherwise raise error."""
try:
from opensearchpy.exceptions import NotFoundError
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
return NotFoundError
def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
"""Get OpenSearch client from the opensearch_url, otherwise raise error."""
try:
opensearch = _import_opensearch()
client = opensearch(opensearch_url, **kwargs)
except ValueError as e:
raise ImportError(
f"OpenSearch client string provided is not in proper format. "
f"Got error: {e} "
)
return client
def _validate_embeddings_and_bulk_size(embeddings_length: int, bulk_size: int) -> None:
"""Validate Embeddings Length and Bulk Size."""
if embeddings_length == 0:
raise RuntimeError("Embeddings size is zero")
if bulk_size < embeddings_length:
raise RuntimeError(
f"The embeddings count, {embeddings_length} is more than the "
f"[bulk_size], {bulk_size}. Increase the value of [bulk_size]."
)
def _validate_aoss_with_engines(is_aoss: bool, engine: str) -> None:
"""Validate AOSS with the engine."""
if is_aoss and engine != "nmslib" and engine != "faiss":
raise ValueError(
"Amazon OpenSearch Service Serverless only "
"supports `nmslib` or `faiss` engines"
)
def _is_aoss_enabled(http_auth: Any) -> bool:
"""Check if the service is http_auth is set as `aoss`."""
if (
http_auth is not None
and hasattr(http_auth, "service")
and http_auth.service == "aoss"
):
return True
return False
def _bulk_ingest_embeddings(
client: Any,
index_name: str,
embeddings: List[List[float]],
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
vector_field: str = "vector_field",
text_field: str = "text",
mapping: Optional[Dict] = None,
max_chunk_bytes: Optional[int] = 1 * 1024 * 1024,
is_aoss: bool = False,
) -> List[str]:
"""Bulk Ingest Embeddings into given index."""
if not mapping:
mapping = dict()
bulk = _import_bulk()
not_found_error = _import_not_found_error()
requests = []
return_ids = []
try:
client.indices.get(index=index_name)
except not_found_error:
client.indices.create(index=index_name, body=mapping)
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
_id = ids[i] if ids else str(uuid.uuid4())
request = {
"_op_type": "index",
"_index": index_name,
vector_field: embeddings[i],
text_field: text,
"metadata": metadata,
}
if is_aoss:
request["id"] = _id
else:
request["_id"] = _id
requests.append(request)
return_ids.append(_id)
bulk(client, requests, max_chunk_bytes=max_chunk_bytes)
if not is_aoss:
client.indices.refresh(index=index_name)
return return_ids
def _default_scripting_text_mapping(
dim: int,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting or Script Scoring,the default mapping to create index."""
return {
"mappings": {
"properties": {
vector_field: {"type": "knn_vector", "dimension": dim},
}
}
}
def _default_text_mapping(
dim: int,
engine: str = "nmslib",
space_type: str = "l2",
ef_search: int = 512,
ef_construction: int = 512,
m: int = 16,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default mapping to create index."""
return {
"settings": {"index": {"knn": True, "knn.algo_param.ef_search": ef_search}},
"mappings": {
"properties": {
vector_field: {
"type": "knn_vector",
"dimension": dim,
"method": {
"name": "hnsw",
"space_type": space_type,
"engine": engine,
"parameters": {"ef_construction": ef_construction, "m": m},
},
}
}
},
}
def _default_approximate_search_query(
query_vector: List[float],
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, this is the default query."""
return {
"size": k,
"query": {"knn": {vector_field: {"vector": query_vector, "k": k}}},
}
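# Hedged illustration (added for clarity; not part of the original module):
# the default approximate k-NN body produced for a 2-dimensional query vector.
def _example_default_knn_query_body() -> Dict:
    # Expected value:
    # {"size": 2, "query": {"knn": {"vector_field": {"vector": [0.1, 0.2], "k": 2}}}}
    return _default_approximate_search_query([0.1, 0.2], k=2)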
def _approximate_search_query_with_boolean_filter(
query_vector: List[float],
boolean_filter: Dict,
k: int = 4,
vector_field: str = "vector_field",
subquery_clause: str = "must",
) -> Dict:
"""For Approximate k-NN Search, with Boolean Filter."""
return {
"size": k,
"query": {
"bool": {
"filter": boolean_filter,
subquery_clause: [
{"knn": {vector_field: {"vector": query_vector, "k": k}}}
],
}
},
}
def _approximate_search_query_with_efficient_filter(
query_vector: List[float],
efficient_filter: Dict,
k: int = 4,
vector_field: str = "vector_field",
) -> Dict:
"""For Approximate k-NN Search, with Efficient Filter for Lucene and
Faiss Engines."""
search_query = _default_approximate_search_query(
query_vector, k=k, vector_field=vector_field
)
search_query["query"]["knn"][vector_field]["filter"] = efficient_filter
return search_query
def _default_script_query(
query_vector: List[float],
k: int = 4,
space_type: str = "l2",
pre_filter: Optional[Dict] = None,
vector_field: str = "vector_field",
) -> Dict:
"""For Script Scoring Search, this is the default query."""
if not pre_filter:
pre_filter = MATCH_ALL_QUERY
return {
"size": k,
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": "knn_score",
"lang": "knn",
"params": {
"field": vector_field,
"query_value": query_vector,
"space_type": space_type,
},
},
}
},
}
def __get_painless_scripting_source(
space_type: str, vector_field: str = "vector_field"
) -> str:
"""For Painless Scripting, it returns the script source based on space type."""
source_value = (
"(1.0 + " + space_type + "(params.query_value, doc['" + vector_field + "']))"
)
if space_type == "cosineSimilarity":
return source_value
else:
return "1/" + source_value
def _default_painless_scripting_query(
query_vector: List[float],
k: int = 4,
space_type: str = "l2Squared",
pre_filter: Optional[Dict] = None,
vector_field: str = "vector_field",
) -> Dict:
"""For Painless Scripting Search, this is the default query."""
if not pre_filter:
pre_filter = MATCH_ALL_QUERY
source = __get_painless_scripting_source(space_type, vector_field=vector_field)
return {
"size": k,
"query": {
"script_score": {
"query": pre_filter,
"script": {
"source": source,
"params": {
"field": vector_field,
"query_value": query_vector,
},
},
}
},
}
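# Hedged illustration (added for clarity; not part of the original module):
# script sources produced for two representative space types. cosineSimilarity
# is used as-is; distance-style space types are inverted so that larger scores
# mean closer vectors.
def _example_painless_sources() -> Dict[str, str]:
    return {
        "cosineSimilarity": __get_painless_scripting_source("cosineSimilarity"),
        "l2Squared": __get_painless_scripting_source("l2Squared"),
    }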
def _get_kwargs_value(kwargs: Any, key: str, default_value: Any) -> Any:
"""Get the value of the key if present. Else get the default_value."""
if key in kwargs:
return kwargs.get(key)
return default_value
class OpenSearchVectorSearch(VectorStore):
"""`Amazon OpenSearch Vector Engine` vector store.
Example:
.. code-block:: python
from langchain.vectorstores import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
"""
def __init__(
self,
opensearch_url: str,
index_name: str,
embedding_function: Embeddings,
**kwargs: Any,
):
"""Initialize with necessary components."""
self.embedding_function = embedding_function
self.index_name = index_name
http_auth = _get_kwargs_value(kwargs, "http_auth", None)
self.is_aoss = _is_aoss_enabled(http_auth=http_auth)
self.client = _get_opensearch_client(opensearch_url, **kwargs)
@property
def embeddings(self) -> Embeddings:
return self.embedding_function
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
bulk_size: int = 500,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
bulk_size: Bulk API request count; Default: 500
Returns:
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
"""
embeddings = self.embedding_function.embed_documents(list(texts))
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
index_name = _get_kwargs_value(kwargs, "index_name", self.index_name)
text_field = _get_kwargs_value(kwargs, "text_field", "text")
dim = len(embeddings[0])
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
max_chunk_bytes = _get_kwargs_value(kwargs, "max_chunk_bytes", 1 * 1024 * 1024)
_validate_aoss_with_engines(self.is_aoss, engine)
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
return _bulk_ingest_embeddings(
self.client,
index_name,
embeddings,
texts,
metadatas=metadatas,
ids=ids,
vector_field=vector_field,
text_field=text_field,
mapping=mapping,
max_chunk_bytes=max_chunk_bytes,
is_aoss=self.is_aoss,
)
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
metadata_field: Document field that metadata is stored in. Defaults to
"metadata".
Can be set to a special value "*" to include the entire document.
Optional Args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
            boolean_filter: A Boolean filter is a post filter that consists of a
                Boolean query containing a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: "must"
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering. (deprecated, use `efficient_filter`)
efficient_filter: the Lucene Engine or Faiss Engine decides whether to
perform an exact k-NN search with pre-filtering or an approximate search
with modified post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
"""
docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)
return [doc[0] for doc in docs_with_scores]
def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""Return docs and it's scores most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
            List of Documents along with their scores, most similar to the query.
Optional Args:
same as `similarity_search`
"""
text_field = _get_kwargs_value(kwargs, "text_field", "text")
metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")
hits = self._raw_similarity_search_with_score(query=query, k=k, **kwargs)
documents_with_scores = [
(
Document(
page_content=hit["_source"][text_field],
metadata=hit["_source"]
if metadata_field == "*" or metadata_field not in hit["_source"]
else hit["_source"][metadata_field],
),
hit["_score"],
)
for hit in hits
]
return documents_with_scores
def _raw_similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[dict]:
"""Return raw opensearch documents (dict) including vectors,
scores most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
            List of dicts with their scores, most similar to the query.
Optional Args:
same as `similarity_search`
"""
embedding = self.embedding_function.embed_query(query)
search_type = _get_kwargs_value(kwargs, "search_type", "approximate_search")
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
index_name = _get_kwargs_value(kwargs, "index_name", self.index_name)
if (
self.is_aoss
and search_type != "approximate_search"
and search_type != SCRIPT_SCORING_SEARCH
):
raise ValueError(
"Amazon OpenSearch Service Serverless only "
"supports `approximate_search` and `script_scoring`"
)
if search_type == "approximate_search":
boolean_filter = _get_kwargs_value(kwargs, "boolean_filter", {})
subquery_clause = _get_kwargs_value(kwargs, "subquery_clause", "must")
efficient_filter = _get_kwargs_value(kwargs, "efficient_filter", {})
# `lucene_filter` is deprecated, added for Backwards Compatibility
lucene_filter = _get_kwargs_value(kwargs, "lucene_filter", {})
if boolean_filter != {} and efficient_filter != {}:
raise ValueError(
"Both `boolean_filter` and `efficient_filter` are provided which "
"is invalid"
)
if lucene_filter != {} and efficient_filter != {}:
raise ValueError(
"Both `lucene_filter` and `efficient_filter` are provided which "
"is invalid. `lucene_filter` is deprecated"
)
if lucene_filter != {} and boolean_filter != {}:
raise ValueError(
"Both `lucene_filter` and `boolean_filter` are provided which "
"is invalid. `lucene_filter` is deprecated"
)
if boolean_filter != {}:
search_query = _approximate_search_query_with_boolean_filter(
embedding,
boolean_filter,
k=k,
vector_field=vector_field,
subquery_clause=subquery_clause,
)
elif efficient_filter != {}:
search_query = _approximate_search_query_with_efficient_filter(
embedding, efficient_filter, k=k, vector_field=vector_field
)
elif lucene_filter != {}:
warnings.warn(
"`lucene_filter` is deprecated. Please use the keyword argument"
" `efficient_filter`"
)
search_query = _approximate_search_query_with_efficient_filter(
embedding, lucene_filter, k=k, vector_field=vector_field
)
else:
search_query = _default_approximate_search_query(
embedding, k=k, vector_field=vector_field
)
elif search_type == SCRIPT_SCORING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_script_query(
embedding, k, space_type, pre_filter, vector_field
)
elif search_type == PAINLESS_SCRIPTING_SEARCH:
space_type = _get_kwargs_value(kwargs, "space_type", "l2Squared")
pre_filter = _get_kwargs_value(kwargs, "pre_filter", MATCH_ALL_QUERY)
search_query = _default_painless_scripting_query(
embedding, k, space_type, pre_filter, vector_field
)
else:
raise ValueError("Invalid `search_type` provided as an argument")
response = self.client.search(index=index_name, body=search_query)
return [hit for hit in response["hits"]["hits"]]
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
    ) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
metadata_field = _get_kwargs_value(kwargs, "metadata_field", "metadata")
# Get embedding of the user query
embedding = self.embedding_function.embed_query(query)
# Do ANN/KNN search to get top fetch_k results where fetch_k >= k
results = self._raw_similarity_search_with_score(query, fetch_k, **kwargs)
embeddings = [result["_source"][vector_field] for result in results]
# Rerank top k results using MMR, (mmr_selected is a list of indices)
mmr_selected = maximal_marginal_relevance(
np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
)
return [
Document(
page_content=results[i]["_source"][text_field],
metadata=results[i]["_source"][metadata_field],
)
for i in mmr_selected
]
@classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
bulk_size: int = 500,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> OpenSearchVectorSearch:
"""Construct OpenSearchVectorSearch wrapper from raw documents.
Example:
.. code-block:: python
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
        OpenSearch by default supports Approximate Search powered by the nmslib,
        faiss, and lucene engines, which is recommended for large datasets. It also
        supports brute-force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
"vector_field".
text_field: Document field the text of the document is stored in. Defaults
to "text".
Optional Keyword Args for Approximate Search:
engine: "nmslib", "faiss", "lucene"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
"""
opensearch_url = get_from_dict_or_env(
kwargs, "opensearch_url", "OPENSEARCH_URL"
)
# List of arguments that needs to be removed from kwargs
# before passing kwargs to get opensearch client
keys_list = [
"opensearch_url",
"index_name",
"is_appx_search",
"vector_field",
"text_field",
"engine",
"space_type",
"ef_search",
"ef_construction",
"m",
"max_chunk_bytes",
"is_aoss",
]
embeddings = embedding.embed_documents(texts)
_validate_embeddings_and_bulk_size(len(embeddings), bulk_size)
dim = len(embeddings[0])
# Get the index name from either from kwargs or ENV Variable
# before falling back to random generation
index_name = get_from_dict_or_env(
kwargs, "index_name", "OPENSEARCH_INDEX_NAME", default=uuid.uuid4().hex
)
is_appx_search = _get_kwargs_value(kwargs, "is_appx_search", True)
vector_field = _get_kwargs_value(kwargs, "vector_field", "vector_field")
text_field = _get_kwargs_value(kwargs, "text_field", "text")
max_chunk_bytes = _get_kwargs_value(kwargs, "max_chunk_bytes", 1 * 1024 * 1024)
http_auth = _get_kwargs_value(kwargs, "http_auth", None)
is_aoss = _is_aoss_enabled(http_auth=http_auth)
if is_aoss and not is_appx_search:
raise ValueError(
"Amazon OpenSearch Service Serverless only "
"supports `approximate_search`"
)
if is_appx_search:
engine = _get_kwargs_value(kwargs, "engine", "nmslib")
space_type = _get_kwargs_value(kwargs, "space_type", "l2")
ef_search = _get_kwargs_value(kwargs, "ef_search", 512)
ef_construction = _get_kwargs_value(kwargs, "ef_construction", 512)
m = _get_kwargs_value(kwargs, "m", 16)
_validate_aoss_with_engines(is_aoss, engine)
mapping = _default_text_mapping(
dim, engine, space_type, ef_search, ef_construction, m, vector_field
)
else:
mapping = _default_scripting_text_mapping(dim)
[kwargs.pop(key, None) for key in keys_list]
client = _get_opensearch_client(opensearch_url, **kwargs)
_bulk_ingest_embeddings(
client,
index_name,
embeddings,
texts,
metadatas=metadatas,
ids=ids,
vector_field=vector_field,
text_field=text_field,
mapping=mapping,
max_chunk_bytes=max_chunk_bytes,
is_aoss=is_aoss,
)
return cls(opensearch_url, index_name, embedding, **kwargs)
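# Hedged usage sketch (added for illustration; not part of the original module).
# It ties together `from_texts` and the optional search arguments documented in
# `similarity_search`. The URL, index name, and filter below are hypothetical
# placeholders, and `embeddings` is assumed to be any `Embeddings` implementation.
def _example_opensearch_usage(texts: List[str], embeddings: Embeddings) -> List[Document]:
    store = OpenSearchVectorSearch.from_texts(
        texts,
        embeddings,
        opensearch_url="http://localhost:9200",  # assumed local cluster
        index_name="example-index",              # hypothetical index name
        engine="nmslib",
        space_type="l2",
    )
    # Approximate k-NN search with a post Boolean filter on a metadata field.
    return store.similarity_search(
        "example query",
        k=4,
        search_type="approximate_search",
        boolean_filter={"term": {"metadata.category": "docs"}},  # hypothetical field
    )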
| [] |
2024-01-10 | JonathanCai/langchain | libs~langchain~tests~integration_tests~vectorstores~test_xata.py | """Test Xata vector store functionality.
Before running this test, please create a Xata database by following
the instructions from:
https://python.langchain.com/docs/integrations/vectorstores/xata
"""
import os
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.xata import XataVectorStore
class TestXata:
@classmethod
def setup_class(cls) -> None:
assert os.getenv("XATA_API_KEY"), "XATA_API_KEY environment variable is not set"
assert os.getenv("XATA_DB_URL"), "XATA_DB_URL environment variable is not set"
def test_similarity_search_without_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end constructions and search without metadata."""
texts = ["foo", "bar", "baz"]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
docsearch.delete(delete_all=True)
def test_similarity_search_with_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with a metadata filter.
This test requires a column named "a" of type integer to be present
in the Xata table."""
texts = ["foo", "foo", "foo"]
metadatas = [{"a": i} for i in range(len(texts))]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
metadatas=metadatas,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1, filter={"a": 1})
assert output == [Document(page_content="foo", metadata={"a": 1})]
docsearch.delete(delete_all=True)
| [] |
2024-01-10 | JonathanCai/langchain | libs~experimental~langchain_experimental~comprehend_moderation~toxicity.py | import asyncio
import importlib
from typing import Any, List, Optional
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
ModerationToxicityError,
)
class ComprehendToxicity:
def __init__(
self,
client: Any,
callback: Optional[Any] = None,
unique_id: Optional[str] = None,
chain_id: Optional[str] = None,
) -> None:
self.client = client
self.moderation_beacon = {
"moderation_chain_id": chain_id,
"moderation_type": "Toxicity",
"moderation_status": "LABELS_NOT_FOUND",
}
self.callback = callback
self.unique_id = unique_id
def _toxicity_init_validate(self, max_size: int) -> Any:
"""
Validate and initialize toxicity processing configuration.
Args:
max_size (int): Maximum sentence size defined in the
configuration object.
Raises:
Exception: If the maximum sentence size exceeds the 5KB limit.
Note:
This function ensures that the NLTK punkt tokenizer is downloaded
if not already present.
        Returns:
            The imported nltk module.
"""
if max_size > 1024 * 5:
raise Exception("The sentence length should not exceed 5KB.")
try:
nltk = importlib.import_module("nltk")
nltk.data.find("tokenizers/punkt")
return nltk
except ImportError:
raise ModuleNotFoundError(
"Could not import nltk python package. "
"Please install it with `pip install nltk`."
)
        except LookupError:
            nltk.download("punkt")
            return nltk
def _split_paragraph(
self, prompt_value: str, max_size: int = 1024 * 4
) -> List[List[str]]:
"""
Split a paragraph into chunks of sentences, respecting the maximum size limit.
Args:
            prompt_value (str): The input paragraph to be split into chunks.
            max_size (int, optional): The maximum size limit in bytes for
                each chunk. Defaults to 1024 * 4.
Returns:
List[List[str]]: A list of chunks, where each chunk is a list
of sentences.
Note:
This function validates the maximum sentence size based on service
limits using the 'toxicity_init_validate' function. It uses the NLTK
sentence tokenizer to split the paragraph into sentences.
Example:
paragraph = "This is a sample paragraph. It
contains multiple sentences. ..."
chunks = split_paragraph(paragraph, max_size=2048)
"""
# validate max. sentence size based on Service limits
nltk = self._toxicity_init_validate(max_size)
sentences = nltk.sent_tokenize(prompt_value)
chunks = list() # type: ignore
current_chunk = list() # type: ignore
current_size = 0
for sentence in sentences:
sentence_size = len(sentence.encode("utf-8"))
# If adding a new sentence exceeds max_size
# or current_chunk has 10 sentences, start a new chunk
if (current_size + sentence_size > max_size) or (len(current_chunk) >= 10):
if current_chunk: # Avoid appending empty chunks
chunks.append(current_chunk)
current_chunk = []
current_size = 0
current_chunk.append(sentence)
current_size += sentence_size
# Add any remaining sentences
if current_chunk:
chunks.append(current_chunk)
return chunks
def validate(self, prompt_value: str, config: Any = None) -> str:
"""
Check the toxicity of a given text prompt using AWS
Comprehend service and apply actions based on configuration.
Args:
prompt_value (str): The text content to be checked for toxicity.
config (Dict[str, Any]): Configuration for toxicity checks and actions.
Returns:
str: The original prompt_value if allowed or no toxicity found.
Raises:
ValueError: If the prompt contains toxic labels and cannot be
processed based on the configuration.
"""
chunks = self._split_paragraph(prompt_value=prompt_value)
for sentence_list in chunks:
segments = [{"Text": sentence} for sentence in sentence_list]
response = self.client.detect_toxic_content(
TextSegments=segments, LanguageCode="en"
)
if self.callback and self.callback.toxicity_callback:
self.moderation_beacon["moderation_input"] = segments # type: ignore
self.moderation_beacon["moderation_output"] = response
toxicity_found = False
threshold = config.get("threshold")
toxicity_labels = config.get("labels")
if not toxicity_labels:
for item in response["ResultList"]:
for label in item["Labels"]:
if label["Score"] >= threshold:
toxicity_found = True
break
else:
for item in response["ResultList"]:
for label in item["Labels"]:
if (
label["Name"] in toxicity_labels
and label["Score"] >= threshold
):
toxicity_found = True
break
if self.callback and self.callback.toxicity_callback:
if toxicity_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_toxicity(
self.moderation_beacon, self.unique_id
)
)
if toxicity_found:
raise ModerationToxicityError
return prompt_value
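# Hedged usage sketch (added for illustration; not part of the original module).
# The config keys mirror what `validate` reads; `comprehend_client` is assumed to
# be a boto3 "comprehend" client and the label names are assumptions.
def _example_toxicity_check(comprehend_client: Any, text: str) -> str:
    checker = ComprehendToxicity(client=comprehend_client)
    config = {
        "threshold": 0.6,                      # minimum label score to flag
        "labels": ["HATE_SPEECH", "INSULT"],   # empty/None means "any toxic label"
    }
    # Raises ModerationToxicityError if a matching label crosses the threshold;
    # otherwise the original text is returned unchanged.
    return checker.validate(prompt_value=text, config=config)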
| [] |
2024-01-10 | JonathanCai/langchain | libs~langchain~langchain~memory~readonly.py | from typing import Any, Dict, List
from langchain.schema import BaseMemory
class ReadOnlySharedMemory(BaseMemory):
"""A memory wrapper that is read-only and cannot be changed."""
memory: BaseMemory
@property
def memory_variables(self) -> List[str]:
"""Return memory variables."""
return self.memory.memory_variables
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Load memory variables from memory."""
return self.memory.load_memory_variables(inputs)
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Nothing should be saved or changed"""
pass
def clear(self) -> None:
"""Nothing to clear, got a memory like a vault."""
pass
| [] |
2024-01-10 | akshaya-a/langchain | langchain~llms~vertexai.py | """Wrapper around Google VertexAI models."""
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from pydantic import BaseModel, root_validator
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
from langchain.utilities.vertexai import (
init_vertexai,
raise_vertex_import_error,
)
if TYPE_CHECKING:
from google.auth.credentials import Credentials
from vertexai.language_models._language_models import _LanguageModel
class _VertexAICommon(BaseModel):
client: "_LanguageModel" = None #: :meta private:
model_name: str
"Model name to use."
temperature: float = 0.0
"Sampling temperature, it controls the degree of randomness in token selection."
max_output_tokens: int = 128
"Token limit determines the maximum amount of text output from one prompt."
top_p: float = 0.95
"Tokens are selected from most probable to least until the sum of their "
"probabilities equals the top-p value."
top_k: int = 40
"How the model selects tokens for output, the next token is selected from "
"among the top-k most probable tokens."
project: Optional[str] = None
"The default GCP project to use when making Vertex API calls."
location: str = "us-central1"
"The default location to use when making API calls."
credentials: Optional["Credentials"] = None
"The default custom credentials to use when making API calls. If not provided "
"credentials will be ascertained from the environment." ""
@property
def _default_params(self) -> Dict[str, Any]:
base_params = {
"temperature": self.temperature,
"max_output_tokens": self.max_output_tokens,
"top_k": self.top_p,
"top_p": self.top_k,
}
return {**base_params}
def _predict(self, prompt: str, stop: Optional[List[str]]) -> str:
res = self.client.predict(prompt, **self._default_params)
return self._enforce_stop_words(res.text, stop)
def _enforce_stop_words(self, text: str, stop: Optional[List[str]]) -> str:
if stop:
return enforce_stop_tokens(text, stop)
return text
@property
def _llm_type(self) -> str:
return "vertexai"
@classmethod
def _try_init_vertexai(cls, values: Dict) -> None:
allowed_params = ["project", "location", "credentials"]
        params = {k: v for k, v in values.items() if k in allowed_params}
init_vertexai(**params)
return None
class VertexAI(_VertexAICommon, LLM):
"""Wrapper around Google Vertex AI large language models."""
model_name: str = "text-bison"
tuned_model_name: Optional[str] = None
"The name of a tuned model, if it's provided, model_name is ignored."
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in environment."""
cls._try_init_vertexai(values)
try:
from vertexai.preview.language_models import TextGenerationModel
except ImportError:
raise_vertex_import_error()
tuned_model_name = values.get("tuned_model_name")
if tuned_model_name:
values["client"] = TextGenerationModel.get_tuned_model(tuned_model_name)
else:
values["client"] = TextGenerationModel.from_pretrained(values["model_name"])
return values
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
"""Call Vertex model to get predictions based on the prompt.
Args:
prompt: The prompt to pass into the model.
stop: A list of stop words (optional).
            run_manager: A CallbackManager for the LLM run, optional.
Returns:
The string generated by the model.
"""
return self._predict(prompt, stop)
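# Hedged usage sketch (added for illustration; not part of the original module).
# It shows how the sampling parameters documented above map onto a call; the GCP
# project id is a hypothetical placeholder, and valid credentials are required.
def _example_vertexai_call(question: str) -> str:
    llm = VertexAI(
        model_name="text-bison",
        temperature=0.2,
        max_output_tokens=256,
        top_p=0.95,
        top_k=40,
        project="my-gcp-project",  # hypothetical project id
    )
    return llm(question)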
| [] |
2024-01-10 | akshaya-a/langchain | langchain~callbacks~manager.py | from __future__ import annotations
import asyncio
import functools
import logging
import os
import warnings
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any, Dict, Generator, List, Optional, Type, TypeVar, Union, cast
from uuid import UUID, uuid4
import langchain
from langchain.callbacks.base import (
BaseCallbackHandler,
BaseCallbackManager,
ChainManagerMixin,
LLMManagerMixin,
RunManagerMixin,
ToolManagerMixin,
)
from langchain.callbacks.openai_info import OpenAICallbackHandler
from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.callbacks.tracers.langchain import LangChainTracer
from langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1, TracerSessionV1
from langchain.callbacks.tracers.schemas import TracerSession
from langchain.callbacks.tracers.stdout import ConsoleCallbackHandler
from langchain.schema import (
AgentAction,
AgentFinish,
BaseMessage,
LLMResult,
get_buffer_string,
)
logger = logging.getLogger(__name__)
Callbacks = Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]
openai_callback_var: ContextVar[Optional[OpenAICallbackHandler]] = ContextVar(
"openai_callback", default=None
)
tracing_callback_var: ContextVar[
Optional[LangChainTracerV1]
] = ContextVar( # noqa: E501
"tracing_callback", default=None
)
tracing_v2_callback_var: ContextVar[
Optional[LangChainTracer]
] = ContextVar( # noqa: E501
"tracing_callback_v2", default=None
)
def _get_debug() -> bool:
return langchain.debug
@contextmanager
def get_openai_callback() -> Generator[OpenAICallbackHandler, None, None]:
"""Get OpenAI callback handler in a context manager."""
cb = OpenAICallbackHandler()
openai_callback_var.set(cb)
yield cb
openai_callback_var.set(None)
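# Hedged usage sketch (added for illustration; not part of the original module).
# Token usage for OpenAI calls made inside the block is accumulated on the handler.
# The import is local to avoid a circular import at module load time.
def _example_openai_callback_usage() -> None:
    from langchain.llms import OpenAI
    llm = OpenAI(temperature=0)
    with get_openai_callback() as cb:
        llm("Tell me a joke")
        print(cb.total_tokens, cb.total_cost)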
@contextmanager
def tracing_enabled(
session_name: str = "default",
) -> Generator[TracerSessionV1, None, None]:
"""Get Tracer in a context manager."""
cb = LangChainTracerV1()
session = cast(TracerSessionV1, cb.load_session(session_name))
tracing_callback_var.set(cb)
yield session
tracing_callback_var.set(None)
@contextmanager
def tracing_v2_enabled(
session_name: Optional[str] = None,
*,
example_id: Optional[Union[str, UUID]] = None,
tenant_id: Optional[str] = None,
session_extra: Optional[Dict[str, Any]] = None,
) -> Generator[TracerSession, None, None]:
"""Get the experimental tracer handler in a context manager."""
# Issue a warning that this is experimental
warnings.warn(
"The experimental tracing v2 is in development. "
"This is not yet stable and may change in the future."
)
if isinstance(example_id, str):
example_id = UUID(example_id)
cb = LangChainTracer(
tenant_id=tenant_id,
session_name=session_name,
example_id=example_id,
session_extra=session_extra,
)
session = cb.ensure_session()
tracing_v2_callback_var.set(cb)
yield session
tracing_v2_callback_var.set(None)
def _handle_event(
handlers: List[BaseCallbackHandler],
event_name: str,
ignore_condition_name: Optional[str],
*args: Any,
**kwargs: Any,
) -> None:
"""Generic event handler for CallbackManager."""
message_strings: Optional[List[str]] = None
for handler in handlers:
try:
if ignore_condition_name is None or not getattr(
handler, ignore_condition_name
):
getattr(handler, event_name)(*args, **kwargs)
except NotImplementedError as e:
if event_name == "on_chat_model_start":
if message_strings is None:
message_strings = [get_buffer_string(m) for m in args[1]]
_handle_event(
[handler],
"on_llm_start",
"ignore_llm",
args[0],
message_strings,
*args[2:],
**kwargs,
)
else:
logger.warning(f"Error in {event_name} callback: {e}")
except Exception as e:
logging.warning(f"Error in {event_name} callback: {e}")
async def _ahandle_event_for_handler(
handler: BaseCallbackHandler,
event_name: str,
ignore_condition_name: Optional[str],
*args: Any,
**kwargs: Any,
) -> None:
try:
if ignore_condition_name is None or not getattr(handler, ignore_condition_name):
event = getattr(handler, event_name)
if asyncio.iscoroutinefunction(event):
await event(*args, **kwargs)
else:
await asyncio.get_event_loop().run_in_executor(
None, functools.partial(event, *args, **kwargs)
)
except NotImplementedError as e:
if event_name == "on_chat_model_start":
message_strings = [get_buffer_string(m) for m in args[1]]
await _ahandle_event_for_handler(
handler,
"on_llm_start",
"ignore_llm",
args[0],
message_strings,
*args[2:],
**kwargs,
)
else:
logger.warning(f"Error in {event_name} callback: {e}")
except Exception as e:
logger.warning(f"Error in {event_name} callback: {e}")
async def _ahandle_event(
handlers: List[BaseCallbackHandler],
event_name: str,
ignore_condition_name: Optional[str],
*args: Any,
**kwargs: Any,
) -> None:
"""Generic event handler for AsyncCallbackManager."""
await asyncio.gather(
*(
_ahandle_event_for_handler(
handler, event_name, ignore_condition_name, *args, **kwargs
)
for handler in handlers
)
)
BRM = TypeVar("BRM", bound="BaseRunManager")
class BaseRunManager(RunManagerMixin):
"""Base class for run manager (a bound callback manager)."""
def __init__(
self,
run_id: UUID,
handlers: List[BaseCallbackHandler],
inheritable_handlers: List[BaseCallbackHandler],
parent_run_id: Optional[UUID] = None,
) -> None:
"""Initialize run manager."""
self.run_id = run_id
self.handlers = handlers
self.inheritable_handlers = inheritable_handlers
self.parent_run_id = parent_run_id
@classmethod
def get_noop_manager(cls: Type[BRM]) -> BRM:
"""Return a manager that doesn't perform any operations."""
return cls(uuid4(), [], [])
class RunManager(BaseRunManager):
"""Sync Run Manager."""
def on_text(
self,
text: str,
**kwargs: Any,
) -> Any:
"""Run when text is received."""
_handle_event(
self.handlers,
"on_text",
None,
text,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class AsyncRunManager(BaseRunManager):
"""Async Run Manager."""
async def on_text(
self,
text: str,
**kwargs: Any,
) -> Any:
"""Run when text is received."""
await _ahandle_event(
self.handlers,
"on_text",
None,
text,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
"""Callback manager for LLM run."""
def on_llm_new_token(
self,
token: str,
**kwargs: Any,
) -> None:
"""Run when LLM generates a new token."""
_handle_event(
self.handlers,
"on_llm_new_token",
"ignore_llm",
token=token,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running."""
_handle_event(
self.handlers,
"on_llm_end",
"ignore_llm",
response,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
def on_llm_error(
self,
error: Union[Exception, KeyboardInterrupt],
**kwargs: Any,
) -> None:
"""Run when LLM errors."""
_handle_event(
self.handlers,
"on_llm_error",
"ignore_llm",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
"""Async callback manager for LLM run."""
async def on_llm_new_token(
self,
token: str,
**kwargs: Any,
) -> None:
"""Run when LLM generates a new token."""
await _ahandle_event(
self.handlers,
"on_llm_new_token",
"ignore_llm",
token,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running."""
await _ahandle_event(
self.handlers,
"on_llm_end",
"ignore_llm",
response,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
async def on_llm_error(
self,
error: Union[Exception, KeyboardInterrupt],
**kwargs: Any,
) -> None:
"""Run when LLM errors."""
await _ahandle_event(
self.handlers,
"on_llm_error",
"ignore_llm",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class CallbackManagerForChainRun(RunManager, ChainManagerMixin):
"""Callback manager for chain run."""
def get_child(self) -> CallbackManager:
"""Get a child callback manager."""
manager = CallbackManager([], parent_run_id=self.run_id)
manager.set_handlers(self.inheritable_handlers)
return manager
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
"""Run when chain ends running."""
_handle_event(
self.handlers,
"on_chain_end",
"ignore_chain",
outputs,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
def on_chain_error(
self,
error: Union[Exception, KeyboardInterrupt],
**kwargs: Any,
) -> None:
"""Run when chain errors."""
_handle_event(
self.handlers,
"on_chain_error",
"ignore_chain",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
"""Run when agent action is received."""
_handle_event(
self.handlers,
"on_agent_action",
"ignore_agent",
action,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
"""Run when agent finish is received."""
_handle_event(
self.handlers,
"on_agent_finish",
"ignore_agent",
finish,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class AsyncCallbackManagerForChainRun(AsyncRunManager, ChainManagerMixin):
"""Async callback manager for chain run."""
def get_child(self) -> AsyncCallbackManager:
"""Get a child callback manager."""
manager = AsyncCallbackManager([], parent_run_id=self.run_id)
manager.set_handlers(self.inheritable_handlers)
return manager
async def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
"""Run when chain ends running."""
await _ahandle_event(
self.handlers,
"on_chain_end",
"ignore_chain",
outputs,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
async def on_chain_error(
self,
error: Union[Exception, KeyboardInterrupt],
**kwargs: Any,
) -> None:
"""Run when chain errors."""
await _ahandle_event(
self.handlers,
"on_chain_error",
"ignore_chain",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
async def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
"""Run when agent action is received."""
await _ahandle_event(
self.handlers,
"on_agent_action",
"ignore_agent",
action,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
async def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
"""Run when agent finish is received."""
await _ahandle_event(
self.handlers,
"on_agent_finish",
"ignore_agent",
finish,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class CallbackManagerForToolRun(RunManager, ToolManagerMixin):
"""Callback manager for tool run."""
def get_child(self) -> CallbackManager:
"""Get a child callback manager."""
manager = CallbackManager([], parent_run_id=self.run_id)
manager.set_handlers(self.inheritable_handlers)
return manager
def on_tool_end(
self,
output: str,
**kwargs: Any,
) -> None:
"""Run when tool ends running."""
_handle_event(
self.handlers,
"on_tool_end",
"ignore_agent",
output,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
def on_tool_error(
self,
error: Union[Exception, KeyboardInterrupt],
**kwargs: Any,
) -> None:
"""Run when tool errors."""
_handle_event(
self.handlers,
"on_tool_error",
"ignore_agent",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class AsyncCallbackManagerForToolRun(AsyncRunManager, ToolManagerMixin):
"""Async callback manager for tool run."""
def get_child(self) -> AsyncCallbackManager:
"""Get a child callback manager."""
manager = AsyncCallbackManager([], parent_run_id=self.run_id)
manager.set_handlers(self.inheritable_handlers)
return manager
async def on_tool_end(self, output: str, **kwargs: Any) -> None:
"""Run when tool ends running."""
await _ahandle_event(
self.handlers,
"on_tool_end",
"ignore_agent",
output,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
async def on_tool_error(
self,
error: Union[Exception, KeyboardInterrupt],
**kwargs: Any,
) -> None:
"""Run when tool errors."""
await _ahandle_event(
self.handlers,
"on_tool_error",
"ignore_agent",
error,
run_id=self.run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
class CallbackManager(BaseCallbackManager):
"""Callback manager that can be used to handle callbacks from langchain."""
def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
run_id: Optional[UUID] = None,
**kwargs: Any,
) -> CallbackManagerForLLMRun:
"""Run when LLM starts running."""
if run_id is None:
run_id = uuid4()
_handle_event(
self.handlers,
"on_llm_start",
"ignore_llm",
serialized,
prompts,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return CallbackManagerForLLMRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
run_id: Optional[UUID] = None,
**kwargs: Any,
) -> CallbackManagerForLLMRun:
"""Run when LLM starts running."""
if run_id is None:
run_id = uuid4()
_handle_event(
self.handlers,
"on_chat_model_start",
"ignore_chat_model",
serialized,
messages,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
# Re-use the LLM Run Manager since the outputs are treated
# the same for now
return CallbackManagerForLLMRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
def on_chain_start(
self,
serialized: Dict[str, Any],
inputs: Dict[str, Any],
run_id: Optional[UUID] = None,
**kwargs: Any,
) -> CallbackManagerForChainRun:
"""Run when chain starts running."""
if run_id is None:
run_id = uuid4()
_handle_event(
self.handlers,
"on_chain_start",
"ignore_chain",
serialized,
inputs,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return CallbackManagerForChainRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
run_id: Optional[UUID] = None,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> CallbackManagerForToolRun:
"""Run when tool starts running."""
if run_id is None:
run_id = uuid4()
_handle_event(
self.handlers,
"on_tool_start",
"ignore_agent",
serialized,
input_str,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return CallbackManagerForToolRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
@classmethod
def configure(
cls,
inheritable_callbacks: Callbacks = None,
local_callbacks: Callbacks = None,
verbose: bool = False,
) -> CallbackManager:
"""Configure the callback manager."""
return _configure(cls, inheritable_callbacks, local_callbacks, verbose)
class AsyncCallbackManager(BaseCallbackManager):
"""Async callback manager that can be used to handle callbacks from LangChain."""
@property
def is_async(self) -> bool:
"""Return whether the handler is async."""
return True
async def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
run_id: Optional[UUID] = None,
**kwargs: Any,
) -> AsyncCallbackManagerForLLMRun:
"""Run when LLM starts running."""
if run_id is None:
run_id = uuid4()
await _ahandle_event(
self.handlers,
"on_llm_start",
"ignore_llm",
serialized,
prompts,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return AsyncCallbackManagerForLLMRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
async def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
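# Editor's note: a docstring is assumed here for symmetry with the sync variant above.
"""Run when a chat model starts running."""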
if run_id is None:
run_id = uuid4()
await _ahandle_event(
self.handlers,
"on_chat_model_start",
"ignore_chat_model",
serialized,
messages,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return AsyncCallbackManagerForLLMRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
async def on_chain_start(
self,
serialized: Dict[str, Any],
inputs: Dict[str, Any],
run_id: Optional[UUID] = None,
**kwargs: Any,
) -> AsyncCallbackManagerForChainRun:
"""Run when chain starts running."""
if run_id is None:
run_id = uuid4()
await _ahandle_event(
self.handlers,
"on_chain_start",
"ignore_chain",
serialized,
inputs,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return AsyncCallbackManagerForChainRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
async def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
run_id: Optional[UUID] = None,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> AsyncCallbackManagerForToolRun:
"""Run when tool starts running."""
if run_id is None:
run_id = uuid4()
await _ahandle_event(
self.handlers,
"on_tool_start",
"ignore_agent",
serialized,
input_str,
run_id=run_id,
parent_run_id=self.parent_run_id,
**kwargs,
)
return AsyncCallbackManagerForToolRun(
run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
)
@classmethod
def configure(
cls,
inheritable_callbacks: Callbacks = None,
local_callbacks: Callbacks = None,
verbose: bool = False,
) -> AsyncCallbackManager:
"""Configure the callback manager."""
return _configure(cls, inheritable_callbacks, local_callbacks, verbose)
T = TypeVar("T", CallbackManager, AsyncCallbackManager)
def _configure(
callback_manager_cls: Type[T],
inheritable_callbacks: Callbacks = None,
local_callbacks: Callbacks = None,
verbose: bool = False,
) -> T:
"""Configure the callback manager."""
callback_manager = callback_manager_cls([])
if inheritable_callbacks or local_callbacks:
if isinstance(inheritable_callbacks, list) or inheritable_callbacks is None:
inheritable_callbacks_ = inheritable_callbacks or []
callback_manager = callback_manager_cls(
handlers=inheritable_callbacks_.copy(),
inheritable_handlers=inheritable_callbacks_.copy(),
)
else:
callback_manager = callback_manager_cls(
handlers=inheritable_callbacks.handlers,
inheritable_handlers=inheritable_callbacks.inheritable_handlers,
parent_run_id=inheritable_callbacks.parent_run_id,
)
local_handlers_ = (
local_callbacks
if isinstance(local_callbacks, list)
else (local_callbacks.handlers if local_callbacks else [])
)
for handler in local_handlers_:
callback_manager.add_handler(handler, False)
tracer = tracing_callback_var.get()
open_ai = openai_callback_var.get()
tracing_enabled_ = (
os.environ.get("LANGCHAIN_TRACING") is not None
or tracer is not None
or os.environ.get("LANGCHAIN_HANDLER") is not None
)
tracer_v2 = tracing_v2_callback_var.get()
tracing_v2_enabled_ = (
os.environ.get("LANGCHAIN_TRACING_V2") is not None or tracer_v2 is not None
)
tracer_session = os.environ.get("LANGCHAIN_SESSION")
debug = _get_debug()
if tracer_session is None:
tracer_session = "default"
if (
verbose
or debug
or tracing_enabled_
or tracing_v2_enabled_
or open_ai is not None
):
if verbose and not any(
isinstance(handler, StdOutCallbackHandler)
for handler in callback_manager.handlers
):
if debug:
pass
else:
callback_manager.add_handler(StdOutCallbackHandler(), False)
if debug and not any(
isinstance(handler, ConsoleCallbackHandler)
for handler in callback_manager.handlers
):
callback_manager.add_handler(ConsoleCallbackHandler(), True)
if tracing_enabled_ and not any(
isinstance(handler, LangChainTracerV1)
for handler in callback_manager.handlers
):
if tracer:
callback_manager.add_handler(tracer, True)
else:
handler = LangChainTracerV1()
handler.load_session(tracer_session)
callback_manager.add_handler(handler, True)
if tracing_v2_enabled_ and not any(
isinstance(handler, LangChainTracer)
for handler in callback_manager.handlers
):
if tracer_v2:
callback_manager.add_handler(tracer_v2, True)
else:
try:
handler = LangChainTracer(session_name=tracer_session)
handler.ensure_session()
callback_manager.add_handler(handler, True)
except Exception as e:
logger.warning(
"Unable to load requested LangChainTracer."
" To disable this warning,"
" unset the LANGCHAIN_TRACING_V2 environment variables.",
e,
)
if open_ai is not None and not any(
isinstance(handler, OpenAICallbackHandler)
for handler in callback_manager.handlers
):
callback_manager.add_handler(open_ai, True)
return callback_manager
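# --- Illustrative usage sketch (editor's addition, not part of the original module) ---
# A minimal example of the configuration cascade implemented by _configure above: with
# verbose=True, debug off, and no handlers supplied, a StdOutCallbackHandler is expected
# to be attached as a non-inheritable handler. The import path below assumes the standard
# langchain package layout used elsewhere in this module.
if __name__ == "__main__":
    from langchain.callbacks.stdout import StdOutCallbackHandler

    manager = CallbackManager.configure(verbose=True)
    # Expected to print True, since verbose adds a stdout handler when debug is off.
    print(any(isinstance(h, StdOutCallbackHandler) for h in manager.handlers))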
| [] |
2024-01-10 | akshaya-a/langchain | langchain~llms~__init__.py | """Wrappers on top of large language models APIs."""
from typing import Dict, Type
from langchain.llms.ai21 import AI21
from langchain.llms.aleph_alpha import AlephAlpha
from langchain.llms.anthropic import Anthropic
from langchain.llms.anyscale import Anyscale
from langchain.llms.bananadev import Banana
from langchain.llms.base import BaseLLM
from langchain.llms.beam import Beam
from langchain.llms.cerebriumai import CerebriumAI
from langchain.llms.cohere import Cohere
from langchain.llms.deepinfra import DeepInfra
from langchain.llms.fake import FakeListLLM
from langchain.llms.forefrontai import ForefrontAI
from langchain.llms.google_palm import GooglePalm
from langchain.llms.gooseai import GooseAI
from langchain.llms.gpt4all import GPT4All
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.llms.huggingface_hub import HuggingFaceHub
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference
from langchain.llms.human import HumanInputLLM
from langchain.llms.llamacpp import LlamaCpp
from langchain.llms.modal import Modal
from langchain.llms.mosaicml import MosaicML
from langchain.llms.nlpcloud import NLPCloud
from langchain.llms.openai import AzureOpenAI, OpenAI, OpenAIChat
from langchain.llms.openlm import OpenLM
from langchain.llms.petals import Petals
from langchain.llms.pipelineai import PipelineAI
from langchain.llms.predictionguard import PredictionGuard
from langchain.llms.promptlayer_openai import PromptLayerOpenAI, PromptLayerOpenAIChat
from langchain.llms.replicate import Replicate
from langchain.llms.rwkv import RWKV
from langchain.llms.sagemaker_endpoint import SagemakerEndpoint
from langchain.llms.self_hosted import SelfHostedPipeline
from langchain.llms.self_hosted_hugging_face import SelfHostedHuggingFaceLLM
from langchain.llms.stochasticai import StochasticAI
from langchain.llms.vertexai import VertexAI
from langchain.llms.writer import Writer
__all__ = [
"Anthropic",
"AlephAlpha",
"Anyscale",
"Banana",
"Beam",
"CerebriumAI",
"Cohere",
"DeepInfra",
"ForefrontAI",
"GooglePalm",
"GooseAI",
"GPT4All",
"LlamaCpp",
"Modal",
"MosaicML",
"NLPCloud",
"OpenAI",
"OpenAIChat",
"OpenLM",
"Petals",
"PipelineAI",
"HuggingFaceEndpoint",
"HuggingFaceHub",
"SagemakerEndpoint",
"HuggingFacePipeline",
"AI21",
"AzureOpenAI",
"Replicate",
"SelfHostedPipeline",
"SelfHostedHuggingFaceLLM",
"PromptLayerOpenAI",
"PromptLayerOpenAIChat",
"StochasticAI",
"Writer",
"RWKV",
"PredictionGuard",
"HumanInputLLM",
"HuggingFaceTextGenInference",
"FakeListLLM",
"VertexAI",
]
type_to_cls_dict: Dict[str, Type[BaseLLM]] = {
"ai21": AI21,
"aleph_alpha": AlephAlpha,
"anthropic": Anthropic,
"anyscale": Anyscale,
"bananadev": Banana,
"beam": Beam,
"cerebriumai": CerebriumAI,
"cohere": Cohere,
"deepinfra": DeepInfra,
"forefrontai": ForefrontAI,
"google_palm": GooglePalm,
"gooseai": GooseAI,
"gpt4all": GPT4All,
"huggingface_hub": HuggingFaceHub,
"huggingface_endpoint": HuggingFaceEndpoint,
"llamacpp": LlamaCpp,
"modal": Modal,
"mosaic": MosaicML,
"sagemaker_endpoint": SagemakerEndpoint,
"nlpcloud": NLPCloud,
"human-input": HumanInputLLM,
"openai": OpenAI,
"openlm": OpenLM,
"petals": Petals,
"pipelineai": PipelineAI,
"huggingface_pipeline": HuggingFacePipeline,
"azure": AzureOpenAI,
"replicate": Replicate,
"self_hosted": SelfHostedPipeline,
"self_hosted_hugging_face": SelfHostedHuggingFaceLLM,
"stochasticai": StochasticAI,
"writer": Writer,
"rwkv": RWKV,
"huggingface_textgen_inference": HuggingFaceTextGenInference,
"fake-list": FakeListLLM,
"vertexai": VertexAI,
}
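# --- Illustrative usage sketch (editor's addition, not part of the original module) ---
# A small example of resolving an LLM class from the type_to_cls_dict registry above.
# FakeListLLM is used because it requires no API key; its `responses` parameter is
# assumed from its use in LangChain's testing utilities.
if __name__ == "__main__":
    llm_cls = type_to_cls_dict["fake-list"]
    llm = llm_cls(responses=["Hello from a fake LLM"])
    # Calling the LLM returns the next canned response from the list.
    print(llm("Say hello"))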
| [] |
2024-01-10 | akshaya-a/langchain | tests~integration_tests~embeddings~test_vertexai.py | """Test Vertex AI API wrapper.
In order to run this test, you need to install VertexAI SDK
pip install google-cloud-aiplatform>=1.25.0
Your end-user credentials would be used to make the calls (make sure you've run
`gcloud auth login` first).
"""
from langchain.embeddings import VertexAIEmbeddings
def test_embedding_documents() -> None:
documents = ["foo bar"]
model = VertexAIEmbeddings()
output = model.embed_documents(documents)
assert len(output) == 1
assert len(output[0]) == 768
assert model._llm_type == "vertexai"
assert model.model_name == model.client._model_id
def test_embedding_query() -> None:
document = "foo bar"
model = VertexAIEmbeddings()
output = model.embed_query(document)
assert len(output) == 768
| [] |
2024-01-10 | akshaya-a/langchain | langchain~embeddings~vertexai.py | """Wrapper around Google VertexAI embedding models."""
from typing import Dict, List
from pydantic import root_validator
from langchain.embeddings.base import Embeddings
from langchain.llms.vertexai import _VertexAICommon
from langchain.utilities.vertexai import raise_vertex_import_error
class VertexAIEmbeddings(_VertexAICommon, Embeddings):
model_name: str = "textembedding-gecko"
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validates that the python package exists in environment."""
cls._try_init_vertexai(values)
try:
from vertexai.preview.language_models import TextEmbeddingModel
except ImportError:
raise_vertex_import_error()
values["client"] = TextEmbeddingModel.from_pretrained(values["model_name"])
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Embed a list of strings.
Args:
texts: List[str] The list of strings to embed.
Returns:
List of embeddings, one for each text.
"""
embeddings = self.client.get_embeddings(texts)
return [el.values for el in embeddings]
def embed_query(self, text: str) -> List[float]:
"""Embed a text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
embeddings = self.client.get_embeddings([text])
return embeddings[0].values
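# --- Illustrative usage sketch (editor's addition, not part of the original module) ---
# A minimal example of calling the wrapper above. It assumes Vertex AI credentials are
# already configured (e.g. via `gcloud auth login`) and that textembedding-gecko returns
# 768-dimensional vectors, as the integration test in this repository expects.
if __name__ == "__main__":
    embeddings = VertexAIEmbeddings()
    vector = embeddings.embed_query("hello world")
    print(len(vector))  # expected: 768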
| [] |
2024-01-10 | akshaya-a/langchain | langchain~chat_models~vertexai.py | """Wrapper around Google VertexAI chat-based models."""
from dataclasses import dataclass, field
from typing import Dict, List, Optional
from pydantic import root_validator
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.chat_models.base import BaseChatModel
from langchain.llms.vertexai import _VertexAICommon
from langchain.schema import (
AIMessage,
BaseMessage,
ChatGeneration,
ChatResult,
HumanMessage,
SystemMessage,
)
from langchain.utilities.vertexai import raise_vertex_import_error
@dataclass
class _MessagePair:
"""InputOutputTextPair represents a pair of input and output texts."""
question: HumanMessage
answer: AIMessage
@dataclass
class _ChatHistory:
"""InputOutputTextPair represents a pair of input and output texts."""
history: List[_MessagePair] = field(default_factory=list)
system_message: Optional[SystemMessage] = None
def _parse_chat_history(history: List[BaseMessage]) -> _ChatHistory:
"""Parse a sequence of messages into history.
A sequence should be either (SystemMessage, HumanMessage, AIMessage,
HumanMessage, AIMessage, ...) or (HumanMessage, AIMessage, HumanMessage,
AIMessage, ...).
Args:
history: The list of messages to re-create the history of the chat.
Returns:
A parsed chat history.
Raises:
ValueError: If the number of messages is odd, or a human message is not followed
by a message from the AI (e.g., Human, Human, AI or AI, AI, Human).
"""
if not history:
return _ChatHistory()
first_message = history[0]
system_message = first_message if isinstance(first_message, SystemMessage) else None
chat_history = _ChatHistory(system_message=system_message)
messages_left = history[1:] if system_message else history
if len(messages_left) % 2 != 0:
raise ValueError(
f"Amount of messages in history should be even, got {len(messages_left)}!"
)
for question, answer in zip(messages_left[::2], messages_left[1::2]):
if not isinstance(question, HumanMessage) or not isinstance(answer, AIMessage):
raise ValueError(
"A human message should follow a bot one, "
f"got {question.type}, {answer.type}."
)
chat_history.history.append(_MessagePair(question=question, answer=answer))
return chat_history
class ChatVertexAI(_VertexAICommon, BaseChatModel):
"""Wrapper around Vertex AI large language models."""
model_name: str = "chat-bison"
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in environment."""
cls._try_init_vertexai(values)
try:
from vertexai.preview.language_models import ChatModel
except ImportError:
raise_vertex_import_error()
values["client"] = ChatModel.from_pretrained(values["model_name"])
return values
def _generate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> ChatResult:
"""Generate next turn in the conversation.
Args:
messages: The history of the conversation as a list of messages.
stop: The list of stop words (optional).
run_manager: The CallbackManager for the LLM run; it is not used at the moment.
Returns:
The ChatResult that contains outputs generated by the model.
Raises:
ValueError: if the last message in the list is not from human.
"""
if not messages:
raise ValueError(
"You should provide at least one message to start the chat!"
)
question = messages[-1]
if not isinstance(question, HumanMessage):
raise ValueError(
f"Last message in the list should be from human, got {question.type}."
)
history = _parse_chat_history(messages[:-1])
context = history.system_message.content if history.system_message else None
chat = self.client.start_chat(context=context, **self._default_params)
for pair in history.history:
chat._history.append((pair.question.content, pair.answer.content))
response = chat.send_message(question.content)
text = self._enforce_stop_words(response.text, stop)
return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])
async def _agenerate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> ChatResult:
raise NotImplementedError(
"""Vertex AI doesn't support async requests at the moment."""
)
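# --- Illustrative usage sketch (editor's addition, not part of the original module) ---
# A minimal example of the message-pairing rule enforced by _parse_chat_history above:
# an optional leading SystemMessage followed by alternating Human/AI messages.
if __name__ == "__main__":
    msgs = [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="Hi"),
        AIMessage(content="Hello!"),
    ]
    parsed = _parse_chat_history(msgs)
    print(parsed.system_message.content)  # "You are a terse assistant."
    print(len(parsed.history))  # one question/answer pair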
| [] |
2024-01-10 | akshaya-a/langchain | tests~integration_tests~chat_models~test_vertexai.py | """Test Vertex AI API wrapper.
In order to run this test, you need to install the VertexAI SDK (that is, the private
preview) and be whitelisted to list the models themselves. Install the SDK with:
pip install google-cloud-aiplatform>=1.25.0
Your end-user credentials would be used to make the calls (make sure you've run
`gcloud auth login` first).
"""
import pytest
from langchain.chat_models import ChatVertexAI
from langchain.chat_models.vertexai import _MessagePair, _parse_chat_history
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
)
def test_vertexai_single_call() -> None:
model = ChatVertexAI()
message = HumanMessage(content="Hello")
response = model([message])
assert isinstance(response, AIMessage)
assert isinstance(response.content, str)
assert model._llm_type == "vertexai"
assert model.model_name == model.client._model_id
def test_vertexai_single_call_with_context() -> None:
model = ChatVertexAI()
raw_context = (
"My name is Ned. You are my personal assistant. My favorite movies "
"are Lord of the Rings and Hobbit."
)
question = (
"Hello, could you recommend a good movie for me to watch this evening, please?"
)
context = SystemMessage(content=raw_context)
message = HumanMessage(content=question)
response = model([context, message])
assert isinstance(response, AIMessage)
assert isinstance(response.content, str)
def test_parse_chat_history_correct() -> None:
text_context = (
"My name is Ned. You are my personal assistant. My "
"favorite movies are Lord of the Rings and Hobbit."
)
context = SystemMessage(content=text_context)
text_question = (
"Hello, could you recommend a good movie for me to watch this evening, please?"
)
question = HumanMessage(content=text_question)
text_answer = (
"Sure, You might enjoy The Lord of the Rings: The Fellowship of the Ring "
"(2001): This is the first movie in the Lord of the Rings trilogy."
)
answer = AIMessage(content=text_answer)
history = _parse_chat_history([context, question, answer, question, answer])
assert history.system_message == context
assert len(history.history) == 2
assert history.history[0] == _MessagePair(question=question, answer=answer)
def test_parse_chat_history_wrong_sequence() -> None:
text_question = (
"Hello, could you recommend a good movie for me to watch this evening, please?"
)
question = HumanMessage(content=text_question)
with pytest.raises(ValueError) as exc_info:
_ = _parse_chat_history([question, question])
assert (
str(exc_info.value)
== "A human message should follow a bot one, got human, human."
)
def test_vertexai_single_call_fails_no_message() -> None:
chat = ChatVertexAI()
with pytest.raises(ValueError) as exc_info:
_ = chat([])
assert (
str(exc_info.value)
== "You should provide at least one message to start the chat!"
)
| [
"Hello"
] |