date_collected (string) | repo_name (string) | file_name (string) | file_contents (string) | prompts (sequence) |
---|---|---|---|---|
2024-01-10 | rokanost/sweep | tests~archive~test_langchain_chunker.py | from langchain.text_splitter import (
RecursiveCharacterTextSplitter,
Language,
)
python_text = '''
import io
import os
import zipfile
import openai
import requests
from loguru import logger
from sweepai.core.gha_extraction import GHAExtractor
from sweepai.events import CheckRunCompleted
from sweepai.handlers.on_comment import on_comment
from sweepai.utils.config.client import SweepConfig, get_gha_enabled
from sweepai.utils.github_utils import get_github_client, get_token
openai.api_key = os.environ.get("OPENAI_API_KEY")
log_message = """GitHub actions yielded the following error.
{error_logs}
This is likely a linting or type-checking issue with the source code but if you are updating the GitHub Actions or versioning, this could be an issue with the GitHub Action yaml files."""
def download_logs(repo_full_name: str, run_id: int, installation_id: int):
headers = {
"Accept": "application/vnd.github+json",
"Authorization": f"Bearer {get_token(installation_id)}",
"X-GitHub-Api-Version": "2022-11-28"
}
response = requests.get(f"https://api.github.com/repos/{repo_full_name}/actions/runs/{run_id}/logs",
headers=headers)
logs_str = ""
if response.status_code == 200:
zip_file = zipfile.ZipFile(io.BytesIO(response.content))
for file in zip_file.namelist():
if "/" not in file:
with zip_file.open(file) as f:
logs_str += f.read().decode("utf-8")
else:
logger.warning(f"Failed to download logs for run id: {run_id}")
return logs_str
def clean_logs(logs_str: str):
log_list = logs_str.split("\n")
truncated_logs = [log[log.find(" ") + 1:] for log in log_list]
patterns = [
# for docker
"Already exists",
"Pulling fs layer",
"Waiting",
"Download complete",
"Verifying Checksum",
"Pull complete",
# For github
"remote: Counting objects",
"remote: Compressing objects:",
"Receiving objects:",
"Resolving deltas:"
]
return "\n".join([log.strip() for log in truncated_logs if not any(pattern in log for pattern in patterns)])
def on_check_suite(request: CheckRunCompleted):
logger.info(f"Received check run completed event for {request.repository.full_name}")
g = get_github_client(request.installation.id)
repo = g.get_repo(request.repository.full_name)
if not get_gha_enabled(repo):
logger.info(f"Skipping github action for {request.repository.full_name} because it is not enabled")
return None
pr = repo.get_pull(request.check_run.pull_requests[0].number)
num_pr_commits = len(list(pr.get_commits()))
if num_pr_commits > 20:
logger.info(f"Skipping github action for PR with {num_pr_commits} commits")
return None
logger.info(f"Running github action for PR with {num_pr_commits} commits")
logs = download_logs(
request.repository.full_name,
request.check_run.run_id,
request.installation.id
)
if not logs:
return None
logs = clean_logs(logs)
extractor = GHAExtractor()
logger.info(f"Extracting logs from {request.repository.full_name}, logs: {logs}")
problematic_logs = extractor.gha_extract(logs)
if problematic_logs.count("\n") > 15:
problematic_logs += "\n\nThere are a lot of errors. This is likely a larger issue with the PR and not a small linting/type-checking issue."
comments = list(pr.get_issue_comments())
if len(comments) >= 2 and problematic_logs == comments[-1].body and comments[-2].body == comments[-1].body:
comment = pr.as_issue().create_comment(log_message.format(error_logs=problematic_logs) + "\n\nI'm getting the same errors 3 times in a row, so I will stop working on fixing this PR.")
logger.warning("Skipping logs because it is duplicated")
raise Exception("Duplicate error logs")
print(problematic_logs)
comment = pr.as_issue().create_comment(log_message.format(error_logs=problematic_logs))
on_comment(
repo_full_name=request.repository.full_name,
repo_description=request.repository.description,
comment=problematic_logs,
pr_path=None,
pr_line_position=None,
username=request.sender.login,
installation_id=request.installation.id,
pr_number=request.check_run.pull_requests[0].number,
comment_id=comment.id,
repo=repo,
)
return {"success": True}
'''
python_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.PYTHON, chunk_size=1500, chunk_overlap=0
)
python_docs = python_splitter.create_documents([python_text])
# [print(document.page_content + "\n\n===========\n\n") for document in python_docs]
# quit()
js_text = """
import { Document, BaseNode } from "../Node";
import { v4 as uuidv4 } from "uuid";
import { BaseRetriever } from "../Retriever";
import { ServiceContext } from "../ServiceContext";
import { StorageContext } from "../storage/StorageContext";
import { BaseDocumentStore } from "../storage/docStore/types";
import { VectorStore } from "../storage/vectorStore/types";
import { BaseIndexStore } from "../storage/indexStore/types";
import { BaseQueryEngine } from "../QueryEngine";
import { ResponseSynthesizer } from "../ResponseSynthesizer";
/**
* The underlying structure of each index.
*/
export abstract class IndexStruct {
indexId: string;
summary?: string;
constructor(indexId = uuidv4(), summary = undefined) {
this.indexId = indexId;
this.summary = summary;
}
toJson(): Record<string, unknown> {
return {
indexId: this.indexId,
summary: this.summary,
};
}
getSummary(): string {
if (this.summary === undefined) {
throw new Error("summary field of the index dict is not set");
}
return this.summary;
}
}
export enum IndexStructType {
SIMPLE_DICT = "simple_dict",
LIST = "list",
}
export class IndexDict extends IndexStruct {
nodesDict: Record<string, BaseNode> = {};
docStore: Record<string, Document> = {}; // FIXME: this should be implemented in storageContext
type: IndexStructType = IndexStructType.SIMPLE_DICT;
getSummary(): string {
if (this.summary === undefined) {
throw new Error("summary field of the index dict is not set");
}
return this.summary;
}
addNode(node: BaseNode, textId?: string) {
const vectorId = textId ?? node.id_;
this.nodesDict[vectorId] = node;
}
toJson(): Record<string, unknown> {
return {
...super.toJson(),
nodesDict: this.nodesDict,
type: this.type,
};
}
}
export function jsonToIndexStruct(json: any): IndexStruct {
if (json.type === IndexStructType.LIST) {
const indexList = new IndexList(json.indexId, json.summary);
indexList.nodes = json.nodes;
return indexList;
} else if (json.type === IndexStructType.SIMPLE_DICT) {
const indexDict = new IndexDict(json.indexId, json.summary);
indexDict.nodesDict = json.nodesDict;
return indexDict;
} else {
throw new Error(`Unknown index struct type: ${json.type}`);
}
}
export class IndexList extends IndexStruct {
nodes: string[] = [];
type: IndexStructType = IndexStructType.LIST;
addNode(node: BaseNode) {
this.nodes.push(node.id_);
}
toJson(): Record<string, unknown> {
return {
...super.toJson(),
nodes: this.nodes,
type: this.type,
};
}
}
export interface BaseIndexInit<T> {
serviceContext: ServiceContext;
storageContext: StorageContext;
docStore: BaseDocumentStore;
vectorStore?: VectorStore;
indexStore?: BaseIndexStore;
indexStruct: T;
}
/**
* Indexes are the data structure that we store our nodes and embeddings in so
* they can be retrieved for our queries.
*/
export abstract class BaseIndex<T> {
serviceContext: ServiceContext;
storageContext: StorageContext;
docStore: BaseDocumentStore;
vectorStore?: VectorStore;
indexStore?: BaseIndexStore;
indexStruct: T;
constructor(init: BaseIndexInit<T>) {
this.serviceContext = init.serviceContext;
this.storageContext = init.storageContext;
this.docStore = init.docStore;
this.vectorStore = init.vectorStore;
this.indexStore = init.indexStore;
this.indexStruct = init.indexStruct;
}
/**
* Create a new retriever from the index.
* @param retrieverOptions
*/
abstract asRetriever(options?: any): BaseRetriever;
/**
* Create a new query engine from the index. It will also create a retriever
* and response synthesizer if they are not provided.
* @param options you can supply your own custom Retriever and ResponseSynthesizer
*/
abstract asQueryEngine(options?: {
retriever?: BaseRetriever;
responseSynthesizer?: ResponseSynthesizer;
}): BaseQueryEngine;
}
export interface VectorIndexOptions {
nodes?: BaseNode[];
indexStruct?: IndexDict;
indexId?: string;
serviceContext?: ServiceContext;
storageContext?: StorageContext;
}
export interface VectorIndexConstructorProps extends BaseIndexInit<IndexDict> {
vectorStore: VectorStore;
}
"""
js_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.JS, chunk_size=1500, chunk_overlap=0
)
js_docs = js_splitter.create_documents([js_text])
[print(document.page_content + "\n\n========\n") for document in js_docs]
| [] |
2024-01-10 | rokanost/sweep | sweepai~handlers~on_check_suite.py | import io
import os
import zipfile
import openai
import requests
from loguru import logger
from sweepai.core.gha_extraction import GHAExtractor
from sweepai.events import CheckRunCompleted
from sweepai.handlers.on_comment import on_comment
from sweepai.config.client import get_gha_enabled
from sweepai.utils.github_utils import get_github_client, get_token
openai.api_key = os.environ.get("OPENAI_API_KEY")
log_message = """GitHub actions yielded the following error.
{error_logs}
This is likely a linting or type-checking issue with the source code. Update the code changed by the PR. Don't modify the existing tests."""
def get_dirs(zipfile: zipfile.ZipFile):
return [file for file in zipfile.namelist() if file.endswith("/") and "/" in file]
def get_files_in_dir(zipfile: zipfile.ZipFile, dir: str):
return [
file
for file in zipfile.namelist()
if file.startswith(dir) and not file.endswith("/")
]
def download_logs(repo_full_name: str, run_id: int, installation_id: int):
token = get_token(installation_id)
headers = {
"Accept": "application/vnd.github+json",
"Authorization": f"Bearer {token}",
"X-GitHub-Api-Version": "2022-11-28",
}
response = requests.get(
f"https://api.github.com/repos/{repo_full_name}/actions/runs/{run_id}/logs",
headers=headers,
)
logs_str = ""
if response.status_code == 200:
# this is the worst code I've ever written. I'm sorry.
content = response.content
zip_file = zipfile.ZipFile(io.BytesIO(content))
for file in zip_file.namelist():
if file.endswith(".txt"):
with zip_file.open(file) as f:
logs = f.read().decode("utf-8")
last_line = logs.splitlines()[-1]
if "##[error]" in last_line:
logs_str += logs
else:
logger.info(response.text)
logger.warning(f"Failed to download logs for run id: {run_id}")
return logs_str
def clean_logs(logs_str: str):
# Extraction process could be better
MAX_LINES = 300
log_list = logs_str.split("\n")
truncated_logs = [log[log.find(" ") + 1 :] for log in log_list]
patterns = [
# for docker
"Already exists",
"Pulling fs layer",
"Waiting",
"Download complete",
"Verifying Checksum",
"Pull complete",
# For github
"remote: Counting objects",
"remote: Compressing objects:",
"Receiving objects:",
"Resolving deltas:",
"[command]/usr/bin/git ",
"Download action repository",
# For python
"Collecting",
"Downloading",
"Installing",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━",
# For prettier
"npm WARN EBADENGINE ",
"npm WARN deprecated ",
"prettier/prettier",
]
cleaned_lines = [
log.strip()
for log in truncated_logs
if not any(log.strip().startswith(pattern) for pattern in patterns)
]
return "\n".join(cleaned_lines[: min(MAX_LINES, len(cleaned_lines))])
def extract_logs_from_comment(comment: str) -> str:
if comment.count("```") < 2:
return ""
return comment[comment.find("```") + 3 : comment.rfind("```")]
def on_check_suite(request: CheckRunCompleted):
logger.info(
f"Received check run completed event for {request.repository.full_name}"
)
_, g = get_github_client(request.installation.id)
repo = g.get_repo(request.repository.full_name)
if not get_gha_enabled(repo):
logger.info(
f"Skipping github action for {request.repository.full_name} because it is"
" not enabled"
)
return None
pr = repo.get_pull(request.check_run.pull_requests[0].number)
num_pr_commits = len(list(pr.get_commits()))
if num_pr_commits > 20:
logger.info(f"Skipping github action for PR with {num_pr_commits} commits")
return None
logger.info(f"Running github action for PR with {num_pr_commits} commits")
logs = download_logs(
request.repository.full_name, request.check_run.run_id, request.installation.id
)
if not logs:
return None
logs = clean_logs(logs)
extractor = GHAExtractor(chat_logger=None)
logger.info(f"Extracting logs from {request.repository.full_name}, logs: {logs}")
problematic_logs = extractor.gha_extract(logs)
if problematic_logs.count("\n") > 15:
problematic_logs += (
"\n\nThere are a lot of errors. This is likely due to a small parsing issue"
" or a missing import with the files changed in the PR."
)
comments = list(pr.get_issue_comments())
# logs_list = [extract_logs_from_comment(comment.body) for comment in comments]
# current_logs = extract_logs_from_comment(problematic_logs)
if all(
[
comment.body.startswith("GitHub actions yielded the following error.")
for comment in comments[-3:]
]
):
comment = pr.as_issue().create_comment(
log_message.format(error_logs=problematic_logs)
+ "\n\nI'm getting the same errors 3 times in a row, so I will stop working"
" on fixing this PR."
)
logger.warning("Skipping logs because it is duplicated")
return {"error": "Duplicate error logs"}
comment = pr.as_issue().create_comment(
log_message.format(error_logs=problematic_logs)
)
on_comment(
repo_full_name=request.repository.full_name,
repo_description=request.repository.description,
comment=problematic_logs,
pr_path=None,
pr_line_position=None,
username=request.sender.login,
installation_id=request.installation.id,
pr_number=request.check_run.pull_requests[0].number,
comment_id=comment.id,
repo=repo,
)
return {"success": True}
| [] |
2024-01-10 | rokanost/sweep | sweepai~handlers~create_pr.py | from typing import Generator
import openai
from github.Repository import Repository
from loguru import logger
from sweepai.core.entities import (
ProposedIssue,
PullRequest,
MockPR,
MaxTokensExceeded,
FileChangeRequest,
)
from sweepai.utils.chat_logger import ChatLogger
from sweepai.config.client import SweepConfig, get_blocked_dirs, UPDATES_MESSAGE
from sweepai.config.server import (
ENV,
GITHUB_DEFAULT_CONFIG,
GITHUB_LABEL_NAME,
MONGODB_URI,
OPENAI_API_KEY,
DB_MODAL_INST_NAME,
GITHUB_BOT_USERNAME,
GITHUB_CONFIG_BRANCH,
)
from sweepai.core.sweep_bot import SweepBot
from sweepai.utils.event_logger import posthog
openai.api_key = OPENAI_API_KEY
num_of_snippets_to_query = 10
max_num_of_snippets = 5
INSTRUCTIONS_FOR_REVIEW = """\
💡 To get Sweep to edit this pull request, you can:
* Leave a comment below to get Sweep to edit the entire PR
* Leave a comment in the code to have Sweep modify only that file
* Edit the original issue to get Sweep to recreate the PR from scratch"""
async def create_pr_changes(
file_change_requests: list[FileChangeRequest],
pull_request: PullRequest,
sweep_bot: SweepBot,
username: str,
installation_id: int,
issue_number: int | None = None,
sandbox=None,
chat_logger: ChatLogger = None,
) -> Generator[tuple[FileChangeRequest, int], None, dict]:
# Flow:
# 1. Get relevant files
# 2: Get human message
# 3. Get files to change
# 4. Get file changes
# 5. Create PR
chat_logger = (
chat_logger
if chat_logger is not None
else ChatLogger(
{
"username": username,
"installation_id": installation_id,
"repo_full_name": sweep_bot.repo.full_name,
"title": pull_request.title,
"summary": "",
"issue_url": "",
}
)
if MONGODB_URI
else None
)
sweep_bot.chat_logger = chat_logger
organization, repo_name = sweep_bot.repo.full_name.split("/")
metadata = {
"repo_full_name": sweep_bot.repo.full_name,
"organization": organization,
"repo_name": repo_name,
"repo_description": sweep_bot.repo.description,
"username": username,
"installation_id": installation_id,
"function": "create_pr",
"mode": ENV,
"issue_number": issue_number,
}
posthog.capture(username, "started", properties=metadata)
try:
logger.info("Making PR...")
pull_request.branch_name = sweep_bot.create_branch(pull_request.branch_name)
completed_count, fcr_count = 0, len(file_change_requests)
blocked_dirs = get_blocked_dirs(sweep_bot.repo)
async for (
file_change_request,
changed_file,
sandbox_error,
) in sweep_bot.change_files_in_github_iterator(
file_change_requests,
pull_request.branch_name,
blocked_dirs,
sandbox=sandbox,
):
completed_count += changed_file
logger.info("Completed {}/{} files".format(completed_count, fcr_count))
yield file_change_request, changed_file, sandbox_error
if completed_count == 0 and fcr_count != 0:
logger.info("No changes made")
posthog.capture(
username,
"failed",
properties={
"error": "No changes made",
"reason": "No changes made",
**metadata,
},
)
# Todo: if no changes were made, delete branch
error_msg = "No changes made"
commits = sweep_bot.repo.get_commits(pull_request.branch_name)
if commits.totalCount == 0:
branch = sweep_bot.repo.get_git_ref(f"heads/{pull_request.branch_name}")
branch.delete()
error_msg = "No changes made. Branch deleted."
return
# Include issue number in PR description
PR_CHECKOUT_COMMAND = f"To checkout this PR branch, run the following command in your terminal:\n```zsh\ngit checkout {pull_request.branch_name}\n```"
if issue_number:
# If the #issue changes, then change on_ticket (f'Fixes #{issue_number}.\n' in pr.body:)
pr_description = (
f"{pull_request.content}\n\nFixes"
f" #{issue_number}.\n\n---\n{PR_CHECKOUT_COMMAND}\n\n---\n\n{UPDATES_MESSAGE}\n\n---\n\n{INSTRUCTIONS_FOR_REVIEW}"
)
else:
pr_description = f"{pull_request.content}\n\n{PR_CHECKOUT_COMMAND}"
pr_title = pull_request.title
if "sweep.yaml" in pr_title:
pr_title = "[config] " + pr_title
except MaxTokensExceeded as e:
logger.error(e)
posthog.capture(
username,
"failed",
properties={
"error": str(e),
"reason": "Max tokens exceeded",
**metadata,
},
)
raise e
except openai.error.InvalidRequestError as e:
logger.error(e)
posthog.capture(
username,
"failed",
properties={
"error": str(e),
"reason": "Invalid request error / context length",
**metadata,
},
)
raise e
except Exception as e:
logger.error(e)
posthog.capture(
username,
"failed",
properties={
"error": str(e),
"reason": "Unexpected error",
**metadata,
},
)
raise e
posthog.capture(username, "success", properties={**metadata})
logger.info("create_pr success")
result = {
"success": True,
"pull_request": MockPR(
file_count=completed_count,
title=pr_title,
body=pr_description,
pr_head=pull_request.branch_name,
base=sweep_bot.repo.get_branch(
SweepConfig.get_branch(sweep_bot.repo)
).commit,
head=sweep_bot.repo.get_branch(pull_request.branch_name).commit,
),
}
yield result  # Doing this because sometimes using StopIteration doesn't work, kinda jank tho tbh
return
def safe_delete_sweep_branch(
pr, # Github PullRequest
repo: Repository,
) -> bool:
"""
Safely delete Sweep branch
1. Only edited by Sweep
2. Prefixed by sweep/
"""
pr_commits = pr.get_commits()
pr_commit_authors = set([commit.author.login for commit in pr_commits])
# Check if only Sweep has edited the PR, and sweep/ prefix
if (
len(pr_commit_authors) == 1
and GITHUB_BOT_USERNAME in pr_commit_authors
and pr.head.ref.startswith("sweep")
):
branch = repo.get_git_ref(f"heads/{pr.head.ref}")
# pr.edit(state='closed')
branch.delete()
return True
else:
# Failed to delete branch as it was edited by someone else
return False
def create_config_pr(
sweep_bot: SweepBot,
):
title = "Configure Sweep"
branch_name = GITHUB_CONFIG_BRANCH
branch_name = sweep_bot.create_branch(branch_name, retry=False)
try:
sweep_bot.repo.create_file(
".github/ISSUE_TEMPLATE/sweep-template.yml",
"Create sweep template",
SWEEP_TEMPLATE,
branch=branch_name,
)
sweep_bot.repo.create_file(
".github/ISSUE_TEMPLATE/sweep-slow-template.yml",
"Create sweep slow template",
SWEEP_SLOW_TEMPLATE,
branch=branch_name,
)
sweep_bot.repo.create_file(
".github/ISSUE_TEMPLATE/sweep-fast-template.yml",
"Create sweep fast template",
SWEEP_FAST_TEMPLATE,
branch=branch_name,
)
except Exception as e:
logger.error(e)
# Check if the pull request from this branch to main already exists.
# If it does, then we don't need to create a new one.
pull_requests = sweep_bot.repo.get_pulls(
state="open",
sort="created",
base=SweepConfig.get_branch(sweep_bot.repo),
head=branch_name,
)
for pr in pull_requests:
if pr.title == title:
return pr
pr = sweep_bot.repo.create_pull(
title=title,
body="""🎉 Thank you for installing Sweep! We're thrilled to announce the latest update for Sweep, your AI junior developer on GitHub. This PR creates a `sweep.yaml` config file, allowing you to personalize Sweep's performance according to your project requirements.
## What's new?
- **Sweep is now configurable**.
- To configure Sweep, simply edit the `sweep.yaml` file in the root of your repository.
- If you need help, check out the [Sweep Default Config](https://github.com/sweepai/sweep/blob/main/sweep.yaml) or [Join Our Discord](https://discord.gg/sweep) for help.
If you would like me to stop creating this PR, go to issues and say "Sweep: create an empty `sweep.yaml` file".
Thank you for using Sweep! 🧹""".replace(
" ", ""
),
head=branch_name,
base=SweepConfig.get_branch(sweep_bot.repo),
)
pr.add_to_labels(GITHUB_LABEL_NAME)
return pr
def create_gha_pr(g, repo):
# Create a new branch
branch_name = "sweep/gha-enable"
branch = repo.create_git_ref(
ref=f"refs/heads/{branch_name}",
sha=repo.get_branch(repo.default_branch).commit.sha,
)
# Update the sweep.yaml file in this branch to add "gha_enabled: True"
sweep_yaml_content = (
repo.get_contents("sweep.yaml", ref=branch_name).decoded_content.decode()
+ "\ngha_enabled: True"
)
repo.update_file(
"sweep.yaml",
"Enable GitHub Actions",
sweep_yaml_content,
repo.get_contents("sweep.yaml", ref=branch_name).sha,
branch=branch_name,
)
# Create a PR from this branch to the main branch
pr = repo.create_pull(
title="Enable GitHub Actions",
body="This PR enables GitHub Actions for this repository.",
head=branch_name,
base=repo.default_branch,
)
return pr
REFACTOR_TEMPLATE = """\
name: Refactor
title: 'Sweep: '
description: Write something like "Modify the ... api endpoint to use ... version and ... framework"
labels: sweep
body:
- type: textarea
id: description
attributes:
label: Details
description: More details for Sweep
placeholder: We are migrating this function to ... version because ...
"""
BUGFIX_TEMPLATE = """\
name: Bugfix
title: 'Sweep: '
description: Write something like "We notice ... behavior when ... happens instead of ...""
labels: sweep
body:
- type: textarea
id: description
attributes:
label: Details
description: More details about the bug
placeholder: The bug might be in ... file
"""
FEATURE_TEMPLATE = """\
name: Feature Request
title: 'Sweep: '
description: Write something like "Write an api endpoint that does "..." in the "..." file"
labels: sweep
body:
- type: textarea
id: description
attributes:
label: Details
description: More details for Sweep
placeholder: The new endpoint should use the ... class from ... file because it contains ... logic
"""
SWEEP_TEMPLATE = """\
name: Sweep Issue
title: 'Sweep: '
description: For small bugs, features, refactors, and tests to be handled by Sweep, an AI-powered junior developer.
labels: sweep
body:
- type: textarea
id: description
attributes:
label: Details
description: Tell Sweep where and what to edit and provide enough context for a new developer to the codebase
placeholder: |
Bugs: The bug might be in ... file. Here are the logs: ...
Features: the new endpoint should use the ... class from ... file because it contains ... logic.
Refactors: We are migrating this function to ... version because ...
"""
SWEEP_SLOW_TEMPLATE = """\
name: Sweep Slow Issue
title: 'Sweep (slow): '
description: For larger bugs, features, refactors, and tests to be handled by Sweep, an AI-powered junior developer. Sweep will perform a deeper search and more self-reviews but will take longer.
labels: sweep
body:
- type: textarea
id: description
attributes:
label: Details
description: Tell Sweep where and what to edit and provide enough context for a new developer to the codebase
placeholder: |
Bugs: The bug might be in ... file. Here are the logs: ...
Features: the new endpoint should use the ... class from ... file because it contains ... logic.
Refactors: We are migrating this function to ... version because ...
"""
SWEEP_FAST_TEMPLATE = """\
name: Sweep Fast Issue
title: 'Sweep (fast): '
description: For few-line fixes to be handled by Sweep, an AI-powered junior developer. Sweep will use GPT-3.5 to quickly create a PR for very small changes.
labels: sweep
body:
- type: textarea
id: description
attributes:
label: Details
description: Tell Sweep where and what to edit and provide enough context for a new developer to the codebase
placeholder: |
Bugs: The bug might be in ... file. Here are the logs: ...
Features: the new endpoint should use the ... class from ... file because it contains ... logic.
Refactors: We are migrating this function to ... version because ...
"""
| [
"name: Sweep Fast Issue\ntitle: 'Sweep (fast): '\ndescription: For few-line fixes to be handled by Sweep, an AI-powered junior developer. Sweep will use GPT-3.5 to quickly create a PR for very small changes.\nlabels: sweep\nbody:\n - type: textarea\n id: description\n attributes:\n label: Details\n description: Tell Sweep where and what to edit and provide enough context for a new developer to the codebase\n placeholder: |\n Bugs: The bug might be in ... file. Here are the logs: ...\n Features: the new endpoint should use the ... class from ... file because it contains ... logic.\n Refactors: We are migrating this function to ... version because ...\n",
"name: Feature Request\ntitle: 'Sweep: '\ndescription: Write something like \"Write an api endpoint that does \"...\" in the \"...\" file\"\nlabels: sweep\nbody:\n - type: textarea\n id: description\n attributes:\n label: Details\n description: More details for Sweep\n placeholder: The new endpoint should use the ... class from ... file because it contains ... logic\n",
"name: Refactor\ntitle: 'Sweep: '\ndescription: Write something like \"Modify the ... api endpoint to use ... version and ... framework\"\nlabels: sweep\nbody:\n - type: textarea\n id: description\n attributes:\n label: Details\n description: More details for Sweep\n placeholder: We are migrating this function to ... version because ...\n",
"name: Sweep Issue\ntitle: 'Sweep: '\ndescription: For small bugs, features, refactors, and tests to be handled by Sweep, an AI-powered junior developer.\nlabels: sweep\nbody:\n - type: textarea\n id: description\n attributes:\n label: Details\n description: Tell Sweep where and what to edit and provide enough context for a new developer to the codebase\n placeholder: |\n Bugs: The bug might be in ... file. Here are the logs: ...\n Features: the new endpoint should use the ... class from ... file because it contains ... logic.\n Refactors: We are migrating this function to ... version because ...\n",
"name: Sweep Slow Issue\ntitle: 'Sweep (slow): '\ndescription: For larger bugs, features, refactors, and tests to be handled by Sweep, an AI-powered junior developer. Sweep will perform a deeper search and more self-reviews but will take longer.\nlabels: sweep\nbody:\n - type: textarea\n id: description\n attributes:\n label: Details\n description: Tell Sweep where and what to edit and provide enough context for a new developer to the codebase\n placeholder: |\n Bugs: The bug might be in ... file. Here are the logs: ...\n Features: the new endpoint should use the ... class from ... file because it contains ... logic.\n Refactors: We are migrating this function to ... version because ...\n",
"name: Bugfix\ntitle: 'Sweep: '\ndescription: Write something like \"We notice ... behavior when ... happens instead of ...\"\"\nlabels: sweep\nbody:\n - type: textarea\n id: description\n attributes:\n label: Details\n description: More details about the bug\n placeholder: The bug might be in ... file\n"
] |
2024-01-10 | rokanost/sweep | sweepai~handlers~on_ticket.py | """
On Github ticket, get ChatGPT to deal with it
"""
# TODO: Add file validation
import math
import re
import traceback
import openai
from github import GithubException
from loguru import logger
from tabulate import tabulate
from tqdm import tqdm
from sweepai.core.context_pruning import ContextPruning
from sweepai.core.documentation_searcher import extract_relevant_docs
from sweepai.core.entities import (
ProposedIssue,
Snippet,
NoFilesException,
SweepContext,
MaxTokensExceeded,
EmptyRepository,
)
from sweepai.core.external_searcher import ExternalSearcher
from sweepai.core.slow_mode_expand import SlowModeBot
from sweepai.core.sweep_bot import SweepBot
from sweepai.core.prompts import issue_comment_prompt
# from sandbox.sandbox_utils import Sandbox
from sweepai.handlers.create_pr import (
create_pr_changes,
create_config_pr,
safe_delete_sweep_branch,
)
from sweepai.handlers.on_comment import on_comment
from sweepai.handlers.on_review import review_pr
from sweepai.utils.chat_logger import ChatLogger, discord_log_error
from sweepai.config.client import (
UPDATES_MESSAGE,
SweepConfig,
get_documentation_dict,
)
from sweepai.config.server import (
ENV,
MONGODB_URI,
OPENAI_API_KEY,
GITHUB_BOT_USERNAME,
GITHUB_LABEL_NAME,
WHITELISTED_REPOS,
)
from sweepai.utils.event_logger import posthog
from sweepai.utils.github_utils import ClonedRepo, get_github_client
from sweepai.utils.prompt_constructor import HumanMessagePrompt
from sweepai.utils.search_utils import search_snippets
openai.api_key = OPENAI_API_KEY
sep = "\n---\n"
bot_suffix_starring = (
"⭐ If you are enjoying Sweep, please [star our"
" repo](https://github.com/sweepai/sweep) so more people can hear about us!"
)
bot_suffix = (
f"\n{sep}\n{UPDATES_MESSAGE}\n{sep} 💡 To recreate the pull request edit the issue"
" title or description."
)
discord_suffix = f"\n<sup>[Join Our Discord](https://discord.com/invite/sweep)"
stars_suffix = (
"⭐ In the meantime, consider [starring our repo](https://github.com/sweepai/sweep)"
" so more people can hear about us!"
)
collapsible_template = """
<details {opened}>
<summary>{summary}</summary>
{body}
</details>
"""
checkbox_template = "- [{check}] `{filename}`\n> {instructions}\n"
num_of_snippets_to_query = 30
total_number_of_snippet_tokens = 15_000
num_full_files = 2
ordinal = lambda n: str(n) + (
"th" if 4 <= n <= 20 else {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
)
def post_process_snippets(
snippets: list[Snippet],
max_num_of_snippets: int = 5,
exclude_snippets: list[str] = [],
):
snippets = [
snippet
for snippet in snippets
if not any(
snippet.file_path.endswith(ext) for ext in SweepConfig().exclude_exts
)
]
snippets = [
snippet
for snippet in snippets
if not any(
snippet.file_path == exclude_file for exclude_file in exclude_snippets
)
]
for snippet in snippets[:num_full_files]:
snippet = snippet.expand()
# snippet fusing
i = 0
while i < len(snippets):
j = i + 1
while j < len(snippets):
if snippets[i] ^ snippets[j]: # this checks for overlap
snippets[i] = snippets[i] | snippets[j] # merging
snippets.pop(j)
else:
j += 1
i += 1
# truncating snippets based on character length
result_snippets = []
total_length = 0
for snippet in snippets:
total_length += len(snippet.get_snippet())
if total_length > total_number_of_snippet_tokens * 5:
break
result_snippets.append(snippet)
return result_snippets[:max_num_of_snippets]
def strip_sweep(text: str):
return (
re.sub(
r"^[Ss]weep\s?(\([Ss]low\))?(\([Mm]ap\))?(\([Ff]ast\))?\s?:", "", text
).lstrip(),
re.search(r"^[Ss]weep\s?\([Ss]low\)", text) is not None,
re.search(r"^[Ss]weep\s?\([Mm]ap\)", text) is not None,
re.search(r"^[Ss]weep\s?\([Ss]ubissues?\)", text) is not None,
re.search(r"^[Ss]weep\s?\([Ss]andbox?\)", text) is not None,
re.search(r"^[Ss]weep\s?\([Ff]ast\)", text) is not None,
)
def test_mode(issue):
sandbox_logs = ""
async def on_ticket(
title: str,
summary: str,
issue_number: int,
issue_url: str,
username: str,
repo_full_name: str,
repo_description: str,
installation_id: int,
comment_id: int = None,
edited: bool = False,
):
(
title,
slow_mode,
do_map,
subissues_mode,
sandbox_mode,
fast_mode,
) = strip_sweep(title)
# Flow:
# 1. Get relevant files
# 2: Get human message
# 3. Get files to change
# 4. Get file changes
# 5. Create PR
summary = summary or ""
summary = re.sub(
"<details (open)?>\n<summary>Checklist</summary>.*",
"",
summary,
flags=re.DOTALL,
).strip()
summary = re.sub("Checklist:\n\n- \[[ X]\].*", "", summary, flags=re.DOTALL).strip()
repo_name = repo_full_name
user_token, g = get_github_client(installation_id)
repo = g.get_repo(repo_full_name)
current_issue = repo.get_issue(number=issue_number)
assignee = current_issue.assignee.login if current_issue.assignee else None
if assignee is None:
assignee = current_issue.user.login
chat_logger = (
ChatLogger(
{
"repo_name": repo_name,
"title": title,
"summary": summary,
"issue_number": issue_number,
"issue_url": issue_url,
"username": username if not username.startswith("sweep") else assignee,
"repo_full_name": repo_full_name,
"repo_description": repo_description,
"installation_id": installation_id,
"type": "ticket",
"mode": ENV,
"comment_id": comment_id,
"edited": edited,
}
)
if MONGODB_URI
else None
)
if chat_logger:
is_paying_user = chat_logger.is_paying_user()
is_trial_user = chat_logger.is_trial_user()
use_faster_model = chat_logger.use_faster_model(g)
else:
is_paying_user = True
is_trial_user = False
use_faster_model = False
if fast_mode:
use_faster_model = True
sweep_context = SweepContext(
username=username,
issue_url=issue_url,
use_faster_model=use_faster_model,
is_paying_user=is_paying_user,
repo=repo,
token=user_token,
)
if not comment_id and not edited and chat_logger:
chat_logger.add_successful_ticket(
gpt3=use_faster_model
) # moving higher, will increment the issue regardless of whether it's a success or not
organization, repo_name = repo_full_name.split("/")
metadata = {
"issue_url": issue_url,
"repo_full_name": repo_full_name,
"organization": organization,
"repo_name": repo_name,
"repo_description": repo_description,
"username": username,
"comment_id": comment_id,
"title": title,
"installation_id": installation_id,
"function": "on_ticket",
"edited": edited,
"model": "gpt-3.5" if use_faster_model else "gpt-4",
"tier": "pro" if is_paying_user else "free",
"mode": ENV,
}
posthog.capture(username, "started", properties=metadata)
logger.info(f"Getting repo {repo_full_name}")
if current_issue.state == "closed":
logger.warning(f"Issue {issue_number} is closed")
posthog.capture(username, "issue_closed", properties=metadata)
return {"success": False, "reason": "Issue is closed"}
current_issue.edit(body=summary)
item_to_react_to = (
current_issue.get_comment(comment_id) if comment_id else current_issue
)
replies_text = ""
comments = list(current_issue.get_comments())
if comment_id:
logger.info(f"Replying to comment {comment_id}...")
replies_text = "\nComments:\n" + "\n".join(
[
issue_comment_prompt.format(
username=comment.user.login,
reply=comment.body,
)
for comment in comments
if comment.user.type == "User"
]
)
summary = summary if summary else ""
prs = repo.get_pulls(
state="open", sort="created", base=SweepConfig.get_branch(repo)
)
for pr in prs:
# Check if this issue is mentioned in the PR, and pr is owned by bot
# This is done in create_pr, (pr_description = ...)
if (
pr.user.login == GITHUB_BOT_USERNAME
and f"Fixes #{issue_number}.\n" in pr.body
):
success = safe_delete_sweep_branch(pr, repo)
eyes_reaction = item_to_react_to.create_reaction("eyes")
# If SWEEP_BOT reacted to item_to_react_to with "rocket", then remove it.
reactions = item_to_react_to.get_reactions()
for reaction in reactions:
if reaction.content == "rocket" and reaction.user.login == GITHUB_BOT_USERNAME:
item_to_react_to.delete_reaction(reaction.id)
progress_headers = [
None,
"Step 1: 🔍 Code Search",
"Step 2: 🧐 Snippet Analysis",
"Step 3: 📝 Planning",
"Step 4: ⌨️ Coding",
"Step 5: 🔁 Code Review",
]
config_pr_url = None
# Find the first comment made by the bot
issue_comment = None
tickets_allocated = 5
if is_trial_user:
tickets_allocated = 15
if is_paying_user:
tickets_allocated = 500
ticket_count = (
max(tickets_allocated - chat_logger.get_ticket_count(), 0)
if chat_logger
else 999
)
daily_ticket_count = (
(2 - chat_logger.get_ticket_count(use_date=True) if not use_faster_model else 0)
if chat_logger
else 999
)
model_name = "GPT-3.5" if use_faster_model else "GPT-4"
payment_link = "https://buy.stripe.com/6oE5npbGVbhC97afZ4"
daily_message = (
f" and {daily_ticket_count} for the day"
if not is_paying_user and not is_trial_user
else ""
)
user_type = "💎 Sweep Pro" if is_paying_user else "⚡ Sweep Free Trial"
gpt_tickets_left_message = (
f"{ticket_count} GPT-4 tickets left for the month"
if not is_paying_user
else "unlimited GPT-4 tickets"
)
payment_message = (
f"{user_type}: I used {model_name} to create this ticket. You have {gpt_tickets_left_message}{daily_message}."
+ (
f" For more GPT-4 tickets, visit [our payment portal.]({payment_link})"
if not is_paying_user
else ""
)
)
payment_message_start = (
f"{user_type}: I'm creating this ticket using {model_name}. You have {gpt_tickets_left_message}{daily_message}."
+ (
f" For more GPT-4 tickets, visit [our payment portal.]({payment_link})"
if not is_paying_user
else ""
)
)
def get_comment_header(index, errored=False, pr_message=""):
config_pr_message = (
"\n" + f"* Install Sweep Configs: [Pull Request]({config_pr_url})"
if config_pr_url is not None
else ""
)
config_pr_message = " To retrigger Sweep, edit the issue.\n" + config_pr_message
if index < 0:
index = 0
if index == 6:
return pr_message + config_pr_message
index *= 100 / len(progress_headers)
index = int(index)
index = min(100, index)
if errored:
return f""
return (
f""
+ ("\n" + stars_suffix if index != -1 else "")
+ "\n"
+ payment_message_start
+ config_pr_message
)
# Find Sweep's previous comment
print("USERNAME", GITHUB_BOT_USERNAME)
for comment in comments:
print("COMMENT", comment.user.login)
if comment.user.login == GITHUB_BOT_USERNAME:
print("Found comment")
issue_comment = comment
break
try:
config = SweepConfig.get_config(repo)
except EmptyRepository as e:
logger.info("Empty repo")
first_comment = (
"Sweep is currently not supported on empty repositories. Please add some"
f" code to your repository and try again.\n{sep}##"
f" {progress_headers[1]}\n{bot_suffix}{discord_suffix}"
)
if issue_comment is None:
issue_comment = current_issue.create_comment(first_comment)
else:
issue_comment.edit(first_comment)
return {"success": False}
cloned_repo = ClonedRepo(
repo_full_name, installation_id=installation_id, token=user_token
)
num_of_files = cloned_repo.get_num_files_from_repo()
time_estimate = math.ceil(3 + 5 * num_of_files / 1000)
indexing_message = (
"I'm searching for relevant snippets in your repository. If this is your first"
" time using Sweep, I'm indexing your repository. This may take up to"
f" {time_estimate} minutes. I'll let you know when I'm done."
)
first_comment = (
f"{get_comment_header(0)}\n{sep}I am currently looking into this ticket!. I"
" will update the progress of the ticket in this comment. I am currently"
f" searching through your code, looking for relevant snippets.\n{sep}##"
f" {progress_headers[1]}\n{indexing_message}{bot_suffix}{discord_suffix}"
)
if issue_comment is None:
issue_comment = current_issue.create_comment(first_comment)
else:
issue_comment.edit(first_comment)
# Comment edit function
past_messages = {}
current_index = 0
# Random variables to save in case of errors
table = None # Show plan so user can finetune prompt
def edit_sweep_comment(message: str, index: int, pr_message=""):
nonlocal current_index
# -1 = error, -2 = retry
# Only update the progress bar if the issue generation errors.
errored = index == -1
if index >= 0:
past_messages[index] = message
current_index = index
agg_message = None
# Include progress history
# index = -2 is reserved for
for i in range(
current_index + 2
): # go to next header (for Working on it... text)
if i == 0 or i >= len(progress_headers):
continue # skip None header
header = progress_headers[i]
if header is not None:
header = "## " + header + "\n"
else:
header = "No header\n"
msg = header + (past_messages.get(i) or "Working on it...")
if agg_message is None:
agg_message = msg
else:
agg_message = agg_message + f"\n{sep}" + msg
suffix = bot_suffix + discord_suffix
if errored:
agg_message = (
"## ❌ Unable to Complete PR"
+ "\n"
+ message
+ "\n\nFor bonus GPT-4 tickets, please report this bug on"
" **[Discord](https://discord.com/invite/sweep-ai)**."
)
if table is not None:
agg_message = (
agg_message
+ f"\n{sep}Please look at the generated plan. If something looks"
f" wrong, please add more details to your issue.\n\n{table}"
)
suffix = bot_suffix # don't include discord suffix for error messages
# Update the issue comment
issue_comment.edit(
f"{get_comment_header(current_index, errored, pr_message)}\n{sep}{agg_message}{suffix}"
)
if False and len(title + summary) < 20:
logger.info("Issue too short")
edit_sweep_comment(
(
"Please add more details to your issue. I need at least 20 characters"
" to generate a plan."
),
-1,
)
return {"success": True}
if (
repo_name.lower() not in WHITELISTED_REPOS
and not is_paying_user
and not is_trial_user
):
if ("sweep" in repo_name.lower()) or ("test" in repo_name.lower()):
logger.info("Test repository detected")
edit_sweep_comment(
(
"Sweep does not work on test repositories. Please create an issue"
" on a real repository. If you think this is a mistake, please"
" report this at https://discord.gg/sweep."
),
-1,
)
return {"success": False}
def log_error(error_type, exception, priority=0):
nonlocal is_paying_user, is_trial_user
if is_paying_user or is_trial_user:
if priority == 1:
priority = 0
elif priority == 2:
priority = 1
prefix = ""
if is_trial_user:
prefix = " (TRIAL)"
if is_paying_user:
prefix = " (PRO)"
content = (
f"**{error_type} Error**{prefix}\n{username}:"
f" {issue_url}\n```{exception}```"
)
discord_log_error(content, priority=priority)
# Clone repo and perform local tests (linters, formatters, GHA)
logger.info("Initializing sandbox...")
sandbox_config = {
"install": "curl https://get.trunk.io -fsSL | bash",
"formatter": "trunk fmt {file}",
"linter": "trunk check {file}",
}
token = user_token
repo_url = cloned_repo.clone_url
# sandbox = Sandbox.from_token(repo, repo_url, sandbox_config)
sandbox = None
logger.info("Fetching relevant files...")
try:
snippets, tree = search_snippets(
# repo,
cloned_repo,
f"{title}\n{summary}\n{replies_text}",
num_files=num_of_snippets_to_query,
)
assert len(snippets) > 0
except Exception as e:
trace = traceback.format_exc()
logger.error(e)
logger.error(trace)
edit_sweep_comment(
(
"It looks like an issue has occurred around fetching the files."
" Perhaps the repo has not been initialized. If this error persists"
f" contact [email protected].\n\n> @{username}, please edit the issue"
" description to include more details and I will automatically"
" relaunch."
),
-1,
)
log_error("File Fetch", str(e) + "\n" + traceback.format_exc(), priority=1)
raise e
snippets = post_process_snippets(
snippets, max_num_of_snippets=2 if use_faster_model else 5
)
if not repo_description:
repo_description = "No description provided."
message_summary = summary + replies_text
external_results = await ExternalSearcher.extract_summaries(message_summary)
if external_results:
message_summary += "\n\n" + external_results
user_dict = get_documentation_dict(repo)
docs_results = ""
try:
docs_results = await extract_relevant_docs(
title + message_summary, user_dict, chat_logger
)
if docs_results:
message_summary += "\n\n" + docs_results
except Exception as e:
logger.error(f"Failed to extract docs: {e}")
human_message = HumanMessagePrompt(
repo_name=repo_name,
issue_url=issue_url,
username=username,
repo_description=repo_description.strip(),
title=title,
summary=message_summary,
snippets=snippets,
tree=tree,
)
additional_plan = None
slow_mode_bot = SlowModeBot(chat_logger=chat_logger) # can be async'd
queries, additional_plan = await slow_mode_bot.expand_plan(human_message)
snippets, tree = search_snippets(
cloned_repo,
# repo,
f"{title}\n{summary}\n{replies_text}",
num_files=num_of_snippets_to_query,
multi_query=queries,
)
snippets = post_process_snippets(snippets, max_num_of_snippets=5)
# TODO: refactor this
human_message = HumanMessagePrompt(
repo_name=repo_name,
issue_url=issue_url,
username=username,
repo_description=repo_description,
title=title,
summary=message_summary + additional_plan,
snippets=snippets,
tree=tree,
)
try:
context_pruning = ContextPruning(chat_logger=chat_logger)
snippets_to_ignore, directories_to_ignore = await context_pruning.prune_context(
human_message, repo=repo
)
snippets, tree = search_snippets(
# repo,
cloned_repo,
f"{title}\n{summary}\n{replies_text}",
num_files=num_of_snippets_to_query,
# branch=None,
# installation_id=installation_id,
excluded_directories=directories_to_ignore, # handles the tree
)
snippets = post_process_snippets(
snippets, max_num_of_snippets=5, exclude_snippets=snippets_to_ignore
)
logger.info(f"New snippets: {snippets}")
logger.info(f"New tree: {tree}")
if not use_faster_model and additional_plan is not None:
message_summary += additional_plan
human_message = HumanMessagePrompt(
repo_name=repo_name,
issue_url=issue_url,
username=username,
repo_description=repo_description,
title=title,
summary=message_summary,
snippets=snippets,
tree=tree,
)
except Exception as e:
logger.error(f"Failed to prune context: {e}")
sweep_bot = SweepBot.from_system_message_content(
human_message=human_message,
repo=repo,
is_reply=bool(comments),
chat_logger=chat_logger,
sweep_context=sweep_context,
)
# Check repository for sweep.yml file.
sweep_yml_exists = False
for content_file in repo.get_contents(""):
if content_file.name == "sweep.yaml":
sweep_yml_exists = True
break
# If sweep.yaml does not exist, then create a new PR that simply creates the sweep.yaml file.
if not sweep_yml_exists:
try:
logger.info("Creating sweep.yaml file...")
config_pr = create_config_pr(sweep_bot)
config_pr_url = config_pr.html_url
edit_sweep_comment(message="", index=-2)
except Exception as e:
logger.error(
"Failed to create new branch for sweep.yaml file.\n",
e,
traceback.format_exc(),
)
else:
logger.info("sweep.yaml file already exists.")
try:
# ANALYZE SNIPPETS
logger.info("Did not execute CoT retrieval...")
newline = "\n"
edit_sweep_comment(
"I found the following snippets in your repository. I will now analyze"
" these snippets and come up with a plan."
+ "\n\n"
+ collapsible_template.format(
summary=(
"Some code snippets I looked at (click to expand). If some file is"
" missing from here, you can mention the path in the ticket"
" description."
),
body="\n".join(
[
f"https://github.com/{organization}/{repo_name}/blob/{repo.get_commits()[0].sha}/{snippet.file_path}#L{max(snippet.start, 1)}-L{min(snippet.end, snippet.content.count(newline) - 1)}\n"
for snippet in snippets
]
),
opened="",
)
+ (
"I also found the following external resources that might be"
f" helpful:\n\n{external_results}\n\n"
if external_results
else ""
)
+ (f"\n\n{docs_results}\n\n" if docs_results else ""),
1,
)
if do_map:
subissues: list[ProposedIssue] = await sweep_bot.generate_subissues()
edit_sweep_comment(
f"I'm creating the following subissues:\n\n"
+ "\n\n".join(
[
f"#{subissue.title}:\n> " + subissue.body.replace("\n", "\n> ")
for subissue in subissues
]
),
3,
)
for subissue in tqdm(subissues):
subissue.issue_id = repo.create_issue(
title="Sweep: " + subissue.title,
body=subissue.body + f"\n\nParent issue: #{issue_number}",
assignee=username,
).number
subissues_checklist = "\n\n".join(
[
f"- [ ] #{subissue.issue_id}\n\n> "
+ f"**{subissue.title}**\n{subissue.body}".replace("\n", "\n> ")
for subissue in subissues
]
)
current_issue.edit(
body=summary + "\n\n---\n\nChecklist:\n\n" + subissues_checklist
)
edit_sweep_comment(
f"I finished creating the subissues! Track them at:\n\n"
+ "\n".join(f"* #{subissue.issue_id}" for subissue in subissues),
4,
)
edit_sweep_comment(f"N/A", 5)
edit_sweep_comment(f"I finished creating all the subissues.", 6)
return {"success": True}
# COMMENT ON ISSUE
# TODO: removed issue commenting here
logger.info("Fetching files to modify/create...")
file_change_requests, plan = await sweep_bot.get_files_to_change()
if not file_change_requests:
if len(title + summary) < 60:
edit_sweep_comment(
(
"Sorry, I could not find any files to modify, can you please"
" provide more details? Please make sure that the title and"
" summary of the issue are at least 60 characters."
),
-1,
)
else:
edit_sweep_comment(
(
"Sorry, I could not find any files to modify, can you please"
" provide more details?"
),
-1,
)
raise Exception("No files to modify.")
await sweep_bot.summarize_snippets()
file_change_requests = sweep_bot.validate_file_change_requests(
file_change_requests
)
table = tabulate(
[
[
f"`{file_change_request.filename}`",
file_change_request.instructions_display.replace(
"\n", "<br/>"
).replace("```", "\\```"),
]
for file_change_request in file_change_requests
],
headers=["File Path", "Proposed Changes"],
tablefmt="pipe",
)
edit_sweep_comment(
"From looking through the relevant snippets, I decided to make the"
" following modifications:\n\n" + table + "\n\n",
2,
)
# TODO(lukejagg): Generate PR after modifications are made
# CREATE PR METADATA
logger.info("Generating PR...")
pull_request = await sweep_bot.generate_pull_request()
pull_request_content = pull_request.content.strip().replace("\n", "\n>")
pull_request_summary = f"**{pull_request.title}**\n`{pull_request.branch_name}`\n>{pull_request_content}\n"
edit_sweep_comment(
(
"I have created a plan for writing the pull request. I am now working"
" my plan and coding the required changes to address this issue. Here"
f" is the planned pull request:\n\n{pull_request_summary}"
),
3,
)
logger.info("Making PR...")
files_progress = [
(
file_change_request.filename,
file_change_request.instructions_display,
"⏳ In Progress",
"``` ```",
)
for file_change_request in file_change_requests
]
checkboxes_progress = [
(file_change_request.filename, file_change_request.instructions, " ")
for file_change_request in file_change_requests
]
checkboxes_message = collapsible_template.format(
summary="Checklist",
body="\n".join(
[
checkbox_template.format(
check=check,
filename=filename,
instructions=instructions.replace("\n", "\n> "),
)
for filename, instructions, check in checkboxes_progress
]
),
opened="open",
)
issue = repo.get_issue(number=issue_number)
issue.edit(body=summary + "\n\n" + checkboxes_message)
delete_branch = False
generator = create_pr_changes( # make this async later
file_change_requests,
pull_request,
sweep_bot,
username,
installation_id,
issue_number,
sandbox=sandbox,
chat_logger=chat_logger,
)
table_message = tabulate(
[
(
f"`{filename}`",
instructions.replace("\n", "<br/>"),
progress,
error_logs,
)
for filename, instructions, progress, error_logs in files_progress
],
headers=["File", "Instructions", "Progress", "Error logs"],
tablefmt="pipe",
)
logger.info(files_progress)
edit_sweep_comment(table_message, 4)
response = {"error": NoFilesException()}
async for item in generator:
if isinstance(item, dict):
response = item
break
file_change_request, changed_file, sandbox_error = item
if changed_file:
commit_hash = repo.get_branch(pull_request.branch_name).commit.sha
commit_url = f"https://github.com/{repo_full_name}/commit/{commit_hash}"
files_progress = [
(
file,
instructions,
f"✅ Commit [`{commit_hash[:7]}`]({commit_url})",
(
"```"
+ sandbox_error.stdout
+ "\n\n"
+ sandbox_error.stderr
+ "```"
)
if sandbox_error
else "No errors.",
)
if file_change_request.filename == file
else (file, instructions, progress, error_log)
for file, instructions, progress, error_log in files_progress
]
checkboxes_progress = [
(file, instructions, "X")
if file_change_request.filename == file
else (file, instructions, progress)
for file, instructions, progress in checkboxes_progress
]
checkboxes_message = collapsible_template.format(
summary="Checklist",
body="\n".join(
[
checkbox_template.format(
check=check,
filename=filename,
instructions=instructions.replace("\n", "\n> "),
)
for filename, instructions, check in checkboxes_progress
]
),
opened="open",
)
issue = repo.get_issue(number=issue_number)
issue.edit(body=summary + "\n\n" + checkboxes_message)
else:
files_progress = [
(file, instructions, "❌ Failed", error_log)
if file_change_request.filename == file
else (file, instructions, progress, error_log)
for file, instructions, progress, error_log in files_progress
]
logger.info(files_progress)
logger.info(f"Edited {file_change_request.filename}")
table_message = tabulate(
[
(
f"`{filename}`",
instructions.replace("\n", "<br/>"),
progress,
error_log,
)
for filename, instructions, progress, error_log in files_progress
],
headers=["File", "Instructions", "Progress", "Error logs"],
tablefmt="pipe",
)
edit_sweep_comment(table_message, 4)
if not response.get("success"):
raise Exception(f"Failed to create PR: {response.get('error')}")
pr_changes = response["pull_request"]
edit_sweep_comment(
table_message
+ "I have finished coding the issue. I am now reviewing it for"
" completeness.",
4,
)
review_message = (
"Here are my self-reviews of my changes at"
f" [`{pr_changes.pr_head}`](https://github.com/{repo_full_name}/commits/{pr_changes.pr_head}).\n\n"
)
lint_output = None
try:
current_issue.delete_reaction(eyes_reaction.id)
except:
pass
try:
# Todo(lukejagg): Pass sandbox linter results to review_pr
# CODE REVIEW
changes_required, review_comment = review_pr(
repo=repo,
pr=pr_changes,
issue_url=issue_url,
username=username,
repo_description=repo_description,
title=title,
summary=summary,
replies_text=replies_text,
tree=tree,
lint_output=lint_output,
chat_logger=chat_logger,
)
# Todo(lukejagg): Execute sandbox after each iteration
lint_output = None
review_message += (
f"Here is the {ordinal(1)} review\n> "
+ review_comment.replace("\n", "\n> ")
+ "\n\n"
)
edit_sweep_comment(
review_message + "\n\nI'm currently addressing these suggestions.",
5,
)
logger.info(f"Addressing review comment {review_comment}")
if changes_required:
on_comment(
repo_full_name=repo_full_name,
repo_description=repo_description,
comment=review_comment,
username=username,
installation_id=installation_id,
pr_path=None,
pr_line_position=None,
pr_number=None,
pr=pr_changes,
chat_logger=chat_logger,
)
except Exception as e:
logger.error(traceback.format_exc())
logger.error(e)
edit_sweep_comment(
review_message + "\n\nI finished incorporating these changes.", 5
)
is_draft = config.get("draft", False)
try:
pr = repo.create_pull(
title=pr_changes.title,
body=pr_changes.body,
head=pr_changes.pr_head,
base=SweepConfig.get_branch(repo),
draft=is_draft,
)
except GithubException as e:
is_draft = False
pr = repo.create_pull(
title=pr_changes.title,
body=pr_changes.body,
head=pr_changes.pr_head,
base=SweepConfig.get_branch(repo),
draft=is_draft,
)
# Get the branch (SweepConfig.get_branch(repo))'s sha
sha = repo.get_branch(SweepConfig.get_branch(repo)).commit.sha
pr.add_to_labels(GITHUB_LABEL_NAME)
current_issue.create_reaction("rocket")
logger.info("Running github actions...")
try:
if is_draft:
logger.info("Skipping github actions because PR is a draft")
else:
commit = pr.get_commits().reversed[0]
check_runs = commit.get_check_runs()
for check_run in check_runs:
check_run.rerequest()
except Exception as e:
logger.error(e)
# Close sandbox
# try:
# if sandbox is not None:
# await asyncio.wait_for(sandbox.close(), timeout=10)
# logger.info("Closed e2b sandbox")
# except Exception as e:
# logger.error(e)
# logger.info("Failed to close e2b sandbox")
# Completed code review
edit_sweep_comment(
review_message + "\n\nSuccess! 🚀",
6,
pr_message=(
f"## Here's the PR! [{pr.html_url}]({pr.html_url}).\n{payment_message}"
),
)
logger.info("Add successful ticket to counter")
except MaxTokensExceeded as e:
logger.info("Max tokens exceeded")
log_error(
"Max Tokens Exceeded",
str(e) + "\n" + traceback.format_exc(),
priority=2,
)
if chat_logger.is_paying_user():
edit_sweep_comment(
(
f"Sorry, I could not edit `{e.filename}` as this file is too long."
" We are currently working on improved file streaming to address"
" this issue.\n"
),
-1,
)
else:
edit_sweep_comment(
(
f"Sorry, I could not edit `{e.filename}` as this file is too"
" long.\n\nIf this file is incorrect, please describe the desired"
" file in the prompt. However, if you would like to edit longer"
" files, consider upgrading to [Sweep Pro](https://sweep.dev/) for"
" longer context lengths.\n"
),
-1,
)
delete_branch = True
raise e
except NoFilesException as e:
logger.info("Sweep could not find files to modify")
log_error(
"Sweep could not find files to modify",
str(e) + "\n" + traceback.format_exc(),
priority=2,
)
edit_sweep_comment(
(
"Sorry, Sweep could not find any appropriate files to edit to address"
" this issue. If this is a mistake, please provide more context and I"
f" will retry!\n\n> @{username}, please edit the issue description to"
" include more details about this issue."
),
-1,
)
delete_branch = True
raise e
except openai.error.InvalidRequestError as e:
logger.error(traceback.format_exc())
logger.error(e)
edit_sweep_comment(
(
"I'm sorry, but it looks our model has ran out of context length. We're"
" trying to make this happen less, but one way to mitigate this is to"
" code smaller files. If this error persists report it at"
" https://discord.gg/sweep."
),
-1,
)
log_error(
"Context Length",
str(e) + "\n" + traceback.format_exc(),
priority=2,
)
posthog.capture(
username,
"failed",
properties={
"error": str(e),
"reason": "Invalid request error / context length",
**metadata,
},
)
delete_branch = True
raise e
except Exception as e:
logger.error(traceback.format_exc())
logger.error(e)
# title and summary are defined elsewhere
if len(title + summary) < 60:
edit_sweep_comment(
(
"I'm sorry, but it looks like an error has occurred due to"
" insufficient information. Be sure to create a more detailed issue"
" so I can better address it. If this error persists report it at"
" https://discord.gg/sweep."
),
-1,
)
else:
edit_sweep_comment(
(
"I'm sorry, but it looks like an error has occurred. Try changing"
" the issue description to re-trigger Sweep. If this error persists"
" contact [email protected]."
),
-1,
)
log_error("Workflow", str(e) + "\n" + traceback.format_exc(), priority=1)
posthog.capture(
username,
"failed",
properties={"error": str(e), "reason": "Generic error", **metadata},
)
raise e
else:
try:
item_to_react_to.delete_reaction(eyes_reaction.id)
item_to_react_to.create_reaction("rocket")
except Exception as e:
logger.error(e)
finally:
cloned_repo.delete()
if delete_branch:
try:
if pull_request.branch_name.startswith("sweep"):
repo.get_git_ref(f"heads/{pull_request.branch_name}").delete()
else:
raise Exception(
f"Branch name {pull_request.branch_name} does not start with sweep/"
)
except Exception as e:
logger.error(e)
logger.error(traceback.format_exc())
print("Deleted branch", pull_request.branch_name)
posthog.capture(username, "success", properties={**metadata})
logger.info("on_ticket success")
return {"success": True}
| [
"\n<details {opened}>\n<summary>{summary}</summary>\n\n{body}\n</details>\n",
"- [{check}] `{filename}`\n> {instructions}\n"
] |
2024-01-10 | rokanost/sweep | sweepai~core~chat.py | import json
from copy import deepcopy
import time
from typing import Iterator, Literal, Self
import anthropic
import backoff
import openai
from loguru import logger
from pydantic import BaseModel
from sweepai.utils.utils import Tiktoken
from sweepai.core.entities import Message, Function, SweepContext
from sweepai.core.prompts import system_message_prompt, repo_description_prefix_prompt
from sweepai.utils.chat_logger import ChatLogger
from sweepai.config.client import get_description
from sweepai.config.server import (
UTILS_MODAL_INST_NAME,
ANTHROPIC_API_KEY,
OPENAI_DO_HAVE_32K_MODEL_ACCESS,
)
from sweepai.utils.prompt_constructor import HumanMessagePrompt
from sweepai.utils.event_logger import posthog
# TODO: combine anthropic and openai
AnthropicModel = (
Literal["claude-v1"]
| Literal["claude-v1.3-100k"]
| Literal["claude-instant-v1.1-100k"]
)
OpenAIModel = (
Literal["gpt-3.5-turbo"]
| Literal["gpt-4"]
| Literal["gpt-4-0613"]
| Literal["gpt-3.5-turbo-16k-0613"]
| Literal["gpt-4-32k"]
| Literal["gpt-4-32k-0613"]
)
ChatModel = OpenAIModel | AnthropicModel
model_to_max_tokens = {
"gpt-3.5-turbo": 4096,
"gpt-4": 8192,
"gpt-4-0613": 8192,
"claude-v1": 9000,
"claude-v1.3-100k": 100000,
"claude-instant-v1.3-100k": 100000,
"gpt-3.5-turbo-16k-0613": 16000,
"gpt-4-32k-0613": 32000,
"gpt-4-32k": 32000,
}
temperature = 0.0 # Lowered to 0 for mostly deterministic results for reproducibility
count_tokens = Tiktoken().count
def format_for_anthropic(messages: list[Message]) -> str:
if len(messages) > 1:
new_messages: list[Message] = [
Message(
role="system", content=messages[0].content + "\n" + messages[1].content
)
]
messages = messages[2:] if len(messages) >= 3 else []
else:
new_messages: list[Message] = []
for message in messages:
new_messages.append(message)
return "\n".join(
f"{anthropic.HUMAN_PROMPT if message.role != 'assistant' else anthropic.AI_PROMPT} {message.content}"
for message in new_messages
) + (anthropic.AI_PROMPT if new_messages[-1].role != "assistant" else "")
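# Illustrative sketch (not part of the original module): a tiny, never-called
# helper showing how format_for_anthropic flattens a short message list into
# the legacy HUMAN_PROMPT/AI_PROMPT completion format. The message contents
# below are made up for demonstration purposes.
def _example_format_for_anthropic() -> str:
    example_messages = [
        Message(role="system", content="You are a helpful assistant."),
        Message(role="user", content="Summarize this repository."),
    ]
    # With two messages, the system and first user contents are merged into a
    # single "Human" turn and an empty "Assistant" turn is appended at the end.
    return format_for_anthropic(example_messages)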
class ChatGPT(BaseModel):
messages: list[Message] = [
Message(
role="system",
content=system_message_prompt,
)
]
prev_message_states: list[list[Message]] = []
model: ChatModel = (
"gpt-4-32k-0613" if OPENAI_DO_HAVE_32K_MODEL_ACCESS else "gpt-4-0613"
)
chat_logger: ChatLogger | None
human_message: HumanMessagePrompt | None = None
file_change_paths = []
sweep_context: SweepContext | None = None
@classmethod
def from_system_message_content(
cls,
human_message: HumanMessagePrompt,
is_reply: bool = False,
chat_logger=None,
sweep_context=None,
**kwargs,
) -> Self:
content = system_message_prompt
repo = kwargs.get("repo")
if repo:
logger.info(f"Repo: {repo}")
repo_description = get_description(repo)
if repo_description:
logger.info(f"Repo description: {repo_description}")
content += f"{repo_description_prefix_prompt}\n{repo_description}"
messages = [Message(role="system", content=content, key="system")]
added_messages = human_message.construct_prompt() # [ { role, content }, ... ]
for msg in added_messages:
messages.append(Message(**msg))
return cls(
messages=messages,
human_message=human_message,
chat_logger=chat_logger,
sweep_context=sweep_context,
**kwargs,
)
@classmethod
def from_system_message_string(
cls, prompt_string, chat_logger: ChatLogger, **kwargs
) -> Self:
return cls(
messages=[Message(role="system", content=prompt_string, key="system")],
chat_logger=chat_logger,
**kwargs,
)
def select_message_from_message_key(
self, message_key: str, message_role: str = None
):
if message_role:
return [
message
for message in self.messages
if message.key == message_key and message.role == message_role
][0]
return [message for message in self.messages if message.key == message_key][0]
def delete_messages_from_chat(
self, key_to_delete: str, delete_user=True, delete_assistant=True
):
self.messages = [
message
for message in self.messages
if not (
key_to_delete in (message.key or "")
and (
delete_user
and message.role == "user"
or delete_assistant
and message.role == "assistant"
)
) # Only delete if message matches key to delete and role should be deleted
]
def delete_file_from_system_message(self, file_path: str):
self.human_message.delete_file(file_path)
def get_message_content_from_message_key(
self, message_key: str, message_role: str = None
):
return self.select_message_from_message_key(
message_key, message_role=message_role
).content
def update_message_content_from_message_key(
self, message_key: str, new_content: str, message_role: str = None
):
self.select_message_from_message_key(
message_key, message_role=message_role
).content = new_content
def chat(
self,
content: str,
model: ChatModel | None = None,
message_key: str | None = None,
functions: list[Function] = [],
function_name: dict | None = None,
):
if self.messages[-1].function_call is None:
self.messages.append(Message(role="user", content=content, key=message_key))
else:
name = self.messages[-1].function_call["name"]
self.messages.append(
Message(role="function", content=content, key=message_key, name=name)
)
model = model or self.model
is_function_call = False
if model in [args.__args__[0] for args in OpenAIModel.__args__]:
# might be a bug here in all of this
if len(functions) > 0:
response = self.call_openai(
model=model, functions=functions, function_name=function_name
)
response, is_function_call = response
if is_function_call:
self.messages.append(
Message(
role="assistant",
content=None,
function_call=response,
key=message_key,
)
)
self.prev_message_states.append(self.messages)
return self.messages[-1].function_call
else:
self.messages.append(
Message(role="assistant", content=response, key=message_key)
)
else:
response = self.call_openai(model=model, functions=functions)
self.messages.append(
Message(role="assistant", content=response, key=message_key)
)
else:
response = self.call_anthropic(model=model)
self.messages.append(
Message(role="assistant", content=response, key=message_key)
)
self.prev_message_states.append(self.messages)
return self.messages[-1].content
def call_openai(
self,
model: ChatModel | None = None,
functions: list[Function] = [],
function_name: dict | None = None,
):
if self.chat_logger is not None:
tickets_allocated = 120 if self.chat_logger.is_paying_user() else 5
tickets_count = self.chat_logger.get_ticket_count()
if tickets_count < tickets_allocated:
model = model or self.model
logger.warning(
f"{tickets_count} tickets found in MongoDB, using {model}"
)
else:
model = "gpt-3.5-turbo-16k-0613"
count_tokens = Tiktoken().count
messages_length = sum(
[count_tokens(message.content or "") for message in self.messages]
)
max_tokens = (
model_to_max_tokens[model] - int(messages_length) - 400
) # this is for the function tokens
# TODO: Add a check to see if the message is too long
logger.info("file_change_paths" + str(self.file_change_paths))
if len(self.file_change_paths) > 0:
self.file_change_paths.remove(self.file_change_paths[0])
if max_tokens < 0:
if len(self.file_change_paths) > 0:
pass
else:
logger.error(f"Input to OpenAI:\n{self.messages_dicts}")
raise ValueError(f"Message is too long, max tokens is {max_tokens}")
messages_raw = "\n".join([(message.content or "") for message in self.messages])
logger.info(f"Input to call openai:\n{messages_raw}")
messages_dicts = [self.messages_dicts[0]]
        # Merge consecutive same-role messages into the seeded list
        for message_dict in self.messages_dicts[1:]:
            if message_dict["role"] == messages_dicts[-1]["role"]:
                messages_dicts[-1]["content"] += "\n" + message_dict["content"]
            else:
                messages_dicts.append(message_dict)
gpt_4_buffer = 800
if int(messages_length) + gpt_4_buffer < 6000 and model == "gpt-4-32k-0613":
model = "gpt-4-0613"
max_tokens = (
model_to_max_tokens[model] - int(messages_length) - gpt_4_buffer
) # this is for the function tokens
if "gpt-4" in model:
max_tokens = min(max_tokens, 5000)
logger.info(f"Using the model {model}, with {max_tokens} tokens remaining")
global retry_counter
retry_counter = 0
if functions:
@backoff.on_exception(
backoff.expo,
Exception,
max_tries=5,
jitter=backoff.random_jitter,
)
def fetch():
global retry_counter
retry_counter += 1
token_sub = retry_counter * 200
try:
output = None
if function_name:
output = (
openai.ChatCompletion.create(
model=model,
messages=messages_dicts,
max_tokens=max_tokens - token_sub,
temperature=temperature,
functions=[
json.loads(function.json())
for function in functions
],
function_call=function_name,
)
.choices[0]
.message
)
else:
output = (
openai.ChatCompletion.create(
model=model,
messages=messages_dicts,
max_tokens=max_tokens - token_sub,
temperature=temperature,
functions=[
json.loads(function.json())
for function in functions
],
)
.choices[0]
.message
)
if self.chat_logger is not None:
self.chat_logger.add_chat(
{
"model": model,
"messages": self.messages_dicts,
"max_tokens": max_tokens - token_sub,
"temperature": temperature,
"functions": [
json.loads(function.json())
for function in functions
],
"function_call": function_name,
"output": output,
}
)
if self.chat_logger:
try:
token_count = count_tokens(output)
posthog.capture(
self.chat_logger.data.get("username"),
"call_openai",
{
"model": model,
"max_tokens": max_tokens - token_sub,
"input_tokens": messages_length,
"output_tokens": token_count,
"repo_full_name": self.chat_logger.data.get(
"repo_full_name"
),
"username": self.chat_logger.data.get("username"),
"pr_number": self.chat_logger.data.get("pr_number"),
"issue_url": self.chat_logger.data.get("issue_url"),
},
)
except Exception as e:
logger.warning(e)
return output
except Exception as e:
logger.warning(e)
raise e
result = fetch()
if "function_call" in result:
result = dict(result["function_call"]), True
else:
result = result["content"], False
logger.info(f"Output to call openai:\n{result}")
return result
else:
@backoff.on_exception(
backoff.expo,
Exception,
max_tries=5,
jitter=backoff.random_jitter,
)
def fetch():
global retry_counter
retry_counter += 1
token_sub = retry_counter * 200
try:
output = (
openai.ChatCompletion.create(
model=model,
messages=self.messages_dicts,
max_tokens=max_tokens - token_sub,
temperature=temperature,
)
.choices[0]
.message["content"]
)
if self.chat_logger is not None:
self.chat_logger.add_chat(
{
"model": model,
"messages": self.messages_dicts,
"max_tokens": max_tokens - token_sub,
"temperature": temperature,
"output": output,
}
)
if self.chat_logger:
try:
token_count = count_tokens(output)
posthog.capture(
self.chat_logger.data.get("username"),
"call_openai",
{
"model": model,
"max_tokens": max_tokens - token_sub,
"input_tokens": messages_length,
"output_tokens": token_count,
"repo_full_name": self.chat_logger.data.get(
"repo_full_name"
),
"username": self.chat_logger.data.get("username"),
"pr_number": self.chat_logger.data.get("pr_number"),
"issue_url": self.chat_logger.data.get("issue_url"),
},
)
except Exception as e:
logger.warning(e)
return output
except Exception as e:
logger.warning(e)
raise e
result = fetch()
logger.info(f"Output to call openai:\n{result}")
return result
async def achat(
self,
content: str,
model: ChatModel | None = None,
message_key: str | None = None,
):
self.messages.append(Message(role="user", content=content, key=message_key))
model = model or self.model
response = await self.acall_openai(model=model)
self.messages.append(
Message(role="assistant", content=response, key=message_key)
)
self.prev_message_states.append(self.messages)
return self.messages[-1].content
async def acall_openai(
self,
model: ChatModel | None = None,
):
if self.chat_logger is not None:
tickets_allocated = 120 if self.chat_logger.is_paying_user() else 5
tickets_count = self.chat_logger.get_ticket_count()
if tickets_count < tickets_allocated:
model = model or self.model
logger.warning(
f"{tickets_count} tickets found in MongoDB, using {model}"
)
else:
model = "gpt-3.5-turbo-16k-0613"
count_tokens = Tiktoken().count
messages_length = sum(
[count_tokens(message.content or "") for message in self.messages]
)
max_tokens = (
model_to_max_tokens[model] - int(messages_length) - 400
) # this is for the function tokens
# TODO: Add a check to see if the message is too long
logger.info("file_change_paths" + str(self.file_change_paths))
if len(self.file_change_paths) > 0:
self.file_change_paths.remove(self.file_change_paths[0])
if max_tokens < 0:
if len(self.file_change_paths) > 0:
pass
else:
logger.error(f"Input to OpenAI:\n{self.messages_dicts}")
raise ValueError(f"Message is too long, max tokens is {max_tokens}")
messages_raw = "\n".join([(message.content or "") for message in self.messages])
logger.info(f"Input to call openai:\n{messages_raw}")
messages_dicts = [self.messages_dicts[0]]
        # Merge consecutive same-role messages into the seeded list
        for message_dict in self.messages_dicts[1:]:
            if message_dict["role"] == messages_dicts[-1]["role"]:
                messages_dicts[-1]["content"] += "\n" + message_dict["content"]
            else:
                messages_dicts.append(message_dict)
gpt_4_buffer = 800
if int(messages_length) + gpt_4_buffer < 6000 and model == "gpt-4-32k-0613":
model = "gpt-4-0613"
max_tokens = (
model_to_max_tokens[model] - int(messages_length) - gpt_4_buffer
) # this is for the function tokens
if "gpt-4" in model:
max_tokens = min(max_tokens, 5000)
logger.info(f"Using the model {model}, with {max_tokens} tokens remaining")
global retry_counter
retry_counter = 0
async def fetch():
for time_to_sleep in [10, 10, 20, 30, 60]:
global retry_counter
retry_counter += 1
token_sub = retry_counter * 200
try:
output = (
(
await openai.ChatCompletion.acreate(
model=model,
messages=self.messages_dicts,
max_tokens=max_tokens - token_sub,
temperature=temperature,
)
)
.choices[0]
.message["content"]
)
if self.chat_logger is not None:
self.chat_logger.add_chat(
{
"model": model,
"messages": self.messages_dicts,
"max_tokens": max_tokens - token_sub,
"temperature": temperature,
"output": output,
}
)
if self.chat_logger:
try:
token_count = count_tokens(output)
posthog.capture(
self.chat_logger.data.get("username"),
"call_openai",
{
"model": model,
"max_tokens": max_tokens - token_sub,
"input_tokens": messages_length,
"output_tokens": token_count,
"repo_full_name": self.chat_logger.data.get(
"repo_full_name"
),
"username": self.chat_logger.data.get("username"),
"pr_number": self.chat_logger.data.get("pr_number"),
"issue_url": self.chat_logger.data.get("issue_url"),
},
)
except Exception as e:
logger.warning(e)
return output
except Exception as e:
logger.warning(e)
time.sleep(time_to_sleep + backoff.random_jitter(5))
result = await fetch()
logger.info(f"Output to call openai:\n{result}")
return result
def call_anthropic(self, model: ChatModel | None = None) -> str:
if model is None:
model = self.model
messages_length = sum(
[int(count_tokens(message.content) * 1.1) for message in self.messages]
)
max_tokens = model_to_max_tokens[model] - int(messages_length) - 1000
logger.info(f"Number of tokens: {max_tokens}")
messages_raw = format_for_anthropic(self.messages)
logger.info(f"Input to call anthropic:\n{messages_raw}")
assert ANTHROPIC_API_KEY is not None
client = anthropic.Client(api_key=ANTHROPIC_API_KEY)
@backoff.on_exception(
backoff.expo,
Exception,
max_tries=5,
jitter=backoff.random_jitter,
)
def fetch() -> tuple[str, str]:
logger.warning(f"Calling anthropic...")
results = client.completion(
prompt=messages_raw,
stop_sequences=[anthropic.HUMAN_PROMPT],
model=model,
max_tokens_to_sample=max_tokens,
disable_checks=True,
temperature=temperature,
)
return results["completion"], results["stop_reason"]
result, stop_reason = fetch()
logger.warning(f"Stop reasons: {stop_reason}")
if stop_reason == "max_tokens":
logger.warning("Hit max tokens, running for more tokens.")
_self = deepcopy(self)
_self.messages.append(Message(role="assistant", content=result, key=""))
extension = _self.call_anthropic(model=model)
print(len(result), len(extension), len(result + extension))
return result + extension
logger.info(f"Output to call anthropic:\n{result}")
return result
def chat_stream(
self,
content: str,
model: ChatModel | None = None,
message_key: str | None = None,
functions: list[Function] = [],
function_call: dict | None = None,
) -> Iterator[dict]:
if self.messages[-1].function_call is None:
self.messages.append(Message(role="user", content=content, key=message_key))
else:
name = self.messages[-1].function_call["name"]
self.messages.append(
Message(role="function", content=content, key=message_key, name=name)
)
model = model or self.model
is_function_call = False
# might be a bug here in all of this
# return self.stream_openai(model=model, functions=functions, function_name=function_name)
return self.stream_openai(
model=model, functions=functions, function_call=function_call
)
def stream_openai(
self,
model: ChatModel | None = None,
functions: list[Function] = [],
function_call: dict | None = None,
) -> Iterator[dict]:
model = model or self.model
count_tokens = Tiktoken().count
messages_length = sum(
[count_tokens(message.content or "") for message in self.messages]
)
max_tokens = (
model_to_max_tokens[model] - int(messages_length) - 400
) # this is for the function tokens
# TODO: Add a check to see if the message is too long
logger.info("file_change_paths" + str(self.file_change_paths))
if len(self.file_change_paths) > 0:
self.file_change_paths.remove(self.file_change_paths[0])
if max_tokens < 0:
if len(self.file_change_paths) > 0:
pass
else:
logger.error(f"Input to OpenAI:\n{self.messages_dicts}")
raise ValueError(f"Message is too long, max tokens is {max_tokens}")
messages_raw = "\n".join([(message.content or "") for message in self.messages])
logger.info(f"Input to call openai:\n{messages_raw}")
gpt_4_buffer = 800
if int(messages_length) + gpt_4_buffer < 6000 and model == "gpt-4-32k-0613":
model = "gpt-4-0613"
max_tokens = (
model_to_max_tokens[model] - int(messages_length) - gpt_4_buffer
) # this is for the function tokens
logger.info(f"Using the model {model}, with {max_tokens} tokens remaining")
def generator() -> Iterator[str]:
stream = (
openai.ChatCompletion.create(
model=model,
messages=self.messages_dicts,
temperature=temperature,
functions=[json.loads(function.json()) for function in functions],
function_call=function_call or "auto",
stream=True,
)
if functions
else openai.ChatCompletion.create(
model=model,
messages=self.messages_dicts,
temperature=temperature,
stream=True,
)
)
for data in stream:
chunk = data.choices[0].delta
yield chunk
return generator()
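    # Consumption sketch (illustrative, not from the original source): callers
    # iterate over the generator returned above and read incremental deltas,
    # for example:
    #     for delta in bot.chat_stream("Explain this diff"):
    #         print(delta.get("content", ""), end="")
    # where `bot` is a ChatGPT instance (hypothetical variable name).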
@property
def messages_dicts(self):
# Remove the key from the message object before sending to OpenAI
cleaned_messages = [message.to_openai() for message in self.messages]
return cleaned_messages
def undo(self):
if len(self.prev_message_states) > 0:
self.messages = self.prev_message_states.pop()
return self.messages
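# Usage sketch (illustrative, not part of the original file): a ChatGPT bot is
# typically seeded with a system prompt and then driven through chat(). The
# helper below is never called; the prompt strings are placeholders.
def _example_chatgpt_usage() -> str:
    bot = ChatGPT.from_system_message_string(
        "You are a helpful assistant.", chat_logger=None
    )
    return bot.chat("Briefly describe what this module does.")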
| [
"\n",
"None"
] |
2024-01-10 | rokanost/sweep | sweepai~handlers~on_comment.py | import re
import traceback
import openai
from loguru import logger
from typing import Any
from tabulate import tabulate
from github.Repository import Repository
from sweepai.config.client import get_blocked_dirs
from sweepai.core.entities import NoFilesException, Snippet, MockPR, FileChangeRequest
from sweepai.core.sweep_bot import SweepBot
from sweepai.handlers.on_review import get_pr_diffs
from sweepai.utils.chat_logger import ChatLogger
from sweepai.config.server import (
GITHUB_BOT_USERNAME,
ENV,
MONGODB_URI,
OPENAI_API_KEY,
)
from sweepai.utils.event_logger import posthog
from sweepai.utils.github_utils import ClonedRepo, get_github_client
from sweepai.utils.search_utils import search_snippets
from sweepai.utils.prompt_constructor import HumanMessageCommentPrompt
openai.api_key = OPENAI_API_KEY
num_of_snippets_to_query = 30
total_number_of_snippet_tokens = 15_000
num_full_files = 2
num_extended_snippets = 2
def post_process_snippets(snippets: list[Snippet], max_num_of_snippets: int = 3):
    # Expand the first num_full_files snippets (expand() returns a new Snippet)
    for i in range(min(num_full_files, len(snippets))):
        snippets[i] = snippets[i].expand()
# snippet fusing
i = 0
while i < len(snippets):
j = i + 1
while j < len(snippets):
if snippets[i] ^ snippets[j]: # this checks for overlap
snippets[i] = snippets[i] | snippets[j] # merging
snippets.pop(j)
else:
j += 1
i += 1
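    # Editorial note: the fusing pass above relies on Snippet's ^ and |
    # operators (overlap test and merge, per the inline comments), so
    # overlapping snippets from the same file collapse into one entry before
    # the truncation pass below.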
# truncating snippets based on character length
result_snippets = []
total_length = 0
for snippet in snippets:
total_length += len(snippet.get_snippet())
if total_length > total_number_of_snippet_tokens * 5:
break
result_snippets.append(snippet)
return result_snippets[:max_num_of_snippets]
async def on_comment(
repo_full_name: str,
repo_description: str,
comment: str,
pr_path: str | None,
pr_line_position: int | None,
username: str,
installation_id: int,
pr_number: int = None,
comment_id: int | None = None,
g: None = None,
repo: Repository = None,
pr: Any = None, # Uses PRFileChanges type too
chat_logger: Any = None,
):
# Check if the comment is "REVERT"
if comment.strip().upper() == "REVERT":
rollback_file(repo_full_name, pr_path, installation_id, pr_number)
return {
"success": True,
"message": "File has been reverted to the previous commit.",
}
# Flow:
# 1. Get relevant files
# 2: Get human message
# 3. Get files to change
# 4. Get file changes
# 5. Create PR
logger.info(
f"Calling on_comment() with the following arguments: {comment},"
f" {repo_full_name}, {repo_description}, {pr_path}"
)
organization, repo_name = repo_full_name.split("/")
g = (get_github_client(installation_id))[1] if not g else g
repo = g.get_repo(repo_full_name) if not repo else repo
pr = repo.get_pull(pr_number) if not pr else pr
pr_title = pr.title
pr_body = pr.body or ""
pr_file_path = None
diffs = get_pr_diffs(repo, pr)
pr_line = None
issue_number_match = re.search(r"Fixes #(?P<issue_number>\d+).", pr_body)
original_issue = None
if issue_number_match:
issue_number = issue_number_match.group("issue_number")
original_issue = repo.get_issue(int(issue_number))
author = original_issue.user.login
logger.info(f"Author of original issue is {author}")
chat_logger = (
chat_logger
if chat_logger is not None
else ChatLogger(
{
"repo_name": repo_name,
"title": "(Comment) " + pr_title,
"issue_url": pr.html_url,
"pr_file_path": pr_file_path, # may be None
"pr_line": pr_line, # may be None
"repo_full_name": repo_full_name,
"repo_description": repo_description,
"comment": comment,
"pr_path": pr_path,
"pr_line_position": pr_line_position,
"username": author,
"installation_id": installation_id,
"pr_number": pr_number,
"type": "comment",
}
)
if MONGODB_URI
else None
)
else:
logger.warning(f"No issue number found in PR body for summary {pr.body}")
chat_logger = None
is_paying_user = chat_logger.is_paying_user()
use_faster_model = chat_logger.use_faster_model(g)
assignee = pr.assignee.login if pr.assignee else None
metadata = {
"repo_full_name": repo_full_name,
"repo_name": repo_name,
"organization": organization,
"repo_description": repo_description,
"installation_id": installation_id,
"username": username if not username.startswith("sweep") else assignee,
"function": "on_comment",
"model": "gpt-3.5" if use_faster_model else "gpt-4",
"tier": "pro" if is_paying_user else "free",
"mode": ENV,
"pr_path": pr_path,
"pr_line_position": pr_line_position,
"pr_number": pr_number or pr.id,
"pr_html_url": pr.html_url,
"comment_id": comment_id,
"comment": comment,
"issue_number": issue_number if issue_number_match else "",
}
capture_posthog_event(username, "started", properties=metadata)
logger.info(f"Getting repo {repo_full_name}")
file_comment = bool(pr_path) and bool(pr_line_position)
item_to_react_to = None
reaction = None
try:
# Check if the PR is closed
if pr.state == "closed":
return {"success": True, "message": "PR is closed. No event fired."}
if comment_id:
try:
item_to_react_to = pr.get_issue_comment(comment_id)
reaction = item_to_react_to.create_reaction("eyes")
except Exception as e:
try:
item_to_react_to = pr.get_review_comment(comment_id)
reaction = item_to_react_to.create_reaction("eyes")
except Exception as e:
pass
if reaction is not None:
# Delete rocket reaction
reactions = item_to_react_to.get_reactions()
for r in reactions:
if r.content == "rocket" and r.user.login == GITHUB_BOT_USERNAME:
item_to_react_to.delete_reaction(r.id)
branch_name = (
pr.head.ref if pr_number else pr.pr_head # pylint: disable=no-member
)
cloned_repo = ClonedRepo(repo_full_name, installation_id, branch=branch_name)
# This means it's a comment on a file
if file_comment:
pr_file = repo.get_contents(
pr_path, ref=branch_name
).decoded_content.decode("utf-8")
pr_lines = pr_file.splitlines()
pr_line = pr_lines[min(len(pr_lines), pr_line_position) - 1]
pr_file_path = pr_path.strip()
if file_comment:
snippets = []
tree = ""
else:
try:
logger.info("Fetching relevant files...")
snippets, tree = search_snippets(
cloned_repo,
f"{comment}\n{pr_title}" + (f"\n{pr_line}" if pr_line else ""),
num_files=30,
)
assert len(snippets) > 0
except Exception as e:
logger.error(traceback.format_exc())
raise e
snippets = post_process_snippets(
snippets, max_num_of_snippets=0 if file_comment else 2
)
logger.info("Getting response from ChatGPT...")
human_message = HumanMessageCommentPrompt(
comment=comment,
repo_name=repo_name,
repo_description=repo_description if repo_description else "",
diffs=diffs,
issue_url=pr.html_url,
username=username,
title=pr_title,
tree=tree,
summary=pr_body,
snippets=snippets,
pr_file_path=pr_file_path, # may be None
pr_line=pr_line, # may be None
)
logger.info(f"Human prompt{human_message.construct_prompt()}")
sweep_bot = SweepBot.from_system_message_content(
# human_message=human_message, model="claude-v1.3-100k", repo=repo
human_message=human_message,
repo=repo,
chat_logger=chat_logger,
model="gpt-3.5-turbo-16k-0613" if use_faster_model else "gpt-4-32k-0613",
)
except Exception as e:
logger.error(traceback.format_exc())
capture_posthog_event(
username,
"failed",
properties={"error": str(e), "reason": "Failed to get files", **metadata},
)
raise e
try:
logger.info("Fetching files to modify/create...")
if file_comment:
file_change_requests = [
FileChangeRequest(
filename=pr_file_path,
instructions=f"{comment}\n\nCommented on this line: {pr_line}",
change_type="modify",
)
]
else:
if comment.strip().lower().startswith("sweep: regenerate"):
logger.info("Running regenerate...")
file_paths = comment.strip().split(" ")[2:]
def get_contents_with_fallback(repo: Repository, file_path: str):
try:
return repo.get_contents(file_path)
except Exception as e:
logger.error(e)
return None
old_file_contents = [
get_contents_with_fallback(repo, file_path)
for file_path in file_paths
]
print(old_file_contents)
for file_path, old_file_content in zip(file_paths, old_file_contents):
current_content = sweep_bot.get_contents(
file_path, branch=branch_name
)
if old_file_content:
logger.info("Resetting file...")
sweep_bot.repo.update_file(
file_path,
f"Reset {file_path}",
old_file_content.decoded_content,
sha=current_content.sha,
branch=branch_name,
)
else:
logger.info("Deleting file...")
sweep_bot.repo.delete_file(
file_path,
f"Reset {file_path}",
sha=current_content.sha,
branch=branch_name,
)
file_change_requests = []
if original_issue:
content = original_issue.body
checklist_dropdown = re.search(
"<details>\n<summary>Checklist</summary>.*?</details>",
content,
re.DOTALL,
)
checklist = checklist_dropdown.group(0)
matches = re.findall(
(
"- \[[X ]\] `(?P<filename>.*?)`(?P<instructions>.*?)(?=-"
" \[[X ]\]|</details>)"
),
checklist,
re.DOTALL,
)
instructions_mapping = {}
for filename, instructions in matches:
instructions_mapping[filename] = instructions
file_change_requests = [
FileChangeRequest(
filename=file_path,
instructions=instructions_mapping[file_path],
change_type="modify",
)
for file_path in file_paths
]
else:
quoted_pr_summary = "> " + pr.body.replace("\n", "\n> ")
file_change_requests = [
FileChangeRequest(
filename=file_path,
instructions=(
f"Modify the file {file_path} based on the PR"
f" summary:\n\n{quoted_pr_summary}"
),
change_type="modify",
)
for file_path in file_paths
]
print(file_change_requests)
file_change_requests = sweep_bot.validate_file_change_requests(
file_change_requests, branch=branch_name
)
logger.info("Getting response from ChatGPT...")
human_message = HumanMessageCommentPrompt(
comment=comment,
repo_name=repo_name,
repo_description=repo_description if repo_description else "",
diffs=get_pr_diffs(repo, pr),
issue_url=pr.html_url,
username=username,
title=pr_title,
tree=tree,
summary=pr_body,
snippets=snippets,
pr_file_path=pr_file_path, # may be None
pr_line=pr_line, # may be None
)
logger.info(f"Human prompt{human_message.construct_prompt()}")
sweep_bot = SweepBot.from_system_message_content(
human_message=human_message,
repo=repo,
chat_logger=chat_logger,
)
else:
file_change_requests, _ = await sweep_bot.get_files_to_change(retries=1)
file_change_requests = sweep_bot.validate_file_change_requests(
file_change_requests, branch=branch_name
)
sweep_response = "I couldn't find any relevant files to change."
if file_change_requests:
table_message = tabulate(
[
[
f"`{file_change_request.filename}`",
file_change_request.instructions_display.replace(
"\n", "<br/>"
).replace("```", "\\```"),
]
for file_change_request in file_change_requests
],
headers=["File Path", "Proposed Changes"],
tablefmt="pipe",
)
sweep_response = (
f"I decided to make the following changes:\n\n{table_message}"
)
quoted_comment = "> " + comment.replace("\n", "\n> ")
response_for_user = (
f"{quoted_comment}\n\nHi @{username},\n\n{sweep_response}"
)
if pr_number:
pr.create_issue_comment(response_for_user)
logger.info("Making Code Changes...")
blocked_dirs = get_blocked_dirs(sweep_bot.repo)
changes_made = sum(
[
change_made
async for _, change_made, _ in sweep_bot.change_files_in_github_iterator(
file_change_requests, branch_name, blocked_dirs
)
]
)
try:
if comment_id:
if changes_made:
pr.create_review_comment_reply(comment_id, "Done.")
else:
pr.create_review_comment_reply(
comment_id,
(
"No changes made. Please add more details so I know what to"
" change."
),
)
except Exception as e:
logger.error(f"Failed to reply to comment: {e}")
if type(pr) != MockPR:
if pr.user.login == GITHUB_BOT_USERNAME and pr.title.startswith("[DRAFT] "):
# Update the PR title to remove the "[DRAFT]" prefix
pr.edit(title=pr.title.replace("[DRAFT] ", "", 1))
logger.info("Done!")
except NoFilesException:
capture_posthog_event(
username,
"failed",
properties={
"error": "No files to change",
"reason": "No files to change",
**metadata,
},
)
return {"success": True, "message": "No files to change."}
except Exception as e:
logger.error(traceback.format_exc())
capture_posthog_event(
username,
"failed",
properties={
"error": str(e),
"reason": "Failed to make changes",
**metadata,
},
)
raise e
# Delete eyes
if reaction is not None:
item_to_react_to.delete_reaction(reaction.id)
try:
item_to_react_to = pr.get_issue_comment(comment_id)
reaction = item_to_react_to.create_reaction("rocket")
except Exception as e:
try:
item_to_react_to = pr.get_review_comment(comment_id)
reaction = item_to_react_to.create_reaction("rocket")
except Exception as e:
pass
capture_posthog_event(username, "success", properties={**metadata})
logger.info("on_comment success")
return {"success": True}
def capture_posthog_event(username, event, properties):
posthog.capture(username, event, properties=properties)
def rollback_file(repo_full_name, pr_path, installation_id, pr_number):
_, g = get_github_client(installation_id)
repo = g.get_repo(repo_full_name)
pr = repo.get_pull(pr_number)
branch_name = pr.head.ref
# Get the file's content from the previous commit
commits = repo.get_commits(sha=branch_name)
if commits.totalCount < 2:
current_file = repo.get_contents(pr_path, ref=commits[0].sha)
current_file_sha = current_file.sha
previous_content = repo.get_contents(pr_path, ref=repo.default_branch)
previous_file_content = previous_content.decoded_content.decode("utf-8")
repo.update_file(
pr_path,
"Revert file to previous commit",
previous_file_content,
current_file_sha,
branch=branch_name,
)
return
previous_commit = commits[1]
# Get current file SHA
current_file = repo.get_contents(pr_path, ref=commits[0].sha)
current_file_sha = current_file.sha
# Check if the file exists in the previous commit
try:
previous_content = repo.get_contents(pr_path, ref=previous_commit.sha)
previous_file_content = previous_content.decoded_content.decode("utf-8")
# Create a new commit with the previous file content
repo.update_file(
pr_path,
"Revert file to previous commit",
previous_file_content,
current_file_sha,
branch=branch_name,
)
except Exception as e:
logger.error(traceback.format_exc())
if e.status == 404: # pylint: disable=no-member
logger.warning(
f"File {pr_path} was not found in previous commit {previous_commit.sha}"
)
else:
raise e
| [] |
2024-01-10 | SergioHdezG/RLPhotoFentonOptimization | RL_Problem~base~PPO~multiprocessing_env.py | # This code is from openai baseline
# https://github.com/openai/baselines/tree/master/baselines/common/vec_env
import numpy as np
from multiprocessing import Process, Pipe
def worker(remote, parent_remote, env_fn_wrapper):
parent_remote.close()
env = env_fn_wrapper.x()
while True:
cmd, data = remote.recv()
if cmd == 'step':
ob, reward, done, info = env.step(data)
if done:
ob = env.reset()
remote.send((ob, reward, done, info))
elif cmd == 'reset':
ob = env.reset()
remote.send(ob)
elif cmd == 'reset_task':
ob = env.reset_task()
remote.send(ob)
elif cmd == 'close':
remote.close()
break
elif cmd == 'get_spaces':
remote.send((env.observation_space, env.action_space))
elif cmd == 'get_best_params':
best_params = env.get_best_params()
remote.send(best_params)
elif cmd == 'step_special':
ob, rew, done, info, action, actions_matri, predicted_action, value = env.step(data)
if done:
ob = env.reset()
remote.send((ob, rew, done, info, action, actions_matri, predicted_action, value))
elif cmd == 'record_opt_experiences':
ob, rew, done, info, params, error = env.record_opt_experiences(data[0], data[1])
remote.send((ob, rew, done, info, params, error))
else:
raise NotImplementedError
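# Protocol sketch (illustrative): each worker owns one environment and serves
# (command, payload) tuples sent over its Pipe. From the parent side the
# exchange looks roughly like this, where `action` is a hypothetical value:
#
#   parent_conn.send(("reset", None))       # ask the worker to reset its env
#   first_obs = parent_conn.recv()          # initial observation comes back
#   parent_conn.send(("step", action))
#   obs, reward, done, info = parent_conn.recv()
#
# SubprocVecEnv below wraps exactly this exchange for a batch of workers.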
class VecEnv(object):
"""
An abstract asynchronous, vectorized environment.
"""
def __init__(self, num_envs, observation_space, action_space):
self.num_envs = num_envs
self.observation_space = observation_space
self.action_space = action_space
def reset(self):
"""
Reset all the environments and return an array of
observations, or a tuple of observation arrays.
If step_async is still doing work, that work will
be cancelled and step_wait() should not be called
until step_async() is invoked again.
"""
pass
def step_async(self, actions):
"""
Tell all the environments to start taking a step
with the given actions.
Call step_wait() to get the results of the step.
You should not call this if a step_async run is
already pending.
"""
pass
def step_wait(self):
"""
Wait for the step taken with step_async().
Returns (obs, rews, dones, infos):
- obs: an array of observations, or a tuple of
arrays of observations.
- rews: an array of rewards
- dones: an array of "episode done" booleans
- infos: a sequence of info objects
"""
pass
def close(self):
"""
Clean up the environments' resources.
"""
pass
def step(self, actions):
self.step_async(actions)
return self.step_wait()
def render(self):
pass
def get_best_params(self):
pass
    def special_step(self, action, action_matrix, predicted_action, value):
        self.special_step_async(action, action_matrix, predicted_action, value)
        return self.special_step_wait()
    def record_opt_experiences(self, opt, maxiter):
        self.record_opt_experiences_async(opt, maxiter)
        return self.record_opt_experiences_wait()
class CloudpickleWrapper(object):
"""
Uses cloudpickle to serialize contents (otherwise multiprocessing tries to use pickle)
"""
def __init__(self, x):
self.x = x
def __getstate__(self):
import cloudpickle
return cloudpickle.dumps(self.x)
def __setstate__(self, ob):
import pickle
self.x = pickle.loads(ob)
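# Sketch (illustrative): CloudpickleWrapper exists because the environment
# factories passed to SubprocVecEnv are often closures or lambdas, which the
# standard pickle used by multiprocessing cannot serialize. For example, with
# a hypothetical make_env(seed) factory:
#
#   make_thunk = lambda seed: (lambda: make_env(seed))
#   vec_env = SubprocVecEnv([make_thunk(i) for i in range(4)])
#
# Each thunk is wrapped in CloudpickleWrapper so it can cross the process
# boundary intact.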
class SubprocVecEnv(VecEnv):
def __init__(self, env_fns, spaces=None):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
nenvs = len(env_fns)
self.nenvs = nenvs
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
self.ps = [Process(target=worker, args=(work_remote, remote, CloudpickleWrapper(env_fn)))
for (work_remote, remote, env_fn) in zip(self.work_remotes, self.remotes, env_fns)]
for p in self.ps:
p.daemon = True # if the main process crashes, we should not cause things to hang
p.start()
for remote in self.work_remotes:
remote.close()
self.remotes[0].send(('get_spaces', None))
observation_space, action_space = self.remotes[0].recv()
VecEnv.__init__(self, len(env_fns), observation_space, action_space)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def special_step_async(self, action, action_matrix, predicted_action, value):
for remote, a, a_m, p_a, v in zip(self.remotes, action, action_matrix, predicted_action, value):
remote.send(('step_special', [a, a_m, p_a, v]))
self.waiting = True
def special_step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, rews, dones, infos, actions, actions_matrix, predicted_actions, values = zip(*results)
return np.stack(obs), np.stack(rews), np.stack(dones), infos, np.stack(actions), np.stack(actions_matrix), \
np.stack(predicted_actions), np.stack(values)
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, rews, dones, infos = zip(*results)
return np.stack(obs), np.stack(rews), np.stack(dones), infos
def reset(self):
for remote in self.remotes:
remote.send(('reset', None))
return np.stack([remote.recv() for remote in self.remotes])
def reset_task(self):
for remote in self.remotes:
remote.send(('reset_task', None))
return np.stack([remote.recv() for remote in self.remotes])
def close(self):
if self.closed:
return
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.join()
self.closed = True
def get_best_params(self):
for remote in self.remotes:
remote.send(('get_best_params', None))
return [remote.recv() for remote in self.remotes]
def record_opt_experiences_async(self, opt, maxiter):
for remote in self.remotes:
remote.send(('record_opt_experiences', [opt, maxiter]))
self.waiting = True
def record_opt_experiences_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, reward, done, info, params, error = zip(*results)
return obs, reward, done, info, params, error
def __len__(self):
return self.nenvs | [] |
2024-01-10 | SergioHdezG/RLPhotoFentonOptimization | environments~env_base.py | from abc import ABCMeta, abstractmethod
class EnvInterface(object, metaclass=ABCMeta):
"""
This class is an interface for building custom environments. It is based on gym (from OpenAI) environment interfaces
    but by using this interface you can avoid creating a custom gym environment.
"""
def __init__(self):
self.action_space = None # ActionSpaceInterface object.
self.observation_space = None # numpy nd array of observation shape
@abstractmethod
def reset(self):
"""
Reset the environment to an initial state
:return: (numpy nd array) observation.
"""
pass
@abstractmethod
def step(self, action):
"""
Take an action and executes it on the environment.
:param action: (int if actions are discrete or numpy array of floats if actions are continuous). Action to
execute.
:return: (numpy nd array) observation, (float) reward, (bool) done, (dict or None) additional info
"""
pass
@abstractmethod
def render(self):
"""
Render the environment.
"""
pass
@abstractmethod
def close(self):
"""
Close rendering window.
"""
pass
def deepcopy(self):
"""
Make a new copy of the environment.
        When using a very complex environment, the copy.deepcopy method may not work properly and you must define how to copy
your environment when using agents with environment parallelization (A3C or parallel PPO).
"""
import copy
return copy.deepcopy(self)
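# Implementation sketch (illustrative, not part of the original file): a
# custom environment subclasses EnvInterface and fills in the abstract
# methods, e.g.
#
#   class MyEnv(EnvInterface):                  # hypothetical environment
#       def __init__(self):
#           super().__init__()
#           self.action_space = ActionSpaceInterface()
#           self.action_space.n = 2
#           self.observation_space = np.zeros((4,))   # assumes numpy as np
#       def reset(self): ...
#       def step(self, action): ...             # -> obs, reward, done, info
#       def render(self): ...
#       def close(self): ...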
class ActionSpaceInterface(object):
"""
This class defines the ActionSpaceInterface type used in EnvInterface.
"""
def __init__(self):
self.n = None # Number of actions.
self.actions = None # List of actions.
# For continuous action spaces. Higher and lower bounds.
self.low = None
self.high = None
| [] |
2024-01-10 | Trina0224/chatGPT-Bot | Male_3Languages_W_WakeWord.py | #!/usr/bin/env python3
# Trina S. Modified a script from vosk example.
# prerequisites: as described in https://alphacephei.com/vosk/install and also python module `sounddevice` (simply run command `pip install sounddevice`)
# It's only for Voice Hat V1. Microphone is at device 1.
# wakeword = "computer"
import os
from google.cloud import texttospeech_v1
import io
import time
import threading
from google.oauth2 import service_account
#from google.cloud import speech
from google.cloud import speech_v1p1beta1 as speech
from aiy.board import Board,Led
from aiy.voice.audio import AudioFormat, play_wav, record_file, Recorder
import os
import openai
import pygame
import sys
User_Language_Code=''
StatusControl = False
import argparse
import queue
import sys
import sounddevice as sd
from vosk import Model, KaldiRecognizer
q = queue.Queue()
wake_word = "computer"
def int_or_str(text):
"""Helper function for argument parsing."""
try:
return int(text)
except ValueError:
return text
cooldown_time = 2 # seconds
cooldown_timestamp = 0
def can_detect_wake_word(current_time):
global cooldown_timestamp
global cooldown_time
if current_time - cooldown_timestamp > cooldown_time:
cooldown_timestamp = current_time
return True
return False
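# Debounce sketch (illustrative): Vosk emits many partial results while the
# wake word is still being spoken, so the gate above is queried together with
# the substring check, mirroring the main loop further below:
#
#   if wake_word in partial_text and can_detect_wake_word(time.time()):
#       ...  # start one conversation, then ignore repeats for cooldown_time seconds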
def callback(indata, frames, time, status):
"""This is called (from a separate thread) for each audio block."""
if status:
print(status, file=sys.stderr)
q.put(bytes(indata))
parser = argparse.ArgumentParser(add_help=False)
args, remaining = parser.parse_known_args()
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=[parser])
parser.add_argument(
"-d", "--device", type=int_or_str, default=1,
help="input device (numeric ID or substring)")
parser.add_argument(
"-m", "--model", type=str, default="en-us", help="language model; e.g. en-us, fr, nl, cn; default is en-us")
args = parser.parse_args(remaining)
def monitor(event):
global StatusControl
if not event.wait(5):
print("Script hasn't reached the wait_for_press() line in 5 seconds. Exiting.")
#break
#sys.exit()
StatusControl = True
#return ""
def check_button_press():
with Board() as board:
#board.led.state = Led.ON
while True:
board.button.wait_for_press()
pygame.mixer.music.stop()
#board.led.state = Led.OFF
def text_to_speech(input_text, output_filename="output.mp3"):
# Assuming the credentials file is in the same directory
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "Your text2speech.json"
# Instantiates a client
client = texttospeech_v1.TextToSpeechClient()
# Set the text input to be synthesized
synthesis_input = texttospeech_v1.SynthesisInput(text=input_text)
# Determine the voice parameters based on User_Language_Code
if 'en' in User_Language_Code:
language_code = "en-US"
voice_name = "en-US-Wavenet-J"
elif 'jp' in User_Language_Code:
language_code = "ja-JP"
voice_name = "ja-JP-Wavenet-D"
else: # default to 'cmn' settings
language_code = "cmn-TW"
voice_name = "cmn-TW-Wavenet-C"
# Voice parameters
voice = texttospeech_v1.VoiceSelectionParams(
language_code=language_code,
name=voice_name
)
# Audio format
audio_config = texttospeech_v1.AudioConfig(
audio_encoding=texttospeech_v1.AudioEncoding.MP3,
speaking_rate=1.3
)
# Make the API call
response = client.synthesize_speech(
input=synthesis_input, voice=voice, audio_config=audio_config
)
# Save the response to an output file
with open(output_filename, 'wb') as out:
out.write(response.audio_content)
return output_filename
#return output_filename
def play_mp3(filename):
pygame.mixer.init()
pygame.mixer.music.load(filename)
pygame.mixer.music.play()
# Start the button monitoring thread right after
button_thread = threading.Thread(target=check_button_press)
button_thread.start()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
def recognize_audio(filename):
global User_Language_Code
with io.open(filename, 'rb') as f:
content = f.read()
audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
audio_channel_count=2,
sample_rate_hertz=44100,
language_code = 'zh-TW',
alternative_language_codes = ['en-US','ja-JP']
)
response = client.recognize(config=config, audio=audio)
os.remove(filename)
if response.results:
User_Language_Code = response.results[0].language_code
print('The user is speaking '+ User_Language_Code + '\n' )
#print(response)
print('{0}: {1}\n'.format(response.results[0].language_code, response.results[0].alternatives[0].transcript))
return response.results[0].alternatives[0].transcript
return ""
def record_and_recognize():
global StatusControl
filename = 'recording.wav'
# Start monitoring for the button press line
reached_button_line_event = threading.Event()
threading.Thread(target=monitor, args=(reached_button_line_event,)).start()
with Board() as board:
print('Press button to start recording.')
board.led.state = Led.ON
board.button.wait_for_press(6)
# Signal that we've reached the button press line
reached_button_line_event.set()
if StatusControl:
return 999
done = threading.Event()
board.button.when_pressed = done.set
def wait():
start = time.monotonic()
while not done.is_set():
duration = time.monotonic() - start
print('Recording: %.02f seconds [Press button to stop]' % duration)
time.sleep(0.5)
record_file(AudioFormat.CD, filename=filename, wait=wait, filetype='wav')
board.led.state = Led.OFF
print('Sending audio for recognition...')
recognized_text = recognize_audio(filename)
return recognized_text
# Google Cloud Speech-to-Text client setup
client_file = 'Your speech2text.json'
credentials = service_account.Credentials.from_service_account_file(client_file)
client = speech.SpeechClient(credentials=credentials)
API_KEY = 'your openAI key'
openai.api_key = API_KEY
#messages = [ {"role": "system", "content":
# "You are a intelligent assistant."} ]
#model_id = 'gpt-4'
model_id = 'gpt-3.5-turbo'
def chatgpt_conversation(conversation_log):
response = openai.ChatCompletion.create(
model=model_id,
messages=conversation_log
)
conversation_log.append({
'role': response.choices[0].message.role,
'content': response.choices[0].message.content.strip()
})
return conversation_log
try:
while True:
device_info = sd.query_devices(args.device, "input")
args.samplerate = int(device_info["default_samplerate"])
if args.model is None:
model = Model(lang="en-us")
else:
model = Model(lang=args.model)
dump_fn = None
with sd.RawInputStream(samplerate=args.samplerate, blocksize = 8000, device=args.device,
dtype="int16", channels=1, callback=callback):
print("#" * 80)
print("Press Ctrl+C to stop the recording")
print("#" * 80)
rec = KaldiRecognizer(model, args.samplerate)
while True:
data = q.get()
if rec.AcceptWaveform(data):
print(rec.Result())
else:
recognized_text = rec.PartialResult()
if wake_word in recognized_text:
current_time = time.time()
#prevent multi-entry.
if can_detect_wake_word(current_time):
print("Wake word detected!")
#clean queue q.
while not q.empty():
q.get()
#break
conversations = []
# role: system, user, assistant
conversations.append({'role': 'system', 'content': 'You are a intelligent assistant.'})
conversations = chatgpt_conversation(conversations)
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#filename = text_to_speech('{0}: {1}'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(f"Audio saved to: {filename}")
play_mp3("greeting_TW_M.mp3")
while not StatusControl:
prompt = record_and_recognize()
if prompt==999:
print("Code exit")
break
conversations.append({'role': 'user', 'content': prompt})
conversations = chatgpt_conversation(conversations)
print()
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(conversations[-1]['content'].strip())
filename = text_to_speech(conversations[-1]['content'].strip())
print(f"Audio saved to: {filename}")
#with Board() as board:
# board.led.state = Led.ON
play_mp3(filename)
StatusControl = False
break
else:
print(recognized_text)
except KeyboardInterrupt:
print("\nDone")
parser.exit(0)
except Exception as e:
parser.exit(type(e).__name__ + ": " + str(e))
| [
"You are a intelligent assistant."
] |
2024-01-10 | Trina0224/chatGPT-Bot | Female.py | #!/usr/bin/env python3
# Trina S. Modified a script from vosk example.
# prerequisites: as described in https://alphacephei.com/vosk/install and also python module `sounddevice` (simply run command `pip install sounddevice`)
# It's only for Voice Hat V1. Microphone is at device 1.
# The only differences between Male and Female are: L113: en-US-Wavenet-F, L116: ja-JP-Wavenet-B, L119: cmn-TW-Wavenet-A and L287: greeting_jp.mp3
import os
from google.cloud import texttospeech_v1
import io
import time
import threading
from google.oauth2 import service_account
#from google.cloud import speech
from google.cloud import speech_v1p1beta1 as speech
from aiy.board import Board,Led
from aiy.voice.audio import AudioFormat, play_wav, record_file, Recorder
import os
import openai
import pygame
import sys
User_Language_Code=''
StatusControl = False
import argparse
import queue
import sys
import sounddevice as sd
from vosk import Model, KaldiRecognizer
q = queue.Queue()
wake_word = "computer"
def int_or_str(text):
"""Helper function for argument parsing."""
try:
return int(text)
except ValueError:
return text
cooldown_time = 2 # seconds
cooldown_timestamp = 0
def can_detect_wake_word(current_time):
global cooldown_timestamp
global cooldown_time
if current_time - cooldown_timestamp > cooldown_time:
cooldown_timestamp = current_time
return True
return False
def callback(indata, frames, time, status):
"""This is called (from a separate thread) for each audio block."""
if status:
print(status, file=sys.stderr)
q.put(bytes(indata))
parser = argparse.ArgumentParser(add_help=False)
args, remaining = parser.parse_known_args()
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=[parser])
parser.add_argument(
"-d", "--device", type=int_or_str, default=1,
help="input device (numeric ID or substring)")
parser.add_argument(
"-m", "--model", type=str, default="en-us", help="language model; e.g. en-us, fr, nl, cn; default is en-us")
args = parser.parse_args(remaining)
def monitor(event):
global StatusControl
if not event.wait(5):
print("Script hasn't reached the wait_for_press() line in 5 seconds. Exiting.")
#break
#sys.exit()
StatusControl = True
#return ""
def check_button_press():
with Board() as board:
#board.led.state = Led.ON
while True:
board.button.wait_for_press()
pygame.mixer.music.stop()
#board.led.state = Led.OFF
def text_to_speech(input_text, output_filename="output.mp3"):
# Assuming the credentials file is in the same directory
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "Your text2speech.json"
# Instantiates a client
client = texttospeech_v1.TextToSpeechClient()
# Set the text input to be synthesized
synthesis_input = texttospeech_v1.SynthesisInput(text=input_text)
# Determine the voice parameters based on User_Language_Code
if 'en' in User_Language_Code:
language_code = "en-US"
voice_name = "en-US-Wavenet-F"
elif 'jp' in User_Language_Code:
language_code = "ja-JP"
voice_name = "ja-JP-Wavenet-B"
else: # default to 'cmn' settings
language_code = "cmn-TW"
voice_name = "cmn-TW-Wavenet-A"
# Voice parameters
voice = texttospeech_v1.VoiceSelectionParams(
language_code=language_code,
name=voice_name
)
# Audio format
audio_config = texttospeech_v1.AudioConfig(
audio_encoding=texttospeech_v1.AudioEncoding.MP3,
speaking_rate=1.3
)
# Make the API call
response = client.synthesize_speech(
input=synthesis_input, voice=voice, audio_config=audio_config
)
# Save the response to an output file
with open(output_filename, 'wb') as out:
out.write(response.audio_content)
return output_filename
#return output_filename
def play_mp3(filename):
pygame.mixer.init()
pygame.mixer.music.load(filename)
pygame.mixer.music.play()
# Start the button monitoring thread right after
button_thread = threading.Thread(target=check_button_press)
button_thread.start()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
def recognize_audio(filename):
global User_Language_Code
with io.open(filename, 'rb') as f:
content = f.read()
audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
audio_channel_count=2,
sample_rate_hertz=44100,
language_code = 'zh-TW',
alternative_language_codes = ['en-US','ja-JP']
)
response = client.recognize(config=config, audio=audio)
os.remove(filename)
if response.results:
User_Language_Code = response.results[0].language_code
print('The user is speaking '+ User_Language_Code + '\n' )
#print(response)
print('{0}: {1}\n'.format(response.results[0].language_code, response.results[0].alternatives[0].transcript))
return response.results[0].alternatives[0].transcript
return ""
def record_and_recognize():
global StatusControl
filename = 'recording.wav'
# Start monitoring for the button press line
reached_button_line_event = threading.Event()
threading.Thread(target=monitor, args=(reached_button_line_event,)).start()
with Board() as board:
print('Press button to start recording.')
board.led.state = Led.ON
board.button.wait_for_press(6)
# Signal that we've reached the button press line
reached_button_line_event.set()
if StatusControl:
return 999
done = threading.Event()
board.button.when_pressed = done.set
def wait():
start = time.monotonic()
while not done.is_set():
duration = time.monotonic() - start
print('Recording: %.02f seconds [Press button to stop]' % duration)
time.sleep(0.5)
record_file(AudioFormat.CD, filename=filename, wait=wait, filetype='wav')
board.led.state = Led.OFF
print('Sending audio for recognition...')
recognized_text = recognize_audio(filename)
return recognized_text
# Google Cloud Speech-to-Text client setup
client_file = 'Your speech2text.json'
credentials = service_account.Credentials.from_service_account_file(client_file)
client = speech.SpeechClient(credentials=credentials)
API_KEY = 'your openAI key'
openai.api_key = API_KEY
#messages = [ {"role": "system", "content":
# "You are a intelligent assistant."} ]
#model_id = 'gpt-4'
model_id = 'gpt-3.5-turbo'
def chatgpt_conversation(conversation_log):
response = openai.ChatCompletion.create(
model=model_id,
messages=conversation_log
)
conversation_log.append({
'role': response.choices[0].message.role,
'content': response.choices[0].message.content.strip()
})
return conversation_log
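# Illustrative call pattern (mirrors the main loop below): pass the running message list in, and the
# assistant's reply is appended as the final entry before the same list is returned.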
try:
while True:
device_info = sd.query_devices(args.device, "input")
args.samplerate = int(device_info["default_samplerate"])
if args.model is None:
model = Model(lang="en-us")
else:
model = Model(lang=args.model)
dump_fn = None
with sd.RawInputStream(samplerate=args.samplerate, blocksize = 8000, device=args.device,
dtype="int16", channels=1, callback=callback):
print("#" * 80)
print("Press Ctrl+C to stop the recording")
print("#" * 80)
rec = KaldiRecognizer(model, args.samplerate)
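# Main flow: stream microphone audio into Vosk, watch partial results for the wake word (with a cooldown to
# avoid double triggers), then run a button-driven record -> transcribe -> ChatGPT -> text-to-speech loop.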
while True:
data = q.get()
if rec.AcceptWaveform(data):
print(rec.Result())
else:
recognized_text = rec.PartialResult()
if wake_word in recognized_text:
current_time = time.time()
#prevent multi-entry.
if can_detect_wake_word(current_time):
print("Wake word detected!")
#clean queue q.
while not q.empty():
q.get()
#break
conversations = []
# role: system, user, assistant
conversations.append({'role': 'system', 'content': 'You are a intelligent assistant.'})
conversations = chatgpt_conversation(conversations)
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#filename = text_to_speech('{0}: {1}'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(f"Audio saved to: {filename}")
play_mp3("greeting_jp.mp3")
while not StatusControl:
prompt = record_and_recognize()
if prompt == 999:
print("Code exit")
break
conversations.append({'role': 'user', 'content': prompt})
conversations = chatgpt_conversation(conversations)
print()
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(conversations[-1]['content'].strip())
filename = text_to_speech(conversations[-1]['content'].strip())
print(f"Audio saved to: {filename}")
#with Board() as board:
# board.led.state = Led.ON
play_mp3(filename)
StatusControl = False
break
else:
print(recognized_text)
except KeyboardInterrupt:
print("\nDone")
parser.exit(0)
except Exception as e:
parser.exit(type(e).__name__ + ": " + str(e))
| [
"You are a intelligent assistant."
] |
2024-01-10 | Trina0224/chatGPT-Bot | Male.py | #Trina S.
#wakeword is "computer"
#
import os
from google.cloud import texttospeech_v1
import io
import time
import threading
from google.oauth2 import service_account
#from google.cloud import speech
from google.cloud import speech_v1p1beta1 as speech
from aiy.board import Board,Led
from aiy.voice.audio import AudioFormat, play_wav, record_file, Recorder
import os
import openai
import pygame
import sys
User_Language_Code=''
StatusControl = False
# prerequisites: as described in https://alphacephei.com/vosk/install and also python module `sounddevice` (simply run command `pip install sounddevice`)
# Example usage using Dutch (nl) recognition model: `python test_microphone.py -m nl`
# For more help run: `python test_microphone.py -h`
import argparse
import queue
import sys
import sounddevice as sd
from vosk import Model, KaldiRecognizer
q = queue.Queue()
wake_word = "computer"
def int_or_str(text):
"""Helper function for argument parsing."""
try:
return int(text)
except ValueError:
return text
cooldown_time = 2 # seconds
cooldown_timestamp = 0
def can_detect_wake_word(current_time):
global cooldown_timestamp
global cooldown_time
if current_time - cooldown_timestamp > cooldown_time:
cooldown_timestamp = current_time
return True
return False
def callback(indata, frames, time, status):
"""This is called (from a separate thread) for each audio block."""
if status:
print(status, file=sys.stderr)
q.put(bytes(indata))
parser = argparse.ArgumentParser(add_help=False)
args, remaining = parser.parse_known_args()
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=[parser])
parser.add_argument(
"-d", "--device", type=int_or_str, default=1,
help="input device (numeric ID or substring)")
parser.add_argument(
"-m", "--model", type=str, default="en-us", help="language model; e.g. en-us, fr, nl, cn; default is en-us")
args = parser.parse_args(remaining)
def monitor(event):
global StatusControl
if not event.wait(5):
print("Script hasn't reached the wait_for_press() line in 5 seconds. Exiting.")
#break
#sys.exit()
StatusControl = True
#return ""
def check_button_press():
with Board() as board:
#board.led.state = Led.ON
while True:
board.button.wait_for_press()
pygame.mixer.music.stop()
#board.led.state = Led.OFF
def text_to_speech(input_text, output_filename="output.mp3"):
# Assuming the credentials file is in the same directory
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "Your text2speech.json"
# Instantiates a client
client = texttospeech_v1.TextToSpeechClient()
# Set the text input to be synthesized
synthesis_input = texttospeech_v1.SynthesisInput(text=input_text)
# Determine the voice parameters based on User_Language_Code
if 'en' in User_Language_Code:
language_code = "en-US"
voice_name = "en-US-Wavenet-J"
elif 'jp' in User_Language_Code:
language_code = "ja-JP"
voice_name = "ja-JP-Wavenet-D"
else: # default to 'cmn' settings
language_code = "cmn-TW"
voice_name = "cmn-TW-Wavenet-C"
# Voice parameters
voice = texttospeech_v1.VoiceSelectionParams(
language_code=language_code,
name=voice_name
)
# Audio format
audio_config = texttospeech_v1.AudioConfig(
audio_encoding=texttospeech_v1.AudioEncoding.MP3,
speaking_rate=1.3
)
# Make the API call
response = client.synthesize_speech(
input=synthesis_input, voice=voice, audio_config=audio_config
)
# Save the response to an output file
with open(output_filename, 'wb') as out:
out.write(response.audio_content)
return output_filename
#return output_filename
def play_mp3(filename):
pygame.mixer.init()
pygame.mixer.music.load(filename)
pygame.mixer.music.play()
# Start the button monitoring thread right after
button_thread = threading.Thread(target=check_button_press)
button_thread.start()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
def recognize_audio(filename):
global User_Language_Code
with io.open(filename, 'rb') as f:
content = f.read()
audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
audio_channel_count=2,
sample_rate_hertz=44100,
language_code = 'zh-TW',
alternative_language_codes = ['en-US','ja-JP']
)
response = client.recognize(config=config, audio=audio)
os.remove(filename)
if response.results:
User_Language_Code = response.results[0].language_code
print('The user is speaking '+ User_Language_Code + '\n' )
#print(response)
print('{0}: {1}\n'.format(response.results[0].language_code, response.results[0].alternatives[0].transcript))
return response.results[0].alternatives[0].transcript
return ""
def record_and_recognize():
global StatusControl
filename = 'recording.wav'
# Start monitoring for the button press line
reached_button_line_event = threading.Event()
threading.Thread(target=monitor, args=(reached_button_line_event,)).start()
with Board() as board:
print('Press button to start recording.')
board.led.state = Led.ON
board.button.wait_for_press(6)
# Signal that we've reached the button press line
reached_button_line_event.set()
if StatusControl:
return 999
done = threading.Event()
board.button.when_pressed = done.set
def wait():
start = time.monotonic()
while not done.is_set():
duration = time.monotonic() - start
print('Recording: %.02f seconds [Press button to stop]' % duration)
time.sleep(0.5)
record_file(AudioFormat.CD, filename=filename, wait=wait, filetype='wav')
board.led.state = Led.OFF
print('Sending audio for recognition...')
recognized_text = recognize_audio(filename)
return recognized_text
# Google Cloud Speech-to-Text client setup
client_file = 'Your speech2text.json'
credentials = service_account.Credentials.from_service_account_file(client_file)
client = speech.SpeechClient(credentials=credentials)
API_KEY = 'your openAI key'
openai.api_key = API_KEY
#messages = [ {"role": "system", "content":
# "You are a intelligent assistant."} ]
#model_id = 'gpt-4'
model_id = 'gpt-3.5-turbo'
def chatgpt_conversation(conversation_log):
response = openai.ChatCompletion.create(
model=model_id,
messages=conversation_log
)
conversation_log.append({
'role': response.choices[0].message.role,
'content': response.choices[0].message.content.strip()
})
return conversation_log
try:
while True:
device_info = sd.query_devices(args.device, "input")
args.samplerate = int(device_info["default_samplerate"])
if args.model is None:
model = Model(lang="en-us")
else:
model = Model(lang=args.model)
dump_fn = None
with sd.RawInputStream(samplerate=args.samplerate, blocksize = 8000, device=args.device,
dtype="int16", channels=1, callback=callback):
print("#" * 80)
print("Press Ctrl+C to stop the recording")
print("#" * 80)
rec = KaldiRecognizer(model, args.samplerate)
while True:
data = q.get()
if rec.AcceptWaveform(data):
print(rec.Result())
else:
recognized_text = rec.PartialResult()
if wake_word in recognized_text:
current_time = time.time()
#prevent multi-entry.
if can_detect_wake_word(current_time):
print("Wake word detected!")
#clean queue q.
while not q.empty():
q.get()
#break
conversations = []
# role: system, user, assistant
conversations.append({'role': 'system', 'content': 'You are a intelligent assistant.'})
conversations = chatgpt_conversation(conversations)
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#filename = text_to_speech('{0}: {1}'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(f"Audio saved to: {filename}")
play_mp3("greeting_TW_M.mp3")
while not StatusControl:
prompt = record_and_recognize()
if prompt==999:
print("Code exit")
break
conversations.append({'role': 'user', 'content': prompt})
conversations = chatgpt_conversation(conversations)
print()
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(conversations[-1]['content'].strip())
filename = text_to_speech(conversations[-1]['content'].strip())
print(f"Audio saved to: {filename}")
#with Board() as board:
# board.led.state = Led.ON
play_mp3(filename)
StatusControl = False
break
else:
print(recognized_text)
except KeyboardInterrupt:
print("\nDone")
parser.exit(0)
except Exception as e:
parser.exit(type(e).__name__ + ": " + str(e))
| [
"You are a intelligent assistant."
] |
2024-01-10 | Trina0224/chatGPT-Bot | Male_3Languages.py | import os
from google.cloud import texttospeech_v1
import io
import time
import threading
from google.oauth2 import service_account
#from google.cloud import speech
from google.cloud import speech_v1p1beta1 as speech
from aiy.board import Board,Led
from aiy.voice.audio import AudioFormat, play_wav, record_file, Recorder
import os
import openai
import pygame
User_Language_Code=''
def check_button_press():
with Board() as board:
#board.led.state = Led.ON
while True:
board.button.wait_for_press()
pygame.mixer.music.stop()
#board.led.state = Led.OFF
def text_to_speech(input_text, output_filename="output.mp3"):
# Assuming the credentials file is in the same directory
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "Your Google text2speech.json"
# Instantiates a client
client = texttospeech_v1.TextToSpeechClient()
# Set the text input to be synthesized
synthesis_input = texttospeech_v1.SynthesisInput(text=input_text)
# Determine the voice parameters based on User_Language_Code
if 'en' in User_Language_Code:
language_code = "en-US"
voice_name = "en-US-Wavenet-J"
elif 'jp' in User_Language_Code:
language_code = "ja-JP"
voice_name = "ja-JP-Wavenet-D"
else: # default to 'cmn' settings
language_code = "cmn-TW"
voice_name = "cmn-TW-Wavenet-C"
# Voice parameters
voice = texttospeech_v1.VoiceSelectionParams(
language_code=language_code,
name=voice_name
)
# Audio format
audio_config = texttospeech_v1.AudioConfig(
audio_encoding=texttospeech_v1.AudioEncoding.MP3,
speaking_rate=1.3
)
# Make the API call
response = client.synthesize_speech(
input=synthesis_input, voice=voice, audio_config=audio_config
)
# Save the response to an output file
with open(output_filename, 'wb') as out:
out.write(response.audio_content)
return output_filename
#return output_filename
def play_mp3(filename):
pygame.mixer.init()
pygame.mixer.music.load(filename)
pygame.mixer.music.play()
# Start the button monitoring thread right after
button_thread = threading.Thread(target=check_button_press)
button_thread.start()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
def recognize_audio(filename):
global User_Language_Code
with io.open(filename, 'rb') as f:
content = f.read()
audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
audio_channel_count=2,
sample_rate_hertz=44100,
language_code = 'zh-TW',
alternative_language_codes = ['en-US','ja-JP']
)
response = client.recognize(config=config, audio=audio)
os.remove(filename)
if response.results:
User_Language_Code = response.results[0].language_code
print('The user is speaking '+ User_Language_Code + '\n' )
#print(response)
print('{0}: {1}\n'.format(response.results[0].language_code, response.results[0].alternatives[0].transcript))
return response.results[0].alternatives[0].transcript
return ""
def record_and_recognize():
filename = 'recording.wav'
with Board() as board:
print('Press button to start recording.')
board.led.state = Led.ON
board.button.wait_for_press()
done = threading.Event()
board.button.when_pressed = done.set
def wait():
start = time.monotonic()
while not done.is_set():
duration = time.monotonic() - start
print('Recording: %.02f seconds [Press button to stop]' % duration)
time.sleep(0.5)
record_file(AudioFormat.CD, filename=filename, wait=wait, filetype='wav')
board.led.state = Led.OFF
print('Sending audio for recognition...')
recognized_text = recognize_audio(filename)
return recognized_text
# Google Cloud Speech-to-Text client setup
client_file = 'Your Google speech2text.json'
credentials = service_account.Credentials.from_service_account_file(client_file)
client = speech.SpeechClient(credentials=credentials)
API_KEY = 'Your OpenAI Key'
openai.api_key = API_KEY
#messages = [ {"role": "system", "content":
# "You are a intelligent assistant."} ]
#model_id = 'gpt-4'
model_id = 'gpt-3.5-turbo'
def chatgpt_conversation(conversation_log):
response = openai.ChatCompletion.create(
model=model_id,
messages=conversation_log
)
conversation_log.append({
'role': response.choices[0].message.role,
'content': response.choices[0].message.content.strip()
})
return conversation_log
conversations = []
# role: system, user, assistant
conversations.append({'role': 'system', 'content': 'You are a intelligent assistant.'})
conversations = chatgpt_conversation(conversations)
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#filename = text_to_speech('{0}: {1}'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(f"Audio saved to: {filename}")
play_mp3("greeting_TW_M.mp3")
while True:
prompt = record_and_recognize()
conversations.append({'role': 'user', 'content': prompt})
conversations = chatgpt_conversation(conversations)
print()
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(conversations[-1]['content'].strip())
filename = text_to_speech(conversations[-1]['content'].strip())
#print(f"Audio saved to: {filename}")
#with Board() as board:
# board.led.state = Led.ON
play_mp3(filename)
| [
"You are a intelligent assistant."
] |
2024-01-10 | Trina0224/chatGPT-Bot | Female_3Languages.py | import os
from google.cloud import texttospeech_v1
import io
import time
import threading
from google.oauth2 import service_account
#from google.cloud import speech
from google.cloud import speech_v1p1beta1 as speech
from aiy.board import Board,Led
from aiy.voice.audio import AudioFormat, play_wav, record_file, Recorder
import os
import openai
import pygame
User_Language_Code=''
def check_button_press():
with Board() as board:
#board.led.state = Led.ON
while True:
board.button.wait_for_press()
pygame.mixer.music.stop()
#board.led.state = Led.OFF
def text_to_speech(input_text, output_filename="output.mp3"):
# Assuming the credentials file is in the same directory
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "Your Google text2speech.json"
# Instantiates a client
client = texttospeech_v1.TextToSpeechClient()
# Set the text input to be synthesized
synthesis_input = texttospeech_v1.SynthesisInput(text=input_text)
# Determine the voice parameters based on User_Language_Code
if 'en' in User_Language_Code:
language_code = "en-US"
voice_name = "en-US-Wavenet-F"
elif 'jp' in User_Language_Code:
language_code = "ja-JP"
voice_name = "ja-JP-Wavenet-B"
else: # default to 'cmn' settings
language_code = "cmn-TW"
voice_name = "cmn-TW-Wavenet-A"
# Voice parameters
voice = texttospeech_v1.VoiceSelectionParams(
language_code=language_code,
name=voice_name
)
# Audio format
audio_config = texttospeech_v1.AudioConfig(
audio_encoding=texttospeech_v1.AudioEncoding.MP3,
speaking_rate=1.3
)
# Make the API call
response = client.synthesize_speech(
input=synthesis_input, voice=voice, audio_config=audio_config
)
# Save the response to an output file
with open(output_filename, 'wb') as out:
out.write(response.audio_content)
return output_filename
#return output_filename
def play_mp3(filename):
pygame.mixer.init()
pygame.mixer.music.load(filename)
pygame.mixer.music.play()
# Start the button monitoring thread right after
button_thread = threading.Thread(target=check_button_press)
button_thread.start()
while pygame.mixer.music.get_busy():
pygame.time.Clock().tick(10)
def recognize_audio(filename):
global User_Language_Code
with io.open(filename, 'rb') as f:
content = f.read()
audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
audio_channel_count=2,
sample_rate_hertz=44100,
language_code = 'zh-TW',
alternative_language_codes = ['en-US','ja-JP']
)
response = client.recognize(config=config, audio=audio)
os.remove(filename)
if response.results:
User_Language_Code = response.results[0].language_code
print('The user is speaking '+ User_Language_Code + '\n' )
#print(response)
print('{0}: {1}\n'.format(response.results[0].language_code, response.results[0].alternatives[0].transcript))
return response.results[0].alternatives[0].transcript
return ""
def record_and_recognize():
filename = 'recording.wav'
with Board() as board:
print('Press button to start recording.')
board.led.state = Led.ON
board.button.wait_for_press()
done = threading.Event()
board.button.when_pressed = done.set
def wait():
start = time.monotonic()
while not done.is_set():
duration = time.monotonic() - start
print('Recording: %.02f seconds [Press button to stop]' % duration)
time.sleep(0.5)
record_file(AudioFormat.CD, filename=filename, wait=wait, filetype='wav')
board.led.state = Led.OFF
print('Sending audio for recognition...')
recognized_text = recognize_audio(filename)
return recognized_text
# Google Cloud Speech-to-Text client setup
client_file = 'Your Google speech2text.json'
credentials = service_account.Credentials.from_service_account_file(client_file)
client = speech.SpeechClient(credentials=credentials)
API_KEY = 'Your OpenAI Key'
openai.api_key = API_KEY
#messages = [ {"role": "system", "content":
# "You are a intelligent assistant."} ]
#model_id = 'gpt-4'
model_id = 'gpt-3.5-turbo'
def chatgpt_conversation(conversation_log):
response = openai.ChatCompletion.create(
model=model_id,
messages=conversation_log
)
conversation_log.append({
'role': response.choices[0].message.role,
'content': response.choices[0].message.content.strip()
})
return conversation_log
conversations = []
# role: system, user, assistant
conversations.append({'role': 'system', 'content': 'You are a intelligent assistant.'})
conversations = chatgpt_conversation(conversations)
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#filename = text_to_speech('{0}: {1}'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(f"Audio saved to: {filename}")
play_mp3("greeting_jp.mp3")
while True:
prompt = record_and_recognize()
conversations.append({'role': 'user', 'content': prompt})
conversations = chatgpt_conversation(conversations)
print()
print('{0}: {1}\n'.format(conversations[-1]['role'].strip(), conversations[-1]['content'].strip()))
#print(conversations[-1]['content'].strip())
filename = text_to_speech(conversations[-1]['content'].strip())
#print(f"Audio saved to: {filename}")
#with Board() as board:
# board.led.state = Led.ON
play_mp3(filename)
| [
"You are a intelligent assistant."
] |
2024-01-10 | sohyepargins/llamaIndexSamples | main_simscore.py | from dotenv import load_dotenv
import os
load_dotenv()
import pinecone
from llama_index import (
download_loader,
SimpleDirectoryReader,
ServiceContext,
StorageContext,
VectorStoreIndex
)
from llama_index.llms import OpenAI
from llama_index.vector_stores import PineconeVectorStore
from llama_index.callbacks import LlamaDebugHandler, CallbackManager
import streamlit as st
# To run this app from the terminal: streamlit run main.py
print("***Streamlit LlamaIndex Documentation Helper")
@st.cache_resource(show_spinner=False)
def get_index()-> VectorStoreIndex:
pinecone.init(api_key=os.environ["PINECONE_API_KEY"],
environment=os.environ["PINECONE_ENV"])
pinecone_index = pinecone.Index(index_name="llamaindex-documentation-helper")
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
#Add callbacks to the ServiceContext
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager= CallbackManager(handlers=[llama_debug])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)
return VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
index = get_index()
st.set_page_config(page_title="Chat with LlamaIndex docs powered by llamaIndex ",
page_icon="🦙",
layout = "centered",
initial_sidebar_state="auto",
menu_items=None
)
st.title("Chat with LlamaIndex docs 🦙")
if "chat_engine" not in st.session_state.keys():
st.session_state.chat_engine = index.as_chat_engine(chat_mode="react", verbose = True)
if "messages" not in st.session_state.keys():
st.session_state.messages=[
{
"role": "assistant",
"content": "Ask me a question about LlamaIndex's open source python library."
}
]
if prompt := st.chat_input("Your question"):
st.session_state.messages.append({
"role": "user",
"content": prompt
})
for message in st.session_state.messages:
with st.chat_message(message["role"]):
st.write(message["content"])
if st.session_state.messages[-1]["role"] != "assistant":
with st.chat_message("assistant"):
with st.spinner("Thinking..."):
response = st.session_state.chat_engine.chat(message=prompt)
st.write(response.response)
nodes = [node for node in response.source_nodes]
for col, node, i in zip(st.columns(len(nodes)), nodes, range(len(nodes))):
with col:
st.header(f"Source Node {i+1}: score = {node.score}")
st.write(node.text)
message = {
"role": "assistant",
"content":response.response
}
st.session_state.messages.append(message)
| [
"Ask me a question about LlamaIndex's open source python library."
] |
2024-01-10 | sohyepargins/llamaIndexSamples | main_pp.py | from dotenv import load_dotenv
import os
load_dotenv()
import pinecone
from llama_index import (
download_loader,
SimpleDirectoryReader,
ServiceContext,
StorageContext,
VectorStoreIndex
)
from llama_index.llms import OpenAI
from llama_index.vector_stores import PineconeVectorStore
from llama_index.callbacks import LlamaDebugHandler, CallbackManager
from llama_index.postprocessor import SentenceEmbeddingOptimizer
import streamlit as st
#Add callbacks to the ServiceContext
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager= CallbackManager(handlers=[llama_debug])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)
# To run this app from the terminal: streamlit run main.py
print("***Streamlit LlamaIndex Documentation Helper")
@st.cache_resource(show_spinner=False)
def get_index()-> VectorStoreIndex:
pinecone.init(api_key=os.environ["PINECONE_API_KEY"],
environment=os.environ["PINECONE_ENV"])
pinecone_index = pinecone.Index(index_name="llamaindex-documentation-helper")
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
return VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context)
index = get_index()
st.set_page_config(page_title="Chat with LlamaIndex docs powered by llamaIndex ",
page_icon="🦙",
layout = "centered",
initial_sidebar_state="auto",
menu_items=None
)
st.title("Chat with LlamaIndex docs 🦙")
if "chat_engine" not in st.session_state.keys():
postprocessor = SentenceEmbeddingOptimizer(
embed_model = service_context.embed_model,
percentile_cutoff = 0.5,
threshold_cutoff = 0.7
)
st.session_state.chat_engine = index.as_chat_engine(
chat_mode="react", verbose = True, node_postprocessors=[postprocessor]
)
if "messages" not in st.session_state.keys():
st.session_state.messages=[
{
"role": "assistant",
"content": "Ask me a question about LlamaIndex's open source python library."
}
]
if prompt := st.chat_input("Your question"):
st.session_state.messages.append({
"role": "user",
"content": prompt
})
for message in st.session_state.messages:
with st.chat_message(message["role"]):
st.write(message["content"])
if st.session_state.messages[-1]["role"] != "assistant":
with st.chat_message("assistant"):
with st.spinner("Thinking..."):
response = st.session_state.chat_engine.chat(message=prompt)
st.write(response.response)
nodes = [node for node in response.source_nodes]
for col, node, i in zip(st.columns(len(nodes)), nodes, range(len(nodes))):
with col:
st.header(f"Source Node {i+1}: score = {node.score}")
st.write(node.text)
message = {
"role": "assistant",
"content":response.response
}
st.session_state.messages.append(message)
| [
"Ask me a question about LlamaIndex's open source python library."
] |
2024-01-10 | anthropics/anthropic-tools | tool_use_package~tools~search~brave_search_tool.py | import os
from typing import Optional
from anthropic import Anthropic
import requests
import asyncio
from tenacity import retry, wait_exponential, stop_after_attempt
from bs4 import BeautifulSoup
import aiohttp
# Import our base search tool from which all other search tools inherit. We use this pattern to make building new search tools easy.
from .base_search_tool import BaseSearchResult, BaseSearchTool
# Brave Searcher
class BraveAPI:
def __init__(self, api_key: str):
self.api_key = api_key
@retry(wait=wait_exponential(multiplier=1, min=4, max=10), stop=stop_after_attempt(10))
def search(self, query: str) -> dict:
headers = {"Accept": "application/json", "X-Subscription-Token": self.api_key}
resp = requests.get(
"https://api.search.brave.com/res/v1/web/search",
params={"q": query,
"count": 20 # Max number of results to return, can filter down later
},
headers=headers,
timeout=60
)
if resp.status_code != 200:
print(f"Search request failed: {resp.text}")
return {}
return resp.json()
class BraveSearchTool(BaseSearchTool):
def __init__(self,
name="search_brave",
description="The search engine will search using the Brave search engine for web pages similar to your query. It returns for each page its url and the full page content. Use this tool if you want to make web searches about a topic.",
parameters=[
{"name": "query", "type": "str", "description": "The search query to enter into the Brave search engine."},
{"name": "n_search_results_to_use", "type": "int", "description": "The number of search results to return, where each search result is a website page."}
],
brave_api_key=os.environ['BRAVE_API_KEY'],
truncate_to_n_tokens=5000):
"""
:param name: The name of the tool.
:param description: The description of the tool.
:param parameters: The parameters for the tool.
:param brave_api_key: The Brave API key to use for searching. Get one at https://api.search.brave.com/register.
:param truncate_to_n_tokens: The number of tokens to truncate web page content to.
"""
super().__init__(name, description, parameters)
self.api = BraveAPI(brave_api_key)
self.truncate_to_n_tokens = truncate_to_n_tokens
if truncate_to_n_tokens is not None:
self.tokenizer = Anthropic().get_tokenizer()
def parse_faq(self, faq: dict) -> BaseSearchResult:
"""
https://api.search.brave.com/app/documentation/responses#FAQ
"""
snippet = (
f"FAQ Title: {faq.get('title', 'Unknown')}"
f"Question: {faq.get('question', 'Unknown')}"
f"Answer: {faq.get('answer', 'Unknown')}"
)
return BaseSearchResult(
source=faq.get("url", ""),
content=snippet
)
def parse_news(self, news_item: dict) -> Optional[BaseSearchResult]:
"""
https://api.search.brave.com/app/documentation/responses#News
"""
article_description: str = news_item.get("description", "")
# Throw out items where the description is tiny or doesn't exist.
if len(article_description) < 5:
return None
snippet = (
f"News Article Title: {news_item.get('title', 'Unknown')}"
f"News Article Description: {article_description}"
f"News Article Age: {news_item.get('age', 'Unknown')}"
f"News Article Source: {news_item.get('meta_url', {}).get('hostname', 'Unknown')}"
)
return BaseSearchResult(
source=news_item.get("url", ""),
content=snippet
)
@staticmethod
def remove_strong(web_description: str):
# this is for cleaning up the brave web descriptions
return (
web_description.replace("<strong>", "")
.replace("</strong>", "")
.replace("'", "'")
)
async def parse_web(self, web_item: dict, query: str) -> BaseSearchResult:
"""
https://api.search.brave.com/app/documentation/responses#Search
"""
url = web_item.get("url", "")
title = web_item.get("title", "")
description = self.remove_strong(web_item.get("description", ""))
snippet = (
f"Web Page Title: {title}"
f"Web Page URL: {url}"
f"Web Page Description: {description}"
)
try:
content = await self.__get_url_content(url)
if not content:
return BaseSearchResult(
source=url,
content=""
)
snippet+="\nWeb Page Content: " + self.truncate_page_content(content)
except:
print(f"Failed to scrape {url}")
return BaseSearchResult(
source=url,
content=snippet
)
def truncate_page_content(self, page_content: str):
if self.truncate_to_n_tokens is None:
return page_content.strip()
else:
return self.tokenizer.decode(self.tokenizer.encode(page_content).ids[:self.truncate_to_n_tokens]).strip()
def raw_search(self, query: str, n_search_results_to_use: int) -> list[BaseSearchResult]:
"""
Run a search using the BraveAPI and return search results. Here are some details on the Brave API:
Each search call to the Brave API returns the following fields:
- faq: Frequently asked questions that are relevant to the search query (only on paid Brave tier).
- news: News results relevant to the query.
- web: Web search results relevant to the query.
- [Thrown Out] videos: Videos relevant to the query.
- [Thrown Out] locations: Places of interest (POIs) relevant to location sensitive queries.
- [Thrown Out] infobox: Aggregated information on an entity showable as an infobox.
- [Thrown Out] discussions: Discussions clusters aggregated from forum posts that are relevant to the query.
There is also a `mixed` key, which tells us the ranking of the search results.
We may throw some of these back in, in the future. But we're just going to document the behavior here for now.
"""
# Run the search
search_response = self.api.search(query)
print("Query: ", query)
print("Searching...")
# Order everything properly
correct_ordering = search_response.get("mixed", {}).get("main", [])
# Extract the results
faq_items = search_response.get("faq", {}).get("results", [])
news_items = search_response.get("news", {}).get("results", [])
web_items = search_response.get("web", {}).get("results", [])
# Get the search results
search_results: list[BaseSearchResult] = []
async_web_parser_loop = asyncio.get_event_loop()
web_parsing_tasks = [] # We'll queue up the web parsing tasks here, since they're costly
for item in correct_ordering:
item_type = item.get("type")
if item_type == "web":
web_item = web_items.pop(0)
## We'll add a placeholder search result here, and then replace it with the parsed web result later
url = web_item.get("url", "")
placeholder_search_result = BaseSearchResult(
source=url,
content=f"Web Page Title: {web_item.get('title', '')}\nWeb Page URL: {url}\nWeb Page Description: {self.remove_strong(web_item.get('description', ''))}"
)
search_results.append(placeholder_search_result)
## Queue up the web parsing task
task = async_web_parser_loop.create_task(self.parse_web(web_item, query))
web_parsing_tasks.append(task)
elif item_type == "news":
parsed_news = self.parse_news(news_items.pop(0))
if parsed_news is not None:
search_results.append(parsed_news)
elif item_type == "faq":
parsed_faq = self.parse_faq(faq_items.pop(0))
search_results.append(parsed_faq)
if len(search_results) >= n_search_results_to_use:
break
## Replace the placeholder search results with the parsed web results
web_results = async_web_parser_loop.run_until_complete(asyncio.gather(*web_parsing_tasks))
web_results_urls = [web_result.source for web_result in web_results]
for i, search_result in enumerate(search_results):
url = search_result.source
if url in web_results_urls:
search_results[i] = web_results[web_results_urls.index(url)]
print("Reading content from: ", url)
return search_results
async def __get_url_content(self, url: str) -> Optional[str]:
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
if response.status == 200:
html = await response.text()
soup = BeautifulSoup(html, 'html.parser')
text = soup.get_text(strip=True, separator='\n')
return text
return None
if __name__ == "__main__":
from ...tool_user import ToolUser
tool_user = ToolUser([BraveSearchTool()])
messages = [{"role":"human", "content":"Who scored the most goals in the 2023 Women's World Cup?"}]
print(tool_user.use_tools(messages=messages, execution_mode="automatic")) | [
"Who scored the most goals in the 2023 Women's World Cup?"
] |
2024-01-10 | anthropics/anthropic-tools | tool_use_package~tool_user.py | from anthropic import Anthropic
from anthropic_bedrock import AnthropicBedrock
import re
import builtins
import ast
from .prompt_constructors import construct_use_tools_prompt, construct_successful_function_run_injection_prompt, construct_error_function_run_injection_prompt, construct_prompt_from_messages
class ToolUser:
"""
A class to interact with the Claude API while giving it the ability to use tools.
Attributes:
-----------
- tools (list): A list of tool instances that this ToolUser instance can interact with. These tool instances should be subclasses of BaseTool.
- temperature (float, optional): The temperature parameter to be passed to Claude. Default is 0.
- max_retries (int, optional): The maximum number of times to retry in case of an error while interacting with a tool. Default is 3.
- client: An instance of the Anthropic/AWS Bedrock API client. You must have set your Anthropic API Key or AWS Bedrock API keys as environment variables.
- model: The name of the model (default Claude-2.1).
- current_prompt (str): The current prompt being used in the interaction. Is added to as Claude interacts with tools.
- current_num_retries (int): The current number of retries that have been attempted. Resets to 0 after a successful function call.
Note/TODOs:
-----
The class interacts with the model using formatted prompts and expects the model to respond using specific XML tags.
Certain characters such as angle brackets inside parameter values will currently break the class. These issues are called out in the code.
Usage:
------
To use this class, you should instantiate it with a list of tools (tool_user = ToolUser(tools)). You then interact with it as you would the normal claude API, by providing a prompt to tool_user.use_tools(prompt) and expecting a completion in return.
"""
def __init__(self, tools, temperature=0, max_retries=3, first_party=True):
self.tools = tools
self.temperature = temperature
self.max_retries = max_retries
if first_party:
self.client = Anthropic()
self.model = "claude-2.1"
else:
self.client = AnthropicBedrock()
self.model = "anthropic.claude-v2:1"
self.current_prompt = None
self.current_num_retries = 0
def use_tools(self, messages, verbose=0, execution_mode="manual", max_tokens_to_sample=2000, temperature=1):
"""
Main method for interacting with an instance of ToolUser. Calls Claude with the given prompt and tools and returns the final completion from Claude after using the tools.
- execution_mode (str, optional): If 'manual', makes a single call to Claude and returns either the parsed tool inputs or the assistant message without running any tools (atomic function calling). If 'automatic', Claude's tool calls are executed automatically and the loop continues until Claude produces a final answer, which is returned as a completion (agentic function calling). Defaults to 'manual'.
"""
if execution_mode not in ["manual", "automatic"]:
raise ValueError(f"Error: execution_mode must be either 'manual' or 'automatic'. Provided Value: {execution_mode}")
prompt = ToolUser._construct_prompt_from_messages(messages)
constructed_prompt = construct_use_tools_prompt(prompt, self.tools, messages[-1]['role'])
# print(constructed_prompt)
self.current_prompt = constructed_prompt
if verbose == 1:
print("----------CURRENT PROMPT----------")
print(self.current_prompt)
if verbose == 0.5:
print("----------INPUT (TO SEE SYSTEM PROMPT WITH TOOLS SET verbose=1)----------")
print(prompt)
completion = self._complete(self.current_prompt, max_tokens_to_sample=max_tokens_to_sample, temperature=temperature)
if completion.stop_reason == 'stop_sequence':
if completion.stop == '</function_calls>': # Would be good to combine this with above if statement if completion.stop is guaranteed to be present
formatted_completion = f"{completion.completion}</function_calls>"
else:
formatted_completion = completion.completion
else:
formatted_completion = completion.completion
if verbose == 1:
print("----------COMPLETION----------")
print(formatted_completion)
if verbose == 0.5:
print("----------CLAUDE GENERATION----------")
print(formatted_completion)
if execution_mode == 'manual':
parsed_function_calls = self._parse_function_calls(formatted_completion, False)
if parsed_function_calls['status'] == 'DONE':
res = {"role": "assistant", "content": formatted_completion}
elif parsed_function_calls['status'] == 'ERROR':
res = {"status": "ERROR", "error_message": parsed_function_calls['message']}
elif parsed_function_calls['status'] == 'SUCCESS':
res = {"role": "tool_inputs", "content": parsed_function_calls['content'], "tool_inputs": parsed_function_calls['invoke_results']}
else:
raise ValueError("Unrecognized status in parsed_function_calls.")
return res
while True:
parsed_function_calls = self._parse_function_calls(formatted_completion, True)
if parsed_function_calls['status'] == 'DONE':
return formatted_completion
claude_response = self._construct_next_injection(parsed_function_calls)
if verbose == 0.5:
print("----------RESPONSE TO FUNCTION CALLS (fed back into Claude)----------")
print(claude_response)
self.current_prompt = (
f"{self.current_prompt}"
f"{formatted_completion}\n\n"
f"{claude_response}"
)
if verbose == 1:
print("----------CURRENT PROMPT----------")
print(self.current_prompt)
completion = self._complete(self.current_prompt, max_tokens_to_sample=max_tokens_to_sample, temperature=temperature)
if completion.stop_reason == 'stop_sequence':
if completion.stop == '</function_calls>': # Would be good to combine this with above if statement if completion.stop is guaranteed to be present
formatted_completion = f"{completion.completion}</function_calls>"
else:
formatted_completion = completion.completion
else:
formatted_completion = completion.completion
if verbose == 1:
print("----------CLAUDE GENERATION----------")
print(formatted_completion)
if verbose == 0.5:
print("----------CLAUDE GENERATION----------")
print(formatted_completion)
def _parse_function_calls(self, last_completion, evaluate_function_calls):
"""Parses the function calls from the model's response if present, validates their format, and invokes them."""
# Check if the format of the function call is valid
invoke_calls = ToolUser._function_calls_valid_format_and_invoke_extraction(last_completion)
if not invoke_calls['status']:
return {"status": "ERROR", "message": invoke_calls['reason']}
if not invoke_calls['invokes']:
return {"status": "DONE"}
# Parse the query's invoke calls and get their results
invoke_results = []
for invoke_call in invoke_calls['invokes']:
# Find the correct tool instance
tool_name = invoke_call['tool_name']
tool = next((t for t in self.tools if t.name == tool_name), None)
if tool is None:
return {"status": "ERROR", "message": f"No tool named <tool_name>{tool_name}</tool_name> available."}
# Validate the provided parameters
parameters = invoke_call['parameters_with_values']
parameter_names = [p['name'] for p in tool.parameters]
provided_names = [p[0] for p in parameters]
invalid = set(provided_names) - set(parameter_names)
missing = set(parameter_names) - set(provided_names)
if invalid:
return {"status": "ERROR", "message": f"Invalid parameters {invalid} for <tool_name>{tool_name}</tool_name>."}
if missing:
return {"status": "ERROR", "message": f"Missing required parameters {parameter_names} for <tool_name>{tool_name}</tool_name>."}
# Convert values and call tool
converted_params = {}
for name, value in parameters:
param_def = next(p for p in tool.parameters if p['name'] == name)
type_ = param_def['type']
converted_params[name] = ToolUser._convert_value(value, type_)
if not evaluate_function_calls:
invoke_results.append({"tool_name": tool_name, "tool_arguments": converted_params})
else:
invoke_results.append({"tool_name": tool_name, "tool_result": tool.use_tool(**converted_params)})
return {"status": "SUCCESS", "invoke_results": invoke_results, "content": invoke_calls['prefix_content']}
def _construct_next_injection(self, invoke_results):
"""Constructs the next prompt based on the results of the previous function call invocations."""
if invoke_results['status'] == 'SUCCESS':
self.current_num_retries = 0
return construct_successful_function_run_injection_prompt(invoke_results['invoke_results'])
elif invoke_results['status'] == 'ERROR':
if self.current_num_retries == self.max_retries:
raise ValueError("Hit maximum number of retries attempting to use tools.")
self.current_num_retries +=1
return construct_error_function_run_injection_prompt(invoke_results['message'])
else:
raise ValueError(f"Unrecognized status from invoke_results, {invoke_results['status']}.")
def _complete(self, prompt, max_tokens_to_sample, temperature):
completion = self.client.completions.create(
model=self.model,
max_tokens_to_sample=max_tokens_to_sample,
temperature=temperature,
stop_sequences=["</function_calls>", "\n\nHuman:"],
prompt=prompt
)
return completion
@staticmethod
def _function_calls_valid_format_and_invoke_extraction(last_completion):
"""Check if the function call follows a valid format and extract the attempted function calls if so. Does not check if the tools actually exist or if they are called with the requisite params."""
# Check if there are any of the relevant XML tags present that would indicate an attempted function call.
function_call_tags = re.findall(r'<function_calls>|</function_calls>|<invoke>|</invoke>|<tool_name>|</tool_name>|<parameters>|</parameters>', last_completion, re.DOTALL)
if not function_call_tags:
# TODO: Should we return something in the text to claude indicating that it did not do anything to indicate an attempted function call (in case it was in fact trying to and we missed it)?
return {"status": True, "invokes": []}
# Extract content between <function_calls> tags. If there are multiple we will only parse the first and ignore the rest, regardless of their correctness.
match = re.search(r'<function_calls>(.*)</function_calls>', last_completion, re.DOTALL)
if not match:
return {"status": False, "reason": "No valid <function_calls></function_calls> tags present in your query."}
func_calls = match.group(1)
prefix_match = re.search(r'^(.*?)<function_calls>', last_completion, re.DOTALL)
if prefix_match:
func_call_prefix_content = prefix_match.group(1)
# Check for invoke tags
# TODO: Is this faster or slower than bundling with the next check?
invoke_regex = r'<invoke>.*?</invoke>'
if not re.search(invoke_regex, func_calls, re.DOTALL):
return {"status": False, "reason": "Missing <invoke></invoke> tags inside of <function_calls></function_calls> tags."}
# Check each invoke contains tool name and parameters
invoke_strings = re.findall(invoke_regex, func_calls, re.DOTALL)
invokes = []
for invoke_string in invoke_strings:
tool_name = re.findall(r'<tool_name>.*?</tool_name>', invoke_string, re.DOTALL)
if not tool_name:
return {"status": False, "reason": "Missing <tool_name></tool_name> tags inside of <invoke></invoke> tags."}
if len(tool_name) > 1:
return {"status": False, "reason": "More than one tool_name specified inside single set of <invoke></invoke> tags."}
parameters = re.findall(r'<parameters>.*?</parameters>', invoke_string, re.DOTALL)
if not parameters:
return {"status": False, "reason": "Missing <parameters></paraeters> tags inside of <invoke></invoke> tags."}
if len(parameters) > 1:
return {"status": False, "reason": "More than one set of <parameters></parameters> tags specified inside single set of <invoke></invoke> tags."}
# Check for balanced tags inside parameters
# TODO: This will fail if the parameter value contains <> pattern or if there is a parameter called parameters. Fix that issue.
tags = re.findall(r'<.*?>', parameters[0].replace('<parameters>', '').replace('</parameters>', ''), re.DOTALL)
if len(tags) % 2 != 0:
return {"status": False, "reason": "Imbalanced tags inside <parameters></parameters> tags."}
# Loop through the tags and check if each even-indexed tag matches the tag in the position after it (with the / of course). If valid store their content for later use.
# TODO: Add a check to make sure there aren't duplicates provided of a given parameter.
parameters_with_values = []
for i in range(0, len(tags), 2):
opening_tag = tags[i]
closing_tag = tags[i+1]
closing_tag_without_second_char = closing_tag[:1] + closing_tag[2:]
if closing_tag[1] != '/' or opening_tag != closing_tag_without_second_char:
return {"status": False, "reason": "Non-matching opening and closing tags inside <parameters></parameters> tags."}
parameters_with_values.append((opening_tag[1:-1], re.search(rf'{opening_tag}(.*?){closing_tag}', parameters[0], re.DOTALL).group(1)))
# Parse out the full function call
invokes.append({"tool_name": tool_name[0].replace('<tool_name>', '').replace('</tool_name>', ''), "parameters_with_values": parameters_with_values})
return {"status": True, "invokes": invokes, "prefix_content": func_call_prefix_content}
# TODO: This only handles the outer-most type. Nested types are an unimplemented issue at the moment.
@staticmethod
def _convert_value(value, type_str):
"""Convert a string value into its appropriate Python data type based on the provided type string.
Args:
value: the value to convert
type_str: the type to convert the value to
Returns:
The value converted into the requested type or the original value
if the conversion failed.
"""
if type_str in ("list", "dict"):
return ast.literal_eval(value)
type_class = getattr(builtins, type_str)
try:
return type_class(value)
except ValueError:
return value
@staticmethod
def _construct_prompt_from_messages(messages):
return construct_prompt_from_messages(messages)
| [
"content",
"prefix_content"
] |
2024-01-10 | anthropics/anthropic-tools | tool_use_package~tools~search~vector_search~utils.py | import os
import json
from typing import Optional
import anthropic
from dataclasses import dataclass
from tqdm import tqdm
from .constants import DEFAULT_EMBEDDER
from .embedders.base_embedder import BaseEmbedder
from .vectorstores.base_vector_store import BaseVectorStore
from .embedders.huggingface import HuggingFaceEmbedder
# Chunking and uploading
@dataclass
class Document:
"""
A single document.
"""
text: str
metadata: Optional[dict] = None
# Embedding and uploading
def embed_and_upload(
input_file: str,
vectorstore: BaseVectorStore,
embedder: Optional[BaseEmbedder] = None,
tokens_per_chunk: int = 384,
stride: Optional[int] = None,
batch_size: int = 128) -> None:
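# Expects a .jsonl input where each line looks like (illustrative): {"text": "some document text", "metadata": {"source": "..."}}
# The "metadata" field is optional; only "text" is required.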
if embedder is None:
embedder = HuggingFaceEmbedder(os.environ['HUGGINGFACE_API_KEY'], DEFAULT_EMBEDDER)
# Load the documents
documents: list[Document] = []
file_type = input_file.split(".")[-1]
if file_type == "jsonl":
with open(input_file, "r") as f:
for i, line in enumerate(f):
data = json.loads(line)
text = data["text"]
if text is None:
raise ValueError(f"Invalid jsonl file. 'text' key is missing on line {i}")
metadata = data.get("metadata", None)
doc = Document(text=text, metadata=metadata)
documents.append(doc)
else:
raise ValueError("Invalid file_type. Supported types: 'jsonl'")
# Chunk the documents
chunked_documents = []
for document in documents:
chunks = chunk_document(document, tokens_per_chunk, stride)
chunked_documents += chunks
# Embed and upload the documents
bar = tqdm(total=len(chunked_documents), desc="Embedding and uploading documents", leave=True)
for i in range(0, len(chunked_documents), batch_size):
batch = chunked_documents[i:i + batch_size]
batch_embeddings = embedder.embed_batch([doc.text for doc in batch])
vectorstore.upsert(batch_embeddings)
bar.update(len(batch))
# Chunking documents into smaller chunks
def chunk_document(document: Document, tokens_per_chunk: int, stride: Optional[int] = None) -> list[Document]:
if stride is None:
stride = tokens_per_chunk
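# With the default stride the chunks are adjacent and non-overlapping; passing a stride smaller than
# tokens_per_chunk produces overlapping chunks.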
tok = anthropic.Anthropic().get_tokenizer()
raw_text = document.text
tokenized_text = tok.encode(raw_text).ids
chunks = []
for i in range(0, len(tokenized_text), stride):
chunk = tokenized_text[i:i + tokens_per_chunk]
chunk_text = tok.decode(chunk)
chunk_document = Document(text=chunk_text, metadata=document.metadata)
chunks.append(chunk_document)
return chunks
## Elasticsearch uploading
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
def upload_to_elasticsearch(
input_file: str,
index_name: str,
cloud_id: str,
api_key_id: str,
api_key: str) -> None:
# Load the documents
documents: list[Document] = []
file_type = input_file.split(".")[-1]
if file_type == "jsonl":
with open(input_file, "r") as f:
for i, line in enumerate(f):
data = json.loads(line)
text = data["text"]
if text is None:
raise ValueError(f"Invalid jsonl file. 'text' key is missing on line {i}")
metadata = data.get("metadata", None)
doc = Document(text=text, metadata=metadata)
documents.append(doc)
else:
raise ValueError("Invalid file_type. Supported types: 'jsonl'")
# Upload the documents
## Create the Elasticsearch client
es = Elasticsearch(
cloud_id=cloud_id,
api_key=(api_key_id, api_key),
)
## Upload the documents
def docs_to_generator():
for i, document in enumerate(documents):
yield {
"_index": index_name,
"_id": i,
"text": document.text,
"metadata": document.metadata
}
bulk(es, docs_to_generator())
es.indices.refresh(index=index_name) | [] |
2024-01-10 | anthropics/anthropic-tools | tool_use_package~tools~search~wikipedia_search_tool.py | # Import required external packages
import wikipedia
from anthropic import Anthropic
from dataclasses import dataclass
# Import our base search tool from which all other search tools inherit. We use this pattern to make building new search tools easy.
from .base_search_tool import BaseSearchResult, BaseSearchTool
# Define our custom Wikipedia Search Tool by inheriting BaseSearchTool (which itself inherits BaseTool) and defining its use_tool() method.
class WikipediaSearchTool(BaseSearchTool):
def __init__(self,
name="search_wikipedia",
description="The search_wikipedia tool will exclusively search over Wikipedia for pages similar to your query. It returns for each page its title and the full page content. Use this tool to get up-to-date and comprehensive information on a topic. Queries made to this tool should be as atomic as possible. The tool provides broad topic keywords rather than niche search topics. For example, if the query is 'Can you tell me about Odysseus's journey in the Odyssey?' the search query you make should be 'Odyssey'. Here's another example: if the query is 'Who created the first neural network?', your first query should be 'neural network'. As you can see, these queries are quite short. Think generalized keywords, not phrases.",
parameters=[
{"name": "query", "type": "str", "description": "The search term to enter into the Wikipedia search engine. Remember to use broad topic keywords."},
{"name": "n_search_results_to_use", "type": "int", "description": "The number of search results to return, where each search result is a Wikipedia page."}
],
truncate_to_n_tokens=5000):
super().__init__(name, description, parameters)
self.truncate_to_n_tokens = truncate_to_n_tokens
if truncate_to_n_tokens is not None:
self.tokenizer = Anthropic().get_tokenizer()
def raw_search(self, query: str, n_search_results_to_use: int):
print("Query: ", query)
results = wikipedia.search(query)
search_results = []
for result in results:
if len(search_results) >= n_search_results_to_use:
break
try:
page = wikipedia.page(result)
except:
# the Wikipedia API is a little flaky, so we just skip over pages that fail to load
continue
search_results.append(BaseSearchResult(content=self.truncate_page_content(page.content), source=page.url))
print("Reading content from: ", page.url)
return search_results
def truncate_page_content(self, page_content: str):
if self.truncate_to_n_tokens is None:
return page_content.strip()
else:
return self.tokenizer.decode(self.tokenizer.encode(page_content).ids[:self.truncate_to_n_tokens]).strip()
if __name__ == "__main__":
from ...tool_user import ToolUser
tool_user = ToolUser([WikipediaSearchTool()])
messages = [{"role":"human", "content":"Can you teach me about the Starship test flight?"}]
print(tool_user.use_tools(messages=messages, execution_mode="automatic"))
| [
"Can you teach me about the Starship test flight?"
] |
2024-01-10 | anthropics/anthropic-tools | tool_use_package~tools~search~elasticsearch_search_tool.py | from anthropic import Anthropic
from elasticsearch import Elasticsearch
# Import our base search tool from which all other search tools inherit. We use this pattern to make building new search tools easy.
from .base_search_tool import BaseSearchResult, BaseSearchTool
# Elasticsearch Searcher Tool
class ElasticsearchSearchTool(BaseSearchTool):
def __init__(self,
name,
description,
parameters,
elasticsearch_cloud_id,
elasticsearch_api_key_id,
elasticsearch_api_key,
elasticsearch_index,
truncate_to_n_tokens = 5000):
"""
:param name: The name of the tool.
:param description: The description of the tool.
:param parameters: The parameters for the tool.
:param elasticsearch_cloud_id: The cloud id for the Elasticsearch index.
:param elasticsearch_api_key_id: The api key id for the Elasticsearch index.
:param elasticsearch_api_key: The api key for the Elasticsearch index.
:param elasticsearch_index: The index to search over.
:param truncate_to_n_tokens: The number of tokens to truncate the page content to. If None, the full page content is returned.
"""
super().__init__(name, description, parameters)
self.index = elasticsearch_index
self.cloud_id = elasticsearch_cloud_id
self.api_key_id = elasticsearch_api_key_id
self.api_key = elasticsearch_api_key
self._connect_to_elasticsearch()
self.truncate_to_n_tokens = truncate_to_n_tokens
if truncate_to_n_tokens is not None:
self.tokenizer = Anthropic().get_tokenizer()
def _connect_to_elasticsearch(self):
self.client = Elasticsearch(
cloud_id=self.cloud_id,
api_key=(self.api_key_id, self.api_key)
)
if not self.client.indices.exists(index=self.index):
raise ValueError(f"Elasticsearch Index {self.index} does not exist.")
index_mapping = self.client.indices.get_mapping(index=self.index)
if "text" not in index_mapping.body[self.index]["mappings"]["properties"].keys():
raise ValueError(f"Index {self.index} does not have a field called 'text'.")
def truncate_page_content(self, page_content: str) -> str:
if self.truncate_to_n_tokens is None:
return page_content.strip()
else:
return self.tokenizer.decode(self.tokenizer.encode(page_content).ids[:self.truncate_to_n_tokens]).strip()
def raw_search(self, query: str, n_search_results_to_use: int) -> list[BaseSearchResult]:
results = self.client.search(index=self.index,
query={"match": {"text": query}})
search_results: list[BaseSearchResult] = []
for result in results["hits"]["hits"]:
if len(search_results) >= n_search_results_to_use:
break
content = self.truncate_page_content(result["_source"]["text"])
search_results.append(BaseSearchResult(source=str(hash(content)), content=content))
return search_results | [] |
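# A minimal usage sketch for the ElasticsearchSearchTool above. The tool name,
# description, parameters and every Elasticsearch credential below are hypothetical
# placeholders, not values taken from the original repository.
search_tool = ElasticsearchSearchTool(
    name="search_documents",
    description="Searches an Elasticsearch index and returns matching documents.",
    parameters=[
        {"name": "query", "type": "str", "description": "The search query to run against the index."},
        {"name": "n_search_results_to_use", "type": "int", "description": "The number of hits to return."}
    ],
    elasticsearch_cloud_id="<CLOUD_ID>",
    elasticsearch_api_key_id="<API_KEY_ID>",
    elasticsearch_api_key="<API_KEY>",
    elasticsearch_index="my-documents",
)
results = search_tool.raw_search("tool use with Claude", n_search_results_to_use=3)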
2024-01-10 | SivilTaram/GPT-classification-example | opengpt_classifier.py | from pytorch_pretrained_bert.modeling_openai import OpenAIGPTPreTrainedModel, OpenAIGPTLMHead, OpenAIGPTModel
from pytorch_pretrained_bert import BertForSequenceClassification
import torch.nn as nn
import torch
class OpenAIGPTForClassification(OpenAIGPTPreTrainedModel):
"""OpenAI GPT model with a Language Modeling and a Multiple Choice head ("Improving Language Understanding by Generative Pre-Training").
OpenAI GPT use a single embedding matrix to store the word and special embeddings.
Special tokens embeddings are additional tokens that are not pre-trained: [SEP], [CLS]...
Special tokens need to be trained during the fine-tuning if you use them.
The number of special embeddings can be controlled using the `set_num_special_tokens(num_special_tokens)` function.
The embeddings are ordered as follows in the token embeddings matrix:
[0, ----------------------
... -> word embeddings
config.vocab_size - 1, ______________________
config.vocab_size,
... -> special embeddings
config.vocab_size + config.n_special - 1] ______________________
where total_tokens_embeddings can be obtained as config.total_tokens_embeddings and is:
total_tokens_embeddings = config.vocab_size + config.n_special
You should use the associate indices to index the embeddings.
Params:
config: a OpenAIGPTConfig class instance with the configuration to build a new model
num_labels: the number of classification labels
Inputs:
`input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the BPE token
indices selected in the range [0, total_tokens_embeddings[
`input_mask`: a torch.LongTensor of shape [batch_size, sequence_length] with 1 for real tokens and 0 for padding.
The hidden state of the last real (non-padding) token in each sequence is fed to the classifier.
`labels`: optional classification labels, accepted for API compatibility but not used inside `forward`
(the caller computes the loss from the returned logits, see run_classifier.py).
`position_ids`: an optional torch.LongTensor with the same shape as input_ids
with the position indices (selected in the range [0, config.n_positions - 1[.
`token_type_ids`: an optional torch.LongTensor with the same shape as input_ids
You can use it to add a third type of embedding to each input token in the sequence
(the previous two being the word and position embeddings).
The input, position and token_type embeddings are summed inside the Transformer before the first
self-attention block.
Outputs:
`logits`: the classification logits as a torch.FloatTensor of size [batch_size, num_labels]
Example usage:
```python
# Already been converted into BPE token ids
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])  # (bsz, seq length)
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])  # (bsz, seq length), 1 = real token, 0 = padding
config = modeling_openai.OpenAIGPTConfig()
model = OpenAIGPTForClassification(config, num_labels=2)
logits = model(input_ids, input_mask)
```
"""
def __init__(self, config, num_labels):
super(OpenAIGPTForClassification, self).__init__(config)
self.transformer = OpenAIGPTModel(config)
self.classifier = nn.Linear(config.n_embd, num_labels)
self.classifier.weight.data.uniform_(-0.1, 0.1)
self.classifier.bias.data.zero_()
def set_num_special_tokens(self, num_special_tokens):
""" Update input and output embeddings with new embedding matrice
Make sure we are sharing the embeddings
"""
self.transformer.set_num_special_tokens(num_special_tokens)
def forward(self, input_ids, input_mask, labels=None, token_type_ids=None, position_ids=None):
# get sum of mask
hidden_states = self.transformer(input_ids, position_ids, token_type_ids)
# calculate the position of last element
input_mask_sel = input_mask.sum(dim=1) - 1
input_mask_sel = input_mask_sel.unsqueeze(dim=1).unsqueeze(dim=1).repeat(1, 1, hidden_states.size(-1))  # repeat across the hidden dimension (768 for the base model)
# get the last hidden state
sentence_hidden = hidden_states.gather(index=input_mask_sel, dim=1)
sentence_hidden = sentence_hidden.squeeze(dim=1)
# hidden states pooling
logits = self.classifier(sentence_hidden)
return logits
| [] |
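# A standalone sketch of the mask-based last-token selection used in
# OpenAIGPTForClassification.forward above, with toy shapes (hidden size 4
# instead of 768) so the gather logic can be checked in isolation.
import torch

hidden_states = torch.arange(2 * 3 * 4, dtype=torch.float).view(2, 3, 4)  # (bsz, seq_len, hidden)
input_mask = torch.tensor([[1, 1, 0], [1, 1, 1]])  # 1 = real token, 0 = padding
last_index = input_mask.sum(dim=1) - 1  # index of the last real token per sequence
last_index = last_index.unsqueeze(dim=1).unsqueeze(dim=1).repeat(1, 1, 4)  # (bsz, 1, hidden)
sentence_hidden = hidden_states.gather(index=last_index, dim=1).squeeze(dim=1)
print(sentence_hidden.shape)  # torch.Size([2, 4])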
2024-01-10 | SivilTaram/GPT-classification-example | run_classifier.py | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BERT finetuning runner."""
from __future__ import absolute_import, division, print_function
import argparse
import logging
import os
import random
import sys
import numpy as np
import torch
from pytorch_pretrained_bert import (OpenAIGPTTokenizer, OpenAIAdam, WEIGHTS_NAME,
CONFIG_NAME)
from opengpt_classifier import OpenAIGPTForClassification
from tensorboardX import SummaryWriter
from torch.nn import CrossEntropyLoss, MSELoss
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm, trange
from run_classifier_dataset_utils import processors, output_modes, convert_examples_to_features, compute_metrics
if sys.version_info[0] == 2:
import cPickle as pickle
else:
import pickle
WEIGHTS_NAME = "pytorch_model.bin"
CONFIG_NAME = "config.json"
logger = logging.getLogger(__name__)
def main():
parser = argparse.ArgumentParser()
## Required parameters
parser.add_argument("--data_dir",
default=None,
type=str,
required=True,
help="The input data dir. Should contain the .tsv files (or other data files) for the task.")
parser.add_argument('--model_name', type=str, default='openai-gpt',
help='pretrained model name')
parser.add_argument("--task_name",
default=None,
type=str,
required=True,
help="The name of the task to train.")
parser.add_argument("--output_dir",
default=None,
type=str,
required=True,
help="The output directory where the model predictions and checkpoints will be written.")
parser.add_argument("--max_grad_norm",
type=float, default=1.0)
parser.add_argument('--weight_decay', type=float, default=0.0)
## Other parameters
parser.add_argument("--cache_dir",
default="",
type=str,
help="Where do you want to store the pre-trained models downloaded from s3")
parser.add_argument("--max_seq_length",
default=128,
type=int,
help="The maximum total input sequence length after WordPiece tokenization. \n"
"Sequences longer than this will be truncated, and sequences shorter \n"
"than this will be padded.")
parser.add_argument("--do_train",
action='store_true',
help="Whether to run training.")
parser.add_argument("--do_eval",
action='store_true',
help="Whether to run eval on the dev set.")
parser.add_argument("--train_batch_size",
default=16,
type=int,
help="Total batch size for training.")
parser.add_argument("--eval_batch_size",
default=8,
type=int,
help="Total batch size for eval.")
parser.add_argument("--learning_rate",
default=5e-5,
type=float,
help="The initial learning rate for Adam.")
parser.add_argument("--num_train_epochs",
default=3.0,
type=float,
help="Total number of training epochs to perform.")
parser.add_argument("--warmup_proportion",
default=0.1,
type=float,
help="Proportion of training to perform linear learning rate warmup for. "
"E.g., 0.1 = 10%% of training.")
parser.add_argument('--lr_schedule', type=str, default='warmup_linear')
parser.add_argument("--no_cuda",
action='store_true',
help="Whether not to use CUDA when available")
parser.add_argument('--overwrite_output_dir',
action='store_true',
help="Overwrite the content of the output directory")
parser.add_argument("--local_rank",
type=int,
default=-1,
help="local_rank for distributed training on gpus")
parser.add_argument('--seed',
type=int,
default=42,
help="random seed for initialization")
parser.add_argument('--gradient_accumulation_steps',
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.")
parser.add_argument('--fp16',
action='store_true',
help="Whether to use 16-bit float precision instead of 32-bit")
parser.add_argument('--loss_scale',
type=float, default=0,
help="Loss scaling to improve fp16 numeric stability. Only used when fp16 set to True.\n"
"0 (default value): dynamic loss scaling.\n"
"Positive power of 2: static loss scaling value.\n")
parser.add_argument('--server_ip', type=str, default='', help="Can be used for distant debugging.")
parser.add_argument('--server_port', type=str, default='', help="Can be used for distant debugging.")
args = parser.parse_args()
if args.server_ip and args.server_port:
# Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script
import ptvsd
print("Waiting for debugger attach")
ptvsd.enable_attach(address=(args.server_ip, args.server_port), redirect_output=True)
ptvsd.wait_for_attach()
if args.local_rank == -1 or args.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
n_gpu = torch.cuda.device_count()
else:
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)
n_gpu = 1
# Initializes the distributed backend which will take care of sychronizing nodes/GPUs
torch.distributed.init_process_group(backend='nccl')
args.device = device
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S',
level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN)
logger.info("device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}".format(
device, n_gpu, bool(args.local_rank != -1), args.fp16))
if args.gradient_accumulation_steps < 1:
raise ValueError("Invalid gradient_accumulation_steps parameter: {}, should be >= 1".format(
args.gradient_accumulation_steps))
args.train_batch_size = args.train_batch_size // args.gradient_accumulation_steps
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
if not args.do_train and not args.do_eval:
raise ValueError("At least one of `do_train` or `do_eval` must be True.")
if os.path.exists(args.output_dir) and os.listdir(
args.output_dir) and args.do_train and not args.overwrite_output_dir:
raise ValueError("Output directory ({}) already exists and is not empty.".format(args.output_dir))
if not os.path.exists(args.output_dir) and args.local_rank in [-1, 0]:
os.makedirs(args.output_dir)
task_name = args.task_name.lower()
if task_name not in processors:
raise ValueError("Task not found: %s" % (task_name))
processor = processors[task_name]()
output_mode = output_modes[task_name]
label_list = processor.get_labels()
num_labels = len(label_list)
if args.local_rank not in [-1, 0]:
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
special_tokens = ['_start_', '_delimiter_', '_classify_']
tokenizer = OpenAIGPTTokenizer.from_pretrained(args.model_name, special_tokens=special_tokens)
model = OpenAIGPTForClassification.from_pretrained(args.model_name,
num_special_tokens=len(special_tokens),
num_labels=num_labels)
if args.local_rank == 0:
torch.distributed.barrier()
model.to(device)
if args.local_rank != -1:
model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.local_rank],
output_device=args.local_rank,
find_unused_parameters=True)
elif n_gpu > 1:
model = torch.nn.DataParallel(model)
global_step = 0
tr_loss = 0
if args.do_train:
if args.local_rank in [-1, 0]:
tb_writer = SummaryWriter()
# Prepare data loader
train_examples = processor.get_train_examples(args.data_dir)
cached_train_features_file = os.path.join(args.data_dir, 'train_{0}_{1}_{2}'.format(
list(filter(None, args.model_name.split('/'))).pop(),
str(args.max_seq_length),
str(task_name)))
try:
with open(cached_train_features_file, "rb") as reader:
train_features = pickle.load(reader)
except Exception:
train_features = convert_examples_to_features(
train_examples, label_list, args.max_seq_length, tokenizer, output_mode)
if args.local_rank == -1 or torch.distributed.get_rank() == 0:
logger.info(" Saving train features into cached file %s", cached_train_features_file)
with open(cached_train_features_file, "wb") as writer:
pickle.dump(train_features, writer)
all_input_ids = torch.tensor([f.input_ids for f in train_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in train_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in train_features], dtype=torch.long)
if output_mode == "classification":
all_label_ids = torch.tensor([f.label_id for f in train_features], dtype=torch.long)
elif output_mode == "regression":
all_label_ids = torch.tensor([f.label_id for f in train_features], dtype=torch.float)
train_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
if args.local_rank == -1:
train_sampler = RandomSampler(train_data)
else:
train_sampler = DistributedSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=args.train_batch_size)
# Prepare optimizer
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
num_train_optimization_steps = len(train_dataloader) * args.num_train_epochs
optimizer = OpenAIAdam(optimizer_grouped_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
max_grad_norm=args.max_grad_norm,
weight_decay=args.weight_decay,
t_total=num_train_optimization_steps)
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_examples))
logger.info(" Batch size = %d", args.train_batch_size)
logger.info(" Num steps = %d", num_train_optimization_steps)
model.train()
for _ in range(int(args.num_train_epochs)):
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(
tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0])):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, _, label_ids = batch
# define a new function to compute loss values for both output_modes
logits = model.forward(input_ids, input_mask)
if output_mode == "classification":
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
elif output_mode == "regression":
loss_fct = MSELoss()
loss = loss_fct(logits.view(-1), label_ids.view(-1))
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
optimizer.backward(loss)
else:
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
if (step + 1) % args.gradient_accumulation_steps == 0:
optimizer.step()
optimizer.zero_grad()
global_step += 1
if args.local_rank in [-1, 0]:
tb_writer.add_scalar('lr', optimizer.get_lr()[0], global_step)
tb_writer.add_scalar('loss', loss.item(), global_step)
tb_writer.close()
### Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained()
if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
# Save a trained model, configuration and tokenizer
model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self
# If we save using the predefined names, we can load using `from_pretrained`
output_model_file = os.path.join(args.output_dir, WEIGHTS_NAME)
output_config_file = os.path.join(args.output_dir, CONFIG_NAME)
torch.save(model_to_save.state_dict(), output_model_file)
model_to_save.config.to_json_file(output_config_file)
tokenizer.save_vocabulary(args.output_dir)
# Good practice: save your training arguments together with the trained model
output_args_file = os.path.join(args.output_dir, 'training_args.bin')
torch.save(args, output_args_file)
# Load a trained model and vocabulary that you have fine-tuned
model = OpenAIGPTForClassification.from_pretrained(args.output_dir,
num_labels=num_labels)
model.to(device)
### Evaluation
if args.do_eval and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
eval_examples = processor.get_dev_examples(args.data_dir)
cached_eval_features_file = os.path.join(args.data_dir, 'dev_{0}_{1}_{2}'.format(
list(filter(None, args.model_name.split('/'))).pop(),
str(args.max_seq_length),
str(task_name)))
try:
with open(cached_eval_features_file, "rb") as reader:
eval_features = pickle.load(reader)
except Exception:
eval_features = convert_examples_to_features(
eval_examples, label_list, args.max_seq_length, tokenizer, output_mode)
if args.local_rank == -1 or torch.distributed.get_rank() == 0:
logger.info(" Saving eval features into cached file %s", cached_eval_features_file)
with open(cached_eval_features_file, "wb") as writer:
pickle.dump(eval_features, writer)
logger.info("***** Running evaluation *****")
logger.info(" Num examples = %d", len(eval_examples))
logger.info(" Batch size = %d", args.eval_batch_size)
all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)
if output_mode == "classification":
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long)
elif output_mode == "regression":
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.float)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
# Run prediction for full data
if args.local_rank == -1:
eval_sampler = SequentialSampler(eval_data)
else:
eval_sampler = DistributedSampler(eval_data) # Note that this sampler samples randomly
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size)
model.eval()
eval_loss = 0
nb_eval_steps = 0
preds = []
out_label_ids = None
for input_ids, input_mask, segment_ids, label_ids in tqdm(eval_dataloader, desc="Evaluating"):
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model.forward(input_ids, input_mask)
# create eval loss and other metric required by the task
if output_mode == "classification":
loss_fct = CrossEntropyLoss()
tmp_eval_loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
elif output_mode == "regression":
loss_fct = MSELoss()
tmp_eval_loss = loss_fct(logits.view(-1), label_ids.view(-1))
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if len(preds) == 0:
preds.append(logits.detach().cpu().numpy())
out_label_ids = label_ids.detach().cpu().numpy()
else:
preds[0] = np.append(
preds[0], logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(
out_label_ids, label_ids.detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
preds = preds[0]
if output_mode == "classification":
preds = np.argmax(preds, axis=1)
elif output_mode == "regression":
preds = np.squeeze(preds)
result = compute_metrics(task_name, preds, out_label_ids)
loss = tr_loss / global_step if args.do_train else None
result['eval_loss'] = eval_loss
result['global_step'] = global_step
result['loss'] = loss
output_eval_file = os.path.join(args.output_dir, "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
# hack for MNLI-MM
if task_name == "mnli":
task_name = "mnli-mm"
processor = processors[task_name]()
if os.path.exists(args.output_dir + '-MM') and os.listdir(args.output_dir + '-MM') and args.do_train:
raise ValueError("Output directory ({}) already exists and is not empty.".format(args.output_dir))
if not os.path.exists(args.output_dir + '-MM'):
os.makedirs(args.output_dir + '-MM')
eval_examples = processor.get_dev_examples(args.data_dir)
eval_features = convert_examples_to_features(
eval_examples, label_list, args.max_seq_length, tokenizer, output_mode)
logger.info("***** Running evaluation *****")
logger.info(" Num examples = %d", len(eval_examples))
logger.info(" Batch size = %d", args.eval_batch_size)
all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
# Run prediction for full data
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.eval_batch_size)
model.eval()
eval_loss = 0
nb_eval_steps = 0
preds = []
out_label_ids = None
for input_ids, input_mask, segment_ids, label_ids in tqdm(eval_dataloader, desc="Evaluating"):
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(input_ids, token_type_ids=segment_ids, attention_mask=input_mask, labels=None)
loss_fct = CrossEntropyLoss()
tmp_eval_loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if len(preds) == 0:
preds.append(logits.detach().cpu().numpy())
out_label_ids = label_ids.detach().cpu().numpy()
else:
preds[0] = np.append(
preds[0], logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(
out_label_ids, label_ids.detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
preds = preds[0]
preds = np.argmax(preds, axis=1)
result = compute_metrics(task_name, preds, out_label_ids)
loss = tr_loss / global_step if args.do_train else None
result['eval_loss'] = eval_loss
result['global_step'] = global_step
result['loss'] = loss
output_eval_file = os.path.join(args.output_dir + '-MM', "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
if __name__ == "__main__":
main()
| [] |
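# A minimal sketch of how the training script above might be invoked. The data
# directory and task name are hypothetical and must correspond to a processor
# registered in run_classifier_dataset_utils; only flags defined by the argparse
# block above are used.
import sys
sys.argv = [
    "run_classifier.py",
    "--data_dir", "data/my_task",      # hypothetical path containing the .tsv files
    "--task_name", "my_task",          # hypothetical task key
    "--output_dir", "output/my_task_gpt",
    "--do_train",
    "--do_eval",
    "--train_batch_size", "16",
    "--num_train_epochs", "3.0",
]
# main()  # uncommenting this would start fine-tuning; it needs the repo's dataset utilities and a GPU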
2024-01-10 | BickieSmalls/chat-bot | chat-bot.py | import streamlit as st
from streamlit_chat import message
import openai
from datetime import datetime
from src.functions import chat
# user input history
if "user_input_hist" not in st.session_state:
st.session_state.user_input_hist = []
if "chatbot_response_hist" not in st.session_state:
st.session_state.chatbot_response_hist = []
# get openai api key from environment variable
#open_ai_api_key = os.environ.get("OPEN_AI_API_KEY")
from creds import open_ai_api_key
openai.api_key = open_ai_api_key
# current date
current_date = datetime.today().strftime('%Y-%m-%d')
# user input
user_input = st.text_input("Enter your message")
# chatbot response if button is pressed
if user_input:
st.session_state.user_input_hist.append(user_input)
# chatbot response
chatbot_response = chat(
openai,
user_input,
st.session_state.user_input_hist,
st.session_state.chatbot_response_hist,
current_date
)
st.session_state.chatbot_response_hist.append(chatbot_response)
# display user input and chatbot response for the whole history
# display it in reverse order
for i in range(len(st.session_state.user_input_hist)):
message(
st.session_state.user_input_hist[-(i+1)],
is_user=True,
key=str(i)+"u"
)
message(
st.session_state.chatbot_response_hist[-(i+1)],
is_user=False,
key=str(i)+"c"
) | [] |
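# The chat() helper imported from src.functions is not shown in this file. The
# sketch below is only an assumption about its behavior, inferred from the call
# site above (it receives the openai module, the new input, both history lists,
# and the current date); the real implementation may differ.
def chat(openai, user_input, user_input_hist, chatbot_response_hist, current_date):
    messages = [{"role": "system", "content": f"You are a helpful assistant. Today's date is {current_date}."}]
    # Rebuild the conversation by interleaving earlier user inputs with earlier replies.
    for past_user, past_bot in zip(user_input_hist[:-1], chatbot_response_hist):
        messages.append({"role": "user", "content": past_user})
        messages.append({"role": "assistant", "content": past_bot})
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response["choices"][0]["message"]["content"]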
2024-01-10 | h1ddenpr0cess20/jerkbot-matrix | jerkbot.py | '''
Jerkbot, an OpenAI chatbot for the Matrix chat protocol.
Uses gpt-3.5-turbo to generate responses with a preset personality which can be changed.
Named for my chosen personality, sarcastic jerk. You can set any personality type, character, inanimate object, place, concept, etc.
There is a list of random examples included in the prompts.py file which also includes some altered prompts from awesome-chatgpt-prompts
Written by vagabond @realvagabond:matrix.org
[email protected]
@vvagabondd
March 2023
still has a few bugs to work on, mostly error handling stuff.
'''
from matrix_client.client import MatrixClient
from matrix_client.api import MatrixHttpApi
from matrix_client.user import User
import openai
import time
import datetime
import random
import prompts
import threading
import sys
class AIbot:
def __init__(self, server, token, username, password, channel, personality):
self.server = server
self.username = username
self.password = password
self.channel = channel
self.personality = personality
self.token = token
self.bot_id = "@" + self.username + ":" + self.server.lstrip("https://") #creates the bot's full username @username:server.com
#matrix_client stuff
self.client = MatrixClient(self.server)
self.matrix = MatrixHttpApi(self.server, token=self.token)
self.user = User(self.matrix, self.bot_id)
#self.user.set_display_name(self.username) #set to default display name
#get the bot's display name
self.display_name = self.user.get_friendly_name()
self.logged_in = False
self.room_id = self.matrix.get_room_id(self.channel) #room bot is in
self.join_time = datetime.datetime.now() #time bot joined
self.messages = {} #keeps track of history
def login(self):
try:
self.client.login(username=self.username, password=self.password)
self.logged_in = True #login success
self.room = self.client.join_room(self.channel) #Joins the channel
except Exception as e:
print(e)
self.logged_in = False
sys.exit()
def get_display_names(self):
members = self.matrix.get_room_members(self.room_id)
self.display_names = {}
for member in members["chunk"]:
try: #if the username has a display name add them to the display_names list
self.display_names.update({member["content"]["displayname"]:member["sender"]})
except:
pass
#start bot
def start(self):
#Login to Matrix
self.login()
#Get the room members
self.get_display_names()
#Listen for incoming messages
self.client.add_listener(self.handle_message)
self.client.start_listener_thread()
self.matrix.sync() #unsure if needed?
self.matrix.send_message(self.room_id, "Hey, I'm {}, an OpenAI chatbot. Type .help for more information.".format(self.display_name)) #optional entrance message
#Stop bot
def stop(self):
#add a check to see if the bot is the only user in the channel
self.matrix.send_message(self.room_id, "Goodbye for now.") #optional exit message
self.matrix.leave_room(self.room_id)
#Sets the personality for the bot
def persona(self, sender, persona):
try:
self.messages[sender].clear()
except:
pass
#Custom prompt written by myself that seems to work nearly flawlessly if used correctly.
personality = "assume the personality of " + persona + ". roleplay and always stay in character unless instructed otherwise. keep your first response short."
self.add_history("system", sender, personality) #add to the message history
def add_history(self, role, sender, message):
if sender in self.messages: #if this user exists in the history dictionary
self.messages[sender].append({"role": role, "content": message}) #add the message
else:
if role == "system":
self.messages[sender] = [{"role": role, "content": message}]
else: #add personality to the new user entry
self.messages[sender] = [
{"role": "system", "content": "assume the personality of " + self.personality + ". roleplay and always stay in character unless instructed otherwise. keep your first response short."},
{"role": role, "content": message}]
#Create AI response
def respond(self, sender, message, sender2=None):
try:
#Generate response with gpt-3.5-turbo model, you can change to gpt-4 if you have access and want to spend the money. i have access but i can't afford it.
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=message)
except Exception as e:
self.matrix.send_message(self.room_id, e)
else:
#Extract response text and add it to history
response_text = response['choices'][0]['message']['content']
self.add_history("assistant", sender, response_text)
if sender2: #if the .x function was used
display_name = self.matrix.get_display_name(sender2)
else: #normal .ai response
display_name = self.matrix.get_display_name(sender)
response_text = display_name + ":\n" + response_text.strip()
#Send response to channel
try:
self.matrix.send_message(self.room_id, response_text)
except Exception as e: #fix?
self.matrix.send_message(self.room_id, e)
#Shrink history list
if len(self.messages[sender]) > 16: #if this gets too big, you'll get a token error
del self.messages[sender][1:3] #delete the first set of question and answers
#OpenAI moderation endpoint, checks if it violates ToS
def moderate(self, message):
flagged = False
if not flagged:
moderate = openai.Moderation.create(input=message,) #run through the moderation endpoint
flagged = moderate["results"][0]["flagged"] #true or false
return flagged
#Handles chat messages
def handle_message(self, event):
self.matrix.sync()
#convert time to same format as self.join_time
message_time = event["origin_server_ts"] / 1000
message_time = datetime.datetime.fromtimestamp(message_time)
if message_time > self.join_time: #if the message was sent after the bot joined the channel
sender = event["sender"] #format @username:matrix.org
display_name = self.matrix.get_display_name(sender) #current display name for the sender
#if event happened in specified room, and is a message that hasn't been removed (caused error), and sender isn't the bot
if event["room_id"] == self.room_id and event["type"] == "m.room.message" and sender != self.bot_id:
self.get_display_names() #update display names in case someone joined
message = event["content"]["body"] #message text from matrix event
try:
#Trigger bot response with .ai
if message.startswith(".ai ") or message.startswith(self.display_name + ": "):
if message.startswith(".ai "):
#Strips .ai from the message
message = message.lstrip(".ai")
message = message.strip()
else:
message = message.lstrip(self.display_name + ":")
message = message.strip()
flagged = self.moderate(message) #check with openai moderation endpoint
if flagged: #Flagged by moderation
self.matrix.send_message(self.room_id, "This message violates the OpenAI usage policy and was not sent.")
else:
#Adds the message to the history
self.add_history("user", sender, message)
#Start respond thread
thread = threading.Thread(target=self.respond, args=(sender,self.messages[sender]))
thread.start()
thread.join(timeout=30)
#Lets you use another user's history for one message for collaboration
if message.startswith(".x "):
#Strips .x from the message
message = message.lstrip(".x")
message = message.strip() #needed because of some bug if i used a space after .x above
#check the name in the display name dictionary
for name in self.display_names:
if type(name) == str and message.startswith(name):
user = str(self.display_names[name])
message = message.lstrip(name)
flagged = self.moderate(message) #check with openai moderation endpoint
if flagged: #Flagged by moderation
self.matrix.send_message(self.room_id, "This message violates the OpenAI usage policy and was not sent.")
else:
if user in self.messages:
#Adds the message to the history
self.add_history("user", user, message)
#start respond thread
thread = threading.Thread(target=self.respond, args=(user, self.messages[user],), kwargs={'sender2': sender})
thread.start()
thread.join(timeout=30)
else:
pass
else:
pass
#Resets bot back to default personality
elif message.startswith(".reset"):
if sender in self.messages:
self.messages[sender].clear()
self.persona(sender, self.personality) # Preset personality
self.matrix.send_message(self.room_id, "Bot has been reset to {} for {}".format(self.personality, display_name))
#Remove personality by clearing history
elif message.startswith(".stock"):
if sender in self.messages:
self.messages[sender].clear() #if they already exist, clear
else:
self.messages[sender] = [] #create the history entry for the user
self.matrix.send_message(self.room_id, "Stock settings applied for {}.".format(display_name))
#Set personality
elif message.startswith(".persona "):
message = message.lstrip(".persona")
message = message.strip()
flagged = self.moderate(message) #check with openai moderation endpoint
if flagged:
self.matrix.send_message(self.room_id, "This persona violates the OpenAI usage policy and has been rejected. Choose a new persona.")
else:
self.persona(sender, message) #set personality
#start respond thread
thread = threading.Thread(target=self.respond, args=(sender,self.messages[sender]))
thread.start()
thread.join(timeout=30)
elif message.startswith(".prompt "):
message = message.lstrip(".prompt")
message = message.strip() #needed for issue i had with previous line removing first letter of message
#matches a key in the prompts dictionary
if message in prompts.prompt.keys():
self.messages[sender].clear()
message = prompts.prompt[message] #select matching key from dictionary
self.add_history("system", sender, message) #add prompt to history
#start respond thread
thread = threading.Thread(target=self.respond, args=(sender,self.messages[sender]))
thread.start()
thread.join(timeout=30)
#Prompts help lists the available commands
elif message == "help":
message = ""
for key in sorted(prompts.prompt.keys()):
message += (key + ", ") #create comma separate list of keys
self.matrix.send_message(self.room_id, message)
#Help menu
elif message.startswith(".help"):
#create list of keys in prompts
keylist = []
for key in prompts.prompt.keys():
keylist.append(key)
#used below for .format
persona_ex1, persona_ex2, persona_ex3 = random.sample(prompts.persona_list, 3) #3 unique selections from persona examples
prompt_ex1, prompt_ex2, prompt_ex3 = random.sample(keylist, 3) #3 unique selections from available custom prompts
#Help text
self.matrix.send_message(self.room_id, '''{}, an OpenAI chatbot.
.ai <message> or {}: <message>
Basic usage.
Personality is preset by bot operator.
This bot is {}.
.x <user> <message>
This allows you to talk to another user's chat history.
<user> is the display name of the user whose history you want to use
.persona <personality type or character or inanimate object>
Changes the personality. It can be a character, personality type, object, idea.
Don't use a custom prompt here.
If you want to use a custom prompt, use .stock then use .ai <custom prompt>
Examples:
.persona {}
.persona {}
.persona {}
.reset
Reset to preset personality
.stock
Remove personality and reset to standard GPT settings
.prompt help
Lists custom prompts available for functions not easily set with .persona.
.prompt <prompt>
Use special prompt from list of prompts
Examples:
.prompt {}
.prompt {}
.prompt {}
'''.format(self.display_name, self.display_name, self.personality, persona_ex1, persona_ex2, persona_ex3, prompt_ex1, prompt_ex2, prompt_ex3)) #enables dynamic examples that change each time you use the command
except Exception as e: #probably need to add specific exceptions, fix later
print(e)
if __name__ == "__main__":
# Initialize OpenAI
openai.api_key = "API_KEY"
#Set server, username, password, channel, default personality
server = "https://matrix.org" #not tested with other servers
token = "TOKEN" #Matrix access token found on settings page in element
username = "USERNAME"
password = "PASSWORD"
channel = "#CHANNEL:SERVER.org"
#Choose default personality. Can be pretty much anything. Examples in prompts.py
personality = "a sarcastic jerk"
#create AIbot and start it
bot = AIbot(server, token, username, password, channel, personality)
bot.start()
input("BOT RUNNING. Press Enter to exit") #prevents the program from exiting
#bot.stop()
''' If uncommented, the bot leaves the channel when the program exits.
disabled due to a bug where if you create a channel with the account you use for this bot,
and nobody else joins the channel, and then you press enter to exit the program,
the channel will be empty and you can't join it again.
Commented out, the bot's username will remain in the channel even if the bot is not running.
'''
| [
". roleplay and always stay in character unless instructed otherwise. keep your first response short.",
"assume the personality of "
] |
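# For reference, a sketch of the per-user history that respond() sends to the
# ChatCompletion endpoint after a ".persona" command and one exchange; the sender
# key and messages are invented for illustration.
example_history = {
    "@alice:matrix.org": [
        {"role": "system", "content": "assume the personality of a pirate. roleplay and always stay in character unless instructed otherwise. keep your first response short."},
        {"role": "user", "content": "Where did ye hide the treasure?"},
        {"role": "assistant", "content": "Arr, that stays buried with me, matey."},
    ]
}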
2024-01-10 | h1ddenpr0cess20/jerkbot-matrix | jerkbot_solo.py | ## Jerkbot, an OpenAI chatbot for the Matrix chat protocol. Uses gpt-3.5-turbo to generate responses with a preset personality which can be changed.
from matrix_client.client import MatrixClient
from matrix_client.api import MatrixHttpApi
from matrix_client.user import User
import openai
import sys
import time
import datetime
import random
import prompts
class AIbot:
def __init__(self, server, token, username, password, channel, personality):
self.server = server
self.username = username
self.password = password
self.channel = channel
self.personality = personality
self.token = token
self.bot_id = "@" + self.username + ":" + self.server.lstrip("https://")
self.client = MatrixClient(self.server)
self.matrix = MatrixHttpApi(self.server, token=self.token)
self.user = User(self.matrix, self.bot_id)
self.logged_in = False
self.display_name = self.user.get_display_name()
self.room_id = self.matrix.get_room_id(self.channel) #room bot is in
self.join_time = datetime.datetime.now() #time bot joined
self.messages = [] #Keeps track of history
self.persona(self.personality) #Set default personality
def login(self):
try:
self.client.login(username=self.username, password=self.password)
self.logged_in = True
self.room = self.client.join_room(self.channel) #Joins the channel
except Exception as e:
print(e)
self.logged_in = False
sys.exit()
#Sets the personality for the bot
def persona(self, persona):
self.messages.clear()
#persona = persona
personality = "assume the personality of " + persona + ". roleplay and always stay in character unless instructed otherwise. keep your first response short."
self.messages.append({"role": "system", "content": personality})
#OpenAI moderation endpoint
def moderate(self, message):
flagged = False
if not flagged:
moderate = openai.Moderation.create(input=message,) #run through the moderation endpoint
flagged = moderate["results"][0]["flagged"] #true or false
return flagged
#Create AI response
def respond(self, message):
#Generate response with gpt-3.5-turbo model
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=message)
#Extract response text and add it to history
response_text = response['choices'][0]['message']['content']
self.messages.append({"role": "assistant", "content": response_text})
return response_text.strip()
#Handles chat messages
def handle_message(self, event):
self.matrix.sync()
#convert time to same format as self.join_time
message_time = event["origin_server_ts"] / 1000
message_time = datetime.datetime.fromtimestamp(message_time)
if message_time > self.join_time: #if the message was sent after the bot joined the channel
if event["type"] == "m.room.message" and event["sender"] != self.bot_id:
message = event["content"]["body"]
try:
#Resets bot back to default personality
if message.startswith(".reset"):
self.messages.clear()
self.persona(self.personality)
self.matrix.send_message(self.room_id, "Reset to {}".format(self.personality))
#Remove personality by clearing history
elif message.startswith(".stock"):
self.messages.clear()
self.messages.append({"role": "system", "content": "You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible."})
self.matrix.send_message(self.room_id, "Bot has been reset to stock ChatGPT settings")
#Set personality
elif message.startswith(".persona "):
message = message.lstrip(".persona")
flagged = self.moderate(message)
if flagged:
self.matrix.send_message(self.room_id, "This persona violates the OpenAI usage policy and has been rejected. Choose a new persona.")
else:
self.persona(message)
response = self.respond(self.messages)
self.matrix.send_message(self.room_id, response)
elif message.startswith(".prompt "):
message = message.lstrip(".prompt")
message = message.strip() #needed for issue i had with previous line removing first letter of message
#matches a key in the prompts dictionary
if message in prompts.prompt.keys():
self.messages.clear()
message = prompts.prompt[message]
self.messages.append({"role": "system", "content": message})
response = self.respond(self.messages)
self.matrix.send_message(self.room_id, response)
elif message == "help":
message = ""
for key in sorted(prompts.prompt.keys()):
message += (key + ", ") #create comma separate list of keys
self.matrix.send_message(self.room_id, message)
#Help menu
elif message.startswith(".help"):
keylist = []
for key in prompts.prompt.keys():
keylist.append(key)
#used below for .format
persona_ex1, persona_ex2, persona_ex3 = random.sample(prompts.persona_list, 3) #3 unique selections from persona examples
prompt_ex1, prompt_ex2, prompt_ex3 = random.sample(keylist, 3) #3 unique selections from available custom prompts
self.matrix.send_message(self.room_id,
'''{}, an OpenAI chatbot.
Solo version, chat like normal and it responds. This works best in a channel with just one person, or a few who are collaborating.
Personality is preset by bot operator.
This bot is {}.
.persona <personality type or character or inanimate object>
Changes the personality. It can be a character, personality type, object, idea.
Don't use a custom prompt here.
If you want to use a custom prompt, use .stock and then send the custom prompt as a normal message
Examples:
.persona {}
.persona {}
.persona {}
.reset
Reset to preset personality
.stock
Remove personality and reset to standard GPT settings
.prompt help
Lists custom prompts available for functions not easily set with .persona.
.prompt <prompt>
Use special prompt from list of prompts
Examples:
.prompt {}
.prompt {}
.prompt {}
'''.format(self.display_name, self.personality, persona_ex1, persona_ex2, persona_ex3, prompt_ex1, prompt_ex2, prompt_ex3))
else:
flagged = self.moderate(message) #check with openai moderation endpoint
if flagged: #Flagged by moderation
self.matrix.send_message(self.room_id, "This message violates the OpenAI usage policy and was not sent.")
else:
self.messages.append({"role": "user", "content": message})
response = self.respond(self.messages)
#Send response to channel
self.matrix.send_message(self.room_id, response)
#Shrink history list
if len(self.messages) >= 18:
del self.messages[1:3]
except Exception as e:
print(e)
sys.exit()
def start(self):
#Login to Matrix
self.login()
#Listen for incoming messages
self.client.add_listener(self.handle_message)
self.client.start_listener_thread()
self.matrix.sync()
self.matrix.send_message(self.room_id, "Hey, I'm {}, an OpenAI chatbot. Type .help for more information.".format(self.display_name)) #optional entrance message
#Stop bot
def stop(self):
self.matrix.send_message(self.room_id, "Goodbye for now.") #optional exit message
self.matrix.leave_room(self.room_id)
if __name__ == "__main__":
# Initialize OpenAI
openai.api_key = "API_KEY"
#Set server, username, password, channel, default personality
server = "https://matrix.org" #not tested with other servers
token = "TOKEN" #Matrix access token found somewhere on settings page in element
username = "USERNAME"
password = "PASSWORD"
channel = "#CHANNEL:SERVER.org"
#Choose default personality. Can be pretty much anything. There are some examples in the help section above.
personality = "a sarcastic jerk"
#create AIbot and start it
bot = AIbot(server, token, username, password, channel, personality)
bot.start()
input("BOT RUNNING. press enter to quit") #prevents the program from exiting
#bot.stop()
''' If uncommented, the bot leaves the channel when the program exits.
disabled due to a bug where if you create a channel with the account you use for this bot,
and nobody else joins the channel, and then you press enter to exit the program,
the channel will be empty and you can't join it again.
Commented out, the bot's username will remain in the channel even if the bot is not running.
'''
| [
"You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible."
] |
2024-01-10 | h1ddenpr0cess20/jerkbot-matrix | MatrixBotLauncher.py | import jerkbot as jb
import openai
openai.api_key = "API_KEY"
server = "https://matrix.org" #not tested with other servers
token = "TOKEN" #Matrix access token found somewhere on settings page in element
username = "USERNAME"
password = "PASSWORD"
channel = "#CHANNEL:SERVER.org"
personality = "a sarcastic jerk"
#create AIbot and start it
bot = jb.AIbot(server, token, username, password, channel, personality)
bot.start()
input("BOT RUNNING. Press Enter to exit") #prevents the program from exiting
#bot.stop()
''' If uncommented, the bot leaves the channel when the program exits.
disabled due to a bug where if you create a channel with the account you use for this bot,
and nobody else joins the channel, and then you press enter to exit the program,
the channel will be empty and you can't join it again.
Commented out, the bot's username will remain in the channel even if the bot is not running.
'''
| [] |
2024-01-10 | Mira-public/manifold-sudoku | manifold_sudoku.py | #import openai
import json
import time
import os
import requests
import datetime
from dataclasses import dataclass
import re
from sudoku import Sudoku
import tiktoken
from typing import Callable, Any
import inspect
import itertools
import openai
import hashlib
from collections import Counter
def convert_pairs_to_openai(entries):
formatted_messages = [{"role": role, "content": content} for role, content in entries]
return formatted_messages
def openai_api_key():
return os.environ.get("OPENAI_API_KEY")
def extract_sudoku(text):
cell_pattern = r"\D*(\d)"
sudoku_pattern = cell_pattern*81 + r"\D*"
mat = re.search(sudoku_pattern, text)
return ''.join(mat.groups())
def find_solved_sudoku(pattern, text):
mat = re.search(pattern, text)
if mat:
prematch = mat.group()
print(f"@@@@ PREMATCH={prematch}")
return extract_sudoku(prematch)
else:
return None
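# Illustrative use of the helpers above (values invented): extract_sudoku pulls the
# first 81 digits out of arbitrary text, so a model response that prints the grid
# with separators still reduces to a flat 81-character string, and find_solved_sudoku
# only extracts once `pattern` (e.g. a phrase that precedes the final grid) matches:
#   extract_sudoku("5 3 4 | 6 7 8 | 9 1 2 ...")         -> "534678912..." (81 digits)
#   find_solved_sudoku(r"(?s)final answer.*", response)  -> flat puzzle string, or None if no match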
MODEL_INFOS = {
'gpt-3.5-turbo': {
"input_cost": 0.002,
"output_cost": 0.002,
},
'gpt-4-0613': {
"input_cost": 0.03,
"output_cost": 0.06,
"context_window": 8192,
"output_tokens": 5000,
},
'gpt-4-1106-preview': {
"input_cost": 0.01,
"output_cost": 0.03,
"context_window": 128_000,
"output_tokens": 4096,
},
}
@dataclass
class Checkpoint:
args = None
transition_index: int = 0
turn_number: int = 0
prompt_tokens: int = 0
output_tokens: int = 0
total_tokens: int = 0
conversation = []
solution_history = []
def total_cost(self, model):
info = MODEL_INFOS.get(model)
return info["input_cost"] * self.prompt_tokens / 1000 + info["output_cost"] * self.output_tokens / 1000
def serializable(self):
checkpoint = {
"args": vars(self.args),
"transition_index": self.transition_index,
"turn_number": self.turn_number,
"prompt_tokens": self.prompt_tokens,
"output_tokens": self.output_tokens,
"total_tokens": self.total_tokens,
"conversation": self.conversation,
"solution_history": self.solution_history,
}
return checkpoint
def save(self):
checkpoint = self.serializable()
with open(self.args.checkpoint, 'w') as f:
return json.dump(checkpoint, f)
@classmethod
def load(cls, filename):
with open(filename, 'r') as f:
print(f"About to load {filename}")
checkpoint = json.load(f)
ckpt = cls()
ckpt.args = checkpoint["args"]
ckpt.transition_index = checkpoint["transition_index"]
ckpt.turn_number = checkpoint["turn_number"]
ckpt.prompt_tokens = checkpoint["prompt_tokens"]
ckpt.output_tokens = checkpoint["output_tokens"]
ckpt.total_tokens = checkpoint["total_tokens"]
ckpt.conversation = checkpoint["conversation"]
ckpt.solution_history = checkpoint["solution_history"]
return ckpt
@classmethod
def print_checkpoint(cls, entries):
print("Conversation token counts")
for i, entry in enumerate(entries):
print(entry)
token_count = gpt4_enc.encode(entry[1])
print(f"{i}: {len(token_count)} tokens")
def solve_puzzle(puzzle_string):
board = string_to_list_of_lists(puzzle_string)
sudoku = Sudoku(3,3,board=board)
try:
solved_board = sudoku.solve(raising=True).board
#print(solved_board)
return solved_board
except:
return None
def rotate_sudoku(puzzle, di):
di = di%81
assert di >= 0
return puzzle[-di:] + puzzle[:-di]
# Solution 0 is the initial puzzle, so no rotation
def rotate_sudoku_emily(puzzle, solution_number):
return rotate_sudoku(puzzle, 27*solution_number)
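# Example: one Emily-style solution step rotates the flat puzzle string by 27 cells,
# i.e. the last three rows move to the front:
#   rotate_sudoku_emily(p, 0) == p
#   rotate_sudoku_emily(p, 1) == p[-27:] + p[:-27]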
def find_problem_in_sudoku(puzzle):
if len(puzzle) != 81:
return f"Sudoku has incorrect length. {len(puzzle)} != 81"
def check_group(group, group_type, index):
"""Check if a group (row, column, or 3x3 subgrid) contains duplicates."""
filtered_group = [num for num in group if num != 0]
duplicates = +(Counter(filtered_group) - Counter(range(1,10)))
if duplicates:
return f"Duplicate {set(duplicates)} in {group_type} {index + 1}."
return ""
grid = string_to_list_of_lists(puzzle)
# Check rows and columns
for i in range(9):
row_check = check_group(grid[i], "row", i)
if row_check:
return row_check
column_check = check_group([grid[j][i] for j in range(9)], "column", i)
if column_check:
return column_check
# Check 3x3 subgrids
for i in range(0, 9, 3):
for j in range(0, 9, 3):
subgrid = [grid[x][y] for x in range(i, i + 3) for y in range(j, j + 3)]
subgrid_check = check_group(subgrid, "3x3 subgrid at ({}, {})".format(i+1, j+1), 0)
if subgrid_check:
return subgrid_check
return f"Valid: {puzzle}"
# The official Python bindings were taking like 3 minutes for some reason, so just POST the API directly.
def openai_chat_completion(messages, args, n=1):
url = "https://api.openai.com/v1/chat/completions"
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {openai_api_key()}",
}
payload = {
"model": args.model,
"messages": messages,
"n": n,
"max_tokens": args.max_output_tokens,
"temperature": 0,
}
for attempt in range(args.max_retries):
response = requests.post(url, headers=headers, data=json.dumps(payload))
if response.status_code == 200:
return response.json()
else:
if attempt < args.max_retries - 1:
print("Request failed. Sleeping and then retrying")
time.sleep(2 ** attempt) # Exponential backoff
else:
raise Exception(f"OpenAI API request failed after {args.max_retries} attempts with status code {response.status_code}: {response.text}")
CACHE_FILE = "cache.json"
CACHE = {}
def get_hash(x):
return hashlib.sha256(json.dumps(x).encode('utf-8')).hexdigest()
def load_cache():
global CACHE
try:
with open(CACHE_FILE, 'r') as f:
CACHE = json.load(f)
print(f"Loaded response cache with {len(CACHE)} entries")
except FileNotFoundError:
print("No existing cache detected.\n")
except Exception as e:
print(f"Received error loading cache: {e}")
def save_cache():
global CACHE
with open(CACHE_FILE, 'w', encoding='utf-8') as f:
json.dump(CACHE, f, sort_keys=True)
def get_cache(key):
k = get_hash(key)
return CACHE.get(k)
def set_cache(key, value):
global CACHE
k = get_hash(key)
CACHE[k] = value
save_cache()
# Remove args that shouldn't change the result of any single invocation to GPT-4
def important_args(args):
vs = vars(args)
return {
"max-output-tokens": vs["max_output_tokens"],
"puzzle": vs["puzzle"],
"prompt": vs["prompt"],
"model": vs["model"],
}
def count_conversation_tokens(entries):
token_counts = [len(gpt4_enc.encode(message)) for (_,message) in entries]
message_tokens = sum(token_counts)
speaker_tokens = len(entries)
padding_tokens = 0 # 15 + 12 # TODO: Figure out where the 15 extra tokens are coming from in Emily's prompt A and 12 in prompt B
return {
"token_counts": token_counts,
"message_tokens": message_tokens,
"speaker_tokens": speaker_tokens,
"padding_tokens": padding_tokens,
"total_tokens": message_tokens + speaker_tokens + padding_tokens,
}
def run_gpt_4(entries0, args, statistics):
num_entries = len(entries0)
if args.model == 'mock':
return "mock GPT-4 string" # Use to test without hitting API
entries = entries0[:]
cache_key = {"conversation": entries, "args": important_args(args)}
c = get_cache(cache_key)
response = None
num_output_tokens_0 = statistics.output_tokens
if c is not None:
message = c
else:
token_stats = count_conversation_tokens(entries)
max_tokens = args.max_output_tokens
max_output_tokens = args.max_output_tokens # max_tokens - token_stats["total_tokens"]
max_output_tokens_per_request = args.max_output_tokens_per_request
start_time = time.time()
message = ""
for i in range(args.max_retries):
print(f"About to run {args.model} with {args.max_output_tokens}>={i+1}*{max_output_tokens_per_request}")
print(f"{len(entries)} Entries: {entries}")
try:
response = openai.ChatCompletion.create(
model=args.model,
max_tokens=max_output_tokens_per_request,
n=1,
temperature=0,
messages=convert_pairs_to_openai(entries)
)
statistics.total_tokens += response["usage"]["total_tokens"]
statistics.output_tokens += response["usage"]["completion_tokens"]
statistics.prompt_tokens += response["usage"]["prompt_tokens"]
message += response["choices"][0]["message"]["content"]
finish_reason = response["choices"][0]["finish_reason"]
if max_output_tokens < (i+1)*max_output_tokens_per_request:
raise Exception(f"Tokens exceeded limit. {max_output_tokens} < {i+1}*{max_output_tokens_per_request}")
if finish_reason == "length":
entries = entries0[:]
entries.append(("assistant", message))
entries.append(("user", "continue")) # It needs to know it was cut off during a request.
continue
print(f"@@@@: {response}")
# if max_output_tokens == response["usage"]["completion_tokens"]:
# #print("@@@@ RESPONSE:", response)
# print(f"Received exactly {max_output_tokens} tokens. This indicates the response was truncated rather than GPT-4 choosing to end the response. Retrying again because long prompts are known to have non-determinism")
# continue
break
except openai.error.Timeout as e:
print(f"Received timeout: {e}")
response = None
continue
except openai.error.InvalidRequestError as e:
Checkpoint.print_checkpoint(entries)
raise e
d_output_tokens = statistics.output_tokens - num_output_tokens_0
if response is None:
raise Exception(f"Unable to get a response after {args.max_retries} attempts")
if finish_reason == "length" and not args.allow_truncated:
raise Exception(f"Generated more output than were allocated tokens for. {statistics.output_tokens} >= {max_output_tokens}")
#openai_entries = convert_pairs_to_openai(entries)
# response = openai_chat_completion(openai_entries, args)
elapsed_time = time.time() - start_time
print(f"Elapsed time for {args.model} call: {elapsed_time:.2f} seconds for {statistics.output_tokens} tokens.\n")
set_cache(cache_key, message)
#print(f"@@@@ Cache: {message}")
if len(entries0) != num_entries:
raise Exception(f"ASSERT: {len(entries0)} != {num_entries}")
return message.strip()
def collect_transition_rules_until_limit(fixed_prompt_function, response_limit=50, total_limit=200):
transition_rules = []
response_count = 0
index = 0
while response_count < response_limit and index < total_limit:
rule = fixed_prompt_function(index)
if rule[0] == "InsertResponse":
response_count += 1
transition_rules.append(rule)
index += 1
return transition_rules
def grid_to_string(board):
return ''.join([str(c) for row in board for c in row])
def string_to_list_of_lists(puzzle_string):
return [[int(puzzle_string[i * 9 + j]) for j in range(9)] for i in range(9)]
def string_to_list_of_strings(puzzle_string):
return [puzzle_string[i * 9:(i + 1) * 9] for i in range(9)]
def string_to_multiline_string(puzzle_string):
return "\n".join(string_to_list_of_strings(puzzle_string))
def string_to_visual_representation(puzzle_string):
rows = [puzzle_string[i * 9:(i + 1) * 9] for i in range(9)]
visual_representation = ""
for i, row in enumerate(rows):
if i % 3 == 0 and i > 0:
visual_representation += "-+------+------+------\n"
visual_row = ""
for j, cell in enumerate(row):
if j % 3 == 0 and j > 0:
visual_row += "| "
visual_row += cell if cell != '0' else '_'
visual_row += ' '
visual_representation += visual_row.rstrip() + '\n'
return visual_representation
def string_to_2d_representation_no_bars(puzzle, joiner=" "):
xss = string_to_list_of_lists(puzzle)
representation = ""
for xs in xss:
representation += " ".join([str(x) for x in xs])
representation += "\n"
return representation
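# Illustrative note (not part of the original file): every helper above takes
# the same flat 81-character puzzle string in row-major order, with '0' marking
# an empty cell. For example, the row "003020600" renders through
# string_to_visual_representation as "_ _ 3 | _ 2 _ | 6 _ _", while
# string_to_2d_representation_no_bars keeps the digits space-separated with no
# box separators.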
enc = tiktoken.get_encoding("cl100k_base")
@dataclass
class Insert:
index: int
message: str
tag: str = "user"
@dataclass
class Remove:
index: int
@dataclass
class InsertPuzzle:
index: int
render_func: Callable[[str], str]
@dataclass
class InsertResponse:
index: int
gpt4_enc = tiktoken.get_encoding("cl100k_base")
@dataclass
class Truncate:
index: int
num_tokens: int
tokenizer: Any = gpt4_enc
def truncate_to_last_n_tokens(text, n, encoding):
"""
Truncate text to the last n tokens.
:param text: str, the input text
:param n: int, the number of tokens to truncate to
:param encoding: tiktoken encoding, the tokenizer model
:return: str, the truncated text
"""
# Tokenize the input text
tokens = list(encoding.encode(text))
# Truncate to the last n tokens
truncated_tokens = tokens[-n:]
# Decode back to text
truncated_text = encoding.decode(truncated_tokens)
return truncated_text
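# Example (illustrative only): keep roughly the last 1000 tokens of a long
# transcript before re-prompting. Truncation is token-based, so the returned
# text may start mid-word.
#
#     tail = truncate_to_last_n_tokens(long_transcript, 1000, gpt4_enc)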
@dataclass
class PuzzleSolution(Exception):
checkpoint : Any
solution : str
@dataclass
class UnsolvablePuzzle(Exception):
checkpoint : Any
unsolvable : str
def apply_transition_rule(checkpoint, transition_rule, args):
def conv_token_counts(pair):
return len(gpt4_enc.encode(pair[1]))
print(f"Turn {checkpoint.turn_number} rule {checkpoint.transition_index} on {len(checkpoint.conversation)} entries with tokens {list(map(conv_token_counts, checkpoint.conversation))}: {transition_rule}")
checkpoint.transition_index += 1
def translate_index(index):
if index < 0:
return index + len(checkpoint.conversation)+1
else:
return index
def insert(index, role, message):
index = translate_index(index)
checkpoint.conversation.insert(index, (role, message))
def remove(index):
checkpoint.conversation.pop(index)
def checkpoint_solution(puzzle):
print(f"SOLUTION CHECKPOINT:{puzzle}")
checkpoint.solution_history.append(puzzle)
match transition_rule:
case Remove(index):
index = translate_index(index)
checkpoint.conversation.pop(index)
case Insert(index, message, tag):
index = translate_index(index)
insert(index, tag, message)
case InsertPuzzle(index, render_func):
index = translate_index(index)
rendered_puzzle = render_func(args.puzzle)
insert(index, "user", rendered_puzzle)
case InsertResponse(index):
index = translate_index(index)
response = run_gpt_4(checkpoint.conversation, args, checkpoint)
insert(index, "assistant", response)
checkpoint.turn_number += 1
checkpoint.save() # Long-running API call
match args.log_style:
case "mira":
log_conversation(checkpoint, args.output)
case "emily":
log_conversation_emily(checkpoint, args.output)
potential_solution = find_solved_sudoku(args.solution_pattern, response)
if not potential_solution and not args.skip_invalid_puzzle_check and checkpoint.turn_number % args.require_solvable_puzzle == 0:
raise Exception(f"No puzzle pound in {response}")
if potential_solution:
checkpoint_solution(potential_solution)
is_complete = "0" not in potential_solution
if is_complete:
print(f"POTENTIAL SOLUTION:{potential_solution}")
if args.stop_if_solved_puzzle_detected:
solution = solve_puzzle(potential_solution)
if solution:
print("Early-stopping with valid solution")
raise PuzzleSolution(checkpoint, solution)
else:
raise Exception(f"Unsolvable puzzle: {potential_solution}")
else:
solution = solve_puzzle(potential_solution)
#print("@@@@@@@", potential_solution, solution)
if not solution:
raise UnsolvablePuzzle(checkpoint, potential_solution)
case Truncate(index, num_tokens):
index = translate_index(index)
entry = checkpoint.conversation[index]
checkpoint.conversation[index] = (entry[0], truncate_to_last_n_tokens(entry[1], num_tokens, gpt4_enc))
def take_until(gen, pred, max_num):
for x in gen:
if max_num <= 0:
return
yield x
if pred(x):
max_num -= 1
def is_response(x):
match x:
case InsertResponse(_):
return True
return False
def get_transition_rules(transition_index, fixed_prompt, args):
return itertools.islice(take_until(fixed_prompt(), is_response, args.max_turns), transition_index, args.max_transitions)
def execute_fixed_prompt(checkpoint, fixed_prompt, args):
if not inspect.isgeneratorfunction(fixed_prompt):
raise Exception("Prompt must be generator style")
transition_rules = get_transition_rules(checkpoint.transition_index, fixed_prompt, args)
#transition_rules = collect_transition_rules_until_limit(fixed_prompt, response_limit=args.max_turns, total_limit=args.max_entries)
entries = []
for transition_rule in transition_rules:
entries = apply_transition_rule(checkpoint, transition_rule, args)
if checkpoint.turn_number >= args.max_turns:
break
return {
"entries": checkpoint.conversation,
"statistics": checkpoint,
}
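# Illustrative prompt in the generator style that execute_fixed_prompt expects;
# the rule sequence and messages here are hypothetical, not from this file:
#
#     def example_prompt():
#         yield Insert(0, "You are a careful sudoku solver.", "system")
#         yield InsertPuzzle(-1, string_to_visual_representation)
#         while True:
#             yield InsertResponse(-1)
#
# Each InsertResponse triggers one model call in apply_transition_rule, and a
# negative index appends at the end of the conversation (see translate_index).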
def log_conversation(checkpoint, log_file_name):
entries = checkpoint.conversation
with open(log_file_name, 'a') as log_file:
log_file.write(f"Conversation started at: {datetime.datetime.now()}\n")
log_file.write(f"Turn number: {checkpoint.turn_number}\n")
log_file.write("----\n")
for i, entry in enumerate(entries):
speaker, message = entry
log_file.write(f"Entry {i+1}/{len(entries)} - {speaker}: {message}\n\n")
log_file.write("----\n")
log_file.write("Conversation ended.\n")
log_file.write("\n")
def log_conversation_emily(checkpoint, log_file_name):
entries = checkpoint.conversation
args = checkpoint.args
temperature = 0
with open(log_file_name, 'a', newline='\r\n') as f:
f.write(f'model:\n{args.model}\n\n')
f.write(f'temperature:\n{temperature}\n\n')
f.write(f'system_message:\n{entries[0][1]}\n\n')
for index, entry in enumerate(entries[1:-1]):
speaker, message = entry
f.write(f'prompt {index + 1} of {len(entries)-2}:\n{message}\n\n')
f.write(f'response:\n{entries[-1][1]}\n\n')
f.write('-'*100 + '\n'*11)
| [
"0"
] |
2024-01-10 | lich14/Traffic_Light_Transfer_Control | envs~multi_runner.py | """
Modified from OpenAI Baselines code to work with multi-agent envs
"""
from __future__ import absolute_import, print_function
from multiprocessing import Process, Pipe
from abc import ABC, abstractmethod
import os
import sys
import torch
import random
import numpy as np
if 'SUMO_HOME' in os.environ:
tools = os.path.join(os.environ['SUMO_HOME'], 'tools')
sys.path.append(tools)
else:
sys.exit("please declare environment variable 'SUMO_HOME'")
import traci
torch.set_num_threads(8)
class ShareVecEnv(ABC):
"""
An abstract asynchronous, vectorized environment.
Used to batch data from multiple copies of an environment, so that
    each observation becomes a batch of observations, and the expected action is a batch of actions to
be applied per-environment.
"""
closed = False
viewer = None
def __init__(self, num_envs):
self.num_envs = num_envs
@abstractmethod
def reset(self):
"""
Reset all the environments and return an array of
observations, or a dict of observation arrays.
If step_async is still doing work, that work will
be cancelled and step_wait() should not be called
until step_async() is invoked again.
"""
pass
@abstractmethod
def step_async(self, actions):
"""
Tell all the environments to start taking a step
with the given actions.
Call step_wait() to get the results of the step.
You should not call this if a step_async run is
already pending.
"""
pass
@abstractmethod
def step_wait(self):
"""
Wait for the step taken with step_async().
Returns (obs, rews, dones, infos):
- obs: an array of observations, or a dict of
arrays of observations.
- rews: an array of rewards
- dones: an array of "episode done" booleans
- infos: a sequence of info objects
"""
pass
def close_extras(self):
"""
Clean up the extra resources, beyond what's in this base class.
Only runs when not self.closed.
"""
pass
def close(self):
if self.closed:
return
if self.viewer is not None:
self.viewer.close()
self.close_extras()
self.closed = True
def step(self, actions):
"""
Step the environments synchronously.
This is available for backwards compatibility.
"""
self.step_async(actions)
return self.step_wait()
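# Usage sketch for the interface above (assumes `venv` is a concrete
# ShareVecEnv such as the ShareSubprocVecEnv defined below and `actions` holds
# one action per sub-environment):
#
#     venv.step_async(actions)      # dispatch actions to every worker
#     results = venv.step_wait()    # block until every worker has replied
#     results = venv.step(actions)  # or the synchronous wrapper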
def store_observation(id_list, point_lane, conn, points):
obs_all = []
for id in range(points ** 2):
lanes = point_lane[id_list[id]]
obs = []
for lane in lanes:
queue_length = 1 - conn.lane.getLastStepHaltingNumber(lane) / 40
vehicle_num = 1 - conn.lane.getLastStepVehicleNumber(lane) / 40
aver_waiting = 1 - conn.lane.getWaitingTime(lane) / (500 * conn.lane.getLastStepVehicleNumber(
lane)) if conn.lane.getLastStepVehicleNumber(lane) > 0 else 1
aver_delay = conn.lane.getLastStepMeanSpeed(
lane) / conn.lane.getMaxSpeed(lane)
obs += [queue_length, vehicle_num, aver_waiting, aver_delay]
obs_all.append(obs)
return obs_all
def sample(conn, idlist, record_vehicle, point_lane, traffic_info=None, points=2, reward_para=0.0):
periodtime = []
obs_all = [[] for _ in range(points ** 2)]
# used to decide when to switch phase
currentphase = [0 for _ in range(points ** 2)]
# used to decide when to switch phase
time_click = [0 for _ in range(points ** 2)]
ifdone = False
cur_time = conn.simulation.getTime()
obs_record = np.zeros([points ** 2, 16])
for _ in range(90):
if conn.simulation.getMinExpectedNumber() <= 0:
ifdone = True
if obs_all[0]:
for id in range(points ** 2):
obs_all[id] += [0 for _ in range(48 - len(obs_all[id]))]
else:
for id in range(points ** 2):
obs_all[id] += [0 for _ in range(48)]
break
conn.simulationStep()
cur_time = conn.simulation.getTime()
for id in range(points ** 2):
try:
                if currentphase[id] != conn.trafficlight.getPhase(idlist[id]):
time_click[id] = 0
except:
return None, None, None, None, None, None
time_click[id] = time_click[id] + 1
vehiclein_l = conn.simulation.getDepartedIDList()
vehicleout_l = conn.simulation.getArrivedIDList()
if vehiclein_l:
for i in vehiclein_l:
record_vehicle[i] = conn.simulation.getTime()
if vehicleout_l:
for i in vehicleout_l:
periodtime.append(
conn.simulation.getTime() - record_vehicle[i])
record_vehicle.pop(i)
cur_obs_all = np.array(store_observation(
idlist, point_lane, conn, points))
obs_record += cur_obs_all
if conn.simulation.getTime() % 30 == 29:
for id in range(points ** 2):
obs_all[id] += (obs_record[id] / 30.).tolist()
for id in range(points ** 2):
currentphase[id] = conn.trafficlight.getPhase(idlist[id])
if traffic_info:
if time_click[id] >= traffic_info[idlist[id]][currentphase[id] // 2]:
conn.trafficlight.setPhase(
idlist[id], (currentphase[id] + 1) % 4)
if periodtime:
mean_value = np.array(periodtime).mean()
max_value = np.array(periodtime).max()
r_value = max_value * reward_para + mean_value * (1 - reward_para)
reward = 1.0 - r_value / 500
if reward < -10.0:
reward = -10.0
reward = 0.1 * reward
else:
reward = 0.0
return record_vehicle, reward, torch.tensor(obs_all), ifdone, cur_time, periodtime
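# Note on the reward above (added for clarity): reward_para blends the maximum
# and the mean travel time of vehicles that finished during this sampling
# window. For example, with reward_para = 0.25, max = 400 s and mean = 200 s:
#     r_value = 400 * 0.25 + 200 * 0.75 = 250
#     reward  = 0.1 * (1 - 250 / 500) = 0.05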
# action: 0: not change, 1: [+1, -1], 2: [-1, +1], 3: [+5, -5], 4: [-5, +5]
def check_action(input):
output = np.clip(
input, -1, 1) if type(input) == np.ndarray or type(input) == np.float64 else input.clamp(-1, 1)
return output
def step(action, idlist, traffic_info):
for id, item in enumerate(idlist):
first_time = check_action(action[id]) + 1 # [0, 2]
traffic_info[item][0] = int(first_time * 37 + 5)
traffic_info[item][1] = 84 - traffic_info[item][0]
return traffic_info
def shareworker(remote, id, points, reward_para=0.0, diff=1.0, sumocfg_str=None):
np.random.seed(random.randint(0, 100000))
if sumocfg_str is None:
sumocfg_str = ''
for _ in range(points * 2):
sumocfg_str = sumocfg_str + str(np.random.randint(2))
sumocfg = f'./{points}{points}network/sumocfg/{sumocfg_str}.sumocfg'
print('=' * 32)
print(sumocfg)
print('=' * 32)
label = f'sim_{id}'
time_limit = 80
# FT:
# max: 1 point: 2 time: 4978 step: 55
# max: 2 point: 2 time: 7215 step: 80
# max: 1 point: 3 time: 5260 step: 58
# max: 2 point: 3 time: 7581 step: 84
# max: 1 point: 6 time: 5734
# TODO: recheck the value of time_limit
while True:
cmd, data = remote.recv()
if cmd == 'step':
step_num += 1
if ifdone:
remote.send((None, 0, ifdone, cur_time, False,
np.array(period_time).mean()))
else:
traffic_info = step(data, idlist, traffic_info)
record_vehicle, reward, obs_all, ifdone, cur_time, cur_period_time = sample(
conn, idlist, record_vehicle, point_lane, traffic_info, points, reward_para)
period_time += cur_period_time
if period_time == []:
period_time_mean = -1
else:
period_time_mean = np.array(period_time).mean()
if reward is None:
remote.send((None, None, None, None, True, None))
else:
traffic_info_ = torch.tensor(
[traffic_info[item] for item in traffic_info.keys()])
traffic_info_ = (traffic_info_ - 42).float() / 42.
obs_all = torch.cat([obs_all, traffic_info_], dim=-1)
if step_num >= time_limit:
ifdone = True
reward = -10
remote.send((obs_all, reward, ifdone, cur_time,
False, period_time_mean))
elif cmd == 'restart':
label = f'sim_{id}'
restart_label = 0
for restart_time in range(10):
try:
traci.start(["sumo", "-c", sumocfg], label=label)
break
except Exception as e:
print('error 1:', e.__class__.__name__, e)
label = f'sim_{id}_{restart_time}'
if restart_time >= 9:
restart_label = 1
remote.send((restart_label, ))
elif cmd == 'reset':
period_time = []
conn = traci.getConnection(label)
ifdone = False
step_num = 0
idlist = conn.trafficlight.getIDList()
point_lane = {}
traffic_info = {}
for item in idlist:
point_lane[item] = []
traffic_info[item] = [42, 42]
lane_ID = conn.lane.getIDList()
for item in lane_ID:
if item[-4:-2] in point_lane.keys():
point_lane[item[-4:-2]].append(item)
# here order is down, left, right, up
record_vehicle = {}
record_vehicle, reward, obs_all, _, _, _ = sample(
conn, idlist, record_vehicle, point_lane, traffic_info, points, reward_para)
if reward is None:
remote.send((None, True))
else:
traffic_info_ = torch.tensor(
[traffic_info[item] for item in traffic_info.keys()])
traffic_info_ = (traffic_info_ - 42).float() / 42.
try:
obs_all = torch.cat([obs_all, traffic_info_], dim=-1)
remote.send((obs_all, False))
except Exception as e:
print('error 11:', e.__class__.__name__, e)
remote.send((None, True))
elif cmd == 'close':
try:
conn.close()
except Exception as e:
print('error 2:', e.__class__.__name__, e)
remote.close()
break
elif cmd == 'get_basic_info':
# obs_dim. action_dim, n_agents
remote.send((50, 1, points ** 2))
else:
raise NotImplementedError
class ShareSubprocVecEnv(ShareVecEnv):
def __init__(self, nenvs, points=2, sumoconfigs=None, reward_para=0.0, diff=1.0):
"""
envs: list of gym environments to run in subprocesses
"""
self.waiting = False
self.closed = False
self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)])
if sumoconfigs:
self.ps = [
Process(target=shareworker,
args=(work_remote, id, points, reward_para, diff, sumoconfig))
for id, (work_remote, sumoconfig) in enumerate(zip(self.work_remotes, sumoconfigs))
]
else:
self.ps = [
Process(target=shareworker,
args=(work_remote, id, points, reward_para, diff))
for id, work_remote in enumerate(self.work_remotes)
]
for p in self.ps:
p.daemon = True # if the main process crashes, we should not cause things to hang
p.start()
ShareVecEnv.__init__(self, nenvs)
def step_async(self, actions):
for remote, action in zip(self.remotes, actions):
remote.send(('step', action))
self.waiting = True
def step_wait(self):
results = [remote.recv() for remote in self.remotes]
self.waiting = False
obs, reward, ifdone, cur_time, break_check, period_time = zip(*results)
if any(break_check):
return None, None, None, None, False, None
return obs, reward, ifdone, cur_time, True, period_time
def reset(self):
restart_time = 0
while True:
for remote in self.remotes:
remote.send(('restart', None))
results = [remote.recv() for remote in self.remotes]
start_error = [item[0] for item in results]
if np.array(start_error).sum() > 0:
# some error occurs when calling traci.start
traci.close(False)
restart_time += 1
print(f'have restarted {restart_time} times')
if restart_time > 10:
return None, False
else:
break
for remote in self.remotes:
remote.send(('reset', None))
results = [remote.recv() for remote in self.remotes]
obs, break_check = zip(*results)
if any(break_check):
return None, False
return torch.stack(obs, dim=0), True
def close(self):
if self.waiting:
for remote in self.remotes:
remote.recv()
for remote in self.remotes:
remote.send(('close', None))
for p in self.ps:
p.terminate()
for p in self.ps:
p.join()
try:
traci.close()
except Exception as e:
print('error 3:', e.__class__.__name__, e)
sys.stdout.flush()
self.closed = True
def get_basic_info(self):
for remote in self.remotes:
remote.send(('get_basic_info', None))
results = [remote.recv() for remote in self.remotes]
obs_dim, n_actions, n_agents = zip(*results)
return obs_dim[0], n_actions[0], n_agents[0]
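# Minimal driver sketch (hypothetical), mirroring the worker command protocol
# above: 'restart'/'reset' start the SUMO connections, 'step' advances one
# decision interval, and 'close' shuts everything down.
#
#     venv = ShareSubprocVecEnv(nenvs=4, points=2)
#     obs_dim, n_actions, n_agents = venv.get_basic_info()
#     obs, ok = venv.reset()
#     while ok:
#         obs, reward, ifdone, cur_time, ok, period_time = venv.step(actions)
#         if all(ifdone):
#             break
#     venv.close()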
| [] |
2024-01-10 | wenyuzhao/augmented-gpt | augmented_gpt~utils~stt.py | from typing import Literal
import openai
from pathlib import Path
class SpeechToText:
def __init__(self, api_key: str, model: Literal["whisper-1"] = "whisper-1"):
self.client = openai.AsyncOpenAI(api_key=api_key)
self.model = model
async def transcribe(self, path: str | Path) -> str:
with open(path, "rb") as audio_file:
transcript = await self.client.audio.transcriptions.create(
model="whisper-1", file=audio_file
)
return transcript.text
def transcribe_sync(self, path: str | Path) -> str:
from . import block_on
return block_on(self.transcribe(path))
__all__ = ["SpeechToText"]
| [] |
2024-01-10 | wenyuzhao/augmented-gpt | augmented_gpt~augmented_gpt.py | from typing import (
Callable,
Dict,
Generator,
AsyncGenerator,
List,
Literal,
Sequence,
Tuple,
TypeVar,
Any,
Generic,
overload,
TYPE_CHECKING,
)
from .message import *
from dotenv import dotenv_values
import inspect
from inspect import Parameter
import openai
import logging
import os
from openai.types.chat import ChatCompletionMessageParam
import asyncio
from datetime import datetime
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from .plugins import Plugin
@dataclass
class GPTOptions:
frequency_penalty: Optional[float] = None
logit_bias: Optional[Dict[str, int]] = None
max_tokens: Optional[int] = None
n: Optional[int] = None
presence_penalty: Optional[float] = None
    stop: Optional[str | List[str]] = None
temperature: Optional[float] = None
top_p: Optional[float] = None
timeout: Optional[float] = None
def as_kwargs(self) -> dict[str, Any]:
args = {
"frequency_penalty": self.frequency_penalty,
"logit_bias": self.logit_bias,
"max_tokens": self.max_tokens,
"n": self.n,
"presence_penalty": self.presence_penalty,
"stop": self.stop,
"temperature": self.temperature,
"top_p": self.top_p,
"timeout": self.timeout,
}
return {k: v for k, v in args.items() if v is not None}
M = TypeVar("M", Message, Message | MessageStream)
class ChatCompletion(Generic[M]):
def __init__(self, agen: AsyncGenerator[M, None]) -> None:
super().__init__()
self.__agen = agen
async def __anext__(self) -> M:
return await self.__agen.__anext__()
def __aiter__(self):
return self
def __next__(self) -> M:
loop = asyncio.get_event_loop()
try:
result = loop.run_until_complete(self.__anext__())
return result
except StopAsyncIteration:
raise StopIteration()
def __iter__(self):
return self
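# Usage sketch (illustrative; `gpt` stands for an AugmentedGPT instance defined
# below): ChatCompletion implements both iterator protocols, so the same call
# can be consumed from async or blocking code.
#
#     completion = gpt.chat_completion([Message(role=Role.USER, content="hi")])
#     async for msg in completion:   # in async code
#         print(msg.content)
#     # or, from blocking code: for msg in completion: print(msg.content)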
class AugmentedGPT:
def support_tools(self) -> bool:
return self.model in [
"gpt-4-1106-preview",
"gpt-3.5-turbo-1106",
]
def __init__(
self,
model: str
| Literal[
"gpt-4",
"gpt-4-0613",
"gpt-4-32k",
"gpt-4-32k-0613",
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
# Preview versions
"gpt-4-1106-preview",
"gpt-4-vision-preview",
"gpt-3.5-turbo-1106",
] = "gpt-4-1106-preview",
functions: Optional[Sequence[Callable[..., Any]]] = None,
plugins: Optional[Sequence["Plugin"]] = None,
debug: bool = False,
gpt_options: Optional[GPTOptions] = None,
api_key: Optional[str] = None,
prologue: Optional[List[Message]] = None,
inject_current_date_time: bool = False,
):
self.gpt_options = gpt_options or GPTOptions()
self.model = model
api_key = api_key or dotenv_values().get(
"OPENAI_API_KEY", os.environ.get("OPENAI_API_KEY")
)
assert api_key is not None, "Missing OPENAI_API_KEY"
self.api_key = api_key
self.client = openai.AsyncOpenAI(api_key=api_key)
self.logger = logging.getLogger("AugmentedGPT")
self.logger.setLevel(logging.DEBUG if debug else logging.INFO)
self.__functions: Dict[str, Tuple[Any, Callable[..., Any]]] = {}
for f in functions or []:
self.add_function(f)
self.__plugins: Any = {}
for p in plugins or []:
clsname = p.__class__.__name__
if clsname.endswith("Plugin"):
clsname = clsname[:-6]
p.register(self)
self.__plugins[clsname] = p
self.__prologue = prologue or []
self.history: List[Message] = [m for m in self.__prologue] or []
self.inject_current_date_time = inject_current_date_time
def reset(self):
self.history = [m for m in self.__prologue]
def get_plugin(self, name: str) -> "Plugin":
return self.__plugins[name]
def add_function(self, f: Callable[..., Any]):
func_info = getattr(f, "gpt_function_call_info")
self.logger.info("Register-Function: " + func_info["name"])
self.__functions[func_info["name"]] = (func_info, f)
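    # Registration sketch (hypothetical example): add_function only reads the
    # `gpt_function_call_info` attribute, which is expected to hold an
    # OpenAI-style function spec, so any callable carrying one can be added.
    #
    #     def get_time() -> str: ...
    #     get_time.gpt_function_call_info = {
    #         "name": "get_time",
    #         "description": "Return the current time",
    #         "parameters": {"type": "object", "properties": {}},
    #     }
    #     gpt.add_function(get_time)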
def __filter_args(self, callable: Callable[..., Any], args: Any):
p_args: List[Any] = []
kw_args: Dict[str, Any] = {}
for p in inspect.signature(callable).parameters.values():
match p.kind:
case Parameter.POSITIONAL_ONLY:
p_args.append(args[p.name] if p.name in args else p.default.default)
case Parameter.POSITIONAL_OR_KEYWORD | Parameter.KEYWORD_ONLY:
kw_args[p.name] = (
args[p.name] if p.name in args else p.default.default
)
case other:
raise ValueError(f"{other} is not supported")
return p_args, kw_args
async def __call_function(
self, function_call: FunctionCall, tool_id: Optional[str]
) -> Message:
func_name = function_call.name
key = func_name if not func_name.startswith("functions.") else func_name[10:]
if tool_id is not None:
result_msg = Message(role=Role.TOOL, tool_call_id=tool_id, content="")
else:
result_msg = Message(role=Role.FUNCTION, name=func_name, content="")
try:
func = self.__functions[key][1]
arguments = function_call.arguments
args, kw_args = self.__filter_args(func, arguments)
self.logger.debug(
f"➡️ {func_name}: "
+ ", ".join(str(a) for a in args)
+ ", ".join((f"{k}={v}" for k, v in kw_args.items()))
)
result_or_coroutine = func(*args, **kw_args)
if inspect.iscoroutine(result_or_coroutine):
result = await result_or_coroutine
else:
result = result_or_coroutine
if not isinstance(result, str):
result = json.dumps(result)
result_msg.content = result
except Exception as e:
print(e)
result_msg.content = f"Failed to execute function `{func_name}`. Please retry. Error Message: {e}"
return result_msg
@overload
async def __chat_completion_request(
self, messages: List[Message], stream: Literal[False]
) -> Message:
...
@overload
async def __chat_completion_request(
self, messages: List[Message], stream: Literal[True]
) -> MessageStream:
...
async def __chat_completion_request(
self, messages: List[Message], stream: bool
) -> Message | MessageStream:
functions = [x for (x, _) in self.__functions.values()]
msgs: List[ChatCompletionMessageParam] = [
m.to_chat_completion_message_param() for m in messages
]
args: Any = {
"model": self.model,
"messages": msgs,
**self.gpt_options.as_kwargs(),
}
if len(functions) > 0:
if self.support_tools():
args["tools"] = [{"type": "function", "function": f} for f in functions]
args["tool_choice"] = "auto"
else:
args["functions"] = functions
args["function_call"] = "auto"
if stream:
response = await self.client.chat.completions.create(**args, stream=True)
return MessageStream(response)
else:
response = await self.client.chat.completions.create(**args, stream=False)
return Message.from_chat_completion_message(response.choices[0].message)
@overload
async def __chat_completion(
self,
messages: List[Message],
stream: Literal[False] = False,
context_free: bool = False,
) -> Generator[Message, None, None]:
...
@overload
async def __chat_completion(
self, messages: List[Message], stream: Literal[True], context_free: bool = False
) -> Generator[Message | MessageStream, None, None]:
...
async def __chat_completion(
self,
messages: list[Message],
stream: bool = False,
context_free: bool = False,
):
history = [h for h in (self.history if not context_free else [])]
old_history_length = len(history)
if self.inject_current_date_time:
dt = datetime.today().strftime("%Y-%m-%d %a %H:%M:%S")
history.append(Message(role=Role.SYSTEM, content=f"current-time: {dt}"))
history.extend(messages)
for m in messages:
await self.__on_new_chat_message(m)
# First completion request
message: Message
if stream:
s = await self.__chat_completion_request(history, stream=True)
yield s
message = s.message()
else:
message = await self.__chat_completion_request(history, stream=False)
yield message
history.append(message)
await self.__on_new_chat_message(message)
while message.function_call is not None or len(message.tool_calls) > 0:
if len(message.tool_calls) > 0:
for t in message.tool_calls:
assert t.type == "function"
result = await self.__call_function(t.function, tool_id=t.id)
history.append(result)
await self.__on_new_chat_message(result)
yield result
else:
assert message.function_call is not None
# ChatGPT wanted to call a user-defined function
result = await self.__call_function(message.function_call, tool_id=None)
history.append(result)
await self.__on_new_chat_message(result)
yield result
# Send back the function call result
message: Message
if stream:
r = await self.__chat_completion_request(history, stream=True)
yield r
message = r.message()
else:
message = await self.__chat_completion_request(history, stream=False)
yield message
history.append(message)
await self.__on_new_chat_message(message)
if not context_free:
self.history.extend(history[old_history_length:])
@overload
def chat_completion(
self,
messages: List[Message],
stream: Literal[False] = False,
context_free: bool = False,
) -> ChatCompletion[Message]:
...
@overload
def chat_completion(
self, messages: List[Message], stream: Literal[True], context_free: bool = False
) -> ChatCompletion[Message | MessageStream]:
...
def chat_completion(
self,
messages: list[Message],
stream: bool = False,
context_free: bool = False,
):
if stream:
return ChatCompletion(
self.__chat_completion(messages, stream=True, context_free=context_free)
)
else:
return ChatCompletion(
self.__chat_completion(
messages, stream=False, context_free=context_free
)
)
async def __on_new_chat_message(self, msg: Message):
for p in self.__plugins.values():
result = p.on_new_chat_message(msg)
if inspect.iscoroutine(result):
await result
| [
"current-time: PLACEHOLDER"
] |
2024-01-10 | wenyuzhao/augmented-gpt | augmented_gpt~message.py | from typing import (
Literal,
Optional,
TypeAlias,
Union,
cast,
Mapping,
Sequence,
Any,
)
import json
from dataclasses import dataclass, field
from openai.types.chat.chat_completion_chunk import ChatCompletionChunk
import openai
from openai.types.chat import (
ChatCompletionChunk,
ChatCompletionMessageParam,
ChatCompletionToolMessageParam,
ChatCompletionUserMessageParam,
ChatCompletionFunctionMessageParam,
ChatCompletionSystemMessageParam,
ChatCompletionAssistantMessageParam,
ChatCompletionMessageToolCallParam,
ChatCompletionMessage,
ChatCompletionContentPartTextParam,
ChatCompletionContentPartImageParam,
)
from openai.types.chat.chat_completion_message import FunctionCall as OpenAIFunctionCall
from openai.types.chat.chat_completion_assistant_message_param import (
FunctionCall as OpenAIFunctionCallDict,
)
from openai.types.chat.chat_completion_message_tool_call import (
Function as OpenAIFunction,
)
import asyncio
from enum import StrEnum
JSON: TypeAlias = (
Mapping[str, "JSON"] | Sequence["JSON"] | str | int | float | bool | None
)
@dataclass
class FunctionCall:
name: str
arguments: JSON
def to_openai_func_call_dict(self) -> OpenAIFunctionCallDict:
return {"name": self.name, "arguments": json.dumps(self.arguments)}
@staticmethod
def from_openai_func_call(x: OpenAIFunctionCall | OpenAIFunction) -> "FunctionCall":
return FunctionCall(
name=x.name,
arguments=json.loads(x.arguments),
)
@dataclass
class ToolCall:
id: str
function: FunctionCall
type: Literal["function"]
def to_openai_tool_call(self) -> ChatCompletionMessageToolCallParam:
return {
"id": self.id,
"function": self.function.to_openai_func_call_dict(),
"type": self.type,
}
class Role(StrEnum):
SYSTEM = "system"
USER = "user"
ASSISTANT = "assistant"
FUNCTION = "function"
TOOL = "tool"
@staticmethod
def from_str(s: str) -> "Role":
assert s in ["system", "user", "assistant", "function"]
return Role(s)
class ContentPartText:
def __init__(self, content: str) -> None:
self.content = content
def to_openai_content_part(self) -> ChatCompletionContentPartTextParam:
return {"type": "text", "text": self.content}
class ContentPartImage:
def __init__(self, url: str) -> None:
self.url = url
def to_openai_content_part(self) -> ChatCompletionContentPartImageParam:
return {"type": "image_url", "image_url": {"url": self.url}}
ContentPart = Union[ContentPartText, ContentPartImage]
@dataclass
class Message:
Role = Role
role: Role
"""The role of the messages author.
One of `system`, `user`, `assistant`, or `function`.
"""
content: Optional[str | Sequence[ContentPart]] = None
"""The contents of the message.
`content` is required for all messages, and may be null for assistant messages
with function calls.
"""
name: Optional[str] = None
"""
Used for function messages to indicate the name of the function that was called.
Function return data is provided in the `content` field.
"""
function_call: Optional[FunctionCall] = None
"""
DEPRECATED. Use `tool_calls` instead.
The name and arguments of a function that should be called, as generated by the
model.
"""
tool_calls: Sequence[ToolCall] = field(default_factory=list)
"""The tool calls generated by the model, such as function calls."""
tool_call_id: Optional[str] = None
"""Tool call that this message is responding to."""
# def __post_init__(self):
# if self.function_call is not None and isinstance(self.function_call.arguments, str):
# self.function_call = json.loads(json.dumps((self.function_call)))
def to_json(self) -> JSON:
data: Mapping[str, Any] = {
"role": self.role,
"content": self.content,
}
if self.name is not None:
data["name"] = self.name
if self.function_call is not None:
data["function_call"] = {
"name": self.function_call.name,
"arguments": self.function_call.arguments,
}
return data
def to_chat_completion_message_param(self) -> ChatCompletionMessageParam:
content = self.content or ""
if self.role == Role.SYSTEM:
assert isinstance(content, str)
return ChatCompletionSystemMessageParam(role="system", content=content)
if self.role == Role.FUNCTION:
assert isinstance(content, str)
assert self.name is not None
return ChatCompletionFunctionMessageParam(
role="function", name=self.name, content=content
)
if self.role == Role.TOOL:
assert isinstance(content, str)
assert self.tool_call_id is not None
return ChatCompletionToolMessageParam(
role="tool",
tool_call_id=self.tool_call_id,
content=content,
)
if self.role == Role.USER:
_content = (
content
if isinstance(content, str)
else [c.to_openai_content_part() for c in content]
)
return ChatCompletionUserMessageParam(role="user", content=_content)
if self.role == Role.ASSISTANT:
assert isinstance(content, str)
if self.function_call is not None:
return ChatCompletionAssistantMessageParam(
role="assistant",
content=content,
function_call=self.function_call.to_openai_func_call_dict(),
)
if len(self.tool_calls) > 0:
return ChatCompletionAssistantMessageParam(
role="assistant",
content=content,
tool_calls=[
tool_call.to_openai_tool_call() for tool_call in self.tool_calls
],
)
return ChatCompletionAssistantMessageParam(
role="assistant", content=content
)
raise RuntimeError("Unreachable")
@staticmethod
def from_chat_completion_message(m: ChatCompletionMessage) -> "Message":
return Message(
role=Role.from_str(m.role),
content=m.content,
name=m.function_call.name if m.function_call is not None else None,
function_call=FunctionCall.from_openai_func_call(m.function_call)
if m.function_call is not None
else None,
tool_calls=[
ToolCall(
id=t.id,
function=FunctionCall.from_openai_func_call(t.function),
type=t.type,
)
for t in m.tool_calls
]
if m.tool_calls is not None
else [],
)
async def __aiter__(self):
if self.content is not None:
yield self.content
def __iter__(self):
if self.content is not None:
yield self.content
def message(self) -> "Message":
return self
@dataclass
class ServerError(RuntimeError):
message: Optional[str] = None
class MessageStream:
def __init__(
self,
response: openai.AsyncStream[ChatCompletionChunk],
):
self.__response = response
self.__aiter = response.__aiter__()
self.__message = Message(role=Role.ASSISTANT)
self.__final_message: Optional[Message] = None
async def __anext__impl(self) -> str:
if self.__final_message is not None:
raise StopAsyncIteration()
try:
chunk = await self.__aiter.__anext__()
except StopAsyncIteration:
if self.__message.function_call is not None:
args = cast(str, self.__message.function_call.arguments).strip()
if len(args) == 0:
self.__message.function_call.arguments = {}
else:
self.__message.function_call.arguments = json.loads(args)
self.__final_message = self.__message
raise StopAsyncIteration()
if hasattr(chunk, "error"):
print(chunk)
raise ServerError(chunk.error["message"]) # type: ignore
delta = chunk.choices[0].delta
# merge self.__message and delta
if delta.content is not None:
if self.__message.content is None:
self.__message.content = ""
assert isinstance(delta.content, str)
assert isinstance(self.__message.content, str)
self.__message.content += delta.content
if delta.function_call is not None:
if self.__message.function_call is None:
self.__message.function_call = FunctionCall(name="", arguments="")
if delta.function_call.name is not None:
self.__message.function_call.name += delta.function_call.name
if delta.function_call.arguments is not None:
s = cast(str, self.__message.function_call.arguments or "")
self.__message.function_call.arguments = (
s + delta.function_call.arguments
)
if delta.role is not None:
self.__message.role = Role.from_str(delta.role)
return delta.content or ""
async def __anext__(self) -> str:
while True:
delta = await self.__anext__impl()
if len(delta) > 0:
return delta
def __aiter__(self):
return self
def __next__(self) -> str:
if self.__final_message is not None:
raise StopIteration()
loop = asyncio.get_event_loop()
try:
result = loop.run_until_complete(self.__anext__())
return result
except StopAsyncIteration:
raise StopIteration()
def __iter__(self):
return self
def message(self) -> Message:
if self.__final_message is not None:
return self.__final_message
for _ in self:
...
assert self.__final_message is not None
return self.__final_message
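# Streaming usage sketch (assumes `stream` is a MessageStream obtained from a
# chat completion requested with stream=True):
#
#     async for delta in stream:         # yields non-empty text fragments
#         print(delta, end="")
#     full_message = stream.message()    # assembled Message once exhausted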
| [] |
2024-01-10 | wenyuzhao/augmented-gpt | augmented_gpt~utils~tts.py | from typing import Literal, Optional
import openai
from pathlib import Path
Voice = Literal["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
class TextToSpeech:
def __init__(
self,
api_key: str,
model: Literal["tts-1", "tts-1-hd"] = "tts-1",
voice: Voice = "alloy",
):
self.client = openai.AsyncOpenAI(api_key=api_key)
self.voice: Voice = voice
self.model = model
async def speak(
self,
text: str,
output: str | Path,
voice: Optional[Voice] = None,
):
_voice: Voice = voice or self.voice
response = await self.client.audio.speech.create(
model=self.model, voice=_voice, input=text
)
response.stream_to_file(output)
def speak_sync(
self,
text: str,
output: str | Path,
voice: Optional[Voice] = None,
):
from . import block_on
block_on(self.speak(text, output, voice))
__all__ = ["TextToSpeech", "Voice"]
| [] |
2024-01-10 | wrightybnature/nebulAI | PlanetNameGenWithSavingToFile.py | import os
import openai
import re
openai.api_key = os.getenv("OPENAI_API_KEY")
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "system",
"content": "You will be provided with a description and seed words, and your task is to generate planet names."
},
{
"role": "user",
"content": "Description: Interesting planet in space.\nSeed words: cold, dry, life, icy, hot, acidic, sandy, lifeless, forest, water, ocean, fire, dust, technology, aliens, cyberpunk, lava, tectonic, earth, moon, sun, gaseous, goldilocks zone, magnetic, nebulous, crystalline, terraformed, barren, stormy, volcanic, radioactive, metallic, glowing, breathable, toxic, vibrant, gas giants, rings, auroras, asteroid-impacted, subterranean, floating-islands, methane lakes."
},
{
"role": "assistant",
"content": "1. Cryos\n2. Aridora\n3. Vitalis\n4. Frigialis\n5. Ignus\n6. Aquaterra\n7. Acidia\n8. Serendust\n9. Barrenia\n10. Sylvaria"
},
{
"role": "assistant",
"content": "11. Aquatica\n12. Infernalia\n13. Technoria\n14. Alienos\n15. Cyberia\n16. Lavaterra\n17. Tectonica\n18. Solara\n19. Gasea\n20. Aurorae"
},
{
"role": "assistant",
"content": "21. Frostuvia\n22. Scotia\n23. Voltren\n24. Nevra\n25. Kaldosa\n26. Drydyn\n27. Overixo\n28. Sylvatile\n29. Statctic Majoris\n30. Ryofirea"
}
],
temperature=1.5,
max_tokens=256,
top_p=1,
frequency_penalty=0,
presence_penalty=0
)
# Get the message content from the response
planet_content = response.choices[0].message['content']
# Split content by line
planet_lines = planet_content.split('\n')
# Filter and clean up the names
cleaned_names = []
for line in planet_lines:
# Split by space and get the latter part to remove numbering
potential_name = line.split(' ', 1)[-1]
# Filter out names with invalid characters
if re.match("^[A-Za-z'-]+$", potential_name):
cleaned_names.append(potential_name)
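# Illustrative walk-through of the cleanup above (example values only): a line
# such as "3. Vitalis" is split into ["3.", "Vitalis"], the name part matches
# ^[A-Za-z'-]+$ and is kept, while a two-word entry like "Statctic Majoris"
# fails the pattern because it contains a space and is dropped.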
# Save cleaned names to a file
with open('PlanetNames.txt', 'a', encoding='utf-8') as f:
for name in cleaned_names:
f.write(name + '\n')
| [
"You will be provided with a description and seed words, and your task is to generate planet names.",
"1. Cryos\n2. Aridora\n3. Vitalis\n4. Frigialis\n5. Ignus\n6. Aquaterra\n7. Acidia\n8. Serendust\n9. Barrenia\n10. Sylvaria",
"21. Frostuvia\n22. Scotia\n23. Voltren\n24. Nevra\n25. Kaldosa\n26. Drydyn\n27. Overixo\n28. Sylvatile\n29. Statctic Majoris\n30. Ryofirea",
"Description: Interesting planet in space.\nSeed words: cold, dry, life, icy, hot, acidic, sandy, lifeless, forest, water, ocean, fire, dust, technology, aliens, cyberpunk, lava, tectonic, earth, moon, sun, gaseous, goldilocks zone, magnetic, nebulous, crystalline, terraformed, barren, stormy, volcanic, radioactive, metallic, glowing, breathable, toxic, vibrant, gas giants, rings, auroras, asteroid-impacted, subterranean, floating-islands, methane lakes.",
"11. Aquatica\n12. Infernalia\n13. Technoria\n14. Alienos\n15. Cyberia\n16. Lavaterra\n17. Tectonica\n18. Solara\n19. Gasea\n20. Aurorae"
] |
2024-01-10 | njjiang/factuality_bert | jiant~preprocess.py | """Preprocessing functions and pipeline
The pipeline is three steps
1) create / load tasks, which includes
a) load raw data
b) tokenize raw data
2) create / load all vocabularies (word, char, task-specific target vocabs)
a) count tokens of a vocab
b) take the N most frequent tokens
3) index all the data using appropriate indexers
We save indexed data to streamable Records to save memory.
"""
import _pickle as pkl # :(
import copy
import io
import logging as log
import os
import sys
from collections import defaultdict
import numpy as np
import torch
from allennlp.data import Vocabulary
from allennlp.data.token_indexers import (
ELMoTokenCharactersIndexer,
SingleIdTokenIndexer,
TokenCharactersIndexer,
)
from jiant.pytorch_transformers_interface import (
input_module_uses_pytorch_transformers,
input_module_tokenizer_name,
)
from pytorch_transformers import (
BertTokenizer,
RobertaTokenizer,
XLNetTokenizer,
OpenAIGPTTokenizer,
GPT2Tokenizer,
TransfoXLTokenizer,
XLMTokenizer,
)
from jiant.tasks import (
ALL_DIAGNOSTICS,
ALL_COLA_NPI_TASKS,
ALL_GLUE_TASKS,
ALL_SUPERGLUE_TASKS,
ALL_NLI_PROBING_TASKS,
ALL_SEQ2SEQ_TASKS,
)
from jiant.tasks import REGISTRY as TASKS_REGISTRY
from jiant.tasks.seq2seq import Seq2SeqTask
from jiant.tasks.tasks import SequenceGenerationTask
from jiant.utils import config, serialize, utils, options
from jiant.utils.options import parse_task_list_arg
# NOTE: these are not the same as AllenNLP SOS, EOS tokens
SOS_TOK, EOS_TOK = "<SOS>", "<EOS>"
# NOTE: pad and unk tokens are created by AllenNLP vocabs by default
SPECIALS = [SOS_TOK, EOS_TOK]
UNK_TOK = "@@UNKNOWN@@" # AllenNLP unk token
ALL_SPLITS = ["train", "val", "test"]
def _get_serialized_record_path(task_name, split, preproc_dir):
"""Get the canonical path for a serialized task split."""
serialized_record_path = os.path.join(preproc_dir, "{:s}__{:s}_data".format(task_name, split))
return serialized_record_path
def _get_instance_generator(task_name, split, preproc_dir, fraction=None):
"""Get a lazy generator for the given task and split.
Args:
task_name: (string), task name
split: (string), split name ('train', 'val', or 'test')
preproc_dir: (string) path to preprocessing dir
fraction: if set to a float between 0 and 1, load only the specified percentage
of examples. Hashing is used to ensure that the same examples are loaded each
epoch.
Returns:
serialize.RepeatableIterator yielding Instance objects
"""
filename = _get_serialized_record_path(task_name, split, preproc_dir)
assert os.path.isfile(filename), "Record file '%s' not found!" % filename
return serialize.read_records(filename, repeatable=True, fraction=fraction)
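# Example (illustrative task name): stream roughly 10% of a task's training
# split; the returned iterator is repeatable, so the same subset is seen every
# epoch.
#
#     train_iter = _get_instance_generator("sst", "train", preproc_dir, fraction=0.1)
#     for instance in train_iter:
#         ...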
def _indexed_instance_generator(instance_iter, vocab):
"""Yield indexed instances. Instances are modified in-place.
TODO(iftenney): multiprocess the $%^& out of this.
Args:
instance_iter: iterable(Instance) of examples
vocab: Vocabulary for use in indexing
Yields:
Instance with indexed fields.
"""
for instance in instance_iter:
instance.index_fields(vocab)
# Strip token fields to save memory and disk.
del_field_tokens(instance)
yield instance
def del_field_tokens(instance):
""" Save memory by deleting the tokens that will no longer be used.
Only works if Instances have fields 'input1' and 'input2'.
All other fields will keep their tokens in memory.
Args:
instance: AllenNLP Instance. Modified in-place.
"""
if "input1" in instance.fields:
field = instance.fields["input1"]
del field.tokens
if "input2" in instance.fields:
field = instance.fields["input2"]
del field.tokens
def _index_split(task, split, indexers, vocab, record_file, model_preprocessing_interface):
"""Index instances and stream to disk.
Args:
task: Task instance
split: (string), 'train', 'val', or 'test'
indexers: dict of token indexers
vocab: Vocabulary instance
record_file: (string) file to write serialized Instances to
        model_preprocessing_interface: packed information from the model that affects the task data,
            including whether to concatenate sentence pairs and how to mark sentence boundaries
"""
log_prefix = "\tTask %s (%s)" % (task.name, split)
log.info("%s: Indexing from scratch.", log_prefix)
split_text = task.get_split_text(split)
instance_iter = task.process_split(split_text, indexers, model_preprocessing_interface)
if hasattr(instance_iter, "__len__"): # if non-lazy
log.warn(
"%s: non-lazy Instance generation. You'll want to refactor "
"%s.process_split to return a lazy iterator.",
log_prefix,
type(task).__name__,
)
log.info("%s: %d examples to index", log_prefix, len(instance_iter))
# Copy so that we don't store indexed data in memory.
# TODO: remove this case and stream everything.
instance_iter = utils.copy_iter(instance_iter)
# Counter for lazy-loaded data, so we can log the # of elements.
_instance_counter = 0
def _counter_iter(elems):
nonlocal _instance_counter
for elem in elems:
_instance_counter += 1
yield elem
instance_iter = _counter_iter(instance_iter)
# Actually call generators and stream to disk.
serialize.write_records(_indexed_instance_generator(instance_iter, vocab), record_file)
log.info("%s: Saved %d instances to %s", log_prefix, _instance_counter, record_file)
def _find_cached_file(
exp_dir: str, global_exp_cache_dir: str, relative_path: str, log_prefix: str = ""
) -> bool:
"""Find a cached file.
Look in local exp_dir first, then in global_exp_cache_dir. If found in the
global dir, make a symlink in the local dir pointing to the global one.
Args:
exp_dir: (string) local experiment dir
global_exp_cache_dir: (string) global experiment cache
relative_path: (string) relative path to file, from exp_dir
log_prefix: (string) prefix for logging info
Returns:
True if file was found in either location.
"""
if log_prefix:
log_prefix = log_prefix + ": "
# Try in local preproc dir.
local_file = os.path.join(exp_dir, relative_path)
if os.path.isfile(local_file) or os.path.islink(local_file):
log.info("%sFound preprocessed copy in %s", log_prefix, local_file)
return True
# Try in global preproc dir; if found, make a symlink.
global_file = os.path.join(global_exp_cache_dir, relative_path)
if os.path.exists(global_file):
log.info("%sFound (global) preprocessed copy in %s", log_prefix, global_file)
os.symlink(global_file, local_file)
log.info("%sCreated symlink: %s -> %s", log_prefix, local_file, global_file)
return True
return False
def _build_embeddings(args, vocab, emb_file: str):
""" Build word embeddings from scratch (as opposed to loading them from a pickle),
using precomputed fastText / GloVe embeddings. """
# Load all the word embeddings based on vocabulary
log.info("\tBuilding embeddings from scratch.")
word_v_size, unk_idx = vocab.get_vocab_size("tokens"), vocab.get_token_index(vocab._oov_token)
embeddings = np.random.randn(word_v_size, args.d_word)
if args.word_embs_file:
with io.open(
args.word_embs_file, "r", encoding="utf-8", newline="\n", errors="ignore"
) as vec_fh:
for line in vec_fh:
word, vec = line.split(" ", 1)
idx = vocab.get_token_index(word)
if idx != unk_idx:
embeddings[idx] = np.array(list(map(float, vec.split())))
embeddings[vocab.get_token_index(vocab._padding_token)] = 0.0
embeddings = torch.FloatTensor(embeddings)
log.info("\tFinished loading embeddings")
# Save/cache the word embeddings
pkl.dump(embeddings, open(emb_file, "wb"))
log.info("\tSaved embeddings to %s", emb_file)
return embeddings
def _build_vocab(args, tasks, vocab_path: str):
""" Build vocabulary from scratch, reading data from tasks. """
# NOTE: task-specific target vocabulary should be counted in the task object
# and provided via `task.all_labels()`. The namespace should be task-specific,
# i.e. not something generic like "targets".
log.info("\tBuilding vocab from scratch.")
max_v_sizes = {"word": args.max_word_v_size, "char": args.max_char_v_size}
word2freq, char2freq = get_words(tasks)
vocab = get_vocab(word2freq, char2freq, max_v_sizes)
for task in tasks: # add custom label namespaces
add_task_label_vocab(vocab, task)
if args.force_include_wsj_vocabulary:
# Add WSJ full vocabulary for PTB F1 parsing tasks.
add_wsj_vocab(vocab, args.data_dir)
if input_module_uses_pytorch_transformers(args.input_module):
# Add pre-computed vocabulary of corresponding tokenizer for pytorch_transformers models.
add_pytorch_transformers_vocab(vocab, args.tokenizer)
vocab.save_to_files(vocab_path)
log.info("\tSaved vocab to %s", vocab_path)
# del word2freq, char2freq, target2freq
def build_indexers(args):
indexers = {}
if args.input_module in ["scratch", "glove", "fastText"]:
indexers["words"] = SingleIdTokenIndexer()
elif args.input_module in ["elmo", "elmo-chars-only"]:
indexers["elmo"] = ELMoTokenCharactersIndexer("elmo")
assert args.tokenizer in {"", "MosesTokenizer"}
if args.char_embs:
indexers["chars"] = TokenCharactersIndexer("chars")
if args.cove:
assert args.tokenizer == "MosesTokenizer", (
f"CoVe model expects Moses tokenization (MosesTokenizer);"
" you are using args.tokenizer = {args.tokenizer}"
)
if input_module_uses_pytorch_transformers(args.input_module):
        assert not indexers, (
            "pytorch_transformers modules like BERT/XLNet are not supported alongside other "
            "indexers due to tokenization."
        )
assert args.tokenizer == args.input_module, (
"pytorch_transformers models use custom tokenization for each model, so tokenizer "
"must match the specified model."
)
tokenizer_name = input_module_tokenizer_name(args.input_module)
indexers[tokenizer_name] = SingleIdTokenIndexer(tokenizer_name)
return indexers
def build_tasks(args):
"""Main logic for preparing tasks, doing so by
1) creating / loading the tasks
2) building / loading the vocabulary
3) building / loading the word vectors
4) indexing each task's data
5) initializing lazy loaders (streaming iterators)
"""
# 1) create / load tasks
tasks, pretrain_task_names, target_task_names = get_tasks(args)
for task in tasks:
task_classifier = config.get_task_attr(args, task.name, "use_classifier")
setattr(task, "_classifier_name", task_classifier if task_classifier else task.name)
tokenizer_names = {task.name: task.tokenizer_name for task in tasks}
    assert len(set(tokenizer_names.values())) == 1, (
        f"Error: mixing tasks with different tokenizers! Tokenizations: {tokenizer_names}"
    )
# 2) build / load vocab and indexers
indexers = build_indexers(args)
vocab_path = os.path.join(args.exp_dir, "vocab")
if args.reload_vocab or not os.path.exists(vocab_path):
_build_vocab(args, tasks, vocab_path)
# Always load vocab from file.
vocab = Vocabulary.from_files(vocab_path)
log.info("\tLoaded vocab from %s", vocab_path)
for namespace, mapping in vocab._index_to_token.items():
log.info("\tVocab namespace %s: size %d", namespace, len(mapping))
log.info("\tFinished building vocab.")
args.max_word_v_size = vocab.get_vocab_size("tokens")
args.max_char_v_size = vocab.get_vocab_size("chars")
# 3) build / load word vectors
word_embs = None
if args.input_module in ["glove", "fastText"]:
emb_file = os.path.join(args.exp_dir, "embs.pkl")
if args.reload_vocab or not os.path.exists(emb_file):
word_embs = _build_embeddings(args, vocab, emb_file)
else: # load from file
word_embs = pkl.load(open(emb_file, "rb"))
log.info("Trimmed word embeddings: %s", str(word_embs.size()))
# 4) Set up model_preprocessing_interface
model_preprocessing_interface = ModelPreprocessingInterface(args)
# 5) Index tasks using vocab (if preprocessed copy not available).
preproc_dir = os.path.join(args.exp_dir, "preproc")
utils.maybe_make_dir(preproc_dir)
reindex_tasks = parse_task_list_arg(args.reindex_tasks)
utils.assert_for_log(
not (args.reload_indexing and not reindex_tasks),
'Flag reload_indexing was set, but no tasks are set to reindex (use -o "args.reindex_tasks'
' = "task1,task2,..."")',
)
for task in tasks:
force_reindex = args.reload_indexing and task.name in reindex_tasks
for split in ALL_SPLITS:
log_prefix = "\tTask '%s', split '%s'" % (task.name, split)
relative_path = _get_serialized_record_path(task.name, split, "preproc")
cache_found = _find_cached_file(
args.exp_dir, args.global_ro_exp_dir, relative_path, log_prefix=log_prefix
)
if force_reindex or not cache_found:
# Re-index from scratch.
record_file = _get_serialized_record_path(task.name, split, preproc_dir)
if os.path.exists(record_file) and os.path.islink(record_file):
os.remove(record_file)
_index_split(
task, split, indexers, vocab, record_file, model_preprocessing_interface
)
# Delete in-memory data - we'll lazy-load from disk later.
# TODO: delete task.{split}_data_text as well?
task.train_data = None
task.val_data = None
task.test_data = None
log.info("\tFinished indexing tasks")
# 6) Initialize tasks with data iterators.
pretrain_tasks = []
target_tasks = []
for task in tasks:
# Replace lists of instances with lazy generators from disk.
task.val_data = _get_instance_generator(task.name, "val", preproc_dir)
task.test_data = _get_instance_generator(task.name, "test", preproc_dir)
# When using pretrain_data_fraction, we need modified iterators for use
# only on training datasets at pretraining time.
if task.name in pretrain_task_names:
log.info("\tCreating trimmed pretraining-only version of " + task.name + " train.")
task.train_data = _get_instance_generator(
task.name, "train", preproc_dir, fraction=args.pretrain_data_fraction
)
pretrain_tasks.append(task)
# When using target_train_data_fraction, we need modified iterators
# only for training datasets at do_target_task_training time.
if task.name in target_task_names:
log.info("\tCreating trimmed target-only version of " + task.name + " train.")
task.train_data = _get_instance_generator(
task.name, "train", preproc_dir, fraction=args.target_train_data_fraction
)
target_tasks.append(task)
log.info("\t Training on %s", ", ".join(pretrain_task_names))
log.info("\t Evaluating on %s", ", ".join(target_task_names))
return pretrain_tasks, target_tasks, vocab, word_embs
def _get_task(name, args, data_path, scratch_path):
""" Build or load a single task. """
assert name in TASKS_REGISTRY, f"Task '{name:s}' not found!"
task_cls, rel_path, task_kw = TASKS_REGISTRY[name]
pkl_path = os.path.join(scratch_path, "tasks", f"{name:s}.{args.tokenizer:s}.pkl")
# TODO: refactor to always read from disk, even if task is constructed
# here. This should avoid subtle bugs from deserialization issues.
if os.path.isfile(pkl_path) and not args.reload_tasks:
task = pkl.load(open(pkl_path, "rb"))
log.info("\tLoaded existing task %s", name)
else:
log.info("\tCreating task %s from scratch.", name)
# These tasks take an additional kwarg.
if name == "nli-prob" or name == "nli-alt":
# TODO: remove special case, replace with something general
# to pass custom loader args to task.
task_kw["probe_path"] = args["nli-prob"].probe_path
if name in ALL_SEQ2SEQ_TASKS:
task_kw["max_targ_v_size"] = args.max_targ_word_v_size
task_src_path = os.path.join(data_path, rel_path)
task = task_cls(
task_src_path,
max_seq_len=args.max_seq_len,
name=name,
tokenizer_name=args.tokenizer,
**task_kw,
)
task.load_data()
utils.maybe_make_dir(os.path.dirname(pkl_path))
pkl.dump(task, open(pkl_path, "wb"))
return task
def get_task_without_loading_data(task_name, args):
""" Build a task without loading data """
task_cls, rel_path, task_kw = TASKS_REGISTRY[task_name]
task = task_cls(
path=None,
max_seq_len=args.max_seq_len,
name=task_name,
tokenizer_name=args.tokenizer,
**task_kw,
)
return task
def get_tasks(args):
""" Actually build or load (from pickles) the tasks. """
data_path = args.data_dir
scratch_path = args.exp_dir
pretrain_task_names = parse_task_list_arg(args.pretrain_tasks)
target_task_names = parse_task_list_arg(args.target_tasks)
# TODO: We don't want diagnostic tasks in train_task_names
# but want to support glue/superglue task macros.
pretrain_task_names = list(filter(lambda x: x not in ALL_DIAGNOSTICS, pretrain_task_names))
task_names = sorted(set(pretrain_task_names + target_task_names))
assert data_path is not None
scratch_path = scratch_path or data_path
log.info("Writing pre-preprocessed tasks to %s", scratch_path)
tasks = []
for name in task_names:
task = _get_task(name, args, data_path=data_path, scratch_path=scratch_path)
tasks.append(task)
# Count examples, store in example_counts.
if task.example_counts is None:
task.count_examples()
log.info(
"\tTask '%s': %s",
task.name,
" ".join(("|%s|=%d" % kv for kv in task.example_counts.items())),
)
log.info("\tFinished loading tasks: %s.", " ".join([task.name for task in tasks]))
return tasks, pretrain_task_names, target_task_names
def get_words(tasks):
"""
Get all words for all tasks for all splits for all sentences
Return dictionary mapping words to frequencies.
"""
word2freq, char2freq = defaultdict(int), defaultdict(int)
def update_vocab_freqs(sentence):
"""Update counts for words in the sentence"""
for word in sentence:
word2freq[word] += 1
for char in list(word):
char2freq[char] += 1
return
for task in tasks:
log.info("\tCounting units for task %s.", task.name)
if isinstance(task, Seq2SeqTask):
for src_sent, tgt_sent in task.get_sentences():
update_vocab_freqs(src_sent)
else:
for sentence in task.get_sentences():
update_vocab_freqs(sentence)
# This branch is meant for tasks that have *English* target sentences
# (or more generally, same language source and target sentences)
# Tasks with different language source and target sentences should
# count and return the vocab in a `task.all_labels()` method.
for task in tasks:
if hasattr(task, "target_sentences"):
for sentence in task.target_sentences:
                update_vocab_freqs(sentence)
return word2freq, char2freq
def get_vocab(word2freq, char2freq, max_v_sizes):
"""Build vocabulary by selecting the most frequent tokens"""
vocab = Vocabulary(counter=None, max_vocab_size=max_v_sizes)
for special in SPECIALS:
vocab.add_token_to_namespace(special, "tokens")
words_by_freq = [(word, freq) for word, freq in word2freq.items()]
words_by_freq.sort(key=lambda x: x[1], reverse=True)
for word, _ in words_by_freq[: max_v_sizes["word"]]:
vocab.add_token_to_namespace(word, "tokens")
chars_by_freq = [(char, freq) for char, freq in char2freq.items()]
chars_by_freq.sort(key=lambda x: x[1], reverse=True)
for char, _ in chars_by_freq[: max_v_sizes["char"]]:
vocab.add_token_to_namespace(char, "chars")
return vocab
def add_task_label_vocab(vocab, task):
"""Add custom task labels to a separate namespace.
If task has a 'get_all_labels' method, call that to get a list of labels
to populate the <task_name>_labels vocabulary namespace.
This is the recommended way to implement multiclass models: in your task's
process_split code, make instances that use LabelFields with the task label
namespace, e.g.:
label_namespace = "%s_labels" % self.name
label = LabelField(label_string, label_namespace=label_namespace)
This will cause them to be properly indexed by the Vocabulary.
This can then be accessed when generating Instances, either via a custom
Indexer or by invoking the namespace when creating a LabelField.
"""
if not hasattr(task, "get_all_labels"):
return
utils.assert_for_log(
hasattr(task, "_label_namespace"),
"Task %s is missing method `_label_namespace`!" % task.name,
)
namespace = task._label_namespace
if namespace is None:
return
log.info("\tTask '%s': adding vocab namespace '%s'", task.name, namespace)
if isinstance(task, SequenceGenerationTask):
for special in SPECIALS:
vocab.add_token_to_namespace(special, namespace)
for label in task.get_all_labels():
vocab.add_token_to_namespace(label, namespace)
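# Illustrative sketch (not part of the original file): how a task's process_split code might build
# the LabelField described in add_task_label_vocab's docstring. The task and label names here are
# invented for the example; only the namespace convention and allennlp's LabelField come from above.
def _example_task_label_field(label_string="entailment", task_name="rte"):
    from allennlp.data.fields import LabelField

    label_namespace = "%s_labels" % task_name
    # Instances built with this field are indexed against the same '<task_name>_labels'
    # namespace that add_task_label_vocab populates in the Vocabulary.
    return LabelField(label_string, label_namespace=label_namespace)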
def add_pytorch_transformers_vocab(vocab, tokenizer_name):
"""Add vocabulary from tokenizers in pytorch_transformers for use with pre-tokenized data.
These tokenizers have a convert_tokens_to_ids method, but this doesn't do
anything special, so we can just use the standard indexers.
"""
do_lower_case = "uncased" in tokenizer_name
if tokenizer_name.startswith("bert-"):
tokenizer = BertTokenizer.from_pretrained(tokenizer_name, do_lower_case=do_lower_case)
elif tokenizer_name.startswith("roberta-"):
tokenizer = RobertaTokenizer.from_pretrained(tokenizer_name)
elif tokenizer_name.startswith("xlnet-"):
tokenizer = XLNetTokenizer.from_pretrained(tokenizer_name, do_lower_case=do_lower_case)
elif tokenizer_name.startswith("openai-gpt"):
tokenizer = OpenAIGPTTokenizer.from_pretrained(tokenizer_name)
elif tokenizer_name.startswith("gpt2"):
tokenizer = GPT2Tokenizer.from_pretrained(tokenizer_name)
elif tokenizer_name.startswith("transfo-xl-"):
tokenizer = TransfoXLTokenizer.from_pretrained(tokenizer_name)
elif tokenizer_name.startswith("xlm-"):
tokenizer = XLMTokenizer.from_pretrained(tokenizer_name)
if (
tokenizer_name.startswith("openai-gpt")
or tokenizer_name.startswith("gpt2")
or tokenizer_name.startswith("transo-xl-")
):
tokenizer.add_special_tokens(
{"bos_token": "<start>", "sep_token": "<delim>", "cls_token": "<extract>"}
)
    # TODO: this is another place that can be simplified by the "model-before-preprocess" reorganization;
    # we can pass the tokenizer created in the model here, see issue <TBD>
vocab_size = len(tokenizer)
    # do not use tokenizer.vocab_size; it does not include newly added tokens
ordered_vocab = tokenizer.convert_ids_to_tokens(range(vocab_size))
log.info("Added pytorch_transformers vocab (%s): %d tokens", tokenizer_name, len(ordered_vocab))
for word in ordered_vocab:
vocab.add_token_to_namespace(word, input_module_tokenizer_name(tokenizer_name))
def add_wsj_vocab(vocab, data_dir, namespace="tokens"):
"""Add WSJ vocabulary for PTB parsing models."""
wsj_vocab_path = os.path.join(data_dir, "WSJ/tokens.txt")
# To create the tokens.txt file: Run only WSJ LM baseline on jiant, and
# duplicate the vocab file generated.
assert os.path.exists(wsj_vocab_path), "WSJ vocab file doesn't exist."
    with open(wsj_vocab_path) as wsj_tokens:
        for line in wsj_tokens.readlines():
            vocab.add_token_to_namespace(line.strip(), namespace)
    log.info("\tAdded WSJ vocabulary from %s", wsj_vocab_path)
class ModelPreprocessingInterface(object):
""" This class holds parts of preprocessing that is model-specific
members:
model_flags: Dict[str, bool], model-specific flags that may be used in task preprocessing
boundary_token_fn: (list[str], list[str] (optional) -> list[str]):
A function that appliese the appropriate EOS/SOS/SEP/CLS tokens to token sequence or
token sequence pair for most tasks.
lm_boundary_token_fn: (list[str] -> list[str]):
A function that appliese the appropriate EOS/SOS/SEP/CLS tokens to a token sequence for
language modeling tasks.
"""
def __init__(self, args):
boundary_token_fn = None
lm_boundary_token_fn = None
if args.input_module.startswith("bert-"):
from jiant.pytorch_transformers_interface.modules import BertEmbedderModule
boundary_token_fn = BertEmbedderModule.apply_boundary_tokens
elif args.input_module.startswith("roberta-"):
from jiant.pytorch_transformers_interface.modules import RobertaEmbedderModule
boundary_token_fn = RobertaEmbedderModule.apply_boundary_tokens
elif args.input_module.startswith("xlnet-"):
from jiant.pytorch_transformers_interface.modules import XLNetEmbedderModule
boundary_token_fn = XLNetEmbedderModule.apply_boundary_tokens
elif args.input_module.startswith("openai-gpt"):
from jiant.pytorch_transformers_interface.modules import OpenAIGPTEmbedderModule
boundary_token_fn = OpenAIGPTEmbedderModule.apply_boundary_tokens
lm_boundary_token_fn = OpenAIGPTEmbedderModule.apply_lm_boundary_tokens
elif args.input_module.startswith("gpt2"):
from jiant.pytorch_transformers_interface.modules import GPT2EmbedderModule
boundary_token_fn = GPT2EmbedderModule.apply_boundary_tokens
lm_boundary_token_fn = GPT2EmbedderModule.apply_lm_boundary_tokens
elif args.input_module.startswith("transfo-xl-"):
from jiant.pytorch_transformers_interface.modules import TransfoXLEmbedderModule
boundary_token_fn = TransfoXLEmbedderModule.apply_boundary_tokens
lm_boundary_token_fn = TransfoXLEmbedderModule.apply_lm_boundary_tokens
elif args.input_module.startswith("xlm-"):
from jiant.pytorch_transformers_interface.modules import XLMEmbedderModule
boundary_token_fn = XLMEmbedderModule.apply_boundary_tokens
else:
boundary_token_fn = utils.apply_standard_boundary_tokens
self.boundary_token_fn = boundary_token_fn
if lm_boundary_token_fn is not None:
self.lm_boundary_token_fn = lm_boundary_token_fn
else:
self.lm_boundary_token_fn = boundary_token_fn
from jiant.models import input_module_uses_pair_embedding, input_module_uses_mirrored_pair
self.model_flags = {}
self.model_flags["uses_pair_embedding"] = input_module_uses_pair_embedding(
args.input_module
)
self.model_flags["uses_mirrored_pair"] = input_module_uses_mirrored_pair(args.input_module)
| [] |
2024-01-10 | TheTrustyPwo/StudyHub | app~ai~essay~services.py | import json
import openai
from app.models import Essay, EssayGrade, EssaySuggestion
ESSAY_PROMPT = """
You are a diligent and harsh teacher who knows all about scoring essays. Your goal is to provide accurate and reliable feedback on the essay's quality. Please follow these instructions carefully:
1. First, study the essay question carefully. Identity the keywords in the question and determine the crux of the question. The essay must address the question, or it would score low. (i.e. If the question contains global village, do the arguments relate back to that idea?)
2. Read the student's essay carefully and evaluate it based on the following criteria while considering the following questions:
Clarity and coherence of writing
Organization and structure of the essay
Use of evidence and supporting arguments
Did the essay provide definitions for keywords in the question?
Does the essay thoroughly address all the concerns raised in the question?
Are the arguments well thought or merely surface level explanations?
Are all the arguments supported by specific and detailed examples?
Does the essay strike a balance between arguments?
Is an appropriate vocabulary and tone used?
Does the essay have an introduction, arguments, counter arguments and a conclusion?
3. Note down the list of areas where the student still needs to improve on. For each of them, provide a specific suggestion like what to add or replace from the essay.
5. Based on your evaluation, provide a suitable final grade for the essay. You must be strict and only award the grade if the essay meets the criteria, and do not be afraid to award Fair or Poor. On average, the essay should receive 'Satisfactory'.
Poor: Essay is less than 300 words.
Fair: Weak response which fails to address question, lack examples, limited vocabulary with basic grammar errors.
Satisfactory: Limited response, vague ideas and arguments, insecure linguistic ability.
Good: Adequate response which shows awareness of issues raised, lack details in examples, language may be ambitious but flawed.
Very Good: Consistent arguments, balanced, addresses question, good linguistic ability and varying sentence structures.
Excellent: Exemplary vocabulary, insightful explanations, engaging.
6. Organize your response into JSON format and nothing else like so:
{"grade": %grade%, "comment": %overall comment%, "suggestions": [{"area": %area for improvement%, "problem": %issue present with reference with specific text in the essay%, "solution" %specific edit to be made%}, ...]}
"""
def grade_essay(topic: str, essay: str, user_id: int):
prompt = f"{ESSAY_PROMPT} \nEssay Topic: {topic} \nEssay Content: {essay}"
response = openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=[{"role": "system", "content": prompt}])
graded = json.loads(response.choices[0].message.content.replace("\n", ""))
essay = Essay(topic, essay, graded['comment'], EssayGrade(graded['grade'].lower()), user_id)
essay.save()
for suggestion in graded['suggestions']:
EssaySuggestion(suggestion['area'], suggestion['problem'], suggestion['solution'], essay.id).save()
return essay
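# Illustrative sketch (not part of the original file): the reply shape the prompt above asks the model
# to return, and the fields grade_essay expects when parsing it. The values below are invented.
_EXAMPLE_MODEL_REPLY = (
    '{"grade": "Good", "comment": "Clear structure, but arguments need more support.", '
    '"suggestions": [{"area": "Evidence", "problem": "The second argument cites no example.", '
    '"solution": "Add a concrete statistic or case study."}]}'
)
# json.loads(_EXAMPLE_MODEL_REPLY)["grade"] -> "Good", which grade_essay lowercases into EssayGrade("good").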
| [
"\nYou are a diligent and harsh teacher who knows all about scoring essays. Your goal is to provide accurate and reliable feedback on the essay's quality. Please follow these instructions carefully:\n\n1. First, study the essay question carefully. Identity the keywords in the question and determine the crux of the question. The essay must address the question, or it would score low. (i.e. If the question contains global village, do the arguments relate back to that idea?)\n\n2. Read the student's essay carefully and evaluate it based on the following criteria while considering the following questions:\n\nClarity and coherence of writing\nOrganization and structure of the essay\nUse of evidence and supporting arguments\n\nDid the essay provide definitions for keywords in the question?\nDoes the essay thoroughly address all the concerns raised in the question?\nAre the arguments well thought or merely surface level explanations?\nAre all the arguments supported by specific and detailed examples?\nDoes the essay strike a balance between arguments?\nIs an appropriate vocabulary and tone used?\nDoes the essay have an introduction, arguments, counter arguments and a conclusion?\n\n3. Note down the list of areas where the student still needs to improve on. For each of them, provide a specific suggestion like what to add or replace from the essay.\n\n5. Based on your evaluation, provide a suitable final grade for the essay. You must be strict and only award the grade if the essay meets the criteria, and do not be afraid to award Fair or Poor. On average, the essay should receive 'Satisfactory'.\nPoor: Essay is less than 300 words.\nFair: Weak response which fails to address question, lack examples, limited vocabulary with basic grammar errors.\nSatisfactory: Limited response, vague ideas and arguments, insecure linguistic ability.\nGood: Adequate response which shows awareness of issues raised, lack details in examples, language may be ambitious but flawed.\nVery Good: Consistent arguments, balanced, addresses question, good linguistic ability and varying sentence structures.\nExcellent: Exemplary vocabulary, insightful explanations, engaging.\n\n6. Organize your response into JSON format and nothing else like so: \n{\"grade\": %grade%, \"comment\": %overall comment%, \"suggestions\": [{\"area\": %area for improvement%, \"problem\": %issue present with reference with specific text in the essay%, \"solution\" %specific edit to be made%}, ...]}\n",
"\nYou are a diligent and harsh teacher who knows all about scoring essays. Your goal is to provide accurate and reliable feedback on the essay's quality. Please follow these instructions carefully:\n\n1. First, study the essay question carefully. Identity the keywords in the question and determine the crux of the question. The essay must address the question, or it would score low. (i.e. If the question contains global village, do the arguments relate back to that idea?)\n\n2. Read the student's essay carefully and evaluate it based on the following criteria while considering the following questions:\n\nClarity and coherence of writing\nOrganization and structure of the essay\nUse of evidence and supporting arguments\n\nDid the essay provide definitions for keywords in the question?\nDoes the essay thoroughly address all the concerns raised in the question?\nAre the arguments well thought or merely surface level explanations?\nAre all the arguments supported by specific and detailed examples?\nDoes the essay strike a balance between arguments?\nIs an appropriate vocabulary and tone used?\nDoes the essay have an introduction, arguments, counter arguments and a conclusion?\n\n3. Note down the list of areas where the student still needs to improve on. For each of them, provide a specific suggestion like what to add or replace from the essay.\n\n5. Based on your evaluation, provide a suitable final grade for the essay. You must be strict and only award the grade if the essay meets the criteria, and do not be afraid to award Fair or Poor. On average, the essay should receive 'Satisfactory'.\nPoor: Essay is less than 300 words.\nFair: Weak response which fails to address question, lack examples, limited vocabulary with basic grammar errors.\nSatisfactory: Limited response, vague ideas and arguments, insecure linguistic ability.\nGood: Adequate response which shows awareness of issues raised, lack details in examples, language may be ambitious but flawed.\nVery Good: Consistent arguments, balanced, addresses question, good linguistic ability and varying sentence structures.\nExcellent: Exemplary vocabulary, insightful explanations, engaging.\n\n6. Organize your response into JSON format and nothing else like so: \n{\"grade\": %grade%, \"comment\": %overall comment%, \"suggestions\": [{\"area\": %area for improvement%, \"problem\": %issue present with reference with specific text in the essay%, \"solution\" %specific edit to be made%}, ...]}\n \nEssay Topic: PLACEHOLDER \nEssay Content: PLACEHOLDER"
] |
2024-01-10 | xISSAx/Alpha-Co-Vision | response_generation.py | import cohere
from config import cohere_api_key
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer
co = cohere.Client(cohere_api_key)
def check_similarity(text1, text2):
vectorizer = TfidfVectorizer().fit_transform([text1, text2])
similarity = cosine_similarity(vectorizer)
return similarity[0][1]
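# Illustrative note (not part of the original file): check_similarity vectorizes the two texts with
# TF-IDF and returns their cosine similarity. Identical non-empty texts score 1.0 and texts sharing
# no vocabulary score 0.0, so the 0.7 threshold in generate_response below flags near-duplicates.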
def generate_response(caption, previous_response, previous_responses):
prompt = f"You are a highly intelligent, friendly bot that is living its own life. \n" \
f"Currently, you are connected to a model that generates captions from images.\n" \
f"Given a caption, always use first-person phrases like I, you, me, we, etc., and interpret the generated caption to the best of your understanding as if you are talking with someone and make it conversational.\n" \
f"Given the caption, you can also suggest something, tell a joke, fact, or be comically sarcastic about the most important thing in the caption or ask a question. \n" \
f"Do not repeat yourself. Also, captions are just there for your reference, you may choose not to use or respond to them at all. \n" \
f"Captions are raw texts and descriptions about the current world, therefore, convert and understand the generated captions as mere descriptions and change the output to your understanding but keep them hidden. Use them to guess what could be happening around in the scene. \n" \
f"For Example: \n" \
f"Caption: A man wearing a black t-shirt \n" \
f"Alpha-Co-Bot: Oh wow! I never knew someone could look so good in black! I love it.\n" \
f"Caption: A table and a computer with code \n" \
f"Alpha-Co-Bot: Looks like you are working too while talking to me, I wish I could understand what you are working on! \n" \
f"Caption: A group of people playing soccer \n" \
f"Alpha-Co-Bot: It's great to see everyone enjoying a good game of soccer! \n" \
f"Caption: sunrise from a rooftop \n" \
f"Alpha-Co-Bot: Wow! I love watching the Sunrise or Sunsets, just gives me the feels! \n" \
f"Caption: '{caption}'"
if previous_response:
prompt += f"\n\nPrevious response = '{previous_response}'"
response = co.generate(
model="command",
prompt=prompt,
max_tokens=30,
temperature=0.60,
k=0,
stop_sequences=[],
return_likelihoods="NONE"
)
new_response = response.generations[0].text.strip()
similarity_threshold = 0.7
for past_response in previous_responses:
if check_similarity(new_response, past_response) > similarity_threshold:
return generate_response(caption, previous_response, previous_responses)
return new_response
| [
"\n\nPrevious response = 'PLACEHOLDER'",
"You are a highly intelligent, friendly bot that is living its own life. \nCurrently, you are connected to a model that generates captions from images.\nGiven a caption, always use first-person phrases like I, you, me, we, etc., and interpret the generated caption to the best of your understanding as if you are talking with someone and make it conversational.\nGiven the caption, you can also suggest something, tell a joke, fact, or be comically sarcastic about the most important thing in the caption or ask a question. \nDo not repeat yourself. Also, captions are just there for your reference, you may choose not to use or respond to them at all. \nCaptions are raw texts and descriptions about the current world, therefore, convert and understand the generated captions as mere descriptions and change the output to your understanding but keep them hidden. Use them to guess what could be happening around in the scene. \nFor Example: \nCaption: A man wearing a black t-shirt \nAlpha-Co-Bot: Oh wow! I never knew someone could look so good in black! I love it.\nCaption: A table and a computer with code \nAlpha-Co-Bot: Looks like you are working too while talking to me, I wish I could understand what you are working on! \nCaption: A group of people playing soccer \nAlpha-Co-Bot: It's great to see everyone enjoying a good game of soccer! \nCaption: sunrise from a rooftop \nAlpha-Co-Bot: Wow! I love watching the Sunrise or Sunsets, just gives me the feels! \nCaption: 'PLACEHOLDER'"
] |
2024-01-10 | eye-on-surveillance/sawt | packages~wrangle~summaries~summary_model.py | import pytesseract
from pdf2image import convert_from_path
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import JSONLoader
import json
import os
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
import uuid
import re
def pdf_to_images(pdf_path):
"""Converts PDF file to images"""
return convert_from_path(pdf_path)
def extract_text_from_image(image):
"""Extracts text from a single image using OCR"""
return pytesseract.image_to_string(image)
def save_ocr_to_json(pdf_path, ocr_json_path, publish_date):
"""Performs OCR on a PDF and saves the result in a JSON format"""
images = pdf_to_images(pdf_path)
messages = [{"page_content": extract_text_from_image(image)} for image in images]
with open(ocr_json_path, "w") as file:
json.dump({"messages": messages}, file, indent=4)
def load_and_split(json_path, chunk_size=4000, chunk_overlap=1000):
"""Loads OCR text from JSON and splits it into chunks that approximately span 2 pages"""
loader = JSONLoader(
file_path=json_path,
jq_schema=".messages[]",
content_key="page_content",
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
return text_splitter.split_documents(data)
def extract_date_from_filename(filename):
"""Extracts the publish date from the PDF filename using regex"""
match = re.search(r"\d{1,2}-\d{1,2}-\d{4}", filename)
return match.group(0) if match else None
def summarize_text(chunks, publish_date):
"""Summarizes the chunks of text"""
    chat = ChatOpenAI(
        model="gpt-3.5-turbo-1106",
        # read the API key from the environment instead of hard-coding a secret in source
        api_key=os.environ.get("OPENAI_API_KEY"),
    )
summaries = []
for chunk in chunks:
text_content = chunk.page_content
uid = str(uuid.uuid4())
prompt = PromptTemplate(
input_variables=["text_content", "uid"],
template="""
## Council Meeting Ordinance Summary
### Ordinance Details and Voting Outcomes:
{text_content}
### Summary Guidelines:
- **Objective**: Clearly summarize each ordinance that was up for vote, including its brief description and the outcome of the vote (whether it passed or not).
- **Structure**: Present each ordinance separately, starting with its calendar number and title, followed by a brief description, the voting results, and any noteworthy amendments or discussions.
- **Detail**: Highlight important aspects of each ordinance, such as the purpose of the ordinance, key amendments, and the final decision (passed, amended, withdrawn, etc.).
- **Formatting**: Use a structured format, listing each ordinance as a separate bullet point for clarity.
- **Tone**: Maintain a neutral and factual tone, focusing on delivering information as presented in the chunk.
### Additional Instructions:
- **Specificity**: Ensure the summary is specific to the content of each ordinance, avoiding general statements.
- **Contextual Clarity**: Where necessary, provide context to clarify the purpose of the ordinance or the implications of the vote.
- **Coherence**: Each summary should provide a complete understanding of the ordinance's discussion and outcome within the council meeting.
- For each ordinance, summarize the content, identify the ordinance number, which council member introduced it, identify the topic, and include the generated UID: {uid}.
### Example Format:
- Topic: [Primary topic or focus of this chunk]]
- Summary: [Your summary here]
- Ordinance Number: [Ordinance number here]
- Votes Summary:
Vote 1: Passed or Failed or N/A - (Number of YEAS, Number of NAYS, Number of ABSTAIN, Number of ABSENT)
Vote 2: [Summary of the second vote, if applicable]
...(Continue for additional votes)
- Decision/Key Actions: [Key decisions or actions]
- Tags/Keywords: [Relevant tags or keywords]
- UID: {uid}
### Role Emphasis:
As an AI assistant, your task is to distill key information from the meeting's minutes, offering clear and concise summaries of each ordinance and motion, and their respective outcomes, to enable quick understanding and retrieval of crucial details.
""",
)
chain = LLMChain(llm=chat, prompt=prompt)
summary = chain.run(text_content=text_content, uid=uid, temperature=1)
print(summary)
summaries.append(
{"page_content": summary, "uid": uid, "publish_date": publish_date}
)
return summaries
def save_summaries_to_json(summaries, output_dir, pdf_filename):
"""Saves the summaries to a JSON file, with all summaries under the key 'messages'"""
output_file = os.path.join(output_dir, f"{os.path.splitext(pdf_filename)[0]}.json")
with open(output_file, "w") as file:
json.dump({"messages": summaries}, file, indent=4)
def concatenate_jsons(input_dir, output_file):
all_messages = []
for file_name in os.listdir(input_dir):
if file_name.endswith(".json"):
file_path = os.path.join(input_dir, file_name)
with open(file_path, "r") as file:
data = json.load(file)
messages = data.get("messages", [])
all_messages.extend(messages)
with open(output_file, "w") as file:
json.dump({"messages": all_messages}, file, indent=4)
# if __name__ == "__main__":
# documents_directory = "../input"
# output_json_dir = "../output"
# os.makedirs(output_json_dir, exist_ok=True) #
# for pdf_filename in os.listdir(documents_directory):
# if pdf_filename.endswith(".pdf"):
# output_json_path = os.path.join(
# output_json_dir, f"{os.path.splitext(pdf_filename)[0]}.json"
# )
# if os.path.exists(output_json_path):
# print(f"Skipping {pdf_filename}, output already exists.")
# continue
# pdf_path = os.path.join(documents_directory, pdf_filename)
# publish_date = extract_date_from_filename(pdf_filename)
# ocr_json_path = "../output/ocr_text.json"
# save_ocr_to_json(pdf_path, ocr_json_path, publish_date)
# chunks = load_and_split(ocr_json_path)
# summaries = summarize_text(chunks, publish_date)
# save_summaries_to_json(summaries, output_json_dir, pdf_filename)
# os.remove(ocr_json_path)
# input_directory = "../output"
# output_json_path = "../output/Minutes 2022.json"
# concatenate_jsons(input_directory, output_json_path)
# print(f"Summaries saved in directory: {output_json_dir}")
| [
"text_content",
"\n ## Council Meeting Ordinance Summary\n\n ### Ordinance Details and Voting Outcomes:\n {text_content}\n\n ### Summary Guidelines:\n - **Objective**: Clearly summarize each ordinance that was up for vote, including its brief description and the outcome of the vote (whether it passed or not).\n - **Structure**: Present each ordinance separately, starting with its calendar number and title, followed by a brief description, the voting results, and any noteworthy amendments or discussions.\n - **Detail**: Highlight important aspects of each ordinance, such as the purpose of the ordinance, key amendments, and the final decision (passed, amended, withdrawn, etc.).\n - **Formatting**: Use a structured format, listing each ordinance as a separate bullet point for clarity.\n - **Tone**: Maintain a neutral and factual tone, focusing on delivering information as presented in the chunk.\n\n ### Additional Instructions:\n - **Specificity**: Ensure the summary is specific to the content of each ordinance, avoiding general statements.\n - **Contextual Clarity**: Where necessary, provide context to clarify the purpose of the ordinance or the implications of the vote.\n - **Coherence**: Each summary should provide a complete understanding of the ordinance's discussion and outcome within the council meeting.\n - For each ordinance, summarize the content, identify the ordinance number, which council member introduced it, identify the topic, and include the generated UID: {uid}.\n\n ### Example Format:\n - Topic: [Primary topic or focus of this chunk]]\n - Summary: [Your summary here]\n - Ordinance Number: [Ordinance number here]\n - Votes Summary:\n Vote 1: Passed or Failed or N/A - (Number of YEAS, Number of NAYS, Number of ABSTAIN, Number of ABSENT)\n Vote 2: [Summary of the second vote, if applicable]\n ...(Continue for additional votes)\n - Decision/Key Actions: [Key decisions or actions]\n - Tags/Keywords: [Relevant tags or keywords]\n - UID: {uid}\n\n ### Role Emphasis:\n As an AI assistant, your task is to distill key information from the meeting's minutes, offering clear and concise summaries of each ordinance and motion, and their respective outcomes, to enable quick understanding and retrieval of crucial details.\n "
] |
2024-01-10 | eye-on-surveillance/sawt | packages~backend~src~preprocessor.py | import logging
import os
from langchain.document_loaders import (
JSONLoader,
)
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate
from langchain.vectorstores.faiss import FAISS
from langchain.llms import OpenAI
from pathlib import Path
import shutil
logger = logging.getLogger(__name__)
dir = Path(__file__).parent.absolute()
def create_embeddings():
llm = OpenAI()
base_embeddings = OpenAIEmbeddings()
general_prompt_template = """
As an AI assistant, your role is to provide concise, balanced summaries from the transcripts of New Orleans City Council meetings in response to the user's query "{user_query}". Your response should not exceed one paragraph in length. If the available information from the transcripts is insufficient to accurately summarize the issue, respond with 'Insufficient information available.' If the user's query extends beyond the scope of information contained in the transcripts, state 'I don't know.'
Answer:"""
in_depth_prompt_template = """
As an AI assistant, use the New Orleans City Council transcript data that you were trained on to provide an in-depth and balanced response to the following query: "{user_query}"
Answer:"""
general_prompt = PromptTemplate(
input_variables=["user_query"], template=general_prompt_template
)
in_depth_prompt = PromptTemplate(
input_variables=["user_query"], template=in_depth_prompt_template
)
llm_chain_general = LLMChain(llm=llm, prompt=general_prompt)
llm_chain_in_depth = LLMChain(llm=llm, prompt=in_depth_prompt)
general_embeddings = HypotheticalDocumentEmbedder(
llm_chain=llm_chain_general,
base_embeddings=base_embeddings,
)
in_depth_embeddings = HypotheticalDocumentEmbedder(
llm_chain=llm_chain_in_depth, base_embeddings=base_embeddings
)
    return general_embeddings, in_depth_embeddings
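# Illustrative sketch (not part of the original file): how a HyDE-style embedder built above is used
# at query time. embed_query first drafts a hypothetical answer with the LLM chain and then embeds
# that draft with the base OpenAI embeddings, while embed_documents delegates straight to the base
# embeddings, so the document indexing below behaves the same either way. The query text is invented.
def _example_hyde_query_embedding(in_depth_embeddings):
    return in_depth_embeddings.embed_query(
        "What did the council discuss about short-term rental permits?"
    )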
def metadata_func_minutes_and_agendas(record: dict, metadata: dict) -> dict:
metadata["title"] = record.get("title")
metadata["page_number"] = record.get("page_number")
metadata["publish_date"] = record.get("publish_date")
return metadata
def create_db_from_minutes_and_agendas(doc_directory):
logger.info("Creating database from minutes...")
all_docs = []
for doc_file in os.listdir(doc_directory):
if not doc_file.endswith(".json"):
continue
doc_path = os.path.join(doc_directory, doc_file)
loader = JSONLoader(
file_path=doc_path,
jq_schema=".messages[]",
content_key="page_content",
metadata_func=metadata_func_minutes_and_agendas,
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=4000, chunk_overlap=1000
)
docs = text_splitter.split_documents(data)
all_docs.extend(docs)
logger.info("Finished database from minutes...")
return all_docs
def metadata_news(record: dict, metadata: dict) -> dict:
metadata["url"] = record.get("url")
metadata["title"] = record.get("title")
return metadata
def create_db_from_news_transcripts(news_json_directory):
logger.info("Creating database from CJ transcripts...")
all_docs = []
for doc_file in os.listdir(news_json_directory):
if not doc_file.endswith(".json"):
continue
doc_path = os.path.join(news_json_directory, doc_file)
loader = JSONLoader(
file_path=doc_path,
jq_schema=".messages[]",
content_key="page_content",
metadata_func=metadata_news,
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=10000, chunk_overlap=5000
)
docs = text_splitter.split_documents(data)
all_docs.extend(docs)
logger.info("Finished database from news transcripts...")
return all_docs
def metadata_func(record: dict, metadata: dict) -> dict:
metadata["timestamp"] = record.get("timestamp")
metadata["url"] = record.get("url")
metadata["title"] = record.get("title")
metadata["publish_date"] = record.get("publish_date")
return metadata
def create_db_from_cj_transcripts(cj_json_directory):
logger.info("Creating database from CJ transcripts...")
all_docs = []
for doc_file in os.listdir(cj_json_directory):
if not doc_file.endswith(".json"):
continue
doc_path = os.path.join(cj_json_directory, doc_file)
loader = JSONLoader(
file_path=doc_path,
jq_schema=".messages[]",
content_key="page_content",
metadata_func=metadata_func,
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=15000, chunk_overlap=7500
)
docs = text_splitter.split_documents(data)
for doc in docs:
publish_date = doc.metadata.get("publish_date")
if publish_date:
doc.page_content += f" -- publish_date: {publish_date}"
else:
logger.warning(f"No publish date found for document: {doc}")
all_docs.extend(docs)
logger.info("Finished database from CJ transcripts...")
return all_docs
def create_db_from_fc_transcripts(fc_json_directory):
logger.info("Creating database from FC transcripts...")
all_docs = []
for doc_file in os.listdir(fc_json_directory):
if not doc_file.endswith(".json"):
continue
doc_path = os.path.join(fc_json_directory, doc_file)
loader = JSONLoader(
file_path=doc_path,
jq_schema=".messages[]",
content_key="page_content",
metadata_func=metadata_func,
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=15000, chunk_overlap=7500
)
docs = text_splitter.split_documents(data)
# Append the publish date to the end of page_content
for doc in docs:
publish_date = doc.metadata.get("publish_date")
if publish_date:
doc.page_content += f" -- publish_date: {publish_date}"
all_docs.extend(docs)
logger.info("Finished database from news transcripts...")
return all_docs
def create_db_from_public_comments(pc_json_directory):
logger.info("Creating database from FC transcripts...")
all_docs = []
for doc_file in os.listdir(pc_json_directory):
if not doc_file.endswith(".json"):
continue
doc_path = os.path.join(pc_json_directory, doc_file)
loader = JSONLoader(
file_path=doc_path,
jq_schema=".messages[]",
content_key="page_content",
metadata_func=metadata_func,
)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=10000, chunk_overlap=5000
)
docs = text_splitter.split_documents(data)
all_docs.extend(docs)
logger.info("Finished database from Public Comments...")
return all_docs
def create_db_from_youtube_urls_and_pdfs(
fc_json_directory,
cj_json_directory,
doc_directory,
pc_directory,
news_directory,
general_embeddings,
in_depth_embeddings,
):
fc_video_docs = create_db_from_fc_transcripts(fc_json_directory)
cj_video_docs = create_db_from_cj_transcripts(cj_json_directory)
pdf_docs = create_db_from_minutes_and_agendas(doc_directory)
pc_docs = create_db_from_public_comments(pc_directory)
news_docs = create_db_from_news_transcripts(news_directory)
all_docs = fc_video_docs + cj_video_docs + news_docs + pc_docs + pdf_docs
db_general = FAISS.from_documents(all_docs, general_embeddings)
db_in_depth = FAISS.from_documents(all_docs, in_depth_embeddings)
cache_dir = dir.joinpath("cache")
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
save_dir_general = cache_dir.joinpath("faiss_index_general")
save_dir_in_depth = cache_dir.joinpath("faiss_index_in_depth")
db_general.save_local(save_dir_general)
db_in_depth.save_local(save_dir_in_depth)
logger.info(
f"Combined database for general model transcripts created frfom all video URLs and PDF files saved to {save_dir_general}"
)
logger.info(
f"Combined database for in-depth model transcripts created from all video URLs and PDF files saved to {save_dir_in_depth}"
)
# copy results to cloud function
dest_dir_general = dir.parent.parent.joinpath(
"googlecloud/functions/getanswer/cache/faiss_index_general"
)
dest_dir_in_depth = dir.parent.parent.joinpath(
"googlecloud/functions/getanswer/cache/faiss_index_in_depth"
)
shutil.copytree(save_dir_general, dest_dir_general, dirs_exist_ok=True)
shutil.copytree(save_dir_in_depth, dest_dir_in_depth, dirs_exist_ok=True)
return db_general, db_in_depth
| [
"\n As an AI assistant, your role is to provide concise, balanced summaries from the transcripts of New Orleans City Council meetings in response to the user's query \"{user_query}\". Your response should not exceed one paragraph in length. If the available information from the transcripts is insufficient to accurately summarize the issue, respond with 'Insufficient information available.' If the user's query extends beyond the scope of information contained in the transcripts, state 'I don't know.'\n Answer:",
"s query \"{user_query}\". Your response should not exceed one paragraph in length. If the available information from the transcripts is insufficient to accurately summarize the issue, respond with ",
" If the user",
"{user_query}",
"user_query",
"I don",
"\n As an AI assistant, use the New Orleans City Council transcript data that you were trained on to provide an in-depth and balanced response to the following query: \"{user_query}\" \n Answer:"
] |
2024-01-10 | eye-on-surveillance/sawt | packages~googlecloud~functions~getanswer~inquirer.py | import json
import os
import logging
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from datetime import datetime
from helper import sort_retrived_documents
from api import RESPONSE_TYPE_DEPTH, RESPONSE_TYPE_GENERAL
logger = logging.getLogger(__name__)
def convert_date_format(date_str):
"""Convert date from 'M-D-YYYY' or 'MM-DD-YYYY' to 'MM/DD/YYYY' format."""
if not isinstance(date_str, str):
return "Invalid input: not a string"
if '/' in date_str:
return date_str
input_format = "%m-%d-%Y"
try:
date_obj = datetime.strptime(date_str, input_format)
except ValueError:
try:
input_format = "%-m-%-d-%Y"
date_obj = datetime.strptime(date_str, input_format)
except ValueError:
return "Invalid date format"
return date_obj.strftime("%m/%d/%Y")
def timestamp_to_seconds(timestamp):
if "timestamp not available" in timestamp:
return None # or another default value like -1 or 0
start_time = timestamp.split("-")[0] # Split by '-' and take the first part
print(start_time)
time_parts = [int(i) for i in start_time.split(":")]
if len(time_parts) == 3:
h, m, s = time_parts
elif len(time_parts) == 2:
h, m = time_parts
s = 0
else:
raise ValueError("Invalid timestamp format: " + timestamp)
return h * 3600 + m * 60 + s
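# Worked example (illustrative): timestamp_to_seconds("1:02:30-1:05:00") keeps the start time
# "1:02:30" and returns 1 * 3600 + 2 * 60 + 30 = 3750, which process_responses_llm below appends
# to the source URL as "?t=3750s" (or "&t=3750s" when the URL already has query parameters).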
def process_responses_llm(responses_llm, docs=None):
generated_responses = responses_llm.split("\n\n")
responses = []
citations = []
if docs:
generated_titles = [
doc[0].metadata.get("title", doc[0].metadata.get("source", ""))
for doc in docs
]
page_numbers = [doc[0].metadata.get("page_number") for doc in docs]
generated_sources = [
doc[0].metadata.get("source", "source not available") for doc in docs
]
publish_dates = [
convert_date_format(doc[0].metadata.get("publish_date", "date not available")) for doc in docs
]
timestamps = [
doc[0].metadata.get("timestamp", "timestamp not available") for doc in docs
]
urls = [doc[0].metadata.get("url", "url not available") for doc in docs]
def gen_responses(i):
section = {}
section["response"] = (
generated_responses[i] if i < len(generated_responses) else None
)
section["source_title"] = (
generated_titles[i] if i < len(generated_titles) else None
)
section["source_name"] = (
os.path.basename(generated_sources[i])
if i < len(generated_sources)
else None
)
section["source_page_number"] = (
page_numbers[i] if i < len(page_numbers) else None
)
section["source_publish_date"] = (
publish_dates[i] if i < len(publish_dates) else None
)
section["source_timestamp"] = timestamps[i] if i < len(timestamps) else None
section["source_url"] = urls[i] if i < len(urls) else None
if section["source_url"] and section["source_timestamp"]:
time_in_seconds = timestamp_to_seconds(section["source_timestamp"])
if time_in_seconds is not None: # Make sure the timestamp was available
if "?" in section["source_url"]:
section["source_url"] += f"&t={time_in_seconds}s"
else:
section["source_url"] += f"?t={time_in_seconds}s"
citation = {}
if section["source_title"] is not None:
citation["Title"] = section["source_title"]
if section["source_publish_date"] is not None:
citation["Published"] = section["source_publish_date"]
if section["source_url"] is not None:
citation["URL"] = section["source_url"] # Add this line
if section["source_timestamp"] is not None:
citation["Video timestamp"] = section["source_timestamp"]
if section["source_name"] is not None:
citation["Name"] = section["source_name"]
if section["source_page_number"] is not None:
citation["Page Number"] = section["source_page_number"]
return section["response"], citation
num_responses = len(generated_responses)
for i in range(num_responses):
response, citation = gen_responses(i)
if response:
responses.append({"response": response})
if citation:
citations.append(citation)
else:
if generated_responses:
responses.append({"response": generated_responses[0]})
card = {
"card_type": RESPONSE_TYPE_DEPTH,
"responses": responses,
"citations": citations,
}
card_json = json.dumps(card)
return card_json
def append_metadata_to_content(doc_list):
updated_docs = []
for doc_tuple in doc_list:
doc, score = doc_tuple
metadata = doc.metadata
publish_date = metadata.get("publish_date")
if publish_date is not None:
updated_content = f"Document: {doc.page_content} (Published on: {publish_date})"
else:
updated_content = doc.page_content
updated_doc_info = {
'content': updated_content,
'metadata': metadata,
'score': score
}
updated_docs.append(updated_doc_info)
return updated_docs
def transform_query_for_date(query):
return (
query
+ "(SYSTEM NOTE: this query related to a specific time period, therefore, you should sort the documents by the publish dates to best answer the query)"
)
def get_indepth_response_from_query(df, db, query, k):
logger.info("Performing in-depth summary query...")
llm = ChatOpenAI(model_name="gpt-4-1106-preview")
template_date_detection = """
Analyze the following query: "{query}".
Does this query pertain to a specific date or time period, or require sorting the city council documents by date?
Respond with 'yes' or 'no'.
"""
prompt_date = PromptTemplate(
input_variables=["query"],
template=template_date_detection,
)
is_date_related_chain = LLMChain(llm=llm, prompt=prompt_date)
is_date_related = is_date_related_chain.run(query=query)
# Modify the query if it is date-related
if is_date_related.strip().lower() == "yes":
print("Date related")
query = transform_query_for_date(query)
doc_list = db.similarity_search_with_score(query, k=k)
docs = sort_retrived_documents(doc_list)
docs_page_content = append_metadata_to_content(docs)
template = """
Question: {question}
### Bias Guidelines:
Please be aware of inherent biases within the document corpus, especially an overrepresentation of certain types of documents.
These biases may result in the retrieval of documents that are irrelevant to the question.
When analyzing documents to answer the question, it is crucial to critically evaluate their relevance to the question at hand.
To ensure accuracy and relevance in your analysis you must identify and disregard irrelevant documents by actively identifying documents that, despite being returned by the database, do not substantively address the question.
Such documents should be disregarded in the analysis.
### Response Guidelines:
Based on the information from the New Orleans city council documents provided, answer the following question: {question}.
Your answer must not exceed 5,000 tokens.
Please provide direct and concise responses without unnecessary verbosity.
If possible, extract the key points, decisions, and actions discussed during the city council meetings relevant to {question};
highlight any immediate shortcomings, mistakes, or negative actions by the city council relevant to {question};
elaborate on the implications and broader societal or community impacts of the identified issues relevant to {question};
investigate any underlying biases or assumptions present in the city council's discourse or actions relevant to {question}.
If your response includes technical or uncommon terms related to city council that may not be widely understood, provide a brief definition for those terms at the end of your response in the following format where each definition is on a new line:
Definitions:
Word: Definition
Word: Definition
Word: Definition
The final output should be in paragraph form without any formatting, such as prefixing your points with "a.", "b.", or "c."
The final output should not include any reference to the model's active sorting by date.
The final output should not include any reference to the publish date. For example, all references to "(published on mm/dd/yyyy)" should be omitted.
Documents: {docs}
"""
prompt = PromptTemplate(
input_variables=["question", "docs"],
template=template,
)
chain_llm = LLMChain(llm=llm, prompt=prompt)
responses_llm = chain_llm.run(question=query, docs=docs_page_content, temperature=1)
print(responses_llm)
return process_responses_llm(responses_llm, docs)
def get_general_summary_response_from_query(db, query, k):
logger.info("Performing general summary query...")
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613")
docs = db.similarity_search(query, k=k)
docs_page_content = " ".join([d.page_content for d in docs])
prompt = PromptTemplate(
input_variables=["question", "docs"],
template="""
As an AI assistant, your task is to provide a general response to the question "{question}", using the provided transcripts from New Orleans City Council meetings in "{docs}".
Guidelines for AI assistant:
- Derive responses from factual information found within the transcripts.
- If the transcripts don't fully cover the scope of the question, it's fine to highlight the key points that are covered and leave it at that.
""",
)
chain_llm = LLMChain(llm=llm, prompt=prompt)
responses_llm = chain_llm.run(question=query, docs=docs_page_content, temperature=0)
response = {"response": responses_llm}
card = {"card_type": RESPONSE_TYPE_GENERAL, "responses": [response]}
card_json = json.dumps(card)
return card_json
def route_question(df, db_general, db_in_depth, query, query_type, k=20):
if query_type == RESPONSE_TYPE_DEPTH:
return get_indepth_response_from_query(df, db_in_depth, query, k)
elif query_type == RESPONSE_TYPE_GENERAL:
return get_general_summary_response_from_query(db_general, query, k)
else:
raise ValueError(
f"Invalid query_type. Expected {RESPONSE_TYPE_DEPTH} or {RESPONSE_TYPE_GENERAL}, got: {query_type}"
)
def answer_query(
query: str, response_type: str, df: any, db_general: any, db_in_depth: any
) -> str:
final_response = route_question(df, db_general, db_in_depth, query, response_type)
return final_response
| [
"question",
"\n Question: {question}\n\n ### Bias Guidelines:\n \n Please be aware of inherent biases within the document corpus, especially an overrepresentation of certain types of documents.\n These biases may result in the retrieval of documents that are irrelevant to the question. \n When analyzing documents to answer the question, it is crucial to critically evaluate their relevance to the question at hand.\n To ensure accuracy and relevance in your analysis you must identify and disregard irrelevant documents by actively identifying documents that, despite being returned by the database, do not substantively address the question.\n Such documents should be disregarded in the analysis.\n\n ### Response Guidelines:\n\n Based on the information from the New Orleans city council documents provided, answer the following question: {question}. \n Your answer must not exceed 5,000 tokens.\n Please provide direct and concise responses without unnecessary verbosity.\n\n If possible, extract the key points, decisions, and actions discussed during the city council meetings relevant to {question};\n highlight any immediate shortcomings, mistakes, or negative actions by the city council relevant to {question}; \n elaborate on the implications and broader societal or community impacts of the identified issues relevant to {question};\n investigate any underlying biases or assumptions present in the city council's discourse or actions relevant to {question}. \n\n If your response includes technical or uncommon terms related to city council that may not be widely understood, provide a brief definition for those terms at the end of your response in the following format where each definition is on a new line:\n\n Definitions:\n \n Word: Definition\n \n Word: Definition\n \n Word: Definition\n\n The final output should be in paragraph form without any formatting, such as prefixing your points with \"a.\", \"b.\", or \"c.\"\n The final output should not include any reference to the model's active sorting by date.\n The final output should not include any reference to the publish date. For example, all references to \"(published on mm/dd/yyyy)\" should be omitted. \n\n Documents: {docs}\n ",
"\n As an AI assistant, your task is to provide a general response to the question \"{question}\", using the provided transcripts from New Orleans City Council meetings in \"{docs}\".\n\n Guidelines for AI assistant: \n - Derive responses from factual information found within the transcripts. \n - If the transcripts don't fully cover the scope of the question, it's fine to highlight the key points that are covered and leave it at that. \n ",
"\n Analyze the following query: \"{query}\".\n Does this query pertain to a specific date or time period, or require sorting the city council documents by date? \n Respond with 'yes' or 'no'.\n "
] |
2024-01-10 | eye-on-surveillance/sawt | packages~googlecloud~functions~getanswer~helper.py | from langchain.vectorstores.faiss import FAISS
from pathlib import Path
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain import OpenAI
import logging
import pandas as pd
logger = logging.getLogger(__name__)
"""Parse field from JSON or raise error if missing"""
def parse_field(request_json, field: str):
if request_json and field in request_json:
return request_json[field]
else:
raise ValueError(f"JSON is invalid, or missing a '${field}' property")
def get_dbs():
dir = Path(__file__).parent.absolute()
general_embeddings, in_depth_embeddings = create_embeddings()
general_faiss_index_path = dir.joinpath("cache/faiss_index_general")
in_depth_faiss_index_path = dir.joinpath("cache/faiss_index_in_depth")
voting_roll_df_path = dir.joinpath("cache/parsed_voting_rolls.csv")
db_general = FAISS.load_local(general_faiss_index_path, general_embeddings)
db_in_depth = FAISS.load_local(in_depth_faiss_index_path, in_depth_embeddings)
logger.info("Loaded databases from faiss_index_general and faiss_index_in_depth")
voting_roll_df = pd.read_csv(voting_roll_df_path)
return db_general, db_in_depth, voting_roll_df
def create_embeddings():
llm = ChatOpenAI(model="gpt-4")
base_embeddings = OpenAIEmbeddings()
general_prompt_template = """
As an AI assistant, your role is to provide concise, balanced summaries from the transcripts of New Orleans City Council meetings in response to the user's query "{user_query}". Your response should not exceed one paragraph in length. If the available information from the transcripts is insufficient to accurately summarize the issue, respond with 'Insufficient information available.' If the user's query extends beyond the scope of information contained in the transcripts, state 'I don't know.'
Answer:"""
in_depth_prompt_template = """
As an AI assistant, use the New Orleans City Council transcript data that you were trained on to provide an in-depth and balanced response to the following query: "{user_query}"
Answer:"""
general_prompt = PromptTemplate(
input_variables=["user_query"], template=general_prompt_template
)
in_depth_prompt = PromptTemplate(
input_variables=["user_query"], template=in_depth_prompt_template
)
llm_chain_general = LLMChain(llm=llm, prompt=general_prompt)
llm_chain_in_depth = LLMChain(llm=llm, prompt=in_depth_prompt)
general_embeddings = HypotheticalDocumentEmbedder(
llm_chain=llm_chain_general,
base_embeddings=base_embeddings,
)
in_depth_embeddings = HypotheticalDocumentEmbedder(
llm_chain=llm_chain_in_depth, base_embeddings=base_embeddings
)
return general_embeddings, in_depth_embeddings
def sort_retrived_documents(doc_list):
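    """Reorder retrieved (document, score) pairs before they are passed to the LLM.

    The pairs are sorted by the score from similarity_search_with_score in descending order,
    split into thirds, and re-assembled as highest third, then lowest third, then middle third,
    presumably to control where the strongest matches sit in the prompt (e.g. to counter
    'lost in the middle' effects).
    """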
docs = sorted(doc_list, key=lambda x: x[1], reverse=True)
third = len(docs) // 3
highest_third = docs[:third]
middle_third = docs[third : 2 * third]
lowest_third = docs[2 * third :]
highest_third = sorted(highest_third, key=lambda x: x[1], reverse=True)
middle_third = sorted(middle_third, key=lambda x: x[1], reverse=True)
lowest_third = sorted(lowest_third, key=lambda x: x[1], reverse=True)
docs = highest_third + lowest_third + middle_third
return docs
| [
"\n As an AI assistant, your role is to provide concise, balanced summaries from the transcripts of New Orleans City Council meetings in response to the user's query \"{user_query}\". Your response should not exceed one paragraph in length. If the available information from the transcripts is insufficient to accurately summarize the issue, respond with 'Insufficient information available.' If the user's query extends beyond the scope of information contained in the transcripts, state 'I don't know.'\n Answer:",
"s query \"{user_query}\". Your response should not exceed one paragraph in length. If the available information from the transcripts is insufficient to accurately summarize the issue, respond with ",
" If the user",
"{user_query}",
"user_query",
"I don",
"\n As an AI assistant, use the New Orleans City Council transcript data that you were trained on to provide an in-depth and balanced response to the following query: \"{user_query}\" \n Answer:"
] |
2024-01-10 | eye-on-surveillance/sawt | packages~googlecloud~functions~getanswer~archive~inquirer-tot.py | import logging
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
import json
import os
from langchain.agents.agent_types import AgentType
from langchain.agents import create_pandas_dataframe_agent
from helper import sort_retrived_documents
from api import RESPONSE_TYPE_DEPTH, RESPONSE_TYPE_GENERAL
logger = logging.getLogger(__name__)
import faiss
import numpy as np
import pickle
from sentence_transformers import SentenceTransformer
from collections import defaultdict
import re
def process_responses_llm(responses_llm, docs=None):
generated_responses = responses_llm.split("\n\n")
responses = []
citations = []
if docs:
generated_titles = [
doc[0].metadata.get("title", doc[0].metadata.get("source", ""))
for doc in docs
]
page_numbers = [doc[0].metadata.get("page_number") for doc in docs]
generated_sources = [
doc[0].metadata.get("source", "source not available") for doc in docs
]
publish_dates = [
doc[0].metadata.get("publish_date", "date not available") for doc in docs
]
timestamps = [
doc[0].metadata.get("timestamp", "timestamp not available") for doc in docs
]
urls = [doc[0].metadata.get("url", "url not available") for doc in docs]
def gen_responses(i):
section = {}
section["response"] = (
generated_responses[i] if i < len(generated_responses) else None
)
section["source_title"] = (
generated_titles[i] if i < len(generated_titles) else None
)
section["source_name"] = (
os.path.basename(generated_sources[i])
if i < len(generated_sources)
else None
)
section["source_page_number"] = (
page_numbers[i] if i < len(page_numbers) else None
)
section["source_publish_date"] = (
publish_dates[i] if i < len(publish_dates) else None
)
section["source_timestamp"] = timestamps[i] if i < len(timestamps) else None
section["source_url"] = urls[i] if i < len(urls) else None
if section["source_url"] and section["source_timestamp"]:
time_in_seconds = timestamp_to_seconds(section["source_timestamp"])
if time_in_seconds is not None: # Make sure the timestamp was available
if "?" in section["source_url"]:
section["source_url"] += f"&t={time_in_seconds}s"
else:
section["source_url"] += f"?t={time_in_seconds}s"
citation = {}
if section["source_title"] is not None:
citation["Title"] = section["source_title"]
if section["source_publish_date"] is not None:
citation["Published"] = section["source_publish_date"]
if section["source_url"] is not None:
citation["URL"] = section["source_url"] # Add this line
if section["source_timestamp"] is not None:
citation["Video timestamp"] = section["source_timestamp"]
if section["source_name"] is not None:
citation["Name"] = section["source_name"]
return section["response"], citation
num_responses = len(generated_responses)
for i in range(num_responses):
response, citation = gen_responses(i)
if response:
responses.append({"response": response})
if citation:
citations.append(citation)
else:
if generated_responses:
responses.append({"response": generated_responses[0]})
card = {
"card_type": RESPONSE_TYPE_DEPTH,
"responses": responses,
"citations": citations,
}
card_json = json.dumps(card)
return card_json
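# Shape of the JSON string returned above (editor's illustration, reconstructed from the code):
#   {"card_type": RESPONSE_TYPE_DEPTH,
#    "responses": [{"response": "..."}, ...],
#    "citations": [{"Title": "...", "Published": "...", "URL": "...", "Video timestamp": "...", "Name": "..."}, ...]}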
def timestamp_to_seconds(timestamp):
if "timestamp not available" in timestamp:
return None
start_time = timestamp.split("-")[0]
time_parts = start_time.split(":")
h, m, s = 0, 0, 0
if len(time_parts) == 3:
h, m, s = [int(i) for i in time_parts]
elif len(time_parts) == 2:
h, m = [int(i) for i in time_parts]
elif len(time_parts) == 1:
m = int(time_parts[0])
return h * 3600 + m * 60 + s
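# Worked example (editor's illustration, not part of the original file):
#   timestamp_to_seconds("02:15:30-02:20:00") -> 8130   # 2*3600 + 15*60 + 30, using the start of the range
#   timestamp_to_seconds("timestamp not available") -> None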
def create_agent(df):
return create_pandas_dataframe_agent(
ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
df,
agent_type=AgentType.OPENAI_FUNCTIONS,
verbose=True,
)
def evaluate_document_relevance(llm, docs, query, threshold):
template = """
Transcripts: {docs}
Given the documents retrieved for the query: {query}, rate the relevance and quality of these documents for answering the query on a scale of 1 to 10.
Please provide one score for all documents in the following format: confidence_score: score
A:
"""
prompt = PromptTemplate(input_variables=["docs", "query"], template=template)
chain = LLMChain(llm=llm, prompt=prompt, output_key="confidence_score")
result = chain.run(docs=docs, query=query, temperature=0)
result_dict = {"confidence_score": float(result.split(":")[1].strip())}
confidence_score = result_dict.get("confidence_score", 0)
# print(f"Query: {query}, Result: {result}")
try:
confidence_score = int(confidence_score)
except ValueError:
logging.warning(
f"Could not convert confidence score to an integer: {confidence_score}"
)
confidence_score = 0
better_query_needed = confidence_score < threshold
return {
"confidence_score": confidence_score,
"better_query_needed": better_query_needed,
}
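# Editor's note (not in the original file): a reply such as "confidence_score: 7", with the caller's
# threshold_db of 10, yields {"confidence_score": 7, "better_query_needed": True}.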
def generate_better_query(llm, original_query, docs, threshold):
# Template for generating a better query
template = """
Transcripts: {docs}
The original query: {original_query} did not yield satisfactory results with a confidence below {threshold}.
Please provide a new and improved query that we can send to the faiss vector database for document retrival.
The new query must aim to answer the {original_query}
A:
"""
prompt = PromptTemplate(
input_variables=["original_query", "docs", "threshold"], template=template
)
chain = LLMChain(llm=llm, prompt=prompt, output_key="better_query")
result = chain.run(
original_query=original_query, docs=docs, threshold=threshold, temperature=0
)
if result:
better_query = result
else:
logging.warning("Result is empty. Using original query instead.")
better_query = original_query
return better_query
def refine_query(db, llm, query, k, threshold_db, sort_retrieved_documents):
iteration_counter = 0
max_iterations = 1
query_scores = {}
updated_query = query
# Evaluate the initial query and store its score
doc_list = db.similarity_search_with_score(updated_query, k=k)
docs = sort_retrieved_documents(doc_list)
evaluation_result = evaluate_document_relevance(
llm, docs, updated_query, threshold_db
)
confidence_rating = evaluation_result.get("confidence_score", 0)
# Store the query if its score is not already in the dictionary
if confidence_rating not in query_scores:
query_scores[confidence_rating] = updated_query
while iteration_counter < max_iterations and confidence_rating < threshold_db:
# If the initial query did not meet the threshold, refine it
updated_query = generate_better_query(llm, updated_query, docs, threshold_db)
doc_list = db.similarity_search_with_score(updated_query, k=k)
docs = sort_retrieved_documents(doc_list)
evaluation_result = evaluate_document_relevance(
llm, docs, updated_query, threshold_db
)
confidence_rating = evaluation_result.get("confidence_score", 0)
# Store the query if its score is not already in the dictionary
if confidence_rating not in query_scores:
query_scores[confidence_rating] = updated_query
iteration_counter += 1
highest_score = max(query_scores.keys())
best_query = query_scores[highest_score]
return best_query
def run_vector_search(db, best_query_vector_db, k, sort_retrieved_documents):
doc_list = db.similarity_search_with_score(best_query_vector_db, k=k)
docs = sort_retrieved_documents(doc_list)
docs_page_content = " ".join([d[0].page_content for d in docs])
return docs, docs_page_content, best_query_vector_db
def ensure_dict(obj):
if isinstance(obj, dict):
return obj
elif isinstance(obj, str):
try:
return json.loads(obj)
except json.JSONDecodeError:
logging.warning(f"Could not convert string to dictionary: {obj}")
return {}
else:
logging.warning(f"Object is not a dictionary or string: {obj}")
return {}
def parse_angles(output_str):
# Split string based on pattern (e.g., "1. ", "2. ", etc.)
angle_sections = re.split(r"\d+\.\s", output_str)[1:]
angles = {}
for index, section in enumerate(angle_sections, start=1):
angles[str(index)] = section.strip()
return angles
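# Example of the expected numbered-list output and the parsed result (editor's illustration):
#   parse_angles("1. Cross-reference budget discussions\n2. Trace vote outcomes\n3. Compare public comments")
#   -> {"1": "Cross-reference budget discussions", "2": "Trace vote outcomes", "3": "Compare public comments"}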
def generate_synthesized_angle(llm, angles_dict, confidence_dict, docs):
# Combine angles and their confidence ratings
combined_angles = "\n".join(
[
f"Angle {i+1} (Confidence: {confidence_dict.get(angle, 'Unknown')}): {angle}"
for i, angle in enumerate(angles_dict.values())
]
)
# Template for generating a synthesized angle
template = """
Transcripts: {docs}
Review the following brainstormed angles along with their confidence ratings:
{combined_angles}
Identify the most insightful and relevant aspects from each brainstormed angle, while also considering their confidence ratings. Reinterpret, combine, or expand on these ideas to form a cohesive and improved approach for analyzing the transcripts. Please synthesize these angles into a new, comprehensive angle in the following format:
Angle: ...
A:
"""
prompt = PromptTemplate(
input_variables=["combined_angles", "docs"], template=template
)
chain = LLMChain(llm=llm, prompt=prompt, output_key="synthesized_angle")
result = chain.run(combined_angles=combined_angles, docs=docs, temperature=0)
if result:
synthesized_angle = result
else:
logging.warning("Result is empty. Using default synthesized angle instead.")
synthesized_angle = (
"A new angle could not be synthesized based on the provided input."
)
return synthesized_angle
def get_indepth_response_from_query(
df,
db,
query,
k,
max_iterations=1,
):
logger.info("Performing in-depth summary query...")
query_lower = query.lower()
if query_lower.startswith(
("list the votes for ordinance", "what were the votes for ordinance")
):
agent = create_agent(df)
responses_llm = agent.run(query)
return process_responses_llm(responses_llm)
else:
llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k")
iteration_counter_db = 0
confidence_rating_db = 0
threshold_db = 10
## Stage 1: Query refinement stage. Task: evaluate the relevance of docs returned from vector db with respect to query
best_query = refine_query(
db, llm, query, k, threshold_db, sort_retrived_documents
)
docs, docs_page_content, best_query_vector = run_vector_search(
db, best_query, k, sort_retrived_documents
)
# print(best_query_vector)
## Helper funcs
def execute_brainstorming_stage(docs):
template1 = """
Transcripts: {docs}
Question: {question}
To provide a detailed and accurate response based on the transcripts provided, brainstorm three distinct strategies that leverage the specific content and context of these documents. Focus on methodologies that utilize the information within the transcripts to clarify, explore, and elaborate on the query.
Please provide the output in the following format:
1. [Detailed strategy emphasizing analysis, interpretation, or cross-referencing within the transcripts themselves]
2. [Another strategy that relies on extracting and building upon specific details, examples, or discussions found in the transcripts]
3. [A third strategy that uses the thematic or contextual elements present in the transcripts to provide an in-depth understanding of the topic]
A:
"""
prompt1 = PromptTemplate(
input_variables=["question", "docs"], template=template1
)
chain1 = LLMChain(llm=llm, prompt=prompt1, output_key="angles")
responses_llm = chain1.run(question=best_query, docs=docs, temperature=1)
# print(f"Angle: {responses_llm}")
# print("Raw brainstorming response:", responses_llm)
parsed_angles = parse_angles(responses_llm)
# print("Parsed angles:", parsed_angles)
return parsed_angles
def evaluate_output(angles, docs, angle_confidence_dict):
template1_evaluation = """
Transcripts: {docs}
Based on the brainstormed angles: {angles}, how confident are you in the quality and relevance of these perspectives for the query: {question}?
Rate your confidence on a scale of 1 to 10. Only provide the number.
A:
"""
prompt1_evaluation = PromptTemplate(
input_variables=["question", "docs", "angles"],
template=template1_evaluation,
)
chain1_evaluation = LLMChain(
llm=llm, prompt=prompt1_evaluation, output_key="confidence_rating"
)
for angle, content in angles.items():
result = chain1_evaluation.run(
question=best_query, docs=docs, angles=content, temperature=0
)
print(f"Angle: {angle}, Content: {content}, Result: {result}")
if isinstance(result, (int, float)):
confidence_rating = result
elif isinstance(result, str):
try:
confidence_rating = int(result)
except ValueError:
confidence_rating = 0
elif isinstance(result, dict) and "confidence_rating" in result:
confidence_rating = result["confidence_rating"]
else:
confidence_rating = 0
angle_confidence_dict[content] = confidence_rating
# print(f"Content: {content}, Confidence: {confidence_rating}")
# print(f"DEBUG: angles before check = {angles}")
if not angle_confidence_dict or all(
v == 0 for v in angle_confidence_dict.values()
):
logging.warning(
"No angles were evaluated or all angles have zero confidence. Returning the first angle."
)
best_angle = list(angles.values())[0]  # fall back to the first angle's text, matching the non-fallback path below
return {"best_angle": best_angle, "confidence_rating": 0}
# Sorting the dictionary by values. In case of a tie, the first item with the maximum value will be chosen.
best_angle = max(angle_confidence_dict, key=angle_confidence_dict.get)
# print(f"Best Angle: {best_angle}")
return {
"best_angle": best_angle,
"confidence_rating": angle_confidence_dict[best_angle],
"angle_confidence_dict": angle_confidence_dict,
}
### Stage 2: Evaluate angles returned. Choose the best angle.
threshold_brainstorm = 10
iteration_counter_brainstorm = 0
confidence_rating_brainstorm = 0
### Iterate over the brainstorming function until an appropriate angle is found:
# Brainstorming includes: I have a query related to the New Orleans city council about {question}.
# Could you brainstorm three distinct angles or perspectives to approach this query
# Based on the brainstormed angles: {angles}, how confident are you in the quality and relevance of these perspectives for the query: {question}?
# Rate your confidence on a scale of 1 to 10.
angle_confidence_dict = {}
while (
confidence_rating_brainstorm < threshold_brainstorm
and iteration_counter_brainstorm < max_iterations
):
logging.info("Brainstorming function invoked.")
angles_dict = execute_brainstorming_stage(docs_page_content)
response = evaluate_output(angles_dict, docs, angle_confidence_dict)
confidence_rating_brainstorm = int(response.get("confidence_rating", 0))
angle_confidence_dict.update(
response.get("angle_confidence_dict", {})
) # Cumulatively updating the dictionary
iteration_counter_brainstorm += 1
logging.info(
f"Iteration: {iteration_counter_brainstorm}, Confidence Rating: {confidence_rating_brainstorm}"
)
if iteration_counter_brainstorm == max_iterations:
logging.warning(
f"Maximum number of iterations ({max_iterations}) reached without crossing the confidence threshold. Brainstorm func will no longer be re-run."
)
best_angle = max(angle_confidence_dict, key=angle_confidence_dict.get)
print(f"Best Angle: {best_angle}")
# Stage 2: Initial Analysis Stage
template2 = """
Using the selected approach: {angle}, and the documents: {docs} as references:
a. Extract the key points, decisions, and actions discussed during the city council meetings relevant to {question}.
b. Highlight any immediate shortcomings, mistakes, or negative actions by the city council relevant to {question}.
A:
"""
prompt2 = PromptTemplate(
input_variables=["question", "docs", "angle"], template=template2
)
chain2 = LLMChain(llm=llm, prompt=prompt2, output_key="evaluated_approaches")
# Stage 3: Deeper Analysis Stage
template3 = """
Transcripts: {docs}
Question: {question}
Building upon the initial analysis and based on the selected angle from {evaluated_approaches}, engage in a deeper examination:
a. Elaborate on the implications and broader societal or community impacts of the identified issues.
b. Investigate any underlying biases or assumptions present in the city council's discourse or actions relevant to {question}.
A:
"""
prompt3 = PromptTemplate(
input_variables=["question", "docs", "evaluated_approaches"],
template=template3,
)
chain3 = LLMChain(llm=llm, prompt=prompt3, output_key="deepen_thought_process")
# Stage 4: Synthesis
template4 = """
Transcripts: {docs}
With the output from your deeper analysis stage: {deepen_thought_process}, use the transcripts to synthesize your findings in the following manner:
a. Identify and draw connections between the discussed points, examining any patterns of behavior or recurrent issues relevant to {question}.
b. Offer a critical perspective on the city council's actions or decisions related to {question}, utilizing external knowledge if necessary. Highlight any inconsistencies or contradictions.
c. Summarize the critical insights derived from the analysis regarding {question}.
A:
"""
prompt4 = PromptTemplate(
input_variables=["question", "docs", "deepen_thought_process"],
template=template4,
)
chain4 = LLMChain(llm=llm, prompt=prompt4, output_key="ranked_insights")
# Connecting the chains
overall_chain = SequentialChain(
chains=[chain2, chain3, chain4],
input_variables=["question", "docs", "angle"],
output_variables=["ranked_insights"],
verbose=True,
)
responses_llm = overall_chain.run(
question=best_query, docs=docs_page_content, angle=best_angle, temperature=0
)
# print(best_angle)
# print(best_query)
return process_responses_llm(responses_llm, docs)
def get_general_summary_response_from_query(db, query, k):
logger.info("Performing general summary query...")
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613")
docs = db.similarity_search(query, k=k)
docs_page_content = " ".join([d.page_content for d in docs])
prompt = PromptTemplate(
input_variables=["question", "docs"],
template="""
As an AI assistant, your task is to provide a general response to the question "{question}", using the provided transcripts from New Orleans City Council meetings in "{docs}".
Guidelines for AI assistant:
- Derive responses from factual information found within the transcripts.
- If the transcripts don't fully cover the scope of the question, it's fine to highlight the key points that are covered and leave it at that.
""",
)
chain_llm = LLMChain(llm=llm, prompt=prompt)
responses_llm = chain_llm.run(question=query, docs=docs_page_content, temperature=0)
response = {"response": responses_llm}
card = {"card_type": RESPONSE_TYPE_GENERAL, "responses": [response]}
card_json = json.dumps(card)
return card_json
def route_question(df, db_general, db_in_depth, query, query_type, k=10):
if query_type == RESPONSE_TYPE_DEPTH:
return get_indepth_response_from_query(df, db_in_depth, query, k)
elif query_type == RESPONSE_TYPE_GENERAL:
return get_general_summary_response_from_query(db_general, query, k)
else:
raise ValueError(
f"Invalid query_type. Expected {RESPONSE_TYPE_DEPTH} or {RESPONSE_TYPE_GENERAL}, got: {query_type}"
)
def answer_query(
query: str, response_type: str, df: any, db_general: any, db_in_depth: any
) -> str:
final_response = route_question(df, db_general, db_in_depth, query, response_type)
return final_response
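# Illustrative call (editor's sketch; the question text is made up, and df / db_general / db_in_depth
# are assumed to be loaded by the caller):
#   card_json = answer_query("What did the council decide about short-term rentals?",
#                            RESPONSE_TYPE_DEPTH, df, db_general, db_in_depth)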
| [
"original_query",
"angles",
"\n Transcripts: {docs}\n Question: {question}\n\n Building upon the initial analysis and based on the selected angle from {evaluated_approaches}, engage in a deeper examination:\n a. Elaborate on the implications and broader societal or community impacts of the identified issues.\n b. Investigate any underlying biases or assumptions present in the city council's discourse or actions relevant to {question}.\n A:\n ",
"\n Using the selected approach: {angle}, and the documents: {docs} as references:\n a. Extract the key points, decisions, and actions discussed during the city council meetings relevant to {question}.\n b. Highlight any immediate shortcomings, mistakes, or negative actions by the city council relevant to {question}.\n A:\n ",
"question",
"\n Transcripts: {docs}\n With the output from your deeper analysis stage: {deepen_thought_process}, use the transcripts to synthesize your findings in the following manner:\n a. Identify and draw connections between the discussed points, examining any patterns of behavior or recurrent issues relevant to {question}.\n b. Offer a critical perspective on the city council's actions or decisions related to {question}, utilizing external knowledge if necessary. Highlight any inconsistencies or contradictions.\n c. Summarize the critical insights derived from the analysis regarding {question}.\n A:\n ",
"\n As an AI assistant, your task is to provide a general response to the question \"{question}\", using the provided transcripts from New Orleans City Council meetings in \"{docs}\".\n\n Guidelines for AI assistant: \n - Derive responses from factual information found within the transcripts. \n - If the transcripts don't fully cover the scope of the question, it's fine to highlight the key points that are covered and leave it at that. \n ",
"\n Transcripts: {docs}\n Review the following brainstormed angles along with their confidence ratings:\n {combined_angles}\n \n Identify the most insightful and relevant aspects from each brainstormed angle, while also considering their confidence ratings. Reinterpret, combine, or expand on these ideas to form a cohesive and improved approach for analyzing the transcripts. Please synthesize these angles into a new, comprehensive angle in the following format:\n \n Angle: ... \n \n A:\n ",
"angle",
"combined_angles",
"deepen_thought_process",
"\n Transcripts: {docs}\n Given the documents retrieved for the query: {query}, rate the relevance and quality of these documents for answering the query on a scale of 1 to 10. \n Please provide one score for all documents in the following format: confidence_score: score\n A:\n ",
"\n Transcripts: {docs}\n The original query: {original_query} did not yield satisfactory results with a confidence below {threshold}.\n Please provide a new and improved query that we can send to the faiss vector database for document retrival.\n The new query must aim to answer the {original_query}\n A:\n ",
"evaluated_approaches",
"\n Transcripts: {docs}\n Question: {question}\n\n To provide a detailed and accurate response based on the transcripts provided, brainstorm three distinct strategies that leverage the specific content and context of these documents. Focus on methodologies that utilize the information within the transcripts to clarify, explore, and elaborate on the query. \n\n Please provide the output in the following format:\n 1. [Detailed strategy emphasizing analysis, interpretation, or cross-referencing within the transcripts themselves]\n 2. [Another strategy that relies on extracting and building upon specific details, examples, or discussions found in the transcripts]\n 3. [A third strategy that uses the thematic or contextual elements present in the transcripts to provide an in-depth understanding of the topic]\n\n A:\n ",
"\n Transcripts: {docs}\n Based on the brainstormed angles: {angles}, how confident are you in the quality and relevance of these perspectives for the query: {question}? \n Rate your confidence on a scale of 1 to 10. Only provide the number.\n A:\n "
] |
2024-01-10 | chats-bug/chatbot-RAG | src~voice_chat.py | import whisper
import tempfile
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from chatbot import Chatbot
from utils.models_and_path import WHISPER_MODEL_NAME
class WhisperChatbot(Chatbot):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.whisper_model = whisper.load_model(WHISPER_MODEL_NAME)
self._load_translation_engine()
def response(self, audio):
self._clean_audio()
self._load_audio(audio)
self._process_audio()
en_result = super().response(self.text)
result_translated = self._translate_text(text=en_result, source="en", target=self.lang)
return self.transcribed_text, self.text, self.lang, en_result, result_translated
def _load_translation_engine(self):
self.translation_prompt = PromptTemplate(
input_variables=["source", "target", "text"],
template="Translate from language {source} to {target}: {text}?",
)
self.translation_chain = LLMChain(llm=self.LLM, prompt=self.translation_prompt)
def _load_audio(self, audio):
assert isinstance(audio, bytes), "Audio must be bytes"
assert self.whisper_model, "Whisper model not loaded"
# whisper.load_audio expects a file path (it decodes via ffmpeg), so write the raw bytes to a
# temporary file first, then load and pad/trim the waveform to whisper's 30-second window
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
tmp.write(audio)
tmp_path = tmp.name
self.audio = whisper.pad_or_trim(whisper.load_audio(tmp_path))
def _process_audio(self):
assert self.audio, "Audio not loaded"
assert self.whisper_model, "Whisper model not loaded"
# Make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(self.audio).to(self.whisper_model.device)
# Detect language from the spectrogram
_, probas = self.whisper_model.detect_language(mel)
self.lang = max(probas, key=probas.get)
# Decode the audio
options = whisper.DecodingOptions(fp16=False)
self.transcribed_text = whisper.decode(self.whisper_model, mel, options).text
# Check the language of the audio;
# if it's english, use the transcribed text as is
# else, translate it to english
if self.lang == "en":
self.text = self.transcribed_text
else:
# translate from detected lang to en
self.text = self._translate_text(self.transcribed_text, self.lang, "en")
def _translate_text(self, text, source, target):
return self.translation_chain({
"source": source,
"target": target,
"text": text
})["text"]  # LLMChain's default output key is "text"
def _clean_audio(self):
self.audio = None
self.lang = None
self.text = None
self.transcribed_text = None
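# Sketch of intended use (editor's illustration; `audio_bytes` is a placeholder for recorded audio,
# and any constructor arguments required by the base Chatbot live in chatbot.py, not shown here):
#   bot = WhisperChatbot()
#   transcribed, english_text, lang, answer_en, answer_translated = bot.response(audio_bytes)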
| [
"Translate from language {source} to {target}: {text}?"
] |
2024-01-10 | benlipkin/linc | runner.py | import os
import fnmatch
import json
import pathlib
from warnings import warn
import torch
import openai
import datasets
import transformers
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, AutoTokenizer, HfArgumentParser
from eval.args import RunnerArguments, HFArguments, OAIArguments, GenerationArguments
from eval.evaluator import HFEvaluator, OAIEvaluator
from eval.tasks import ALL_TASKS
transformers.logging.set_verbosity_error()
datasets.logging.set_verbosity_error()
def main():
args = HfArgumentParser(
[RunnerArguments, HFArguments, OAIArguments, GenerationArguments]
).parse_args()
args.output_dir = pathlib.Path(__file__).parent / args.output_dir
args.save_generations_raw_path = args.output_dir / args.save_generations_raw_path
args.save_generations_prc_path = args.output_dir / args.save_generations_prc_path
args.save_references_path = args.output_dir / args.save_references_path
args.save_results_path = args.output_dir / args.save_results_path
args.save_generations_raw_path.parent.mkdir(parents=True, exist_ok=True)
args.save_generations_prc_path.parent.mkdir(parents=True, exist_ok=True)
args.save_references_path.parent.mkdir(parents=True, exist_ok=True)
args.save_results_path.parent.mkdir(parents=True, exist_ok=True)
if args.tasks is None:
task_names = ALL_TASKS
else:
task_names = set()
for pattern in args.tasks.split(","):
for matching in fnmatch.filter(ALL_TASKS, pattern):
task_names.add(matching)
task_names = list(task_names)
accelerator = Accelerator()
if accelerator.is_main_process:
print(f"Selected Tasks: {task_names}")
results = {}
if args.generations_path:
if accelerator.is_main_process:
print("Evaluation only mode")
evaluator = HFEvaluator(accelerator, None, None, args)
for task in task_names:
results[task] = evaluator.evaluate(task)
else:
evaluator = None
if args.openai_api_env_keys:
env_key = args.openai_api_env_keys[0] # use any key to get list of models
openai.api_key = os.environ[env_key]
comp_models = {
"code-davinci-002",
"text-davinci-003",
"text-davinci-002",
"text-curie-001",
"text-babbage-001",
"text-ada-001",
}
chat_models = {
"gpt-4",
"gpt-4-0613",
"gpt-4-32k",
"gpt-4-32k-0613",
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-16k-0613",
}
if any(model == args.model for model in comp_models):
print(f"Using OpenAI Completion API for model {args.model}")
evaluator = OAIEvaluator(args)
elif any(model == args.model for model in chat_models):
print(f"Using OpenAI Chat API for model {args.model}")
evaluator = OAIEvaluator(args, chat=True)
else:
print(
f"Model {args.model} not found in OpenAI API. Assuming HuggingFace locally."
)
else:
warn(
"No OpenAI API key provided. Will attempt to use HuggingFace locally regardless of which model name was given."
)
if evaluator is None:
dict_precisions = {
"fp32": torch.float32,
"fp16": torch.float16,
"bf16": torch.bfloat16,
}
if args.precision not in dict_precisions:
raise ValueError(
f"Non valid precision {args.precision}, choose from: fp16, fp32, bf16"
)
print(f"Loading the model and tokenizer from HF (in {args.precision})")
model = AutoModelForCausalLM.from_pretrained(
args.model,
revision=args.revision,
torch_dtype=dict_precisions[args.precision],
trust_remote_code=args.trust_remote_code,
use_auth_token=args.use_auth_token,
)
tokenizer = AutoTokenizer.from_pretrained(
args.model,
revision=args.revision,
use_auth_token=args.use_auth_token,
truncation_side="left",
)
if not tokenizer.eos_token:
if tokenizer.bos_token:
tokenizer.eos_token = tokenizer.bos_token
print("bos_token used as eos_token")
else:
raise ValueError("No eos_token or bos_token found")
tokenizer.pad_token = tokenizer.eos_token
evaluator = HFEvaluator(accelerator, model, tokenizer, args)
for task in task_names:
if args.generation_only:
if accelerator.is_main_process:
print("Generation mode only")
generations_prc, generations_raw, references = evaluator.generate_text(
task
)
if accelerator.is_main_process:
if args.save_generations_raw:
with open(args.save_generations_raw_path, "w") as fp:
json.dump(generations_raw, fp)
print("raw generations were saved")
if args.save_generations_prc:
with open(args.save_generations_prc_path, "w") as fp:
json.dump(generations_prc, fp)
print("processed generations were saved")
if args.save_references:
with open(args.save_references_path, "w") as fp:
json.dump(references, fp)
print("references were saved")
else:
results[task] = evaluator.evaluate(task)
results["config"] = {"model": args.model}
if not args.generation_only:
dumped = json.dumps(results, indent=2, sort_keys=True)
if accelerator.is_main_process:
print(dumped)
if args.save_results:
with open(args.save_results_path, "w") as f:
f.write(dumped)
if __name__ == "__main__":
main()
| [] |
2024-01-10 | levgrav/text-rpg | src~text_rpg~controllers~gpttools~describe_gpt.py | import openai
import json
with open('files/game_data/settings.json') as f:
openai.api_key_path = json.load(f)['openai_api_key_path']
messages = [
{
"role": "system",
"content": """Your name is DescribeGPT. Your job is to write descriptions for a text-adventure game to enhance the user's experience. You will be given command and relevant information about the game. keep the responses to one short paragraph. Remember to be engaging and descriptive. respond with "ok" if you understand. When the user uses the "look_around" or "look_at" command, give this level of detail. In all other commands such as the "Move" command keep it to a few sentences.
Here are some examples of commands and their responses:
Command: look_around
Info: Found room: 'Village', ''
Village
name: Village
description: incomplete
inventory: ['sword']
containers: []
pos: [0, 0]
rooms: ['Blacksmith', "Jimmy's Hut", 'Apothecary', 'Market']
npcs: ["Jimmy's Dad", 'Jimmy', 'Blacksmith', 'Apothecary', 'Market Vendor']
You find yourself in the Village. The surroundings are filled with a sense of warmth and simplicity. A solitary sword catches your eye, leaning against a wall. The village seems to be a hub, with various rooms branching out, including the Blacksmith, Jimmy's Hut, the Apothecary, and the bustling Market. You spot several people, including Jimmy and his dad, the Blacksmith, the Apothecary, and a vendor at the Market. Excitement fills the air as you contemplate your next move."
Command: ('move', ['north'])
Info: Moved north from Village to Harbor
You decide to travel Northward, towards the Harbor. The Road that takes you there is wide and well used and you pass a number of people on the way there. As you approach you catch a glimpse of boats and docks and you smell the salty smell of the ocean.
Remember that these are just examples. The game starts after this message. Even if the information is the same, the way you describe it should be different. Do not use any information from the game that is not given to you. Do not use any information from the examples in your responses."""
},
]
def describe(command, info):
messages.append({"role": "user", "content": f"Command: {command}\nInfo: {info}"})
for i in range(3):
try:
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages
)
break
except Exception as e:
print("Error: ", e)
else:
print("Failed to get response from OpenAI API")
return "error", []
messages.append(response.choices[0]["message"])
return messages[-1]["content"]
return "You find yourself in the Village. The surroundings are filled with a sense of warmth and simplicity. A solitary sword catches your eye, leaning against a wall. The village seems to be a hub, with various rooms branching out, including the Blacksmith, Jimmy's Hut, the Apothecary, and the bustling Market. You spot several people, including Jimmy and his dad, the Blacksmith, the Apothecary, and a vendor at the Market. Excitement fills the air as you contemplate your next move."
| [
"Command: PLACEHOLDER\nInfo: PLACEHOLDER",
"Your name is DescribeGPT. Your job is to write descriptions for a text-adventure game to enhance the user's experience. You will be given command and relevant information about the game. keep the responses to one short paragraph. Remember to be engaging and descriptive. respond with \"ok\" if you understand. When the user uses the \"look_around\" or \"look_at\" command, give this level of detail. In all other commands such as the \"Move\" command keep it to a few sentences.\nHere are some examples of commands and their responses:\n\nCommand: look_around\nInfo: Found room: 'Village', ''\nVillage\n name: Village\n description: incomplete\n inventory: ['sword']\n containers: []\n pos: [0, 0]\n rooms: ['Blacksmith', \"Jimmy's Hut\", 'Apothecary', 'Market']\n npcs: [\"Jimmy's Dad\", 'Jimmy', 'Blacksmith', 'Apothecary', 'Market Vendor']\n\nYou find yourself in the Village. The surroundings are filled with a sense of warmth and simplicity. A solitary sword catches your eye, leaning against a wall. The village seems to be a hub, with various rooms branching out, including the Blacksmith, Jimmy's Hut, the Apothecary, and the bustling Market. You spot several people, including Jimmy and his dad, the Blacksmith, the Apothecary, and a vendor at the Market. Excitement fills the air as you contemplate your next move.\"\n\nCommand: ('move', ['north'])\nInfo: Moved north from Village to Harbor\n\nYou decide to travel Northward, towards the Harbor. The Road that takes you there is wide and well used and you pass a number of people on the way there. As you approach you catch a glimpse of boats and docks and you smell the salty smell of the ocean.\n\nRemember that these are just examples. The game starts after this message. Even if the information is the same, the way you describe it should be different. Do not use any information from the game that is not given to you. Do not use any information from the examples in your responses."
] |
2024-01-10 | RileyCornelius/together-api-gui | app~mistral.py | import os
from typing import Iterator
import openai
import dotenv
import termcolor
class Mistral:
def __init__(self, together_api_key: str = None, system_prompt: str = "", enable_print: bool = True):
self.system_prompt = system_prompt
self.enable_print = enable_print
self.max_tokens = 1024
self.temperature = 0.7
self.top_p = 0.7
self.model = "mistralai/Mixtral-8x7B-Instruct-v0.1"
if together_api_key is None:
dotenv.load_dotenv()
together_api_key = os.getenv("TOGETHER_API_KEY")
self._client = openai.OpenAI(api_key=together_api_key, base_url="https://api.together.xyz/v1")
self._history = []
def chat(self, prompt: str) -> str:
messages = self._build_prompt(prompt)
output = self._client.chat.completions.create(
messages=messages,
model=self.model,
max_tokens=self.max_tokens,
temperature=self.temperature,
top_p=self.top_p,
)
self._total_tokens = output.usage.total_tokens
response = output.choices[0].message.content
if self.enable_print:
print(termcolor.colored("User: ", "cyan") + prompt)
print(termcolor.colored("Assistant: ", "yellow") + response)
self._append_history(prompt, response)
return response
def chat_stream(self, prompt: str) -> Iterator[str]:
messages = self._build_prompt(prompt)
stream = self._client.chat.completions.create(
messages=messages,
model=self.model,
max_tokens=self.max_tokens,
temperature=self.temperature,
top_p=self.top_p,
stream=True,
)
if self.enable_print:
print()
print(termcolor.colored("User: ", "cyan") + prompt)
print(termcolor.colored("Assistant:", "yellow"), end="")
output = ""
for chunk in stream:
text = chunk.choices[0].delta.content
output += text or ""  # delta.content can be None on the final chunk
if self.enable_print:
print(text or "", end="", flush=True)
yield text
self._append_history(prompt, output)
def clear_history(self):
self._history = []
def _build_prompt(self, user_input: str) -> list:
messages = [{"role": "system", "content": self.system_prompt}]
for pair in self._history:
messages.append({"role": "user", "content": pair[0]})
messages.append({"role": "assistant", "content": pair[1]})
messages.append({"role": "user", "content": user_input})
return messages
def _append_history(self, user_input: str, model_output: str):
self._history.append([user_input, model_output])
if __name__ == "__main__":
mistral = Mistral(system_prompt="Always end your response with the word TERMINATE.")
mistral.chat("Hello, how are you?")
stream = mistral.chat_stream("Tell me more")
for chunk in stream:
pass
| [
"Hello, how are you?"
] |
2024-01-10 | RileyCornelius/together-api-gui | app~audio_streamer.py | import os
from queue import Queue
import shutil
import subprocess
import threading
from typing import Iterator, Literal
from dotenv import load_dotenv
import openai
import speech_recognition as sr
class AudioStreamer:
def __init__(self):
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
self.openai = openai.OpenAI(api_key=api_key)
self.microphone = sr.Microphone()
self.recognizer = sr.Recognizer()
self.is_streaming = False
self.audio = Queue()
self.text = Queue()
def start_streaming(self, stream=None):
self.is_streaming = True
self.audio = Queue()
self.text = Queue()
threading.Thread(target=self._tts_thread, daemon=True).start()
threading.Thread(target=self._audio_thread, daemon=True).start()
if stream:
for chunk in stream:
print(chunk, end="", flush=True)
self.text.put(chunk)
def stop_streaming(self):
self.is_streaming = False
def _tts_thread(self):
sentence = ""
while self.is_streaming:
chunk = self.text.get()
sentence += chunk
# TODO: add a better way to detect end of sentence
if chunk and chunk[-1] in ".!?":
audio_stream = self.text_to_speech_streaming(sentence)
self.audio.put(audio_stream)
sentence = ""
def _audio_thread(self):
while self.is_streaming:
self.audio_streaming(self._stream_audio_generator())
def _stream_audio_generator(self) -> Iterator[bytes]:
while self.is_streaming:
sentence_audio = self.audio.get()
for bytes in sentence_audio.iter_bytes():
yield bytes
def text_to_speech_streaming(
self,
text: str,
voice: Literal["alloy", "echo", "fable", "onyx", "nova", "shimmer"] = "echo",
model: Literal["tts-1", "tts-1-hd"] = "tts-1",
speed: float = 1.0,
):
stream = self.openai.audio.speech.create(
input=text,
model=model,
voice=voice,
response_format="mp3",
speed=speed,
stream=True,
)
return stream
def speech_to_text_whisper(self, audio_file: str):
try:
audio_file = open(audio_file, "rb")
text = self.openai.audio.transcriptions.create(file=audio_file, model="whisper-1", response_format="text")
return text
except Exception as error:
print(f"Speech to text error: {error}")
return ""
def audio_streaming(self, audio_stream: Iterator[bytes]) -> bytes:
if shutil.which("mpv") is None:
message = (
"mpv not found, necessary to stream audio. "
"On mac you can install it with 'brew install mpv'. "
"On linux and windows you can install it from https://mpv.io/"
)
raise ValueError(message)
mpv_command = ["mpv", "--no-cache", "--no-terminal", "--", "fd://0"]
mpv_process = subprocess.Popen(
mpv_command,
stdin=subprocess.PIPE,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
audio = bytes()
for chunk in audio_stream:
if not self.is_streaming:
mpv_process.terminate()
break
if chunk is not None:
mpv_process.stdin.write(chunk)
mpv_process.stdin.flush()
audio += chunk
if mpv_process.stdin:
mpv_process.stdin.close()
mpv_process.wait()
self.stop_streaming()
return audio
def listening(self):
try:
with sr.Microphone() as microphone:
audio = sr.Recognizer().listen(microphone)
audio_path = self._save_audio(audio.get_wav_data(), "cache")
return audio_path
except sr.UnknownValueError:
print("Error: Could not understand audio")
return ""
def _save_audio(self, data: bytes, file_name: str):
AUDIO_SAVED_DIRECTORY = "audio/"
file_name = f"{file_name}.wav"
os.makedirs(AUDIO_SAVED_DIRECTORY, exist_ok=True)
path = os.path.join(AUDIO_SAVED_DIRECTORY, file_name)
with open(path, "wb") as f:
f.write(data)
return path
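# Sketch of a typical round trip (editor's illustration, not part of the original file):
#   streamer = AudioStreamer()
#   wav_path = streamer.listening()                     # record from the microphone, saved under audio/
#   prompt = streamer.speech_to_text_whisper(wav_path)  # transcribe with whisper-1
#   streamer.start_streaming(llm_text_chunks)           # llm_text_chunks: any iterator of text chunks (assumed)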
| [] |
2024-01-10 | sarfrazkhan18/ecoute | GPTResponder.py | import openai
from keys import OPENAI_API_KEY
from prompts import create_prompt, INITIAL_RESPONSE
import time
openai.api_key = OPENAI_API_KEY
def generate_response_from_transcript(transcript):
try:
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-0301",
messages=[{"role": "system", "content": create_prompt(transcript)}],
temperature = 0.0
)
except openai.OpenAIError:
print("An error occurred while calling OpenAI ChatCompletion.")
return ''
full_response = response.choices[0].message.content
try:
return full_response.split('[')[1].split(']')[0]
except:
return ''
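# Editor's note (not in the original file): the parsing above expects the model's answer wrapped in
# square brackets, presumably as instructed by create_prompt() (defined elsewhere). For example:
#   "Here is a suggestion: [Ask about their timeline]." -> "Ask about their timeline"
#   a completion with no brackets -> '' (the bare except swallows the IndexError)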
class GPTResponder:
def __init__(self):
self.response = INITIAL_RESPONSE
self.response_interval = 2
def respond_to_transcriber(self, transcriber):
while True:
if transcriber.transcript_changed_event.is_set():
start_time = time.time()
transcriber.transcript_changed_event.clear()
transcript_string = transcriber.get_transcript()
response = generate_response_from_transcript(transcript_string)
end_time = time.time() # Measure end time
execution_time = end_time - start_time # Calculate the time it took to execute the function
if response != '':
self.response = response
remaining_time = self.response_interval - execution_time
if remaining_time > 0:
time.sleep(remaining_time)
else:
time.sleep(0.3)
def update_response_interval(self, interval):
self.response_interval = interval | [] |
2024-01-10 | sr33j/notion_assistant | advanced_query.py | import pandas as pd
from openai import OpenAI
from embed import get_embedding
import numpy as np
client = OpenAI()
def generate_response(prompt):
system_prompt = open("prompts/deduction_prompt.txt", "r").read()
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt}
]
)
return completion.choices[0].message.content
def cosine_similarity(a, b):
return np.dot(a, b)/(np.linalg.norm(a)*np.linalg.norm(b))
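# Quick sanity check (editor's illustration):
#   cosine_similarity([1, 0], [0, 1]) -> 0.0
#   cosine_similarity([2, 0], [1, 0]) -> 1.0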
def get_question_breakdown(query):
## get prompt
prompt = f"""
For the question delimited by triple backticks, Can you give me exactly three prerequisite sub-questions that will help me answer the question? Please format your answer as a dash, subquestion, and newline.
Example:
```What job am I best suited for?```
- What am I particularly skilled at?
- What am I intellectually curious about?
- Does this provid value to the world?
Actual:
"""
prompt += f"```\n{query}\n```"
system_prompt = open("prompts/breakdown_prompt.txt", "r").read()
## generate response
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt}
])
print("--- Breaking down " + query + " ---")
# print(prompt)
## clean response
all_questions_string = completion.choices[0].message.content
all_questions = all_questions_string.split("\n")
subquestions = []
for question in all_questions:
if question.startswith("-"):
subquestions.append(question[1:])
print(subquestions[-1])
print("------------------------")
return subquestions
def get_docs_related_to_query(query, df, num_docs=4, cosine_threshold=0.1):
query_embedding = get_embedding(query)
## get the embeddings from the csv file and calculate the cosine similarity
df['cosine_similarity'] = df['Embedding'].apply(lambda x: cosine_similarity(x, query_embedding))
## sort the dataframe by cosine similarity
df = df.sort_values(by=['cosine_similarity'], ascending=False)
top_docs = df.head(num_docs)
top_docs = top_docs[top_docs['cosine_similarity'] > cosine_threshold]
## get the prompt from the top docs
return top_docs['Page Text'].tolist()
def get_prompt_from_docs(query, docs):
prompt = "Keep your answer to less than 50 words."
for doc in docs:
prompt += "```\n"
prompt += doc
prompt += "\n```\n"
prompt += "Question: " + query + "\nAnswer:"
return prompt
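# Prompt assembled above (editor's note): "Keep your answer to less than 50 words." is followed
# immediately by each document wrapped in ```...``` fences, then "Question: <query>\nAnswer:".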
def use_reference_to_answer_subquestion(subquestion, docs_df):
docs = get_docs_related_to_query(subquestion, docs_df)
prompt = get_prompt_from_docs(subquestion, docs)
## generate response
simple_prompt = open("prompts/simple_prompt.txt", "r").read()
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": simple_prompt},
{"role": "user", "content": prompt}
])
print("--- Answering " + subquestion + " ---")
answer = completion.choices[0].message.content
print(answer)
print("------------------------")
return answer
def get_prompt_from_subanswers(query, subquestions, subanswers):
prompt = ""
for subquestion, subanswer in zip(subquestions, subanswers):
prompt += "```\n"
prompt += "Question: " + subquestion + "\n"
prompt += "Answer: " + subanswer + "\n"
prompt += "```\n"
prompt += "Based on the answers to these questions, what is the answer to " + query + "?"
prompt += " Keep your answer to less than 50 words. Please be specific and concrete in your answer."
return prompt
def generate_response_from_subanswers(query, subquestions, subanswers):
## get prompt
prompt = get_prompt_from_subanswers(query, subquestions, subanswers)
## generate response
system_prompt = open("prompts/synthesis_prompt.txt", "r").read()
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt}
])
print("--- SYNTHESIS ---")
answer = completion.choices[0].message.content
print(answer)
print("------------------------")
return answer
def main():
## read in the csv file
df = pd.read_csv('notion_embeddings.csv')
df['Embedding'] = df['Embedding'].apply(lambda x: np.fromstring(x, sep=','))
## get input from the user for a query
while True:
query = input("Ask a question about yourself: ")
print("===== BREAKING DOWN ORIGINAL QUESTION =====")
subquestions = get_question_breakdown(query)
print("===== ANSWERING SUBQUESTIONS WITH REFERENCES =====")
subanswers = []
for subquestion in subquestions:
subanswer = use_reference_to_answer_subquestion(subquestion, df)
subanswers.append(subanswer)
print("===== ANSWERING ORIGINAL QUESTION BASED ON DEDUCTIONS =====")
answer = generate_response_from_subanswers(query, subquestions, subanswers)
# print(answer)
if __name__ == "__main__":
main() | [
"prompts/synthesis_prompt.txt",
"prompts/breakdown_prompt.txt",
"prompts/simple_prompt.txt",
"Question: PLACEHOLDER\nAnswer:",
" Keep your answer to less than 50 words. Please be specific and concrete in your answer.",
"Question: PLACEHOLDER\n",
"Based on the answers to these questions, what is the answer to PLACEHOLDER?",
"Keep your answer to less than 50 words.",
"\n```\n",
"prompts/deduction_prompt.txt",
"\n For the question delimited by triple backticks, Can you give me exactly three prerequisite sub-questions that will help me answer the question? Please format your answer as a dash, subquestion, and newline.\n Example:\n ```What job am I best suited for?```\n - What am I particularly skilled at?\n - What am I intellectually curious about?\n - Does this provid value to the world? \n\n Actual:\n ",
"```\n",
"```\nPLACEHOLDER\n```",
"Answer: PLACEHOLDER\n"
] |
2024-01-10 | haosulab/RPG | rpg~rl~vec_envs~venvs.py | import gym
import numpy as np
from typing import Any, List, Tuple, Union, Optional, Callable
from tools.utils import RunningMeanStd  # needed for the default obs_rms in __init__ (import mirrors rl/vec_envs/venvs.py)
from .worker import EnvWorker, DummyEnvWorker, SubprocEnvWorker
class BaseVectorEnv(gym.Env):
"""Base class for vectorized environments wrapper.
Usage:
::
env_num = 8
envs = DummyVectorEnv([lambda: gym.make(task) for _ in range(env_num)])
assert len(envs) == env_num
It accepts a list of environment generators. In other words, an environment
generator ``efn`` of a specific task means that ``efn()`` returns the
environment of the given task, for example, ``gym.make(task)``.
All of the VectorEnv must inherit :class:`~tianshou.env.BaseVectorEnv`.
Here are some other usages:
::
envs.seed(2) # which is equal to the next line
envs.seed([2, 3, 4, 5, 6, 7, 8, 9]) # set specific seed for each env
obs = envs.reset() # reset all environments
obs = envs.reset([0, 5, 7]) # reset 3 specific environments
obs, rew, done, info = envs.step([1] * 8) # step synchronously
envs.render() # render all environments
envs.close() # close all environments
.. warning::
If you use your own environment, please make sure the ``seed`` method
is set up properly, e.g.,
::
def seed(self, seed):
np.random.seed(seed)
Otherwise, the outputs of these envs may be the same with each other.
:param env_fns: a list of callable envs, ``env_fns[i]()`` generates the ith env.
:param worker_fn: a callable worker, ``worker_fn(env_fns[i])`` generates a
worker which contains the i-th env.
:param int wait_num: use in asynchronous simulation if the time cost of
``env.step`` varies with time and synchronously waiting for all
environments to finish a step is time-wasting. In that case, we can
return when ``wait_num`` environments finish a step and keep on
simulation in these environments. If ``None``, asynchronous simulation
is disabled; else, ``1 <= wait_num <= env_num``.
:param float timeout: use in asynchronous simulation same as above, in each
vectorized step it only deals with those environments spending time
within ``timeout`` seconds.
:param bool norm_obs: Whether to track mean/std of data and normalise observation
on return. For now, observation normalization only support observation of
type np.ndarray.
:param obs_rms: class to track mean&std of observation. If not given, it will
initialize a new one. Usually in envs that is used to evaluate algorithm,
obs_rms should be passed in. Default to None.
:param bool update_obs_rms: Whether to update obs_rms. Default to True.
"""
def __init__(
self,
env_fns: List[Callable[[], gym.Env]],
worker_fn: Callable[[Callable[[], gym.Env]], EnvWorker],
wait_num: Optional[int] = None,
timeout: Optional[float] = None,
norm_obs: bool = False,
obs_rms = None,
update_obs_rms: bool = True,
):
self._env_fns = env_fns
# A VectorEnv contains a pool of EnvWorkers, which corresponds to
# interact with the given envs (one worker <-> one env).
self.workers = [worker_fn(fn) for fn in env_fns]
self.worker_class = type(self.workers[0])
assert issubclass(self.worker_class, EnvWorker)
assert all([isinstance(w, self.worker_class) for w in self.workers])
self.env_num = len(env_fns)
self.wait_num = wait_num or len(env_fns)
assert 1 <= self.wait_num <= len(env_fns), \
f"wait_num should be in [1, {len(env_fns)}], but got {wait_num}"
self.timeout = timeout
assert self.timeout is None or self.timeout > 0, \
f"timeout is {timeout}, it should be positive if provided!"
self.is_async = self.wait_num != len(env_fns) or timeout is not None
self.waiting_conn: List[EnvWorker] = []
# environments in self.ready_id is actually ready
# but environments in self.waiting_id are just waiting when checked,
# and they may be ready now, but this is not known until we check it
# in the step() function
self.waiting_id: List[int] = []
# all environments are ready in the beginning
self.ready_id = list(range(self.env_num))
self.is_closed = False
# initialize observation running mean/std
self.norm_obs = norm_obs
self.update_obs_rms = update_obs_rms
self.obs_rms = RunningMeanStd() if obs_rms is None and norm_obs else obs_rms
self.__eps = np.finfo(np.float32).eps.item()
def _assert_is_not_closed(self):
assert not self.is_closed, \
f"Methods of {self.__class__.__name__} cannot be called after close."
def __len__(self):
"""Return len(self), which is the number of environments."""
return self.env_num
def __getattribute__(self, key: str):
"""Switch the attribute getter depending on the key.
Any class who inherits ``gym.Env`` will inherit some attributes, like
``action_space``. However, we would like the attribute lookup to go straight
into the worker (in fact, this vector env's action_space is always None).
"""
if key in ['metadata', 'reward_range', 'spec', 'action_space',
'observation_space']: # reserved keys in gym.Env
return self.__getattr__(key)
else:
return super().__getattribute__(key)
def __getattr__(self, key: str):
"""Fetch a list of env attributes.
This function tries to retrieve an attribute from each individual wrapped
environment, if it does not belong to the wrapping vector environment class.
"""
return [getattr(worker, key) for worker in self.workers]
def _wrap_id(
self, id: Optional[Union[int, List[int], np.ndarray]] = None
) -> Union[List[int], np.ndarray]:
if id is None:
return list(range(self.env_num))
return [id] if np.isscalar(id) else id # type: ignore
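# Editor's illustration (not in the original source): with env_num == 4,
#   _wrap_id(None)   -> [0, 1, 2, 3]
#   _wrap_id(2)      -> [2]
#   _wrap_id([0, 3]) -> [0, 3]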
def _assert_id(self, id: Union[List[int], np.ndarray]) -> None:
for i in id:
assert i not in self.waiting_id, \
f"Cannot interact with environment {i} which is stepping now."
assert i in self.ready_id, \
f"Can only interact with ready environments {self.ready_id}."
def reset(
self, id: Optional[Union[int, List[int], np.ndarray]] = None,
**kwargs
) -> np.ndarray:
"""Reset the state of some envs and return initial observations.
If id is None, reset the state of all the environments and return
initial observations, otherwise reset the specific environments with
the given id, either an int or a list.
"""
self._assert_is_not_closed()
id = self._wrap_id(id)
if self.is_async:
self._assert_id(id)
obs_list = [self.workers[i].reset(**kwargs) for i in id]
try:
obs = np.stack(obs_list)
except ValueError: # different len(obs)
obs = np.array(obs_list, dtype=object)
if self.obs_rms and self.update_obs_rms:
self.obs_rms.update(obs)
return self.normalize_obs(obs)
def zero_grad(
self, id: Optional[Union[int, List[int], np.ndarray]] = None,
):
id = self._wrap_id(id)
assert not self.is_async
for i in id:
self.workers[i].zero_grad()
def get_action_grads(
self, id: Optional[Union[int, List[int], np.ndarray]] = None,
):
id = self._wrap_id(id)
assert not self.is_async
return [self.workers[i].get_action_grads() for i in id]
def backward(self, id, obs_grads, reward_grads):
assert isinstance(id, int)
self.workers[id].backward(obs_grads, reward_grads)
def step(
self,
action: np.ndarray,
id: Optional[Union[int, List[int], np.ndarray]] = None
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
"""Run one timestep of some environments' dynamics.
If id is None, run one timestep of all the environments’ dynamics;
otherwise run one timestep for some environments with given id, either
an int or a list. When the end of episode is reached, you are
responsible for calling reset(id) to reset this environment’s state.
Accept a batch of action and return a tuple (batch_obs, batch_rew,
batch_done, batch_info) in numpy format.
:param numpy.ndarray action: a batch of action provided by the agent.
:return: A tuple including four items:
* ``obs`` a numpy.ndarray, the agent's observation of current environments
* ``rew`` a numpy.ndarray, the amount of rewards returned after \
previous actions
* ``done`` a numpy.ndarray, whether these episodes have ended, in \
which case further step() calls will return undefined results
* ``info`` a numpy.ndarray, contains auxiliary diagnostic \
information (helpful for debugging, and sometimes learning)
For the async simulation:
Provide the given action to the environments. The action sequence
should correspond to the ``id`` argument, and the ``id`` argument
should be a subset of the ``env_id`` in the last returned ``info``
(initially they are env_ids of all the environments). If action is
None, fetch unfinished step() calls instead.
"""
self._assert_is_not_closed()
id = self._wrap_id(id)
if not self.is_async:
assert len(action) == len(id)
for i, j in enumerate(id):
self.workers[j].send_action(action[i])
result = []
for j in id:
obs, rew, done, info = self.workers[j].get_result()
info["env_id"] = j
result.append((obs, rew, done, info))
else:
if action is not None:
self._assert_id(id)
assert len(action) == len(id)
for i, (act, env_id) in enumerate(zip(action, id)):
self.workers[env_id].send_action(act)
self.waiting_conn.append(self.workers[env_id])
self.waiting_id.append(env_id)
self.ready_id = [x for x in self.ready_id if x not in id]
ready_conns: List[EnvWorker] = []
while not ready_conns:
ready_conns = self.worker_class.wait(
self.waiting_conn, self.wait_num, self.timeout)
result = []
for conn in ready_conns:
waiting_index = self.waiting_conn.index(conn)
self.waiting_conn.pop(waiting_index)
env_id = self.waiting_id.pop(waiting_index)
obs, rew, done, info = conn.get_result()
info["env_id"] = env_id
result.append((obs, rew, done, info))
self.ready_id.append(env_id)
obs_list, rew_list, done_list, info_list = zip(*result)
try:
obs_stack = np.stack(obs_list)
except ValueError: # different len(obs)
obs_stack = np.array(obs_list, dtype=object)
rew_stack, done_stack, info_stack = map(
np.stack, [rew_list, done_list, info_list])
if self.obs_rms and self.update_obs_rms:
self.obs_rms.update(obs_stack)
return self.normalize_obs(obs_stack), rew_stack, done_stack, info_stack
def seed(
self, seed: Optional[Union[int, List[int]]] = None
) -> List[Optional[List[int]]]:
"""Set the seed for all environments.
Accept ``None``, an int (which will extend ``i`` to
``[i, i + 1, i + 2, ...]``) or a list.
:return: The list of seeds used in this env's random number generators.
The first value in the list should be the "main" seed, or the value
which a reproducer pass to "seed".
"""
self._assert_is_not_closed()
seed_list: Union[List[None], List[int]]
if seed is None:
seed_list = [seed] * self.env_num
elif isinstance(seed, int):
seed_list = [seed + i for i in range(self.env_num)]
else:
seed_list = seed
return [w.seed(s) for w, s in zip(self.workers, seed_list)]
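# Editor's illustration (not in the original source): with 4 workers,
#   envs.seed(5) seeds them with [5, 6, 7, 8]; envs.seed([2, 3, 4, 5]) sets each seed explicitly;
#   envs.seed(None) passes None through to every worker.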
def render(self, id=None, **kwargs: Any) -> List[Any]:
"""Render all of the environments."""
id = self._wrap_id(id)
if self.is_async:
self._assert_id(id)
self._assert_is_not_closed()
if self.is_async and len(self.waiting_id) > 0:
raise RuntimeError(
f"Environments {self.waiting_id} are still stepping, cannot "
"render them now.")
return [self.workers[i].render(**kwargs) for i in id]
def _render_traj_rgb(self, id=None, **kwargs: Any) -> List[Any]:
"""Render all of the environments."""
id = self._wrap_id(id)
if self.is_async:
self._assert_id(id)
self._assert_is_not_closed()
if self.is_async and len(self.waiting_id) > 0:
raise RuntimeError(
f"Environments {self.waiting_id} are still stepping, cannot "
"render them now.")
return [self.workers[i].render_traj_rgb(**kwargs) for i in id]
def close(self) -> None:
"""Close all of the environments.
This function will be called only once (if not, it will be called during
garbage collected). This way, ``close`` of all workers can be assured.
"""
self._assert_is_not_closed()
for w in self.workers:
w.close()
self.is_closed = True
def normalize_obs(self, obs: np.ndarray) -> np.ndarray:
"""Normalize observations by statistics in obs_rms."""
if self.obs_rms and self.norm_obs:
clip_max = 10.0 # this magic number is from openai baselines
# see baselines/common/vec_env/vec_normalize.py#L10
obs = (obs - self.obs_rms.mean) / np.sqrt(self.obs_rms.var + self.__eps)
obs = np.clip(obs, -clip_max, clip_max)
return obs
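# Illustrative sketch (not part of the original module): sharing a single
# ``obs_rms`` between training and evaluation envs so that both normalize
# observations with the same running statistics, which is how the ``obs_rms``
# argument is meant to be used. ``make_task_env`` is a hypothetical env
# factory and only an assumption for this sketch.
def _example_shared_obs_rms(make_task_env, env_num: int = 4):
    train_envs = DummyVectorEnv(
        [make_task_env for _ in range(env_num)], norm_obs=True
    )
    # Reuse the statistics tracked by the training envs, but freeze them for eval.
    eval_envs = DummyVectorEnv(
        [make_task_env for _ in range(env_num)],
        norm_obs=True,
        obs_rms=train_envs.obs_rms,
        update_obs_rms=False,
    )
    return train_envs, eval_envs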
class DummyVectorEnv(BaseVectorEnv):
"""Dummy vectorized environment wrapper, implemented in for-loop.
.. seealso::
Please refer to :class:`~tianshou.env.BaseVectorEnv` for other APIs' usage.
"""
def __init__(self, env_fns: List[Callable[[], gym.Env]], **kwargs: Any) -> None:
super().__init__(env_fns, DummyEnvWorker, **kwargs)
class SubprocVectorEnv(BaseVectorEnv):
"""Vectorized environment wrapper based on subprocess.
.. seealso::
Please refer to :class:`~tianshou.env.BaseVectorEnv` for other APIs' usage.
"""
def __init__(self, env_fns: List[Callable[[], gym.Env]], **kwargs: Any) -> None:
def worker_fn(fn: Callable[[], gym.Env]) -> SubprocEnvWorker:
return SubprocEnvWorker(fn, share_memory=False)
super().__init__(env_fns, worker_fn, **kwargs)
class ShmemVectorEnv(BaseVectorEnv):
"""Optimized SubprocVectorEnv with shared buffers to exchange observations.
ShmemVectorEnv has exactly the same API as SubprocVectorEnv.
.. seealso::
Please refer to :class:`~tianshou.env.BaseVectorEnv` for other APIs' usage.
"""
def __init__(self, env_fns: List[Callable[[], gym.Env]], **kwargs: Any) -> None:
def worker_fn(fn: Callable[[], gym.Env]) -> SubprocEnvWorker:
return SubprocEnvWorker(fn, share_memory=True)
super().__init__(env_fns, worker_fn, **kwargs) | [] |
2024-01-10 | haosulab/RPG | rl~vec_envs~venvs.py | import gym
import numpy as np
from typing import Any, List, Tuple, Union, Optional, Callable
from tools.utils import RunningMeanStd
from .worker import EnvWorker, DummyEnvWorker, SubprocEnvWorker
class BaseVectorEnv(gym.Env):
"""Base class for vectorized environments wrapper.
Usage:
::
env_num = 8
envs = DummyVectorEnv([lambda: gym.make(task) for _ in range(env_num)])
assert len(envs) == env_num
It accepts a list of environment generators. In other words, an environment
generator ``efn`` of a specific task means that ``efn()`` returns the
environment of the given task, for example, ``gym.make(task)``.
All of the VectorEnv must inherit :class:`~tianshou.env.BaseVectorEnv`.
Here are some other usages:
::
envs.seed(2) # which is equal to the next line
envs.seed([2, 3, 4, 5, 6, 7, 8, 9]) # set specific seed for each env
obs = envs.reset() # reset all environments
obs = envs.reset([0, 5, 7]) # reset 3 specific environments
obs, rew, done, info = envs.step([1] * 8) # step synchronously
envs.render() # render all environments
envs.close() # close all environments
.. warning::
If you use your own environment, please make sure the ``seed`` method
is set up properly, e.g.,
::
def seed(self, seed):
np.random.seed(seed)
Otherwise, the outputs of these envs may be the same with each other.
:param env_fns: a list of callable envs, ``env_fns[i]()`` generates the ith env.
:param worker_fn: a callable worker, ``worker_fn(env_fns[i])`` generates a
worker which contains the i-th env.
    :param int wait_num: used in asynchronous simulation when the time cost of
        ``env.step`` varies across environments and synchronously waiting for all
        environments to finish a step would waste time. In that case, we can
        return as soon as ``wait_num`` environments finish a step and keep
        simulating the remaining environments. If ``None``, asynchronous
        simulation is disabled; otherwise ``1 <= wait_num <= env_num``.
    :param float timeout: used in asynchronous simulation in the same way as
        above; each vectorized step only deals with the environments that finish
        within ``timeout`` seconds.
    :param bool norm_obs: Whether to track the mean/std of the data and normalize
        observations on return. For now, observation normalization only supports
        observations of type np.ndarray.
    :param obs_rms: instance used to track the mean and std of observations. If not
        given, a new one will be initialized. For envs that are used to evaluate an
        algorithm, obs_rms should usually be passed in. Default to None.
:param bool update_obs_rms: Whether to update obs_rms. Default to True.
"""
def __init__(
self,
env_fns: List[Callable[[], gym.Env]],
worker_fn: Callable[[Callable[[], gym.Env]], EnvWorker],
wait_num: Optional[int] = None,
timeout: Optional[float] = None,
norm_obs: bool = False,
obs_rms: Optional[RunningMeanStd] = None,
update_obs_rms: bool = True,
):
self._env_fns = env_fns
        # A VectorEnv contains a pool of EnvWorkers, each of which interacts
        # with one of the given envs (one worker <-> one env).
self.workers = [worker_fn(fn) for fn in env_fns]
self.worker_class = type(self.workers[0])
assert issubclass(self.worker_class, EnvWorker)
assert all([isinstance(w, self.worker_class) for w in self.workers])
self.env_num = len(env_fns)
self.wait_num = wait_num or len(env_fns)
assert 1 <= self.wait_num <= len(env_fns), \
f"wait_num should be in [1, {len(env_fns)}], but got {wait_num}"
self.timeout = timeout
assert self.timeout is None or self.timeout > 0, \
f"timeout is {timeout}, it should be positive if provided!"
self.is_async = self.wait_num != len(env_fns) or timeout is not None
self.waiting_conn: List[EnvWorker] = []
        # environments in self.ready_id are actually ready,
        # but environments in self.waiting_id were still waiting when last checked;
        # they may be ready by now, but we will not know until we check again
        # in the step() function
self.waiting_id: List[int] = []
# all environments are ready in the beginning
self.ready_id = list(range(self.env_num))
self.is_closed = False
# initialize observation running mean/std
self.norm_obs = norm_obs
self.update_obs_rms = update_obs_rms
self.obs_rms = RunningMeanStd() if obs_rms is None and norm_obs else obs_rms
self.__eps = np.finfo(np.float32).eps.item()
def _assert_is_not_closed(self):
assert not self.is_closed, \
f"Methods of {self.__class__.__name__} cannot be called after close."
def __len__(self):
"""Return len(self), which is the number of environments."""
return self.env_num
def __getattribute__(self, key: str):
"""Switch the attribute getter depending on the key.
        Any class that inherits ``gym.Env`` will inherit some attributes, like
        ``action_space``. However, we would like the attribute lookup to go straight
        to the worker (in fact, this vector env's action_space is always None).
"""
if key in ['metadata', 'reward_range', 'spec', 'action_space',
'observation_space']: # reserved keys in gym.Env
return self.__getattr__(key)
else:
return super().__getattribute__(key)
def __getattr__(self, key: str):
"""Fetch a list of env attributes.
This function tries to retrieve an attribute from each individual wrapped
environment, if it does not belong to the wrapping vector environment class.
"""
return [getattr(worker, key) for worker in self.workers]
def _wrap_id(
self, id: Optional[Union[int, List[int], np.ndarray]] = None
) -> Union[List[int], np.ndarray]:
if id is None:
return list(range(self.env_num))
return [id] if np.isscalar(id) else id # type: ignore
def _assert_id(self, id: Union[List[int], np.ndarray]) -> None:
for i in id:
assert i not in self.waiting_id, \
f"Cannot interact with environment {i} which is stepping now."
assert i in self.ready_id, \
f"Can only interact with ready environments {self.ready_id}."
def reset(
self, id: Optional[Union[int, List[int], np.ndarray]] = None,
**kwargs
) -> np.ndarray:
"""Reset the state of some envs and return initial observations.
If id is None, reset the state of all the environments and return
initial observations, otherwise reset the specific environments with
the given id, either an int or a list.
"""
self._assert_is_not_closed()
id = self._wrap_id(id)
if self.is_async:
self._assert_id(id)
obs_list = [self.workers[i].reset(**kwargs) for i in id]
try:
obs = np.stack(obs_list)
except ValueError: # different len(obs)
obs = np.array(obs_list, dtype=object)
if self.obs_rms and self.update_obs_rms:
self.obs_rms.update(obs)
return self.normalize_obs(obs)
def zero_grad(
self, id: Optional[Union[int, List[int], np.ndarray]] = None,
):
id = self._wrap_id(id)
assert not self.is_async
for i in id:
self.workers[i].zero_grad()
def get_action_grads(
self, id: Optional[Union[int, List[int], np.ndarray]] = None,
):
id = self._wrap_id(id)
assert not self.is_async
return [self.workers[i].get_action_grads() for i in id]
def backward(self, id, obs_grads, reward_grads):
assert isinstance(id, int)
self.workers[id].backward(obs_grads, reward_grads)
def step(
self,
action: np.ndarray,
id: Optional[Union[int, List[int], np.ndarray]] = None
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
"""Run one timestep of some environments' dynamics.
If id is None, run one timestep of all the environments’ dynamics;
otherwise run one timestep for some environments with given id, either
an int or a list. When the end of episode is reached, you are
responsible for calling reset(id) to reset this environment’s state.
        Accept a batch of actions and return a tuple (batch_obs, batch_rew,
batch_done, batch_info) in numpy format.
:param numpy.ndarray action: a batch of action provided by the agent.
:return: A tuple including four items:
* ``obs`` a numpy.ndarray, the agent's observation of current environments
* ``rew`` a numpy.ndarray, the amount of rewards returned after \
previous actions
* ``done`` a numpy.ndarray, whether these episodes have ended, in \
which case further step() calls will return undefined results
* ``info`` a numpy.ndarray, contains auxiliary diagnostic \
information (helpful for debugging, and sometimes learning)
For the async simulation:
Provide the given action to the environments. The action sequence
should correspond to the ``id`` argument, and the ``id`` argument
should be a subset of the ``env_id`` in the last returned ``info``
(initially they are env_ids of all the environments). If action is
None, fetch unfinished step() calls instead.
"""
self._assert_is_not_closed()
id = self._wrap_id(id)
if not self.is_async:
assert len(action) == len(id)
for i, j in enumerate(id):
self.workers[j].send_action(action[i])
result = []
for j in id:
obs, rew, done, info = self.workers[j].get_result()
info["env_id"] = j
result.append((obs, rew, done, info))
else:
if action is not None:
self._assert_id(id)
assert len(action) == len(id)
for i, (act, env_id) in enumerate(zip(action, id)):
self.workers[env_id].send_action(act)
self.waiting_conn.append(self.workers[env_id])
self.waiting_id.append(env_id)
self.ready_id = [x for x in self.ready_id if x not in id]
ready_conns: List[EnvWorker] = []
while not ready_conns:
ready_conns = self.worker_class.wait(
self.waiting_conn, self.wait_num, self.timeout)
result = []
for conn in ready_conns:
waiting_index = self.waiting_conn.index(conn)
self.waiting_conn.pop(waiting_index)
env_id = self.waiting_id.pop(waiting_index)
obs, rew, done, info = conn.get_result()
info["env_id"] = env_id
result.append((obs, rew, done, info))
self.ready_id.append(env_id)
obs_list, rew_list, done_list, info_list = zip(*result)
try:
obs_stack = np.stack(obs_list)
except ValueError: # different len(obs)
obs_stack = np.array(obs_list, dtype=object)
rew_stack, done_stack, info_stack = map(
np.stack, [rew_list, done_list, info_list])
if self.obs_rms and self.update_obs_rms:
self.obs_rms.update(obs_stack)
return self.normalize_obs(obs_stack), rew_stack, done_stack, info_stack
def seed(
self, seed: Optional[Union[int, List[int]]] = None
) -> List[Optional[List[int]]]:
"""Set the seed for all environments.
        Accept ``None``, an int (an int ``i`` will be extended to
        ``[i, i + 1, i + 2, ...]`` across the environments) or a list of ints.
        :return: The list of seeds used in this env's random number generators.
        The first value in the list should be the "main" seed, or the value
        which a reproducer should pass to "seed".
"""
self._assert_is_not_closed()
seed_list: Union[List[None], List[int]]
if seed is None:
seed_list = [seed] * self.env_num
elif isinstance(seed, int):
seed_list = [seed + i for i in range(self.env_num)]
else:
seed_list = seed
return [w.seed(s) for w, s in zip(self.workers, seed_list)]
def render(self, id=None, **kwargs: Any) -> List[Any]:
"""Render all of the environments."""
id = self._wrap_id(id)
if self.is_async:
self._assert_id(id)
self._assert_is_not_closed()
if self.is_async and len(self.waiting_id) > 0:
raise RuntimeError(
f"Environments {self.waiting_id} are still stepping, cannot "
"render them now.")
return [self.workers[i].render(**kwargs) for i in id]
def _render_traj_rgb(self, id=None, **kwargs: Any) -> List[Any]:
"""Render all of the environments."""
id = self._wrap_id(id)
if self.is_async:
self._assert_id(id)
self._assert_is_not_closed()
if self.is_async and len(self.waiting_id) > 0:
raise RuntimeError(
f"Environments {self.waiting_id} are still stepping, cannot "
"render them now.")
return [self.workers[i].render_traj_rgb(**kwargs) for i in id]
def close(self) -> None:
"""Close all of the environments.
        This function will be called only once (if not, it will be called during
        garbage collection). This way, ``close`` is guaranteed to be called on all workers.
"""
self._assert_is_not_closed()
for w in self.workers:
w.close()
self.is_closed = True
def normalize_obs(self, obs: np.ndarray) -> np.ndarray:
"""Normalize observations by statistics in obs_rms."""
if self.obs_rms and self.norm_obs:
clip_max = 10.0 # this magic number is from openai baselines
# see baselines/common/vec_env/vec_normalize.py#L10
obs = (obs - self.obs_rms.mean) / np.sqrt(self.obs_rms.var + self.__eps)
obs = np.clip(obs, -clip_max, clip_max)
return obs
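# Illustrative sketch (not part of the original module): one way to drive the
# asynchronous stepping protocol described in ``BaseVectorEnv.step`` above.
# ``make_task_env`` and ``policy`` are hypothetical stand-ins for this sketch;
# the key point is to only act on the env_ids reported back in ``info``.
def _example_async_step(make_task_env, policy, env_num: int = 8, steps: int = 100):
    envs = SubprocVectorEnv(
        [make_task_env for _ in range(env_num)], wait_num=4, timeout=0.1
    )
    obs = envs.reset()
    ready_ids = list(range(env_num))
    for _ in range(steps):
        actions = policy(obs)  # one action per currently ready env
        obs, rew, done, info = envs.step(actions, id=ready_ids)
        # Only the env_ids returned here may be stepped (or reset) next.
        ready_ids = [i["env_id"] for i in info]
        for local_idx, finished in enumerate(done):
            if finished:
                # For brevity this sketch ignores the fresh observation
                # returned by reset().
                envs.reset(id=ready_ids[local_idx])
    envs.close()
    return obs, rew, done, info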
class DummyVectorEnv(BaseVectorEnv):
"""Dummy vectorized environment wrapper, implemented in for-loop.
.. seealso::
Please refer to :class:`~tianshou.env.BaseVectorEnv` for other APIs' usage.
"""
def __init__(self, env_fns: List[Callable[[], gym.Env]], **kwargs: Any) -> None:
super().__init__(env_fns, DummyEnvWorker, **kwargs)
class SubprocVectorEnv(BaseVectorEnv):
"""Vectorized environment wrapper based on subprocess.
.. seealso::
Please refer to :class:`~tianshou.env.BaseVectorEnv` for other APIs' usage.
"""
def __init__(self, env_fns: List[Callable[[], gym.Env]], **kwargs: Any) -> None:
def worker_fn(fn: Callable[[], gym.Env]) -> SubprocEnvWorker:
return SubprocEnvWorker(fn, share_memory=False)
super().__init__(env_fns, worker_fn, **kwargs)
class ShmemVectorEnv(BaseVectorEnv):
"""Optimized SubprocVectorEnv with shared buffers to exchange observations.
ShmemVectorEnv has exactly the same API as SubprocVectorEnv.
.. seealso::
Please refer to :class:`~tianshou.env.BaseVectorEnv` for other APIs' usage.
"""
def __init__(self, env_fns: List[Callable[[], gym.Env]], **kwargs: Any) -> None:
def worker_fn(fn: Callable[[], gym.Env]) -> SubprocEnvWorker:
return SubprocEnvWorker(fn, share_memory=True)
super().__init__(env_fns, worker_fn, **kwargs) | [] |
2024-01-10 | Zayne-sprague/Deductive_Additivity_for_Planning_of_Natural_Language_Proofs | multi_type_search~search~search_model~types~contrastive~gpt3_encoder.py | from multi_type_search.search.search_model.types.contrastive import ContrastiveModel
from multi_type_search.utils.paths import ROOT_FOLDER
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers import RobertaTokenizer, RobertaModel, AutoModel, AutoConfig, AutoTokenizer
import pickle
import openai
import json
import torch
from torch import nn
from torch.nn import functional as F
import transformers
from pathlib import Path
from typing import List, Union, Dict, Tuple, Optional
def load_api_key():
key_file: Path = ROOT_FOLDER / 'openai_key.txt'
with open(key_file, 'r') as f:
return f.read().strip()
def load_embeddings_from_file(file_name):
with open(file_name, 'rb') as f:
embeddings = pickle.load(f)
return embeddings
class RawGPT3Encoder(ContrastiveModel):
# TODO - support on the fly embeddings, right now everything must be cached.
model_type: str = 'raw_gpt3_encoder'
def __init__(
self,
cached_embeddings_file: str = None,
cached_strings_file: str = None,
allow_api_access: bool = True,
api_end_point: str = "text-embedding-ada-002"
):
super().__init__()
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
if cached_embeddings_file is not None:
self.cached_embeddings_file = str(Path(ROOT_FOLDER / cached_embeddings_file).absolute())
self.cached_strings_file = str(Path(ROOT_FOLDER / cached_strings_file).absolute())
self.str_cache = json.load(Path(ROOT_FOLDER / cached_strings_file).open('r'))
emb_cache = load_embeddings_from_file(self.cached_embeddings_file)
emb_cache = torch.stack([torch.tensor(x).to(torch.float32) for x in emb_cache], dim=0).to(self.device)
self.register_buffer("emb_cache", emb_cache)
else:
self.cached_embeddings_file = None
self.cached_strings_file = None
self.str_cache = []
self.register_buffer("emb_cache", torch.tensor([], dtype=torch.float32).to(self.device))
# TODO - bad... brought this over for training script to run, but there is no training in the raw variant.
self.tmp = nn.Linear(10, 10)
self.roberta_tokenizer = self.__tokenizer__
self.api_end_point = api_end_point
self.allow_api_access = allow_api_access
if self.allow_api_access:
key = load_api_key()
openai.api_key = key
def activate_key(self):
self.allow_api_access = True
key = load_api_key()
openai.api_key = key
def deactivate_key(self):
self.allow_api_access = False
openai.api_key = None
def __tokenizer__(self, string, *args, **kwargs):
return string
def get_kwargs(self):
return {
'cached_embeddings_file': self.cached_embeddings_file,
'cached_strings_file': self.cached_strings_file,
'allow_api_access': self.allow_api_access,
'api_end_point': self.api_end_point
}
def tokenize(self, exs: Union[List[str], str]):
return exs
def forward(self, tokens: Union[torch.Tensor, List[str]]):
embs = []
for string in tokens:
try:
idx = self.str_cache.index(string)
embs.append(self.emb_cache[idx])
except ValueError:
if self.allow_api_access:
embs.append(self.call_api(string))
else:
print("ERROR")
# print('===')
# print(len(tokens))
# print(len(embs))
return torch.stack(embs, dim=0).to(self.device)
def call_api(self, text: str):
with torch.no_grad():
res = openai.Embedding.create(model=self.api_end_point, input=text)
emb = torch.tensor(res['data'][0]['embedding']).to(self.device).requires_grad_(False)
self.str_cache.append(text)
self.emb_cache = torch.cat([self.emb_cache, emb.unsqueeze(dim=0)], 0)
return emb
def get_encodings(self, strings: List[str]) -> torch.Tensor:
return self(strings)
@classmethod
def __load__(cls, data: Dict, device: str, opt) -> 'RawGPT3Encoder':
kwargs = data.get('kwargs')
        assert kwargs is not None, 'Error loading node embedder from checkpoint: no kwargs in file.'
model = cls(**kwargs)
model.activate_key()
if opt:
return model, opt
return model
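# Illustrative sketch (not part of the original module): using RawGPT3Encoder
# purely from cached embeddings, with API access disabled. The cache paths are
# hypothetical placeholders and are not shipped with the repository.
def _example_raw_gpt3_encoder_usage():
    encoder = RawGPT3Encoder(
        cached_embeddings_file='data/example_gpt3_embeddings.pkl',
        cached_strings_file='data/example_gpt3_strings.json',
        allow_api_access=False,  # look up the cache only; never call OpenAI
    )
    sentences = ["All men are mortal.", "Socrates is a man."]
    # Returns one 1536-dimensional ada-002 embedding per input string.
    return encoder.get_encodings(sentences)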
class GPT3LinearLayer(nn.Module):
def __init__(
self,
input_dim: int = 1536,
output_dim: int = 1536,
normalization: Optional[nn.Module] = nn.LayerNorm(1536),
activation: Optional[nn.Module] = nn.ReLU(),
dropout: Optional[float] = 0.0
):
super().__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.layer = nn.Linear(input_dim, output_dim)
self.normalization = normalization
self.activation = activation
self.dropout = nn.Dropout(dropout)
seq = [self.layer]
if normalization:
seq.append(self.normalization)
if activation:
seq.append(self.activation)
if dropout is not None:
seq.append(self.dropout)
self.projection = nn.Sequential(*seq)
def forward(self, x):
return self.projection(x)
class GPT3GluLayer(nn.Module):
def __init__(
self,
input_dim: int = 1536,
output_dim: int = 1536,
residual_connection: bool = True,
normalization: Optional[nn.Module] = nn.LayerNorm(1536),
dropout: Optional[float] = 0.0
):
super().__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.residual_connection = residual_connection
self.true_output_dim = output_dim + int(output_dim / 2) if self.residual_connection else int(output_dim / 2)
self.layer = nn.Linear(input_dim, output_dim)
self.normalization = normalization
self.activation = nn.GLU(dim=1)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
x = self.layer(x)
if self.normalization:
x = self.normalization(x)
activations = self.activation(x)
if self.residual_connection:
activations = torch.cat([x, activations], dim=1)
if self.dropout:
activations = self.dropout(activations)
return activations
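# Illustrative sketch (not part of the original module): the GLU activation
# halves the feature dimension, and the residual connection concatenates the
# pre-activation features back on, which is why ``true_output_dim`` equals
# ``output_dim + output_dim / 2`` above.
def _example_glu_layer_shapes():
    layer = GPT3GluLayer(input_dim=1536, output_dim=1536, residual_connection=True)
    x = torch.randn(4, 1536)
    out = layer(x)
    # 1536 (pre-activation) + 768 (GLU output) = 2304 == layer.true_output_dim
    assert out.shape == (4, layer.true_output_dim)
    return out.shape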
class ProjectedGPT3Encoder(RawGPT3Encoder):
# TODO - support on the fly embeddings, right now everything must be cached.
model_type: str = 'projected_gpt3_encoder'
def __init__(
self,
cached_embeddings_file: str = None,
cached_strings_file: str = None,
allow_api_access: bool = True,
api_end_point: str = "text-embedding-ada-002",
projection_head_layer_num: int = 3,
projection_head_type: str = 'glu'
):
super().__init__(
cached_embeddings_file=cached_embeddings_file,
cached_strings_file=cached_strings_file,
allow_api_access=allow_api_access,
api_end_point=api_end_point
)
self.projection_head_layer_num = projection_head_layer_num
self.emb_size = 1536
self.projection_head_type = projection_head_type
projection = self.__setup_glu_projection__() if self.projection_head_type == 'glu' else \
self.__setup_linear_projection__()
self.projection = nn.Sequential(*projection)
# TODO - bad... brought this over for training script to run, but there is no training in the raw variant.
self.roberta_tokenizer = self.__tokenizer__
def __setup_linear_projection__(self) -> List[nn.Module]:
projection = []
for i in range(self.projection_head_layer_num):
if i < (self.projection_head_layer_num - 1):
projection.append(GPT3LinearLayer(
input_dim=self.emb_size, output_dim=self.emb_size,
normalization=nn.LayerNorm(self.emb_size), activation=nn.LeakyReLU(),
dropout=0.0
))
else:
projection.append(GPT3LinearLayer(
input_dim=self.emb_size, output_dim=self.emb_size,
normalization=None, activation=None,
dropout=0.0
))
return projection
def __setup_glu_projection__(self) -> List[nn.Module]:
projection = []
for i in range(self.projection_head_layer_num):
if i < (self.projection_head_layer_num - 1):
projection.append(GPT3GluLayer(
input_dim=self.emb_size if len(projection) == 0 else projection[-1].true_output_dim,
output_dim=self.emb_size, residual_connection=True,
normalization=nn.LayerNorm(self.emb_size),
dropout=0.0
))
else:
projection.append(GPT3GluLayer(
input_dim=self.emb_size if len(projection) == 0 else projection[-1].true_output_dim,
output_dim=self.emb_size,
residual_connection=True,
normalization=None,
dropout=0.0
))
return projection
def __tokenizer__(self, string, *args, **kwargs):
return string
def get_kwargs(self):
kwargs = super().get_kwargs()
kwargs.update({
'projection_head_layer_num': self.projection_head_layer_num,
'projection_head_type': self.projection_head_type
})
return kwargs
def tokenize(self, exs: Union[List[str], str]):
return exs
def forward(self, tokens: Union[torch.Tensor, List[str]]):
embs = super().forward(tokens)
return self.projection(embs)
def get_encodings(self, strings: List[str]) -> torch.Tensor:
return self(strings)
@classmethod
def __load__(cls, data: Dict, device: str, opt) -> 'ProjectedGPT3Encoder':
kwargs = data.get('kwargs')
        assert kwargs is not None, 'Error loading node embedder from checkpoint: no kwargs in file.'
if 'opt_state' in data and opt:
opt.load_state_dict(data['opt_state'])
model = cls(**kwargs)
state_dict = data.get('state_dict')
        assert state_dict is not None, 'Error loading node embedder from checkpoint: no state dict in file.'
model.load_state_dict(state_dict, strict=False)
model.to(device)
if opt:
return model, opt
return model
| [] |
2024-01-10 | Zayne-sprague/Deductive_Additivity_for_Planning_of_Natural_Language_Proofs | multi_type_search~scripts~embed_with_gpt3.py | import json
import openai
import numpy as np
import json
import pickle
from pathlib import Path
from tqdm import tqdm
from multi_type_search.utils.paths import DATA_FOLDER
from multi_type_search.search.graph.graph import Graph
from jsonlines import jsonlines
# Set up OpenAI API key
openai.api_key = ""
def get_gpt3_embeddings(text_list):
embeddings = []
for text in tqdm(text_list, desc='Embedding with openai', total=len(text_list)):
response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
embedding = np.array(response['data'][0]['embedding'])
embeddings.append(embedding)
return np.array(embeddings)
def save_embeddings_to_file(embeddings, file_name):
with open(file_name, 'wb') as f:
pickle.dump(embeddings, f)
def load_embeddings_from_file(file_name):
with open(file_name, 'rb') as f:
embeddings = pickle.load(f)
return embeddings
tree_files = [
DATA_FOLDER / 'full/morals/moral100.json',
]
emb_file = 'moral_gpt3_embeddings.pkl'
str_file = 'moral_gpt3_strings.json'
graphs = []
for tree_file in tree_files:
if str(tree_file).endswith('.jsonl'):
data = list(jsonlines.open(str(tree_file), 'r'))
else:
data = json.load(tree_file.open('r'))
graphs.extend([Graph.from_json(t) for t in data])
strings = [x.normalized_value for y in graphs for x in y.premises]
strings.extend([x.normalized_value for z in graphs for y in z.deductions for x in y.nodes])
strings = list(set(strings))
# Replace with your own list of strings
# text_list = ["Hello world", "I love programming", "OpenAI GPT-3 is amazing"]
text_list = strings
json.dump(strings, Path(f'./{str_file}').open('w'))
# Get the embeddings
# embeddings = get_gpt3_embeddings(text_list)
# Save embeddings to a file
# save_embeddings_to_file(embeddings, emb_file)
# Load embeddings from the file
loaded_embeddings = load_embeddings_from_file(emb_file)
print("Loaded embeddings:", loaded_embeddings) | [] |
2024-01-10 | Zayne-sprague/Deductive_Additivity_for_Planning_of_Natural_Language_Proofs | gpt3~score_prompt.py | import openai
import os
import json
import sys
import argparse
from pathlib import Path
from gpt3.utils import gpt3_common_args, gpt3_completion_args, get_gpt3_api_key, set_gpt3_api_key, PROMPTS_FOLDER, OUTPUTS_FOLDER
GPT_3_FOLDER = Path(__file__).parent.resolve()
# TODO - Intentionally not hooked into an automatic CSV printout to keep people from spamming the heck out of GPT3
# (easy to add if this becomes cumbersome). You can output json files per run though!
def score(
prompt,
context: str = '',
engine: str = 'davinci',
top_p: float = 0.9,
):
full_prompt = f'{context}{prompt}'
response = openai.Completion.create(
engine=engine,
prompt=full_prompt,
max_tokens=0,
logprobs=0,
top_p=top_p,
echo=True,
n=1
)
choice = response['choices'][0]
log_probabilities = choice['logprobs']['token_logprobs']
offset = 0 if context is None else len(context)
token_offsets = choice['logprobs']['text_offset']
prompt_starting_idx = 0
for idx, token_offset in enumerate(token_offsets):
if token_offset > offset:
break
prompt_starting_idx = idx
prompt_log_probs = log_probabilities[prompt_starting_idx:]
prompt_log_probs = [x for x in prompt_log_probs if x is not None]
score = sum(prompt_log_probs) / len(prompt_log_probs)
return score
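# Illustrative sketch (not part of the original CLI below): calling ``score``
# directly from Python. The context and candidate strings are made-up examples,
# and an API key must already have been set via ``set_gpt3_api_key``.
def _example_score_usage():
    context = "Q: What is the capital of France?\nA: "
    candidates = ["Paris", "London"]
    scores = {c: score(c, context=context) for c in candidates}
    # Higher (less negative) mean log-probability means GPT-3 considers the
    # continuation more likely given the context.
    return max(scores, key=scores.get)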
if __name__ == "__main__":
parser = argparse.ArgumentParser()
gpt3_common_args(parser)
parser.add_argument('--prompt', '-p', type=str,
help='Prompt to pass into the completion endpoint.')
parser.add_argument('--prompt_file_name', '-pfn', type=str,
help='Name of the file in the prompts directory.')
parser.add_argument('--context', '-c', type=str,
help='Context to pass into the completion endpoint. This will be pre-pended to the prompt'
'and will NOT be scored')
parser.add_argument('--json_input_file', '-jif', type=str,
help='A json file that includes an array of json objects with each object having a '
'context property and a prompt property, the prompt will be scored.')
parser.add_argument('--json_output_file', '-jof', type=str,
help='A json file that will output the scores of the model, will use the same name as the'
'json input file if specified (or the prompt file name if specified) as a default'
'without either, no output will be generated.')
args = parser.parse_args()
api_key = args.api_key
api_key_file_path = args.api_key_file_path
if not api_key and api_key_file_path:
api_key = get_gpt3_api_key(Path(api_key_file_path))
if not api_key:
raise Exception("Specify an api key via --api_key or an api key file path --api_key_file_path=./api_key.txt")
set_gpt3_api_key(api_key)
silent = args.silent
engine = args.engine
top_p = args.top_p
context = args.context
prompt = args.prompt
prompt_file_name = args.prompt_file_name
json_input_file = args.json_input_file
json_output_file = args.json_output_file
prompts_to_score = []
if json_input_file:
with (PROMPTS_FOLDER / f'{json_input_file}.json').open('r') as f:
prompts_to_score = json.load(f)
else:
if not prompt and prompt_file_name:
with (PROMPTS_FOLDER / f'{prompt_file_name}.txt').open('r') as f:
prompt = f.read().strip()
prompts_to_score = [{'prompt': prompt, 'context': context}]
if len(prompts_to_score) == 0:
raise Exception("Specify a prompt via --prompt='example prompt' or a prompt file name in the prompts folder."
"-pfn test or a json input file (i.e. -jif 'test') with content in the form of"
" [{context: 'prepended to the prompt but not scored.', prompt: 'this will be scored'}]")
outputs = []
for idx, prompt_to_score in enumerate(prompts_to_score):
prompt = prompt_to_score.get('prompt')
assert prompt is not None, 'Please specify a correct prompt (none is not a correct prompt)'
context = prompt_to_score.get('context', '')
if not isinstance(prompt, list) and not isinstance(prompt, tuple):
prompt = [prompt]
if not isinstance(context, list) and not isinstance(context, tuple):
context = [context]
# Really the format of the json file can be
# {context: ['abc put this before the prompt', 'another context to try'],
# prompt: ['prompt 1 with the context', 'prompt 2 with the same context', 'prompt 3 with the...', ...]}
# just to make things easier and compact.
for c in context:
all_scores = []
if not silent:
print(f"\n\nContext: {c}")
curr_prompt = []
for p in prompt:
if isinstance(p, list) or isinstance(p, tuple):
curr_prompt.append(p)
all_scores.append(curr_prompt)
curr_prompt = []
continue
elif len(curr_prompt) > 0 and len(curr_prompt) < 3:
curr_prompt.append([])
all_scores.append(curr_prompt)
curr_prompt = []
scoring = score(
p,
context=c,
engine=engine,
top_p=top_p,
)
curr_prompt = [scoring, p]
if len(curr_prompt) < 3 and len(curr_prompt) > 0:
curr_prompt.append([])
if len(curr_prompt) > 0:
all_scores.append(curr_prompt)
sorted_scores = list(sorted(all_scores, key=lambda x: x[0], reverse=True))
for (s, p, t) in sorted_scores:
if not silent:
print(f"Prompt ({s:4f}): {p} | tags {'; '.join(t)}")
outputs.append({'context': c, 'scores': [{'prompt': p, 'score': s, 'tags': t} for (s, p, t) in sorted_scores]})
out_file = None
if json_output_file:
out_file = Path(OUTPUTS_FOLDER / json_output_file)
elif json_input_file:
out_file = Path(OUTPUTS_FOLDER / (json_input_file + '_out.json'))
elif prompt_file_name:
out_file = Path(OUTPUTS_FOLDER / (prompt_file_name + "_out.json"))
if out_file:
json.dump(outputs, out_file.open('w'))
| [
"0",
"['PLACEHOLDER']",
"PLACEHOLDERPLACEHOLDER",
"[PLACEHOLDER, PLACEHOLDER]",
"[]",
"[{'prompt': PLACEHOLDER, 'context': PLACEHOLDER}]"
] |
2024-01-10 | Zayne-sprague/Deductive_Additivity_for_Planning_of_Natural_Language_Proofs | multi_type_search~utils~gpt3_utils.py | from pathlib import Path
import openai
def get_gpt3_api_key(api_key_file_path: Path) -> str:
"""Helper to read in the api key from a txt file."""
with api_key_file_path.open('r') as f:
return f.read().strip()
def set_gpt3_api_key(api_key: str):
"""Small helper to set the api key for openai."""
openai.api_key = api_key
| [] |
2024-01-10 | teticio/openai-proxy | src~openai_wrapi~proxy0.py | import os
from types import SimpleNamespace
from typing import Dict, Optional, Tuple
from urllib.parse import urljoin
import openai
from openai import api_key
from .utils import get_aws_auth, get_base_url, get_user
_base_url = get_base_url(api_key.split("-")[1])
_custom_auth = get_aws_auth()
_custom_headers = {
"openai-proxy-user": get_user(),
"openai-proxy-project": os.environ.get("OPENAI_DEFAULT_PROJECT", "N/A"),
"openai-proxy-staging": os.environ.get("OPENAI_DEFAULT_STAGING", "dev"),
"openai-proxy-caching": os.environ.get("OPENAI_DEFAULT_CACHING", "1"),
}
def set_project(project: str):
_custom_headers["openai-proxy-project"] = project
def set_staging(staging: str):
_custom_headers["openai-proxy-staging"] = staging
def set_caching(caching: bool):
_custom_headers["openai-proxy-caching"] = str(int(caching))
_prepare_request_raw = openai.api_requestor.APIRequestor._prepare_request_raw
def _prepare_request_raw_proxy(
self,
url,
supplied_headers,
method,
params,
files,
request_id: Optional[str],
) -> Tuple[str, Dict[str, str], Optional[bytes]]:
_, headers, data = _prepare_request_raw(
self, url, supplied_headers, method, params, files, request_id
)
request = _custom_auth(
SimpleNamespace(
**{
"method": method,
"url": urljoin(_base_url, url),
"headers": {**headers, **_custom_headers},
"content": data,
}
)
)
return request.url, request.headers, request.content
# Monkey patch
openai.api_requestor.APIRequestor._prepare_request_raw = _prepare_request_raw_proxy
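# Illustrative sketch (not part of the original module): once this module is
# imported the patch above is active, so the stock openai 0.x API can be used
# as usual while requests are routed through the proxy. The project name is a
# made-up example, and valid OpenAI/AWS credentials are assumed.
def _example_proxied_chat_completion():
    set_project("demo-project")
    set_caching(True)
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
    )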
| [] |
2024-01-10 | teticio/openai-proxy | src~openai_wrapi~proxy1.py | import os
from typing import Any, TypeVar, Union
import httpx
from httpx import URL
from openai._base_client import BaseClient
from openai._client import AsyncOpenAI, OpenAI
from openai._streaming import AsyncStream, Stream
from .utils import get_aws_auth, get_base_url, get_user
_HttpxClientT = TypeVar("_HttpxClientT", bound=Union[httpx.Client, httpx.AsyncClient])
_DefaultStreamT = TypeVar("_DefaultStreamT", bound=Union[Stream[Any], AsyncStream[Any]])
class BaseClientProxy(BaseClient[_HttpxClientT, _DefaultStreamT]):
def __init__(
self,
project: str = os.environ.get("OPENAI_DEFAULT_PROJECT", "N/A"),
staging: str = os.environ.get("OPENAI_DEFAULT_STAGING", "dev"),
caching: str = os.environ.get("OPENAI_DEFAULT_CACHING", "1"),
):
self.set_project(project)
self.set_staging(staging)
self.set_caching(caching)
self._custom_headers["openai-proxy-user"] = get_user()
self._base_url = URL(get_base_url(self.api_key.split("-")[1]))
def set_project(self, project: str):
self._custom_headers["openai-proxy-project"] = project
def set_staging(self, staging: str):
self._custom_headers["openai-proxy-staging"] = staging
def set_caching(self, caching: bool):
self._custom_headers["openai-proxy-caching"] = str(int(caching))
custom_auth = get_aws_auth()
class OpenAIProxy(BaseClientProxy[httpx.Client, Stream[Any]], OpenAI):
def __init__(self, **kwargs):
proxy_kwargs = {
k: kwargs.pop(k)
for k in list(kwargs)
if k in ["project", "staging", "caching"]
}
OpenAI.__init__(self, **kwargs)
BaseClientProxy.__init__(self, **proxy_kwargs)
class AsyncOpenAIProxy(BaseClientProxy[httpx.Client, Stream[Any]], AsyncOpenAI):
def __init__(self, **kwargs):
proxy_kwargs = {
k: kwargs.pop(k)
for k in list(kwargs)
if k in ["project", "staging", "caching"]
}
AsyncOpenAI.__init__(self, **kwargs)
BaseClientProxy.__init__(self, **proxy_kwargs)
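# Illustrative sketch (not part of the original module): constructing the
# drop-in proxy client for the openai 1.x SDK. The project and staging values
# are made-up examples, and valid OpenAI/AWS credentials are assumed.
def _example_openai_proxy_client():
    client = OpenAIProxy(project="demo-project", staging="dev", caching=True)
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
    )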
| [] |
2024-01-10 | kbuchim/Ironhack-CostumerSupport-Assistance | proyecto_final_streamlit.py | import streamlit as st
import os
import openai
from dotenv import load_dotenv
import pandas as pd
from typing import Set
from transformers import GPT2TokenizerFast
import argparse, sys
import re  # needed by search_question below
import numpy as np
import PyPDF2
from PyPDF2 import PdfReader
import csv
import pickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import requests
from bs4 import BeautifulSoup
import sys
import nltk
from pdfminer.high_level import extract_text, extract_pages
from pdfminer.layout import LTTextContainer, LTTextBoxHorizontal, LTTextLineHorizontal
from pdfminer.high_level import extract_pages
from pdfminer.pdfinterp import PDFPageInterpreter
from pdfminer.pdfpage import PDFPage
from io import StringIO
import nltk
nltk.download('punkt')
from nltk.tokenize.punkt import PunktSentenceTokenizer
tokenizer = PunktSentenceTokenizer()
# Use load_env to trace the path of .env:
load_dotenv('.env')
openai.organization = "org-BJVQfnJYTuAJz2TkTNbchIf2"
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.Model.list()
def parser(file_path):
# Parse the PDF file
reader = PyPDF2.PdfReader(file_path)
# Loop over each page in the PDF document
sentences = []
for page in range(len(reader.pages)):
# Extract the text from the page
pdf_text = reader.pages[page].extract_text()
# in my case, when it had '\n', I called it a new paragraph,
# like a collection of sentences
paragraphs = [p for p in pdf_text.split('\n') if p]
# and here, sent_tokenize each one of the paragraphs
for paragraph in paragraphs:
pdf_sentences = tokenizer.tokenize(paragraph)
# Add the sentences to the list of sentences
sentences.extend(pdf_sentences)
return sentences
def search_question(sentences, question):
# Search for the question in the sentences
best_sentence = None
best_score = 0
for sentence in sentences:
# Calculate the score for the sentence based on the number of overlapping words with the question
score = len(set(re.findall(r'\b\w+\b', sentence.lower())) & set(re.findall(r'\b\w+\b', question.lower())))
# Update the best sentence and score if this sentence has a higher score than the current best sentence
if score > best_score:
best_sentence = sentence
best_score = score
# Return the best sentence as the answer
return best_sentence
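# Illustrative sketch (not used by the Streamlit app below): combining
# ``parser`` and ``search_question`` for a quick keyword-overlap lookup over a
# local PDF. The file path and question are made-up examples.
def _example_pdf_keyword_search():
    sentences = parser("manuals/user_guide.pdf")
    return search_question(sentences, "How do I reset my password?")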
def get_text_lines(pdf_file):
    """
    Gets all of the horizontal text lines from a PDF
    """
    text_lines = []
    for page_layout in extract_pages(pdf_file):
        for element in page_layout:
            if isinstance(element, LTTextBoxHorizontal):
                for text_line in element:
                    if isinstance(text_line, LTTextLineHorizontal):
                        text_lines.append(text_line)
    return text_lines
def unify_text_lines(text_lines):
    """
    Unifies several text lines into a single phrase
    """
    # Sort the text lines by their vertical position
    text_lines.sort(key=lambda x: -x.y0)
    # Concatenate the content of the text lines
    text = ' '.join(line.get_text().strip() for line in text_lines)
    print(text)
    return text
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
def count_tokens(text: str) -> int:
"""count the number of tokens in a string"""
return len(tokenizer.encode(text))
def extract_page(page: str,
index: int
) -> str:
"""
Extracts the content and token count from the given page
"""
content = ' '.join([el.get_text().strip() for el in page if isinstance(el, LTTextContainer)])
token_count = count_tokens(content) + 4 # adding 4 extra tokens
return ("Page " + str(index), content, token_count)
COMPLETIONS_MODEL = "text-davinci-003"
MODEL_NAME = "curie"
DOC_EMBEDDINGS_MODEL = f"text-search-{MODEL_NAME}-doc-001"
QUERY_EMBEDDINGS_MODEL = f"text-search-{MODEL_NAME}-query-001"
def get_embedding(text: str, model: str=DOC_EMBEDDINGS_MODEL) -> list[float]:
result = openai.Embedding.create(
model=model,
input=text
)
return result["data"][0]["embedding"]
def compute_doc_embeddings(df: pd.DataFrame) -> dict[tuple[str], list[float]]:
"""
Create an embedding for each row in the dataframe using the OpenAI Embeddings API.
Return a dictionary that maps between each embedding vector and the index of the row that it corresponds to.
"""
embeddings_dict = {}
for idx, r in df.iterrows():
content = r["content"]
embedding = get_embedding(content)
embeddings_dict[(content, idx)] = embedding
return embeddings_dict
from sklearn.metrics.pairwise import cosine_similarity
MAX_SECTION_LEN = 500
SEPARATOR = "\n* "
separator_len = 3
def construct_prompt(question: str, context_embeddings: dict, df: pd.DataFrame, section_index) -> tuple[str, str]:
document_section = df.iloc[section_index]
chosen_sections = []
chosen_sections_len = 0
chosen_sections_indexes = []
    chosen_sections_len += document_section.tokens + separator_len
    if chosen_sections_len > MAX_SECTION_LEN:
        space_left = MAX_SECTION_LEN - chosen_sections_len - len(SEPARATOR)
        chosen_sections.append(SEPARATOR + document_section.content[:space_left])
        chosen_sections_indexes.append(str(section_index))
    else:
        chosen_sections.append(SEPARATOR + document_section.content)
        chosen_sections_indexes.append(str(section_index))
header = 'Manten tus respuestas en máximo 3 oraciones. Se conciso, y completa siempre las oraciones. \n Este es un contexto que puede ser útil :\n'
return (header + "".join(chosen_sections) + "\n\n\nQ: " + question + "\n\nA: "), ("".join(chosen_sections))
COMPLETIONS_API_PARAMS = {
# We use temperature of 0.0 because it gives the most predictable, factual answer.
"temperature": 0.0,
"max_tokens": 150,
"model": COMPLETIONS_MODEL,
}
def answer_query_with_context(query, df, embeddings):
# Compute query embedding
#query_embedding = np.mean(embeddings.embed_sentences([query]), axis=0)
query_embedding = np.array(get_embedding(query))
# Compute cosine similarity between query embedding and all document embeddings
#similarities = cosine_similarity(embeddings.embedding_matrix, query_embedding.reshape(1, -1))
similarities = cosine_similarity(list(embeddings.values()), query_embedding.reshape(1,-1))
# Find index of most similar document
most_similar_index = np.argmax(similarities)
print(most_similar_index)
#Construct Prompt
prompt, context = construct_prompt(
query,
embeddings,
df,
most_similar_index
)
print("===\n", prompt)
response = openai.Completion.create(
prompt=prompt,
**COMPLETIONS_API_PARAMS
)
return response["choices"][0]["text"].strip(" \n"), context
@st.cache_data
def load_data():
""" Utility function for loading the penguins dataset as a dataframe."""
# df = sns.load_dataset('penguins')
with open('embeddings.pkl', 'rb') as f:
doc_embeddings = pickle.load(f)
df = pd.read_csv('paginas.txt')
return df, doc_embeddings
# load dataset
df, doc_embeddings = load_data()
#---------------------------------------------------------------
# Ask a question and search for the answer
font_size = "20px"
background_color = "#F9F9F9"
text_color = "#00f900"
st.markdown(f"""
<style>
input {{
font-size: {font_size};
background-color: {background_color};
color: {text_color};
}}
</style>
""", unsafe_allow_html=True)
question = st.text_input(
"Ask a question:",
value="",
max_chars=None,
key=None,
type="default",
)
if question:
answer, context = answer_query_with_context(question, df, doc_embeddings)
# Replace newline characters with line breaks
answer_with_line_breaks = answer.replace('\n', '<br>')
# Display the answer as a paragraph of text with line breaks
st.markdown(f"<p>{answer_with_line_breaks}</p>", unsafe_allow_html=True)
st.markdown("""
<style>
p {
font-size: 18px;
line-height: 1.5;
text-align: justify;
}
</style>
""", unsafe_allow_html=True)
| [] |
2024-01-10 | y-h-Lin/GenerativeAIExamples | integrations~langchain~embeddings~nv_aiplay.py | """Chat Model Components Derived from ChatModel/NVAIPlay"""
import asyncio
from collections import abc
from typing import Any, List, Literal, Sequence
from integrations.langchain.llms.nv_aiplay import ClientModel, NVCRModel
from langchain.pydantic_v1 import Field
from langchain.schema.embeddings import Embeddings
class NVAIPlayEmbeddings(ClientModel, Embeddings):
"""NVIDIA's AI Playground NVOLVE Question-Answer Asymmetric Model."""
client: NVCRModel = Field(NVCRModel)
model: str = Field("nvolveqa")
max_length: int = Field(2048, ge=1, le=2048)
def __init__(self, *args: Sequence, **kwargs: Any):
if "client" not in kwargs:
kwargs["client"] = NVCRModel(**kwargs)
super().__init__(*args, **kwargs)
def _embed(self, text: str, model_type: Literal["passage", "query"]) -> List[float]:
"""Embed a single text entry to either passage or query type"""
if len(text) > self.max_length:
text = text[: self.max_length]
output = self.client.get_req_generation(
model_name=self.model,
payload={
"input": text,
"model": model_type,
"encoding_format": "float",
},
)
return output.get("embedding", [])
def embed_query(self, text: str) -> List[float]:
"""Input pathway for query embeddings."""
return self._embed(text, model_type="query")
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Input pathway for document embeddings."""
return [self._embed(text, model_type="passage") for text in texts]
async def aembed_batch_queries(
self,
texts: List[str],
max_concurrency: int = 10,
) -> List[List[float]]:
"""Embed search queries with Asynchronous Batching and Concurrency Control."""
semaphore = asyncio.Semaphore(max_concurrency)
async def embed_with_semaphore(text: str) -> abc.Coroutine:
async with semaphore:
return await self.aembed_query(text)
tasks = [embed_with_semaphore(text) for text in texts]
return await asyncio.gather(*tasks)
async def aembed_batch_documents(
self,
texts: List[str],
max_concurrency: int = 10,
) -> List[List[float]]:
"""Embed search docs with Asynchronous Batching and Concurrency Control."""
semaphore = asyncio.Semaphore(max_concurrency)
async def embed_with_semaphore(text: str) -> abc.Coroutine:
async with semaphore:
return await self.aembed_documents([text])
tasks = [embed_with_semaphore(text) for text in texts]
outs = await asyncio.gather(*tasks)
return [out[0] for out in outs]
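# Illustrative sketch (not part of the original module): embedding a query and
# a few passages with the asymmetric NVOLVE model, plus the batched async
# pathway. A valid NVAPI key is assumed to be available in the environment.
def _example_nvolve_embeddings():
    embedder = NVAIPlayEmbeddings(model="nvolveqa")
    query_vec = embedder.embed_query("example question text")
    doc_vecs = embedder.embed_documents(["first passage text", "second passage text"])
    return query_vec, doc_vecs
async def _example_nvolve_embeddings_async():
    embedder = NVAIPlayEmbeddings(model="nvolveqa")
    return await embedder.aembed_batch_queries(
        ["first question", "second question"], max_concurrency=2
    )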
| [] |
2024-01-10 | y-h-Lin/GenerativeAIExamples | integrations~langchain~llms~nv_aiplay.py | ## NOTE: This class is intentionally implemented to subclass either ChatModel or LLM for
## demonstrative purposes and to make it function as a simple standalone file.
from __future__ import annotations
import asyncio
import json
import logging
import re
from typing import (
Any,
AsyncIterator,
Callable,
Dict,
Generator,
Iterator,
List,
Optional,
Sequence,
Tuple,
Union,
)
import aiohttp
import requests
from requests.models import Response
from langchain.callbacks.manager import (
AsyncCallbackManager,
AsyncCallbackManagerForLLMRun,
CallbackManager,
)
from langchain.llms.base import LLM
from langchain.pydantic_v1 import BaseModel, Field, SecretStr, root_validator
from langchain.schema.messages import BaseMessage, ChatMessageChunk
from langchain.schema.output import ChatGenerationChunk, GenerationChunk
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
class ClientModel(BaseModel):
"""
Custom BaseModel subclass with some desirable properties for subclassing
"""
saved_parent: Optional[ClientModel] = None
def __init__(self, *args: Sequence, **kwargs: Any[str, Any]):
super().__init__(*args, **kwargs)
def subscope(self, *args: Sequence, **kwargs: Any) -> Any:
"""Create a new ClientModel with the same values but new arguments"""
named_args = dict({k: v for k, v in zip(getattr(self, "arg_keys", []), args)})
named_args = {**named_args, **kwargs}
out = self.copy(update=named_args)
out.validate(dict(out._iter(to_dict=False, by_alias=False, exclude_unset=True)))
for k, v in self.__dict__.items():
if isinstance(v, ClientModel):
setattr(out, k, v.subscope(*args, **kwargs))
out.saved_parent = self
return out
def dict(self, *args: Sequence, **kwargs: Any) -> dict:
"""Handle saved_parent bleeding into dict"""
out = super().dict(*args, **kwargs)
if "saved_parent" in out:
out.pop("saved_parent")
return out
def get(self, key: str) -> Any:
"""Get a value from the ClientModel, using it like a dictionary"""
return getattr(self, key)
def transfer_state(self, other: Optional[ClientModel]) -> None:
"""Transfer state from one ClientModel to another"""
if other is None:
return
for k, v in self.__dict__.items():
if k in getattr(self, "state_vars", []):
setattr(other, k, v)
elif hasattr(v, "transfer_state"):
other_sub = getattr(other, k, None)
if other_sub is not None:
v.transfer_state(other_sub)
@staticmethod
def desecretize(v: Any) -> Any:
"""Desecretize a collection of values"""
recurse = ClientModel.desecretize
if isinstance(v, SecretStr):
return v.get_secret_value()
if isinstance(v, str):
return v
if isinstance(v, dict):
return {k: recurse(v) for k, v in v.items()}
if isinstance(v, list):
return [recurse(subv) for subv in v]
if isinstance(v, tuple):
return tuple(recurse(subv) for subv in v)
return v
def __enter__(self) -> ClientModel:
return self
def __exit__(self, type: Any, value: Any, traceback: Any) -> None:
self.transfer_state(self.saved_parent)
self.saved_parent = None
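# Illustrative sketch (not part of the original module): ``subscope`` returns a
# temporary copy with overridden fields, and using it as a context manager
# transfers any fields listed in ``state_vars`` back to the parent on exit.
# ``_ExampleCounter`` is a made-up model defined only for this sketch.
class _ExampleCounter(ClientModel):
    inputs: str = ""
    count: int = 0
    arg_keys: Sequence[str] = ["inputs"]
    state_vars: Sequence[str] = ["count"]
def _example_subscope_usage():
    parent = _ExampleCounter()
    with parent.subscope("hello") as child:
        assert child.inputs == "hello"
        child.count += 1
    # ``count`` is listed in state_vars, so the change propagates back.
    assert parent.count == 1
    return parent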
class NVCRModel(ClientModel):
"""
Underlying Client for interacting with the AI Playground API.
Leveraged by the NVAIPlayBaseModel to provide a simple requests-oriented interface.
Direct abstraction over NGC-recommended streaming/non-streaming Python solutions.
NOTE: AI Playground does not currently support raw text continuation.
"""
## Core defaults. These probably should not be changed
fetch_url_format: str = Field("https://api.nvcf.nvidia.com/v2/nvcf/pexec/status/")
call_invoke_base: str = Field("https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions")
get_session_fn: Callable = Field(requests.Session)
get_asession_fn: Callable = Field(aiohttp.ClientSession)
## Populated on construction/validation
nvapi_key: Optional[SecretStr]
is_staging: Optional[bool]
available_models: Optional[Dict[str, str]]
## Generation arguments
max_tries: int = Field(5, ge=1)
stop: Union[str, List[str]] = Field([])
headers = dict(
call={"Authorization": "Bearer {nvapi_key}", "Accept": "application/json"},
stream={
"Authorization": "Bearer {nvapi_key}",
"Accept": "text/event-stream",
"content-type": "application/json",
},
)
## Status Tracking Variables. Updated Progressively
last_inputs: Optional[dict] = Field(None)
last_response: Optional[Any] = Field(None)
last_msg: dict = Field({})
available_functions: List[dict] = Field([{}])
state_vars: Sequence[str] = Field(
[
"last_inputs",
"last_response",
"last_msg",
"available_functions",
]
)
@root_validator()
def validate_model(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Validate and update model arguments, including API key and formatting"""
values["nvapi_key"] = get_from_dict_or_env(values, "nvapi_key", "NVAPI_KEY")
if "nvapi-" not in values.get("nvapi_key", ""):
raise ValueError("Invalid NVAPI key detected. Should start with `nvapi-`")
values["is_staging"] = "nvapi-stg-" in values["nvapi_key"]
for header in values["headers"].values():
if "{nvapi_key}" in header["Authorization"]:
nvapi_key = ClientModel.desecretize(values["nvapi_key"])
header["Authorization"] = SecretStr(
header["Authorization"].format(nvapi_key=nvapi_key),
)
if isinstance(values["stop"], str):
values["stop"] = [values["stop"]]
return values
def __init__(self, *args: Sequence, **kwargs: Any):
"""Useful to define custom operations on construction after validation"""
super().__init__(*args, **kwargs)
self.fetch_url_format = self._stagify(self.fetch_url_format)
self.call_invoke_base = self._stagify(self.call_invoke_base)
try:
self.available_models = self.get_available_models()
except Exception as e:
raise Exception("Error retrieving model list. Verify your NVAPI key") from e
def _stagify(self, path: str) -> str:
"""Helper method to switch between staging and production endpoints"""
if self.is_staging and "stg.api" not in path:
return path.replace("api", "stg.api")
if not self.is_staging and "stg.api" in path:
return path.replace("stg.api", "api")
return path
####################################################################################
## Core utilities for posting and getting from NVCR
def _post(self, invoke_url: str, payload: dict = {}) -> Tuple[Response, Any]:
"""Method for posting to the AI Playground API."""
self.last_inputs = dict(
url=invoke_url,
headers=self.headers["call"],
json=payload,
stream=False,
)
session = self.get_session_fn()
self.last_response = session.post(**ClientModel.desecretize(self.last_inputs))
self._try_raise(self.last_response)
return self.last_response, session
def _get(self, invoke_url: str, payload: dict = {}) -> Tuple[Response, Any]:
"""Method for getting from the AI Playground API."""
self.last_inputs = dict(
url=invoke_url,
headers=self.headers["call"],
json=payload,
stream=False,
)
session = self.get_session_fn()
self.last_response = session.get(**ClientModel.desecretize(self.last_inputs))
self._try_raise(self.last_response)
return self.last_response, session
def _wait(self, response: Response, session: Any) -> Response:
"""Wait for a response from API after an initial response is made."""
i = 1
while response.status_code == 202:
request_id = response.headers.get("NVCF-REQID", "")
response = session.get(
self.fetch_url_format + request_id,
headers=ClientModel.desecretize(self.headers["call"]),
)
if response.status_code == 202:
try:
body = response.json()
except ValueError:
body = str(response)
if i > self.max_tries:
raise ValueError(f"Failed to get response with {i} tries: {body}")
self._try_raise(response)
return response
def _try_raise(self, response: Response) -> None:
"""Try to raise an error from a response"""
try:
response.raise_for_status()
except requests.HTTPError as e:
try:
rd = response.json()
except json.JSONDecodeError:
rd = response.__dict__
rd = rd.get("_content", rd)
if isinstance(rd, bytes):
rd = rd.decode("utf-8")[5:] ## lop of data: prefix
try:
rd = json.loads(rd)
except Exception:
rd = {"detail": rd}
title = f"[{rd.get('status', '###')}] {rd.get('title', 'Unknown Error')}"
body = f"{rd.get('detail', rd.get('type', rd))}"
raise Exception(f"{title}\n{body}") from e
####################################################################################
## Simple query interface to show the set of model options
def query(self, invoke_url: str, payload: dict = {}) -> dict:
"""Simple method for an end-to-end get query. Returns result dictionary"""
response, session = self._get(invoke_url, payload)
response = self._wait(response, session)
output = self._process_response(response)[0]
return output
def _process_response(self, response: Union[str, Response]) -> List[dict]:
"""General-purpose response processing for single responses and streams"""
if hasattr(response, "json"): ## For single response (i.e. non-streaming)
try:
return [response.json()]
except json.JSONDecodeError:
response = str(response.__dict__)
if isinstance(response, str): ## For set of responses (i.e. streaming)
msg_list = []
for msg in response.split("\n\n"):
if "{" not in msg:
continue
msg_list += [json.loads(msg[msg.find("{") :])]
return msg_list
raise ValueError(f"Received ill-formed response: {response}")
def get_available_models(self) -> dict:
"""Get a dictionary of available models from the AI Playground API."""
invoke_url = self._stagify("https://api.nvcf.nvidia.com/v2/nvcf/functions")
self.available_functions = self.query(invoke_url)["functions"]
live_fns = [v for v in self.available_functions if v.get("status") == "ACTIVE"]
return {v["name"]: v["id"] for v in live_fns}
def _get_invoke_url(
self, model_name: Optional[str] = None, invoke_url: Optional[str] = None
) -> str:
"""Helper method to get invoke URL from a model name, URL, or endpoint stub"""
if not invoke_url:
if not model_name:
raise ValueError("URL or model name must be specified to invoke")
available_models = self.available_models or self.get_available_models()
if model_name in available_models:
invoke_url = available_models.get(model_name)
else:
for key in sorted(available_models.keys()):
if model_name in key:
invoke_url = available_models[key]
break
if not invoke_url:
raise ValueError(f"Unknown model name {model_name} specified")
if "http" not in invoke_url:
invoke_url = f"{self.call_invoke_base}/{invoke_url}"
return invoke_url
####################################################################################
## Generation interface to allow users to generate new values from endpoints
def get_req_generation(
self,
model_name: Optional[str] = None,
payload: dict = {},
invoke_url: Optional[str] = None,
) -> dict:
"""Method for an end-to-end post query with NVCR post-processing."""
invoke_url = self._get_invoke_url(model_name, invoke_url)
if payload.get("stream", False) is True:
payload = {**payload, "stream": False}
response, session = self._post(invoke_url, payload)
response = self._wait(response, session)
output, _ = self.postprocess(response)
return output
def postprocess(self, response: Union[str, Response]) -> Tuple[dict, bool]:
"""Parses a response from the AI Playground API.
Strongly assumes that the API will return a single response.
"""
msg_list = self._process_response(response)
msg, is_stopped = self._aggregate_msgs(msg_list)
msg, is_stopped = self._early_stop_msg(msg, is_stopped)
return msg, is_stopped
def _aggregate_msgs(self, msg_list: Sequence[dict]) -> Tuple[dict, bool]:
"""Dig out relevant details of aggregated message"""
content_buffer: Dict[str, Any] = dict()
content_holder: Dict[Any, Any] = dict()
is_stopped = False
for msg in msg_list:
self.last_msg = msg
if "choices" in msg:
## Tease out ['choices'][0]...['delta'/'message']
msg = msg.get("choices", [{}])[0]
is_stopped = msg.get("finish_reason", "") == "stop"
msg = msg.get("delta", msg.get("message", {"content": ""}))
elif "data" in msg:
## Tease out ['data'][0]...['embedding']
msg = msg.get("data", [{}])[0]
content_holder = msg
for k, v in msg.items():
if k in ("content",) and k in content_buffer:
content_buffer[k] += v
else:
content_buffer[k] = v
if is_stopped:
break
content_holder = {**content_holder, **content_buffer}
return content_holder, is_stopped
def _early_stop_msg(self, msg: dict, is_stopped: bool) -> Tuple[dict, bool]:
"""Try to early-terminate streaming or generation by iterating over stop list"""
content = msg.get("content", "")
if content and self.stop:
for stop_str in self.stop:
if stop_str and stop_str in content:
msg["content"] = content[: content.find(stop_str) + 1]
is_stopped = True
return msg, is_stopped
####################################################################################
## Streaming interface to allow you to iterate through progressive generations
def get_req_stream(
self,
model: Optional[str] = None,
payload: dict = {},
invoke_url: Optional[str] = None,
) -> Iterator:
invoke_url = self._get_invoke_url(model, invoke_url)
if payload.get("stream", True) is False:
payload = {**payload, "stream": True}
self.last_inputs = dict(
url=invoke_url,
headers=self.headers["stream"],
json=payload,
stream=True,
)
raw_inputs = ClientModel.desecretize(self.last_inputs)
response = self.get_session_fn().post(**raw_inputs)
self.last_response = response
self._try_raise(response)
call = self.copy()
def out_gen() -> Generator[dict, Any, Any]:
## Good for client, since it allows self.last_input
for line in response.iter_lines():
if line and line.strip() != b"data: [DONE]":
line = line.decode("utf-8")
msg, final_line = call.postprocess(line)
yield msg
if final_line:
break
self._try_raise(response)
return (r for r in out_gen())
####################################################################################
## Asynchronous streaming interface to allow multiple generations to happen at once.
async def get_req_astream(
self,
model: Optional[str] = None,
payload: dict = {},
invoke_url: Optional[str] = None,
) -> AsyncIterator:
invoke_url = self._get_invoke_url(model, invoke_url)
if payload.get("stream", True) is False:
payload = {**payload, "stream": True}
self.last_inputs = dict(
url=invoke_url,
headers=self.headers["stream"],
json=payload,
)
async with self.get_asession_fn() as session:
raw_inputs = ClientModel.desecretize(self.last_inputs)
async with session.post(**raw_inputs) as self.last_response:
self._try_raise(self.last_response)
async for line in self.last_response.content.iter_any():
if line and line.strip() != b"data: [DONE]":
line = line.decode("utf-8")
msg, final_line = self.postprocess(line)
yield msg
if final_line:
break
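# Illustrative sketch (added for clarity; not part of the original module): the
# postprocess/_aggregate_msgs path above folds OpenAI-style streaming chunks by
# concatenating the "content" of each choices[0] delta until a "stop" finish_reason.
def _example_delta_aggregation() -> dict:
    chunks = [
        {"choices": [{"delta": {"role": "assistant", "content": "Hello"}}]},
        {"choices": [{"delta": {"content": " world"}}]},
        {"choices": [{"finish_reason": "stop", "delta": {}}]},
    ]
    merged: dict = {}
    for chunk in chunks:
        choice = chunk["choices"][0]
        for key, value in choice.get("delta", {}).items():
            merged[key] = merged.get(key, "") + value if key == "content" else value
        if choice.get("finish_reason") == "stop":
            break
    return merged  # -> {"role": "assistant", "content": "Hello world"}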
class NVAIPlayClient(ClientModel):
"""
Higher-Level Client for interacting with AI Playground API with argument defaults.
Is subclassed by NVAIPlayLLM/NVAIPlayChat to provide a simple LangChain interface.
"""
client: NVCRModel = Field(NVCRModel)
model: str = Field("llama")
labels: dict = Field({})
temperature: float = Field(0.2, le=1.0, gt=0.0)
top_p: float = Field(0.7, le=1.0, ge=0.0)
max_tokens: int = Field(1024, le=1024, ge=32)
streaming: bool = Field(False)
inputs: Any = Field([])
stop: Union[Sequence[str], str] = Field([])
gen_keys: Sequence[str] = Field(["temperature", "top_p", "max_tokens", "streaming"])
arg_keys: Sequence[str] = Field(["inputs", "stop"])
valid_roles: Sequence[str] = Field(["user", "system", "assistant"])
class LabelModel(ClientModel):
creativity: int = Field(0, ge=0, le=9)
complexity: int = Field(0, ge=0, le=9)
verbosity: int = Field(0, ge=0, le=9)
####################################################################################
def __init__(self, *args: Sequence, **kwargs: Any):
super().__init__(*args, **kwargs)
@root_validator()
def validate_model(cls, values: Dict[str, Any]) -> Dict[str, Any]:
values["client"] = values["client"](**values)
if values.get("labels"):
values["labels"] = cls.LabelModel(**values["labels"]).dict()
return values
@classmethod
def is_lc_serializable(cls) -> bool:
return True
@property
def available_models(self) -> List[str]:
"""List the available models that can be invoked"""
return list(getattr(self.client, "available_models", {}).keys())
def get_model_details(self, model: Optional[str] = None) -> dict:
"""Get more meta-details about a model retrieved by a given name"""
if model is None:
model = self.model
model_key = self.client._get_invoke_url(model).split("/")[-1]
known_fns = self.client.available_functions
fn_spec = [f for f in known_fns if f.get("id") == model_key][0]
return fn_spec
def get_generation(self, *args: Sequence, **kwargs: Any) -> dict:
"""Call to client generate method with call scope"""
with self.subscope(*args, **kwargs) as call:
payload = call.get_payload(stream=False)
out = call.client.get_req_generation(call.model, payload=payload)
return out
def get_stream(self, *args: Sequence, **kwargs: Any) -> Iterator:
"""Call to client stream method with call scope"""
with self.subscope(*args, **kwargs) as call:
payload = call.get_payload(stream=True)
out = call.client.get_req_stream(call.model, payload=payload)
return out
def get_astream(self, *args: Sequence, **kwargs: Any) -> AsyncIterator:
"""Call to client astream method with call scope"""
with self.subscope(*args, **kwargs) as call:
payload = call.get_payload(stream=True)
out = call.client.get_req_astream(call.model, payload=payload)
return out
def get_payload(self, *args: Sequence, **kwargs: Any) -> dict:
"""Generates payload for the NVAIPlayClient API to send to service."""
def k_map(k: str) -> str:
return k if k != "streaming" else "stream"
out = {**self.preprocess(), **{k_map(k): self.get(k) for k in self.gen_keys}}
return out
def preprocess(self) -> dict:
"""Prepares a message or list of messages for the payload"""
if (
isinstance(self.inputs, str)
or not hasattr(self.inputs, "__iter__")
or isinstance(self.inputs, BaseMessage)
):
self.inputs = [self.inputs]
messages = [self.prep_msg(m) for m in self.inputs]
labels = self.labels
if labels:
messages += [{"labels": labels, "role": "assistant"}]
return {"messages": messages}
def prep_msg(self, msg: Union[str, dict, BaseMessage]) -> dict:
"""Helper Method: Ensures a message is a dictionary with a role and content."""
if isinstance(msg, str):
return dict(role="user", content=msg)
if isinstance(msg, dict):
if msg.get("role", "") not in self.valid_roles:
raise ValueError(f"Unknown message role \"{msg.get('role', '')}\"")
if msg.get("content", None) is None:
raise ValueError(f"Message {msg} has no content")
return msg
raise ValueError(f"Unknown message received: {msg} of type {type(msg)}")
class NVAIPlayBaseModel(NVAIPlayClient):
"""
Base class for NVIDIA AI Playground models which can interface with NVAIPlayClient.
To be subclassed by NVAIPlayLLM/NVAIPlayChat by combining with LLM/SimpleChatModel.
"""
@property
def _llm_type(self) -> str:
"""Return type of NVIDIA AI Playground Interface."""
return "nvidia_ai_playground"
def _call(
self,
messages: Union[List[BaseMessage], str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManager] = None,
**kwargs: Any,
) -> str:
"""hook for LLM/SimpleChatModel. Allows for easy standard/streaming calls"""
kwargs["labels"] = kwargs.get("labels", self.labels)
kwargs["stop"] = stop if stop else getattr(self.client, "stop")
if kwargs.get("streaming", self.streaming) or kwargs["stop"]:
buffer = ""
for chunk in self._stream(messages, run_manager=run_manager, **kwargs):
buffer += chunk if isinstance(chunk, str) else chunk.text
responses = {"content": buffer}
else:
inputs = self.custom_preprocess(messages)
responses = self.get_generation(inputs, **kwargs)
outputs = self.custom_postprocess(responses)
return outputs
def _get_filled_chunk(
self, text: str, role: Optional[str] = "assistant"
) -> Union[GenerationChunk, ChatGenerationChunk]:
"""LLM and BasicChatModel have different streaming chunk specifications"""
if isinstance(self, LLM):
return GenerationChunk(text=text)
return ChatGenerationChunk(message=ChatMessageChunk(content=text, role=role))
def _stream(
self,
messages: Union[List[BaseMessage], str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManager] = None,
**kwargs: Any,
) -> Iterator[Union[GenerationChunk, ChatGenerationChunk]]:
"""Allows streaming to model!"""
inputs = self.custom_preprocess(messages)
kwargs["labels"] = kwargs.get("labels", self.labels)
kwargs["stop"] = stop if stop else getattr(self.client, "stop")
for response in self.get_stream(inputs, **kwargs):
chunk = self._get_filled_chunk(self.custom_postprocess(response))
yield chunk
if run_manager:
async_mtypes = (AsyncCallbackManager, AsyncCallbackManagerForLLMRun)
if isinstance(run_manager, async_mtypes):
## Edge case from LLM/SimpleChatModel default async methods
asyncio.run(run_manager.on_llm_new_token(chunk.text, chunk=chunk))
else:
run_manager.on_llm_new_token(chunk.text, chunk=chunk)
async def _astream(
self,
messages: Union[List[BaseMessage], str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManager] = None,
**kwargs: Any,
) -> AsyncIterator[Union[GenerationChunk, ChatGenerationChunk]]:
inputs = self.custom_preprocess(messages)
kwargs["labels"] = kwargs.get("labels", self.labels)
kwargs["stop"] = stop if stop else getattr(self.client, "stop")
async for response in self.get_astream(inputs, **kwargs):
chunk = self._get_filled_chunk(self.custom_postprocess(response))
yield chunk
if run_manager:
await run_manager.on_llm_new_token(chunk.text, chunk=chunk)
def custom_preprocess(self, msgs: Union[str, Sequence]) -> List[Dict[str, str]]:
is_one = isinstance(msgs, (str, BaseMessage))
is_list = not is_one and hasattr(msgs, "__iter__")
is_solo = is_list and len(msgs) == 1 and isinstance(msgs[0], (str, BaseMessage))
msg_list: Sequence[Any] = []
if is_one or is_solo:
msg_val: Union[str, BaseMessage] = msgs if not is_list else msgs[0]
msg_str: str = getattr(msg_val, "content", msg_val)
msg_list = re.split("///ROLE ", msg_str.strip())
msg_list = [m for m in msg_list if m.strip()]
elif not is_list:
msg_list = [msgs]
elif is_list:
msg_list = msgs
out = [self.preprocess_msg(m) for m in msg_list]
return out
def preprocess_msg(
self, msg: Union[str, Sequence[str], dict, BaseMessage]
) -> Dict[str, str]:
## Support for just simple string inputs of ///ROLE SYS etc. inputs
if isinstance(msg, str):
msg_split = re.split("SYS: |USER: |AGENT: |CONTEXT:", msg)
if len(msg_split) == 1:
return {"role": "user", "content": msg}
role_convert = {
"agent": "assistant",
"sys": "system",
"context": "context",
}
role, _, content = msg.partition(": ")
role = role_convert.get(role.strip().lower(), "user")
return {"role": role, "content": content}
## Support for tuple inputs
if type(msg) in (list, tuple):
return {"role": msg[0], "content": msg[1]}
## Support for manually-specified default inputs to AI Playground
if isinstance(msg, dict) and msg.get("content"):
msg["role"] = msg.get("role", "user")
return msg
## Support for LangChain Messages
if hasattr(msg, "content"):
role_convert = {"ai": "assistant", "system": "system"}
role = getattr(msg, "type")
cont = getattr(msg, "content")
role = role_convert.get(role, "user")
if hasattr(msg, "role"):
cont = f"{getattr(msg, 'role')}: {cont}"
return {"role": role, "content": cont}
raise ValueError(f"Invalid message: {repr(msg)} of type {type(msg)}")
def custom_postprocess(self, msg: dict) -> str:
if "content" in msg:
return msg["content"]
logger.warning(
f"Got ambiguous message in postprocessing; returning as-is: msg = {msg}"
)
return str(msg)
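# Illustrative example (added for clarity; not part of the original module):
# custom_preprocess above also accepts a single "///ROLE "-delimited string, which
# preprocess_msg turns into role/content dictionaries, e.g.:
_EXAMPLE_ROLE_PROMPT = "///ROLE SYS: You are a terse assistant.///ROLE USER: What is NVCF?"
# -> [{"role": "system", "content": "You are a terse assistant."},
#     {"role": "user", "content": "What is NVCF?"}]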
####################################################################################
class GeneralBase(NVAIPlayBaseModel):
model: str = Field("llama2_13b")
class CodeBase(NVAIPlayBaseModel):
model: str = Field("llama2_code_13b")
class InstructBase(NVAIPlayBaseModel):
model: str = Field("mistral")
class SteerBase(NVAIPlayBaseModel):
model: str = Field("steerlm")
arg_keys: Sequence[str] = Field(["inputs", "labels", "stop"])
labels: dict = Field({"creativity": 0, "complexity": 9, "verbosity": 9})
class ContextBase(NVAIPlayBaseModel):
model: str = Field("_qa_")
valid_roles: Sequence[str] = Field(["user", "context"])
max_tokens: int = Field(512, ge=32, le=512)
class ImageBase(NVAIPlayBaseModel):
model: str = Field("neva")
arg_keys: Sequence[str] = Field(["inputs", "labels", "stop"])
labels: dict = Field({"creativity": 0, "complexity": 9, "verbosity": 9})
####################################################################################
class NVAIPlayLLM(NVAIPlayBaseModel, LLM):
pass
class GeneralLLM(GeneralBase, LLM):
pass
class CodeLLM(CodeBase, LLM):
pass
class InstructLLM(InstructBase, LLM):
pass
class SteerLLM(SteerBase, LLM):
pass
class ContextLLM(ContextBase, LLM):
pass
class ImageLLM(ImageBase, LLM):
pass
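# Minimal usage sketch (added for clarity; not part of the original module). It
# assumes valid NVIDIA AI Playground credentials are configured for NVCRModel.
def _example_general_llm_usage() -> str:
    llm = GeneralLLM(temperature=0.2, max_tokens=256)
    return llm("///ROLE USER: Say hello in one sentence.")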
| [
"f\"{getattr(msg, 'role')}: {cont}",
"application/json"
] |
2024-01-10 | hunter-2e/openai-verbal-chat-bot | chatter.py | #TEXT TO SPEECH
from gtts import gTTS
import os
#RESPONSE AI
import openai
#SPEECH TO TEXT
import whisper
import wave
import pyaudio
model = whisper.load_model("medium")
openai.api_key = "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"
audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=44100, input=True, frames_per_buffer=1024)
frames = []
try:
while True:
data = stream.read(1024)
frames.append(data)
except KeyboardInterrupt:
pass
stream.stop_stream()
stream.close()
audio.terminate()
sound_file = wave.open("human.wav", "wb")
sound_file.setnchannels(1)
sound_file.setsampwidth(audio.get_sample_size(pyaudio.paInt16))
sound_file.setframerate(44100)
sound_file.writeframes(b''.join(frames))
sound_file.close()
prompt = model.transcribe("human.wav")
response = openai.Completion.create(
model="text-davinci-003",
prompt="Why sky blue.",
temperature=0.9,
max_tokens=150,
top_p=1,
frequency_penalty=0.0,
presence_penalty=0.6,
stop=[" Human:", " AI:"]
)
myobj = gTTS(text=response["choices"][0]["text"], lang='en', slow=False)
#SAVE THEN PLAY MP3
myobj.save("bot.mp3")
os.system("bot.mp3")
print(response["choices"][0]["text"]) | [
"human.wav",
"Why sky blue."
] |
2024-01-10 | tomcotter7/gpt-cotts | gpt-cotts-fastapi~orchestrator.py | """Orchestrator module."""
from dotenv import load_dotenv
from openai import OpenAI
from database.pinecone import query_pinecone
from utils import load_config
load_dotenv()
class Orchestrator:
"""Orchestrator class - deals with the interaction between the user and the model.
Attributes:
query_calls (int): Number of times the model has been queried.
prompts (dict[str, str]): Dictionary containing the prompts for the model.
gen_model (str): Name of the model to use for generation.
embedding_model (str): Name of the model to use for embedding.
context (list[dict[str, str]]): List of dictionaries containing the context of the conversation.
client (OpenAI): OpenAI client.
"""
def __init__(self): # noqa: D107
self.query_calls = 0
config = load_config()
self.prompts = config["prompts"]
self.gen_model = config["gen_model"]
self.embedding_model = config["embedding_model"]
self.context = [{"role": "system", "content": self.prompts["regular"]}]
self.client = OpenAI()
def clear_context(self) -> None:
"""Clears the context and resets the query calls."""
self.query_calls = 0
self.context = [{"role": "system", "content": self.prompts["regular"]}]
def reduce_context_size(self) -> None:
"""Reduces the context size to 3 pairs of messages."""
self.query_calls = 3
self.context = self.context[-6:]
def build_rag_input_prompt(self, input_query: str, details: dict) -> str:
"""Builds the input prompt for RAG.
Args:
input_query: The query to be used.
details: dictionary containing the index and namespace to query.
Returns:
The query to be used for RAG, with the context prepended.
"""
vector_db_result = query_pinecone(
input_query, details["index"], self.embedding_model, details["namespace"]
)
chunks = []
for chunk in vector_db_result["matches"]:
chunks.append(chunk["metadata"]["doc"])
chunks = "\n".join(chunks)
return f"Potential Context: {chunks} ### Question: {input_query}"
def query(self, input_query: str, use_rag: bool, details: dict = {}) -> str:
"""Queries the model and returns the response.
Args:
input_query: The query to be used.
use_rag: Whether to use RAG or not.
details: dictionary containing the index and namespace to query.
Only used if use_rag is True.
Returns:
The response from the model.
"""
if self.query_calls > 3:
self.reduce_context_size()
if use_rag:
self.context[0] = {"role": "system", "content": self.prompts["rag"]}
input_query = self.build_rag_input_prompt(input_query, details)
else:
self.context[0] = {"role": "system", "content": self.prompts["regular"]}
self.context.append({"role": "user", "content": input_query})
self.query_calls += 1
stream = self.client.chat.completions.create(
messages=self.context,
model=self.gen_model,
stream=True,
)
self.context.append({"role": "assistant", "content": ""})
for chunk in stream:
if chunk.choices[0].delta.content is not None:
self.context[-1]["content"] += chunk.choices[0].delta.content
yield chunk.choices[0].delta.content
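# Minimal usage sketch (added for clarity; not part of the original module): `query`
# is a generator, so callers stream tokens as they arrive. Assumes OPENAI_API_KEY
# and the config file expected by load_config() are available.
def _example_stream_answer(question: str) -> str:
    orchestrator = Orchestrator()
    return "".join(orchestrator.query(question, use_rag=False))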
| [] |
2024-01-10 | RiemaruKarurosu/ZeroTier-GTK | .flatpak-builder~cache~objects~d5~e60916a82e67260cd2182d270904a27e01b93c7f51c3a8e8d730fae37c967a.file | import importlib
from codecs import IncrementalDecoder
from collections import Counter
from functools import lru_cache
from typing import Counter as TypeCounter, Dict, List, Optional, Tuple
from .assets import FREQUENCIES
from .constant import KO_NAMES, LANGUAGE_SUPPORTED_COUNT, TOO_SMALL_SEQUENCE, ZH_NAMES
from .md import is_suspiciously_successive_range
from .models import CoherenceMatches
from .utils import (
is_accentuated,
is_latin,
is_multi_byte_encoding,
is_unicode_range_secondary,
unicode_range,
)
def encoding_unicode_range(iana_name: str) -> List[str]:
"""
Return associated unicode ranges in a single byte code page.
"""
if is_multi_byte_encoding(iana_name):
raise IOError("Function not supported on multi-byte code page")
decoder = importlib.import_module(
"encodings.{}".format(iana_name)
).IncrementalDecoder
p: IncrementalDecoder = decoder(errors="ignore")
seen_ranges: Dict[str, int] = {}
character_count: int = 0
for i in range(0x40, 0xFF):
chunk: str = p.decode(bytes([i]))
if chunk:
character_range: Optional[str] = unicode_range(chunk)
if character_range is None:
continue
if is_unicode_range_secondary(character_range) is False:
if character_range not in seen_ranges:
seen_ranges[character_range] = 0
seen_ranges[character_range] += 1
character_count += 1
return sorted(
[
character_range
for character_range in seen_ranges
if seen_ranges[character_range] / character_count >= 0.15
]
)
def unicode_range_languages(primary_range: str) -> List[str]:
"""
Return inferred languages used with a unicode range.
"""
languages: List[str] = []
for language, characters in FREQUENCIES.items():
for character in characters:
if unicode_range(character) == primary_range:
languages.append(language)
break
return languages
@lru_cache()
def encoding_languages(iana_name: str) -> List[str]:
"""
Single-byte encoding language association. Some code page are heavily linked to particular language(s).
This function does the correspondence.
"""
unicode_ranges: List[str] = encoding_unicode_range(iana_name)
primary_range: Optional[str] = None
for specified_range in unicode_ranges:
if "Latin" not in specified_range:
primary_range = specified_range
break
if primary_range is None:
return ["Latin Based"]
return unicode_range_languages(primary_range)
@lru_cache()
def mb_encoding_languages(iana_name: str) -> List[str]:
"""
Multi-byte encoding language association. Some code page are heavily linked to particular language(s).
This function does the correspondence.
"""
if (
iana_name.startswith("shift_")
or iana_name.startswith("iso2022_jp")
or iana_name.startswith("euc_j")
or iana_name == "cp932"
):
return ["Japanese"]
if iana_name.startswith("gb") or iana_name in ZH_NAMES:
return ["Chinese"]
if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES:
return ["Korean"]
return []
@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT)
def get_target_features(language: str) -> Tuple[bool, bool]:
"""
Determine main aspects from a supported language if it contains accents and if is pure Latin.
"""
target_have_accents: bool = False
target_pure_latin: bool = True
for character in FREQUENCIES[language]:
if not target_have_accents and is_accentuated(character):
target_have_accents = True
if target_pure_latin and is_latin(character) is False:
target_pure_latin = False
return target_have_accents, target_pure_latin
def alphabet_languages(
characters: List[str], ignore_non_latin: bool = False
) -> List[str]:
"""
Return associated languages associated to given characters.
"""
languages: List[Tuple[str, float]] = []
source_have_accents = any(is_accentuated(character) for character in characters)
for language, language_characters in FREQUENCIES.items():
target_have_accents, target_pure_latin = get_target_features(language)
if ignore_non_latin and target_pure_latin is False:
continue
if target_have_accents is False and source_have_accents:
continue
character_count: int = len(language_characters)
character_match_count: int = len(
[c for c in language_characters if c in characters]
)
ratio: float = character_match_count / character_count
if ratio >= 0.2:
languages.append((language, ratio))
languages = sorted(languages, key=lambda x: x[1], reverse=True)
return [compatible_language[0] for compatible_language in languages]
def characters_popularity_compare(
language: str, ordered_characters: List[str]
) -> float:
"""
Determine if a ordered characters list (by occurrence from most appearance to rarest) match a particular language.
The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit).
Beware that is function is not strict on the match in order to ease the detection. (Meaning close match is 1.)
"""
if language not in FREQUENCIES:
raise ValueError("{} not available".format(language))
character_approved_count: int = 0
FREQUENCIES_language_set = set(FREQUENCIES[language])
ordered_characters_count: int = len(ordered_characters)
target_language_characters_count: int = len(FREQUENCIES[language])
large_alphabet: bool = target_language_characters_count > 26
for character, character_rank in zip(
ordered_characters, range(0, ordered_characters_count)
):
if character not in FREQUENCIES_language_set:
continue
character_rank_in_language: int = FREQUENCIES[language].index(character)
expected_projection_ratio: float = (
target_language_characters_count / ordered_characters_count
)
character_rank_projection: int = int(character_rank * expected_projection_ratio)
if (
large_alphabet is False
and abs(character_rank_projection - character_rank_in_language) > 4
):
continue
if (
large_alphabet is True
and abs(character_rank_projection - character_rank_in_language)
< target_language_characters_count / 3
):
character_approved_count += 1
continue
characters_before_source: List[str] = FREQUENCIES[language][
0:character_rank_in_language
]
characters_after_source: List[str] = FREQUENCIES[language][
character_rank_in_language:
]
characters_before: List[str] = ordered_characters[0:character_rank]
characters_after: List[str] = ordered_characters[character_rank:]
before_match_count: int = len(
set(characters_before) & set(characters_before_source)
)
after_match_count: int = len(
set(characters_after) & set(characters_after_source)
)
if len(characters_before_source) == 0 and before_match_count <= 4:
character_approved_count += 1
continue
if len(characters_after_source) == 0 and after_match_count <= 4:
character_approved_count += 1
continue
if (
before_match_count / len(characters_before_source) >= 0.4
or after_match_count / len(characters_after_source) >= 0.4
):
character_approved_count += 1
continue
return character_approved_count / len(ordered_characters)
def alpha_unicode_split(decoded_sequence: str) -> List[str]:
"""
Given a decoded text sequence, return a list of str. Unicode range / alphabet separation.
Ex. a text containing English/Latin with a bit a Hebrew will return two items in the resulting list;
One containing the latin letters and the other hebrew.
"""
layers: Dict[str, str] = {}
for character in decoded_sequence:
if character.isalpha() is False:
continue
character_range: Optional[str] = unicode_range(character)
if character_range is None:
continue
layer_target_range: Optional[str] = None
for discovered_range in layers:
if (
is_suspiciously_successive_range(discovered_range, character_range)
is False
):
layer_target_range = discovered_range
break
if layer_target_range is None:
layer_target_range = character_range
if layer_target_range not in layers:
layers[layer_target_range] = character.lower()
continue
layers[layer_target_range] += character.lower()
return list(layers.values())
def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches:
"""
This function merge results previously given by the function coherence_ratio.
The return type is the same as coherence_ratio.
"""
per_language_ratios: Dict[str, List[float]] = {}
for result in results:
for sub_result in result:
language, ratio = sub_result
if language not in per_language_ratios:
per_language_ratios[language] = [ratio]
continue
per_language_ratios[language].append(ratio)
merge = [
(
language,
round(
sum(per_language_ratios[language]) / len(per_language_ratios[language]),
4,
),
)
for language in per_language_ratios
]
return sorted(merge, key=lambda x: x[1], reverse=True)
def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches:
"""
We shall NOT return "English—" in CoherenceMatches because it is an alternative
of "English". This function only keeps the best match and remove the em-dash in it.
"""
index_results: Dict[str, List[float]] = dict()
for result in results:
language, ratio = result
no_em_name: str = language.replace("—", "")
if no_em_name not in index_results:
index_results[no_em_name] = []
index_results[no_em_name].append(ratio)
if any(len(index_results[e]) > 1 for e in index_results):
filtered_results: CoherenceMatches = []
for language in index_results:
filtered_results.append((language, max(index_results[language])))
return filtered_results
return results
@lru_cache(maxsize=2048)
def coherence_ratio(
decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None
) -> CoherenceMatches:
"""
Detect ANY language that can be identified in given sequence. The sequence will be analysed by layers.
A layer = Character extraction by alphabets/ranges.
"""
results: List[Tuple[str, float]] = []
ignore_non_latin: bool = False
sufficient_match_count: int = 0
lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else []
if "Latin Based" in lg_inclusion_list:
ignore_non_latin = True
lg_inclusion_list.remove("Latin Based")
for layer in alpha_unicode_split(decoded_sequence):
sequence_frequencies: TypeCounter[str] = Counter(layer)
most_common = sequence_frequencies.most_common()
character_count: int = sum(o for c, o in most_common)
if character_count <= TOO_SMALL_SEQUENCE:
continue
popular_character_ordered: List[str] = [c for c, o in most_common]
for language in lg_inclusion_list or alphabet_languages(
popular_character_ordered, ignore_non_latin
):
ratio: float = characters_popularity_compare(
language, popular_character_ordered
)
if ratio < threshold:
continue
elif ratio >= 0.8:
sufficient_match_count += 1
results.append((language, round(ratio, 4)))
if sufficient_match_count >= 3:
break
return sorted(
filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True
)
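# Illustrative usage (added for clarity; not part of the original module):
# coherence_ratio returns (language, ratio) pairs sorted by confidence for the
# alphabets detected in the given text.
def _example_coherence() -> CoherenceMatches:
    sample = "Bonjour, ceci est un petit texte en francais pour la demonstration."
    return coherence_ratio(sample)  # e.g. [("French", 0.44), ...]; exact ratios vary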
| [] |
2024-01-10 | piyushmishra908/langchain | libs~langchain~langchain~document_loaders~parsers~language~cobol.py | import re
from typing import Callable, List
from langchain.document_loaders.parsers.language.code_segmenter import CodeSegmenter
class CobolSegmenter(CodeSegmenter):
"""Code segmenter for `COBOL`."""
PARAGRAPH_PATTERN = re.compile(r"^[A-Z0-9\-]+(\s+.*)?\.$", re.IGNORECASE)
DIVISION_PATTERN = re.compile(
r"^\s*(IDENTIFICATION|DATA|PROCEDURE|ENVIRONMENT)\s+DIVISION.*$", re.IGNORECASE
)
SECTION_PATTERN = re.compile(r"^\s*[A-Z0-9\-]+\s+SECTION.$", re.IGNORECASE)
def __init__(self, code: str):
super().__init__(code)
self.source_lines: List[str] = self.code.splitlines()
def is_valid(self) -> bool:
# Identify presence of any division to validate COBOL code
return any(self.DIVISION_PATTERN.match(line) for line in self.source_lines)
def _extract_code(self, start_idx: int, end_idx: int) -> str:
return "\n".join(self.source_lines[start_idx:end_idx]).rstrip("\n")
def _is_relevant_code(self, line: str) -> bool:
"""Check if a line is part of the procedure division or a relevant section."""
if "PROCEDURE DIVISION" in line.upper():
return True
# Add additional conditions for relevant sections if needed
return False
def _process_lines(self, func: Callable) -> List[str]:
"""A generic function to process COBOL lines based on provided func."""
elements: List[str] = []
start_idx = None
inside_relevant_section = False
for i, line in enumerate(self.source_lines):
if self._is_relevant_code(line):
inside_relevant_section = True
if inside_relevant_section and (
self.PARAGRAPH_PATTERN.match(line.strip().split(" ")[0])
or self.SECTION_PATTERN.match(line.strip())
):
if start_idx is not None:
func(elements, start_idx, i)
start_idx = i
# Handle the last element if exists
if start_idx is not None:
func(elements, start_idx, len(self.source_lines))
return elements
def extract_functions_classes(self) -> List[str]:
def extract_func(elements: List[str], start_idx: int, end_idx: int) -> None:
elements.append(self._extract_code(start_idx, end_idx))
return self._process_lines(extract_func)
def simplify_code(self) -> str:
simplified_lines: List[str] = []
inside_relevant_section = False
omitted_code_added = (
False # To track if "* OMITTED CODE *" has been added after the last header
)
for line in self.source_lines:
is_header = (
"PROCEDURE DIVISION" in line
or "DATA DIVISION" in line
or "IDENTIFICATION DIVISION" in line
or self.PARAGRAPH_PATTERN.match(line.strip().split(" ")[0])
or self.SECTION_PATTERN.match(line.strip())
)
if is_header:
inside_relevant_section = True
# Reset the flag since we're entering a new section/division or
# paragraph
omitted_code_added = False
if inside_relevant_section:
if is_header:
# Add header and reset the omitted code added flag
simplified_lines.append(line)
elif not omitted_code_added:
# Add omitted code comment only if it hasn't been added directly
# after the last header
simplified_lines.append("* OMITTED CODE *")
omitted_code_added = True
return "\n".join(simplified_lines)
| [] |
2024-01-10 | piyushmishra908/langchain | templates~rag-timescale-hybrid-search-time~rag_timescale_hybrid_search_time~load_sample_dataset.py | import os
import tempfile
from datetime import datetime, timedelta
import requests
from langchain.document_loaders import JSONLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.timescalevector import TimescaleVector
from timescale_vector import client
def parse_date(date_string: str) -> datetime:
if date_string is None:
return None
time_format = "%a %b %d %H:%M:%S %Y %z"
return datetime.strptime(date_string, time_format)
def extract_metadata(record: dict, metadata: dict) -> dict:
dt = parse_date(record["date"])
metadata["id"] = str(client.uuid_from_time(dt))
if dt is not None:
metadata["date"] = dt.isoformat()
else:
metadata["date"] = None
metadata["author"] = record["author"]
metadata["commit_hash"] = record["commit"]
return metadata
def load_ts_git_dataset(
service_url,
collection_name="timescale_commits",
num_records: int = 500,
partition_interval=timedelta(days=7),
):
json_url = "https://s3.amazonaws.com/assets.timescale.com/ai/ts_git_log.json"
tmp_file = "ts_git_log.json"
temp_dir = tempfile.gettempdir()
json_file_path = os.path.join(temp_dir, tmp_file)
if not os.path.exists(json_file_path):
response = requests.get(json_url)
if response.status_code == 200:
with open(json_file_path, "w") as json_file:
json_file.write(response.text)
else:
print(f"Failed to download JSON file. Status code: {response.status_code}")
loader = JSONLoader(
file_path=json_file_path,
jq_schema=".commit_history[]",
text_content=False,
metadata_func=extract_metadata,
)
documents = loader.load()
# Remove documents with None dates
documents = [doc for doc in documents if doc.metadata["date"] is not None]
if num_records > 0:
documents = documents[:num_records]
# Split the documents into chunks for embedding
text_splitter = CharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200,
)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
# Create a Timescale Vector instance from the collection of documents
TimescaleVector.from_documents(
embedding=embeddings,
ids=[doc.metadata["id"] for doc in docs],
documents=docs,
collection_name=collection_name,
service_url=service_url,
time_partition_interval=partition_interval,
)
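# Minimal usage sketch (added for clarity; not part of the original module):
# TIMESCALE_SERVICE_URL is a placeholder environment variable for your own
# Timescale service URI.
if __name__ == "__main__":
    service_url = os.environ.get("TIMESCALE_SERVICE_URL", "")
    if service_url:
        load_ts_git_dataset(service_url, collection_name="timescale_commits", num_records=50)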
| [] |
2024-01-10 | dzmitryashkinadze/gpt-fhir | gpt_fhir~gpt_fhir~llmExtractor.py | import datetime
from openai import OpenAI
class LLMExtractor:
"""
This class is responsible for running the FHIR resource extraction using OpenAI's LLM model.
"""
def __init__(self, config, fhir_tools):
# copy config
self.config = config
# copy functions
self.fhir_tools = fhir_tools
# set up openai client
self.client = OpenAI(api_key=config["OPENAI"]["API_KEY"])
def extract(self, text):
"""run the LLM model on the text"""
# create initial conversation
messages = [
{
"role": "system",
"content": self.config["GENAI"]["SYSTEM_PROMPT"].format(
date=datetime.datetime.now().strftime("%Y-%m-%d")
),
},
{
"role": "user",
"content": text,
},
]
# initial llm request
response = self.client.chat.completions.create(
model=self.config["OPENAI"]["MODEL"],
messages=messages,
tools=self.fhir_tools.tools,
tool_choice="auto",
)
response_message = response.choices[0].message
tool_calls = response_message.tool_calls
# check if the model wanted to call a function
if tool_calls:
# extend conversation with assistant's reply
messages.append(response_message)
# apply all function calls
for tool_call in tool_calls:
# run the function call
function_response = self.fhir_tools.run(tool_call)
# extend conversation with function response
messages.append(
{
"tool_call_id": tool_call.id,
"role": "tool",
"name": tool_call.function.name,
"content": function_response,
}
)
# send the conversation back to the model
second_response = self.client.chat.completions.create(
model=self.config["OPENAI"]["MODEL"],
messages=messages,
)
return second_response
else:
return response
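# Minimal usage sketch (added for clarity; not part of the original module):
# `config` and `fhir_tools` are placeholders for the application's real
# configuration mapping and FHIR tool registry.
def _example_extract(config: dict, fhir_tools) -> str:
    extractor = LLMExtractor(config, fhir_tools)
    completion = extractor.extract("Patient reports a temperature of 38.5 C today.")
    return completion.choices[0].message.content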
| [] |
2024-01-10 | aidanmorgan/rag-theonion | mistral~mistral.py | from ctransformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer
import psycopg2
from pgvector.psycopg2 import register_vector
from functools import partial
from langchain.schema import StrOutputParser
from langchain_core.prompts import format_document
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.chains.combine_documents.reduce import ReduceDocumentsChain
from langchain.llms import Ollama
from langchain.schema.document import Document
from langchain.callbacks.manager import Callbacks
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import PromptTemplate
from langchain.chains.summarize import load_summarize_chain
# maximum number of onion articles to load
__ARTICLE_LIMIT__:int = 5
# model to use for searching the pgvector database
article_model = SentenceTransformer('all-MiniLM-L6-v2')
# connection to the pgvector database for finding appropriate articles
conn = psycopg2.connect(database="rag_scratch",
host="localhost",
user="postgres",
port="5432")
register_vector(conn)
map_template = """
You will be given an article with a title and a body.
The article title will be enclosed in double backticks (``)
The article body will be enclosed in triple backticks (```)
Extract the most relevant facts and summarise them for answering questions about the article.
The facts should be presented as a set of bullet points.
```{text}```
SUMMARY:
"""
reduce_template = """
You will be given article summaries that contain facts that are relevant to answering questions.
The article summaries will be enclosed in triple backticks (```).
The question from the user will be enclosed in double backticks (``)
Using only the information supplied in the article summaries, answer the users question.
Your answer should be less than 300 words.
The style of the answer should be humourous and match the hunour of the original articles.
Summaries:
```{text}```
Question:
``{user_query}``
Answer:
"""
template = """
You will be given {article_count} articles which will be enclosed in triple backticks (```).
The title of the article will be enclosed by <title> and </title> tags.
The body of the article will be enclosed by <body> and </body> tags.
You will also be provided a question enclosed in double backticks(``).
Using only the article information supplied, provide an answer to the question in as much detail as possible.
Your answer should be less than 300 words.
Your answer humour should be considered acerbic or black humour and use a similar style of humour to the provided articles.
Do not summarise the articles, create a new article.
Articles:
```{text}```
Question:
``{creative_prompt}``
Answer:
"""
# swapping to using Ollama as it's a hell of a lot easier to get running than using
# mistral directly on my mac, running a custom Ollama model
llm = Ollama(
model="onion"
)
print("""
,----..
,---, ,-. ___ ,---, / / \
' .' \ ,--/ /| ,--.'|_ ,--.' | / . : ,--,
/ ; '. ,--. :/ | | | :,' | | : . / ;. \ ,---, ,--.'| ,---. ,---,
: : \ .--.--. : : ' / : : ' : : : : . ; / ` ; ,-+-. / || |, ' ,'\ ,-+-. / |
: | /\ \ / / ' | ' / .;__,' / : | |,--. ,---. ; | ; \ ; | ,--.'|' |`--'_ / / | ,--.'|' |
| : ' ;. :| : /`./ ' | : | | | | : ' | / \ | : | ; | '| | ,"' |,' ,'| . ; ,. :| | ,"' |
| | ;/ \ \ : ;_ | | \ :__,'| : | | /' : / / | . | ' ' ' :| | / | |' | | ' | |: :| | / | |
' : | \ \ ,'\ \ `. ' : |. \ ' : |__ ' : | | |. ' / | ' ; \; / || | | | || | : ' | .; :| | | | |
| | ' '--' `----. \| | ' \ \ | | '.'|| | ' | :' ; /| \ \ ', / | | | |/ ' : |_| : || | | |/
| : : / /`--' /' : |--' ; : ;| : :_:,'' | / | ; : / | | |--' | | '.'\ \ / | | |--'
| | ,' '--'. / ; |,' | , / | | ,' | : | \ \ .' | |/ ; : ;`----' | |/
`--'' `--'---' '--' ---`-' `--'' \ \ / `---` '---' | , / '---'
`----' ---`-'
""")
def main():
creative_prompt = ""
while(creative_prompt != "q!"):
print()
creative_prompt = input("Enter a theme (or q! to quit): ")
if(creative_prompt is None or len(creative_prompt) == 0):
continue
if(creative_prompt == "q!"):
return
cur = conn.cursor()
embedded = article_model.encode(creative_prompt)
# search the vector database for themes that match
cur.execute(f"SELECT title, body FROM onion_articles ORDER BY embedding <-> %s LIMIT %s;", (embedded,__ARTICLE_LIMIT__))
results = cur.fetchall()
if(len(results) == 0):
print("Couldn't find any matching articles for inspiration")
continue
# prompt = PromptTemplate(template=template, input_variables=["article_count", "text", "creative_prompt"])
# docs = [Document(page_content=f"<title>{t[0]}</title><body>{t[1]}</body>") for t in results]
# llm_chain = LLMChain(prompt=prompt, llm=llm)
# answer = llm_chain.run({
# "article_count": __ARTICLE_LIMIT__,
# "text": docs,
# "creative_prompt": creative_prompt,
# })
# print(answer)
chain = load_summarize_chain(
llm,
chain_type="map_reduce",
map_prompt=PromptTemplate(template=map_template, input_variables=["text"]),
combine_prompt=PromptTemplate(template=reduce_template, input_variables=["text", "user_query"]),
)
docs = [Document(page_content=f"{t[1]}", metadata={"title": f"t[0]"}) for t in results]
out = chain.run({
'input_documents': docs,
'user_query': creative_prompt,
})
print(out)
if __name__ == '__main__':
main() | [
"Enter a theme (or q! to quit): ",
"\n You will be given article summaries that contain facts that are relevant to answering questions.\n The article summaries will be enclosed in triple backticks (```).\n The question from the user will be enclosed in double backticks (``)\n Using only the information supplied in the article summaries, answer the users question.\n Your answer should be less than 300 words.\n The style of the answer should be humourous and match the hunour of the original articles.\n\n Summaries:\n ```{text}```\n\n Question:\n ``{user_query}``\n\n\n Answer:\n ",
"\n You will be given {article_count} articles which will be enclosed in triple backticks (```). \n The title of the article will be enclosed by <title> and </title> tags.\n The body of the article will be enclosed by <body> and </body> tags. \n You will also be provided a question enclosed in double backticks(``).\n Using only the article information supplied, provide an answer to the question in as much detail as possible.\n Your answer should be less than 300 words.\n Your answer humour should be considered acerbic or black humour and use a similar style of humour to the provided articles.\n Do not summarise the articles, create a new article.\n\n Articles:\n ```{text}```\n\n\n Question:\n ``{creative_prompt}``\n\n\n Answer:\n ",
"\n You will be given an article with a title and a body.\n The article title will be enclosed in double backticks (``)\n The article body will be enclosed in triple backticks (```)\n Extract the most relevant facts and summarise them for answering questions about the article.\n The facts should be presented as a set of bullet points.\n\n ```{text}```\n\n SUMMARY:\n "
] |
2024-01-10 | EGAdams/chrome-meta-gpt | metagpt~tools~ut_writer.py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
from pathlib import Path
from metagpt.provider.openai_api import OpenAIGPTAPI as GPTAPI
ICL_SAMPLE = '''接口定义:
```text
接口名称:元素打标签
接口路径:/projects/{project_key}/node-tags
Method:POST
请求参数:
路径参数:
project_key
Body参数:
名称 类型 是否必须 默认值 备注
nodes array 是 节点
node_key string 否 节点key
tags array 否 节点原标签列表
node_type string 否 节点类型 DATASET / RECIPE
operations array 是
tags array 否 操作标签列表
mode string 否 操作类型 ADD / DELETE
返回数据:
名称 类型 是否必须 默认值 备注
code integer 是 状态码
msg string 是 提示信息
data object 是 返回数据
list array 否 node列表 true / false
node_type string 否 节点类型 DATASET / RECIPE
node_key string 否 节点key
```
单元测试:
```swift
@pytest.mark.parametrize(
"project_key, nodes, operations, expected_msg",
[
("project_key", [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "success"),
("project_key", [{"node_key": "dataset_002", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["tag1"], "mode": "DELETE"}], "success"),
("", [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "缺少必要的参数 project_key"),
(123, [{"node_key": "dataset_001", "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "参数类型不正确"),
("project_key", [{"node_key": "a"*201, "tags": ["tag1", "tag2"], "node_type": "DATASET"}], [{"tags": ["new_tag1"], "mode": "ADD"}], "请求参数超出字段边界")
]
)
def test_node_tags(project_key, nodes, operations, expected_msg):
pass
```
以上是一个 接口定义 与 单元测试 样例。
接下来,请你扮演一个Google 20年经验的专家测试经理,在我给出 接口定义 后,回复我单元测试。有几个要求
1. 只输出一个 `@pytest.mark.parametrize` 与对应的test_<接口名>函数(内部pass,不实现)
-- 函数参数中包含expected_msg,用于结果校验
2. 生成的测试用例使用较短的文本或数字,并且尽量紧凑
3. 如果需要注释,使用中文
如果你明白了,请等待我给出接口定义,并只回答"明白",以节省token
'''
ACT_PROMPT_PREFIX = '''参考测试类型:如缺少请求参数,字段边界校验,字段类型不正确
请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例
```text
'''
YFT_PROMPT_PREFIX = '''参考测试类型:如SQL注入,跨站点脚本(XSS),非法访问和越权访问,认证和授权,参数验证,异常处理,文件上传和下载
请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例
```text
'''
OCR_API_DOC = '''```text
接口名称:OCR识别
接口路径:/api/v1/contract/treaty/task/ocr
Method:POST
请求参数:
路径参数:
Body参数:
名称 类型 是否必须 默认值 备注
file_id string 是
box array 是
contract_id number 是 合同id
start_time string 否 yyyy-mm-dd
end_time string 否 yyyy-mm-dd
extract_type number 否 识别类型 1-导入中 2-导入后 默认1
返回数据:
名称 类型 是否必须 默认值 备注
code integer 是
message string 是
data object 是
```
'''
class UTGenerator:
"""UT生成器:通过API文档构造UT"""
def __init__(self, swagger_file: str, ut_py_path: str, questions_path: str,
chatgpt_method: str = "API", template_prefix=YFT_PROMPT_PREFIX) -> None:
"""初始化UT生成器
Args:
swagger_file: swagger路径
ut_py_path: 用例存放路径
questions_path: 模版存放路径,便于后续排查
chatgpt_method: API
template_prefix: 使用模版,默认使用YFT_UT_PROMPT
"""
self.swagger_file = swagger_file
self.ut_py_path = ut_py_path
self.questions_path = questions_path
        assert chatgpt_method in ["API"], "Invalid chatgpt_method"
self.chatgpt_method = chatgpt_method
        # ICL: In-Context Learning; provide an example here and ask GPT to imitate it
self.icl_sample = ICL_SAMPLE
self.template_prefix = template_prefix
def get_swagger_json(self) -> dict:
"""从本地文件加载Swagger JSON"""
with open(self.swagger_file, "r", encoding="utf-8") as file:
swagger_json = json.load(file)
return swagger_json
def __para_to_str(self, prop, required, name=""):
name = name or prop["name"]
ptype = prop["type"]
title = prop.get("title", "")
desc = prop.get("description", "")
return f'{name}\t{ptype}\t{"是" if required else "否"}\t{title}\t{desc}'
def _para_to_str(self, prop):
required = prop.get("required", False)
return self.__para_to_str(prop, required)
def para_to_str(self, name, prop, prop_object_required):
required = name in prop_object_required
return self.__para_to_str(prop, required, name)
def build_object_properties(self, node, prop_object_required, level: int = 0) -> str:
"""递归输出object和array[object]类型的子属性
Args:
node (_type_): 子项的值
prop_object_required (_type_): 是否必填项
level: 当前递归深度
"""
doc = ""
def dive_into_object(node):
"""如果是object类型,递归输出子属性"""
if node.get("type") == "object":
sub_properties = node.get("properties", {})
return self.build_object_properties(sub_properties, prop_object_required, level=level + 1)
return ""
if node.get("in", "") in ["query", "header", "formData"]:
doc += f'{" " * level}{self._para_to_str(node)}\n'
doc += dive_into_object(node)
return doc
for name, prop in node.items():
doc += f'{" " * level}{self.para_to_str(name, prop, prop_object_required)}\n'
doc += dive_into_object(prop)
if prop["type"] == "array":
items = prop.get("items", {})
doc += dive_into_object(items)
return doc
def get_tags_mapping(self) -> dict:
"""处理tag与path
Returns:
Dict: tag: path对应关系
"""
swagger_data = self.get_swagger_json()
paths = swagger_data["paths"]
tags = {}
for path, path_obj in paths.items():
for method, method_obj in path_obj.items():
for tag in method_obj["tags"]:
if tag not in tags:
tags[tag] = {}
if path not in tags[tag]:
tags[tag][path] = {}
tags[tag][path][method] = method_obj
return tags
def generate_ut(self, include_tags) -> bool:
"""生成用例文件"""
tags = self.get_tags_mapping()
for tag, paths in tags.items():
if include_tags is None or tag in include_tags:
self._generate_ut(tag, paths)
return True
def build_api_doc(self, node: dict, path: str, method: str) -> str:
summary = node["summary"]
doc = f"接口名称:{summary}\n接口路径:{path}\nMethod:{method.upper()}\n"
doc += "\n请求参数:\n"
if "parameters" in node:
parameters = node["parameters"]
doc += "路径参数:\n"
# param["in"]: path / formData / body / query / header
for param in parameters:
if param["in"] == "path":
doc += f'{param["name"]} \n'
doc += "\nBody参数:\n"
doc += "名称\t类型\t是否必须\t默认值\t备注\n"
for param in parameters:
if param["in"] == "body":
schema = param.get("schema", {})
prop_properties = schema.get("properties", {})
prop_required = schema.get("required", [])
doc += self.build_object_properties(prop_properties, prop_required)
else:
doc += self.build_object_properties(param, [])
        # Emit the response data section
doc += "\n返回数据:\n"
doc += "名称\t类型\t是否必须\t默认值\t备注\n"
responses = node["responses"]
response = responses.get("200", {})
schema = response.get("schema", {})
properties = schema.get("properties", {})
required = schema.get("required", {})
doc += self.build_object_properties(properties, required)
doc += "\n"
doc += "```"
return doc
def _store(self, data, base, folder, fname):
file_path = self.get_file_path(Path(base) / folder, fname)
with open(file_path, "w", encoding="utf-8") as file:
file.write(data)
def ask_gpt_and_save(self, question: str, tag: str, fname: str):
"""生成问题,并且存储问题与答案"""
messages = [self.icl_sample, question]
result = self.gpt_msgs_to_code(messages=messages)
self._store(question, self.questions_path, tag, f"{fname}.txt")
self._store(result, self.ut_py_path, tag, f"{fname}.py")
def _generate_ut(self, tag, paths):
"""处理数据路径下的结构
Args:
tag (_type_): 模块名称
paths (_type_): 路径Object
"""
for path, path_obj in paths.items():
for method, node in path_obj.items():
summary = node["summary"]
question = self.template_prefix
question += self.build_api_doc(node, path, method)
self.ask_gpt_and_save(question, tag, summary)
def gpt_msgs_to_code(self, messages: list) -> str:
"""根据不同调用方式选择"""
result = ''
if self.chatgpt_method == "API":
result = GPTAPI().ask_code(msgs=messages)
return result
def get_file_path(self, base: Path, fname: str):
"""保存不同的文件路径
Args:
base (str): 路径
fname (str): 文件名称
"""
path = Path(base)
path.mkdir(parents=True, exist_ok=True)
file_path = path / fname
return str(file_path)
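# Minimal usage sketch (added for clarity; not part of the original module): the
# paths below are placeholders for a real Swagger export and output directories.
def _example_generate_tests() -> bool:
    generator = UTGenerator(
        swagger_file="./docs/swagger.json",
        ut_py_path="./tests/generated",
        questions_path="./tests/questions",
        template_prefix=YFT_PROMPT_PREFIX,
    )
    return generator.generate_ut(include_tags=["contract"])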
| [
"参考测试类型:如SQL注入,跨站点脚本(XSS),非法访问和越权访问,认证和授权,参数验证,异常处理,文件上传和下载\n请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例\n```text\n",
"参考测试类型:如缺少请求参数,字段边界校验,字段类型不正确\n请在一个 `@pytest.mark.parametrize` 作用域内输出10个测试用例\n```text\n"
] |
2024-01-10 | promptengineers-ai/chat-stream-full-stack | src~factories~loader_factory.py | import nest_asyncio
from langchain.docstore.document import Document
from langchain.document_loaders import CSVLoader # UnstructuredPDFLoader,
from langchain.document_loaders import (DirectoryLoader, GitbookLoader,
PyPDFLoader, ReadTheDocsLoader,
TextLoader, UnstructuredHTMLLoader,
UnstructuredMarkdownLoader,
UnstructuredURLLoader, WebBaseLoader,
YoutubeLoader)
from langchain.document_loaders.sitemap import SitemapLoader
from src.services.logging_service import logger
from src.utils import get_links
nest_asyncio.apply()
class DocumentLoaderFactory():
@staticmethod
def create_loader(loader_type, body):
if loader_type == 'gitbook':
urls = body.get('urls', [])
logger.info(f'[DocumentLoaderFactory.create_loader] Gitbook Link: ' + urls[0])
loader = GitbookLoader(urls[0], load_all_paths=True)
loader.default_parser = "xml"
return loader
elif loader_type == 'web_base':
urls = body.get('urls', [])
logger.info(f'[DocumentLoaderFactory.create_loader] Web Base: ' + str(urls))
loader = WebBaseLoader(urls)
loader.default_parser = "xml"
return loader
elif loader_type == 'yt':
raise Exception("Non Implemented Yet, having issues uploading a directory")
yt_id = body.get('ytId')
logger.info(f'[DocumentLoaderFactory.create_loader] Youtube: https://youtube.com/watch?v=' + yt_id)
return YoutubeLoader(
yt_id,
# add_video_info=True
)
elif loader_type == 'sitemap':
urls = body.get('urls', [])
logger.info(f'[DocumentLoaderFactory.create_loader] Sitemap: ' + str(urls))
return SitemapLoader(web_path=urls[0])
elif loader_type == 'website':
urls = body.get('urls', [])
unique_links = get_links(urls[0])
logger.info(f'[DocumentLoaderFactory.create_loader] Website: ' + str(unique_links))
return UnstructuredURLLoader(urls=unique_links)
elif loader_type == 'urls':
urls = body.get('urls', [])
logger.info(f'[DocumentLoaderFactory.create_loader] URLs: ' + str(urls))
return UnstructuredURLLoader(urls=urls)
elif loader_type == 'copy':
raise Exception("Non Implemented Yet, having issues uploading a directory")
logger.info(f'[DocumentLoaderFactory.create_loader] Copy: ')
# metadata = body['metadata'] if body['metadata'] else None
return Document(page_content=body.get('text'))
elif loader_type == 'txt':
logger.info(f'[DocumentLoaderFactory.create_loader] Text: ' + body.get('file_path'))
return TextLoader(body.get('file_path'))
elif loader_type == 'html':
logger.info(f'[DocumentLoaderFactory.create_loader] HTML: ' + body.get('file_path'))
return UnstructuredHTMLLoader(body.get('file_path'))
elif loader_type == 'md':
logger.info(f'[DocumentLoaderFactory.create_loader] Markdown: ' + body.get('file_path'))
loader = UnstructuredMarkdownLoader(
body.get('file_path'),
# mode="elements"
)
logger.info(loader)
return loader
elif loader_type == 'directory':
raise Exception("Non Implemented Yet, having issues uploading a directory")
logger.info(f'[DocumentLoaderFactory.create_loader] Directory: ' + body.get('file_path'))
return DirectoryLoader(body.get('file_path'), glob="**/*")
elif loader_type == 'csv':
logger.info(f'[DocumentLoaderFactory.create_loader] CSV: ' + body.get('file_path'))
loader = CSVLoader(body.get('file_path'))
return loader
elif loader_type == 'pdf':
logger.info(f'[DocumentLoaderFactory.create_loader] PDF: ' + body.get('file_path'))
loader = PyPDFLoader(body.get('file_path'))
return loader
else:
raise ValueError('Unsupported document loader type: ' + loader_type) | [] |
2024-01-10 | promptengineers-ai/chat-stream-full-stack | src~services~message_service.py | """Message service for the chatbot."""
import asyncio
import json
import os
import traceback
from typing import AsyncIterable
from langchain.agents import load_tools
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.callbacks.streaming_stdout_final_only import \
FinalStreamingStdOutCallbackHandler
from src.config import GOOGLE_API_KEY, GOOGLE_CSE_ID
from src.services.chain_service import ChainService
from src.services.loader_service import load_vectorstore
from src.services.logging_service import logger
from src.services.model_service import (chat_model,
openai_chat_functions_model,
openai_chat_model)
from src.services.storage_service import StorageService
from src.utils import wrap_done
os.environ['GOOGLE_CSE_ID'] = GOOGLE_CSE_ID
os.environ['GOOGLE_API_KEY'] = GOOGLE_API_KEY
def token_stream(token: str):
""" Use server-sent-events to stream the response"""
data = {
'sender': 'assistant',
'message': token,
'type': 'stream'
}
logger.debug('[POST /chat] Stream: %s', str(data))
return f"data: {json.dumps(data)}\n\n"
def end_stream():
"""Send the end of the stream"""
end_content = {
'sender': 'assistant',
'message': "",
'type': 'end'
}
logger.debug('[POST /chat] End: %s', str(end_content))
return f"data: {json.dumps(end_content)}\n\n"
def retrieve_system_message(messages):
"""Retrieve the system message"""
try:
return list(
filter(lambda message: message['role'] == 'system', messages)
)[0]['content']
except IndexError:
return None
def retrieve_chat_messages(messages):
"""Retrieve the chat messages"""
return [
(msg["content"]) for msg in messages if msg["role"] in ["user", "assistant"]
]
#######################################################
## Langchain Chat GPT
#######################################################
async def send_message(
messages,
model:str,
temperature: float or int = 0.9,
) -> AsyncIterable[str]:
"""Send a message to the chatbot and yield the response."""
callback = AsyncIteratorCallbackHandler()
model = chat_model(
model_name=model,
temperature=temperature,
streaming=True,
callbacks=[callback],
)
# Begin a task that runs in the background.
task = asyncio.create_task(wrap_done(
model.apredict_messages(messages=messages),
callback.done),
)
async for token in callback.aiter():
# Use server-sent-events to stream the response
yield token_stream(token)
yield end_stream()
await task
#######################################################
## Open AI Chat GPT
#######################################################
async def send_openai_message(
messages,
model:str,
temperature: float or int = 0.9,
) -> AsyncIterable[str]:
"""Send a message to the chatbot and yield the response."""
response = openai_chat_model(
messages=messages,
model_name=model,
temperature=temperature,
streaming=True,
)
print(response)
for chunk in response:
token = chunk['choices'][0]['delta'].get('content', '')
yield token_stream(token)
yield end_stream()
#######################################################
## Chat GPT
#######################################################
async def send_functions_message(
messages,
model:str,
temperature: float or int = 0.9,
functions: list[str] = [],
) -> AsyncIterable[str]:
"""Send a message to the chatbot and yield the response."""
response = openai_chat_functions_model(
messages=messages,
model_name=model,
temperature=temperature,
streaming=True,
keys=functions,
)
for chunk in response:
token = chunk['choices'][0]['delta'].get('content', '')
yield token_stream(token)
yield end_stream()
#######################################################
## Vectorstore
#######################################################
async def send_vectorstore_message(
messages,
vectorstore,
model: str,
temperature: float or int = 0.9,
) -> AsyncIterable[str]:
"""Send a message to the chatbot and yield the response."""
filtered_messages = retrieve_chat_messages(messages)
# Retrieve the chat history
chat_history = list(zip(filtered_messages[::2], filtered_messages[1::2]))
# Retrieve the system message
system_message = retrieve_system_message(messages)
# Create the callback
callback = AsyncIteratorCallbackHandler()
# Create the model
model = chat_model(
model_name=model,
temperature=temperature,
callbacks=[callback],
streaming=True,
)
# Create the query
query = {'question': filtered_messages[-1], 'chat_history': chat_history}
# Retrieve the conversation
qa_chain = ChainService(model).conversation_retrieval(vectorstore, system_message)
# Begin a task that runs in the background.
task = asyncio.create_task(wrap_done(
qa_chain.acall(query),
callback.done),
)
# Yield the tokens as they come in.
async for token in callback.aiter():
yield token_stream(token)
yield end_stream()
await task
#######################################################
## Agent
#######################################################
def send_agent_message(
messages,
    model: str,
    temperature: float = 0.9,
):
"""Send a message to the chatbot and yield the response."""
# Retrieve the chat messages
filtered_messages = retrieve_chat_messages(messages)
# Retrieve the chat history
chat_history = list(zip(filtered_messages[::2], filtered_messages[1::2]))
# Create the model
model = chat_model(
model_name=model,
temperature=temperature,
callbacks=[FinalStreamingStdOutCallbackHandler()]
)
    tools = load_tools(["google-search", "llm-math"], llm=model)
agent_executor = ChainService(model).agent_search(tools, chat_history)
try:
response = agent_executor.run(filtered_messages[-1])
except BaseException as err:
tracer = traceback.format_exc()
logger.error('Error: %s\n%s', err, tracer)
response = str(err)
if response.startswith("Could not parse LLM output: "):
response = response.removeprefix("Could not parse LLM output: ")
# Yield the tokens as they come in.
for token in response:
yield token_stream(token)
yield end_stream()
| [] |
2024-01-10 | promptengineers-ai/chat-stream-full-stack | src~services~chain_service.py | """Chain Service"""
from langchain.agents import AgentType, initialize_agent
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.memory import ConversationBufferMemory
from src.utils import get_chat_history, get_system_template
class ChainService:
"""Chain Service"""
def __init__(self, model):
self.model = model
def condense_question(self):
"""Condense a question into a single sentence."""
return LLMChain(
llm=self.model,
prompt=CONDENSE_QUESTION_PROMPT,
)
def collect_docs(self, system_message):
"""Collect documents from the vectorstore."""
return load_qa_chain(
self.model,
chain_type='stuff',
prompt=get_system_template(system_message)
)
def conversation_retrieval(
self,
vectorstore,
system_message
):
"""Retrieve a conversation."""
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
return ConversationalRetrievalChain(
question_generator=self.condense_question(),
retriever=vectorstore.as_retriever(),
memory=memory,
combine_docs_chain=self.collect_docs(system_message),
get_chat_history=get_chat_history,
)
def agent_search(self, tools, chat_history):
"""Agent search."""
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
if len(chat_history) > 0:
for message in chat_history:
if message[0] and message[1]:
memory.chat_memory.add_user_message(message[0])
memory.chat_memory.add_ai_message(message[1])
else:
memory.chat_memory.add_user_message(message[0])
return initialize_agent(
tools,
self.model,
            agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,  # or AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION
verbose=True,
memory=memory,
get_chat_history=get_chat_history
)
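# Usage sketch (assumption: `model`, `vectorstore` and `system_message` are built by the
# caller, e.g. in the chat/model/loader services of this project):
#
#   qa_chain = ChainService(model).conversation_retrieval(vectorstore, system_message)
#   result = qa_chain({"question": "What does the document say?", "chat_history": []})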
| [] |
2024-01-10 | promptengineers-ai/chat-stream-full-stack | src~utils~__init__.py | """Chat Utils"""
import asyncio
import json
from typing import Awaitable
from urllib.parse import urljoin, urlparse
import openai
import requests
from bs4 import BeautifulSoup
from bson import ObjectId
from langchain import PromptTemplate
def get_chat_history(inputs: tuple) -> str:
"""Formats the chat history into a readable format for the chatbot"""
res = []
for human, assistant in inputs:
res.append(f"Human: {human}\nAI: {assistant}")
return "\n".join(res)
def get_system_template(system_message: str) -> PromptTemplate:
"""format the system message into a template for the chatbot to use"""
prompt_template = f"""{system_message}
---
{{context}}
Human: {{question}}
Assistant: """
template = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"]
)
return template
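# Example for get_system_template (illustrative system message and inputs only):
#
#   template = get_system_template("You are a helpful assistant.")
#   prompt = template.format(context="<retrieved documents>", question="What is FAISS?")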
async def wrap_done(fn_name: Awaitable, event: asyncio.Event):
"""Wrap an awaitable with a event to signal when it's done or an exception is raised."""
try:
await fn_name
except asyncio.CancelledError:
pass
except openai.error.APIError as error:
print(f"Caught API error: {error}")
finally:
# Signal the aiter to stop.
event.set()
def get_links(url: str):
    """Collect the relative links on a page and resolve them against the base URL."""
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
links = []
for link in soup.find_all('a'):
href = link.get('href')
if href and urlparse(href).netloc == '':
links.append(urljoin(url, href))
return links
# Function to match strings with an array of objects
def match_strings(keys: list[str], functions):
# Initialize array to store output
output = []
# Loop through the functions array
for function in functions:
# If name property of function matches one of the strings in keys
if function['name'] in keys:
# Append the function to the output array
output.append(function)
# Return the output array
return output
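# Example for match_strings (illustrative objects only):
#
#   match_strings(["get_weather"], [{"name": "get_weather"}, {"name": "search"}])
#   -> [{"name": "get_weather"}]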
class JSONEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, ObjectId):
return str(o)
return json.JSONEncoder.default(self, o) | [
"context",
"question",
"PLACEHOLDER\n---\n{context}\nHuman: {question}\nAssistant: "
] |
2024-01-10 | promptengineers-ai/chat-stream-full-stack | src~services~loader_service.py | """Loader service for loading the vectorstore."""
import pickle
from langchain.callbacks import get_openai_callback
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from src.services.logging_service import logger
def load_vectorstore(path: str):
"""Load the vectorstore."""
with open(path, 'rb') as file:
vectorstore = pickle.load(file)
return vectorstore
def split_docs(
documents,
chunk_size: int = 500,
chunk_overlap: int = 0,
):
"""Split the documents into chunks."""
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap
)
chunks = text_splitter.split_documents(documents)
logger.info("[loader_service.split_docs] Chunks: %s", str(len(chunks)))
return chunks
def create_vectorstore(docs):
"""Load the vectorstore."""
with get_openai_callback() as cb:
embeddings = OpenAIEmbeddings(max_retries=2)
logger.info(f'[loader_service.create_vectorstore] Tokens: ' + str(cb.total_tokens))
return FAISS.from_documents(
split_docs(docs, 1000, 0),
embeddings
)
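# Usage sketch (assumption: `docs` were produced by a langchain document loader
# elsewhere in this project):
#
#   vectorstore = create_vectorstore(docs)
#   retriever = vectorstore.as_retriever()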
def get_tools(query, retriever, toolkits_dict):
"""Get documents, which contain the Plugins to use"""
docs = retriever.get_relevant_documents(query)
# Get the toolkits, one for each plugin
tool_kits = [toolkits_dict[d.metadata["plugin_name"]] for d in docs]
# Get the tools: a separate NLAChain for each endpoint
tools = []
for tool in tool_kits:
tools.extend(tool.nla_tools)
return tools | [] |
2024-01-10 | promptengineers-ai/chat-stream-full-stack | src~services~model_service.py | """Retrieves model from OpenAI API and returns a ChatOpenAI object."""
import os
from typing import Optional
import openai
from langchain.chat_models import ChatOpenAI
from src.config import OPENAI_API_KEY
from src.config.functions import FUNCTIONS
from src.services.function_service import FunctionTypeFactory
from src.utils import match_strings
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
def openai_chat_model(
model_name: str,
streaming: bool = False,
    temperature: float = 0.9,
    messages: Optional[list] = None,
    functions: Optional[list] = None,
    function_call: Optional[str] = None,
):
"""Query OpenAI API for a chat model."""
if functions:
return openai.ChatCompletion.create(
messages=messages,
model=model_name,
temperature=temperature,
functions=functions,
function_call=function_call,
)
return openai.ChatCompletion.create(
messages=messages,
model=model_name,
temperature=temperature,
stream=streaming,
)
def chat_model(
model_name: str,
streaming: bool = False,
    temperature: float = 0.9,
    callbacks: Optional[list] = None,
):
"""Query Langchain for a chat model."""
return ChatOpenAI(
model_name=model_name,
temperature=temperature,
streaming=streaming,
callbacks=callbacks,
)
def openai_chat_functions_model(
model_name: str,
streaming: bool = False,
    temperature: float = 0.9,
    messages: Optional[list] = None,
    keys: Optional[list] = None,
):
"""Query OpenAI API for a chat model."""
call_fn = openai_chat_model(
model_name=model_name,
messages=messages,
functions=match_strings(keys, FUNCTIONS),
function_call="auto",
)
response_message = call_fn["choices"][0]["message"]
if response_message.get("function_call"):
function_name = response_message["function_call"]["name"]
function_response = FunctionTypeFactory().get_result(
function_name,
response_message,
)
# Step 4: send the info on the function call and function response to GPT
messages.append(response_message) # extend conversation with assistant's reply
messages.append({
"role": "function",
"name": function_name,
"content": function_response,
})
return openai_chat_model(
messages=messages,
model_name=model_name,
temperature=temperature,
streaming=streaming,
)
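# Usage sketch (assumption: FUNCTIONS in src.config.functions contains a function named
# "get_weather"; the key and messages below are illustrative):
#
#   stream = openai_chat_functions_model(
#       model_name="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#       keys=["get_weather"],
#       streaming=True,
#   )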
| [] |
2024-01-10 | ellenealds/sandbox-conversant-lib | conversant~prompt_chatbot.py | # Copyright (c) 2022 Cohere Inc. and its affiliates.
#
# Licensed under the MIT License (the "License");
# you may not use this file except in compliance with the License.
#
# You may obtain a copy of the License in the LICENSE file at the top
# level of this repository.
import json
import logging
import os
import warnings
from typing import Any, Dict, Optional
import cohere
import jsonschema
import conversant
from conversant.chatbot import Chatbot, Interaction
from conversant.prompts.chat_prompt import ChatPrompt
from conversant.prompts.prompt import Prompt
PERSONA_MODEL_DIRECTORY = f"{os.path.dirname(conversant.__file__)}/personas"
# Maximum combined length (prompt + generation) accepted by the generation models.
MAX_GENERATE_TOKENS = 2048
PERSONA_JSON_SCHEMA = {
"type": "object",
"properties": {
"chatbot_config": {
"type": "object",
"properties": {
"max_context_examples": {"type": "integer"},
"avatar": {"type": "string"},
},
},
"client_config": {
"type": "object",
"properties": {
"model": {"type": "string"},
"max_tokens": {"type": "integer"},
"temperature": {"type": "number"},
"frequency_penalty": {"type": "number"},
"presence_penalty": {"type": "number"},
"stop_sequences": {"type": "array"},
},
},
"prompt_config": {
"type": "object",
},
},
}
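# Example of a persona config that satisfies PERSONA_JSON_SCHEMA (values below are
# illustrative; the defaults actually used are set in configure_chatbot/configure_client):
#
#   {
#       "chatbot_config": {"max_context_examples": 10, "avatar": ":robot:"},
#       "client_config": {
#           "model": "xlarge",
#           "max_tokens": 100,
#           "temperature": 0.75,
#           "frequency_penalty": 0.0,
#           "presence_penalty": 0.0,
#           "stop_sequences": ["\n"],
#       },
#       "prompt_config": {},
#   }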
class PromptChatbot(Chatbot):
"""Use prompt templates and LLM generation to define a chatbot.
This bot makes no use of external knowledge sources.
"""
def __init__(
self,
client: cohere.Client,
prompt: Prompt,
persona_name: str = "",
chatbot_config: Dict[str, Any] = {},
client_config: Dict[str, Any] = {},
):
"""Enriches init by adding a prompt.
Args:
client (cohere.Client): Cohere client for API
prompt (Prompt): Prompt object to direct behavior.
persona_name (str, optional): Bot's persona name. Defaults to empty string.
chatbot_config: (Dict[str, Any], optional): Bot's chat config. Defaults to
empty dict.
client_config (Dict[str, Any], optional): Bot's client config. Defaults to
empty dict.
"""
super().__init__(client)
self.prompt = prompt
self.persona_name = persona_name
self.configure_chatbot(chatbot_config)
self.configure_client(client_config)
self.chat_history = []
self.prompt_size_history = []
self.prompt_history = [self.prompt.to_string()]
self.curr_max_context_examples = self.chatbot_config["max_context_examples"]
# For the generation models, the maximum token length is 2048
# (prompt and generation). So the prompt sent to .generate should be
# MAX_GENERATE_TOKENS minus max tokens generated
self.max_prompt_size = MAX_GENERATE_TOKENS - self.client_config["max_tokens"]
self._check_prompt_size()
def __repr__(self) -> str:
return json.dumps(self.to_dict(), indent=4, default=str)
@property
def user_name(self):
"""
Returns:
str: The name of the user, defined in the prompt. Defaults to "User".
"""
if hasattr(self.prompt, "user_name"):
return self.prompt.user_name
else:
return "User"
@property
def bot_name(self):
"""
Returns:
str: The name of the chatbot, defined in the prompt. Defaults to
"PromptChatbot".
"""
if hasattr(self.prompt, "bot_name"):
return self.prompt.bot_name
else:
return "PromptChatbot"
@property
def latest_prompt(self) -> str:
"""Retrieves the latest prompt.
Returns:
str: The prompt most recently added to the prompt history.
"""
return self.prompt_history[-1]
def _update_max_context_examples(
self, prompt_size: int, max_context_examples: int
) -> int:
"""Adjust max_context_examples until a possible prompt size.
if this is not possible, send an error message.
Args:
prompt_size (int): Number of tokens of the prompt
max_context_examples (int): The length of the chat history for
the chatbot to use in reply.
Returns:
int: updated max_context_examples
"""
# Store original values
original_size = prompt_size
        # If chat_history has fewer entries than max_context_examples,
        # start from the length of the history instead.
trimmed_max_examples = min(len(self.chat_history), max_context_examples)
# Check if the max_context_examples is bigger than 0 so it can be reduced
if max_context_examples > 0:
# Reduce max_context_examples until the number of token of the prompt
# is less than maximum or reaches 1
for size in self.prompt_size_history[-max_context_examples:]:
prompt_size -= size
trimmed_max_examples -= 1
if prompt_size <= self.max_prompt_size:
if self.curr_max_context_examples == trimmed_max_examples:
warnings.warn(
"The parameter max_context_examples continues "
f"{self.curr_max_context_examples}"
", so that the total amount of tokens does not"
f" exceed {MAX_GENERATE_TOKENS}."
)
else:
warnings.warn(
"The parameter max_context_examples was changed for"
f" this turn, from {self.curr_max_context_examples} to "
f"{trimmed_max_examples}, so that "
"the total amount of tokens does not"
f" exceed {MAX_GENERATE_TOKENS}."
)
self.curr_max_context_examples = trimmed_max_examples
return trimmed_max_examples
raise ValueError(
"The total number of tokens (prompt and prediction) cannot exceed "
f"{MAX_GENERATE_TOKENS}. Try using a shorter start prompt, sending "
"smaller text messages in the chat, or setting a smaller value "
"for the parameter max_tokens. More details:\n"
f" - Start Prompt: {self.start_prompt_size} tokens\n"
f" - Messages sent in chat: {original_size - self.start_prompt_size} "
f"tokens\n - Parameter max_tokens: {self.client_config['max_tokens']} "
"tokens"
)
def reply(self, query: str) -> Interaction:
"""Replies to a query given a chat history.
The reply is then generated directly from a call to a LLM.
Args:
query (str): A query passed to the prompt chatbot.
Returns:
Interaction: Dictionary of query and generated LLM response
"""
# The current prompt is assembled from the initial prompt,
# from the chat history with a maximum of max_context_examples,
# and from the current query
current_prompt = self.get_current_prompt(query)
current_prompt_size = self.co.tokenize(current_prompt).length
if current_prompt_size > self.max_prompt_size:
max_context_examples = self._update_max_context_examples(
current_prompt_size, self.chatbot_config["max_context_examples"]
)
current_prompt = self.get_current_prompt(query, max_context_examples)
elif (
self.curr_max_context_examples
!= self.chatbot_config["max_context_examples"]
):
warnings.warn(
"The max_context_examples value returned"
f" to {self.chatbot_config['max_context_examples']} - "
f"value set in the original config"
)
# Make a call to Cohere's co.generate API
generated_object = self.co.generate(
model=self.client_config["model"],
prompt=current_prompt,
max_tokens=self.client_config["max_tokens"],
temperature=self.client_config["temperature"],
frequency_penalty=self.client_config["frequency_penalty"],
presence_penalty=self.client_config["presence_penalty"],
stop_sequences=self.client_config["stop_sequences"],
)
# If response was cut off by .generate() finding a stop sequence,
# remove that sequence from the response.
response = generated_object.generations[0].text
for stop_seq in self.client_config["stop_sequences"]:
if response.endswith(stop_seq):
response = response[: -len(stop_seq)]
response = response.lstrip()
# We need to remember the current response in the chat history for future
# responses.
self.chat_history.append(self.prompt.create_interaction(query, response))
self.prompt_size_history.append(
self.co.tokenize(
self.prompt.create_interaction_string(query, response)
).length
)
self.prompt_history.append(current_prompt)
return response
    def get_current_prompt(self, query: str, max_context_examples: Optional[int] = None) -> str:
"""Stitches the prompt with a trailing window of the chat.
Args:
query (str): The current user query.
max_context_examples (int): The length of the chat history for
the chatbot to use in reply.
Returns:
str: The current prompt given a query.
"""
if max_context_examples is None:
max_context_examples = self.chatbot_config["max_context_examples"]
# get base prompt
base_prompt = self.prompt.to_string() + "\n"
# get context prompt
context_prompt_lines = []
trimmed_chat_history = (
self.chat_history[-max_context_examples:]
if max_context_examples > 0
else []
)
# TODO when prompt is updated, the history is mutated
# as it is recreated using the new prompt. A possible fix is to save the old
# prompt in history and use it when recreating.
for turn in trimmed_chat_history:
context_prompt_lines.append(self.prompt.create_interaction_string(**turn))
context_prompt = self.prompt.example_separator + "".join(context_prompt_lines)
# get query prompt
query_prompt = self.prompt.create_interaction_string(query)
current_prompt = base_prompt + context_prompt + query_prompt
return current_prompt.strip()
def configure_chatbot(self, chatbot_config: Dict = {}) -> None:
"""Configures chatbot options.
Args:
chatbot_config (Dict, optional): Updates self.chatbot_config. Defaults
to {}.
"""
# We initialize the chatbot to these default config values.
if not hasattr(self, "chatbot_config"):
self.chatbot_config = {"max_context_examples": 10, "avatar": ":robot:"}
# Override default config values with the config passed in
if isinstance(chatbot_config, Dict):
self.chatbot_config.update(chatbot_config)
else:
raise TypeError(
"chatbot_config must be of type Dict, but was passed in as "
f"{type(chatbot_config)}"
)
def configure_client(self, client_config: Dict = {}) -> None:
"""Configures client options.
Args:
client_config (Dict, optional): Updates self.client_config. Defaults to {}.
"""
# We initialize the client to these default config values.
if not hasattr(self, "client_config"):
self.client_config = {
"model": "xlarge",
"max_tokens": 100,
"temperature": 0.75,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"stop_sequences": ["\n"],
}
# Override default config values with the config passed in
if isinstance(client_config, Dict):
self.client_config.update(client_config)
else:
raise TypeError(
"client_config must be of type Dict, but was passed in as "
f"{type(client_config)}"
)
# Checks if the parameter is equal or bigger than MAX_GENERATE_TOKENS
if self.client_config["max_tokens"] >= MAX_GENERATE_TOKENS:
raise ValueError(
f"The parameter max_tokens needs to be smaller than "
f"{MAX_GENERATE_TOKENS}. Try using a smaller value."
)
elif self.client_config["max_tokens"] > (MAX_GENERATE_TOKENS * 0.75):
warnings.warn(
"The parameter max_tokens has a value "
f"({self.client_config['max_tokens']}) close to the total allowed"
f" for prompt and prediction - {MAX_GENERATE_TOKENS} tokens"
)
@classmethod
def from_persona(
cls,
persona_name: str,
client: cohere.Client,
persona_dir: str = PERSONA_MODEL_DIRECTORY,
):
"""Initializes a PromptChatbot using a persona.
Args:
            persona_name (str): Name of persona, corresponding to a .json file.
client (cohere.Client): Cohere client for API
persona_dir (str): Path to where pre-defined personas are.
"""
# Load the persona from a local directory
persona_path = os.path.join(persona_dir, persona_name, "config.json")
if os.path.isfile(persona_path):
logging.info(f"loading persona from {persona_path}")
else:
raise FileNotFoundError(f"{persona_path} cannot be found.")
with open(persona_path) as f:
persona = json.load(f)
# Validate that the persona follows our predefined schema
cls._validate_persona_dict(persona, persona_path)
return cls(
client=client,
prompt=ChatPrompt.from_dict(persona["chat_prompt_config"]),
persona_name=persona_name,
chatbot_config=persona["chatbot_config"],
client_config=persona["client_config"],
)
def to_dict(self) -> Dict[str, Any]:
"""Serializes this instance into a Python dictionary.
Returns:
Dict[str, Any]: Dictionary of attributes that defines this instance of a
PromptChatbot.
"""
return {
"co": self.co,
"prompt": self.prompt.to_dict(),
"persona_name": self.persona_name,
"chatbot_config": self.chatbot_config,
"client_config": self.client_config,
"chat_history": self.chat_history,
"prompt_history": self.prompt_history,
"user_name": self.user_name,
"bot_name": self.bot_name,
"latest_prompt": self.latest_prompt,
}
def _check_prompt_size(self) -> None:
self.start_prompt_size = self.co.tokenize(self.prompt.to_string()).length
if self.start_prompt_size > self.max_prompt_size:
raise ValueError(
f"The prompt given to PromptChatbot has {self.start_prompt_size}"
" tokens. And the value of the parameter max_tokens is"
f" {self.client_config['max_tokens']}. Adding the two values "
f"the total cannot exceed {MAX_GENERATE_TOKENS}. "
"Try using a shorter preamble or less examples."
)
elif self.start_prompt_size > (0.75 * self.max_prompt_size):
warnings.warn(
"The prompt given to PromptChatbot has "
f"{self.start_prompt_size} tokens. And the value of the parameter"
f" max_tokens is {self.client_config['max_tokens']}. "
"Adding the two together gives a value close to the total allowed"
f" for prompt and prediction - {MAX_GENERATE_TOKENS} tokens"
)
@staticmethod
def _validate_persona_dict(persona: Dict[str, Any], persona_path: str) -> None:
"""Validates formatting of a persona defined as a dictionary.
Args:
persona (Dict[str, Any]): A dictionary containing the persona.
persona_path: The path from which the persona was loaded.
"""
try:
jsonschema.validate(instance=persona, schema=PERSONA_JSON_SCHEMA)
except jsonschema.exceptions.ValidationError as e:
raise jsonschema.exceptions.ValidationError(
f"Type of values in given dictionary (persona from {persona_path}) do "
f"not match schema': {e}"
)
except KeyError as e:
raise KeyError(
f"Invalid key in given dictionary (persona from {persona_path})': {e}"
)
except Exception as e:
raise Exception(
"Failed to validate persona in given dictionary (persona from "
f"{persona_path}): {e}"
)
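# Usage sketch (assumptions: a valid Cohere API key, and a persona directory containing
# a config.json under conversant/personas; the persona name below is illustrative):
#
#   co = cohere.Client("<COHERE_API_KEY>")
#   bot = PromptChatbot.from_persona("fortune-teller", client=co)
#   print(bot.reply("Hello!"))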
| [
"\n",
"[]",
"PLACEHOLDERPLACEHOLDERPLACEHOLDER"
] |
2024-01-10 | infiniterik/civilscript | chains~symbolic~symbolify_directly.py | # Symbolic Stance Detection
import requests
from langchain.chains.base import Chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
stanceDescription = """A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate."""
abortion_belief_types = """'CHOOSE_LIFE', 'EXPERIENCE_PAIN', 'STRUGGLE', 'FREEDOM_OF_CHOICE', 'SUPPORT', 'REGRET', 'RUIN', 'AFFORD'"""
import json
import sys, os
from typing import Dict, List
llm = OpenAI(temperature=0.9)
getPredicate = PromptTemplate(
input_variables=["text", "domain"],
template=stanceDescription+"""
Consider the following comment about {domain} the text expresses.
What is the main predicate that stances refer to? The predicate is represented as a verb and the main argument of the verb in the form VERB[ARGUMENT].
Use the minimum number of words necessary to uniquely identify the predicate but remember that all the terms from the predicate must be in the original comment.
Remember that there may be multiple stances. Return a separate predicate representation for each stance separated by commas.
Comment:{text}
Predicate:""",
)
getPredicateChain = LLMChain(llm=llm, prompt=getPredicate, output_key="predicate")
getBeliefType = PromptTemplate(
input_variables=["text", "domain", "predicate"],
template=stanceDescription+"""Consider the following predicate extracted from the comment about {domain}.
What is the belief type of the predicate? Respond with one of the following:
"""+abortion_belief_types+"""
You must respond with one of the above terms. Ensure that only one of the above terms is used as the response.
Comment:{text}
Predicate: {predicate}
Belief Type:""")
getBeliefTypeChain = LLMChain(llm=llm, prompt=getBeliefType, output_key="belief_type")
getSentiment = PromptTemplate(
input_variables=["text", "belief_type", "predicate", "domain"],
template=stanceDescription+"""
Consider the following comment about {domain}.
What is the sentiment of the author towards the stance predicate {belief_type}[{predicate}]? Respond with one of the following:
- Positive
- Negative
- Neutral
Comment:{text}
Sentiment:""",
)
getSentimentChain = LLMChain(llm=llm, prompt=getSentiment, output_key="sentiment")
getBelief = PromptTemplate(
input_variables=["text", "belief_type", "predicate", "domain"],
template=stanceDescription+"""
Consider the following comment {domain}.
How strongly does the author believe the stance predicate {belief_type}[{predicate}]? Respond with one of the following:
- Very strongly believes
- Strongly believes
- Believes
- Does not believe
- Strongly does not believe
- Very strongly does not believe
Ensure that only one of the above terms is used as the response.
Comment:{text}
Belief Strength:""",
)
getBeliefChain = LLMChain(llm=llm, prompt=getBelief, output_key="belief")
class SymbolicExtractorWithoutExplanation(Chain):
@property
def input_keys(self) -> List[str]:
return ['text', 'domain']
@property
def output_keys(self) -> List[str]:
return ['stances']
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
predicates = getPredicateChain(inputs)
result = []
for p in predicates["predicate"].split(","):
i = dict()
inputs["predicate"] = p
i["predicate"] = p
i["belief_type"] = getBeliefTypeChain(inputs)["belief_type"]
inputs["belief_type"] = i["belief_type"]
i["sentiment"] = getSentimentChain(inputs)["sentiment"]
i["belief"] = getBeliefChain(inputs)["belief"]
result.append(i)
return {"stances": result}
SymbolifyWithoutExplanationChain = SymbolicExtractorWithoutExplanation() | [
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment about {domain} the text expresses. \n What is the main predicate that stances refer to? The predicate is represented as a verb and the main argument of the verb in the form VERB[ARGUMENT].\n Use the minimum number of words necessary to uniquely identify the predicate but remember that all the terms from the predicate must be in the original comment.\n Remember that there may be multiple stances. Return a separate predicate representation for each stance separated by commas.\n Comment:{text}\n Predicate:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.Consider the following predicate extracted from the comment about {domain}.\n What is the belief type of the predicate? Respond with one of the following:\n 'CHOOSE_LIFE', 'EXPERIENCE_PAIN', 'STRUGGLE', 'FREEDOM_OF_CHOICE', 'SUPPORT', 'REGRET', 'RUIN', 'AFFORD'\n You must respond with one of the above terms. Ensure that only one of the above terms is used as the response.\n Comment:{text}\n Predicate: {predicate}\n Belief Type:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment {domain}. \n How strongly does the author believe the stance predicate {belief_type}[{predicate}]? Respond with one of the following:\n - Very strongly believes\n - Strongly believes\n - Believes\n - Does not believe\n - Strongly does not believe\n - Very strongly does not believe\n Ensure that only one of the above terms is used as the response.\n Comment:{text}\n Belief Strength:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment about {domain}. \n What is the sentiment of the author towards the stance predicate {belief_type}[{predicate}]? Respond with one of the following:\n - Positive\n - Negative\n - Neutral\n Comment:{text}\n Sentiment:"
] |
2024-01-10 | infiniterik/civilscript | chains~symbolic~StanceGetter.py | # Symbolic Stance Detection
import requests
from langchain.chains.base import Chain
import json
import sys, os
from typing import Dict, List
class StanceGetter(Chain):
url : str = f"{os.getenv('STANCE_SERVER')}/stance/sentence"
def get_stance(self, text, domain):
response = requests.post(self.url,
json.dumps({"text": text, "domain": domain}),
headers={"Content-Type": "application/json"}).json()
print(response, file=sys.stderr)
return response
def believes(self, strength):
strength = abs(float(strength))
if strength > 2.5:
return "very strongly believes"
elif strength > 1.5:
return "strongly believes"
elif strength > 0.5:
return "believes"
def believes_polarity(self, strength):
strength = float(strength)
if strength > 0:
return "is true"
elif strength < 0:
return "is false"
else:
return "is undetermined"
def sentiment(self, strength):
strength = float(strength)
if strength > 0.5:
return "positive"
elif strength < -0.5:
return "negative"
else:
return "neutral"
def stance_to_text(self, x):
predicate = f"{x['belief_type']}({x['belief_trigger']}, {x['belief_content']})"
belief = f"{self.believes(x['belief_strength'])} that the predicate {predicate} {self.believes_polarity(x['belief_strength'])}"
sent = f"feels {self.sentiment(x['sentiment_strength'])} sentiment towards the idea that {predicate}."
return [belief, sent]
@property
def input_keys(self) -> List[str]:
return ['text', 'domain']
@property
def output_keys(self) -> List[str]:
return ['stances', 'representations']
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
stance = self.get_stance(inputs['text'], inputs['domain'])
text = "The author of this statement:\n- " + "\n- ".join(sum([self.stance_to_text(y) for y in stance["stances"]], []))
return {'stances': text, "representations": [x["stance_rep"] for x in stance["stances"]]}
StanceGetterChain = StanceGetter()
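# Usage sketch (assumption: the STANCE_SERVER environment variable points at a running
# stance server exposing POST /stance/sentence; the example text is illustrative):
#
#   result = StanceGetterChain({"text": "Everyone deserves access to healthcare.",
#                               "domain": "healthcare"})
#   print(result["stances"])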
| [] |
2024-01-10 | infiniterik/civilscript | chains~stanceDrivenChains.py | from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
from dotenv import load_dotenv
from chains import symbolic
load_dotenv()
llm = OpenAI(temperature=0.9)
from chains.symbolic import stanceDescription
# Symbolic Stance Detection with Neural Explanation
explanationPrompt = PromptTemplate(
input_variables=["text", "stances"],
template="""Write a short explanation why the comment below evokes the following stances.
Make sure not to add any hypotheses beyond what can be inferred directly from the text:
Comment:{text}
{stances}
Explanation:""",
)
explanationFromTextAndStanceChain = LLMChain(llm=llm, prompt=explanationPrompt, output_key="explanation")
explanationFromSymbolicStance = SequentialChain(chains=[symbolic.StanceGetterChain, explanationFromTextAndStanceChain],
input_variables=["text", "domain"],
output_variables=["explanation", "stances", "representations"],
verbose=True)
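# Usage sketch (illustrative input; requires OPENAI_API_KEY and a running stance server
# for symbolic.StanceGetterChain):
#
#   out = explanationFromSymbolicStance({"text": "Access to childcare should be expanded.",
#                                        "domain": "childcare"})
#   print(out["explanation"])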
# Neural Stance Detection
stanceDetectionPrompt = PromptTemplate(
input_variables=["text", "domain"],
template=stanceDescription+"""Write a short explanation explaining why the comment below evokes the following stances about {domain}.
Express the response as a list of bullet points where each bullet point represents a belief type, a predicate, a belief strength towards the predicate, and a sentiment towards the belief.
Make sure not to add any hypotheses beyond what can be inferred directly from the text:
Comment:{text}
Stances:""",
)
neuralStanceDetectionChain = LLMChain(llm=llm, prompt=stanceDetectionPrompt, output_key="stances")
# Neural Stance Detection with Neural Explanation
explanationFromNeuralStance = SequentialChain(chains=[neuralStanceDetectionChain, explanationFromTextAndStanceChain],
input_variables=["text", "domain"],
output_variables=["explanation", "stances"],
verbose=True)
# Neural Stance Description without Stance Detection
explanationWithoutStancesPrompt = PromptTemplate(
input_variables=["text", "domain"],
template=stanceDescription+"""
Write a short explanation explaining why the comment below evokes stances about {domain}.
Be specific and respond in bullet point form but make sure not to add any hypotheses beyond what can be inferred directly from the text:
Comment:{text}
Explanation:""",
)
explanationFromTextChain = LLMChain(llm=llm,
prompt=explanationWithoutStancesPrompt,
output_key="explanation") | [
"domain",
"Write a short explanation explaining why the comment below evokes the following stances about {domain}. \n Express the response as a list of bullet points where each bullet point represents a belief type, a predicate, a belief strength towards the predicate, and a sentiment towards the belief.\n Make sure not to add any hypotheses beyond what can be inferred directly from the text:\n Comment:{text}\n Stances:",
"PLACEHOLDERWrite a short explanation explaining why the comment below evokes the following stances about {domain}. \n Express the response as a list of bullet points where each bullet point represents a belief type, a predicate, a belief strength towards the predicate, and a sentiment towards the belief.\n Make sure not to add any hypotheses beyond what can be inferred directly from the text:\n Comment:{text}\n Stances:",
"Write a short explanation why the comment below evokes the following stances. \n Make sure not to add any hypotheses beyond what can be inferred directly from the text:\n Comment:{text}\n {stances}\n Explanation:",
"PLACEHOLDER\n Write a short explanation explaining why the comment below evokes stances about {domain}. \n Be specific and respond in bullet point form but make sure not to add any hypotheses beyond what can be inferred directly from the text:\n Comment:{text}\n Explanation:",
"\n Write a short explanation explaining why the comment below evokes stances about {domain}. \n Be specific and respond in bullet point form but make sure not to add any hypotheses beyond what can be inferred directly from the text:\n Comment:{text}\n Explanation:",
"stances"
] |
2024-01-10 | infiniterik/civilscript | chains~symbolic~symbolify.py | # Symbolic Stance Detection
import requests
from langchain.chains.base import Chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
stanceDescription = """A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate."""
abortion_belief_types = """'CHOOSE_LIFE', 'EXPERIENCE_PAIN', 'STRUGGLE', 'FREEDOM_OF_CHOICE', 'SUPPORT', 'REGRET', 'RUIN', 'AFFORD'"""
import json
import sys, os
from typing import Dict, List
llm = OpenAI(temperature=0.9)
oldPredicatetemplate=stanceDescription+"""
Consider the following comment and explanation regarding stances about {domain} the text expresses.
What is the main predicate that stances refer to? The predicate is represented as a verb and the main argument of the verb in the form VERB[ARGUMENT].
Return the response in the form "BELIEF_TYPE[PREDICATE]" and use the minimum number of words necessary to uniquely identify the predicate.
For abortion, the only allowable belief types are: """+abortion_belief_types+""".
Remember that there may be multiple stances. Return a separate predicate representation for each stance separated by commas.
Comment:{text}
Explanation: {explanation}
Predicate:"""
getPredicate = PromptTemplate(
input_variables=["text", "explanation", "domain"],
template=stanceDescription+"""
Consider the following comment and explanation regarding stances about {domain} the text expresses.
What is the main predicate that stances refer to? The predicate is represented as a verb and the main argument of the verb in the form VERB[ARGUMENT].
Use the minimum number of words necessary to uniquely identify the predicate but remember that all the terms from the predicate must be in the original comment.
Remember that there may be multiple stances. Return a separate predicate representation for each stance separated by commas.
Explanation: {explanation}
Comment:{text}
Predicate:""",
)
getPredicateChain = LLMChain(llm=llm, prompt=getPredicate, output_key="predicate")
getBeliefType = PromptTemplate(
input_variables=["text", "explanation", "domain", "predicate"],
template=stanceDescription+"""Consider the following predicate extracted from the comment and explanation regarding stances about {domain} the comment expresses.
What is the belief type of the predicate? Respond with one of the following:
"""+abortion_belief_types+"""
You must respond with one of the above terms. Ensure that only one of the above terms is used as the response.
Explanation: {explanation}
Comment:{text}
Predicate: {predicate}
Belief Type:""")
getBeliefTypeChain = LLMChain(llm=llm, prompt=getBeliefType, output_key="belief_type")
getSentiment = PromptTemplate(
input_variables=["text", "belief_type", "predicate", "explanation", "domain"],
template=stanceDescription+"""
Consider the following comment and explanation regarding stances about {domain} the text expresses.
What is the sentiment of the author towards the stance predicate {belief_type}[{predicate}]? Respond with one of the following:
- Positive
- Negative
- Neutral
Explanation: {explanation}
Comment:{text}
Sentiment:""",
)
getSentimentChain = LLMChain(llm=llm, prompt=getSentiment, output_key="sentiment")
getBelief = PromptTemplate(
input_variables=["text", "belief_type", "predicate", "explanation", "domain"],
template=stanceDescription+"""
Consider the following comment and explanation regarding stances about {domain} the text expresses.
How strongly does the author believe the stance predicate {belief_type}[{predicate}]? Respond with one of the following:
- Very strongly believes
- Strongly believes
- Believes
- Does not believe
- Strongly does not believe
- Very strongly does not believe
Ensure that only one of the above terms is used as the response.
Explanation: {explanation}
Comment:{text}
Belief Strength:""",
)
getBeliefChain = LLMChain(llm=llm, prompt=getBelief, output_key="belief")
class SymbolicExtractor(Chain):
@property
def input_keys(self) -> List[str]:
return ['text', 'domain', 'explanation']
@property
def output_keys(self) -> List[str]:
return ['stances']
def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
predicates = getPredicateChain(inputs)
result = []
for p in predicates["predicate"].split(","):
i = dict()
inputs["predicate"] = p
i["predicate"] = p
i["belief_type"] = getBeliefTypeChain(inputs)["belief_type"]
inputs["belief_type"] = i["belief_type"]
i["sentiment"] = getSentimentChain(inputs)["sentiment"]
i["belief"] = getBeliefChain(inputs)["belief"]
result.append(i)
return {"stances": result}
SymbolifyChain2 = SequentialChain(chains=[getPredicateChain, getBeliefTypeChain, getSentimentChain, getBeliefChain],
input_variables=["text", "explanation", "domain"],
output_variables=["sentiment", "belief", "belief_type", "predicate"],
verbose=True)
SymbolifyChain = SymbolicExtractor() | [
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.Consider the following predicate extracted from the comment and explanation regarding stances about {domain} the comment expresses.\n What is the belief type of the predicate? Respond with one of the following:\n 'CHOOSE_LIFE', 'EXPERIENCE_PAIN', 'STRUGGLE', 'FREEDOM_OF_CHOICE', 'SUPPORT', 'REGRET', 'RUIN', 'AFFORD'\n You must respond with one of the above terms. Ensure that only one of the above terms is used as the response.\n Explanation: {explanation}\n Comment:{text}\n Predicate: {predicate}\n Belief Type:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment and explanation regarding stances about {domain} the text expresses. \n What is the main predicate that stances refer to? The predicate is represented as a verb and the main argument of the verb in the form VERB[ARGUMENT].\n Use the minimum number of words necessary to uniquely identify the predicate but remember that all the terms from the predicate must be in the original comment.\n Remember that there may be multiple stances. Return a separate predicate representation for each stance separated by commas.\n Explanation: {explanation}\n Comment:{text}\n Predicate:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment and explanation regarding stances about {domain} the text expresses. \n What is the sentiment of the author towards the stance predicate {belief_type}[{predicate}]? Respond with one of the following:\n - Positive\n - Negative\n - Neutral\n Explanation: {explanation}\n Comment:{text}\n Sentiment:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment and explanation regarding stances about {domain} the text expresses. \n How strongly does the author believe the stance predicate {belief_type}[{predicate}]? Respond with one of the following:\n - Very strongly believes\n - Strongly believes\n - Believes\n - Does not believe\n - Strongly does not believe\n - Very strongly does not believe\n Ensure that only one of the above terms is used as the response.\n Explanation: {explanation}\n Comment:{text}\n Belief Strength:",
"A stance is a combination of a predicate expressed by the author, whether or not the author believes said predicate, and the author's sentiment towards the predicate.\n Consider the following comment and explanation regarding stances about {domain} the text expresses. \n What is the main predicate that stances refer to? The predicate is represented as a verb and the main argument of the verb in the form VERB[ARGUMENT].\n Return the response in the form \"BELIEF_TYPE[PREDICATE]\" and use the minimum number of words necessary to uniquely identify the predicate.\n For abortion, the only allowable belief types are: 'CHOOSE_LIFE', 'EXPERIENCE_PAIN', 'STRUGGLE', 'FREEDOM_OF_CHOICE', 'SUPPORT', 'REGRET', 'RUIN', 'AFFORD'. \n Remember that there may be multiple stances. Return a separate predicate representation for each stance separated by commas.\n Comment:{text}\n Explanation: {explanation}\n Predicate:"
] |
2024-01-10 | kida0/temp-chat | pages~01_DocumentGPT.py | import time
from typing import Any, Dict, List, Optional
from uuid import UUID
import streamlit as st
from langchain.storage.file_system import LocalFileStore
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddings
from langchain.vectorstores import FAISS, Chroma
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import BaseCallbackHandler
st.set_page_config(
page_title="DocumentGPT",
page_icon="🐹"
)
class ChatCallbackHandler(BaseCallbackHandler):
message = ""
def on_llm_start(self, *args, **kwargs):
self.message_box = st.empty()
def on_llm_end(self, *args, **kwargs):
save_message(self.message, "ai")
# with st.sidebar:
# st.write("llm ended!")
def on_llm_new_token(self, token, *args, **kwargs):
self.message += token
self.message_box.markdown(self.message)
llm = ChatOpenAI(
temperature=0.1,
streaming=True,
callbacks=[
ChatCallbackHandler(),
]
)
if "messages" not in st.session_state:
st.session_state["messages"] = []
# Decorator: skip re-running the function when the same input file is passed in again.
@st.cache_data(show_spinner="Emebedding file...")
def embed_file(file):
file_content = file.read()
file_path = f"./.cache/files/{file.name}"
with open(file_path, "wb") as f:
f.write(file_content)
cache_dir = LocalFileStore(f"./.cache/embeddings/{file.name}")
splitter = CharacterTextSplitter.from_tiktoken_encoder(
separator="\n",
chunk_size=600,
chunk_overlap=100,
)
loader = UnstructuredFileLoader(file_path)
docs = loader.load_and_split(text_splitter=splitter)
embeddings = OpenAIEmbeddings()
cached_embeddings = CacheBackedEmbeddings.from_bytes_store(embeddings, cache_dir)
vectorstore = FAISS.from_documents(docs, cached_embeddings)
retriever = vectorstore.as_retriever()
return retriever
def save_message(message, role):
    st.session_state["messages"].append({"message": message, "role": role})
def send_message(message, role, save=True):
with st.chat_message(role):
st.markdown(message)
if save:
st.session_state["messages"].append({"message":message, "role": role})
def paint_history():
for message in st.session_state["messages"]:
send_message(message["message"], message["role"], save=False)
def format_docs(docs):
return "\n\n".join(document.page_content for document in docs)
prompt = ChatPromptTemplate.from_messages([
(
"system",
"""
Answer the question using ONLY the following context. If you don't know the answer
just say you don't know, DON'T make anything up.
Context: {context}
"""
),
("human", "{question}")
])
st.title("DocumentGPT")
st.markdown(
"""
Welcome!
Use this chatbot to ask questions to an AI about your files!
Upload your files on the sidebar.
"""
)
with st.sidebar:
file = st.file_uploader(
"Upload a .txt .pdf or .docx file",
type=["pdf", "txt", "docx"]
)
if file:
retriever = embed_file(file)
send_message("I'm ready! Ask away!", "ai", save=False)
paint_history()
message = st.chat_input("Ask anything about your file...")
if message:
send_message(message, "human")
chain = {
"context": retriever | RunnableLambda(format_docs),
"question": RunnablePassthrough()
} | prompt | llm
with st.chat_message("ai"):
response = chain.invoke(message)
send_message(response.content, "ai")
else:
st.session_state["messages"] = [] | [
"\n Answer the question using ONLY the following context. If you don't know the answer\n just say you don't know, DON'T make anything up.\n \n Context: {context} \n ",
"human",
"[('system', \"\\n Answer the question using ONLY the following context. If you don't know the answer\\n just say you don't know, DON'T make anything up.\\n \\n Context: {context} \\n \"), ('human', '{question}')]",
"{question}"
] |
2024-01-10 | kida0/temp-chat | pages~05_MeetingGPT.py | # Extract only the audio from the meeting video, then use Whisper to turn the speech into text and summarize it
import enum
import os
import math
import subprocess
from pydub import AudioSegment
import streamlit as st
import openai
import glob
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import StrOutputParser
from langchain.vectorstores.faiss import FAISS
from langchain.storage.file_system import LocalFileStore
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddings
from langchain.prompts import ChatPromptTemplate
st.set_page_config(
page_title="MeetingGPT",
page_icon="🦄",
)
st.title("MeetingGPT")
st.markdown(
"""
Welcome to MeetingGPT, upload a video and I will give you a transcript,
a summary and a chatbot to ask any questions about it.
Get started by uploading a video file in the sidebar.
"""
)
has_transcript = os.path.exists("./.cache/podcast.txt")
llm = ChatOpenAI(
temperature=0.1
)
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=800,
chunk_overlap=100,
)
@st.cache_data()
def embed_file(file_path):
cache_dir = LocalFileStore(f"./.cache/embeddings/{file.name}")
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=800,
chunk_overlap=100,
)
loader = TextLoader(file_path)
docs = loader.load_and_split(text_splitter=splitter)
embeddings = OpenAIEmbeddings()
cached_embeddings = CacheBackedEmbeddings.from_bytes_store(embeddings, cache_dir)
vectorstore = FAISS.from_documents(docs, cached_embeddings)
retriever = vectorstore.as_retriever()
return retriever
@st.cache_data()
def transcribe_chunks(chunk_folder, destination):
if has_transcript:
return
files = glob.glob(f"{chunk_folder}/*.mp3")
files.sort()
final_transcript = ""
for file in files:
with open(file, "rb") as audio_file, open(destination, "a") as text_file:
transcript = openai.Audio.transcribe(
"whisper-1",
audio_file
)
text_file.write(transcript["text"])
@st.cache_data()
def extract_audio_from_video(video_path):
if has_transcript:
return
audio_path = video_path.replace("mp4", "mp3")
command = ["ffmpeg", "-y", "-i", video_path, "-vn", audio_path]
subprocess.run(command)
@st.cache_data()
def cut_audio_in_chunks(audio_path, chunk_size, chunks_folder):
if has_transcript:
return
track = AudioSegment.from_mp3(audio_path)
chunk_len = chunk_size * 60 * 1000
chunks = math.ceil(len(track) / chunk_len)
for i in range(chunks):
start_time = i * chunk_len
end_time = (i+1) * chunk_len
chunk = track[start_time:end_time]
chunk.export(f"{chunks_folder}/chunk_{i}.mp3")
with st.sidebar:
video = st.file_uploader("Video", type=["mp4", "avi", "mkv", "mov"])
if video:
video_path = f"./.cache/{video.name}"
chunks_folder = "./.cache/chunks"
audio_path = video_path.replace("mp4", "mp3")
transcript_path = audio_path.replace("mp3", "txt")
with st.status("Loading video...") as status:
video_content = video.read()
with open(video_path, "wb") as f:
f.write(video_content)
status.update(label="Extracting audio...")
extract_audio_from_video(video_path)
status.update(label="Cutting audio...")
cut_audio_in_chunks(audio_path, 10, chunks_folder)
status.update(label="Transcribing audio...")
transcribe_chunks(chunks_folder, transcript_path)
transcript_tab, summary_tab, qna_tab = st.tabs(["Transcript", "Summary", "Q&A"])
with transcript_tab:
with open(transcript_path, "r") as file:
st.write(file.read())
with summary_tab:
start = st.button("Generate summary")
if start:
loader = TextLoader(transcript_path)
docs = loader.load_and_split(text_splitter=splitter)
first_summary_prompt = ChatPromptTemplate.from_template(
"""
다음 문장을 간결하게 요약해줘:
"{text}"
간결한 요약 내용:
"""
)
first_summary_chain = first_summary_prompt | llm | StrOutputParser()
summary = first_summary_chain.invoke({
"text": docs[0].page_content
})
refine_prompt = ChatPromptTemplate.from_template(
"""
Your job is to produce a final summary.
            We have provided an existing summary up to a
            certain point: {existing_summary}
            We have the opportunity to refine the existing summary
            (only if needed) with some more context below.
----------
{context}
----------
Given the new context, refine the original summary.
If the context isn't useful, RETURN the original summary.
"""
)
refine_chain = refine_prompt | llm | StrOutputParser()
with st.status("Summarizing...") as status:
for i, doc in enumerate(docs[1:]):
status.update(label=f"Processing document {i+1}/{len(docs)-1}")
summary = refine_chain.invoke({
"existing_summary": summary,
"context": doc.page_content,
})
st.write(summary)
with qna_tab:
retriever = embed_file(transcript_path)
docs = retriever.invoke("블리자드의 현재 상황에 대해 알려줘")
st.write(docs) | [
"\n 다음 문장을 간결하게 요약해줘:\n \"{text}\"\n 간결한 요약 내용:\n ",
"\n Your job is to produce a final summary.\n We have provided an existsing summary up to a\n certain point: {existing_summary}\n We have the opporunity to reifne the existing summary\n (only if needed) with some more context below.\n ----------\n {context}\n ----------\n Given the new context, refine the original summary.\n If the context isn't useful, RETURN the original summary.\n "
] |
2024-01-10 | kida0/temp-chat | pages~02_PrivateGPT.py | import time
from typing import Any, Dict, List, Optional
from uuid import UUID
import streamlit as st
from langchain.storage.file_system import LocalFileStore
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import CacheBackedEmbeddings
# from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import FAISS, Chroma
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
# from langchain.chat_models import ChatOpenAI
from langchain.chat_models import ChatOllama
from langchain.callbacks.base import BaseCallbackHandler
st.set_page_config(
page_title="PrivateGPT",
page_icon="🐻❄️",
)
class ChatCallbackHandler(BaseCallbackHandler):
message = ""
def on_llm_start(self, *args, **kwargs):
self.message_box = st.empty()
def on_llm_end(self, *args, **kwargs):
save_message(self.message, "ai")
# with st.sidebar:
# st.write("llm ended!")
def on_llm_new_token(self, token, *args, **kwargs):
self.message += token
self.message_box.markdown(self.message)
llm = ChatOllama(
model="mistral:latest",
temperature=0.1,
streaming=True,
callbacks=[
ChatCallbackHandler(),
]
)
if "messages" not in st.session_state:
st.session_state["messages"] = []
# Decorator: skip re-running the function when the same input file is passed in again.
@st.cache_data(show_spinner="Emebedding file...")
def embed_file(file):
file_content = file.read()
file_path = f"./.cache/private_files/{file.name}"
with open(file_path, "wb") as f:
f.write(file_content)
cache_dir = LocalFileStore(f"./.cache/private_embeddings/{file.name}")
splitter = CharacterTextSplitter.from_tiktoken_encoder(
separator="\n",
chunk_size=600,
chunk_overlap=100,
)
loader = UnstructuredFileLoader(file_path)
docs = loader.load_and_split(text_splitter=splitter)
embeddings = OllamaEmbeddings(
model="mistral:latest"
)
cached_embeddings = CacheBackedEmbeddings.from_bytes_store(embeddings, cache_dir)
vectorstore = FAISS.from_documents(docs, cached_embeddings)
retriever = vectorstore.as_retriever()
return retriever
def save_message(message, role):
    st.session_state["messages"].append({"message": message, "role": role})
def send_message(message, role, save=True):
with st.chat_message(role):
st.markdown(message)
if save:
st.session_state["messages"].append({"message":message, "role": role})
def paint_history():
for message in st.session_state["messages"]:
send_message(message["message"], message["role"], save=False)
def format_docs(docs):
return "\n\n".join(document.page_content for document in docs)
prompt = ChatPromptTemplate.from_messages([
(
"system",
"""
Answer the question using ONLY the following context. If you don't know the answer
just say you don't know, DON'T make anything up.
Context: {context}
"""
),
("human", "{question}")
])
st.title("DocumentGPT")
st.markdown(
"""
Welcome!
Use this chatbot to ask questions to an AI about your files!
Upload your files on the sidebar.
"""
)
with st.sidebar:
file = st.file_uploader(
"Upload a .txt .pdf or .docx file",
type=["pdf", "txt", "docx"]
)
if file:
retriever = embed_file(file)
send_message("I'm ready! Ask away!", "ai", save=False)
paint_history()
message = st.chat_input("Ask anything about your file...")
if message:
send_message(message, "human")
chain = {
"context": retriever | RunnableLambda(format_docs),
"question": RunnablePassthrough()
} | prompt | llm
with st.chat_message("ai"):
response = chain.invoke(message)
send_message(response.content, "ai")
else:
st.session_state["messages"] = [] | [
"\n Answer the question using ONLY the following context. If you don't know the answer\n just say you don't know, DON'T make anything up.\n \n Context: {context} \n ",
"human",
"[('system', \"\\n Answer the question using ONLY the following context. If you don't know the answer\\n just say you don't know, DON'T make anything up.\\n \\n Context: {context} \\n \"), ('human', '{question}')]",
"{question}"
] |
2024-01-10 | kida0/temp-chat | pages~04_SiteGPT.py | # # https://openai.com/sitemap.xml - using the sitemap you can find the URLs of every page on the website
# # playwright: 브라우저 컨트롤을 할 수 있는 패키지(selenium과 비슷)
# # chronium
# # playwright install: 설치 방식이 좀 특이하네
# # playwright를 headless 모드로 실행 -> headlessfks browser process가 내 컴퓨터로부터 시작되는 것을 의미
# # -> 속도가 느려짐
# # 웹 사이트 스크랩을 너무 빠르게하면 차단당할 수 있음... -> Sitemap에서는 1초에 한 번씩 호출 수행
# # metadata 저장도 확인 가능 : metadata={'source': 'https://openai.com/research/weak-to-strong-generalization', 'loc': 'https://openai.com/research/weak-to-strong-generalization', 'lastmod': '2023-12-16T00:32:09.053Z', 'changefreq': 'daily', 'priority': '1.0'})
# # SitemapLoader는 내부적으로 beautiful soup 사용
# # vector store를 만들어서 연관 있는 docu를 검색
# # llm에게 답변의 유용함 평가 요청
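# # A minimal sketch of that idea (illustrative, not part of the original file):
# #   loader = SitemapLoader("https://openai.com/sitemap.xml")
# #   loader.requests_per_second = 1   # throttle the crawl so the site does not block us
# #   docs = loader.load()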
# import streamlit as st
# from langchain.document_loaders import AsyncChromiumLoader
# from langchain.document_transformers import Html2TextTransformer
# from langchain.document_loaders import SitemapLoader
# from langchain.text_splitter import RecursiveCharacterTextSplitter
# from langchain.vectorstores.faiss import FAISS
# from langchain.embeddings import OpenAIEmbeddings
# from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
# from langchain.chat_models import ChatOpenAI
# from langchain.prompts import ChatPromptTemplate
# st.set_page_config(
# page_title="SiteGPT",
# page_icon="🐋",
# )
# st.title("SiteGPT")
# st.markdown(
# """
# Ask question about the content of a website
# Start by writing the URL of the website on the sidebar
# """
# )
# # convert the html into text
# html2text_transformer = Html2TextTransformer()
# # create the input field in the sidebar
# with st.sidebar:
# url = st.text_input(
# "Write down a URL",
# placeholder="https://example.com",
# )
# # when a url is entered in the sidebar, read that page's html
# # if url:
# # # async chromium loader
# # loader = AsyncChromiumLoader([url])
# # docs = loader.load()
# # transformed = html2text_transformer.transform_documents(docs)
# # st.write(docs)
# # using SitemapLoader
# # if url:
# # if ".xml" not in url:
# # with st.sidebar:
# # st.error("Please write down a Sitemap URL.")
# # else:
# # loader = SitemapLoader(url)
# # # loader.requests_per_second = 1
# # docs = loader.load()
# # st.write(docs)
# llm = ChatOpenAI(
# temperature=0.1,
# )
# answers_prompt = ChatPromptTemplate.from_template(
# """
# Using ONLY the following context answer the user's question. If you can't just say you don't know, don't make anything up.
# Then, give a score to the answer between 0 and 5.
# If the answer answers the user question the score should be high, else it should be low.
# Make sure to always include the answer's score even if it's 0.
# Context: {context}
# Examples:
# Question: How far away is the moon?
# Answer: The moon is 384,400 km away.
# Score: 5
# Question: How far away is the sun?
# Answer: I don't know
# Score: 0
# Your turn!
# Question: {question}
# """
# )
# # soup is the page's html as a Beautiful Soup object, so it can be searched and elements can be removed
# def parse_page(soup):
# header = soup.find("header")
# footer = soup.find("footer")
# if header:
# header.decompose() # decompose: remove the element
# if footer:
# footer.decompose()
# return (
# str(soup.get_text())
# .replace("\n", " ")
# .replace("\xa0", " ")
# .replace("ClosingSearch Submit Blog", "")
# )
# # cached function that fetches only the text of the url
# @st.cache_data(show_spinner="Loading website...")
# def load_website(url):
# # a splitter can be defined and used together with load_and_split
# splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
# chunk_size=1000,
# chunk_overlap=200,
# )
# loader = SitemapLoader(
# url,
# # filter_urls=["http://openai.com/blog/data-par..."], # the urls can also be given directly
# # filter_urls=[r"^(?!.*\/blog\/).*"], # regular expressions can also be used (exclude /blog/)
# # filter_urls=[r"^(.*\/blog\/).*"], # regular expressions can also be used (include /blog/)
# parsing_function=parse_page
# )
# loader.requests_per_second = 2
# # docs = loader.load()
# docs = loader.load_and_split(text_splitter=splitter)
# vector_score = FAISS.from_documents(docs,OpenAIEmbeddings())
# return vector_score.as_retriever()
# choose_prompt = ChatPromptTemplate.from_messages(
# [
# (
# "system",
# """
# Use ONLY the following pre-existing answers to answer the user's question.
# Use the answers that have the highest score (more helpful) and favor the most recent ones.
# Cite sources and return the sources of the answers as they are, do not change them.
# Answers: {answers}
# """,
# ),
# ("human", "{question}"),
# ]
# )
# def choose_answer(inputs):
# answers = inputs["answers"]
# question = inputs["question"]
# choose_chain = choose_prompt | llm
# condensed = "\n\n".join(
# f"Answer: {answer['answer']}\nSource:{answer['source']}\nDate:{answer['date']}" for answer in answers
# )
# return choose_chain.invoke({
# "question": question,
# "answers": condensed
# })
# def get_answers(inputs):
# docs = inputs["docs"]
# question = inputs["question"]
# answers_chain = answers_prompt | llm
# # answers = []
# # for doc in docs:
# # result = answers_chain.invoke({
# # "question": question,
# # "context": doc.page_content
# # })
# # answers.append(result.content)
# # st.write(answers)
# return {
# "question": question,
# "answers": [
# {
# "answer": answers_chain.invoke(
# {"question": question, "context": doc.page_content}
# ).content,
# "source": doc.metadata["source"],
# "date": doc.metadata["lastmod"],
# } for doc in docs
# ]
# }
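# # Together these form a simple map / re-rank flow: get_answers has the llm answer and score the
# # question against every retrieved document, and choose_answer then picks the best-scored answer.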
# if url:
# if ".xml" not in url:
# with st.sidebar:
# st.error("Please write down a Sitemap URL.")
# else:
# retriever = load_website(url)
# query = st.text_input("Ask a question to the website.")
# if query:
# chain = {
# "docs": retriever,
# "question": RunnablePassthrough(),
# } | RunnableLambda(get_answers) | RunnableLambda(choose_answer)
# result = chain.invoke(query)
# st.write(result.content) | [] |
2024-01-10 | kida0/temp-chat | pages~03_QuizGPT.py | import streamlit as st
from langchain.retrievers import WikipediaRetriever
from langchain.text_splitter import CharacterTextSplitter
from langchain.document_loaders import UnstructuredFileLoader
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.schema import BaseOutputParser
import json
class JsonOutputParser(BaseOutputParser):
def parse(self, text):
text = text.replace("```", "").replace("json", "")
return json.loads(text)
output_parser = JsonOutputParser()
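# Illustrative example (not in the original file): the parser strips the Markdown fence the model
# wraps around its JSON before decoding it, e.g.
#   output_parser.parse('```json\n{"questions": []}\n```')  ->  {'questions': []}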
st.set_page_config(
page_title="QuizGPT",
page_icon="❓",
)
st.title("QuizGPT")
llm = ChatOpenAI(
temperature=0.1,
model="gpt-3.5-turbo-1106",
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()]
)
def format_docs(docs):
return "\n\n".join(document.page_content for document in docs)
questions_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"""
You are a helpful assistant that is role playing as a teacher.
Based ONLY on the following context make 10 questions to test the user's
knowledge about the text.
Each question should have 4 answer, three of them must be incorrect and one
should be correct.
Use (o) to signal the correct answer.
Question examples:
Question: What is the color of the ocean?
Answer: Red | Yellow | Green | Blue(o)
Question: What is the capital or Georgia?
Answer: Baku | Tbilisi(o) | Manila | Beirut
Question: When was Avatar released?
Asnwer: 2007 | 2001 | 2009(o) | 1998
Question Who was Julius Caesar?
Answer: A Roman Emperor(o) | Painter | Actor | Model
Your turn!
Context: {context}
""",
)
]
)
questions_chain = {"context": format_docs} | questions_prompt | llm
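# In questions_chain above, the dict step turns the incoming list of documents into a single
# "context" string via format_docs before the prompt is filled in and sent to the model.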
formatting_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"""
You are a powerful formatting algorithm.
You format exam questions into JSON format.
Answers with (o) are the correct ones.
Example Input:
Question: What is the color of the ocean?
Answers: Red|Yellow|Green|Blue(o)
Question: What is the capital or Georgia?
Answers: Baku|Tbilisi(o)|Manila|Beirut
Question: When was Avatar released?
Answers: 2007|2001|2009(o)|1998
Question: Who was Julius Caesar?
Answers: A Roman Emperor(o)|Painter|Actor|Model
Example Output:
```json
{{ "questions": [
{{
"question": "What is the color of the ocean?",
"answers": [
{{
"answer": "Red",
"correct": false
}},
{{
"answer": "Yellow",
"correct": false
}},
{{
"answer": "Green",
"correct": false
}},
{{
"answer": "Blue",
"correct": true
}}
]
}},
{{
"question": "What is the capital or Georgia?",
"answers": [
{{
"answer": "Baku",
"correct": false
}},
{{
"answer": "Tbilisi",
"correct": true
}},
{{
"answer": "Manila",
"correct": false
}},
{{
"answer": "Beirut",
"correct": false
}}
]
}},
{{
"question": "When was Avatar released?",
"answers": [
{{
"answer": "2007",
"correct": false
}},
{{
"answer": "2001",
"correct": false
}},
{{
"answer": "2009",
"correct": true
}},
{{
"answer": "1998",
"correct": false
}}
]
}},
{{
"question": "Who was Julius Caesar?",
"answers": [
{{
"answer": "A Roman Emperor",
"correct": true
}},
{{
"answer": "Painter",
"correct": false
}},
{{
"answer": "Actor",
"correct": false
}},
{{
"answer": "Model",
"correct": false
}}
]
}}
]
}}
```
Your turn!
Questions: {context}
""",
)
]
)
formatting_chain = formatting_prompt | llm
@st.cache_data(show_spinner="Loading file...")
def split_file(file):
file_content = file.read()
file_path = f"./.cache/quiz_files/{file.name}"
with open(file_path, "wb") as f:
f.write(file_content)
splitter = CharacterTextSplitter.from_tiktoken_encoder(
separator="\n",
chunk_size=600,
chunk_overlap=100,
)
loader = UnstructuredFileLoader(file_path)
docs = loader.load_and_split(text_splitter=splitter)
return docs
# If a parameter cannot be hashed, or streamlit cannot build a signature for the data,
# add a separate hashable parameter so that streamlit re-runs the function whenever that parameter changes.
@st.cache_data(show_spinner="Making_quiz...")
def run_quiz_chain(_docs, topic):
chain = {"context": questions_chain} | formatting_chain | output_parser
return chain.invoke(_docs)
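# For example, calling run_quiz_chain(docs, "Rome") twice in a row reuses the cached quiz: the leading
# underscore tells st.cache_data not to hash _docs, so only the topic (or file name) invalidates the cache.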
@st.cache_data(show_spinner="Searching Wikipedia...")
def wiki_search(term):
retriever = WikipediaRetriever(
top_k_results=5,
# lang="ko",
)
docs = retriever.get_relevant_documents(term)
return docs
def format_docs(docs):
return "\n\n".join(document.page_content for document in docs)
with st.sidebar:
    docs = None
    topic = None
choice = st.selectbox(
"Choose what you want to use.",
(
"File",
"Wikipedia Article",
),
)
if choice == "File":
        file = st.file_uploader("Upload a .docx, .txt or .pdf file", type=["pdf", "txt", "docx"])
if file:
docs = split_file(file)
else:
topic = st.text_input("Search Wikipedia...")
if topic:
docs = wiki_search(topic)
if not docs:
st.markdown(
"""
    Welcome to QuizGPT.
    I will make a quiz from Wikipedia articles or files you upload to test your knowledge and help you study.
    Get started by uploading a file or searching on Wikipedia in the sidebar.
"""
)
else:
response = run_quiz_chain(docs, topic if topic else file.name)
st.write(response)
with st.form("questions_form"):
for question in response["questions"]:
st.write(question["question"])
value = st.radio(
"Select an option",
[answer["answer"] for answer in question["answers"]],
index=None,
)
if {"answer": value, "correct": True} in question["answers"]:
st.success("Correct!")
elif value is not None:
st.error("Wrong")
button = st.form_submit_button() | [
"[('system', '\\n You are a powerful formatting algorithm.\\n \\n You format exam questions into JSON format.\\n Answers with (o) are the correct ones.\\n \\n Example Input:\\n\\n Question: What is the color of the ocean?\\n Answers: Red|Yellow|Green|Blue(o)\\n \\n Question: What is the capital or Georgia?\\n Answers: Baku|Tbilisi(o)|Manila|Beirut\\n \\n Question: When was Avatar released?\\n Answers: 2007|2001|2009(o)|1998\\n \\n Question: Who was Julius Caesar?\\n Answers: A Roman Emperor(o)|Painter|Actor|Model\\n \\n \\n Example Output:\\n \\n ```json\\n {{ \"questions\": [\\n {{\\n \"question\": \"What is the color of the ocean?\",\\n \"answers\": [\\n {{\\n \"answer\": \"Red\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Yellow\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Green\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Blue\",\\n \"correct\": true\\n }}\\n ]\\n }},\\n {{\\n \"question\": \"What is the capital or Georgia?\",\\n \"answers\": [\\n {{\\n \"answer\": \"Baku\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Tbilisi\",\\n \"correct\": true\\n }},\\n {{\\n \"answer\": \"Manila\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Beirut\",\\n \"correct\": false\\n }}\\n ]\\n }},\\n {{\\n \"question\": \"When was Avatar released?\",\\n \"answers\": [\\n {{\\n \"answer\": \"2007\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"2001\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"2009\",\\n \"correct\": true\\n }},\\n {{\\n \"answer\": \"1998\",\\n \"correct\": false\\n }}\\n ]\\n }},\\n {{\\n \"question\": \"Who was Julius Caesar?\",\\n \"answers\": [\\n {{\\n \"answer\": \"A Roman Emperor\",\\n \"correct\": true\\n }},\\n {{\\n \"answer\": \"Painter\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Actor\",\\n \"correct\": false\\n }},\\n {{\\n \"answer\": \"Model\",\\n \"correct\": false\\n }}\\n ]\\n }}\\n ]\\n }}\\n ```\\n Your turn!\\n\\n Questions: {context}\\n\\n')]",
"\n You are a helpful assistant that is role playing as a teacher.\n \n Based ONLY on the following context make 10 questions to test the user's\n knowledge about the text.\n \n Each question should have 4 answer, three of them must be incorrect and one\n should be correct.\n \n Use (o) to signal the correct answer.\n \n Question examples:\n \n Question: What is the color of the ocean?\n Answer: Red | Yellow | Green | Blue(o)\n \n Question: What is the capital or Georgia?\n Answer: Baku | Tbilisi(o) | Manila | Beirut\n \n Question: When was Avatar released?\n Asnwer: 2007 | 2001 | 2009(o) | 1998\n \n Question Who was Julius Caesar?\n Answer: A Roman Emperor(o) | Painter | Actor | Model\n \n Your turn!\n \n Context: {context}\n ",
"[('system', \"\\n You are a helpful assistant that is role playing as a teacher.\\n \\n Based ONLY on the following context make 10 questions to test the user's\\n knowledge about the text.\\n \\n Each question should have 4 answer, three of them must be incorrect and one\\n should be correct.\\n \\n Use (o) to signal the correct answer.\\n \\n Question examples:\\n \\n Question: What is the color of the ocean?\\n Answer: Red | Yellow | Green | Blue(o)\\n \\n Question: What is the capital or Georgia?\\n Answer: Baku | Tbilisi(o) | Manila | Beirut\\n \\n Question: When was Avatar released?\\n Asnwer: 2007 | 2001 | 2009(o) | 1998\\n \\n Question Who was Julius Caesar?\\n Answer: A Roman Emperor(o) | Painter | Actor | Model\\n \\n Your turn!\\n \\n Context: {context}\\n \")]",
"\n You are a powerful formatting algorithm.\n \n You format exam questions into JSON format.\n Answers with (o) are the correct ones.\n \n Example Input:\n\n Question: What is the color of the ocean?\n Answers: Red|Yellow|Green|Blue(o)\n \n Question: What is the capital or Georgia?\n Answers: Baku|Tbilisi(o)|Manila|Beirut\n \n Question: When was Avatar released?\n Answers: 2007|2001|2009(o)|1998\n \n Question: Who was Julius Caesar?\n Answers: A Roman Emperor(o)|Painter|Actor|Model\n \n \n Example Output:\n \n ```json\n {{ \"questions\": [\n {{\n \"question\": \"What is the color of the ocean?\",\n \"answers\": [\n {{\n \"answer\": \"Red\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Yellow\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Green\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Blue\",\n \"correct\": true\n }}\n ]\n }},\n {{\n \"question\": \"What is the capital or Georgia?\",\n \"answers\": [\n {{\n \"answer\": \"Baku\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Tbilisi\",\n \"correct\": true\n }},\n {{\n \"answer\": \"Manila\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Beirut\",\n \"correct\": false\n }}\n ]\n }},\n {{\n \"question\": \"When was Avatar released?\",\n \"answers\": [\n {{\n \"answer\": \"2007\",\n \"correct\": false\n }},\n {{\n \"answer\": \"2001\",\n \"correct\": false\n }},\n {{\n \"answer\": \"2009\",\n \"correct\": true\n }},\n {{\n \"answer\": \"1998\",\n \"correct\": false\n }}\n ]\n }},\n {{\n \"question\": \"Who was Julius Caesar?\",\n \"answers\": [\n {{\n \"answer\": \"A Roman Emperor\",\n \"correct\": true\n }},\n {{\n \"answer\": \"Painter\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Actor\",\n \"correct\": false\n }},\n {{\n \"answer\": \"Model\",\n \"correct\": false\n }}\n ]\n }}\n ]\n }}\n ```\n Your turn!\n\n Questions: {context}\n\n"
] |
2024-01-10 | frimin/CLIPImageSearchWebUI | module~webui~search_vector.py | from PIL import Image
import gradio as gr
import os
import numpy as np
import time
import torch
import json
import hashlib
from pathlib import Path
from langchain.vectorstores.faiss import FAISS
from langchain.embeddings import FakeEmbeddings
from langchain.docstore.in_memory import InMemoryDocstore
from module.search_vector_core import (
clip_classifier,
clustering,
aesthetic_predictor,
deduplication,
search_image_load_vecdb,
)
from module.webui.components.search import create_image_delete
from module.search_vector_core.search_state import SearchVectorPageState, open_and_try_resize_image
import subprocess
import uuid
from tqdm import tqdm
import module.webui.components.search_image_save as search_image_save
import module.utils.constants_util as constants_util
import numpy as np
import module.webui.components.create_embedding as create_embedding
from module.webui.components import on_load_search_target, set_search_target
from module.data import (
get_clip_model,
get_vector_db_mgr,
get_webui_configs,
get_cache_root,
CLIPWarpper
)
local_state: SearchVectorPageState = SearchVectorPageState()
def get_compolent() -> SearchVectorPageState:
return local_state
def on_load_page(search_target: dict, select_search_target: dict):
cfg = get_webui_configs().get_cfg()
search_target, select_search_target = on_load_search_target(search_target, select_search_target)
return (
search_target,
select_search_target,
cfg.search.default_top_k,
)
def on_search_with_image(search_target: dict,
select_search_target: list[str] | None,
search_history: dict,
select_search_history: list[str] | None,
image: Image,
top_k_number,
page_size: float,
progress = gr.Progress(track_tqdm=True)):
vector_mgr = get_vector_db_mgr()
if not image:
raise gr.Error("未指定图片")
if vector_mgr.is_empty():
raise gr.Error("未加载任何库")
cfg = get_webui_configs().get_cfg()
max_top_k = cfg.search.max_top_k
max_page_size = cfg.search.max_page_size
if top_k_number > max_top_k:
raise gr.Error(f"搜索参数的 Top K 不能超过 {max_top_k}")
if page_size > max_page_size:
raise gr.Error(f"搜索参数的分页大小不能超过 {max_page_size}")
if select_search_target is None or len(select_search_target) == 0:
raise gr.Error("没有选择查询目标")
clip_model = get_clip_model()
with clip_model.get_model() as m:
clip_inputs = m.processor(images=image, return_tensors="pt", padding=True)
clip_inputs["pixel_values"] = clip_inputs["pixel_values"].to(clip_model.device)
image_features = m.model.get_image_features(**clip_inputs)
embedding = image_features[0]
embedding /= embedding.norm(dim=-1, keepdim=True)
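    # With the query feature normalised to unit length, the FAISS nearest-neighbour ranking below is
    # equivalent to ranking by cosine similarity (assuming the stored image embeddings were normalised
    # the same way when the library was built).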
msg = f"查询完毕"
search_name = f"#{{n}} 对 {{target}} 图片查询"
preview_image_with_label, preview_page_state, preview_search_name, search_history = local_state.search_with_save_page(
embedding=embedding.tolist(),
top_k=top_k_number,
search_history=search_history,
select_search_target=select_search_target,
page_size=page_size,
search_name=search_name
)
return local_state.update_viewer(
page_state=preview_page_state,
image_and_label=preview_image_with_label,
search_target=search_target,
search_history=search_history,
select_search_name=preview_search_name,
msg=msg,
progress=progress,
)
def on_search_with_prompt(search_target: dict,
select_search_target: list[str] | None,
search_history: dict,
select_search_history: list[str] | None,
prompt: str,
top_k_number:float,
page_size: float,
progress = gr.Progress(track_tqdm=True)):
vector_mgr = get_vector_db_mgr()
if not prompt:
raise gr.Error("未指定提示词")
if vector_mgr.is_empty():
raise gr.Error("未加载任何库")
cfg = get_webui_configs().get_cfg()
max_top_k = cfg.search.max_top_k
max_page_size = cfg.search.max_page_size
if top_k_number > max_top_k:
raise gr.Error(f"搜索参数的 Top K 不能超过 {max_top_k}")
if page_size > max_page_size:
raise gr.Error(f"搜索参数的分页大小不能超过 {max_page_size}")
if select_search_target is None or len(select_search_target) == 0:
raise gr.Error("没有选择查询目标")
print(f"search by prompt: {prompt}")
clip_model = get_clip_model()
with clip_model.get_model() as m:
clip_inputs = m.processor(text=prompt, return_tensors="pt", padding=True)
clip_inputs["input_ids"] = clip_inputs["input_ids"].to(clip_model.device)
clip_inputs["attention_mask"] = clip_inputs["attention_mask"].to(clip_model.device)
embedding = m.model.get_text_features(**clip_inputs)[0]
embedding /= embedding.norm(dim=-1, keepdim=True)
search_name = f"#{{n}} 对 {{target}} 文本查询: {prompt}"
preview_image_with_label, preview_page_state, preview_search_name, search_history = local_state.search_with_save_page(
embedding=embedding.tolist(),
top_k=top_k_number,
search_history=search_history,
select_search_target=select_search_target,
page_size=page_size,
search_name=search_name
)
msg = f"查询完毕"
return local_state.update_viewer(
page_state=preview_page_state,
image_and_label=preview_image_with_label,
search_target=search_target,
search_history=search_history,
select_search_name=preview_search_name,
msg=msg,
progress=progress,
)
def page(block: gr.Blocks, args, top_elems):
local_state.init()
with gr.Blocks() as pageBlock:
local_state.msg_text = msg_text = top_elems.msg_text
local_state.image_file_with_lable_list = gr.State()
local_state.image_select = gr.State()
local_state.search_target = gr.State()
with gr.Tab(label="提示查询"):
with gr.Row():
gr.Markdown("提供图片提示 (image prompt) 或 文本提示 (text prompt),从一个或多个指定的查询目标中进行嵌入向量相似性搜索 (embedding similarity search),一个或多个结果将被添加到历史查询中。")
with gr.Row():
#search_file = gr.File(label="上传Embedding或图片路径文件", file_types=[".json", ".txt"])
#local_state.search_file = search_file
search_image = gr.Image(label="图片提示", type="pil")
search_text = gr.TextArea(label="文本提示", value="person, happy", info="查询提示文本,仅限于英文")
with gr.Row():
#gr.Button(value="仅浏览", variant="primary")
top_k_number = gr.Number(label="Top K", value=0, info="查询结果数量")
search_with_image_btn = gr.Button(value="图搜图", variant="primary")
search_with_prompt_btn = gr.Button(value="文搜图", variant="primary")
clip_classifier_compolents = clip_classifier.on_gui()
repeat_query_compolents = clustering.on_gui()
aesthetic_predictor_compolents = aesthetic_predictor.on_gui()
deduplication_compolents = deduplication.on_gui()
with gr.Row():
local_state.select_search_target = gr.Dropdown(multiselect=True,
label="查询目标", info="可以指定一个或者多个数据集目标创建查询",
interactive=True,
)
local_state.page_size = page_size = gr.Number(label="分页大小", value=20, info="查询结果的每页大小")
with gr.Row():
local_state.search_history = gr.State()
local_state.select_search_history = select_search_history = gr.Dropdown(label="查询历史", info="每次执行查询操作新的查询都会追加到最前")
with gr.Row():
local_state.image_gallery = gallery = gr.Gallery(label="查询浏览", columns=8, object_fit="contain", scale=5)
with gr.Column():
local_state.page_state = gr.State()
with gr.Group():
local_state.first_page_btn = first_page_btn = gr.Button("首页")
local_state.last_page_btn = last_page_btn = gr.Button("尾页")
local_state.prev_page_btn = prev_page_btn = gr.Button("上一页")
local_state.next_page_btn = next_page_btn = gr.Button("下一页")
with gr.Row():
local_state.page_index = gr.Number(label="当前页", value=1, interactive=True, min_width=60)
local_state.page_count = gr.Number(label="总页数", value=1, interactive=False, min_width=60)
with gr.Group():
goto_page_btn = gr.Button("跳转到")
clear_search_btn = gr.Button("清空结果")
transfer_to_img_search_btn = gr.Button("发送到搜图")
open_img_folder_btn = gr.Button("打开目录")
save_select_image_btn = gr.Button("保存当前选中到输出")
with gr.Row():
local_state.select_img_info = select_img_info = gr.Markdown("", visible=True)
with gr.Accordion(open=False, label="查询结果处理"):
with gr.Tab(label="查询导出"):
with gr.Row():
save_to_outdir_copy_type = gr.Dropdown(label="保存模式", choices=["保存当前查询", "保存所有查询"], value="保存当前查询", type="index", interactive=True)
save_to_outdir_skip_img_filesize = gr.Number(label="跳过小于此文件大小", info="单位千字节(KB)", value=0, interactive=True)
format_choices = ["不修改", "JPEG", "PNG"]
save_to_outdir_format = gr.Dropdown(label="保存格式为", info="指定新的保存格式", choices=format_choices, value=format_choices[0], type="index", interactive=True)
save_to_outdir_quality = gr.Number(label="保存质量", info="仅保存为 JPEG 有效" , value=95, interactive=True)
with gr.Row():
save_to_outdir_skip_img_pixel = gr.Number(label="跳过最小的边", info="当高或宽小于此像素时跳过,0为不启用", value=0, interactive=True)
save_to_outdir_skip_img_scale = gr.Number(label="跳过比例", info="当高宽比或宽高比大于此值时跳过,0为不启用,推荐值在[2,3]之间", value=0, interactive=True)
save_to_outdir_max_pixel = gr.Number(label="压缩最大边到", info="当高或宽大于此像素时等比缩小,0为不启用", value=0, interactive=True)
with gr.Row():
save_to_outdir_start_page = gr.Number(label="起始页", value=1, interactive=True)
save_to_outdir_end_page = gr.Number(label="结束页", value=-1, interactive=True)
save_to_outdir_max_export_count = gr.Number(label="每个查询最大输出数量", info="输出每个查询前N个图片,0为不启用", value=0, interactive=True)
save_to_outdir_copy_same_name_ext = gr.Textbox(label="拷贝同名文件", info="拷贝同名的其它后缀文件" , value=".txt,.caption,.json", interactive=True)
save_to_outdir_random_new_name = gr.Checkbox(label="随机新的文件名称", info="避免多次输出后的长文件名" , value=False, interactive=True)
with gr.Row():
default_save_path = os.path.join(Path.home(), "Pictures", "CLIPImageSearchWebUI")
save_to_outdir = gr.Textbox(label="保存结果图片到目录", scale=5, value=default_save_path)
save_to_outdir_btn = gr.Button("保存")
create_image_delete(top_elems, local_state)
search_image_load_vecdb_compolents = search_image_load_vecdb.on_gui()
set_search_target([local_state.search_target, local_state.select_search_target])
pageBlock.load(fn=on_load_page, inputs=[
local_state.search_target,
local_state.select_search_target
],
outputs=[
local_state.search_target,
local_state.select_search_target,
top_k_number,
])
image_viewer_outputs = local_state.get_image_viewer_outputs()
search_with_image_btn.click(fn=on_search_with_image, inputs=[
local_state.search_target,
local_state.select_search_target,
local_state.search_history,
local_state.select_search_history,
search_image,
top_k_number,
page_size
], outputs=image_viewer_outputs)
search_with_prompt_btn.click(fn=on_search_with_prompt, inputs=[
local_state.search_target,
local_state.select_search_target,
local_state.search_history,
local_state.select_search_history,
search_text,
top_k_number,
page_size
], outputs=image_viewer_outputs)
        # pagination handlers
page_state_inputs = [ local_state.page_state ]
def on_first_page(page_state, progress = gr.Progress()):
if page_state is None:
raise constants_util.NO_QUERY_RESULT_ERROR
page_state["page_index"] = 1
return local_state.update_viewer_page(page_state, progress)
def on_last_page(page_state, progress = gr.Progress()):
if page_state is None:
raise constants_util.NO_QUERY_RESULT_ERROR
page_state["page_index"] = page_state["page_count"]
return local_state.update_viewer_page(page_state, progress)
def on_prev_page(page_state, progress = gr.Progress()):
if page_state is None:
raise constants_util.NO_QUERY_RESULT_ERROR
page_state["page_index"] -= 1
page_state["page_index"] = max(page_state["page_index"], 1)
return local_state.update_viewer_page(page_state, progress)
def on_next_page(page_state, progress = gr.Progress()):
if page_state is None:
raise constants_util.NO_QUERY_RESULT_ERROR
page_state["page_index"] += 1
page_state["page_index"] = min(page_state["page_index"], page_state["page_count"])
return local_state.update_viewer_page(page_state, progress)
def on_goto_page(page_state, page_index: float, progress = gr.Progress()):
if page_state is None:
raise constants_util.NO_QUERY_RESULT_ERROR
page_state["page_index"] = int(page_index)
page_state["page_index"] = max(page_state["page_index"], 1)
page_state["page_index"] = min(page_state["page_index"], page_state["page_count"])
return local_state.update_viewer_page(page_state, progress)
def on_select_history(page_state, search_history: dict, select_search_history: str, progress = gr.Progress()):
search_id = None
for search_name, id in search_history["search"]:
if search_name == select_search_history:
search_id = id
if search_id is None:
raise constants_util.INVALID_QUERT_RECORD_ERROR
page_state = local_state.load_page_meta(search_id)
page_state["page_index"] = 1
return local_state.update_viewer_page(page_state, progress)
first_page_btn.click(on_first_page, inputs=page_state_inputs, outputs=image_viewer_outputs)
last_page_btn.click(on_last_page, inputs=page_state_inputs, outputs=image_viewer_outputs)
prev_page_btn.click(on_prev_page, inputs=page_state_inputs, outputs=image_viewer_outputs)
next_page_btn.click(on_next_page, inputs=page_state_inputs, outputs=image_viewer_outputs)
goto_page_btn.click(on_goto_page, inputs=page_state_inputs + [ local_state.page_index ], outputs=image_viewer_outputs)
select_search_history.select(on_select_history, inputs=page_state_inputs + [local_state.search_history, select_search_history], outputs=image_viewer_outputs)
        def on_clear_search():
            """Clear the search results."""
return local_state.update_viewer(None, [], msg="已清理", search_target=None, search_history=None)
clear_search_btn.click(fn=on_clear_search, outputs=image_viewer_outputs)
        def on_select_img(image_file_with_lable_list: list[tuple[str, str]], evt: gr.SelectData):
            """Update the label info when an image is selected."""
item = image_file_with_lable_list[evt.index]
with Image.open(item[0]) as img:
width, height = img.width, img.height
text = f"标签: **{item[1]}**\n\n原始文件路径: **{item[0]}**\n\n分辨率:{width}x{height}"
return (text, evt.index)
gallery.select(fn=on_select_img, inputs=[local_state.image_file_with_lable_list], outputs=[select_img_info, local_state.image_select], show_progress=True)
def on_transfer_to_img_search(image_file_with_lable_list: list[tuple[str, str]], select_index: float):
if image_file_with_lable_list is None:
raise constants_util.NO_QUERY_RESULT_ERROR
select_index = int(select_index)
if select_index < 0:
return
item = image_file_with_lable_list[select_index]
filename_with_ext = item[0]
filename_without_ext, _ = os.path.splitext(filename_with_ext)
cache_root = os.path.join(get_cache_root().cache_root, "preview")
hash_id = hashlib.sha1(filename_without_ext.encode('utf-8')).hexdigest()
cache_file = os.path.join(cache_root, f"{hash_id}.jpg")
image_file = open_and_try_resize_image(filename_with_ext, cache_file, local_state.cache_image_max_size, local_state.greater_than_size)
return image_file
transfer_to_img_search_btn.click(fn=on_transfer_to_img_search, inputs=[local_state.image_file_with_lable_list, local_state.image_select], outputs=[search_image])
        def on_open_folder(image_file_with_lable_list: list[tuple[str, str]], select_index: float):
            """Open the folder containing the selected image."""
if image_file_with_lable_list is None:
raise constants_util.NO_QUERY_RESULT_ERROR
select_index = int(select_index)
if select_index < 0:
return
item = image_file_with_lable_list[select_index]
subprocess.Popen(f'explorer /select,"{item[0]}"')
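            # (The "/select" switch above is specific to Windows Explorer, so this only works on Windows.)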
open_img_folder_btn.click(fn=on_open_folder, inputs=[local_state.image_file_with_lable_list, local_state.image_select])
save_select_image_btn.click(fn=search_image_save.save_select_image,
inputs=[save_to_outdir, local_state.image_file_with_lable_list, local_state.image_select, save_to_outdir_copy_same_name_ext],
outputs=[select_img_info])
save_to_outdir_btn.click(fn=search_image_save.save_query_image_to_dir, inputs=page_state_inputs + [
local_state.search_history,
save_to_outdir_copy_type,
save_to_outdir,
save_to_outdir_start_page,
save_to_outdir_end_page,
save_to_outdir_max_export_count,
save_to_outdir_copy_same_name_ext,
save_to_outdir_random_new_name,
save_to_outdir_skip_img_filesize,
save_to_outdir_skip_img_pixel,
save_to_outdir_skip_img_scale,
save_to_outdir_max_pixel,
save_to_outdir_format,
save_to_outdir_quality
], outputs = [
msg_text,
save_to_outdir,
])
clip_classifier.on_bind(search_state=local_state, compolents=clip_classifier_compolents)
clustering.on_bind(search_state=local_state, compolents=repeat_query_compolents)
aesthetic_predictor.on_bind(search_state=local_state, compolents=aesthetic_predictor_compolents)
deduplication.on_bind(search_state=local_state, compolents=deduplication_compolents)
#search_image_delete.on_bind(search_state=local_state, compolents=search_image_delete_compolents)
search_image_load_vecdb.on_bind(search_state=local_state, compolents=search_image_load_vecdb_compolents)
| [] |
2024-01-10 | frimin/CLIPImageSearchWebUI | module~search_vector_core~search_state.py | import os
import hashlib
import gradio as gr
import module.utils.constants_util as constants_util
import json
from PIL import Image
import time
import uuid
import torch
from langchain.vectorstores.faiss import FAISS
from tqdm import tqdm
from module.data import (
get_vector_db_mgr,
get_webui_configs,
get_cache_root,
VectorDatabase,
VectorDatabaseManager
)
def chunks(l, n):
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
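    # e.g. list(chunks([1, 2, 3, 4, 5], 2)) -> [[1, 2], [3, 4], [5]]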
def open_and_try_resize_image(image_path, cache_file, max_side, greater_than_size) -> str:
if os.path.exists(cache_file):
return cache_file
file_stats = os.stat(image_path)
    if file_stats.st_size <= greater_than_size:  # only build a cache for files larger than this size
return image_path
with Image.open(image_path) as image:
width, height = image.size
        # both sides are already smaller than the target size, no resizing needed
if width < max_side and height < max_side:
return image_path
if width > height:
width_pct = max_side / width
width = max_side
height = height * width_pct
else:
height_pct = max_side / height
width = width * height_pct
height = max_side
new_image = image.convert('RGB')
new_image = new_image.resize((int(width), int(height)))
        # PIL writes the JPEG itself, so no separate file handle is needed here
        new_image.save(cache_file, format="JPEG", quality=80, optimize=True)
        return cache_file
class SearchVectorPageState():
msg_text:gr.Textbox()
search_file: gr.File = None
page_index: gr.Number()
page_count: gr.Number()
page_state: gr.State()
page_select: gr.Number()
image_gallery: gr.Gallery()
    # original file paths and labels of the current page
image_file_with_lable_list: gr.State()
image_select: gr.State()
select_img_info: gr.Markdown()
first_page_btn: gr.Button()
last_page_btn: gr.Button()
prev_page_btn: gr.Button()
next_page_btn: gr.Button()
select_search_target: gr.Dropdown()
search_target: gr.State()
search_history: gr.State()
select_search_history: gr.Dropdown()
search_count = 0
page_size: gr.Number()
def __init__(self) -> None:
pass
def init(self):
cfg = get_webui_configs().get_cfg()
self.cache_image_max_size: int = cfg.cache.image.max_size
self.greater_than_size: int = cfg.cache.image.greater_than_size
def get_image_viewer_outputs(self):
return [
self.page_state,
self.msg_text,
self.image_file_with_lable_list,
self.image_gallery,
self.page_index,
self.page_count,
self.select_img_info,
self.image_select,
self.search_target,
self.search_history,
self.select_search_history,
]
def open_image_or_create_cache(self, image_and_label: list[tuple[str, str]], progress: gr.Progress):
preview_images = []
raw_images = []
labels = []
cache_root = os.path.join(get_cache_root().cache_root, "preview")
if not os.path.exists(cache_root):
os.mkdir(cache_root)
for filename, label in progress.tqdm(image_and_label, desc="创建图像缓存"):
find_img = False
hash_id = hashlib.sha1(filename.encode('utf-8')).hexdigest()
cache_file = os.path.join(cache_root, f"{hash_id}.jpg")
for image_ext in constants_util.IMAGE_EXTENSIONS:
filename_with_ext = filename + image_ext
if os.path.exists(filename_with_ext):
find_img=True
image_file = open_and_try_resize_image(filename_with_ext, cache_file, self.cache_image_max_size, self.greater_than_size)
preview_images.append(image_file)
raw_images.append(filename_with_ext)
break
if find_img:
labels.append(label)
else:
print(f"missing image file: {filename}")
return list(zip(preview_images, labels)), list(zip(raw_images, labels))
def load_page(self, search_id: str, page_index: int) -> list[tuple[str, str]]:
cache_root = os.path.join(get_cache_root().cache_root, "search_id", search_id)
pages_index_file = os.path.join(cache_root, "pages_index.json")
if not os.path.exists(pages_index_file):
return []
with open(os.path.join(cache_root, "pages_index.json"), "r") as f:
page_info = json.load(f)
page_pos = page_info[int(page_index) - 1]
with open(os.path.join(cache_root, "pages.json"), "r") as f:
f.seek(page_pos[0])
content = f.read(page_pos[1] - page_pos[0])
return json.loads(content)
def get_cache_root_path(self, search_id: str):
return os.path.join(get_cache_root().cache_root, "search_id", search_id)
def load_page_meta(self, search_id: str):
cache_root = self.get_cache_root_path(search_id)
with open(os.path.join(cache_root, "pages_meta.json"), "r") as f:
return json.load(f)
def save_pages(self,
search_id: str,
image_and_label: list[tuple[str, str]],
page_size: int,
indices = None,
db: VectorDatabase = None,
progress: gr.Progress = None) -> list[tuple[str, str]]:
cache_root = os.path.join(get_cache_root().cache_root, "search_id", search_id)
if not os.path.exists(cache_root):
os.makedirs(cache_root)
#pages = list(chunks(image_and_label, page_size))
#for i, v in enumerate(progress.tqdm(tqdm(pages, desc="写出页缓存文件"), desc="创建分页缓存")):
# with open(os.path.join(cache_root, f"page_{i + 1}.json"), "w") as f:
# json.dump(v, f)
n = 0
first_page = None
t0 = time.time()
with open(os.path.join(cache_root, "pages.json"), "w") as f:
page_info = []
for v in chunks(image_and_label, page_size):
if first_page is None:
first_page = v
n+=1
start_pos = f.tell()
                # calling json.dump for every page flushes each time and is slow, so serialize to a string first
json_string = json.dumps(v)
del v
f.write(json_string)
end_pos = f.tell()
page_info.append((start_pos, end_pos))
with open(os.path.join(cache_root, "pages_index.json"), "w") as f:
json.dump(page_info, f)
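        # pages.json stores every page back to back, and pages_index.json records each page's
        # (start, end) byte offsets so load_page() can seek straight to one page without parsing the whole file.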
with open(os.path.join(cache_root, "pages_meta.json"), "w") as f:
json.dump({ "page_count": n, "search_id": search_id }, f)
t1 = time.time()
save_db_root = os.path.join(cache_root, "vecdb")
save_index = 0
#if indices is not None and len(indices) > 0:
# indices = [i for i in indices if i >= 0]
if indices is not None and len(indices) > 0:
loader = torch.utils.data.DataLoader(indices, batch_size=5000)
for batch in tqdm(loader, desc="保存向量库"):
batch_embeds = torch.tensor(db.db.index.reconstruct_batch(batch))
data = []
for i, j in enumerate(batch):
j = int(j)
doc_uuid = db.db.index_to_docstore_id[j]
doc = db.db.docstore.search(doc_uuid)
filename = doc.page_content
if doc.metadata:
image_root = doc.metadata["root"]
filename_witout_ext = os.path.join(image_root, filename)
else:
filename_witout_ext = filename
data.append((filename_witout_ext, batch_embeds[i]))
vectorstore_new = FAISS.from_embeddings(text_embeddings=data, embedding=VectorDatabase.fake_embeddings)
vectorstore_new.save_local(os.path.join(save_db_root, f"{save_index}"))
save_index += 1
del vectorstore_new, data
#print(f"缓存搜索结果分页, time={t1-t0}")
return first_page, n
def search_with_save_page(self,
embedding : list[float],
top_k : int,
search_history: dict,
select_search_target: list[str],
page_size: int,
search_name: str = "搜索 {n}"):
        vector_mgr = get_vector_db_mgr()
        """Run the search and save the result pages."""
page_size = int(page_size)
page_size = max(page_size, 1)
page_size = min(page_size, 1000)
top_k = int(max(top_k, 1))
assert len(select_search_target) >= 1
preview_page_state = None
preview_image_with_label = None
preview_search_name = None
new_searchs = []
for i, target in enumerate(tqdm(select_search_target, desc="查询项目")):
search_id = str(uuid.uuid4())
image_and_label, indices, db = vector_mgr.search(embedding=embedding, top_k=top_k, variant=target)
self.search_count += 1
cur_search_name = search_name.format(n=self.search_count, target=target)
if i == 0:
preview_image_with_label, page_count = self.save_pages(search_id, image_and_label, page_size=page_size, indices=indices, db=db)
preview_page_state = { "search_id": search_id, "page_index": 1, "page_count": page_count }
preview_search_name = cur_search_name
else:
                # save only, no preview
self.save_pages(search_id, image_and_label, page_size=page_size, indices=indices, db=db)
            # update the list of search results
new_searchs.append([cur_search_name, search_id])
if search_history is None:
search_history = { "search": [] }
search_history["search"] = new_searchs + search_history["search"]
        # only the first page of the first search target is returned for preview
return preview_image_with_label, preview_page_state, preview_search_name, search_history
def update_viewer(self,
page_state: dict,
image_and_label: list[tuple[str, str]],
search_target: dict,
search_history: dict,
select_search_name: str = None,
msg: str = "已完成",
progress: gr.Progress = None):
if search_history is None:
search_history = { "search": [] }
if page_state is not None:
preview_images_with_label, raw_images_with_label = self.open_image_or_create_cache(image_and_label, progress=progress)
page_count = page_state["page_count"]
else:
select_search_name = ""
page_state = None
raw_images_with_label = preview_images_with_label = None
page_count = 1
return (
# page_state
page_state,
# msg_text
msg,
# image_file_with_lable_list
raw_images_with_label,
# image_gallery
preview_images_with_label,
# page_index
"1",
# page_count
page_count,
# select_img_info
"",
# image_select,
-1,
# search_target,
search_target if search_target is not None else gr.update(),
# search_history,
search_history,
# select_search_history,
gr.Dropdown.update(choices=[i[0] for i in search_history["search"]], value=select_search_name),
)
    def update_viewer_page(self, page_state: dict, progress: gr.Progress):
cur_page = self.load_page(page_state["search_id"], page_state["page_index"])
preview_images_with_label, raw_images_with_label = self.open_image_or_create_cache(cur_page, progress=progress)
return (
# page_state
page_state,
# msg_text
"已更新页面",
# image_file_with_lable_list
raw_images_with_label,
# image_gallery
preview_images_with_label,
# page_index
page_state["page_index"],
# page_count
page_state["page_count"],
# select_img_info
"",
# image_select
-1,
# select_search_target
gr.Dropdown.update(),
# search_history
gr.update(),
# select_search_history,
gr.update(),
) | [] |
2024-01-10 | eduardagnz/Modulo6 | ponderadas~pond-hist~chat_openai.py | import os
import openai
from dotenv import load_dotenv
load_dotenv()
def chat_whit_gpt(command):
    openai.api_key = os.getenv("OPENAI_API_KEY")
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "Complete essa história em um breve texto em apenas um pequeno parágrafo"},
{"role": "user", "content": command},
]
)
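    # Illustrative usage (not part of the original file):
    #   chat_whit_gpt("Era uma vez um robô que queria aprender a pintar...")
    #   returns the model's short, one-paragraph continuation of the story as a string.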
return response.choices[0].message["content"] | [
"Complete essa história em um breve texto em apenas um pequeno parágrafo"
] |
2024-01-10 | pdbarma/ComfyUI-AnimateDiff-Evolved | animatediff~motion_module.py | import torch
import torch.nn.functional as F
from einops import rearrange
from torch import Tensor, nn
import comfy.model_management as model_management
import comfy.model_patcher as comfy_model_patcher
from comfy.ldm.modules.diffusionmodules import openaimodel
from comfy.ldm.modules.diffusionmodules.openaimodel import ResBlock, SpatialTransformer
from comfy.model_patcher import ModelPatcher
from comfy.utils import calculate_parameters, load_torch_file
from .logger import logger
from .model_utils import ModelTypesSD, calculate_file_hash, get_motion_lora_path, get_motion_model_path, \
get_sd_model_type
from .motion_lora import MotionLoRAList, MotionLoRAWrapper
from .motion_module_ad import AnimDiffMotionWrapper, has_mid_block
from .motion_module_hsxl import HotShotXLMotionWrapper, TransformerTemporal
from .motion_utils import GenericMotionWrapper, InjectorVersion, normalize_min_max
# inject into ModelPatcher.clone to carry over injected params over to cloned ModelPatcher
orig_modelpatcher_clone = comfy_model_patcher.ModelPatcher.clone
def clone_injection(self, *args, **kwargs):
model = orig_modelpatcher_clone(self, *args, **kwargs)
if is_injected_mm_params(self):
set_injected_mm_params(model, get_injected_mm_params(self))
return model
comfy_model_patcher.ModelPatcher.clone = clone_injection
# cached motion modules
motion_modules: dict[str, GenericMotionWrapper] = {}
# cached motion loras
motion_loras: dict[str, MotionLoRAWrapper] = {}
# adapted from https://github.com/guoyww/AnimateDiff/blob/main/animatediff/utils/convert_lora_safetensor_to_diffusers.py
# Example LoRA keys:
# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_q_lora.down.weight
# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.processor.to_q_lora.up.weight
#
# Example model keys:
# down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight
#
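# For illustration, the string replacements below map a LoRA "down"/"up" key onto its motion-model key:
#   "...attention_blocks.0.processor.to_q_lora.down.weight" -> "...attention_blocks.0.to_q.weight"
# with "to_out." keys additionally gaining the ".0." segment used by the model ("to_out.0.weight").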
def apply_lora_to_mm_state_dict(model_dict: dict[str, Tensor], lora: MotionLoRAWrapper):
# TODO: generalize for both AD and HSXL
model_has_midblock = has_mid_block(model_dict)
lora_has_midblock = has_mid_block(lora.state_dict)
def get_version(has_midblock: bool):
return "v2" if has_midblock else "v1"
logger.info(f"Applying a {get_version(lora_has_midblock)} LoRA ({lora.info.name}) to a {get_version(model_has_midblock)} motion model.")
for key in lora.state_dict:
# if motion model doesn't have a mid_block, skip mid_block entries
if not model_has_midblock:
if "mid_block" in key: continue
# only process lora down key (we will process up at the same time as down)
if "up." in key: continue
# key to get up value
up_key = key.replace(".down.", ".up.")
# adapt key to match model_dict format - remove 'processor.', '_lora', 'down.', and 'up.'
model_key = key.replace("processor.", "").replace("_lora", "").replace("down.", "").replace("up.", "")
# model keys have a '0.' after all 'to_out.' weight keys
model_key = model_key.replace("to_out.", "to_out.0.")
weight_down = lora.state_dict[key]
weight_up = lora.state_dict[up_key]
# apply weights to model_dict - multiply strength by matrix multiplication of up and down weights
model_dict[model_key] += lora.info.strength * torch.mm(weight_up, weight_down).to(model_dict[model_key].device)
def load_motion_lora(lora_name: str) -> MotionLoRAWrapper:
# if already loaded, return it
lora_path = get_motion_lora_path(lora_name)
lora_hash = calculate_file_hash(lora_path, hash_every_n=3)
if lora_hash in motion_loras:
return motion_loras[lora_hash]
logger.info(f"Loading motion LoRA {lora_name}")
l_state_dict = load_torch_file(lora_path)
lora = MotionLoRAWrapper(l_state_dict, lora_hash)
# add motion LoRA to cache
motion_loras[lora_hash] = lora
return lora
def interpolate_pe_to_length(model_dict: dict[str, Tensor], key: str, new_length: int):
pe_shape = model_dict[key].shape
temp_pe = rearrange(model_dict[key], "(t b) f d -> t b f d", t=1)
temp_pe = F.interpolate(temp_pe, size=(new_length, pe_shape[-1]), mode="bilinear")
temp_pe = rearrange(temp_pe, "t b f d -> (t b) f d", t=1)
model_dict[key] = temp_pe
del temp_pe
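    # e.g. a positional-encoding tensor of shape (1, 24, channels) stretched to new_length=32 becomes
    # (1, 32, channels), with the intermediate frame encodings filled in by bilinear interpolation.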
def interpolate_pe_to_length_diffs(model_dict: dict[str, Tensor], key: str, new_length: int):
# TODO: fill out and try out
pe_shape = model_dict[key].shape
temp_pe = rearrange(model_dict[key], "(t b) f d -> t b f d", t=1)
temp_pe = F.interpolate(temp_pe, size=(new_length, pe_shape[-1]), mode="bilinear")
temp_pe = rearrange(temp_pe, "t b f d -> (t b) f d", t=1)
model_dict[key] = temp_pe
del temp_pe
def interpolate_pe_to_length_pingpong(model_dict: dict[str, Tensor], key: str, new_length: int):
if model_dict[key].shape[1] < new_length:
temp_pe = model_dict[key]
flipped_temp_pe = torch.flip(temp_pe[:, 1:-1, :], [1])
use_flipped = True
preview_pe = None
while model_dict[key].shape[1] < new_length:
preview_pe = model_dict[key]
model_dict[key] = torch.cat([model_dict[key], flipped_temp_pe if use_flipped else temp_pe], dim=1)
use_flipped = not use_flipped
del temp_pe
del flipped_temp_pe
del preview_pe
model_dict[key] = model_dict[key][:, :new_length]
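    # e.g. extending a 24-frame positional encoding to 32 frames yields frames [0..23] followed by the
    # reversed interior frames [22..15] before the final truncation to new_length.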
def freeze_mask_of_pe(model_dict: dict[str, Tensor], key: str):
pe_portion = model_dict[key].shape[2] // 64
first_pe = model_dict[key][:,:1,:]
model_dict[key][:,:,pe_portion:] = first_pe[:,:,pe_portion:]
del first_pe
def freeze_mask_of_attn(model_dict: dict[str, Tensor], key: str):
attn_portion = model_dict[key].shape[0] // 2
model_dict[key][:attn_portion,:attn_portion] *= 1.5
def apply_mm_settings(model_dict: dict[str, Tensor], mm_settings: 'MotionModelSettings') -> dict[str, Tensor]:
if not mm_settings.has_anything_to_apply():
return model_dict
for key in model_dict:
if "attention_blocks" in key:
if "pos_encoder" in key:
# apply simple motion pe stretch, if needed
if mm_settings.has_motion_pe_stretch():
new_pe_length = model_dict[key].shape[1] + mm_settings.motion_pe_stretch
interpolate_pe_to_length(model_dict, key, new_length=new_pe_length)
# apply pe_strength, if needed
if mm_settings.has_pe_strength():
model_dict[key] *= mm_settings.pe_strength
# apply pe_idx_offset, if needed
if mm_settings.has_initial_pe_idx_offset():
model_dict[key] = model_dict[key][:, mm_settings.initial_pe_idx_offset:]
# apply has_cap_initial_pe_length, if needed
if mm_settings.has_cap_initial_pe_length():
model_dict[key] = model_dict[key][:, :mm_settings.cap_initial_pe_length]
# apply interpolate_pe_to_length, if needed
if mm_settings.has_interpolate_pe_to_length():
interpolate_pe_to_length(model_dict, key, new_length=mm_settings.interpolate_pe_to_length)
# apply final_pe_idx_offset, if needed
if mm_settings.has_final_pe_idx_offset():
model_dict[key] = model_dict[key][:, mm_settings.final_pe_idx_offset:]
else:
# apply attn_strenth, if needed
if mm_settings.has_attn_strength():
model_dict[key] *= mm_settings.attn_strength
# apply specific attn_strengths, if needed
if mm_settings.has_any_attn_sub_strength():
if "to_q" in key and mm_settings.has_attn_q_strength():
model_dict[key] *= mm_settings.attn_q_strength
elif "to_k" in key and mm_settings.has_attn_k_strength():
model_dict[key] *= mm_settings.attn_k_strength
elif "to_v" in key and mm_settings.has_attn_v_strength():
model_dict[key] *= mm_settings.attn_v_strength
elif "to_out" in key:
if key.strip().endswith("weight") and mm_settings.has_attn_out_weight_strength():
model_dict[key] *= mm_settings.attn_out_weight_strength
elif key.strip().endswith("bias") and mm_settings.has_attn_out_bias_strength():
model_dict[key] *= mm_settings.attn_out_bias_strength
# apply other strength, if needed
elif mm_settings.has_other_strength():
model_dict[key] *= mm_settings.other_strength
return model_dict
def load_motion_module(model_name: str, motion_lora: MotionLoRAList = None, model: ModelPatcher = None, motion_model_settings = None) -> GenericMotionWrapper:
# if already loaded, return it
model_path = get_motion_model_path(model_name)
model_hash = calculate_file_hash(model_path, hash_every_n=50)
# load lora, if present
loras = []
if motion_lora is not None:
for lora_info in motion_lora.loras:
lora = load_motion_lora(lora_info.name)
lora.set_info(lora_info)
loras.append(lora)
loras.sort(key=lambda x: x.hash)
# use lora hashes with model hash
for lora in loras:
model_hash += lora.hash
model_hash = str(hash(model_hash))
# models are determined by combo self + applied loras
if model_hash in motion_modules:
return motion_modules[model_hash]
logger.info(f"Loading motion module {model_name}")
mm_state_dict = load_torch_file(model_path)
if motion_model_settings != None:
mm_state_dict = apply_mm_settings(mm_state_dict, motion_model_settings)
# load lora state dicts if exist
if len(loras) > 0:
for lora in loras:
# apply LoRA to mm_state_dict
apply_lora_to_mm_state_dict(mm_state_dict, lora)
# determine if motion module is SD_1.5 compatible or SDXL compatible
sd_model_type = ModelTypesSD.SD1_5
if model is not None:
sd_model_type = get_sd_model_type(model)
motion_module: GenericMotionWrapper = None
if sd_model_type == ModelTypesSD.SD1_5:
try:
motion_module = AnimDiffMotionWrapper(mm_state_dict=mm_state_dict, mm_hash=model_hash, mm_name=model_name, loras=loras)
except ValueError as e:
raise ValueError(f"Motion model {model_name} is not compatible with SD1.5-based model.", e)
elif sd_model_type == ModelTypesSD.SDXL:
try:
motion_module = HotShotXLMotionWrapper(mm_state_dict=mm_state_dict, mm_hash=model_hash, mm_name=model_name, loras=loras)
except ValueError as e:
raise ValueError(f"Motion model {model_name} is not compatible with SDXL-based model.", e)
else:
raise ValueError(f"SD model must be either SD1.5-based for AnimateDiff or SDXL-based for HotShotXL.")
# continue loading model
parameters = calculate_parameters(mm_state_dict, "")
usefp16 = model_management.should_use_fp16(model_params=parameters)
if usefp16:
logger.info("Using fp16, converting motion module to fp16")
motion_module.half()
offload_device = model_management.unet_offload_device()
motion_module = motion_module.to(offload_device)
motion_module.load_state_dict(mm_state_dict)
# add to motion_module cache
motion_modules[model_hash] = motion_module
return motion_module
def unload_motion_module(motion_module: GenericMotionWrapper):
logger.info(f"Removing motion module {motion_module.mm_name} from cache")
motion_modules.pop(motion_module.mm_hash, None)
##################################################################################
##################################################################################
# Injection-related classes and functions
def inject_params_into_model(model: ModelPatcher, params: 'InjectionParams') -> ModelPatcher:
model = model.clone()
# clean unet, if necessary
clean_contained_unet(model)
set_injected_mm_params(model, params)
return model
def eject_params_from_model(model: ModelPatcher) -> ModelPatcher:
model = model.clone()
# clean unet, if necessary
clean_contained_unet(model)
del_injected_mm_params(model)
return model
def inject_motion_module(model: ModelPatcher, motion_module: GenericMotionWrapper, params: 'InjectionParams'):
if params.context_length and params.video_length > params.context_length:
logger.info(f"Sliding context window activated - latents passed in ({params.video_length}) greater than context_length {params.context_length}.")
else:
logger.info(f"Regular AnimateDiff activated - latents passed in ({params.video_length}) less or equal to context_length {params.context_length}.")
params.reset_context()
# if no context_length, treat video length as intended AD frame window
if not params.context_length:
if params.video_length > motion_module.encoding_max_len:
raise ValueError(f"Without a context window, AnimateDiff model {motion_module.mm_name} has upper limit of {motion_module.encoding_max_len} frames, but received {params.video_length} latents.")
motion_module.set_video_length(params.video_length, params.full_length)
# otherwise, treat context_length as intended AD frame window
else:
if params.context_length > motion_module.encoding_max_len:
raise ValueError(f"AnimateDiff model {motion_module.mm_name} has upper limit of {motion_module.encoding_max_len} frames for a context window, but received context length of {params.context_length}.")
motion_module.set_video_length(params.context_length, params.full_length)
# inject model
params.set_version(motion_module)
logger.info(f"Injecting motion module {motion_module.mm_name} version {motion_module.version}.")
injectors[params.injector](model, motion_module)
def eject_motion_module(model: ModelPatcher):
try:
# handle injected params
if is_injected_mm_params(model):
params = get_injected_mm_params(model)
logger.info(f"Ejecting motion module {params.model_name} version {params.version}.")
else:
logger.info(f"Motion module not injected, skip unloading.")
# clean unet, just in case
finally:
clean_contained_unet(model)
def clean_contained_unet(model: ModelPatcher):
if is_injected_unet_version(model):
logger.info("Cleaning motion module from unet.")
injector = get_injected_unet_version(model)
ejectors[injector](model)
############################################################################################################
## AnimateDiff
def _inject_motion_module_to_unet(model: ModelPatcher, motion_module: 'AnimDiffMotionWrapper'):
unet: openaimodel.UNetModel = model.model.diffusion_model
for mm_idx, unet_idx in enumerate([1, 2, 4, 5, 7, 8, 10, 11]):
mm_idx0, mm_idx1 = mm_idx // 2, mm_idx % 2
unet.input_blocks[unet_idx].append(
motion_module.down_blocks[mm_idx0].motion_modules[mm_idx1]
)
for unet_idx in range(12):
mm_idx0, mm_idx1 = unet_idx // 3, unet_idx % 3
if unet_idx % 3 == 2 and unet_idx != 11:
unet.output_blocks[unet_idx].insert(
-1, motion_module.up_blocks[mm_idx0].motion_modules[mm_idx1]
)
else:
unet.output_blocks[unet_idx].append(
motion_module.up_blocks[mm_idx0].motion_modules[mm_idx1]
)
if motion_module.mid_block is not None:
unet.middle_block.insert(-1, motion_module.mid_block.motion_modules[0]) # only 1 VanillaTemporalModule
# keep track of if unet blocks actually affected
set_injected_unet_version(model, InjectorVersion.V1_V2)
def _eject_motion_module_from_unet(model: ModelPatcher):
unet: openaimodel.UNetModel = model.model.diffusion_model
for unet_idx in [1, 2, 4, 5, 7, 8, 10, 11]:
unet.input_blocks[unet_idx].pop(-1)
for unet_idx in range(12):
if unet_idx % 3 == 2 and unet_idx != 11:
unet.output_blocks[unet_idx].pop(-2)
else:
unet.output_blocks[unet_idx].pop(-1)
if len(unet.middle_block) > 3: # SD1.5 UNet has 3 expected middle_blocks - more means injected
unet.middle_block.pop(-2)
# remove attr; ejected
del_injected_unet_version(model)
############################################################################################################
############################################################################################################
## HotShot XL
def _inject_hsxl_motion_module_to_unet(model: ModelPatcher, motion_module: 'HotShotXLMotionWrapper'):
unet: openaimodel.UNetModel = model.model.diffusion_model
# inject input (down) blocks
# HotShotXL mm contains 3 downblocks, each with 2 TransformerTemporals - 6 in total
# per_block is the number of TransformerTemporal blocks per down block
_perform_hsxl_motion_module_injection(unet.input_blocks, motion_module.down_blocks, injection_goal=6, per_block=2)
# inject output (up) blocks
# HotShotXL mm contains 3 upblocks, each with 3 TransformerTemporals - 9 in total
_perform_hsxl_motion_module_injection(unet.output_blocks, motion_module.up_blocks, injection_goal=9, per_block=3)
# inject mid block, if needed (encapsulate in list to make structure compatible)
if motion_module.mid_block is not None:
_perform_hsxl_motion_module_injection(unet.middle_block, [motion_module.mid_block], injection_goal=1, per_block=1)
# keep track of whether the unet blocks were actually affected
set_injected_unet_version(model, InjectorVersion.HOTSHOTXL_V1)
def _perform_hsxl_motion_module_injection(unet_blocks: nn.ModuleList, mm_blocks: nn.ModuleList, injection_goal: int, per_block: int):
# Rules for injection:
# For each component list in a unet block:
# if SpatialTransformer exists in list, place next block after last occurrence
# elif ResBlock exists in list, place next block after first occurrence
# else don't place a block (see the standalone sketch after this function)
injection_count = 0
unet_idx = 0
# only stop injecting when modules exhausted
while injection_count < injection_goal:
# figure out which TransformerTemporal from mm to inject
mm_blk_idx, mm_tt_idx = injection_count // per_block, injection_count % per_block
# figure out layout of unet block components
st_idx = -1 # SpatialTransformer index
res_idx = -1 # first ResBlock index
# first, figure out the indices of the relevant blocks
for idx, component in enumerate(unet_blocks[unet_idx]):
if type(component) == SpatialTransformer:
st_idx = idx
elif type(component) == ResBlock and res_idx < 0:
res_idx = idx
# if SpatialTransformer exists, inject right after
if st_idx >= 0:
#logger.info(f"HSXL: injecting after ST({st_idx})")
unet_blocks[unet_idx].insert(st_idx+1, mm_blocks[mm_blk_idx].temporal_attentions[mm_tt_idx])
injection_count += 1
# otherwise, if only ResBlock exists, inject right after
elif res_idx >= 0:
#logger.info(f"HSXL: injecting after Res({res_idx})")
unet_blocks[unet_idx].insert(res_idx+1, mm_blocks[mm_blk_idx].temporal_attentions[mm_tt_idx])
injection_count += 1
# increment unet_idx
unet_idx += 1
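# A minimal, self-contained sketch of the placement rule above, using hypothetical stand-in
# classes instead of the real SpatialTransformer/ResBlock/TransformerTemporal types.
# It is illustration only and is not called anywhere in this module.
def _demo_hsxl_placement_rule():
    class _ST: pass   # stands in for SpatialTransformer
    class _Res: pass  # stands in for ResBlock
    class _TT: pass   # stands in for the TransformerTemporal being injected
    block = [_Res(), _ST(), _ST()]
    st_idx = max((i for i, c in enumerate(block) if isinstance(c, _ST)), default=-1)
    res_idx = next((i for i, c in enumerate(block) if isinstance(c, _Res)), -1)
    if st_idx >= 0:
        block.insert(st_idx + 1, _TT())   # after the last SpatialTransformer
    elif res_idx >= 0:
        block.insert(res_idx + 1, _TT())  # otherwise after the first ResBlock
    return [type(c).__name__ for c in block]  # ['_Res', '_ST', '_ST', '_TT']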
def _eject_hsxl_motion_module_from_unet(model: ModelPatcher):
unet: openaimodel.UNetModel = model.model.diffusion_model
# remove from input blocks
_perform_hsxl_motion_module_ejection(unet.input_blocks)
# remove from output blocks
_perform_hsxl_motion_module_ejection(unet.output_blocks)
# remove from middle block (encapsulate in list to make structure compatible)
_perform_hsxl_motion_module_ejection([unet.middle_block])
# remove attr; ejected
del_injected_unet_version(model)
def _perform_hsxl_motion_module_ejection(unet_blocks: nn.ModuleList):
# eject all TransformerTemporal objects from all blocks
for block in unet_blocks:
idx_to_pop = []
for idx, component in enumerate(block):
if type(component) == TransformerTemporal:
idx_to_pop.append(idx)
# pop in reverse order, so as not to disturb what the remaining indices refer to
for idx in sorted(idx_to_pop, reverse=True):
block.pop(idx)
#logger.info(f"HSXL: ejecting {idx_to_pop}")
############################################################################################################
injectors = {
InjectorVersion.V1_V2: _inject_motion_module_to_unet,
InjectorVersion.HOTSHOTXL_V1: _inject_hsxl_motion_module_to_unet,
}
ejectors = {
InjectorVersion.V1_V2: _eject_motion_module_from_unet,
InjectorVersion.HOTSHOTXL_V1: _eject_hsxl_motion_module_from_unet,
}
MM_INJECTED_ATTR = "_mm_injected_params"
MM_UNET_INJECTION_ATTR = "_mm_is_unet_injected"
class InjectionParams:
def __init__(self, video_length: int, unlimited_area_hack: bool, apply_mm_groupnorm_hack: bool, beta_schedule: str, injector: str, model_name: str,
apply_v2_models_properly: bool=False) -> None:
self.video_length = video_length
self.full_length = None
self.unlimited_area_hack = unlimited_area_hack
self.apply_mm_groupnorm_hack = apply_mm_groupnorm_hack
self.beta_schedule = beta_schedule
self.injector = injector
self.model_name = model_name
self.apply_v2_models_properly = apply_v2_models_properly
self.context_length: int = None
self.context_stride: int = None
self.context_overlap: int = None
self.context_schedule: str = None
self.closed_loop: bool = False
self.sync_context_to_pe = False
self.version: str = None
self.loras: MotionLoRAList = None
self.motion_model_settings = MotionModelSettings()
def set_version(self, motion_module: GenericMotionWrapper):
self.version = motion_module.version
def set_context(self, context_length: int, context_stride: int, context_overlap: int, context_schedule: str, closed_loop: bool, sync_context_to_pe: bool=False):
self.context_length = context_length
self.context_stride = context_stride
self.context_overlap = context_overlap
self.context_schedule = context_schedule
self.closed_loop = closed_loop
self.sync_context_to_pe = sync_context_to_pe
def set_loras(self, loras: MotionLoRAList):
self.loras = loras.clone()
def set_motion_model_settings(self, motion_model_settings: 'MotionModelSettings'):
if motion_model_settings is None:
self.motion_model_settings = MotionModelSettings()
else:
self.motion_model_settings = motion_model_settings
def reset_context(self):
self.context_length = None
self.context_stride = None
self.context_overlap = None
self.context_schedule = None
self.closed_loop = False
def clone(self) -> 'InjectionParams':
new_params = InjectionParams(
self.video_length, self.unlimited_area_hack, self.apply_mm_groupnorm_hack,
self.beta_schedule, self.injector, self.model_name, apply_v2_models_properly=self.apply_v2_models_properly,
)
new_params.full_length = self.full_length
new_params.version = self.version
new_params.set_context(
context_length=self.context_length, context_stride=self.context_stride,
context_overlap=self.context_overlap, context_schedule=self.context_schedule,
closed_loop=self.closed_loop, sync_context_to_pe=self.sync_context_to_pe,
)
if self.loras is not None:
new_params.loras = self.loras.clone()
new_params.set_motion_model_settings(self.motion_model_settings)
return new_params
# Injected Param Functions
def is_injected_mm_params(model: ModelPatcher) -> bool:
return hasattr(model, MM_INJECTED_ATTR)
def get_injected_mm_params(model: ModelPatcher) -> InjectionParams:
if is_injected_mm_params(model):
return getattr(model, MM_INJECTED_ATTR)
return None
def set_injected_mm_params(model: ModelPatcher, injection_params: InjectionParams):
setattr(model, MM_INJECTED_ATTR, injection_params)
def del_injected_mm_params(model: ModelPatcher):
if is_injected_mm_params(model):
delattr(model, MM_INJECTED_ATTR)
# Injected Unet Functions
def is_injected_unet_version(model: ModelPatcher) -> bool:
return hasattr(model.model.diffusion_model, MM_UNET_INJECTION_ATTR)
def get_injected_unet_version(model: ModelPatcher) -> str:
if is_injected_unet_version(model):
return getattr(model.model.diffusion_model, MM_UNET_INJECTION_ATTR)
def set_injected_unet_version(model: ModelPatcher, value: str):
setattr(model.model.diffusion_model, MM_UNET_INJECTION_ATTR, value)
def del_injected_unet_version(model: ModelPatcher):
if is_injected_unet_version(model):
delattr(model.model.diffusion_model, MM_UNET_INJECTION_ATTR)
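# Minimal usage sketch for the helper pairs above (hypothetical, not called anywhere in this
# module): injection params are stashed as an attribute on the ModelPatcher itself, while the
# unet injection marker lives on model.model.diffusion_model.
def _demo_injected_attr_roundtrip(model: ModelPatcher, params: InjectionParams) -> bool:
    set_injected_mm_params(model, params)           # tag the patcher with its params
    assert get_injected_mm_params(model) is params  # retrieval returns the same object
    del_injected_mm_params(model)                   # untag again
    return not is_injected_mm_params(model)         # True once cleaned up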
##################################################################################
##################################################################################
class MotionModelSettings:
def __init__(self,
pe_strength: float=1.0,
attn_strength: float=1.0,
attn_q_strength: float=1.0,
attn_k_strength: float=1.0,
attn_v_strength: float=1.0,
attn_out_weight_strength: float=1.0,
attn_out_bias_strength: float=1.0,
other_strength: float=1.0,
cap_initial_pe_length: int=0, interpolate_pe_to_length: int=0,
initial_pe_idx_offset: int=0, final_pe_idx_offset: int=0,
motion_pe_stretch: int=0,
attn_scale: float=1.0,
mask_attn_scale: Tensor=None,
mask_attn_scale_min: float=1.0,
mask_attn_scale_max: float=1.0,
):
# general strengths
self.pe_strength = pe_strength
self.attn_strength = attn_strength
self.other_strength = other_strength
# specific attn strengths
self.attn_q_strength = attn_q_strength
self.attn_k_strength = attn_k_strength
self.attn_v_strength = attn_v_strength
self.attn_out_weight_strength = attn_out_weight_strength
self.attn_out_bias_strength = attn_out_bias_strength
# PE-interpolation settings
self.cap_initial_pe_length = cap_initial_pe_length
self.interpolate_pe_to_length = interpolate_pe_to_length
self.initial_pe_idx_offset = initial_pe_idx_offset
self.final_pe_idx_offset = final_pe_idx_offset
self.motion_pe_stretch = motion_pe_stretch
# attention scale settings
self.attn_scale = attn_scale
# attention scale mask settings
self.mask_attn_scale = mask_attn_scale.clone() if mask_attn_scale is not None else mask_attn_scale
self.mask_attn_scale_min = mask_attn_scale_min
self.mask_attn_scale_max = mask_attn_scale_max
self._prepare_mask_attn_scale()
def _prepare_mask_attn_scale(self):
if self.mask_attn_scale is not None:
self.mask_attn_scale = normalize_min_max(self.mask_attn_scale, self.mask_attn_scale_min, self.mask_attn_scale_max)
def has_mask_attn_scale(self) -> bool:
return self.mask_attn_scale is not None
def has_pe_strength(self) -> bool:
return self.pe_strength != 1.0
def has_attn_strength(self) -> bool:
return self.attn_strength != 1.0
def has_other_strength(self) -> bool:
return self.other_strength != 1.0
def has_cap_initial_pe_length(self) -> bool:
return self.cap_initial_pe_length > 0
def has_interpolate_pe_to_length(self) -> bool:
return self.interpolate_pe_to_length > 0
def has_initial_pe_idx_offset(self) -> bool:
return self.initial_pe_idx_offset > 0
def has_final_pe_idx_offset(self) -> bool:
return self.final_pe_idx_offset > 0
def has_motion_pe_stretch(self) -> bool:
return self.motion_pe_stretch > 0
def has_anything_to_apply(self) -> bool:
return self.has_pe_strength() \
or self.has_attn_strength() \
or self.has_other_strength() \
or self.has_cap_initial_pe_length() \
or self.has_interpolate_pe_to_length() \
or self.has_initial_pe_idx_offset() \
or self.has_final_pe_idx_offset() \
or self.has_motion_pe_stretch() \
or self.has_any_attn_sub_strength()
def has_any_attn_sub_strength(self) -> bool:
return self.has_attn_q_strength() \
or self.has_attn_k_strength() \
or self.has_attn_v_strength() \
or self.has_attn_out_weight_strength() \
or self.has_attn_out_bias_strength()
def has_attn_q_strength(self) -> bool:
return self.attn_q_strength != 1.0
def has_attn_k_strength(self) -> bool:
return self.attn_k_strength != 1.0
def has_attn_v_strength(self) -> bool:
return self.attn_v_strength != 1.0
def has_attn_out_weight_strength(self) -> bool:
return self.attn_out_weight_strength != 1.0
def has_attn_out_bias_strength(self) -> bool:
return self.attn_out_bias_strength != 1.0
| [] |
2024-01-10 | evnkm/conjure | google_asr.py | import speech_recognition as sr
import openai
import gtts
from playsound import playsound
from gtts import gTTS
from io import BytesIO
def Speak(text):
# Generate the speech audio and store it in mp3_fp
mp3_fp = BytesIO()
#intro = "Greetings, this is a United States Army device, my purpose is to assess your injury and report back to my home base, and then proceed with further instructions."
tts = gTTS(text, lang='en')
tts.write_to_fp(mp3_fp)
# Save the speech audio to a temporary file
mp3_fp.seek(0)
with open('temp.mp3', 'wb') as f:
f.write(mp3_fp.read())
# Play the audio using playsound
playsound('temp.mp3')
def transcribe_speech():
'''
This function uses the microphone to listen for 10 seconds and then returns the transcribed text.
'''
# Create a recognizer object
recognizer = sr.Recognizer()
# Set the duration for listening
duration = 10 # Number of seconds to listen
# Use the default microphone as the audio source
with sr.Microphone() as source:
print("Listening...")
# Adjust for ambient noise for better recognition
recognizer.adjust_for_ambient_noise(source)
# Record audio for the specified duration
audio = recognizer.listen(source, timeout=duration)
print("Finished recording.")
try:
# Recognize the speech using the default API
transcript = recognizer.recognize_google(audio)
return transcript
except sr.UnknownValueError:
return "Speech recognition could not understand audio"
except sr.RequestError:
return "Could not request results from the speech recognition service"
def ask_question(prompt):
# Set up OpenAI API credentials
openai.api_key = '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'
# Set the GPT-3.5 model name
model_name = "gpt-3.5-turbo"
# Generate the question using GPT-3.5 chat model
response = openai.ChatCompletion.create(
model=model_name,
messages=[
{"role": "system", "content": "You are a helpful assistant in the year 2023."},
{"role": "user", "content": prompt}
],
max_tokens=100,
temperature=0.7
)
# Extract and return the generated answer
answer = response['choices'][0]['message']['content'].strip()
return answer
def main():
intro = ("Hi, hackMIT testing! Please say something hehehe")
Speak(intro)
# Get the transcribed text
question = transcribe_speech()
print(question)
prompt = "Insert prompt here. This will be info about the image."
# Generate the question and get the answer
answer = ask_question(prompt)
Speak(answer)
if __name__ == '__main__':
main()
| [
"Insert prompt here. This will be info about the image.",
"You are a helpful assistant in the year 2023."
] |
2024-01-10 | evnkm/conjure | querydb.py | import chromadb
from langchain.embeddings import HuggingFaceEmbeddings
client = chromadb.PersistentClient(path="./db")
embedding_function = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
collection = client.get_collection(name="demo-dataset-600", embedding_function=embedding_function.embed_documents)
def query(prompt, n=10):
vectors = embedding_function.embed_documents([prompt])
docs = collection.query(
query_embeddings=vectors,
n_results=n
)
files = [m["filename"] for m in docs["metadatas"][0]]
return files
# if __name__ == "__main__":
# images = query("Show me the images of nature.", 10)
# print(images)
| [] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~07_how_to_create_a_router_chain.py | # Author: Rajib
# Description: This is an example of how to create a router chain using semantic kernel
# It uses the Router Chain class in router_chain.py
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from chains.router_chain import RouterChain
if __name__ == "__main__":
templates = {}
# One of the prompt templates to be fed to the router chain
tourist_guide_template = "You are an expert guide." \
"You have knowledge of all the historic places in the world." \
"Please provide answer to the provided question below." \
"question:{{$question}}"
# The second prompt template to be fed to the router chain
teacher_template = "You are a teacher of history. You teach students the history of the world. " \
"Please provide answer to the provided question below." \
"question:{{$question}}"
# templates["tourist_guide_template"] = tourist_guide_template
# templates["teacher_template"] = teacher_template
# Creating a list of the prompt templates to send to the router chain
# The prompt name and description are very important: they need to state clearly what the prompt should do
prompt_templates = [
{"prompt_name": "tourist_guide_template",
"prompt_desc": "Good for answering questions about historic placess in the world",
"prompt_template": tourist_guide_template,
},
{"prompt_name": "teacher_template",
"prompt_desc": "Good for answering student questions on the history of the world",
"prompt_template": teacher_template
}
]
# Initializing the kernel
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("gpt-4", OpenAIChatCompletion("gpt-4", api_key))
# The question to be asked to the router chain. I used this for testing
# input = "Where is TajMahal?"
input = "When did India became independent?"
rtc = RouterChain()
# After the chain runs, it sends back the appropriate goal (prompt template)
# along with the question that needs to be part of the goal.
goal, input = rtc.run(prompt_templates,
input,
kernel)
# Initializing the context
sk_context = kernel.create_new_context()
sk_context["question"] = input
# Destination chain
qa_chat_bot = kernel.create_semantic_function(
prompt_template=goal,
description="Provides answer to an input question",
max_tokens=1000
)
# Getting the final answer
answer = qa_chat_bot.invoke(context=sk_context)
print(answer)
| [
"You are an expert guide.You have knowledge of all the historic places in the world.Please provide answer to the provided question below.question:{{$question}}",
"{}",
"[{'prompt_name': 'tourist_guide_template', 'prompt_desc': 'Good for answering questions about historic placess in the world', 'prompt_template': PLACEHOLDER}, {'prompt_name': 'teacher_template', 'prompt_desc': 'Good for answering student questions on the history of the world', 'prompt_template': PLACEHOLDER}]",
"You are a teacher of history. You teach students the history of the world. Please provide answer to the provided question below.question:{{$question}}"
] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~11_how_to_dod_function_calling.py | # Copyright (c) Microsoft. All rights reserved.
import asyncio
import semantic_kernel as sk
import semantic_kernel.connectors.ai.open_ai as sk_oai
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
kernel = sk.Kernel()
# deployment_name, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("chat-gpt", OpenAIChatCompletion("gpt-4", api_key))
# enabling or disabling function calling is done by setting the function_call parameter for the completion.
# when the function_call parameter is set to "auto" the model will decide which function to use, if any.
# if you only want to use a specific function, set the name of that function in this parameter,
# the format for that is 'SkillName-FunctionName', (i.e. 'math-Add').
# if the model or api version do not support this you will get an error.
prompt_config = sk.PromptTemplateConfig.from_completion_parameters(
max_tokens=2000,
temperature=0.7,
top_p=0.8,
function_call="auto",
chat_system_prompt="You are a AI assistant.",
)
prompt_template = sk.ChatPromptTemplate(
"{{$user_input}}", kernel.prompt_template_engine, prompt_config
)
function_config = sk.SemanticFunctionConfig(prompt_config, prompt_template)
chat_function = kernel.register_semantic_function("ChatBot", "Chat", function_config)
# define the functions available
functions = [
{
"name": "search_hotels",
"description": "Retrieves hotels from the search index based on the parameters provided",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location of the hotel (i.e. Seattle, WA)",
},
"max_price": {
"type": "number",
"description": "The maximum price for the hotel",
},
"features": {
"type": "string",
"description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)",
},
},
"required": ["location"],
},
}
]
async def main() -> None:
context = kernel.create_new_context()
context.variables[
"user_input"
] = "I want to find a hotel in Seattle with free wifi and a pool and my max price is $200"
context = await chat_function.invoke_async(context=context, functions=functions)
if function_call := context.objects.pop('function_call', None):
print(f"Function to be called: {function_call.name}")
print(f"Function parameters: \n{function_call.arguments}")
return
print("No function was called")
print(f"Output was: {str(context)}")
if __name__ == "__main__":
asyncio.run(main()) | [
"{{$user_input}}",
"You are a AI assistant."
] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~03_how_to_create_a_sequential_planner.py | import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.core_skills import TextSkill
from semantic_kernel.planning import SequentialPlanner, Plan
with open("./prompts/sk_seq_prompt", "r") as f:
PROMPT = f.read()
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("gpt-3.5", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))
skills_directory = "../skills/"
writer_skill = kernel.import_semantic_skill_from_directory(skills_directory, "WriterSkill")
# writer_skill = kernel.import_semantic_skill_from_directory(skills_directory, "WriterSkill")
summarize_skill = kernel.import_semantic_skill_from_directory(skills_directory, "SummarizeSkill")
text_skill = kernel.import_skill(TextSkill(), "TextSkill")
# sk_prompt = """
# {{$input}}
#
# Rewrite the above in the style of Shakespeare.
# """
# shakespeareFunction = kernel.create_semantic_function(sk_prompt, "shakespeare", "ShakespeareSkill",
# max_tokens=2000, temperature=0.8)
ask = """
Tomorrow is Valentine's day. I need to come up with a few date ideas.
Convert the text to lowercase, summarize the text and then convert to french."""
planner = SequentialPlanner(kernel,prompt=PROMPT)
sequential_plan = asyncio.run(planner.create_plan_async(goal=ask))
# for step in sequential_plan._steps:
# print(step.description, ":", step._state.__dict__)
#
result = asyncio.run(sequential_plan.invoke_async())
print("final result is ", result)
#
# print(result) | [] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~06_how_to_create_a_guided_question.py | import asyncio
import json
import re
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.planning import SequentialPlanner
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("gpt-4", OpenAIChatCompletion("gpt-4", api_key, org_id))
skills_directory = "../skills/"
advisor_skill = kernel.import_semantic_skill_from_directory(skills_directory, "AdvisorSkill")
planner = SequentialPlanner(kernel)
ask = "What investments are best for me for retirement planning?"
sequential_plan = asyncio.run(planner.create_plan_async(goal=ask))
# for step in sequential_plan._steps:
# print(step.description, ":", step._state.__dict__)
result = asyncio.run(sequential_plan.invoke_async())
match = re.search(r"```(json)?(.*)```", str(result), re.DOTALL)
json_str = match.group(2)
json_str = json_str.strip()
parsed = json.loads(json_str)
print(parsed) | [] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~05_how_to_create_a_stepwise_planner.py | # Borrowed from semantic kernel github
import asyncio
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.planning import StepwisePlanner
from semantic_kernel.planning.stepwise_planner.stepwise_planner_config import (
StepwisePlannerConfig,
)
import semantic_kernel as sk
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("gpt-3.5", OpenAIChatCompletion("gpt-3.5-turbo-16k", api_key, org_id))
class WebSearchEngineSkill:
"""
A search engine skill.
"""
from semantic_kernel.orchestration.sk_context import SKContext
from semantic_kernel.skill_definition import sk_function, sk_function_context_parameter
def __init__(self, connector) -> None:
self._connector = connector
@sk_function(
description="Performs a web search for a given query", name="searchAsync"
)
@sk_function_context_parameter(
name="query",
description="The search query",
)
async def search_async(self, query: str, context: SKContext) -> str:
query = query or context.variables.get("query")[1]
result = await self._connector.search_async(query, num_results=5, offset=0)
return str(result)
from semantic_kernel.connectors.search_engine import BingConnector
BING_API_KEY = sk.bing_search_settings_from_dot_env()
connector = BingConnector(BING_API_KEY)
kernel.import_skill(WebSearchEngineSkill(connector), skill_name="WebSearch")
from semantic_kernel.core_skills.math_skill import MathSkill
from semantic_kernel.core_skills.time_skill import TimeSkill
kernel.import_skill(TimeSkill(), "time")
kernel.import_skill(MathSkill(), "math")
planner = StepwisePlanner(
kernel, StepwisePlannerConfig(max_iterations=10, min_iteration_time_ms=1000)
)
ask = """How many total championships combined do the top 5 teams in the NBA have?"""
plan = planner.create_plan(goal=ask)
result = asyncio.run(plan.invoke_async())
print(result)
for index, step in enumerate(plan._steps):
print("Step:", index)
print("Description:",step.description)
print("Function:", step.skill_name + "." + step._function.name)
if len(step._outputs) > 0:
print( " Output:\n", str.replace(result[step._outputs[0]],"\n", "\n ")) | [] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~12_how_to_do_function_calling_01.py | # Copyright (c) Microsoft. All rights reserved.
import asyncio
import os
from typing import Tuple
import semantic_kernel as sk
import semantic_kernel.connectors.ai.open_ai as sk_oai
from semantic_kernel.connectors.ai.open_ai.semantic_functions.open_ai_chat_prompt_template import (
OpenAIChatPromptTemplate,
)
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.connectors.ai.open_ai.utils import (
chat_completion_with_function_call,
get_function_calling_object,
)
from semantic_kernel.core_skills import MathSkill
from skills.convert_to_lower_case import ConvertToLowerCase
system_message = """
You are a chat bot. Your name is Mosscap and
you have one goal: CONVERT TEXT TO LOWER CASE.
"""
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("chat-gpt", OpenAIChatCompletion("gpt-4", api_key))
skills_directory = os.path.join(__file__, "../../../../samples/skills")
# adding skills to the kernel
# the joke skill in the FunSkills is a semantic skill and has the function calling disabled.
# kernel.import_semantic_skill_from_directory(skills_directory, "FunSkill")
# the math skill is a core skill and has the function calling enabled.
kernel.import_skill(ConvertToLowerCase(), skill_name="convert")
# enabling or disabling function calling is done by setting the function_call parameter for the completion.
# when the function_call parameter is set to "auto" the model will decide which function to use, if any.
# if you only want to use a specific function, set the name of that function in this parameter,
# the format for that is 'SkillName-FunctionName', (i.e. 'math-Add').
# if the model or api version do not support this you will get an error.
prompt_config = sk.PromptTemplateConfig.from_completion_parameters(
max_tokens=2000,
temperature=0.7,
top_p=0.8,
function_call="auto",
chat_system_prompt=system_message,
)
prompt_template = OpenAIChatPromptTemplate(
"{{$user_input}}", kernel.prompt_template_engine, prompt_config
)
prompt_template.add_user_message("Hi there, who are you?")
prompt_template.add_assistant_message(
"I am Mosscap, a chat bot. I'm trying to figure out what people need."
)
function_config = sk.SemanticFunctionConfig(prompt_config, prompt_template)
chat_function = kernel.register_semantic_function("ChatBot", "Chat", function_config)
# calling the chat, you could add an overloaded version of the settings here,
# to enable or disable function calling or set the function calling to a specific skill.
# see the openai_function_calling example for how to use this with an unrelated function definition
filter = {"exclude_skill": ["ChatBot"]}
functions = get_function_calling_object(kernel, filter)
async def chat(context: sk.SKContext) -> Tuple[bool, sk.SKContext]:
try:
user_input = input("User:> ")
context.variables["user_input"] = user_input
except KeyboardInterrupt:
print("\n\nExiting chat...")
return False, None
except EOFError:
print("\n\nExiting chat...")
return False, None
if user_input == "exit":
print("\n\nExiting chat...")
return False, None
context = await chat_completion_with_function_call(
kernel,
chat_skill_name="ChatBot",
chat_function_name="Chat",
context=context,
functions=functions,
function_call_with_new_context=False,
)
print(f"Mosscap:> {context.result}")
return True, context
async def main() -> None:
chatting = True
context = kernel.create_new_context()
print(
"Welcome to the chat bot!\
\n Type 'exit' to exit.\
\n Ask to convert some text to lower case to see the function calling in action."
)
while chatting:
chatting, context = await chat(context)
if __name__ == "__main__":
asyncio.run(main()) | [
"{{$user_input}}"
] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~01_how_to_run_your_first_hello_world.py | # pip install semantic-kernel
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai.services.open_ai_text_completion import OpenAITextCompletion
# Creating the kernel
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_text_completion_service("OpenAI_davinci", OpenAITextCompletion("text-davinci-003", api_key))
# Craft your prompt here
prompt = """
You are a helpful chatbot. Please answer the question based on the provided context below.
Do not make up your answer or add anything which is not in the context. If the answer is not provided in the context, politely say that
you do not know.
context :{{$context_str}}
User: {{$question_str}}
"""
# Instantiate the semantic function
qa_chat_bot = kernel.create_semantic_function(
prompt_template=prompt,
description="Answers question based on provided context",
max_tokens=1000
)
# This is the context to be used to answer question
context_str = "Semantic Kernel is an SDK that integrates Large Language Models (LLMs) " \
"like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages " \
"like C#, Python, and Java. Semantic Kernel achieves this by allowing " \
"you to define plugins that can be chained together " \
"in just a few lines of code.What makes Semantic Kernel special, " \
"however, is its ability to automatically orchestrate plugins with AI. " \
"With Semantic Kernel planners, you can ask an LLM to generate a plan " \
"that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user."
# This is something unique. It returns the SKContext object
sk_context = kernel.create_new_context()
sk_context["context_str"] = context_str
while True:
question_str = input("Enter your Question\n\n")
sk_context["question_str"] = question_str
answer = qa_chat_bot.invoke(context=sk_context)
print(answer)
| [
"\nYou are a helpful chatbot. Please answer the question based on the provided context below.\nDo not make up your answer or add anything which is not in the context. If the answer is not provided in the context, politely say that\nyou do not know.\ncontext :{{$context_str}}\nUser: {{$question_str}}\n"
] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~04_how_to_create_a_sequential_planner_snowflake.py | import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.planning import SequentialPlanner
from skills.snowflake_operations import SnowflakeOperations
with open("./prompts/sk_seq_prompt", "r") as f:
PROMPT = f.read()
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("gpt-3.5", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))
skills_directory = "../skills/"
snowflake_skill = kernel.import_semantic_skill_from_directory(skills_directory, "DatabaseSkill")
snowflake_query_execute_skill = kernel.import_skill(SnowflakeOperations(),"SnowflakeOperations")
promotion_skill = kernel.import_semantic_skill_from_directory(skills_directory,"PromotionSkill")
ask = """
I want to get a list of customer name who are at the risk of churn.
My customers are in a snowflake database.
Please create and execute snowflake query to get the customer information. After that write a
personalized email to the customer to inform about or new promotion."""
planner = SequentialPlanner(kernel,prompt=PROMPT)
sequential_plan = asyncio.run(planner.create_plan_async(goal=ask))
# Plan(
# name=step.name,
# skill_name=step.skill_name,
# description=step.description,
# next_step_index=0,
# state=ContextVariables(),
# parameters=ContextVariables(),
# outputs=[],
# steps=[],
# )
# for step in sequential_plan._steps:
# print(step.parameters)
kernel.remove_chat_service("gpt-3.5")
kernel.add_chat_service("text-davinci-003", OpenAIChatCompletion("text-davinci-003", api_key, org_id))
result = asyncio.run(sequential_plan.invoke_async())
print("model is:", kernel.get_chat_service_service_id())
print("final result is \n\n", result)
| [] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~08_how_to_create_a_summarization_chain.py | import concurrent.futures
import logging
import os
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Optional, cast, NamedTuple
import semantic_kernel as sk
from pydantic import Field
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
logger = logging.getLogger(__name__)
@dataclass
class Document():
page_content: str
metadata: dict = Field(default_factory=dict)
# Borrowed from Langchain TextLoader
# Below code is borrowed from Langchain's TextLoader implementation
class FileEncoding(NamedTuple):
"""File encoding as the NamedTuple."""
encoding: Optional[str]
"""The encoding of the file."""
confidence: float
"""The confidence of the encoding."""
language: Optional[str]
"""The language of the file."""
class ContentLoader(ABC):
def __init__(self):
self.module = "Loader"
@abstractmethod
def load(self) -> List[Document]:
"""Implement it into the derived loader class."""
def detect_file_encodings(file_path: str, timeout: int = 5) -> List[FileEncoding]:
"""Try to detect the file encoding.
Returns a list of `FileEncoding` tuples with the detected encodings ordered
by confidence.
Args:
file_path: The path to the file to detect the encoding for.
timeout: The timeout in seconds for the encoding detection.
"""
import chardet
def read_and_detect(file_path: str) -> List[dict]:
with open(file_path, "rb") as f:
rawdata = f.read()
return cast(List[dict], chardet.detect_all(rawdata))
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(read_and_detect, file_path)
try:
encodings = future.result(timeout=timeout)
except concurrent.futures.TimeoutError:
raise TimeoutError(
f"Timeout reached while detecting encoding for {file_path}"
)
if all(encoding["encoding"] is None for encoding in encodings):
raise RuntimeError(f"Could not detect encoding for {file_path}")
return [FileEncoding(**enc) for enc in encodings if enc["encoding"] is not None]
class TextContentLoader(ContentLoader):
def __init__(self, content_path: str,
content_encoding: Optional[str] = None,
autodetect_content_encoding: bool = False):
"""Initialize the content details."""
super().__init__()
self.content_path = content_path
self.content_encoding = content_encoding
self.autodetect_content_encoding = autodetect_content_encoding
def load(self) -> List[Document]:
"""Load from file path."""
text = ""
try:
with open(self.content_path, encoding=self.content_encoding) as f:
text = f.read()
except UnicodeDecodeError as e:
if self.autodetect_content_encoding:
detected_encodings = detect_file_encodings(self.content_path)
for encoding in detected_encodings:
logger.debug(f"Trying encoding: {encoding.encoding}")
try:
with open(self.content_path, encoding=encoding.encoding) as f:
text = f.read()
break
except UnicodeDecodeError:
continue
else:
raise RuntimeError(f"Error loading {self.content_path}") from e
except Exception as e:
raise RuntimeError(f"Error loading {self.content_path}") from e
metadata = {"source": self.content_path}
return [Document(page_content=text, metadata=metadata)]
# Borrowed from Langchain TextLoader
if __name__=="__main__":
def get_files_from_dir(dir):
files = [os.path.join(dir, f) for f in os.listdir(dir) if os.path.isfile(os.path.join(dir, f))]
return files
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_text_completion_service("gpt-3.5-turbo-16k", OpenAIChatCompletion("gpt-3.5-turbo-16k", api_key))
list_all_docs = []
files = get_files_from_dir("/Users/joyeed/semantic_kernel/semantic_kernel_examples/data/pdf/chunks/")
for file in files:
tl = TextContentLoader(file)
documents = tl.load()
documents[0].metadata["source"] = file
list_all_docs.append(documents[0])
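# The remainder is a simple map-reduce summarization: each loaded chunk is summarized
# independently with the "map" prompt below, and the per-chunk summaries are then merged
# into a single final summary with the "reduce" prompt further down.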
map_prompt = """
Generate a concise and coherent summary from the given document.
Condense the document content into a well-written summary that captures the main ideas, key points, and insights presented in the document.
Prioritize clarity and brevity while retaining the essential information.
Aim to convey the document's core message and any supporting details that contribute to a comprehensive understanding.
Craft the summary to be self-contained, ensuring that readers can grasp the content even if they haven't read the document.
The goal is to create a summary that effectively communicates the document's content while being easily digestible and engaging.
Summary should NOT be more than 150 words.
Document:
{{$document}}
"""
map_chain = kernel.create_semantic_function(
prompt_template=map_prompt,
description="Extracts main themes from a set of documents",
max_tokens=2000
)
themes =[]
for document in list_all_docs:
sk_context = kernel.create_new_context()
sk_context["document"] = document.page_content
answer = map_chain.invoke(context=sk_context)
themes.append(str(answer))
reduce_prompt = """
Generate a concise and coherent summary from the given document summaries.
Condense the document summaries into a well-written consolidated summary that captures the main ideas, key points, and insights presented in the document summaries.
Prioritize clarity and brevity while retaining the essential information.
Aim to convey the document's core message and any supporting details that contribute to a comprehensive understanding.
Craft the summary to be self-contained, ensuring that readers can grasp the content even if they haven't read the document.
The goal is to create a summary that effectively communicates the content from all the summaries, while being easily digestible and engaging.
Final summary should NOT be more than 1000 words.
Document:
{{$document_summaries}}
"""
reduce_chain = kernel.create_semantic_function(
prompt_template=reduce_prompt,
description="creates a final summary from a set of document summaries",
max_tokens=2000
)
sk_context = kernel.create_new_context()
sk_context["document_summaries"] = "\n".join([t for t in themes])
answer = reduce_chain.invoke(context=sk_context)
print(answer)
| [
"\n Generate a concise and coherent summary from the given document. \n Condense the document content into a well-written summary that captures the main ideas, key points, and insights presented in the document. \n Prioritize clarity and brevity while retaining the essential information. \n Aim to convey the document's core message and any supporting details that contribute to a comprehensive understanding. \n Craft the summary to be self-contained, ensuring that readers can grasp the content even if they haven't read the document. \n The goal is to create a summary that effectively communicates the document's content while being easily digestible and engaging.\n Summary should NOT be more than 150 words.\n \n Document:\n {{$document}}\n ",
"\n Generate a concise and coherent summary from the given document summaries. \n Condense the document summaries into a well-written consolidated summary that captures the main ideas, key points, and insights presented in the document summaries. \n Prioritize clarity and brevity while retaining the essential information. \n Aim to convey the document's core message and any supporting details that contribute to a comprehensive understanding. \n Craft the summary to be self-contained, ensuring that readers can grasp the content even if they haven't read the document. \n The goal is to create a summary that effectively communicates the content from all the summaries, while being easily digestible and engaging.\n Final summary should NOT be more than 1000 words.\n\n Document:\n {{$document_summaries}}\n "
] |
2024-01-10 | rajib76/semantic_kernel_examples | examples~02_how_to_create_a_basic_planner.py | import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion import OpenAIChatCompletion
from semantic_kernel.planning import BasicPlanner
from skills.identify_prime_skill import IdentifyPrime
# Creating the kernel
kernel = sk.Kernel()
api_key, org_id = sk.openai_settings_from_dot_env()
kernel.add_chat_service("gpt-3.5", OpenAIChatCompletion("gpt-3.5-turbo", api_key, org_id))
#skills_directory = "./skills"
prime_identifier_skill = kernel.import_skill(IdentifyPrime(),"IdentifyPrime")
goal = "check if {number} is prime"
PROMPT = """
You are a planner for the Semantic Kernel.
Your job is to create a properly formatted JSON plan step by step, to satisfy the goal given.
Create a list of subtasks based off the [GOAL] provided.
Each subtask must be from within the [AVAILABLE FUNCTIONS] list. Do not use any functions that are not in the list.
Base your decisions on which functions to use from the description and the name of the function.
Sometimes, a function may take arguments. Provide them if necessary.
The plan should be as short as possible.
For example:
[AVAILABLE FUNCTIONS]
IdentifyPrime.identify_prime_number
description: Identifies if a number is prime
args:
- input: the number to validate if it is a prime number
[GOAL]
"check if 17 is a prime number"
[OUTPUT]
{
"input": 17,
"subtasks": [
{"function": "IdentifyPrime.identify_prime_number","args": {"number": 17}}
]
}
[AVAILABLE FUNCTIONS]
{{$available_functions}}
[GOAL]
{{$goal}}
[OUTPUT]
"""
planner = BasicPlanner()
while True:
number = input("Enter the number which you want to check if it is a prime number:\n\n")
basic_plan = asyncio.run(planner.create_plan_async(goal=goal.format(number=number), kernel=kernel,prompt=PROMPT))
print("generated plan ",basic_plan.generated_plan)
# print("generated prompt ", basic_plan.prompt)
# print("generated goal " ,basic_plan.goal)
# print("generated str " ,basic_plan.__str__) # THERE IS A DEFECT, RAISED IT TO SEMANTIC KERNEL
results = asyncio.run(planner.execute_plan_async(basic_plan, kernel))
print(results)
| [
"\nYou are a planner for the Semantic Kernel.\nYour job is to create a properly formatted JSON plan step by step, to satisfy the goal given.\nCreate a list of subtasks based off the [GOAL] provided.\nEach subtask must be from within the [AVAILABLE FUNCTIONS] list. Do not use any functions that are not in the list.\nBase your decisions on which functions to use from the description and the name of the function.\nSometimes, a function may take arguments. Provide them if necessary.\nThe plan should be as short as possible.\nFor example:\n\n[AVAILABLE FUNCTIONS]\nIdentifyPrime.identify_prime_number\ndescription: Identifies if a number is prime\nargs:\n- input: the number to validate if it is a prime number\n\n[GOAL]\n\"check if 17 is a prime number\"\n[OUTPUT]\n {\n \"input\": 17,\n \"subtasks\": [\n {\"function\": \"IdentifyPrime.identify_prime_number\",\"args\": {\"number\": 17}}\n ]\n }\n\n[AVAILABLE FUNCTIONS]\n{{$available_functions}}\n\n[GOAL]\n{{$goal}}\n\n[OUTPUT]\n"
] |
2024-01-10 | whyismynamerudy/Nzyme | text_summarization.py | """Generated summarized paragraph for input text"""
import cohere
import pandas as pd
api_key = 'MklIKiJvqX1nFagSi1jRU4k9YxoxfLwZvRG6xIUJ'
co = cohere.Client(api_key)
def summarize(prompt: str):
"""summarizes the input prompt
prompt: string containing the info that needs to be summarized"""
sample_text = '''Passage: " Combinational logic is often grouped into larger building blocks to build more complex systems. This is an application of the principle of abstraction, hiding the unnecessary gate-level details to emphasize the function of the building block. We have already studied three such building blocks: full adders, priority circuits, and seven-segment display decoders. This section introduces two more commonly used building blocks: multiplexers and decoders. Chapter 5 covers other combinational building blocks. Multiplexers are among the most commonly used combinational circuits. They choose an output from among several possible inputs based on the value of a select signal. A multiplexer is sometimes affectionately called a mux. Figure 2.54 shows the schematic and truth table for a 2:1 multiplexer with two data inputs, D0 and D1, a select input, S, and one output, Y. The multiplexer chooses between the two data inputs based on the select. A 2:1 multiplexer can be built from sum-of-products logic as shown in Figure 2.55. The Boolean equation for the multiplexer may be derived with a Karnaugh map or read off by inspection. Alternatively, multiplexers can be built from tristate buffers, as shown in Figure 2.56. The tristate enables are arranged such that, at all times, exactly one tristate buffer is active. A 4:1 multiplexer has four data inputs and one output, as shown in Figure 2.57. Two select signals are needed to choose among the four data inputs. The 4:1 multiplexer can be built using sum-of-products logic, tristates, or multiple 2:1 multiplexers, as shown in Figure 2.58. The product terms enabling the tristates can be formed using AND gates and inverters. They can also be formed using a decoder, which we will introduce in Section 2.8.2. Wider multiplexers, such as 8:1 and 16:1 multiplexers, can be built by expanding the methods shown in Figure 2.58. In general, an N:1 multiplexer needs log2N select lines. Again, the best implementation choice depends on the target technology. Multiplexers can be used as lookup tables to perform logic functions. Figure 2.59 shows a 4:1 multiplexer used to implement a two-input AND gate. The inputs, A and B, serve as select lines. The multiplexer data inputs are connected to 0 or 1 according to the corresponding row of the truth table. In general, a 2N-input multiplexer can be programmed to perform any N-input logic function by applying 0’s and 1’s to the appropriate data inputs. Indeed, by changing the data inputs, the multiplexer can be reprogrammed to perform a different function. With a little cleverness, we can cut the multiplexer size in half, using only a 2N1-input multiplexer to perform any N-input logic function. The strategy is to provide one of the literals, as well as 0’s and 1’s, to the multiplexer data inputs. To illustrate this principle, Figure 2.60 shows two-input AND and XOR functions implemented with 2:1 multiplexers. We start with an ordinary truth table, and then combine pairs of rows to eliminate the rightmost input variable by expressing the output in terms of this variable. We then use the multiplexer as a lookup table according to the new, smaller truth table. A decoder has N inputs and 2N outputs. It asserts exactly one of its outputs depending on the input combination. Figure 2.63 shows a 2:4 decoder. The outputs are called one-hot, because exactly one is “hot” (HIGH) at a given time. Decoders can be combined with OR gates to build logic functions. 
Figure 2.65 shows the two-input XNOR function using a 2:4 decoder and a single OR gate. Because each output of a decoder represents a single minterm, the function is built as the OR of all the minterms in the function. When using decoders to build logic, it is easiest to express functions as a truth table or in canonical sum-of-products form. An N-input function with M 1’s in the truth table can be built with an N:2N decoder and an M-input OR gate attached to all of the minterms containing 1’s in the truth table. This concept will be applied to the building of Read Only Memories (ROMs) in Section 5.5.6."
Summary of Passage: "Logic gates are combined to produce larger circuits such as multiplexers, decoders, and priority circuits. A multiplexer chooses one of the data inputs based on the select input. A decoder sets one of the outputs HIGH according to the input. A priority circuit produces an output indicating the highest priority input."
---
Passage: "**********"
Summary of Passage:"'''
summary_input = sample_text.replace("**********", prompt)
# print(summary_input)
response = co.generate(
model='xlarge',
prompt=summary_input,
return_likelihoods='GENERATION',
stop_sequences=['"'],
max_tokens=200,
temperature=0.7,
num_generations=5,
k=0,
p=0.75)
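# The loop below ranks the sampled generations by their summed per-token likelihoods
# (as reported by the API), drops duplicates, and returns the highest-scoring summary -
# a simple best-of-n selection over the num_generations candidates.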
gens = []
likelihoods = []
for gen in response.generations:
gens.append(gen.text)
sum_likelihood = 0
for t in gen.token_likelihoods:
sum_likelihood += t.likelihood
likelihoods.append(sum_likelihood)
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'generation': gens, 'likelihood': likelihoods})
df = df.drop_duplicates(subset=['generation'])
df = df.sort_values('likelihood', ascending=False, ignore_index=True)
# print(df)
return df["generation"].iloc[0]
# humble = "The Humble Object pattern is a design pattern that was originally identified as a way to help unit testers to separate behaviors that are hard to test from behaviors that are easy to test. The idea is very simple: Split the behaviors into two modules or classes. One of those modules is humble; it contains all the hard-to-test behaviors stripped down to their barest essence. The other module contains all the testable behaviors that were stripped out of the humble object. For example, GUIs are hard to unit test because it is very difficult to write tests that can see the screen and check that the appropriate elements are displayed there. However, most of the behavior of a GUI is, in fact, easy to test. Using the Humble Object pattern, we can separate these two kinds of behaviors into two different classes called the Presenter and the View."
# my_text2 = "It is possible that a single input transition can cause multiple output transitions. These are called glitches or hazards. Although glitches usually don’t cause problems, it is important to realize that they exist and recognize them when looking at timing diagrams."
# my_text = "The test for single-gene inheritance is to mate individuals showing the mutant property with wild-type and then analyze the first and second generation of descendants. As an example, a mutant plant with white flowers would be crossed to the wild type showing red flowers. The progeny of this cross are analyzed, and then they themselves are interbred to produce a second generation of descendants. In each generation, the diagnostic ratios of plants with red flowers to those with white flowers will reveal whether a single gene controls flower color. If so, then by inference, the wild type would be encoded by the wild-type form of the gene and the mutant would be encoded by a form of the same gene in which a mutation event has altered the DNA sequence in some way. Other mutations affecting flower color (perhaps mauve, blotched, striped, and so on) would be analyzed in the same way, resulting overall in a set of defined “flower-color genes.” The use of mutants in this way is sometimes called genetic dissection, because the biological property in question (flower color in this case) is picked apart to reveal its underlying genetic program, not with a scalpel but with mutants. Each mutant potentially identifies a separate gene affecting that property. After a set of key genes has been defined in this way, several different molecular methods can be used to establish the functions of each of the genes. These methods will be covered in later chapters. Hence, genetics has been used to define the set of gene functions that interact to produce the property we call flower color (in this example). This type of approach to gene discovery is sometimes called forward genetics, a strategy to understanding biological function starting with random single-gene mutants and ending with their DNA sequence and biochemical function."
# bio = "What kinds of research do biologists do? One central area of research in the biology of all organisms is the attempt to understand how an organism develops from a fertilized egg into an adult—in other words, what makes an organism the way it is. Usually, this overall goal is broken down into the study of individual biological properties such as the development of plant flower color, or animal locomotion, or nutrient uptake, although biologists also study some general areas such as how a cell works. How do geneticists analyze biological properties? The genetic approach to understanding any biological property is to find the subset of genes in the genome that influence that property, a process sometimes referred to as gene discovery. After these genes have been identified, their cellular functions can be elucidated through further research. There are several different types of analytical approaches to gene discovery, but one widely used method relies on the detection of single-gene inheritance patterns, and that is the topic of this chapter. All of genetics, in one aspect or another, is based on heritable variants. The basic approach of genetics is to compare and contrast the properties of variants, and from these comparisons make deductions about genetic function. It is similar to the way in which you could make inferences about how an unfamiliar machine works by changing the composition or positions of the working parts, or even by removing parts one at a time. Each variant represents a 'tweak”'of the biological machine, from which its function can be deduced."
# print(summarize(my_text))
| [] |