Datasets:
repo_name (stringlengths 7-71) | file_path (stringlengths 5-118) | context (list) | import_statement (stringlengths 45-12.5k) | token_num (int64 641-99.4k) | cropped_code (stringlengths 44-17k) | all_code (stringlengths 43-754k) | next_line (stringlengths 2-330) | gold_snippet_index (int64 0-68) | created_at (stringlengths 25-25) | level (stringclasses 9 values) |
---|---|---|---|---|---|---|---|---|---|---|
DLYuanGod/TinyGPT-V | minigpt4/processors/blip_processors.py | [
{
"identifier": "registry",
"path": "minigpt4/common/registry.py",
"snippet": "class Registry:\n def register_builder(cls, name):\n def wrap(builder_cls):\n def register_task(cls, name):\n def wrap(task_cls):\n def register_model(cls, name):\n def wrap(model_cls):\n def register_processor(cls, name):\n def wrap(processor_cls):\n def register_lr_scheduler(cls, name):\n def wrap(lr_sched_cls):\n def register_runner(cls, name):\n def wrap(runner_cls):\n def register_path(cls, name, path):\n def register(cls, name, obj):\n def get_builder_class(cls, name):\n def get_model_class(cls, name):\n def get_task_class(cls, name):\n def get_processor_class(cls, name):\n def get_lr_scheduler_class(cls, name):\n def get_runner_class(cls, name):\n def list_runners(cls):\n def list_models(cls):\n def list_tasks(cls):\n def list_processors(cls):\n def list_lr_schedulers(cls):\n def list_datasets(cls):\n def get_path(cls, name):\n def get(cls, name, default=None, no_warning=False):\n def unregister(cls, name):"
},
{
"identifier": "BaseProcessor",
"path": "minigpt4/processors/base_processor.py",
"snippet": "class BaseProcessor:\n def __init__(self):\n self.transform = lambda x: x\n return\n\n def __call__(self, item):\n return self.transform(item)\n\n @classmethod\n def from_config(cls, cfg=None):\n return cls()\n\n def build(self, **kwargs):\n cfg = OmegaConf.create(kwargs)\n\n return self.from_config(cfg)"
},
{
"identifier": "RandomAugment",
"path": "minigpt4/processors/randaugment.py",
"snippet": "class RandomAugment(object):\n def __init__(self, N=2, M=10, isPIL=False, augs=[]):\n self.N = N\n self.M = M\n self.isPIL = isPIL\n if augs:\n self.augs = augs\n else:\n self.augs = list(arg_dict.keys())\n\n def get_random_ops(self):\n sampled_ops = np.random.choice(self.augs, self.N)\n return [(op, 0.5, self.M) for op in sampled_ops]\n\n def __call__(self, img):\n if self.isPIL:\n img = np.array(img)\n ops = self.get_random_ops()\n for name, prob, level in ops:\n if np.random.random() > prob:\n continue\n args = arg_dict[name](level)\n img = func_dict[name](img, *args)\n return img"
}
] | import re
from minigpt4.common.registry import registry
from minigpt4.processors.base_processor import BaseProcessor
from minigpt4.processors.randaugment import RandomAugment
from omegaconf import OmegaConf
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode | 756 | """
Copyright (c) 2022, salesforce.com, inc.
All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""
| """
Copyright (c) 2022, salesforce.com, inc.
All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""
| class BlipImageBaseProcessor(BaseProcessor): | 1 | 2023-12-28 05:47:18+00:00 | 2k |
jianchang512/vocal-separate | start.py | [
{
"identifier": "cfg",
"path": "vocal/cfg.py",
"snippet": "LANG = \"en\" if locale.getdefaultlocale()[0].split('_')[0].lower() != 'zh' else \"zh\"\nROOT_DIR = os.getcwd()\nMODEL_DIR = os.path.join(ROOT_DIR, 'pretrained_models')\nSTATIC_DIR = os.path.join(ROOT_DIR, 'static')\nTMP_DIR = os.path.join(STATIC_DIR, 'tmp')\nFILES_DIR = os.path.join(STATIC_DIR, 'files')"
},
{
"identifier": "tool",
"path": "vocal/tool.py",
"snippet": "def runffmpeg(arg):\ndef checkupdate():\ndef openweb(web_address):"
},
{
"identifier": "ROOT_DIR",
"path": "vocal/cfg.py",
"snippet": "ROOT_DIR = os.getcwd()"
}
] | import logging
import threading
import sys
import os
import subprocess
from flask import Flask, request, render_template, jsonify, send_from_directory
from gevent.pywsgi import WSGIServer, WSGIHandler,LoggingLogAdapter
from logging.handlers import RotatingFileHandler
from vocal import cfg, tool
from vocal.cfg import ROOT_DIR
from spleeter.separator import Separator | 795 |
class CustomRequestHandler(WSGIHandler):
def log_request(self):
pass
# 禁用 Werkzeug 默认的日志处理器
log = logging.getLogger('werkzeug')
log.handlers[:] = []
log.setLevel(logging.WARNING)
app = Flask(__name__, static_folder=os.path.join(ROOT_DIR, 'static'), static_url_path='/static',
template_folder=os.path.join(ROOT_DIR, 'templates'))
root_log = logging.getLogger() # Flask的根日志记录器
root_log.handlers = []
root_log.setLevel(logging.WARNING)
# 配置日志
app.logger.setLevel(logging.WARNING) # 设置日志级别为 INFO
# 创建 RotatingFileHandler 对象,设置写入的文件路径和大小限制
file_handler = RotatingFileHandler(os.path.join(ROOT_DIR, 'vocal.log'), maxBytes=1024 * 1024, backupCount=5)
# 创建日志的格式
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# 设置文件处理器的级别和格式
file_handler.setLevel(logging.WARNING)
file_handler.setFormatter(formatter)
# 将文件处理器添加到日志记录器中
app.logger.addHandler(file_handler)
@app.route('/static/<path:filename>')
def static_files(filename):
return send_from_directory(app.config['STATIC_FOLDER'], filename)
@app.route('/')
def index():
return render_template("index.html",cuda=cfg.cuda, language=cfg.LANG,root_dir=ROOT_DIR.replace('\\', '/'))
# 上传音频
@app.route('/upload', methods=['POST'])
def upload():
try:
# 获取上传的文件
audio_file = request.files['audio']
# 如果是mp4
noextname, ext = os.path.splitext(audio_file.filename)
ext = ext.lower()
# 如果是视频,先分离
wav_file = os.path.join(cfg.TMP_DIR, f'{noextname}.wav')
if os.path.exists(wav_file) and os.path.getsize(wav_file) > 0:
return jsonify({'code': 0, 'msg': cfg.transobj['lang1'], "data": os.path.basename(wav_file)})
msg=""
if ext in ['.mp4', '.mov', '.avi', '.mkv', '.mpeg', '.mp3', '.flac']:
video_file = os.path.join(cfg.TMP_DIR, f'{noextname}{ext}')
audio_file.save(video_file)
params = [
"-i",
video_file,
]
if ext not in ['.mp3', '.flac']:
params.append('-vn')
params.append(wav_file)
|
class CustomRequestHandler(WSGIHandler):
def log_request(self):
pass
# 禁用 Werkzeug 默认的日志处理器
log = logging.getLogger('werkzeug')
log.handlers[:] = []
log.setLevel(logging.WARNING)
app = Flask(__name__, static_folder=os.path.join(ROOT_DIR, 'static'), static_url_path='/static',
template_folder=os.path.join(ROOT_DIR, 'templates'))
root_log = logging.getLogger() # Flask的根日志记录器
root_log.handlers = []
root_log.setLevel(logging.WARNING)
# 配置日志
app.logger.setLevel(logging.WARNING) # 设置日志级别为 INFO
# 创建 RotatingFileHandler 对象,设置写入的文件路径和大小限制
file_handler = RotatingFileHandler(os.path.join(ROOT_DIR, 'vocal.log'), maxBytes=1024 * 1024, backupCount=5)
# 创建日志的格式
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# 设置文件处理器的级别和格式
file_handler.setLevel(logging.WARNING)
file_handler.setFormatter(formatter)
# 将文件处理器添加到日志记录器中
app.logger.addHandler(file_handler)
@app.route('/static/<path:filename>')
def static_files(filename):
return send_from_directory(app.config['STATIC_FOLDER'], filename)
@app.route('/')
def index():
return render_template("index.html",cuda=cfg.cuda, language=cfg.LANG,root_dir=ROOT_DIR.replace('\\', '/'))
# 上传音频
@app.route('/upload', methods=['POST'])
def upload():
try:
# 获取上传的文件
audio_file = request.files['audio']
# 如果是mp4
noextname, ext = os.path.splitext(audio_file.filename)
ext = ext.lower()
# 如果是视频,先分离
wav_file = os.path.join(cfg.TMP_DIR, f'{noextname}.wav')
if os.path.exists(wav_file) and os.path.getsize(wav_file) > 0:
return jsonify({'code': 0, 'msg': cfg.transobj['lang1'], "data": os.path.basename(wav_file)})
msg=""
if ext in ['.mp4', '.mov', '.avi', '.mkv', '.mpeg', '.mp3', '.flac']:
video_file = os.path.join(cfg.TMP_DIR, f'{noextname}{ext}')
audio_file.save(video_file)
params = [
"-i",
video_file,
]
if ext not in ['.mp3', '.flac']:
params.append('-vn')
params.append(wav_file) | rs = tool.runffmpeg(params) | 1 | 2023-12-26 06:20:35+00:00 | 2k |
ali-vilab/dreamtalk | core/networks/dynamic_fc_decoder.py | [
{
"identifier": "_get_activation_fn",
"path": "core/networks/transformer.py",
"snippet": "def _get_activation_fn(activation):\r\n \"\"\"Return an activation function given a string\"\"\"\r\n if activation == \"relu\":\r\n return F.relu\r\n if activation == \"gelu\":\r\n return F.gelu\r\n if activation == \"glu\":\r\n return F.glu\r\n raise RuntimeError(F\"activation should be relu/gelu, not {activation}.\")\r"
},
{
"identifier": "_get_clones",
"path": "core/networks/transformer.py",
"snippet": "def _get_clones(module, N):\r\n return nn.ModuleList([copy.deepcopy(module) for i in range(N)])\r"
},
{
"identifier": "DynamicLinear",
"path": "core/networks/dynamic_linear.py",
"snippet": "class DynamicLinear(nn.Module):\n def __init__(self, in_planes, out_planes, cond_planes, bias=True, K=4, temperature=30, ratio=4, init_weight=True):\n super().__init__()\n\n self.dynamic_conv = DynamicConv(\n in_planes,\n out_planes,\n cond_planes,\n kernel_size=1,\n stride=1,\n padding=0,\n bias=bias,\n K=K,\n ratio=ratio,\n temperature=temperature,\n init_weight=init_weight,\n )\n\n def forward(self, x, cond):\n \"\"\"\n\n Args:\n x (_type_): (L, B, C_in)\n cond (_type_): (B, C_style)\n\n Returns:\n _type_: (L, B, C_out)\n \"\"\"\n x = x.permute(1, 2, 0).unsqueeze(-1)\n out = self.dynamic_conv(x, cond)\n # (B, C_out, L, 1)\n out = out.squeeze().permute(2, 0, 1)\n return out"
}
] | import torch.nn as nn
import torch
from core.networks.transformer import _get_activation_fn, _get_clones
from core.networks.dynamic_linear import DynamicLinear | 1,476 |
class DynamicFCDecoderLayer(nn.Module):
def __init__(
self,
d_model,
nhead,
d_style,
dynamic_K,
dynamic_ratio,
dim_feedforward=2048,
dropout=0.1,
activation="relu",
normalize_before=False,
):
super().__init__()
self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
# Implementation of Feedforward model
# self.linear1 = nn.Linear(d_model, dim_feedforward)
self.linear1 = DynamicLinear(d_model, dim_feedforward, d_style, K=dynamic_K, ratio=dynamic_ratio)
self.dropout = nn.Dropout(dropout)
self.linear2 = nn.Linear(dim_feedforward, d_model)
# self.linear2 = DynamicLinear(dim_feedforward, d_model, d_style, K=dynamic_K, ratio=dynamic_ratio)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.norm3 = nn.LayerNorm(d_model)
self.dropout1 = nn.Dropout(dropout)
self.dropout2 = nn.Dropout(dropout)
self.dropout3 = nn.Dropout(dropout)
self.activation = _get_activation_fn(activation)
self.normalize_before = normalize_before
def with_pos_embed(self, tensor, pos):
return tensor if pos is None else tensor + pos
def forward_post(
self,
tgt,
memory,
style,
tgt_mask=None,
memory_mask=None,
tgt_key_padding_mask=None,
memory_key_padding_mask=None,
pos=None,
query_pos=None,
):
# q = k = self.with_pos_embed(tgt, query_pos)
tgt2 = self.self_attn(tgt, tgt, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask)[0]
tgt = tgt + self.dropout1(tgt2)
tgt = self.norm1(tgt)
tgt2 = self.multihead_attn(
query=tgt, key=memory, value=memory, attn_mask=memory_mask, key_padding_mask=memory_key_padding_mask
)[0]
tgt = tgt + self.dropout2(tgt2)
tgt = self.norm2(tgt)
# tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt, style))), style)
tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt, style))))
tgt = tgt + self.dropout3(tgt2)
tgt = self.norm3(tgt)
return tgt
# def forward_pre(
# self,
# tgt,
# memory,
# tgt_mask=None,
# memory_mask=None,
# tgt_key_padding_mask=None,
# memory_key_padding_mask=None,
# pos=None,
# query_pos=None,
# ):
# tgt2 = self.norm1(tgt)
# # q = k = self.with_pos_embed(tgt2, query_pos)
# tgt2 = self.self_attn(tgt2, tgt2, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask)[0]
# tgt = tgt + self.dropout1(tgt2)
# tgt2 = self.norm2(tgt)
# tgt2 = self.multihead_attn(
# query=tgt2, key=memory, value=memory, attn_mask=memory_mask, key_padding_mask=memory_key_padding_mask
# )[0]
# tgt = tgt + self.dropout2(tgt2)
# tgt2 = self.norm3(tgt)
# tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
# tgt = tgt + self.dropout3(tgt2)
# return tgt
def forward(
self,
tgt,
memory,
style,
tgt_mask=None,
memory_mask=None,
tgt_key_padding_mask=None,
memory_key_padding_mask=None,
pos=None,
query_pos=None,
):
if self.normalize_before:
raise NotImplementedError
# return self.forward_pre(
# tgt, memory, tgt_mask, memory_mask, tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos
# )
return self.forward_post(
tgt, memory, style, tgt_mask, memory_mask, tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos
)
class DynamicFCDecoder(nn.Module):
def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
super().__init__()
|
class DynamicFCDecoderLayer(nn.Module):
def __init__(
self,
d_model,
nhead,
d_style,
dynamic_K,
dynamic_ratio,
dim_feedforward=2048,
dropout=0.1,
activation="relu",
normalize_before=False,
):
super().__init__()
self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
# Implementation of Feedforward model
# self.linear1 = nn.Linear(d_model, dim_feedforward)
self.linear1 = DynamicLinear(d_model, dim_feedforward, d_style, K=dynamic_K, ratio=dynamic_ratio)
self.dropout = nn.Dropout(dropout)
self.linear2 = nn.Linear(dim_feedforward, d_model)
# self.linear2 = DynamicLinear(dim_feedforward, d_model, d_style, K=dynamic_K, ratio=dynamic_ratio)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.norm3 = nn.LayerNorm(d_model)
self.dropout1 = nn.Dropout(dropout)
self.dropout2 = nn.Dropout(dropout)
self.dropout3 = nn.Dropout(dropout)
self.activation = _get_activation_fn(activation)
self.normalize_before = normalize_before
def with_pos_embed(self, tensor, pos):
return tensor if pos is None else tensor + pos
def forward_post(
self,
tgt,
memory,
style,
tgt_mask=None,
memory_mask=None,
tgt_key_padding_mask=None,
memory_key_padding_mask=None,
pos=None,
query_pos=None,
):
# q = k = self.with_pos_embed(tgt, query_pos)
tgt2 = self.self_attn(tgt, tgt, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask)[0]
tgt = tgt + self.dropout1(tgt2)
tgt = self.norm1(tgt)
tgt2 = self.multihead_attn(
query=tgt, key=memory, value=memory, attn_mask=memory_mask, key_padding_mask=memory_key_padding_mask
)[0]
tgt = tgt + self.dropout2(tgt2)
tgt = self.norm2(tgt)
# tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt, style))), style)
tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt, style))))
tgt = tgt + self.dropout3(tgt2)
tgt = self.norm3(tgt)
return tgt
# def forward_pre(
# self,
# tgt,
# memory,
# tgt_mask=None,
# memory_mask=None,
# tgt_key_padding_mask=None,
# memory_key_padding_mask=None,
# pos=None,
# query_pos=None,
# ):
# tgt2 = self.norm1(tgt)
# # q = k = self.with_pos_embed(tgt2, query_pos)
# tgt2 = self.self_attn(tgt2, tgt2, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask)[0]
# tgt = tgt + self.dropout1(tgt2)
# tgt2 = self.norm2(tgt)
# tgt2 = self.multihead_attn(
# query=tgt2, key=memory, value=memory, attn_mask=memory_mask, key_padding_mask=memory_key_padding_mask
# )[0]
# tgt = tgt + self.dropout2(tgt2)
# tgt2 = self.norm3(tgt)
# tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
# tgt = tgt + self.dropout3(tgt2)
# return tgt
def forward(
self,
tgt,
memory,
style,
tgt_mask=None,
memory_mask=None,
tgt_key_padding_mask=None,
memory_key_padding_mask=None,
pos=None,
query_pos=None,
):
if self.normalize_before:
raise NotImplementedError
# return self.forward_pre(
# tgt, memory, tgt_mask, memory_mask, tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos
# )
return self.forward_post(
tgt, memory, style, tgt_mask, memory_mask, tgt_key_padding_mask, memory_key_padding_mask, pos, query_pos
)
class DynamicFCDecoder(nn.Module):
def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
super().__init__() | self.layers = _get_clones(decoder_layer, num_layers) | 1 | 2023-12-28 05:39:31+00:00 | 2k |
jiawei-ren/dreamgaussian4d | diffusers/src/diffusers/models/activations.py | [
{
"identifier": "USE_PEFT_BACKEND",
"path": "diffusers/src/diffusers/utils/constants.py",
"snippet": "USE_PEFT_BACKEND = _required_peft_version and _required_transformers_version"
},
{
"identifier": "LoRACompatibleLinear",
"path": "diffusers/src/diffusers/models/lora.py",
"snippet": "class LoRACompatibleLinear(nn.Linear):\n \"\"\"\n A Linear layer that can be used with LoRA.\n \"\"\"\n\n def __init__(self, *args, lora_layer: Optional[LoRALinearLayer] = None, **kwargs):\n super().__init__(*args, **kwargs)\n self.lora_layer = lora_layer\n\n def set_lora_layer(self, lora_layer: Optional[LoRALinearLayer]):\n self.lora_layer = lora_layer\n\n def _fuse_lora(self, lora_scale: float = 1.0, safe_fusing: bool = False):\n if self.lora_layer is None:\n return\n\n dtype, device = self.weight.data.dtype, self.weight.data.device\n\n w_orig = self.weight.data.float()\n w_up = self.lora_layer.up.weight.data.float()\n w_down = self.lora_layer.down.weight.data.float()\n\n if self.lora_layer.network_alpha is not None:\n w_up = w_up * self.lora_layer.network_alpha / self.lora_layer.rank\n\n fused_weight = w_orig + (lora_scale * torch.bmm(w_up[None, :], w_down[None, :])[0])\n\n if safe_fusing and torch.isnan(fused_weight).any().item():\n raise ValueError(\n \"This LoRA weight seems to be broken. \"\n f\"Encountered NaN values when trying to fuse LoRA weights for {self}.\"\n \"LoRA weights will not be fused.\"\n )\n\n self.weight.data = fused_weight.to(device=device, dtype=dtype)\n\n # we can drop the lora layer now\n self.lora_layer = None\n\n # offload the up and down matrices to CPU to not blow the memory\n self.w_up = w_up.cpu()\n self.w_down = w_down.cpu()\n self._lora_scale = lora_scale\n\n def _unfuse_lora(self):\n if not (getattr(self, \"w_up\", None) is not None and getattr(self, \"w_down\", None) is not None):\n return\n\n fused_weight = self.weight.data\n dtype, device = fused_weight.dtype, fused_weight.device\n\n w_up = self.w_up.to(device=device).float()\n w_down = self.w_down.to(device).float()\n\n unfused_weight = fused_weight.float() - (self._lora_scale * torch.bmm(w_up[None, :], w_down[None, :])[0])\n self.weight.data = unfused_weight.to(device=device, dtype=dtype)\n\n self.w_up = None\n self.w_down = None\n\n def forward(self, hidden_states: torch.Tensor, scale: float = 1.0) -> torch.Tensor:\n if self.lora_layer is None:\n out = super().forward(hidden_states)\n return out\n else:\n out = super().forward(hidden_states) + (scale * self.lora_layer(hidden_states))\n return out"
}
] | import torch
import torch.nn.functional as F
from torch import nn
from ..utils import USE_PEFT_BACKEND
from .lora import LoRACompatibleLinear | 1,423 | # coding=utf-8
# Copyright 2023 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ACTIVATION_FUNCTIONS = {
"swish": nn.SiLU(),
"silu": nn.SiLU(),
"mish": nn.Mish(),
"gelu": nn.GELU(),
"relu": nn.ReLU(),
}
def get_activation(act_fn: str) -> nn.Module:
"""Helper function to get activation function from string.
Args:
act_fn (str): Name of activation function.
Returns:
nn.Module: Activation function.
"""
act_fn = act_fn.lower()
if act_fn in ACTIVATION_FUNCTIONS:
return ACTIVATION_FUNCTIONS[act_fn]
else:
raise ValueError(f"Unsupported activation function: {act_fn}")
class GELU(nn.Module):
r"""
GELU activation function with tanh approximation support with `approximate="tanh"`.
Parameters:
dim_in (`int`): The number of channels in the input.
dim_out (`int`): The number of channels in the output.
approximate (`str`, *optional*, defaults to `"none"`): If `"tanh"`, use tanh approximation.
"""
def __init__(self, dim_in: int, dim_out: int, approximate: str = "none"):
super().__init__()
self.proj = nn.Linear(dim_in, dim_out)
self.approximate = approximate
def gelu(self, gate: torch.Tensor) -> torch.Tensor:
if gate.device.type != "mps":
return F.gelu(gate, approximate=self.approximate)
# mps: gelu is not implemented for float16
return F.gelu(gate.to(dtype=torch.float32), approximate=self.approximate).to(dtype=gate.dtype)
def forward(self, hidden_states):
hidden_states = self.proj(hidden_states)
hidden_states = self.gelu(hidden_states)
return hidden_states
class GEGLU(nn.Module):
r"""
A [variant](https://arxiv.org/abs/2002.05202) of the gated linear unit activation function.
Parameters:
dim_in (`int`): The number of channels in the input.
dim_out (`int`): The number of channels in the output.
"""
def __init__(self, dim_in: int, dim_out: int):
super().__init__()
| # coding=utf-8
# Copyright 2023 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ACTIVATION_FUNCTIONS = {
"swish": nn.SiLU(),
"silu": nn.SiLU(),
"mish": nn.Mish(),
"gelu": nn.GELU(),
"relu": nn.ReLU(),
}
def get_activation(act_fn: str) -> nn.Module:
"""Helper function to get activation function from string.
Args:
act_fn (str): Name of activation function.
Returns:
nn.Module: Activation function.
"""
act_fn = act_fn.lower()
if act_fn in ACTIVATION_FUNCTIONS:
return ACTIVATION_FUNCTIONS[act_fn]
else:
raise ValueError(f"Unsupported activation function: {act_fn}")
class GELU(nn.Module):
r"""
GELU activation function with tanh approximation support with `approximate="tanh"`.
Parameters:
dim_in (`int`): The number of channels in the input.
dim_out (`int`): The number of channels in the output.
approximate (`str`, *optional*, defaults to `"none"`): If `"tanh"`, use tanh approximation.
"""
def __init__(self, dim_in: int, dim_out: int, approximate: str = "none"):
super().__init__()
self.proj = nn.Linear(dim_in, dim_out)
self.approximate = approximate
def gelu(self, gate: torch.Tensor) -> torch.Tensor:
if gate.device.type != "mps":
return F.gelu(gate, approximate=self.approximate)
# mps: gelu is not implemented for float16
return F.gelu(gate.to(dtype=torch.float32), approximate=self.approximate).to(dtype=gate.dtype)
def forward(self, hidden_states):
hidden_states = self.proj(hidden_states)
hidden_states = self.gelu(hidden_states)
return hidden_states
class GEGLU(nn.Module):
r"""
A [variant](https://arxiv.org/abs/2002.05202) of the gated linear unit activation function.
Parameters:
dim_in (`int`): The number of channels in the input.
dim_out (`int`): The number of channels in the output.
"""
def __init__(self, dim_in: int, dim_out: int):
super().__init__() | linear_cls = LoRACompatibleLinear if not USE_PEFT_BACKEND else nn.Linear | 1 | 2023-12-28 08:17:40+00:00 | 2k |
Meituan-AutoML/MobileVLM | mobilevlm/model/mobilevlm.py | [
{
"identifier": "build_vision_tower",
"path": "mobilevlm/model/vision_encoder.py",
"snippet": "def build_vision_tower(model_cfg, **kwargs):\n vision_tower = getattr(model_cfg, 'mm_vision_tower', getattr(model_cfg, 'vision_tower', None))\n is_absolute_path_exists = os.path.exists(vision_tower)\n if is_absolute_path_exists or vision_tower.startswith(\"openai\") or vision_tower.startswith(\"laion\"):\n vision_tower_type = getattr(model_cfg, 'vision_tower_type', None)\n if vision_tower_type == \"clip\":\n return CLIPVisionTower(vision_tower, args=model_cfg, **kwargs)\n raise ValueError(f'Unknown vision tower: {vision_tower}')"
},
{
"identifier": "build_vision_projector",
"path": "mobilevlm/model/vision_projector.py",
"snippet": "def build_vision_projector(config, delay_load=False, **kwargs):\n projector_type = getattr(config, 'mm_projector_type', 'linear')\n\n if projector_type == 'linear':\n return nn.Linear(config.mm_hidden_size, config.hidden_size)\n elif projector_type.startswith('mlp'):\n mlp_gelu_match = re.match(r'^mlp(\\d+)x_gelu$', projector_type)\n if mlp_gelu_match:\n mlp_depth = int(mlp_gelu_match.group(1))\n modules = [nn.Linear(config.mm_hidden_size, config.hidden_size)]\n for _ in range(1, mlp_depth):\n modules.append(nn.GELU())\n modules.append(nn.Linear(config.hidden_size, config.hidden_size))\n return nn.Sequential(*modules)\n elif projector_type.startswith('ldpnet'):\n return LDPNetProjector(config)\n raise ValueError(f'Unknown projector type: {projector_type}')"
},
{
"identifier": "IGNORE_INDEX",
"path": "mobilevlm/constants.py",
"snippet": "IGNORE_INDEX = -100"
},
{
"identifier": "IMAGE_TOKEN_INDEX",
"path": "mobilevlm/constants.py",
"snippet": "IMAGE_TOKEN_INDEX = -200"
},
{
"identifier": "DEFAULT_IMAGE_PATCH_TOKEN",
"path": "mobilevlm/constants.py",
"snippet": "DEFAULT_IMAGE_PATCH_TOKEN = \"<im_patch>\""
},
{
"identifier": "DEFAULT_IM_START_TOKEN",
"path": "mobilevlm/constants.py",
"snippet": "DEFAULT_IM_START_TOKEN = \"<im_start>\""
},
{
"identifier": "DEFAULT_IM_END_TOKEN",
"path": "mobilevlm/constants.py",
"snippet": "DEFAULT_IM_END_TOKEN = \"<im_end>\""
}
] | import torch
import torch.nn as nn
from abc import ABC, abstractmethod
from transformers import AutoTokenizer, BitsAndBytesConfig
from mobilevlm.model.vision_encoder import build_vision_tower
from mobilevlm.model.vision_projector import build_vision_projector
from mobilevlm.constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX, \
DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
from mobilevlm.model.mobilellama import MobileLlamaForCausalLM | 1,423 |
class MobileVLMMetaModel:
def __init__(self, config):
super(MobileVLMMetaModel, self).__init__(config)
if hasattr(config, "mm_vision_tower"):
self.vision_tower = build_vision_tower(config, delay_load=False)
self.mm_projector = build_vision_projector(config)
def get_vision_tower(self):
vision_tower = getattr(self, 'vision_tower', None)
if type(vision_tower) is list:
vision_tower = vision_tower[0]
return vision_tower
def initialize_vision_modules(self, model_args, fsdp=None):
mm_vision_select_layer = model_args.mm_vision_select_layer
mm_vision_select_feature = model_args.mm_vision_select_feature
pretrain_mm_mlp_adapter = model_args.pretrain_mm_mlp_adapter
self.config.mm_vision_tower = model_args.vision_tower
self.config.use_mm_proj = True
self.config.mm_projector_type = getattr(model_args, 'mm_projector_type', 'linear')
self.config.mm_vision_select_layer = mm_vision_select_layer
self.config.mm_vision_select_feature = mm_vision_select_feature
# Build VisionTower
vision_tower = build_vision_tower(model_args)
if fsdp is not None and len(fsdp) > 0:
self.vision_tower = [vision_tower]
else:
self.vision_tower = vision_tower
self.config.mm_hidden_size = vision_tower.hidden_size
# Build Vision-Projector
self.mm_projector = build_vision_projector(self.config)
if pretrain_mm_mlp_adapter is not None:
mm_projector_weights = torch.load(pretrain_mm_mlp_adapter, map_location='cpu')
def get_w(weights, keyword):
return {k.split(keyword + '.')[1]: v for k, v in weights.items() if keyword in k}
self.mm_projector.load_state_dict(get_w(mm_projector_weights, 'mm_projector'))
class MobileVLMMetaForCausalLM(ABC):
@abstractmethod
def get_model(self):
pass
def get_vision_tower(self):
return self.get_model().get_vision_tower()
def encode_images(self, images):
image_features = self.get_model().get_vision_tower()(images)
image_features = self.get_model().mm_projector(image_features)
return image_features
def prepare_inputs_labels_for_multimodal(
self, input_ids, attention_mask, past_key_values, labels, images
):
vision_tower = self.get_vision_tower()
if vision_tower is None or images is None or input_ids.shape[1] == 1:
if past_key_values is not None and vision_tower is not None and images is not None and input_ids.shape[1] == 1:
attention_mask = torch.ones((attention_mask.shape[0], past_key_values[-1][-1].shape[-2] + 1), dtype=attention_mask.dtype, device=attention_mask.device)
return input_ids, attention_mask, past_key_values, None, labels
if type(images) is list or images.ndim == 5:
concat_images = torch.cat([image for image in images], dim=0)
image_features = self.encode_images(concat_images)
split_sizes = [image.shape[0] for image in images]
image_features = torch.split(image_features, split_sizes, dim=0)
image_features = [x.flatten(0, 1) for x in image_features]
else:
image_features = self.encode_images(images)
new_input_embeds = []
new_labels = [] if labels is not None else None
cur_image_idx = 0
for batch_idx, cur_input_ids in enumerate(input_ids):
|
class MobileVLMMetaModel:
def __init__(self, config):
super(MobileVLMMetaModel, self).__init__(config)
if hasattr(config, "mm_vision_tower"):
self.vision_tower = build_vision_tower(config, delay_load=False)
self.mm_projector = build_vision_projector(config)
def get_vision_tower(self):
vision_tower = getattr(self, 'vision_tower', None)
if type(vision_tower) is list:
vision_tower = vision_tower[0]
return vision_tower
def initialize_vision_modules(self, model_args, fsdp=None):
mm_vision_select_layer = model_args.mm_vision_select_layer
mm_vision_select_feature = model_args.mm_vision_select_feature
pretrain_mm_mlp_adapter = model_args.pretrain_mm_mlp_adapter
self.config.mm_vision_tower = model_args.vision_tower
self.config.use_mm_proj = True
self.config.mm_projector_type = getattr(model_args, 'mm_projector_type', 'linear')
self.config.mm_vision_select_layer = mm_vision_select_layer
self.config.mm_vision_select_feature = mm_vision_select_feature
# Build VisionTower
vision_tower = build_vision_tower(model_args)
if fsdp is not None and len(fsdp) > 0:
self.vision_tower = [vision_tower]
else:
self.vision_tower = vision_tower
self.config.mm_hidden_size = vision_tower.hidden_size
# Build Vision-Projector
self.mm_projector = build_vision_projector(self.config)
if pretrain_mm_mlp_adapter is not None:
mm_projector_weights = torch.load(pretrain_mm_mlp_adapter, map_location='cpu')
def get_w(weights, keyword):
return {k.split(keyword + '.')[1]: v for k, v in weights.items() if keyword in k}
self.mm_projector.load_state_dict(get_w(mm_projector_weights, 'mm_projector'))
class MobileVLMMetaForCausalLM(ABC):
@abstractmethod
def get_model(self):
pass
def get_vision_tower(self):
return self.get_model().get_vision_tower()
def encode_images(self, images):
image_features = self.get_model().get_vision_tower()(images)
image_features = self.get_model().mm_projector(image_features)
return image_features
def prepare_inputs_labels_for_multimodal(
self, input_ids, attention_mask, past_key_values, labels, images
):
vision_tower = self.get_vision_tower()
if vision_tower is None or images is None or input_ids.shape[1] == 1:
if past_key_values is not None and vision_tower is not None and images is not None and input_ids.shape[1] == 1:
attention_mask = torch.ones((attention_mask.shape[0], past_key_values[-1][-1].shape[-2] + 1), dtype=attention_mask.dtype, device=attention_mask.device)
return input_ids, attention_mask, past_key_values, None, labels
if type(images) is list or images.ndim == 5:
concat_images = torch.cat([image for image in images], dim=0)
image_features = self.encode_images(concat_images)
split_sizes = [image.shape[0] for image in images]
image_features = torch.split(image_features, split_sizes, dim=0)
image_features = [x.flatten(0, 1) for x in image_features]
else:
image_features = self.encode_images(images)
new_input_embeds = []
new_labels = [] if labels is not None else None
cur_image_idx = 0
for batch_idx, cur_input_ids in enumerate(input_ids): | if (cur_input_ids == IMAGE_TOKEN_INDEX).sum() == 0: | 3 | 2023-12-29 03:35:49+00:00 | 2k |
kinggongzilla/ai-clone-whatsapp | utils/config_utils.py | [
{
"identifier": "datasets",
"path": "configs/datasets.py",
"snippet": "class custom_dataset:"
},
{
"identifier": "lora_config",
"path": "configs/peft.py",
"snippet": "class lora_config:\n r: int=8\n lora_alpha: int=32\n target_modules: List[str] = field(default_factory=lambda: [\"q_proj\", \"v_proj\"])\n bias= \"none\"\n task_type: str= \"CAUSAL_LM\"\n lora_dropout: float=0.05\n inference_mode: bool = False"
},
{
"identifier": "llama_adapter_config",
"path": "configs/peft.py",
"snippet": "class llama_adapter_config:\n adapter_len: int= 10\n adapter_layers: int= 30\n task_type: str= \"CAUSAL_LM\""
},
{
"identifier": "prefix_config",
"path": "configs/peft.py",
"snippet": "class prefix_config:\n num_virtual_tokens: int=30\n task_type: str= \"CAUSAL_LM\" "
},
{
"identifier": "train_config",
"path": "configs/training.py",
"snippet": "class train_config:\n whatsapp_username: str=\"\" # your own whatsapp user name as it is in the chat .txt files\n model_name: str=\"mistralai/Mistral-7B-Instruct-v0.2\"\n enable_fsdp: bool=False\n low_cpu_fsdp: bool=False\n run_validation: bool=False\n batch_size_training: int=1\n batching_strategy: str=\"packing\" #alternative: padding\n context_length: int=4096\n gradient_accumulation_steps: int=1\n gradient_clipping: bool = False\n gradient_clipping_threshold: float = 1.0\n num_epochs: int=1\n num_workers_dataloader: int=1\n lr: float=1e-4\n weight_decay: float=0.0\n gamma: float= 0.85\n seed: int=42\n use_fp16: bool=True\n mixed_precision: bool=True\n val_batch_size: int=1\n dataset = \"custom_dataset\"\n data_dir: str = \"data/preprocessing/processed_chats\"\n peft_method: str = \"lora\" # None , llama_adapter, prefix\n use_peft: bool=True\n output_dir: str = \"checkpoints\"\n freeze_layers: bool = False\n num_freeze_layers: int = 1\n quantization: bool = True\n one_gpu: bool = False\n save_model: bool = True\n dist_checkpoint_root_folder: str=\"PATH/to/save/FSDP/model\" # will be used if using FSDP\n dist_checkpoint_folder: str=\"fine-tuned\" # will be used if using FSDP\n save_optimizer: bool=False # will be used if using FSDP\n use_fast_kernels: bool = False # Enable using SDPA from PyTroch Accelerated Transformers, make use Flash Attention and Xformer memory-efficient kernels"
},
{
"identifier": "LengthBasedBatchSampler",
"path": "data/sampler.py",
"snippet": "class LengthBasedBatchSampler(torch.utils.data.BatchSampler):\n def __init__(self, data_source, batch_size: int, drop_last: bool, shuffle: bool=True) -> None:\n if isinstance(next(iter(data_source)), dict):\n first_key = next(iter(next(iter(data_source)).keys()))\n self.lengths = [len(d[first_key]) for d in data_source]\n else:\n self.lengths = [len(d) for d in data_source]\n self.batch_size = batch_size\n self.drop_last = drop_last\n self.shuffle = shuffle\n\n def __iter__(self):\n ids = np.argsort(self.lengths)\n if self.drop_last:\n ids = ids[:len(ids) // self.batch_size * self.batch_size]\n\n batches = [ids[i:i+self.batch_size] for i in range(0, len(ids), self.batch_size)]\n\n if self.shuffle:\n random.shuffle(batches)\n\n for b in batches:\n yield b\n\n def __len__(self):\n if self.drop_last:\n return len(self.lengths) // self.batch_size\n else:\n return len(self.lengths) // self.batch_size + (len(self.lengths) % self.batch_size > 0)"
},
{
"identifier": "DistributedLengthBasedBatchSampler",
"path": "data/sampler.py",
"snippet": "class DistributedLengthBasedBatchSampler(torch.utils.data.BatchSampler):\n def __init__(self, data_source, batch_size: int, num_replicas: int, rank: int, shuffle: bool = True, seed: int = 0) -> None:\n random.seed(seed)\n self.batch_sampler = LengthBasedBatchSampler(\n data_source, batch_size=batch_size, drop_last=True, shuffle=shuffle\n )\n self.num_replicas = num_replicas\n self.rank = rank\n \n def __iter__(self):\n max_length = len(self.batch_sampler) // self.num_replicas * self.num_replicas\n return islice(self.batch_sampler, self.rank, max_length, self.num_replicas)\n \n def __len__(self):\n return len(self.batch_sampler) // self.num_replicas"
},
{
"identifier": "DATASET_PREPROC",
"path": "utils/dataset_utils.py",
"snippet": "DATASET_PREPROC = {\n \"custom_dataset\": get_custom_dataset,\n}"
}
] | import inspect
import torch.distributed as dist
from dataclasses import asdict
from torch.utils.data import DistributedSampler
from peft import (
LoraConfig,
AdaptionPromptConfig,
PrefixTuningConfig,
)
from transformers import default_data_collator
from transformers.data import DataCollatorForSeq2Seq
from configs import datasets, lora_config, llama_adapter_config, prefix_config, train_config
from data.sampler import LengthBasedBatchSampler, DistributedLengthBasedBatchSampler
from utils.dataset_utils import DATASET_PREPROC | 1,507 | # Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
def update_config(config, **kwargs):
if isinstance(config, (tuple, list)):
for c in config:
update_config(c, **kwargs)
else:
for k, v in kwargs.items():
if hasattr(config, k):
setattr(config, k, v)
elif "." in k:
# allow --some_config.some_param=True
config_name, param_name = k.split(".")
if type(config).__name__ == config_name:
if hasattr(config, param_name):
setattr(config, param_name, v)
else:
# In case of specialized config we can warm user
print(f"Warning: {config_name} does not accept parameter: {k}")
elif isinstance(config, train_config):
print(f"Warning: unknown parameter {k}")
def generate_peft_config(train_config, kwargs):
| # Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
def update_config(config, **kwargs):
if isinstance(config, (tuple, list)):
for c in config:
update_config(c, **kwargs)
else:
for k, v in kwargs.items():
if hasattr(config, k):
setattr(config, k, v)
elif "." in k:
# allow --some_config.some_param=True
config_name, param_name = k.split(".")
if type(config).__name__ == config_name:
if hasattr(config, param_name):
setattr(config, param_name, v)
else:
# In case of specialized config we can warm user
print(f"Warning: {config_name} does not accept parameter: {k}")
elif isinstance(config, train_config):
print(f"Warning: unknown parameter {k}")
def generate_peft_config(train_config, kwargs): | configs = (lora_config, llama_adapter_config, prefix_config) | 1 | 2023-12-28 00:02:08+00:00 | 2k |
FoundationVision/UniRef | projects/UniRef/uniref/models/deformable_detr/matcher.py | [
{
"identifier": "box_cxcywh_to_xyxy",
"path": "projects/UniRef/uniref/util/box_ops.py",
"snippet": "def box_cxcywh_to_xyxy(x):\n # print('box:\\n', x)\n\n x_c, y_c, w, h = x.unbind(-1)\n b = [(x_c - 0.5 * w), (y_c - 0.5 * h),\n (x_c + 0.5 * w), (y_c + 0.5 * h)]\n return torch.stack(b, dim=-1)"
},
{
"identifier": "generalized_box_iou",
"path": "projects/UniRef/uniref/util/box_ops.py",
"snippet": "def generalized_box_iou(boxes1, boxes2):\n \"\"\"\n Generalized IoU from https://giou.stanford.edu/\n\n The boxes should be in [x0, y0, x1, y1] format\n\n Returns a [N, M] pairwise matrix, where N = len(boxes1)\n and M = len(boxes2)\n \"\"\"\n # degenerate boxes gives inf / nan results\n # so do an early check\n\n assert (boxes1[:, 2:] >= boxes1[:, :2]).all()\n assert (boxes2[:, 2:] >= boxes2[:, :2]).all()\n iou, union = box_iou(boxes1, boxes2)\n\n lt = torch.min(boxes1[:, None, :2], boxes2[:, :2])\n rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])\n\n wh = (rb - lt).clamp(min=0) # [N,M,2]\n area = wh[:, :, 0] * wh[:, :, 1]\n\n return iou - (area - union) / (area+1e-7)"
}
] | import torch
import torch.nn.functional as F
import torchvision.ops as ops
from scipy.optimize import linear_sum_assignment
from torch import nn
from ...util.box_ops import box_cxcywh_to_xyxy, generalized_box_iou | 1,206 | # ------------------------------------------------------------------------
# Deformable DETR
# Copyright (c) 2020 SenseTime. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# ------------------------------------------------------------------------
# Modified from DETR (https://github.com/facebookresearch/detr)
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# ------------------------------------------------------------------------
"""
Modules to compute the matching cost and solve the corresponding LSAP.
"""
class HungarianMatcher(nn.Module):
"""This class computes an assignment between the targets and the predictions of the network
For efficiency reasons, the targets don't include the no_object. Because of this, in general,
there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
while the others are un-matched (and thus treated as non-objects).
"""
def __init__(self,
cost_class: float = 1,
cost_bbox: float = 1,
cost_giou: float = 1):
"""Creates the matcher
Params:
cost_class: This is the relative weight of the classification error in the matching cost
cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost
cost_giou: This is the relative weight of the giou loss of the bounding box in the matching cost
"""
super().__init__()
self.cost_class = cost_class
self.cost_bbox = cost_bbox
self.cost_giou = cost_giou
assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, "all costs cant be 0"
def forward_ota(self, outputs, targets):
""" simOTA for detr
"""
with torch.no_grad():
bs, num_queries = outputs["pred_logits"].shape[:2]
out_prob = outputs["pred_logits"].sigmoid()
out_bbox = outputs["pred_boxes"] # 跳过frame 维度
indices = []
matched_ids = []
for batch_idx in range(bs):
bz_boxes = out_bbox[batch_idx] #[300,4]
bz_out_prob = out_prob[batch_idx]
bz_tgt_ids = targets[batch_idx]["labels"]
num_insts = len(bz_tgt_ids)
bz_gtboxs = targets[batch_idx]['boxes'].reshape(num_insts,4) #[num_gt, 4]
fg_mask, is_in_boxes_and_center = \
self.get_in_boxes_info(bz_boxes,bz_gtboxs,expanded_strides=32)
pair_wise_ious = ops.box_iou(box_cxcywh_to_xyxy(bz_boxes), box_cxcywh_to_xyxy(bz_gtboxs))
# pair_wise_ious_loss = -torch.log(pair_wise_ious + 1e-8)
# Compute the classification cost.
alpha = 0.25
gamma = 2.0
neg_cost_class = (1 - alpha) * (bz_out_prob ** gamma) * (-(1 - bz_out_prob + 1e-8).log())
pos_cost_class = alpha * ((1 - bz_out_prob) ** gamma) * (-(bz_out_prob + 1e-8).log())
cost_class = pos_cost_class[:, bz_tgt_ids] - neg_cost_class[:, bz_tgt_ids]
| # ------------------------------------------------------------------------
# Deformable DETR
# Copyright (c) 2020 SenseTime. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# ------------------------------------------------------------------------
# Modified from DETR (https://github.com/facebookresearch/detr)
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# ------------------------------------------------------------------------
"""
Modules to compute the matching cost and solve the corresponding LSAP.
"""
class HungarianMatcher(nn.Module):
"""This class computes an assignment between the targets and the predictions of the network
For efficiency reasons, the targets don't include the no_object. Because of this, in general,
there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
while the others are un-matched (and thus treated as non-objects).
"""
def __init__(self,
cost_class: float = 1,
cost_bbox: float = 1,
cost_giou: float = 1):
"""Creates the matcher
Params:
cost_class: This is the relative weight of the classification error in the matching cost
cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost
cost_giou: This is the relative weight of the giou loss of the bounding box in the matching cost
"""
super().__init__()
self.cost_class = cost_class
self.cost_bbox = cost_bbox
self.cost_giou = cost_giou
assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, "all costs cant be 0"
def forward_ota(self, outputs, targets):
""" simOTA for detr
"""
with torch.no_grad():
bs, num_queries = outputs["pred_logits"].shape[:2]
out_prob = outputs["pred_logits"].sigmoid()
out_bbox = outputs["pred_boxes"] # 跳过frame 维度
indices = []
matched_ids = []
for batch_idx in range(bs):
bz_boxes = out_bbox[batch_idx] #[300,4]
bz_out_prob = out_prob[batch_idx]
bz_tgt_ids = targets[batch_idx]["labels"]
num_insts = len(bz_tgt_ids)
bz_gtboxs = targets[batch_idx]['boxes'].reshape(num_insts,4) #[num_gt, 4]
fg_mask, is_in_boxes_and_center = \
self.get_in_boxes_info(bz_boxes,bz_gtboxs,expanded_strides=32)
pair_wise_ious = ops.box_iou(box_cxcywh_to_xyxy(bz_boxes), box_cxcywh_to_xyxy(bz_gtboxs))
# pair_wise_ious_loss = -torch.log(pair_wise_ious + 1e-8)
# Compute the classification cost.
alpha = 0.25
gamma = 2.0
neg_cost_class = (1 - alpha) * (bz_out_prob ** gamma) * (-(1 - bz_out_prob + 1e-8).log())
pos_cost_class = alpha * ((1 - bz_out_prob) ** gamma) * (-(bz_out_prob + 1e-8).log())
cost_class = pos_cost_class[:, bz_tgt_ids] - neg_cost_class[:, bz_tgt_ids] | cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(bz_boxes), box_cxcywh_to_xyxy(bz_gtboxs)) | 1 | 2023-12-22 13:31:33+00:00 | 2k |
xhuangcv/humannorm | threestudio/models/materials/neural_radiance_material.py | [
{
"identifier": "BaseMaterial",
"path": "threestudio/models/materials/base.py",
"snippet": "class BaseMaterial(BaseModule):\n @dataclass\n class Config(BaseModule.Config):\n pass\n\n cfg: Config\n requires_normal: bool = False\n requires_tangent: bool = False\n\n def configure(self):\n pass\n\n def forward(self, *args, **kwargs) -> Float[Tensor, \"*B 3\"]:\n raise NotImplementedError\n\n def export(self, *args, **kwargs) -> Dict[str, Any]:\n return {}"
},
{
"identifier": "get_encoding",
"path": "threestudio/models/networks.py",
"snippet": "def get_encoding(n_input_dims: int, config) -> nn.Module:\n # input suppose to be range [0, 1]\n encoding: nn.Module\n if config.otype == \"ProgressiveBandFrequency\":\n encoding = ProgressiveBandFrequency(n_input_dims, config_to_primitive(config))\n elif config.otype == \"ProgressiveBandHashGrid\":\n encoding = ProgressiveBandHashGrid(n_input_dims, config_to_primitive(config))\n else:\n encoding = TCNNEncoding(n_input_dims, config_to_primitive(config))\n encoding = CompositeEncoding(\n encoding,\n include_xyz=config.get(\"include_xyz\", False),\n xyz_scale=2.0,\n xyz_offset=-1.0,\n ) # FIXME: hard coded\n return encoding"
},
{
"identifier": "get_mlp",
"path": "threestudio/models/networks.py",
"snippet": "def get_mlp(n_input_dims, n_output_dims, config) -> nn.Module:\n network: nn.Module\n if config.otype == \"VanillaMLP\":\n network = VanillaMLP(n_input_dims, n_output_dims, config_to_primitive(config))\n elif config.otype == \"SphereInitVanillaMLP\":\n network = SphereInitVanillaMLP(\n n_input_dims, n_output_dims, config_to_primitive(config)\n )\n else:\n assert (\n config.get(\"sphere_init\", False) is False\n ), \"sphere_init=True only supported by VanillaMLP\"\n network = TCNNNetwork(n_input_dims, n_output_dims, config_to_primitive(config))\n return network"
},
{
"identifier": "dot",
"path": "threestudio/utils/ops.py",
"snippet": "def dot(x, y):\n return torch.sum(x * y, -1, keepdim=True)"
},
{
"identifier": "get_activation",
"path": "threestudio/utils/ops.py",
"snippet": "def get_activation(name) -> Callable:\n if name is None:\n return lambda x: x\n name = name.lower()\n if name == \"none\":\n return lambda x: x\n elif name == \"lin2srgb\":\n return lambda x: torch.where(\n x > 0.0031308,\n torch.pow(torch.clamp(x, min=0.0031308), 1.0 / 2.4) * 1.055 - 0.055,\n 12.92 * x,\n ).clamp(0.0, 1.0)\n elif name == \"exp\":\n return lambda x: torch.exp(x)\n elif name == \"shifted_exp\":\n return lambda x: torch.exp(x - 1.0)\n elif name == \"trunc_exp\":\n return trunc_exp\n elif name == \"shifted_trunc_exp\":\n return lambda x: trunc_exp(x - 1.0)\n elif name == \"sigmoid\":\n return lambda x: torch.sigmoid(x)\n elif name == \"tanh\":\n return lambda x: torch.tanh(x)\n elif name == \"shifted_softplus\":\n return lambda x: F.softplus(x - 1.0)\n elif name == \"scale_-11_01\":\n return lambda x: x * 0.5 + 0.5\n else:\n try:\n return getattr(F, name)\n except AttributeError:\n raise ValueError(f\"Unknown activation function: {name}\")"
}
] | import random
import torch
import torch.nn as nn
import torch.nn.functional as F
import threestudio
from dataclasses import dataclass, field
from threestudio.models.materials.base import BaseMaterial
from threestudio.models.networks import get_encoding, get_mlp
from threestudio.utils.ops import dot, get_activation
from threestudio.utils.typing import * | 1,149 |
@threestudio.register("neural-radiance-material")
class NeuralRadianceMaterial(BaseMaterial):
@dataclass
class Config(BaseMaterial.Config):
input_feature_dims: int = 8
color_activation: str = "sigmoid"
dir_encoding_config: dict = field(
default_factory=lambda: {"otype": "SphericalHarmonics", "degree": 3}
)
mlp_network_config: dict = field(
default_factory=lambda: {
"otype": "FullyFusedMLP",
"activation": "ReLU",
"n_neurons": 16,
"n_hidden_layers": 2,
}
)
cfg: Config
def configure(self) -> None:
|
@threestudio.register("neural-radiance-material")
class NeuralRadianceMaterial(BaseMaterial):
@dataclass
class Config(BaseMaterial.Config):
input_feature_dims: int = 8
color_activation: str = "sigmoid"
dir_encoding_config: dict = field(
default_factory=lambda: {"otype": "SphericalHarmonics", "degree": 3}
)
mlp_network_config: dict = field(
default_factory=lambda: {
"otype": "FullyFusedMLP",
"activation": "ReLU",
"n_neurons": 16,
"n_hidden_layers": 2,
}
)
cfg: Config
def configure(self) -> None: | self.encoding = get_encoding(3, self.cfg.dir_encoding_config) | 1 | 2023-12-23 12:37:48+00:00 | 2k |
jianchang512/stt | start.py | [
{
"identifier": "cfg",
"path": "stslib/cfg.py",
"snippet": "LANG = \"en\" if locale.getdefaultlocale()[0].split('_')[0].lower() != 'zh' else \"zh\"\nROOT_DIR = os.getcwd()\nMODEL_DIR = os.path.join(ROOT_DIR, 'models')\nSTATIC_DIR = os.path.join(ROOT_DIR, 'static')\nTMP_DIR = os.path.join(STATIC_DIR, 'tmp')"
},
{
"identifier": "tool",
"path": "stslib/tool.py",
"snippet": "def runffmpeg(arg):\ndef checkupdate():\ndef openweb(web_address):\ndef ms_to_time_string(*, ms=0, seconds=None):"
},
{
"identifier": "ROOT_DIR",
"path": "stslib/cfg.py",
"snippet": "ROOT_DIR = os.getcwd()"
}
] | import logging
import re
import threading
import sys
import torch
import os
from flask import Flask, request, render_template, jsonify, send_from_directory
from gevent.pywsgi import WSGIServer, WSGIHandler, LoggingLogAdapter
from logging.handlers import RotatingFileHandler
from stslib import cfg, tool
from stslib.cfg import ROOT_DIR
from faster_whisper import WhisperModel | 836 |
device = "cuda" if torch.cuda.is_available() else "cpu"
class CustomRequestHandler(WSGIHandler):
def log_request(self):
pass
# 配置日志
# 禁用 Werkzeug 默认的日志处理器
log = logging.getLogger('werkzeug')
log.handlers[:] = []
log.setLevel(logging.WARNING)
app = Flask(__name__, static_folder=os.path.join(ROOT_DIR, 'static'), static_url_path='/static',
template_folder=os.path.join(ROOT_DIR, 'templates'))
root_log = logging.getLogger() # Flask的根日志记录器
root_log.handlers = []
root_log.setLevel(logging.WARNING)
# 配置日志
app.logger.setLevel(logging.WARNING) # 设置日志级别为 INFO
# 创建 RotatingFileHandler 对象,设置写入的文件路径和大小限制
file_handler = RotatingFileHandler(os.path.join(ROOT_DIR, 'sts.log'), maxBytes=1024 * 1024, backupCount=5)
# 创建日志的格式
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# 设置文件处理器的级别和格式
file_handler.setLevel(logging.WARNING)
file_handler.setFormatter(formatter)
# 将文件处理器添加到日志记录器中
app.logger.addHandler(file_handler)
@app.route('/static/<path:filename>')
def static_files(filename):
return send_from_directory(app.config['STATIC_FOLDER'], filename)
@app.route('/')
def index():
return render_template("index.html",
cuda=cfg.cuda,
lang_code=cfg.lang_code,
language=cfg.LANG,
root_dir=ROOT_DIR.replace('\\', '/'))
# 上传音频
@app.route('/upload', methods=['POST'])
def upload():
try:
# 获取上传的文件
audio_file = request.files['audio']
# 如果是mp4
noextname, ext = os.path.splitext(audio_file.filename)
ext = ext.lower()
# 如果是视频,先分离
wav_file = os.path.join(cfg.TMP_DIR, f'{noextname}.wav')
if os.path.exists(wav_file) and os.path.getsize(wav_file) > 0:
return jsonify({'code': 0, 'msg': cfg.transobj['lang1'], "data": os.path.basename(wav_file)})
msg = ""
if ext in ['.mp4', '.mov', '.avi', '.mkv', '.mpeg', '.mp3', '.flac']:
video_file = os.path.join(cfg.TMP_DIR, f'{noextname}{ext}')
audio_file.save(video_file)
params = [
"-i",
video_file,
]
if ext not in ['.mp3', '.flac']:
params.append('-vn')
params.append(wav_file)
|
device = "cuda" if torch.cuda.is_available() else "cpu"
class CustomRequestHandler(WSGIHandler):
def log_request(self):
pass
# 配置日志
# 禁用 Werkzeug 默认的日志处理器
log = logging.getLogger('werkzeug')
log.handlers[:] = []
log.setLevel(logging.WARNING)
app = Flask(__name__, static_folder=os.path.join(ROOT_DIR, 'static'), static_url_path='/static',
template_folder=os.path.join(ROOT_DIR, 'templates'))
root_log = logging.getLogger() # Flask的根日志记录器
root_log.handlers = []
root_log.setLevel(logging.WARNING)
# 配置日志
app.logger.setLevel(logging.WARNING) # 设置日志级别为 INFO
# 创建 RotatingFileHandler 对象,设置写入的文件路径和大小限制
file_handler = RotatingFileHandler(os.path.join(ROOT_DIR, 'sts.log'), maxBytes=1024 * 1024, backupCount=5)
# 创建日志的格式
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# 设置文件处理器的级别和格式
file_handler.setLevel(logging.WARNING)
file_handler.setFormatter(formatter)
# 将文件处理器添加到日志记录器中
app.logger.addHandler(file_handler)
@app.route('/static/<path:filename>')
def static_files(filename):
return send_from_directory(app.config['STATIC_FOLDER'], filename)
@app.route('/')
def index():
return render_template("index.html",
cuda=cfg.cuda,
lang_code=cfg.lang_code,
language=cfg.LANG,
root_dir=ROOT_DIR.replace('\\', '/'))
# 上传音频
@app.route('/upload', methods=['POST'])
def upload():
try:
# 获取上传的文件
audio_file = request.files['audio']
# 如果是mp4
noextname, ext = os.path.splitext(audio_file.filename)
ext = ext.lower()
# 如果是视频,先分离
wav_file = os.path.join(cfg.TMP_DIR, f'{noextname}.wav')
if os.path.exists(wav_file) and os.path.getsize(wav_file) > 0:
return jsonify({'code': 0, 'msg': cfg.transobj['lang1'], "data": os.path.basename(wav_file)})
msg = ""
if ext in ['.mp4', '.mov', '.avi', '.mkv', '.mpeg', '.mp3', '.flac']:
video_file = os.path.join(cfg.TMP_DIR, f'{noextname}{ext}')
audio_file.save(video_file)
params = [
"-i",
video_file,
]
if ext not in ['.mp3', '.flac']:
params.append('-vn')
params.append(wav_file) | rs = tool.runffmpeg(params) | 1 | 2023-12-28 16:02:55+00:00 | 2k |
jesenzhang/ComfyUI_StreamDiffusion | streamdiffusion/pipeline.py | [
{
"identifier": "SimilarImageFilter",
"path": "streamdiffusion/image_filter.py",
"snippet": "class SimilarImageFilter:\n def __init__(self, threshold: float = 0.98, max_skip_frame: float = 10) -> None:\n self.threshold = threshold\n self.prev_tensor = None\n self.cos = torch.nn.CosineSimilarity(dim=0, eps=1e-6)\n self.max_skip_frame = max_skip_frame\n self.skip_count = 0\n\n def __call__(self, x: torch.Tensor) -> Optional[torch.Tensor]:\n if self.prev_tensor is None:\n self.prev_tensor = x.detach().clone()\n return x\n else:\n cos_sim = self.cos(self.prev_tensor.reshape(-1), x.reshape(-1)).item()\n sample = random.uniform(0, 1)\n if self.threshold >= 1:\n skip_prob = 0\n else:\n skip_prob = max(0, 1 - (1 - cos_sim) / (1 - self.threshold))\n\n # not skip frame\n if skip_prob < sample:\n self.prev_tensor = x.detach().clone()\n return x\n # skip frame\n else:\n if self.skip_count > self.max_skip_frame:\n self.skip_count = 0\n self.prev_tensor = x.detach().clone()\n return x\n else:\n self.skip_count += 1\n return None\n\n def set_threshold(self, threshold: float) -> None:\n self.threshold = threshold\n \n def set_max_skip_frame(self, max_skip_frame: float) -> None:\n self.max_skip_frame = max_skip_frame"
},
{
"identifier": "postprocess_image",
"path": "streamdiffusion/image_utils.py",
"snippet": "def postprocess_image(\n image: torch.Tensor,\n output_type: str = \"pil\",\n do_denormalize: Optional[List[bool]] = None,\n) -> Union[torch.Tensor, np.ndarray, PIL.Image.Image]:\n if not isinstance(image, torch.Tensor):\n raise ValueError(\n f\"Input for postprocessing is in incorrect format: {type(image)}. We only support pytorch tensor\"\n )\n\n if output_type == \"latent\":\n return image\n\n do_normalize_flg = True\n if do_denormalize is None:\n do_denormalize = [do_normalize_flg] * image.shape[0]\n\n image = torch.stack(\n [\n denormalize(image[i]) if do_denormalize[i] else image[i]\n for i in range(image.shape[0])\n ]\n )\n\n if output_type == \"pt\":\n return image\n\n image = pt_to_numpy(image)\n\n if output_type == \"np\":\n return image\n\n if output_type == \"pil\":\n return numpy_to_pil(image)"
}
] | import time
import numpy as np
import PIL.Image
import torch
from typing import List, Optional, Union, Any, Dict, Tuple, Literal
from diffusers import LCMScheduler, StableDiffusionPipeline
from diffusers.image_processor import VaeImageProcessor
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img import (
retrieve_latents,
)
from .image_filter import SimilarImageFilter
from .image_utils import postprocess_image | 1,162 |
class StreamDiffusion:
def __init__(
self,
pipe: StableDiffusionPipeline,
t_index_list: List[int],
torch_dtype: torch.dtype = torch.float16,
width: int = 512,
height: int = 512,
do_add_noise: bool = True,
use_denoising_batch: bool = True,
frame_buffer_size: int = 1,
cfg_type: Literal["none", "full", "self", "initialize"] = "self",
) -> None:
self.device = pipe.device
self.dtype = torch_dtype
self.generator = None
self.height = height
self.width = width
self.latent_height = int(height // pipe.vae_scale_factor)
self.latent_width = int(width // pipe.vae_scale_factor)
self.frame_bff_size = frame_buffer_size
self.denoising_steps_num = len(t_index_list)
self.cfg_type = cfg_type
if use_denoising_batch:
self.batch_size = self.denoising_steps_num * frame_buffer_size
if self.cfg_type == "initialize":
self.trt_unet_batch_size = (
self.denoising_steps_num + 1
) * self.frame_bff_size
elif self.cfg_type == "full":
self.trt_unet_batch_size = (
2 * self.denoising_steps_num * self.frame_bff_size
)
else:
self.trt_unet_batch_size = self.denoising_steps_num * frame_buffer_size
else:
self.trt_unet_batch_size = self.frame_bff_size
self.batch_size = frame_buffer_size
self.t_list = t_index_list
self.do_add_noise = do_add_noise
self.use_denoising_batch = use_denoising_batch
self.similar_image_filter = False
|
class StreamDiffusion:
def __init__(
self,
pipe: StableDiffusionPipeline,
t_index_list: List[int],
torch_dtype: torch.dtype = torch.float16,
width: int = 512,
height: int = 512,
do_add_noise: bool = True,
use_denoising_batch: bool = True,
frame_buffer_size: int = 1,
cfg_type: Literal["none", "full", "self", "initialize"] = "self",
) -> None:
self.device = pipe.device
self.dtype = torch_dtype
self.generator = None
self.height = height
self.width = width
self.latent_height = int(height // pipe.vae_scale_factor)
self.latent_width = int(width // pipe.vae_scale_factor)
self.frame_bff_size = frame_buffer_size
self.denoising_steps_num = len(t_index_list)
self.cfg_type = cfg_type
if use_denoising_batch:
self.batch_size = self.denoising_steps_num * frame_buffer_size
if self.cfg_type == "initialize":
self.trt_unet_batch_size = (
self.denoising_steps_num + 1
) * self.frame_bff_size
elif self.cfg_type == "full":
self.trt_unet_batch_size = (
2 * self.denoising_steps_num * self.frame_bff_size
)
else:
self.trt_unet_batch_size = self.denoising_steps_num * frame_buffer_size
else:
self.trt_unet_batch_size = self.frame_bff_size
self.batch_size = frame_buffer_size
self.t_list = t_index_list
self.do_add_noise = do_add_noise
self.use_denoising_batch = use_denoising_batch
self.similar_image_filter = False | self.similar_filter = SimilarImageFilter() | 0 | 2023-12-29 09:00:03+00:00 | 2k |
neobundy/MLX-Stable-Diffusion-WebUI | model_inspector.py | [
{
"identifier": "PathConfig",
"path": "stable_diffusion/config.py",
"snippet": "class DiffuserModelPathConfig:\nclass BaseConfig:\nclass AutoencoderConfig(BaseConfig):\nclass CLIPTextModelConfig(BaseConfig):\nclass UNetConfig(BaseConfig):\nclass DiffusionConfig(BaseConfig):\n def __init__(self, model_path: str = \"./diffuser_models\"):\n def unet_config(self):\n def unet(self):\n def scheduler(self):\n def text_encoder_config(self):\n def text_encoder(self):\n def vae_config(self):\n def vae(self):\n def diffusion_config(self):\n def tokenizer_vocab(self):\n def tokenizer_merges(self):\n def __getitem__(self, key):\n def __setitem__(self, key, value):"
},
{
"identifier": "preload_models_from_safetensor_weights",
"path": "stable_diffusion/model_io.py",
"snippet": "_DEBUG = False\ndef _debug_print(*args, **kwargs):\ndef _from_numpy(x):\ndef map_unet_weights(key, value):\ndef map_clip_text_encoder_weights(key, value):\ndef map_vae_weights(key, value):\ndef _flatten(params):\ndef _load_safetensor_weights(mapper, model, weight_file, float16: bool = False):\ndef _check_key(key: str, part: str):\ndef load_unet(key: str = _DEFAULT_MODEL, float16: bool = False):\ndef load_text_encoder(key: str = _DEFAULT_MODEL, float16: bool = False):\ndef load_autoencoder(key: str = _DEFAULT_MODEL, float16: bool = False):\ndef load_diffusion_config(key: str = _DEFAULT_MODEL):\ndef load_tokenizer(key: str = _DEFAULT_MODEL):\ndef load_unet_local(weights_path: str, config_path: str, float16: bool = False):\ndef load_text_encoder_local(weights_path: str, config_path: str, float16: bool = False):\ndef load_autoencoder_local(weights_path: str, config_path: str, float16: bool = False):\ndef load_diffusion_config_local(config_path:str):\ndef load_tokenizer_local(vocab_path: str, merges_path: str):\ndef load_diffuser_model(diffuser_model_path: str, float16: bool = False):"
},
{
"identifier": "_state_dict",
"path": "utils.py",
"snippet": "def _state_dict(model):\n \"\"\"Return the model's state_dict as a dictionary.\"\"\"\n state_dict = {}\n for name, param in model.parameters().items():\n state_dict[name] = param\n return state_dict"
},
{
"identifier": "get_state_dict_from_safetensor",
"path": "utils.py",
"snippet": "def get_state_dict_from_safetensor(checkpoint_path: str):\n \"\"\"Return the state_dict from the checkpoint.\"\"\"\n state_dict = {}\n with safetensor_open(checkpoint_path, framework=\"numpy\") as f:\n # Access the data in the file\n for key in f.keys():\n tensor = f.get_tensor(key)\n state_dict[key] = tensor\n return state_dict"
}
] | from stable_diffusion.config import PathConfig
from stable_diffusion.model_io import preload_models_from_safetensor_weights
from utils import _state_dict
from utils import get_state_dict_from_safetensor | 1,090 |
INSPECTION_FILE = "model_inspection.txt"
NUM_ITEMS = 100
MODEL_FILE = "./models/v2-1_512-ema-pruned.safetensors"
MODEL_FILE1 = "./unet/diffusion_pytorch_model_test.safetensors"
MODEL_FILE2 = "./unet/xxmix9realistic_v40.safetensors"
# Recreate the inspection file at every execution of the script
with open(INSPECTION_FILE, 'w') as f:
pass
def write_to_file(*args, **kwargs):
"""Write the text to the inspection file."""
# Convert the arguments to a string
message = ' '.join(map(str, args))
# Print the message to the console
print(message, **kwargs)
# Open the log file in append mode and write the message
with open(INSPECTION_FILE, 'a') as f:
f.write(message + '\n')
def inspect_model(path_config: PathConfig, keys_only=True):
"""Inspect the contents of the models."""
# Load the models using the provided config and weights paths
unet_model = load_unet_local(path_config.unet_config, MODEL_FILE)
text_encoder_model = load_text_encoder_local(MODEL_FILE)
autoencoder_model = load_autoencoder_local(MODEL_FILE)
diffusion_config = load_diffusion_config_local(path_config.diffusion_config)
tokenizer = load_tokenizer_local(path_config.tokenizer_vocab, path_config.tokenizer_merges)
# Convert the models' state_dict to a dictionary and iterate over it
for model_name, model in zip(["unet", "text_encoder", "autoencoder"], [unet_model, text_encoder_model, autoencoder_model]):
write_to_file("-" * 50)
write_to_file(f"Model: {model_name}")
write_to_file("-" * 50)
|
INSPECTION_FILE = "model_inspection.txt"
NUM_ITEMS = 100
MODEL_FILE = "./models/v2-1_512-ema-pruned.safetensors"
MODEL_FILE1 = "./unet/diffusion_pytorch_model_test.safetensors"
MODEL_FILE2 = "./unet/xxmix9realistic_v40.safetensors"
# Recreate the inspection file at every execution of the script
with open(INSPECTION_FILE, 'w') as f:
pass
def write_to_file(*args, **kwargs):
"""Write the text to the inspection file."""
# Convert the arguments to a string
message = ' '.join(map(str, args))
# Print the message to the console
print(message, **kwargs)
# Open the log file in append mode and write the message
with open(INSPECTION_FILE, 'a') as f:
f.write(message + '\n')
def inspect_model(path_config: PathConfig, keys_only=True):
"""Inspect the contents of the models."""
# Load the models using the provided config and weights paths
unet_model = load_unet_local(path_config.unet_config, MODEL_FILE)
text_encoder_model = load_text_encoder_local(MODEL_FILE)
autoencoder_model = load_autoencoder_local(MODEL_FILE)
diffusion_config = load_diffusion_config_local(path_config.diffusion_config)
tokenizer = load_tokenizer_local(path_config.tokenizer_vocab, path_config.tokenizer_merges)
# Convert the models' state_dict to a dictionary and iterate over it
for model_name, model in zip(["unet", "text_encoder", "autoencoder"], [unet_model, text_encoder_model, autoencoder_model]):
write_to_file("-" * 50)
write_to_file(f"Model: {model_name}")
write_to_file("-" * 50) | for key, value in _state_dict(model).items(): | 2 | 2023-12-25 05:49:34+00:00 | 2k |
ffmemes/ff-backend | src/storage/service.py | [
{
"identifier": "language",
"path": "src/database.py",
"snippet": "DATABASE_URL = str(settings.DATABASE_URL)\nasync def fetch_one(select_query: Select | Insert | Update) -> dict[str, Any] | None:\nasync def fetch_all(select_query: Select | Insert | Update) -> list[dict[str, Any]]:\nasync def execute(select_query: Insert | Update) -> CursorResult:"
},
{
"identifier": "TgChannelPostParsingResult",
"path": "src/storage/parsers/schemas.py",
"snippet": "class TgChannelPostParsingResult(CustomModel):\n post_id: int\n url: str\n content: str | None = None # post text\n media: list[dict] | None = None\n views: int\n date: datetime\n\n mentions: list[str] | None = None # mentioned usernames\n hashtags: list[str] | None = None\n forwarded: dict | None = None\n forwarded_url: str | None = None # url to forwarded post\n link_preview: dict | None = None\n out_links: list[str] | None = None"
},
{
"identifier": "VkGroupPostParsingResult",
"path": "src/storage/parsers/schemas.py",
"snippet": "class VkGroupPostParsingResult(CustomModel):\n post_id: str\n url: str\n content: str | None = None # post text\n media: list[str]\n date: datetime\n views: int\n likes: int\n reposts: int\n comments: int"
},
{
"identifier": "MemeSourceType",
"path": "src/storage/constants.py",
"snippet": "class MemeSourceType(str, Enum):\n TELEGRAM = \"telegram\"\n VK = \"vk\"\n REDDIT = \"reddit\"\n INSTAGRAM = \"instagram\"\n TWITTER = \"twitter\"\n TIKTOK = \"tiktok\"\n USER_UPLOAD = \"user upload\""
},
{
"identifier": "MemeSourceStatus",
"path": "src/storage/constants.py",
"snippet": "class MemeSourceStatus(str, Enum):\n IN_MODERATION = \"in_moderation\"\n PARSING_ENABLED = \"parsing_enabled\"\n PARSING_DISABLED = \"parsing_disabled\""
},
{
"identifier": "MemeType",
"path": "src/storage/constants.py",
"snippet": "class MemeType(str, Enum):\n IMAGE = \"image\"\n ANIMATION = \"animation\"\n VIDEO = \"video\""
},
{
"identifier": "MemeStatus",
"path": "src/storage/constants.py",
"snippet": "class MemeStatus(str, Enum):\n CREATED = \"created\"\n OK = \"ok\"\n DUPLICATE = \"duplicate\"\n AD = \"ad\"\n BROKEN_CONTENT_LINK = \"broken_content_link\"\n \n # TODO: more statuses?\n # IN_MODERATION = \"in_moderation\""
},
{
"identifier": "MEME_RAW_TELEGRAM_MEME_SOURCE_POST_UNIQUE_CONSTRAINT",
"path": "src/storage/constants.py",
"snippet": "MEME_RAW_TELEGRAM_MEME_SOURCE_POST_UNIQUE_CONSTRAINT = \"meme_raw_telegram_meme_source_id_post_id_key\""
},
{
"identifier": "MEME_RAW_VK_MEME_SOURCE_POST_UNIQUE_CONSTRAINT",
"path": "src/storage/constants.py",
"snippet": "MEME_RAW_VK_MEME_SOURCE_POST_UNIQUE_CONSTRAINT = \"meme_raw_vk_meme_source_id_post_id_key\""
}
] | from typing import Any
from datetime import datetime
from sqlalchemy import select, nulls_first, text
from sqlalchemy.dialects.postgresql import insert
from src.database import (
language,
meme,
meme_source,
meme_raw_telegram,
meme_raw_vk,
execute, fetch_one, fetch_all,
)
from src.storage.parsers.schemas import TgChannelPostParsingResult, VkGroupPostParsingResult
from src.storage.constants import (
MemeSourceType,
MemeSourceStatus,
MemeType,
MemeStatus,
MEME_RAW_TELEGRAM_MEME_SOURCE_POST_UNIQUE_CONSTRAINT,
MEME_RAW_VK_MEME_SOURCE_POST_UNIQUE_CONSTRAINT,
) | 1,154 |
async def insert_parsed_posts_from_telegram(
meme_source_id: int,
telegram_posts: list[TgChannelPostParsingResult],
) -> None:
posts = [
post.model_dump() | {"meme_source_id": meme_source_id}
for post in telegram_posts
]
insert_statement = insert(meme_raw_telegram).values(posts)
insert_posts_query = insert_statement.on_conflict_do_update(
constraint=MEME_RAW_TELEGRAM_MEME_SOURCE_POST_UNIQUE_CONSTRAINT,
set_={
"media": insert_statement.excluded.media,
"views": insert_statement.excluded.views,
"updated_at": datetime.utcnow(),
},
)
await execute(insert_posts_query)
async def insert_parsed_posts_from_vk(
meme_source_id: int,
vk_posts: list[VkGroupPostParsingResult],
) -> None:
posts = [
post.model_dump() | {"meme_source_id": meme_source_id}
for post in vk_posts
]
insert_statement = insert(meme_raw_vk).values(posts)
insert_posts_query = insert_statement.on_conflict_do_update(
constraint=MEME_RAW_VK_MEME_SOURCE_POST_UNIQUE_CONSTRAINT,
set_={
"media": insert_statement.excluded.media,
"views": insert_statement.excluded.views,
"likes": insert_statement.excluded.likes,
"reposts": insert_statement.excluded.reposts,
"comments": insert_statement.excluded.comments,
"updated_at": datetime.utcnow(),
},
)
await execute(insert_posts_query)
async def get_telegram_sources_to_parse(limit=10) -> list[dict[str, Any]]:
select_query = (
select(meme_source)
|
async def insert_parsed_posts_from_telegram(
meme_source_id: int,
telegram_posts: list[TgChannelPostParsingResult],
) -> None:
posts = [
post.model_dump() | {"meme_source_id": meme_source_id}
for post in telegram_posts
]
insert_statement = insert(meme_raw_telegram).values(posts)
insert_posts_query = insert_statement.on_conflict_do_update(
constraint=MEME_RAW_TELEGRAM_MEME_SOURCE_POST_UNIQUE_CONSTRAINT,
set_={
"media": insert_statement.excluded.media,
"views": insert_statement.excluded.views,
"updated_at": datetime.utcnow(),
},
)
await execute(insert_posts_query)
async def insert_parsed_posts_from_vk(
meme_source_id: int,
vk_posts: list[VkGroupPostParsingResult],
) -> None:
posts = [
post.model_dump() | {"meme_source_id": meme_source_id}
for post in vk_posts
]
insert_statement = insert(meme_raw_vk).values(posts)
insert_posts_query = insert_statement.on_conflict_do_update(
constraint=MEME_RAW_VK_MEME_SOURCE_POST_UNIQUE_CONSTRAINT,
set_={
"media": insert_statement.excluded.media,
"views": insert_statement.excluded.views,
"likes": insert_statement.excluded.likes,
"reposts": insert_statement.excluded.reposts,
"comments": insert_statement.excluded.comments,
"updated_at": datetime.utcnow(),
},
)
await execute(insert_posts_query)
async def get_telegram_sources_to_parse(limit=10) -> list[dict[str, Any]]:
select_query = (
select(meme_source) | .where(meme_source.c.type == MemeSourceType.TELEGRAM) | 3 | 2023-12-23 12:55:43+00:00 | 2k |
Con6924/SPM | src/configs/prompt.py | [
{
"identifier": "imagenet_templates",
"path": "src/misc/clip_templates.py",
"snippet": ""
},
{
"identifier": "encode_prompts",
"path": "src/engine/train_util.py",
"snippet": "def encode_prompts(\n tokenizer: CLIPTokenizer,\n text_encoder: CLIPTokenizer,\n prompts: list[str],\n return_tokens: bool = False,\n):\n text_tokens = text_tokenize(tokenizer, prompts)\n text_embeddings = text_encode(text_encoder, text_tokens)\n\n if return_tokens:\n return text_embeddings, torch.unique(text_tokens, dim=1)\n return text_embeddings"
}
] | from typing import Literal, Optional, Union
from pathlib import Path
from pydantic import BaseModel, root_validator
from transformers import CLIPTextModel, CLIPTokenizer
from src.misc.clip_templates import imagenet_templates
from src.engine.train_util import encode_prompts
import yaml
import pandas as pd
import random
import torch | 1,147 |
class PromptEmbedsXL:
text_embeds: torch.FloatTensor
pooled_embeds: torch.FloatTensor
def __init__(self, embeds) -> None:
self.text_embeds, self.pooled_embeds = embeds
PROMPT_EMBEDDING = Union[torch.FloatTensor, PromptEmbedsXL]
class PromptEmbedsCache:
prompts: dict[str, PROMPT_EMBEDDING] = {}
def __setitem__(self, __name: str, __value: PROMPT_EMBEDDING) -> None:
self.prompts[__name] = __value
def __getitem__(self, __name: str) -> Optional[PROMPT_EMBEDDING]:
if __name in self.prompts:
return self.prompts[__name]
else:
return None
class PromptSettings(BaseModel): # yaml
target: str
positive: str = None # if None, target will be used
unconditional: str = "" # default is ""
neutral: str = None # if None, unconditional will be used
action: ACTION_TYPES = "erase" # default is "erase"
guidance_scale: float = 1.0 # default is 1.0
resolution: int = 512 # default is 512
dynamic_resolution: bool = False # default is False
batch_size: int = 1 # default is 1
dynamic_crops: bool = False # default is False. only used when model is XL
use_template: bool = False # default is False
la_strength: float = 1000.0
sampling_batch_size: int = 4
seed: int = None
case_number: int = 0
@root_validator(pre=True)
def fill_prompts(cls, values):
keys = values.keys()
if "target" not in keys:
raise ValueError("target must be specified")
if "positive" not in keys:
values["positive"] = values["target"]
if "unconditional" not in keys:
values["unconditional"] = ""
if "neutral" not in keys:
values["neutral"] = values["unconditional"]
return values
class PromptEmbedsPair:
target: PROMPT_EMBEDDING # the concept that do not want to generate
positive: PROMPT_EMBEDDING # generate the concept
unconditional: PROMPT_EMBEDDING # uncondition (default should be empty)
neutral: PROMPT_EMBEDDING # base condition (default should be empty)
use_template: bool = False # use clip template or not
guidance_scale: float
resolution: int
dynamic_resolution: bool
batch_size: int
dynamic_crops: bool
loss_fn: torch.nn.Module
action: ACTION_TYPES
def __init__(
self,
loss_fn: torch.nn.Module,
target: PROMPT_EMBEDDING,
positive: PROMPT_EMBEDDING,
unconditional: PROMPT_EMBEDDING,
neutral: PROMPT_EMBEDDING,
settings: PromptSettings,
) -> None:
self.loss_fn = loss_fn
self.target = target
self.positive = positive
self.unconditional = unconditional
self.neutral = neutral
self.settings = settings
self.use_template = settings.use_template
self.guidance_scale = settings.guidance_scale
self.resolution = settings.resolution
self.dynamic_resolution = settings.dynamic_resolution
self.batch_size = settings.batch_size
self.dynamic_crops = settings.dynamic_crops
self.action = settings.action
self.la_strength = settings.la_strength
self.sampling_batch_size = settings.sampling_batch_size
def _prepare_embeddings(
self,
cache: PromptEmbedsCache,
tokenizer: CLIPTokenizer,
text_encoder: CLIPTextModel,
):
"""
Prepare embeddings for training. When use_template is True, the embeddings will be
        formatted using a template, and then be processed by the model.
"""
if not self.use_template:
return
template = random.choice(imagenet_templates)
target_prompt = template.format(self.settings.target)
if cache[target_prompt]:
self.target = cache[target_prompt]
else:
|
ACTION_TYPES = Literal[
"erase",
"erase_with_la",
]
class PromptEmbedsXL:
text_embeds: torch.FloatTensor
pooled_embeds: torch.FloatTensor
def __init__(self, embeds) -> None:
self.text_embeds, self.pooled_embeds = embeds
PROMPT_EMBEDDING = Union[torch.FloatTensor, PromptEmbedsXL]
class PromptEmbedsCache:
prompts: dict[str, PROMPT_EMBEDDING] = {}
def __setitem__(self, __name: str, __value: PROMPT_EMBEDDING) -> None:
self.prompts[__name] = __value
def __getitem__(self, __name: str) -> Optional[PROMPT_EMBEDDING]:
if __name in self.prompts:
return self.prompts[__name]
else:
return None
class PromptSettings(BaseModel): # yaml
target: str
positive: str = None # if None, target will be used
unconditional: str = "" # default is ""
neutral: str = None # if None, unconditional will be used
action: ACTION_TYPES = "erase" # default is "erase"
guidance_scale: float = 1.0 # default is 1.0
resolution: int = 512 # default is 512
dynamic_resolution: bool = False # default is False
batch_size: int = 1 # default is 1
dynamic_crops: bool = False # default is False. only used when model is XL
use_template: bool = False # default is False
la_strength: float = 1000.0
sampling_batch_size: int = 4
seed: int = None
case_number: int = 0
@root_validator(pre=True)
def fill_prompts(cls, values):
keys = values.keys()
if "target" not in keys:
raise ValueError("target must be specified")
if "positive" not in keys:
values["positive"] = values["target"]
if "unconditional" not in keys:
values["unconditional"] = ""
if "neutral" not in keys:
values["neutral"] = values["unconditional"]
return values
class PromptEmbedsPair:
target: PROMPT_EMBEDDING # the concept that do not want to generate
positive: PROMPT_EMBEDDING # generate the concept
unconditional: PROMPT_EMBEDDING # uncondition (default should be empty)
neutral: PROMPT_EMBEDDING # base condition (default should be empty)
use_template: bool = False # use clip template or not
guidance_scale: float
resolution: int
dynamic_resolution: bool
batch_size: int
dynamic_crops: bool
loss_fn: torch.nn.Module
action: ACTION_TYPES
def __init__(
self,
loss_fn: torch.nn.Module,
target: PROMPT_EMBEDDING,
positive: PROMPT_EMBEDDING,
unconditional: PROMPT_EMBEDDING,
neutral: PROMPT_EMBEDDING,
settings: PromptSettings,
) -> None:
self.loss_fn = loss_fn
self.target = target
self.positive = positive
self.unconditional = unconditional
self.neutral = neutral
self.settings = settings
self.use_template = settings.use_template
self.guidance_scale = settings.guidance_scale
self.resolution = settings.resolution
self.dynamic_resolution = settings.dynamic_resolution
self.batch_size = settings.batch_size
self.dynamic_crops = settings.dynamic_crops
self.action = settings.action
self.la_strength = settings.la_strength
self.sampling_batch_size = settings.sampling_batch_size
def _prepare_embeddings(
self,
cache: PromptEmbedsCache,
tokenizer: CLIPTokenizer,
text_encoder: CLIPTextModel,
):
"""
Prepare embeddings for training. When use_template is True, the embeddings will be
        formatted using a template, and then be processed by the model.
"""
if not self.use_template:
return
template = random.choice(imagenet_templates)
target_prompt = template.format(self.settings.target)
if cache[target_prompt]:
self.target = cache[target_prompt]
else: | self.target = encode_prompts(tokenizer, text_encoder, [target_prompt]) | 1 | 2023-12-26 03:19:16+00:00 | 2k |
dakpinaroglu/Frame2seq | frame2seq/utils/score.py | [
{
"identifier": "residue_constants",
"path": "frame2seq/utils/residue_constants.py",
"snippet": "def load_stereo_chemical_props() -> Tuple[Mapping[str, List[Bond]],\n def make_bond_key(atom1_name, atom2_name):\ndef sequence_to_onehot(\n sequence: str,\n mapping: Mapping[str, int],\n) -> np.ndarray:\ndef _make_standard_atom_mask() -> np.ndarray:\ndef _make_rigid_transformation_4x4(ex, ey, translation):\nAA_TO_ID = {\n 'A': 0,\n 'C': 1,\n 'D': 2,\n 'E': 3,\n 'F': 4,\n 'G': 5,\n 'H': 6,\n 'I': 7,\n 'K': 8,\n 'L': 9,\n 'M': 10,\n 'N': 11,\n 'P': 12,\n 'Q': 13,\n 'R': 14,\n 'S': 15,\n 'T': 16,\n 'V': 17,\n 'W': 18,\n 'Y': 19,\n 'X': 20,\n}\nID_TO_AA = {\n 0: 'A',\n 1: 'C',\n 2: 'D',\n 3: 'E',\n 4: 'F',\n 5: 'G',\n 6: 'H',\n 7: 'I',\n 8: 'K',\n 9: 'L',\n 10: 'M',\n 11: 'N',\n 12: 'P',\n 13: 'Q',\n 14: 'R',\n 15: 'S',\n 16: 'T',\n 17: 'V',\n 18: 'W',\n 19: 'Y',\n 20: 'X',\n}\nSTANDARD_ATOM_MASK = _make_standard_atom_mask()"
},
{
"identifier": "get_neg_pll",
"path": "frame2seq/utils/util.py",
"snippet": "def get_neg_pll(probs, seq):\n seq_probs = torch.gather(probs, 1, seq.unsqueeze(-1)).squeeze(-1)\n neg_pll = -1 * torch.log(seq_probs)\n avg_neg_pll = neg_pll.sum().item() / len(neg_pll)\n return neg_pll, avg_neg_pll"
},
{
"identifier": "read_fasta_file",
"path": "frame2seq/utils/util.py",
"snippet": "def read_fasta_file(fasta_file):\n \"\"\"\n Read a fasta file and return a list of sequences.\n \"\"\"\n with open(fasta_file, 'r') as f:\n lines = f.readlines()\n sequences = []\n for line in lines:\n if line[0] == '>':\n sequences.append(lines[lines.index(line) + 1].strip())\n return sequences"
},
{
"identifier": "get_inference_inputs",
"path": "frame2seq/utils/pdb2input.py",
"snippet": "def get_inference_inputs(pdb_file, chain_id):\n atom_positions, aatype, seq_mask = get_parsed_inputs(pdb_file, chain_id)\n seq_mask = seq_mask.unsqueeze(0)\n aatype = torch.from_numpy(aatype)\n aatype = aatype.unsqueeze(0)\n X = atom_positions\n X = X.unsqueeze(0)\n return seq_mask, aatype, X"
},
{
"identifier": "output_csv",
"path": "frame2seq/utils/pred2output.py",
"snippet": "def output_csv(preds, csv_dir):\n \"\"\"\n Given average negative pseudo-log-likelihoods, write to a csv file.\n \"\"\"\n df = pd.DataFrame(columns=[\n 'PDBID', 'Chain ID', 'Sample Number', 'Scored sequence',\n 'Average negative pseudo-log-likelihood', 'Temperature'\n ],\n data=preds)\n df.to_csv(f\"{csv_dir}/scores.csv\", index=False)"
},
{
"identifier": "output_indiv_csv",
"path": "frame2seq/utils/pred2output.py",
"snippet": "def output_indiv_csv(scores, csv_dir):\n \"\"\"\n Given per-residue negative pseudo-log-likelihoods, write to a csv file.\n \"\"\"\n pdbid = scores['pdbid']\n chain = scores['chain']\n sample = scores['sample']\n res_idx = scores['res_idx']\n neg_pll = scores['neg_pll']\n\n df = pd.DataFrame(\n list(zip(res_idx, neg_pll)),\n columns=['Residue index', 'Negative pseudo-log-likelihood'])\n df.to_csv(f\"{csv_dir}/{pdbid}_{chain}_seq{sample}.csv\", index=False)"
}
] | import os
import torch
from tqdm import tqdm
from frame2seq.utils import residue_constants
from frame2seq.utils.util import get_neg_pll, read_fasta_file
from frame2seq.utils.pdb2input import get_inference_inputs
from frame2seq.utils.pred2output import output_csv, output_indiv_csv | 1,471 |
def score(self, pdb_file, chain_id, fasta_file, save_indiv_neg_pll):
temperature = 1.0
seq_mask, aatype, X = get_inference_inputs(pdb_file, chain_id)
seq_mask = seq_mask.to(self.device)
aatype = aatype.to(self.device)
X = X.to(self.device)
str_form = [residue_constants.ID_TO_AA[int(i)] for i in aatype[0]]
input_aatype_onehot = residue_constants.sequence_to_onehot(
sequence=str_form,
mapping=residue_constants.AA_TO_ID,
)
input_aatype_onehot = torch.from_numpy(input_aatype_onehot).float()
input_aatype_onehot = input_aatype_onehot.unsqueeze(0)
input_aatype_onehot = input_aatype_onehot.to(self.device)
input_aatype_onehot = torch.zeros_like(input_aatype_onehot)
input_aatype_onehot[:, :,
20] = 1 # all positions are masked (set to unknown)
scores, preds = {}, []
with torch.no_grad():
pred_seq1 = self.models[0].forward(X, seq_mask, input_aatype_onehot)
pred_seq2 = self.models[1].forward(X, seq_mask, input_aatype_onehot)
pred_seq3 = self.models[2].forward(X, seq_mask, input_aatype_onehot)
pred_seq = (pred_seq1 + pred_seq2 + pred_seq3) / 3 # ensemble
pred_seq = pred_seq / temperature
pred_seq = torch.nn.functional.softmax(pred_seq, dim=-1)
pred_seq = pred_seq[seq_mask]
if fasta_file is not None:
|
def score(self, pdb_file, chain_id, fasta_file, save_indiv_neg_pll):
temperature = 1.0
seq_mask, aatype, X = get_inference_inputs(pdb_file, chain_id)
seq_mask = seq_mask.to(self.device)
aatype = aatype.to(self.device)
X = X.to(self.device)
str_form = [residue_constants.ID_TO_AA[int(i)] for i in aatype[0]]
input_aatype_onehot = residue_constants.sequence_to_onehot(
sequence=str_form,
mapping=residue_constants.AA_TO_ID,
)
input_aatype_onehot = torch.from_numpy(input_aatype_onehot).float()
input_aatype_onehot = input_aatype_onehot.unsqueeze(0)
input_aatype_onehot = input_aatype_onehot.to(self.device)
input_aatype_onehot = torch.zeros_like(input_aatype_onehot)
input_aatype_onehot[:, :,
20] = 1 # all positions are masked (set to unknown)
scores, preds = {}, []
with torch.no_grad():
pred_seq1 = self.models[0].forward(X, seq_mask, input_aatype_onehot)
pred_seq2 = self.models[1].forward(X, seq_mask, input_aatype_onehot)
pred_seq3 = self.models[2].forward(X, seq_mask, input_aatype_onehot)
pred_seq = (pred_seq1 + pred_seq2 + pred_seq3) / 3 # ensemble
pred_seq = pred_seq / temperature
pred_seq = torch.nn.functional.softmax(pred_seq, dim=-1)
pred_seq = pred_seq[seq_mask]
if fasta_file is not None: | input_seqs = read_fasta_file(fasta_file) | 2 | 2023-12-25 09:29:36+00:00 | 2k |
davep/oshit | oshit/app/oshit.py | [
{
"identifier": "load_configuration",
"path": "oshit/app/data/config.py",
"snippet": "@lru_cache(maxsize=None)\ndef load_configuration() -> Configuration:\n \"\"\"Load the configuration.\n\n Returns:\n The configuration.\n\n Note:\n As a side-effect, if the configuration doesn't exist a default one\n will be saved to storage.\n\n This function is designed so that it's safe and low-cost to\n repeatedly call it. The configuration is cached and will only be\n loaded from storage when necessary.\n \"\"\"\n source = configuration_file()\n return (\n Configuration(**loads(source.read_text(encoding=\"utf-8\")))\n if source.exists()\n else save_configuration(Configuration())\n )"
},
{
"identifier": "save_configuration",
"path": "oshit/app/data/config.py",
"snippet": "def save_configuration(configuration: Configuration) -> Configuration:\n \"\"\"Save the given configuration.\n\n Args:\n The configuration to store.\n\n Returns:\n The configuration.\n \"\"\"\n load_configuration.cache_clear()\n configuration_file().write_text(\n dumps(asdict(configuration), indent=4), encoding=\"utf-8\"\n )\n return load_configuration()"
},
{
"identifier": "Main",
"path": "oshit/app/screens/main.py",
"snippet": "class Main(Screen[None]):\n \"\"\"The main screen of the application.\"\"\"\n\n CONTEXT_HELP = \"\"\"\n ## Application keys\n\n | Key | Description |\n | - | - |\n | <kbd>F1</kbd> | This help screen. |\n | <kbd>F2</kbd> | Toggle compact/relaxed display. |\n | <kbd>F3</kbd> | Toggle dark/light mode. |\n | <kbd>F12</kbd> | Quit the application. |\n | <kbd>t</kbd> | View the top stories. |\n | <kbd>n</kbd> | View the new stories. |\n | <kbd>b</kbd> | View the best stories. |\n | <kbd>a</kbd> | View the AskHN stories. |\n | <kbd>s</kbd> | View the ShowHN stories. |\n | <kbd>j</kbd> | View the jobs. |\n \"\"\"\n\n CSS = \"\"\"\n TabbedContent, LoadingIndicator {\n background: $panel;\n }\n \"\"\"\n\n TITLE = f\"Orange Site Hit v{__version__}\"\n\n BINDINGS = [\n Binding(\"f1\", \"help\", \"Help\"),\n Binding(\"f2\", \"compact\", \"Compact/Relaxed\"),\n Binding(\"f3\", \"toggle_dark\"),\n Binding(\"f12\", \"quit\", \"Quit\"),\n Binding(\"t\", \"go('top')\"),\n Binding(\"n\", \"go('new')\"),\n Binding(\"b\", \"go('best')\"),\n Binding(\"a\", \"go('ask')\"),\n Binding(\"s\", \"go('show')\"),\n Binding(\"j\", \"go('jobs')\"),\n Binding(\"down, enter\", \"pane\"),\n ]\n\n def __init__(self) -> None:\n \"\"\"Initialise the screen.\"\"\"\n super().__init__()\n config = load_configuration()\n self._hn = HN(\n max_concurrency=config.maximum_concurrency,\n timeout=config.connection_timeout,\n )\n \"\"\"The HackerNews client object.\"\"\"\n\n def compose(self) -> ComposeResult:\n \"\"\"Compose the main screen's layout.\"\"\"\n yield Header()\n with HackerNews():\n yield Items(\"top\", \"t\", self._hn.top_stories)\n yield Items(\"new\", \"n\", self._hn.new_stories)\n yield Items(\"best\", \"b\", self._hn.best_stories)\n yield Items(\"ask\", \"a\", self._hn.latest_ask_stories)\n yield Items(\"show\", \"s\", self._hn.latest_show_stories)\n yield Items(\"jobs\", \"j\", self._hn.latest_job_stories)\n yield Footer()\n\n def _refresh_subtitle(self) -> None:\n \"\"\"Refresh the subtitle of the screen.\"\"\"\n self.sub_title = self.query_one(HackerNews).description\n\n def on_mount(self) -> None:\n \"\"\"Configure things once the DOM is ready.\"\"\"\n self.set_interval(0.95, self._refresh_subtitle)\n\n def action_help(self) -> None:\n \"\"\"Show the help screen.\"\"\"\n self.app.push_screen(Help(self))\n\n def action_go(self, items: str) -> None:\n \"\"\"Go to the given list of items.\n\n Args:\n items: The name of the list of items to go to.\n \"\"\"\n self.query_one(HackerNews).active = items\n self.query_one(HackerNews).focus_active_pane()\n\n def action_compact(self) -> None:\n \"\"\"Toggle the compact display.\"\"\"\n news = self.query_one(HackerNews)\n news.compact = not news.compact\n\n @on(ShowUser)\n def show_user(self, event: ShowUser) -> None:\n \"\"\"Handle a request to show the details of a user.\"\"\"\n self.app.push_screen(UserDetails(self._hn, event.user))\n\n @on(ShowComments)\n def show_comments(self, event: ShowComments) -> None:\n \"\"\"Handle a request to show the comments for an article.\"\"\"\n self.app.push_screen(Comments(self._hn, event.article))"
}
] | from textual.app import App
from .data import load_configuration, save_configuration
from .screens import Main | 1,359 | """The main application class."""
##############################################################################
# Textual imports.
##############################################################################
# Local imports.
##############################################################################
class OSHit(App[None]):
"""The Orange Site Hit application."""
ENABLE_COMMAND_PALETTE = False
def __init__(self) -> None:
"""Initialise the application."""
super().__init__()
self.dark = load_configuration().dark_mode
def on_mount(self) -> None:
"""Get things going once the app is up and running."""
| """The main application class."""
##############################################################################
# Textual imports.
##############################################################################
# Local imports.
##############################################################################
class OSHit(App[None]):
"""The Orange Site Hit application."""
ENABLE_COMMAND_PALETTE = False
def __init__(self) -> None:
"""Initialise the application."""
super().__init__()
self.dark = load_configuration().dark_mode
def on_mount(self) -> None:
"""Get things going once the app is up and running.""" | self.push_screen(Main()) | 2 | 2023-12-25 14:06:07+00:00 | 2k |
Maximilian-Winter/llama-cpp-agent | src/llama_cpp_agent/agent_memory/memory_tools.py | [
{
"identifier": "LlamaCppFunctionTool",
"path": "src/llama_cpp_agent/function_calling.py",
"snippet": "class LlamaCppFunctionTool:\n def __init__(self, pydantic_model: Type[BaseModel], has_markdown_code_block=False, has_triple_quoted_string=False,\n **additional_parameters):\n self.model = pydantic_model\n self.look_for_field_string = has_markdown_code_block or has_triple_quoted_string\n self.has_markdown_code_block = has_markdown_code_block\n self.has_triple_quoted_string = has_triple_quoted_string\n self.additional_parameters = additional_parameters if additional_parameters else {}\n\n def __call__(self, *args, **kwargs):\n return self.model(**kwargs)"
},
{
"identifier": "CoreMemoryManager",
"path": "src/llama_cpp_agent/agent_memory/core_memory_manager.py",
"snippet": "class CoreMemoryManager:\n def __init__(self, core_memory: dict):\n self.core_memory = core_memory\n\n def add_to_core_memory(self, key: str, child_key: str, value) -> str:\n \"\"\"\n Adds or updates an entry in the core memory.\n \"\"\"\n if key not in self.core_memory:\n self.core_memory[key] = {}\n self.core_memory[key][child_key] = value\n return f\"Core memory updated. Key: {key}, Child Key: {child_key}\"\n\n def replace_in_core_memory(self, key: str, child_key: str, new_value) -> str:\n \"\"\"\n Replaces an existing entry in the core memory.\n \"\"\"\n if key in self.core_memory and child_key in self.core_memory[key]:\n self.core_memory[key][child_key] = new_value\n return f\"Core memory replaced. Key: {key}, Child Key: {child_key}\"\n else:\n return \"Key or child key not found in core memory.\"\n\n def remove_from_core_memory(self, key: str, child_key: str) -> str:\n \"\"\"\n Removes a specific field from a core memory entry.\n \"\"\"\n if key in self.core_memory and child_key in self.core_memory[key]:\n del self.core_memory[key][child_key]\n return f\"Core memory entry removed. Key: {key}, Child Key: {child_key}\"\n else:\n return \"Key or child key not found in core memory.\"\n\n def build_core_memory_context(self):\n output = json.dumps(self.core_memory, indent=4)\n context = f\"# Core-Memory:\\n{output if output != '{}' else 'Empty'}\"\n return context\n\n def load(self, file_path):\n with open(file_path, 'r', encoding='utf-8') as file:\n self.core_memory = json.load(file)\n\n def save(self, file_path):\n with open(file_path, 'w', encoding='utf-8') as file:\n json.dump(self.core_memory, file, indent=4)"
},
{
"identifier": "RetrievalMemoryManager",
"path": "src/llama_cpp_agent/agent_memory/retrieval_memory_manager.py",
"snippet": "class RetrievalMemoryManager:\n def __init__(self, retrieval_memory: RetrievalMemory):\n def add_memory_to_retrieval(self, description: str, importance: float = 1.0) -> str:\n def retrieve_memories(self, query: str, max_results: int = 5) -> str:"
}
] | from pydantic import BaseModel, Field
from ..function_calling import LlamaCppFunctionTool
from .core_memory_manager import CoreMemoryManager
from .retrieval_memory_manager import RetrievalMemoryManager, RetrievalMemory | 1,362 |
class AddCoreMemory(BaseModel):
"""
Add a new entry to the core memory.
"""
key: str = Field(..., description="The key identifier for the core memory entry.")
field: str = Field(..., description="A secondary key or field within the core memory entry.")
value: str = Field(..., description="The value or data to be stored in the specified core memory entry.")
def run(self, core_memory_manager: CoreMemoryManager):
return core_memory_manager.add_to_core_memory(self.key, self.field, self.value)
# Replace Core Memory Model
class ReplaceCoreMemory(BaseModel):
"""
Replace an entry in the core memory.
"""
key: str = Field(..., description="The key identifier for the core memory entry.")
field: str = Field(..., description="The specific field within the core memory entry to be replaced.")
new_value: str = Field(...,
description="The new value to replace the existing data in the specified core memory field.")
def run(self, core_memory_manager: CoreMemoryManager):
return core_memory_manager.replace_in_core_memory(self.key, self.field, self.value)
class RemoveCoreMemory(BaseModel):
"""
Remove an entry in the core memory.
"""
key: str = Field(..., description="The key identifier for the core memory entry to be removed.")
field: str = Field(..., description="The specific field within the core memory entry to be removed.")
def run(self, core_memory_manager: CoreMemoryManager):
return core_memory_manager.remove_from_core_memory(self.key, self.field)
class RetrieveMemories(BaseModel):
"""
Retrieve memories from the retrieval memory based on a query.
"""
query: str = Field(..., description="The query to be used to retrieve memories from the retrieval memory.")
def run(self, retrieval_memory_manager: RetrievalMemoryManager):
return retrieval_memory_manager.retrieve_memories(self.query)
class AddRetrievalMemory(BaseModel):
"""
Add memory to the retrieval memory.
"""
memory: str = Field(..., description="The memory to be added to the retrieval memory.")
importance: float = Field(..., description="The importance of the memory to be added to the retrieval memory.")
def run(self, retrieval_memory_manager: RetrievalMemoryManager):
return retrieval_memory_manager.add_memory_to_retrieval(self.memory, self.importance)
class AgentRetrievalMemory:
def __init__(self, persistent_db_path="./retrieval_memory", embedding_model_name="all-MiniLM-L6-v2",
collection_name="retrieval_memory_collection"):
|
class AddCoreMemory(BaseModel):
"""
Add a new entry to the core memory.
"""
key: str = Field(..., description="The key identifier for the core memory entry.")
field: str = Field(..., description="A secondary key or field within the core memory entry.")
value: str = Field(..., description="The value or data to be stored in the specified core memory entry.")
def run(self, core_memory_manager: CoreMemoryManager):
return core_memory_manager.add_to_core_memory(self.key, self.field, self.value)
# Replace Core Memory Model
class ReplaceCoreMemory(BaseModel):
"""
Replace an entry in the core memory.
"""
key: str = Field(..., description="The key identifier for the core memory entry.")
field: str = Field(..., description="The specific field within the core memory entry to be replaced.")
new_value: str = Field(...,
description="The new value to replace the existing data in the specified core memory field.")
def run(self, core_memory_manager: CoreMemoryManager):
return core_memory_manager.replace_in_core_memory(self.key, self.field, self.value)
class RemoveCoreMemory(BaseModel):
"""
Remove an entry in the core memory.
"""
key: str = Field(..., description="The key identifier for the core memory entry to be removed.")
field: str = Field(..., description="The specific field within the core memory entry to be removed.")
def run(self, core_memory_manager: CoreMemoryManager):
return core_memory_manager.remove_from_core_memory(self.key, self.field)
class RetrieveMemories(BaseModel):
"""
Retrieve memories from the retrieval memory based on a query.
"""
query: str = Field(..., description="The query to be used to retrieve memories from the retrieval memory.")
def run(self, retrieval_memory_manager: RetrievalMemoryManager):
return retrieval_memory_manager.retrieve_memories(self.query)
class AddRetrievalMemory(BaseModel):
"""
Add memory to the retrieval memory.
"""
memory: str = Field(..., description="The memory to be added to the retrieval memory.")
importance: float = Field(..., description="The importance of the memory to be added to the retrieval memory.")
def run(self, retrieval_memory_manager: RetrievalMemoryManager):
return retrieval_memory_manager.add_memory_to_retrieval(self.memory, self.importance)
class AgentRetrievalMemory:
def __init__(self, persistent_db_path="./retrieval_memory", embedding_model_name="all-MiniLM-L6-v2",
collection_name="retrieval_memory_collection"): | self.retrieval_memory = RetrievalMemory(persistent_db_path, embedding_model_name, collection_name) | 2 | 2023-12-29 16:54:39+00:00 | 2k |
tedivm/paracelsus | paracelsus/cli.py | [
{
"identifier": "Dot",
"path": "paracelsus/transformers/dot.py",
"snippet": "class Dot:\n comment_format: str = \"dot\"\n metadata: MetaData\n graph: pydot.Dot\n\n def __init__(self, metaclass: MetaData) -> None:\n self.metadata = metaclass\n self.graph = pydot.Dot(\"database\", graph_type=\"graph\")\n\n for table in self.metadata.tables.values():\n node = pydot.Node(name=table.name)\n node.set_label(self._table_label(table))\n node.set_shape(\"none\")\n node.set_margin(\"0\")\n self.graph.add_node(node)\n for column in table.columns:\n for foreign_key in column.foreign_keys:\n key_parts = foreign_key.target_fullname.split(\".\")\n left_table = key_parts[0]\n left_column = key_parts[1]\n edge = pydot.Edge(left_table, table.name)\n edge.set_label(column.name)\n edge.set_dir(\"both\")\n\n edge.set_arrowhead(\"none\")\n if not column.unique:\n edge.set_arrowhead(\"crow\")\n\n l_column = self.metadata.tables[left_table].columns[left_column]\n edge.set_arrowtail(\"none\")\n if not l_column.unique and not l_column.primary_key:\n edge.set_arrowtail(\"crow\")\n\n self.graph.add_edge(edge)\n\n def _table_label(self, table: Table) -> str:\n column_output = \"\"\n columns = sorted(table.columns, key=utils.column_sort_key)\n for column in columns:\n attributes = set([])\n if column.primary_key:\n attributes.add(\"Primary Key\")\n\n if len(column.foreign_keys) > 0:\n attributes.add(\"Foreign Key\")\n\n if column.unique:\n attributes.add(\"Unique\")\n\n column_output += f' <tr><td align=\"left\">{column.type}</td><td align=\"left\">{column.name}</td><td>{\", \".join(sorted(attributes))}</td></tr>\\n'\n\n return f\"\"\"<\n <table border=\"0\" cellborder=\"1\" cellspacing=\"0\" cellpadding=\"4\">\n <tr><td colspan=\"3\" bgcolor=\"lightblue\"><b>{table.name}</b></td></tr>\n{column_output.rstrip()}\n </table>\n>\"\"\"\n\n def __str__(self) -> str:\n return self.graph.to_string()"
},
{
"identifier": "Mermaid",
"path": "paracelsus/transformers/mermaid.py",
"snippet": "class Mermaid:\n comment_format: str = \"mermaid\"\n metadata: MetaData\n\n def __init__(self, metaclass: MetaData) -> None:\n self.metadata = metaclass\n\n def _table(self, table: Table) -> str:\n output = f\"\\t{table.name}\"\n output += \" {\\n\"\n columns = sorted(table.columns, key=utils.column_sort_key)\n for column in columns:\n output += self._column(column)\n output += \"\\t}\\n\\n\"\n return output\n\n def _column(self, column: Column) -> str:\n column_str = f\"{column.type} {column.name}\"\n\n if column.primary_key:\n if len(column.foreign_keys) > 0:\n column_str += \" PK,FK\"\n else:\n column_str += \" PK\"\n elif len(column.foreign_keys) > 0:\n column_str += \" FK\"\n\n options = []\n\n if column.nullable:\n options.append(\"nullable\")\n\n if column.unique:\n options.append(\"unique\")\n\n if column.index:\n options.append(\"indexed\")\n\n if len(options) > 0:\n column_str += f' \"{\",\".join(options)}\"'\n\n return f\"\\t\\t{column_str}\\n\"\n\n def _relationships(self, column: Column) -> str:\n output = \"\"\n\n column_name = column.name\n right_table = column.table.name\n\n if column.unique:\n right_operand = \"o|\"\n else:\n right_operand = \"o{\"\n\n for foreign_key in column.foreign_keys:\n key_parts = foreign_key.target_fullname.split(\".\")\n left_table = key_parts[0]\n left_column = key_parts[1]\n left_operand = \"\"\n\n lcolumn = self.metadata.tables[left_table].columns[left_column]\n if lcolumn.unique or lcolumn.primary_key:\n left_operand = \"||\"\n else:\n left_operand = \"}o\"\n\n output += f\"\\t{left_table} {left_operand}--{right_operand} {right_table} : {column_name}\\n\"\n return output\n\n def __str__(self) -> str:\n output = \"erDiagram\\n\"\n for table in self.metadata.tables.values():\n output += self._table(table)\n\n for table in self.metadata.tables.values():\n for column in table.columns.values():\n if len(column.foreign_keys) > 0:\n output += self._relationships(column)\n\n return output"
}
] | import importlib
import re
import sys
import typer
from enum import Enum
from pathlib import Path
from typing import List
from typing_extensions import Annotated
from .transformers.dot import Dot
from .transformers.mermaid import Mermaid
from . import _version | 1,289 |
app = typer.Typer()
transformers = {
"mmd": Mermaid,
"mermaid": Mermaid,
|
app = typer.Typer()
transformers = {
"mmd": Mermaid,
"mermaid": Mermaid, | "dot": Dot, | 0 | 2023-12-29 22:13:23+00:00 | 2k |
winniesi/tg-gemini-bot | api/handle.py | [
{
"identifier": "is_authorized",
"path": "api/auth.py",
"snippet": "def is_authorized(from_id: int, user_name: str) -> bool:\n if str(user_name) in ALLOWED_USERS:\n return True\n return False"
},
{
"identifier": "ChatManager",
"path": "api/context.py",
"snippet": "class ChatManager:\n \"\"\"setting up a basic conversation storage manager\"\"\"\n\n def __init__(self):\n self.chats: Dict[str, ChatConversation] = {}\n\n def _new_chat(self, username: str) -> ChatConversation:\n chat = ChatConversation()\n self.chats[username] = chat\n return chat\n\n def get_chat(self, username: str) -> ChatConversation:\n if self.chats.get(username) is None:\n return self._new_chat(username)\n return self.chats[username]"
},
{
"identifier": "ImageChatManger",
"path": "api/context.py",
"snippet": "class ImageChatManger:\n def __init__(self, prompt, file_id: str) -> None:\n self.prompt = prompt\n self.file_id = file_id\n\n def tel_photo_url(self) -> str:\n \"\"\"process telegram photo url\"\"\"\n r_file_id = requests.get(\n f\"https://api.telegram.org/bot{BOT_TOKEN}/getFile?file_id={self.file_id}\"\n )\n file_path = r_file_id.json().get(\"result\").get(\"file_path\")\n download_url = f\"https://api.telegram.org/file/bot{BOT_TOKEN}/{file_path}\"\n return download_url\n\n def photo_bytes(self) -> BytesIO:\n \"\"\"get photo bytes\"\"\"\n photo_url = self.tel_photo_url()\n response = requests.get(photo_url)\n photo_bytes = BytesIO(response.content)\n return photo_bytes\n\n def send_image(self) -> str:\n response = generate_text_with_image(self.prompt, self.photo_bytes())\n return response"
},
{
"identifier": "Update",
"path": "api/telegram.py",
"snippet": "class Update:\n def __init__(self, update: Dict) -> None:\n self.update = update\n self.from_id = update[\"message\"][\"from\"][\"id\"]\n self.type = self._type()\n self.text = self._text()\n self.photo_caption = self._photo_caption()\n self.file_id = self._file_id()\n self.user_name = update[\"message\"][\"from\"][\"username\"]\n\n def _type(self):\n if \"text\" in self.update[\"message\"]:\n return \"text\"\n elif \"photo\" in self.update[\"message\"]:\n return \"photo\"\n else:\n return \"\"\n\n def _photo_caption(self):\n if self.type == \"photo\":\n return self.update[\"message\"].get(\"caption\", \"describe the photo\")\n return \"\"\n\n def _text(self):\n if self.type == \"text\":\n return self.update[\"message\"][\"text\"]\n return \"\"\n\n def _file_id(self):\n if self.type == \"photo\":\n return self.update[\"message\"][\"photo\"][0][\"file_id\"]\n return \"\""
},
{
"identifier": "send_message",
"path": "api/telegram.py",
"snippet": "def send_message(chat_id, text):\n \"\"\"send text message\"\"\"\n payload = {\n \"chat_id\": chat_id,\n \"text\": escape(text),\n \"parse_mode\": \"MarkdownV2\",\n }\n r = requests.post(f\"{TELEGRAM_API}/sendMessage\", data=payload)\n print(f\"Sent message: {text} to {chat_id}\")\n return r"
}
] | from .auth import is_authorized
from .context import ChatManager, ImageChatManger
from .telegram import Update, send_message | 971 | """
All the chat that comes through the Telegram bot gets passed to the
handle_message function. This function checks out if the user has the
green light to chat with the bot. Once that's sorted, it figures out if
the user sent words or an image and deals with it accordingly.
For text messages, it fires up the ChatManager class that keeps track of
the back-and-forth with that user.
As for images, in Gemini pro, they're context-free, so you can handle
them pretty straight-up without much fuss.
"""
chat_manager = ChatManager()
def handle_message(update_data):
update = Update(update_data)
authorized = is_authorized(update.from_id, update.user_name)
if not authorized:
| """
All the chat that comes through the Telegram bot gets passed to the
handle_message function. This function checks out if the user has the
green light to chat with the bot. Once that's sorted, it figures out if
the user sent words or an image and deals with it accordingly.
For text messages, it fires up the ChatManager class that keeps track of
the back-and-forth with that user.
As for images, in Gemini pro, they're context-free, so you can handle
them pretty straight-up without much fuss.
"""
chat_manager = ChatManager()
def handle_message(update_data):
update = Update(update_data)
authorized = is_authorized(update.from_id, update.user_name)
if not authorized: | send_message(update.from_id, "😫 You are not allowed to use this bot.") | 4 | 2023-12-25 03:27:43+00:00 | 2k |
usail-hkust/LLMTSCS | run_advanced_maxpressure.py | [
{
"identifier": "oneline_wrapper",
"path": "utils/utils.py",
"snippet": "def oneline_wrapper(dic_agent_conf, dic_traffic_env_conf, dic_path, roadnet, trafficflow):\n results_table = []\n all_rewards = []\n all_queue_len = []\n all_travel_time = []\n for i in range(1):\n dic_path[\"PATH_TO_MODEL\"] = (dic_path[\"PATH_TO_MODEL\"].split(\".\")[0] + \".json\" +\n time.strftime('%m_%d_%H_%M_%S', time.localtime(time.time())))\n dic_path[\"PATH_TO_WORK_DIRECTORY\"] = (dic_path[\"PATH_TO_WORK_DIRECTORY\"].split(\".\")[0] + \".json\" +\n time.strftime('%m_%d_%H_%M_%S', time.localtime(time.time())))\n oneline = OneLine(dic_agent_conf=dic_agent_conf,\n dic_traffic_env_conf=merge(config.dic_traffic_env_conf, dic_traffic_env_conf),\n dic_path=merge(config.DIC_PATH, dic_path),\n roadnet=roadnet,\n trafficflow=trafficflow\n )\n round_results = oneline.train(round=i)\n results_table.append([round_results['test_reward_over'], round_results['test_avg_queue_len_over'],\n round_results['test_avg_travel_time_over']])\n all_rewards.append(round_results['test_reward_over'])\n all_queue_len.append(round_results['test_avg_queue_len_over'])\n all_travel_time.append(round_results['test_avg_travel_time_over'])\n\n # delete junk\n cmd_delete_model = 'rm -rf <dir>'.replace(\"<dir>\", dic_path[\"PATH_TO_MODEL\"])\n cmd_delete_work = 'find <dir> -type f ! -name \"state_action.json\" -exec rm -rf {} \\;'.replace(\"<dir>\", dic_path[\"PATH_TO_WORK_DIRECTORY\"])\n os.system(cmd_delete_model)\n os.system(cmd_delete_work)\n\n results_table.append([np.average(all_rewards), np.average(all_queue_len), np.average(all_travel_time)])\n results_table.append([np.std(all_rewards), np.std(all_queue_len), np.std(all_travel_time)])\n\n table_logger = wandb.init(\n project=dic_traffic_env_conf['PROJECT_NAME'],\n group=f\"{dic_traffic_env_conf['MODEL_NAME']}-{roadnet}-{trafficflow}-{len(dic_agent_conf['FIXED_TIME'])}_Phases\",\n name=\"exp_results\",\n config=merge(merge(dic_agent_conf, dic_path), dic_traffic_env_conf),\n )\n columns = [\"reward\", \"avg_queue_len\", \"avg_travel_time\"]\n logger_table = wandb.Table(columns=columns, data=results_table)\n table_logger.log({\"results\": logger_table})\n wandb.finish()\n\n return"
},
{
"identifier": "error",
"path": "utils/error.py",
"snippet": "class flowFileException(Exception):\n def __init__(self, message):\n def __str__(self):"
}
] | from utils.utils import oneline_wrapper
from utils import error
from multiprocessing import Process
import os
import time
import argparse | 1,154 |
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--memo", type=str, default='AdvancedMaxPressure')
parser.add_argument("--model", type=str, default="AdvancedMaxPressure")
parser.add_argument("--proj_name", type=str, default="chatgpt-TSCS")
parser.add_argument("--eightphase", action="store_true", default=False)
parser.add_argument("--multi_process", action="store_true", default=True)
parser.add_argument("--workers", type=int, default=1)
parser.add_argument("--dataset", type=str, default="template")
parser.add_argument("--traffic_file", type=str, default="flow_main_stream.json")
return parser.parse_args()
def main(in_args):
traffic_file_list = []
if in_args.dataset == 'jinan':
count = 3600
road_net = "3_4"
traffic_file_list = ["anon_3_4_jinan_real.json", "anon_3_4_jinan_real_2000.json", "anon_3_4_jinan_real_2500.json"]
template = "Jinan"
elif in_args.dataset == 'hangzhou':
count = 3600
road_net = "4_4"
traffic_file_list = ["anon_4_4_hangzhou_real.json", "anon_4_4_hangzhou_real_5816.json"]
template = "Hangzhou"
elif in_args.dataset == 'newyork_16x3':
count = 3600
road_net = "16_3"
traffic_file_list = ["anon_16_3_newyork_real.json"]
template = "NewYork"
elif in_args.dataset == 'newyork_28x7':
count = 3600
road_net = "28_7"
traffic_file_list = ["anon_28_7_newyork_real_double.json", "anon_28_7_newyork_real_triple.json"]
template = "NewYork"
elif in_args.dataset == 'template':
count = 3600
road_net = "1_1"
traffic_file_list = ["flow_main_stream.json"]
template = "template"
# flow_file error
try:
if in_args.traffic_file not in traffic_file_list:
|
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--memo", type=str, default='AdvancedMaxPressure')
parser.add_argument("--model", type=str, default="AdvancedMaxPressure")
parser.add_argument("--proj_name", type=str, default="chatgpt-TSCS")
parser.add_argument("--eightphase", action="store_true", default=False)
parser.add_argument("--multi_process", action="store_true", default=True)
parser.add_argument("--workers", type=int, default=1)
parser.add_argument("--dataset", type=str, default="template")
parser.add_argument("--traffic_file", type=str, default="flow_main_stream.json")
return parser.parse_args()
def main(in_args):
traffic_file_list = []
if in_args.dataset == 'jinan':
count = 3600
road_net = "3_4"
traffic_file_list = ["anon_3_4_jinan_real.json", "anon_3_4_jinan_real_2000.json", "anon_3_4_jinan_real_2500.json"]
template = "Jinan"
elif in_args.dataset == 'hangzhou':
count = 3600
road_net = "4_4"
traffic_file_list = ["anon_4_4_hangzhou_real.json", "anon_4_4_hangzhou_real_5816.json"]
template = "Hangzhou"
elif in_args.dataset == 'newyork_16x3':
count = 3600
road_net = "16_3"
traffic_file_list = ["anon_16_3_newyork_real.json"]
template = "NewYork"
elif in_args.dataset == 'newyork_28x7':
count = 3600
road_net = "28_7"
traffic_file_list = ["anon_28_7_newyork_real_double.json", "anon_28_7_newyork_real_triple.json"]
template = "NewYork"
elif in_args.dataset == 'template':
count = 3600
road_net = "1_1"
traffic_file_list = ["flow_main_stream.json"]
template = "template"
# flow_file error
try:
if in_args.traffic_file not in traffic_file_list: | raise error.flowFileException('Flow file does not exist.') | 1 | 2023-12-26 08:31:47+00:00 | 2k |
ohadmata/shmessy | src/shmessy/types/unix_timestamp.py | [
{
"identifier": "InferredField",
"path": "src/shmessy/schema.py",
"snippet": "class InferredField(BaseModel):\n inferred_type: Optional[str] = None\n inferred_pattern: Optional[Any] = None"
},
{
"identifier": "ValidatorTypes",
"path": "src/shmessy/schema.py",
"snippet": "class ValidatorTypes(str, Enum):\n NUMERIC = \"NUMERIC\"\n STRING = \"STRING\""
},
{
"identifier": "BaseType",
"path": "src/shmessy/types/base.py",
"snippet": "class BaseType(ABC):\n weight: int = 0\n validator_types: Tuple[ValidatorTypes]\n\n @abstractmethod\n def validate(self, data: ndarray) -> Optional[InferredField]:\n pass\n\n @abstractmethod\n def fix(self, column: Series, inferred_field: InferredField) -> Series:\n pass\n\n def is_validator_type_valid(self, dtype: Type) -> bool:\n for possible_validator_type in self.validator_types:\n if self._check_single_validator_type(dtype, possible_validator_type):\n return True\n return False\n\n @staticmethod\n def _check_single_validator_type(\n dtype: Type, possible_validator_type: ValidatorTypes\n ) -> bool:\n if possible_validator_type == ValidatorTypes.NUMERIC and not issubdtype(\n dtype, number\n ):\n return False\n\n if possible_validator_type == ValidatorTypes.STRING and not (\n issubdtype(dtype, object_) or issubdtype(dtype, str_)\n ):\n return False\n return True\n\n @property\n def name(self) -> str:\n return str(self.__class__.__name__.replace(\"Type\", \"\"))"
}
] | import logging
import math
from datetime import datetime
from enum import Enum
from typing import Optional
from numpy import ndarray
from pandas import Series, to_datetime
from ..schema import InferredField, ValidatorTypes
from .base import BaseType | 669 |
logger = logging.getLogger(__name__)
class TimestampResolution(str, Enum):
SECONDS = "s"
MILLISECONDS = "ms"
NANOSECONDS = "ns"
class UnixTimestampType(BaseType):
weight = 4
validator_types = (ValidatorTypes.NUMERIC,)
min_valid_year: int = 1980
max_valid_year: int = 2100
@staticmethod
def _unix_timestamp_resolution(value: float) -> TimestampResolution:
number_of_digits = len(str(int(value)))
if number_of_digits == 10:
return TimestampResolution.SECONDS
if number_of_digits == 13:
return TimestampResolution.MILLISECONDS
if number_of_digits == 16:
return TimestampResolution.NANOSECONDS
@staticmethod
def _fix_input_resolution(
value: float, selected_resolution: TimestampResolution
) -> float:
if selected_resolution == TimestampResolution.SECONDS:
return value
if selected_resolution == TimestampResolution.MILLISECONDS:
return value / 1000
if selected_resolution == TimestampResolution.NANOSECONDS:
return value / 1000 / 1000
|
logger = logging.getLogger(__name__)
class TimestampResolution(str, Enum):
SECONDS = "s"
MILLISECONDS = "ms"
NANOSECONDS = "ns"
class UnixTimestampType(BaseType):
weight = 4
validator_types = (ValidatorTypes.NUMERIC,)
min_valid_year: int = 1980
max_valid_year: int = 2100
@staticmethod
def _unix_timestamp_resolution(value: float) -> TimestampResolution:
number_of_digits = len(str(int(value)))
if number_of_digits == 10:
return TimestampResolution.SECONDS
if number_of_digits == 13:
return TimestampResolution.MILLISECONDS
if number_of_digits == 16:
return TimestampResolution.NANOSECONDS
@staticmethod
def _fix_input_resolution(
value: float, selected_resolution: TimestampResolution
) -> float:
if selected_resolution == TimestampResolution.SECONDS:
return value
if selected_resolution == TimestampResolution.MILLISECONDS:
return value / 1000
if selected_resolution == TimestampResolution.NANOSECONDS:
return value / 1000 / 1000
| def validate(self, data: ndarray) -> Optional[InferredField]: | 0 | 2023-12-27 20:15:01+00:00 | 2k |
kokiez/solana-sniper | monitor_price_strategy.py | [
{
"identifier": "get_price",
"path": "birdeye.py",
"snippet": "def get_price(token_address):\r\n url = f\"https://api.dexscreener.com/latest/dex/tokens/{token_address}\"\r\n exclude = ['EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v', 'Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB']\r\n response = requests.get(url).json()\r\n \r\n if token_address not in exclude:\r\n for pair in response['pairs']:\r\n if pair['quoteToken']['address'] == 'So11111111111111111111111111111111111111112':\r\n return float(pair['priceUsd'])\r\n else:\r\n return response['pairs'][0]['priceUsd']\r\n return None\r"
},
{
"identifier": "getSymbol",
"path": "birdeye.py",
"snippet": "def getSymbol(token):\r\n # usdc and usdt\r\n exclude = ['EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v', 'Es9vMFrzaCERmJfrF4H2FYD4KCoNkY11McCe8BenwNYB']\r\n \r\n if token not in exclude:\r\n url = f\"https://api.dexscreener.com/latest/dex/tokens/{token}\"\r\n\r\n Token_Symbol = \"\"\r\n Sol_symbol=\"\"\r\n try:\r\n response = requests.get(url)\r\n\r\n # Check if the request was successful (status code 200)\r\n if response.status_code == 200:\r\n resp = response.json()\r\n print(\"Response:\",resp['pairs'][0]['baseToken']['symbol'])\r\n for pair in resp['pairs']:\r\n quoteToken = pair['quoteToken']['symbol']\r\n\r\n if quoteToken == 'SOL':\r\n Token_Symbol = pair['baseToken']['symbol']\r\n Sol_symbol = quoteToken\r\n return Token_Symbol, Sol_symbol\r\n\r\n\r\n else:\r\n print(f\"[getSymbol] Request failed with status code {response.status_code}\")\r\n\r\n except requests.exceptions.RequestException as e:\r\n print(f\"[getSymbol] error occurred: {e}\")\r\n except: \r\n a = 1\r\n\r\n return Token_Symbol, Sol_symbol\r\n else:\r\n if token == 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v':\r\n return \"USDC\", \"SOL\"\r\n elif token == 'EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v':\r\n return \"USDT\", \"SOL\"\r"
},
{
"identifier": "sendWebhook",
"path": "webhook.py",
"snippet": "def sendWebhook(title_type_info, description):\r\n global error_webhook\r\n global webhook_url\r\n title = \"\"\r\n title_type = title_type_info.split(\"|\")\r\n if title_type[0] == \"msg\":\r\n title = title_type[1]\r\n color = colors[\"Green\"]\r\n webhook(title, color, description, webhook_url)\r\n \r\n elif title_type[0] == \"msg_b\":\r\n title = title_type[1]\r\n color = colors[\"DarkAqua\"]\r\n webhook(title, color, description, webhook_url)\r\n\r\n elif title_type[0] == \"msg_s\":\r\n title = title_type[1]\r\n color = colors[\"DarkAqua\"]\r\n webhook(title, color, description, webhook_url)\r\n\r\n elif title_type[0] == \"i_s\": #invest or slippage was changed etc\r\n title = title_type[1]\r\n color = colors[\"DarkPurple\"]\r\n webhook(title, color, description, webhook_url)\r\n \r\n elif title_type[0] == \"e\": #error\r\n title = title_type[1]\r\n color = colors[\"DarkRed\"]\r\n webhook(title, color, description, error_webhook)\r\n\r\n elif title_type[0] == \"a\": #alert\r\n title = title_type[1]\r\n color = colors[\"LuminousVividPink\"]\r\n webhook(title, color, description, webhook_url)\r\n\r\n elif title_type[0] == \"w\": #wallet info\r\n title = title_type[1]\r\n color = colors[\"Gold\"]\r\n webhook(title, color, description, webhook_url)\r"
}
] | import time
from birdeye import get_price, getSymbol
from webhook import sendWebhook
| 1,376 |
"""If you have ton of trades then best to use Simulate Transaction and modify this part of code to your needs"""
"""
Only Take Profit
"""
def limit_order(bought_token_price,desired_token_address, take_profit_ratio, execution_time, txB):
token_symbol, SOl_Symbol = getSymbol(desired_token_address)
# CALCULATE SELL LIMIT
sell_limit_token_price = bought_token_price * take_profit_ratio
print("-" * 79)
print(f"| {'Bought Price':<12} | {'Sell Limit':<12} | {'Tx Buy':<50} |")
print("-" * 79)
print(f"|{bought_token_price:.12f} | {sell_limit_token_price:.12f} {txB:<50} |")
print("-" * 79)
sendWebhook(f"msg_b|BUY INFO {token_symbol}",f"Bought Price: {bought_token_price:.12f}\n**Sell Limit: {sell_limit_token_price:.15f}**\nTotal Buy Execution time: {execution_time} seconds\nBuy TXN: https://solscan.io/tx/{txB} |")
# LOOP = CHECK IF PRICE >= SELL LIMIT | checks price every 5 seconds
priceLow = True
# while priceLow and isTimePassed(time_limit) == False:
while priceLow:
# Check if time limit has been passed for the token bought or not
|
"""If you have ton of trades then best to use Simulate Transaction and modify this part of code to your needs"""
"""
Only Take Profit
"""
def limit_order(bought_token_price,desired_token_address, take_profit_ratio, execution_time, txB):
token_symbol, SOl_Symbol = getSymbol(desired_token_address)
# CALCULATE SELL LIMIT
sell_limit_token_price = bought_token_price * take_profit_ratio
print("-" * 79)
print(f"| {'Bought Price':<12} | {'Sell Limit':<12} | {'Tx Buy':<50} |")
print("-" * 79)
print(f"|{bought_token_price:.12f} | {sell_limit_token_price:.12f} {txB:<50} |")
print("-" * 79)
sendWebhook(f"msg_b|BUY INFO {token_symbol}",f"Bought Price: {bought_token_price:.12f}\n**Sell Limit: {sell_limit_token_price:.15f}**\nTotal Buy Execution time: {execution_time} seconds\nBuy TXN: https://solscan.io/tx/{txB} |")
# LOOP = CHECK IF PRICE >= SELL LIMIT | checks price every 5 seconds
priceLow = True
# while priceLow and isTimePassed(time_limit) == False:
while priceLow:
# Check if time limit has been passed for the token bought or not
| bought_token_curr_price = get_price(desired_token_address)
| 0 | 2023-12-26 11:40:05+00:00 | 2k |
enochyearn/MLX_RoBERTa | mlx_roberta.py | [
{
"identifier": "LayerNormBasselCorrected",
"path": "custom/nn/layers/normalization.py",
"snippet": "class LayerNormBasselCorrected(Module):\n r\"\"\"Applies layer normalization [1] on the inputs with Bessel's Correction used by default like PyTorch.\n\n Computes\n\n .. math::\n\n y = \\frac{x - E[x]}{\\sqrt{Var[x]} + \\epsilon} \\gamma + \\beta,\n\n where :math:`\\gamma` and :math:`\\beta` are learned per feature dimension\n parameters initialized at 1 and 0 respectively.\n\n Var[x] would by default apply Bessel's Correction.\n\n [1]: https://arxiv.org/abs/1607.06450\n\n Args:\n dims (int): The feature dimension of the input to normalize over\n eps (float): A small additive constant for numerical stability\n affine (bool): If True learn an affine transform to apply after the\n normalization\n correction (bool): \n \"\"\"\n\n def __init__(self, dims: int, eps: float = 1e-5, affine: bool = True, correction: bool = True):\n super().__init__()\n if affine:\n self.bias = mx.zeros((dims,))\n self.weight = mx.ones((dims,))\n self.eps = eps\n self.dims = dims\n self.correction = correction\n\n def _extra_repr(self):\n return f\"{self.dims}, eps={self.eps}, affine={'weight' in self}\"\n\n def __call__(self, x):\n means = mx.mean(x, axis=-1, keepdims=True)\n var = mx.var(x, axis=-1, keepdims=True, ddof=int(self.correction))\n x = (x - means) * mx.rsqrt(var + self.eps)\n return (self.weight * x + self.bias) if \"weight\" in self else x"
},
{
"identifier": "LayerNormTorchAlike",
"path": "custom/nn/layers/normalization.py",
"snippet": "class LayerNormTorchAlike(Module):\n r\"\"\"Applies layer normalization [1] on the inputs in PyTorch's style.\n MLX's official LayerNorm has a different behavior with PyTorch's.\n\n Computes\n\n .. math::\n\n y = \\frac{x - E[x]}{\\sqrt{Var[x]} + \\epsilon} \\gamma + \\beta,\n\n where :math:`\\gamma` and :math:`\\beta` are learned per feature dimension\n parameters initialized at 1 and 0 respectively.\n\n Var[x] would by default apply Bessel's Correction.\n\n [1]: https://arxiv.org/abs/1607.06450\n\n Args:\n dims (int): The feature dimension of the input to normalize over\n eps (float): A small additive constant for numerical stability\n affine (bool): If True learn an affine transform to apply after the\n normalization\n correction (bool): \n \"\"\"\n\n def __init__(self, dims: int, eps: float = 1e-5, affine: bool = True, correction: bool = True):\n super().__init__()\n if affine:\n self.bias = mx.zeros((dims,))\n self.weight = mx.ones((dims,))\n self.eps = eps\n self.dims = dims\n self.correction = correction\n\n def _extra_repr(self):\n return f\"{self.dims}, eps={self.eps}, affine={'weight' in self}\"\n\n def __call__(self, x):\n # Calculate the mean of all elements;\n # i.e. the means for each element $\\mathbb{E}[X]$\n mean = x.mean(axis=-1, keepdims=True)\n # Calculate the squared mean of all elements;\n # i.e. the means for each element $\\mathbb{E}[X^2]$\n mean_x2 = (x ** 2).mean(axis=-1, keepdims=True)\n # Variance of all element $Var[X] = \\mathbb{E}[X^2] - \\mathbb{E}[X]^2$\n var = mean_x2 - mean ** 2\n\n # Normalize $$\\hat{X} = \\frac{X - \\mathbb{E}[X]}{\\sqrt{Var[X] + \\epsilon}}$$\n x_norm = (x - mean) / mx.sqrt(var + self.eps)\n # Scale and shift $$\\text{LN}(x) = \\gamma \\hat{X} + \\beta$$ \n x_norm = self.weight * x_norm + self.bias\n return x_norm"
}
] | import argparse
import time
import mlx.core as mx
import mlx.nn as nn
import numpy as np
import math
from mlx.utils import tree_unflatten
from collections import OrderedDict
from custom.nn.layers.normalization import LayerNormBasselCorrected, LayerNormTorchAlike
from transformers import RobertaTokenizer
from dataclasses import dataclass | 1,439 |
# utils
@dataclass
class ModelConfig:
intermediate_size: int = 3072
hidden_size: int = 768
no_heads: int = 12
hidden_layers: int = 12
vocab_size: int = 50265
attention_probs_dropout_prob: float = 0.1
hidden_dropout_prob: float = 0.1
layer_norm_eps: float = 1e-5
max_position_embeddings: int = 514
# QA model's parameters
num_labels: int = 2
type_vocab_size: int = 2
pad_token_id: int = 1
chunk_size_feed_forward: int = 0
model_configs = {
"deepset/roberta-base-squad2": ModelConfig(),
"roberta-base": ModelConfig(),
}
model_types = {
"deepset/roberta-base-squad2": "qa",
"roberta-base": "base",
}
class RobertaEmbeddings(nn.Module):
def __init__(self, config):
super().__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
|
# utils
@dataclass
class ModelConfig:
intermediate_size: int = 3072
hidden_size: int = 768
no_heads: int = 12
hidden_layers: int = 12
vocab_size: int = 50265
attention_probs_dropout_prob: float = 0.1
hidden_dropout_prob: float = 0.1
layer_norm_eps: float = 1e-5
max_position_embeddings: int = 514
# QA model's parameters
num_labels: int = 2
type_vocab_size: int = 2
pad_token_id: int = 1
chunk_size_feed_forward: int = 0
model_configs = {
"deepset/roberta-base-squad2": ModelConfig(),
"roberta-base": ModelConfig(),
}
model_types = {
"deepset/roberta-base-squad2": "qa",
"roberta-base": "base",
}
class RobertaEmbeddings(nn.Module):
def __init__(self, config):
super().__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
| self.LayerNorm = LayerNormTorchAlike(config.hidden_size, eps=config.layer_norm_eps, correction=True) | 1 | 2023-12-22 05:48:57+00:00 | 2k |
zy7y/dfs-generate | main.py | [
{
"identifier": "CodeGen",
"path": "entity.py",
"snippet": "class CodeGen(BaseVo):\n name: str\n code: str\n\n @field_serializer(\"code\")\n def serialize_code(self, code: str, _info):\n _code = black.format_str(code, mode=black.FileMode())\n return isort.code(_code)"
},
{
"identifier": "Conf",
"path": "entity.py",
"snippet": "class Conf(SQLModel, table=True):\n __tablename__ = \"dfs_conf\"\n id: int = Field(None, primary_key=True)\n db_uri: str = Field(..., description=\"数据库连接\")\n\n @classmethod\n def get_db_uri_last_new(cls):\n \"\"\"获取最新的db_url\"\"\"\n with Session(engine) as session:\n query = select(cls).order_by(cls.id.desc())\n latest_conf = session.exec(query).first()\n if latest_conf:\n return latest_conf.db_uri\n else:\n return None\n\n @classmethod\n def create(cls, uri) -> \"Conf\":\n with Session(engine) as session:\n obj = cls(db_uri=uri)\n session.add(obj)\n session.commit()\n session.refresh(obj)\n return obj\n\n @classmethod\n def get_last_uri_with_metadata(cls):\n uri = cls.get_db_uri_last_new()\n return uri, get_metadata_by_db_uri(uri)"
},
{
"identifier": "DBConf",
"path": "entity.py",
"snippet": "class DBConf(SQLModel):\n user: str\n password: str\n port: int\n host: str\n db: str\n\n def get_db_uri(self):\n return f\"mysql+pymysql://{self.user}:{self.password}@{self.host}:{self.port}/{self.db}\"\n\n def get_metadata(self):\n return get_metadata_by_db_uri(self.get_db_uri())"
},
{
"identifier": "R",
"path": "entity.py",
"snippet": "class R(BaseModel, Generic[T]):\n code: int = 20000\n msg: str = \"ok\"\n data: Optional[T] = None\n\n @classmethod\n def success(cls, **kwargs):\n return cls(**kwargs)\n\n @classmethod\n def error(cls, msg):\n return cls(code=40000, msg=msg)"
},
{
"identifier": "RList",
"path": "entity.py",
"snippet": "class RList(R[T]):\n data: List[T] = Field(default_factory=list)"
},
{
"identifier": "Table",
"path": "entity.py",
"snippet": "class Table(BaseVo):\n table_name: str\n table_comment: Optional[str] = None"
},
{
"identifier": "generate_code",
"path": "generate/main.py",
"snippet": "def generate_code(table: Table, uri: str):\n return [\n {\"name\": \"model.py\", \"code\": GenerateEntity(table).render()},\n {\"name\": \"router.py\", \"code\": render_router(table.name)},\n {\"name\": \"main.py\", \"code\": render_main(table.name)},\n {\"name\": \"db.py\", \"code\": render_db(uri)},\n ]"
}
] | from fastapi import FastAPI, Query
from fastapi.requests import Request
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles
from entity import CodeGen, Conf, DBConf, R, RList, Table
from generate.main import generate_code
import uvicorn | 789 |
app = FastAPI(
title="dfs-generate", description="FastAPI SQLModel 逆向生成代码", docs_url=None
)
app.mount("/static", StaticFiles(directory="static"), name="static")
@app.get("/", include_in_schema=False)
def index():
return FileResponse("static/index.html")
|
app = FastAPI(
title="dfs-generate", description="FastAPI SQLModel 逆向生成代码", docs_url=None
)
app.mount("/static", StaticFiles(directory="static"), name="static")
@app.get("/", include_in_schema=False)
def index():
return FileResponse("static/index.html")
| @app.get("/tables", response_model=RList[Table]) | 5 | 2023-12-23 08:32:58+00:00 | 2k |
CrawlScript/Torch-MGDCF | torch_mgdcf/evaluation/ranking.py | [
{
"identifier": "ndcg_score",
"path": "torch_mgdcf/metrics/ranking.py",
"snippet": "def ndcg_score(reference, hypothesis):\n \"\"\"\n Normalized Discounted Cumulative Gain (nDCG)\n Normalized version of DCG:\n nDCG = DCG(hypothesis)/DCG(reference)\n\n Parameters:\n reference - a gold standard (perfect) ordering Ex: [5,4,3,2,1]\n hypothesis - a proposed ordering Ex: [5,2,2,3,1]\n\n Returns:\n ndcg_score - normalized score\n \"\"\"\n\n return dcg_score(hypothesis)/dcg_score(reference)"
},
{
"identifier": "precision_score",
"path": "torch_mgdcf/metrics/ranking.py",
"snippet": "def precision_score(reference, hypothesis):\n result = np.sum(hypothesis, dtype=np.float32)/len(hypothesis)\n return result"
},
{
"identifier": "recall_score",
"path": "torch_mgdcf/metrics/ranking.py",
"snippet": "def recall_score(reference, hypothesis):\n result = np.sum(hypothesis, dtype=np.float32) / len(reference)\n return result"
},
{
"identifier": "VectorSearchEngine",
"path": "torch_mgdcf/vector_search/vector_search.py",
"snippet": "class VectorSearchEngine(object):\n def __init__(self, vectors):\n super().__init__()\n if isinstance(vectors, torch.Tensor):\n self.vectors = vectors.detach().cpu().numpy()\n else:\n self.vectors = np.array(vectors)\n self.dim = self.vectors.shape[1]\n\n self.index = faiss.IndexFlatIP(self.dim)\n self.index.add(self.vectors)\n\n def search(self, query_vectors, k=10):\n query_vectors = np.asarray(query_vectors)\n topK_distances, topK_indices = self.index.search(query_vectors, k)\n\n return topK_distances, topK_indices"
}
] | from tqdm import tqdm
from torch_mgdcf.metrics.ranking import ndcg_score, precision_score, recall_score
from torch_mgdcf.vector_search.vector_search import VectorSearchEngine
import numpy as np
import torch | 765 | # coding=utf-8
# The code is from our another project GRecX: https://github.com/maenzhier/grecx_datasets
def score(ground_truth, pred_items, k_list, metrics):
pred_match = [1 if item in ground_truth else 0 for item in pred_items]
max_k = k_list[-1]
if len(ground_truth) > max_k:
ndcg_gold = [1] * max_k
else:
ndcg_gold = [1] * len(ground_truth) + [0] * (max_k - len(ground_truth))
res_score = []
for metric in metrics:
if metric == "ndcg":
score_func = ndcg_score
elif metric == "precision":
score_func = precision_score
elif metric == "recall":
score_func = recall_score
else:
raise Exception("Not Found Metric : {}".format(metric))
for k in k_list:
if metric == "ndcg":
res_score.append(score_func(ndcg_gold[:k], pred_match[:k]))
else:
res_score.append(score_func(ground_truth, pred_match[:k]))
return res_score
def evaluate_mean_global_metrics(user_items_dict, user_mask_items_dict,
user_embedding, item_embedding,
k_list=[10, 20], metrics=["ndcg"]):
| # coding=utf-8
# The code is from our another project GRecX: https://github.com/maenzhier/grecx_datasets
def score(ground_truth, pred_items, k_list, metrics):
pred_match = [1 if item in ground_truth else 0 for item in pred_items]
max_k = k_list[-1]
if len(ground_truth) > max_k:
ndcg_gold = [1] * max_k
else:
ndcg_gold = [1] * len(ground_truth) + [0] * (max_k - len(ground_truth))
res_score = []
for metric in metrics:
if metric == "ndcg":
score_func = ndcg_score
elif metric == "precision":
score_func = precision_score
elif metric == "recall":
score_func = recall_score
else:
raise Exception("Not Found Metric : {}".format(metric))
for k in k_list:
if metric == "ndcg":
res_score.append(score_func(ndcg_gold[:k], pred_match[:k]))
else:
res_score.append(score_func(ground_truth, pred_match[:k]))
return res_score
def evaluate_mean_global_metrics(user_items_dict, user_mask_items_dict,
user_embedding, item_embedding,
k_list=[10, 20], metrics=["ndcg"]):
| v_search = VectorSearchEngine(item_embedding) | 3 | 2023-12-26 10:26:50+00:00 | 2k |
KyanChen/TTP | opencd/models/data_preprocessor.py | [
{
"identifier": "SampleList",
"path": "mmseg/utils/typing_utils.py",
"snippet": ""
},
{
"identifier": "MODELS",
"path": "opencd/registry.py",
"snippet": "MODELS = Registry('model', parent=MMENGINE_MODELS, locations=['opencd.models'])"
}
] | from numbers import Number
from typing import Any, Dict, List, Optional, Sequence, Union
from mmengine.model import BaseDataPreprocessor
from mmseg.utils import SampleList
from opencd.registry import MODELS
import numpy as np
import torch
import torch.nn.functional as F | 1,234 | # Copyright (c) Open-CD. All rights reserved.
def stack_batch(inputs: List[torch.Tensor],
data_samples: Optional[SampleList] = None,
size: Optional[tuple] = None,
size_divisor: Optional[int] = None,
pad_val: Union[int, float] = 0,
seg_pad_val: Union[int, float] = 255) -> torch.Tensor:
"""Stack multiple inputs to form a batch and pad the images and gt_sem_segs
to the max shape use the right bottom padding mode.
Args:
inputs (List[Tensor]): The input multiple tensors. each is a
CHW 3D-tensor.
data_samples (list[:obj:`SegDataSample`]): The list of data samples.
It usually includes information such as `gt_sem_seg`.
size (tuple, optional): Fixed padding size.
size_divisor (int, optional): The divisor of padded size.
pad_val (int, float): The padding value. Defaults to 0
seg_pad_val (int, float): The padding value. Defaults to 255
Returns:
Tensor: The 4D-tensor.
List[:obj:`SegDataSample`]: After the padding of the gt_seg_map.
"""
assert isinstance(inputs, list), \
f'Expected input type to be list, but got {type(inputs)}'
assert len({tensor.ndim for tensor in inputs}) == 1, \
f'Expected the dimensions of all inputs must be the same, ' \
f'but got {[tensor.ndim for tensor in inputs]}'
assert inputs[0].ndim == 3, f'Expected tensor dimension to be 3, ' \
f'but got {inputs[0].ndim}'
assert len({tensor.shape[0] for tensor in inputs}) == 1, \
f'Expected the channels of all inputs must be the same, ' \
f'but got {[tensor.shape[0] for tensor in inputs]}'
# only one of size and size_divisor should be valid
assert (size is not None) ^ (size_divisor is not None), \
'only one of size and size_divisor should be valid'
padded_inputs = []
padded_samples = []
inputs_sizes = [(img.shape[-2], img.shape[-1]) for img in inputs]
max_size = np.stack(inputs_sizes).max(0)
if size_divisor is not None and size_divisor > 1:
# the last two dims are H,W, both subject to divisibility requirement
max_size = (max_size +
(size_divisor - 1)) // size_divisor * size_divisor
for i in range(len(inputs)):
tensor = inputs[i]
if size is not None:
width = max(size[-1] - tensor.shape[-1], 0)
height = max(size[-2] - tensor.shape[-2], 0)
# (padding_left, padding_right, padding_top, padding_bottom)
padding_size = (0, width, 0, height)
elif size_divisor is not None:
width = max(max_size[-1] - tensor.shape[-1], 0)
height = max(max_size[-2] - tensor.shape[-2], 0)
padding_size = (0, width, 0, height)
else:
padding_size = [0, 0, 0, 0]
# pad img
pad_img = F.pad(tensor, padding_size, value=pad_val)
padded_inputs.append(pad_img)
# pad gt_sem_seg
if data_samples is not None:
data_sample = data_samples[i]
gt_sem_seg = data_sample.gt_sem_seg.data
del data_sample.gt_sem_seg.data
data_sample.gt_sem_seg.data = F.pad(
gt_sem_seg, padding_size, value=seg_pad_val)
if 'gt_edge_map' in data_sample:
gt_edge_map = data_sample.gt_edge_map.data
del data_sample.gt_edge_map.data
data_sample.gt_edge_map.data = F.pad(
gt_edge_map, padding_size, value=seg_pad_val)
if 'gt_seg_map_from' in data_sample:
gt_seg_map_from = data_sample.gt_seg_map_from.data
del data_sample.gt_seg_map_from.data
data_sample.gt_seg_map_from.data = F.pad(
gt_seg_map_from, padding_size, value=seg_pad_val)
if 'gt_seg_map_to' in data_sample:
gt_seg_map_to = data_sample.gt_seg_map_to.data
del data_sample.gt_seg_map_to.data
data_sample.gt_seg_map_to.data = F.pad(
gt_seg_map_to, padding_size, value=seg_pad_val)
data_sample.set_metainfo({
'img_shape': tensor.shape[-2:],
'pad_shape': data_sample.gt_sem_seg.shape,
'padding_size': padding_size
})
padded_samples.append(data_sample)
else:
padded_samples.append(
dict(
img_padding_size=padding_size,
pad_shape=pad_img.shape[-2:]))
return torch.stack(padded_inputs, dim=0), padded_samples
| # Copyright (c) Open-CD. All rights reserved.
def stack_batch(inputs: List[torch.Tensor],
data_samples: Optional[SampleList] = None,
size: Optional[tuple] = None,
size_divisor: Optional[int] = None,
pad_val: Union[int, float] = 0,
seg_pad_val: Union[int, float] = 255) -> torch.Tensor:
"""Stack multiple inputs to form a batch and pad the images and gt_sem_segs
to the max shape use the right bottom padding mode.
Args:
inputs (List[Tensor]): The input multiple tensors. each is a
CHW 3D-tensor.
data_samples (list[:obj:`SegDataSample`]): The list of data samples.
It usually includes information such as `gt_sem_seg`.
size (tuple, optional): Fixed padding size.
size_divisor (int, optional): The divisor of padded size.
pad_val (int, float): The padding value. Defaults to 0
seg_pad_val (int, float): The padding value. Defaults to 255
Returns:
Tensor: The 4D-tensor.
List[:obj:`SegDataSample`]: After the padding of the gt_seg_map.
"""
assert isinstance(inputs, list), \
f'Expected input type to be list, but got {type(inputs)}'
assert len({tensor.ndim for tensor in inputs}) == 1, \
f'Expected the dimensions of all inputs must be the same, ' \
f'but got {[tensor.ndim for tensor in inputs]}'
assert inputs[0].ndim == 3, f'Expected tensor dimension to be 3, ' \
f'but got {inputs[0].ndim}'
assert len({tensor.shape[0] for tensor in inputs}) == 1, \
f'Expected the channels of all inputs must be the same, ' \
f'but got {[tensor.shape[0] for tensor in inputs]}'
# only one of size and size_divisor should be valid
assert (size is not None) ^ (size_divisor is not None), \
'only one of size and size_divisor should be valid'
padded_inputs = []
padded_samples = []
inputs_sizes = [(img.shape[-2], img.shape[-1]) for img in inputs]
max_size = np.stack(inputs_sizes).max(0)
if size_divisor is not None and size_divisor > 1:
# the last two dims are H,W, both subject to divisibility requirement
max_size = (max_size +
(size_divisor - 1)) // size_divisor * size_divisor
for i in range(len(inputs)):
tensor = inputs[i]
if size is not None:
width = max(size[-1] - tensor.shape[-1], 0)
height = max(size[-2] - tensor.shape[-2], 0)
# (padding_left, padding_right, padding_top, padding_bottom)
padding_size = (0, width, 0, height)
elif size_divisor is not None:
width = max(max_size[-1] - tensor.shape[-1], 0)
height = max(max_size[-2] - tensor.shape[-2], 0)
padding_size = (0, width, 0, height)
else:
padding_size = [0, 0, 0, 0]
# pad img
pad_img = F.pad(tensor, padding_size, value=pad_val)
padded_inputs.append(pad_img)
# pad gt_sem_seg
if data_samples is not None:
data_sample = data_samples[i]
gt_sem_seg = data_sample.gt_sem_seg.data
del data_sample.gt_sem_seg.data
data_sample.gt_sem_seg.data = F.pad(
gt_sem_seg, padding_size, value=seg_pad_val)
if 'gt_edge_map' in data_sample:
gt_edge_map = data_sample.gt_edge_map.data
del data_sample.gt_edge_map.data
data_sample.gt_edge_map.data = F.pad(
gt_edge_map, padding_size, value=seg_pad_val)
if 'gt_seg_map_from' in data_sample:
gt_seg_map_from = data_sample.gt_seg_map_from.data
del data_sample.gt_seg_map_from.data
data_sample.gt_seg_map_from.data = F.pad(
gt_seg_map_from, padding_size, value=seg_pad_val)
if 'gt_seg_map_to' in data_sample:
gt_seg_map_to = data_sample.gt_seg_map_to.data
del data_sample.gt_seg_map_to.data
data_sample.gt_seg_map_to.data = F.pad(
gt_seg_map_to, padding_size, value=seg_pad_val)
data_sample.set_metainfo({
'img_shape': tensor.shape[-2:],
'pad_shape': data_sample.gt_sem_seg.shape,
'padding_size': padding_size
})
padded_samples.append(data_sample)
else:
padded_samples.append(
dict(
img_padding_size=padding_size,
pad_shape=pad_img.shape[-2:]))
return torch.stack(padded_inputs, dim=0), padded_samples
| @MODELS.register_module() | 1 | 2023-12-23 08:36:47+00:00 | 2k |
N0rz3/Phunter | lib/lookup.py | [
{
"identifier": "free",
"path": "lib/free_lookup.py",
"snippet": "async def free(phone_number):\r\n r = await Request(\"https://free-lookup.net/{}\".format(phone_number), headers={'user-agent': random.choice(agent)}).get()\r\n\r\n html_body = BeautifulSoup(r.text, \"html.parser\")\r\n list_info = html_body.findChild(\"ul\", class_=\"report-summary__list\").findAll(\"div\")\r\n\r\n info_dict = {\r\n k.text.strip(): info.text.strip() if info.text.strip() else \"Not found\"\r\n for _, (k, info) in enumerate(zip(list_info[::2], list_info[1::2]))\r\n }\r\n\r\n print(f\"\\n [{GREEN}>{WHITE}] Free-lookup\")\r\n\r\n for key, value in info_dict.items():\r\n if value != \"Not found\":\r\n print(f\" ├── {key}: {value}\")\r\n\r\n else:\r\n continue"
},
{
"identifier": "spamcalls",
"path": "lib/spam.py",
"snippet": "async def spamcalls(p_n):\r\n print(f\"\\n [{GREEN}>{WHITE}] Spamcalls\")\r\n\r\n url = f\"https://spamcalls.net/en/number/{p_n}\"\r\n\r\n r = await Request(url, headers={'user-agent': random.choice(user_agent)}).get()\r\n\r\n if r.status_code == 200:\r\n print(f\" └── {RED}!{WHITE} Spammer\")\r\n\r\n else:\r\n print(f\" └── {GREEN}>{WHITE} Not spammer\")"
}
] | import phonenumbers
import json
from phonenumbers import carrier
from .reputation import *
from .free_lookup import free
from .spam import spamcalls
from lib.text import *
| 809 |
async def lookup(phone_number):
print()
parsed = phonenumbers.parse(phone_number)
operator = carrier.name_for_number(parsed, "fr")
line = phonenumbers.number_type(parsed)
if line == phonenumbers.PhoneNumberType.FIXED_LINE:
ligne = f" [{GREEN}>{WHITE}] Line type: Fixed"
elif line == phonenumbers.PhoneNumberType.MOBILE:
ligne = f" [{GREEN}>{WHITE}] Line type: Mobile"
else:
ligne = " [-] Line not found"
possible = phonenumbers.is_possible_number(parsed)
valid = phonenumbers.is_valid_number(parsed)
with open("lib/country.json", "r") as file:
read = json.load(file)
d = 0
countrys = []
for country, code in read.items():
d += 1
if phone_number.startswith(code):
countrys.append(country)
if d == 153:
break
else:
continue
else:
continue
print(f"{WHITE}📞 Phone number: {BLUE}{phone_number}{WHITE}")
if possible == True:
pos = {"possible": "✔️"}
else:
pos = {"possible": "❌"}
if valid == True:
val = {"valid": "✔️"}
else:
val = {"valid": "❌"}
print(f" [{GREEN}>{WHITE}] Possible: {pos['possible']}")
print(f" [{GREEN}>{WHITE}] Valid: {val['valid']}")
print()
if operator != "":
print(f" [{GREEN}>{WHITE}] Operator: {operator}")
else:
print(f" [-] Not Operator")
try:
print(f" [{GREEN}>{WHITE}] Possible location: " + str(countrys).replace("[", "").replace("]", "").replace("'", ""))
except:
print(f" [-] Not location")
print(ligne)
await reputation(phone_number)
|
async def lookup(phone_number):
print()
parsed = phonenumbers.parse(phone_number)
operator = carrier.name_for_number(parsed, "fr")
line = phonenumbers.number_type(parsed)
if line == phonenumbers.PhoneNumberType.FIXED_LINE:
ligne = f" [{GREEN}>{WHITE}] Line type: Fixed"
elif line == phonenumbers.PhoneNumberType.MOBILE:
ligne = f" [{GREEN}>{WHITE}] Line type: Mobile"
else:
ligne = " [-] Line not found"
possible = phonenumbers.is_possible_number(parsed)
valid = phonenumbers.is_valid_number(parsed)
with open("lib/country.json", "r") as file:
read = json.load(file)
d = 0
countrys = []
for country, code in read.items():
d += 1
if phone_number.startswith(code):
countrys.append(country)
if d == 153:
break
else:
continue
else:
continue
print(f"{WHITE}📞 Phone number: {BLUE}{phone_number}{WHITE}")
if possible == True:
pos = {"possible": "✔️"}
else:
pos = {"possible": "❌"}
if valid == True:
val = {"valid": "✔️"}
else:
val = {"valid": "❌"}
print(f" [{GREEN}>{WHITE}] Possible: {pos['possible']}")
print(f" [{GREEN}>{WHITE}] Valid: {val['valid']}")
print()
if operator != "":
print(f" [{GREEN}>{WHITE}] Operator: {operator}")
else:
print(f" [-] Not Operator")
try:
print(f" [{GREEN}>{WHITE}] Possible location: " + str(countrys).replace("[", "").replace("]", "").replace("'", ""))
except:
print(f" [-] Not location")
print(ligne)
await reputation(phone_number)
| await free(str(phone_number).replace("+", ""))
| 0 | 2023-12-30 13:21:14+00:00 | 2k |
dan-r/HomeAssistant-Ohme | custom_components/ohme/binary_sensor.py | [
{
"identifier": "DOMAIN",
"path": "custom_components/ohme/const.py",
"snippet": "DOMAIN = \"ohme\""
},
{
"identifier": "DATA_COORDINATORS",
"path": "custom_components/ohme/const.py",
"snippet": "DATA_COORDINATORS = \"coordinators\""
},
{
"identifier": "COORDINATOR_CHARGESESSIONS",
"path": "custom_components/ohme/const.py",
"snippet": "COORDINATOR_CHARGESESSIONS = 0"
},
{
"identifier": "COORDINATOR_ADVANCED",
"path": "custom_components/ohme/const.py",
"snippet": "COORDINATOR_ADVANCED = 3"
},
{
"identifier": "DATA_CLIENT",
"path": "custom_components/ohme/const.py",
"snippet": "DATA_CLIENT = \"client\""
},
{
"identifier": "OhmeChargeSessionsCoordinator",
"path": "custom_components/ohme/coordinator.py",
"snippet": "class OhmeChargeSessionsCoordinator(DataUpdateCoordinator):\n \"\"\"Coordinator to pull main charge state and power/current draw.\"\"\"\n\n def __init__(self, hass):\n \"\"\"Initialise coordinator.\"\"\"\n super().__init__(\n hass,\n _LOGGER,\n name=\"Ohme Charge Sessions\",\n update_interval=timedelta(seconds=30),\n )\n self._client = hass.data[DOMAIN][DATA_CLIENT]\n\n async def _async_update_data(self):\n \"\"\"Fetch data from API endpoint.\"\"\"\n try:\n return await self._client.async_get_charge_sessions()\n\n except BaseException:\n raise UpdateFailed(\"Error communicating with API\")"
},
{
"identifier": "OhmeAdvancedSettingsCoordinator",
"path": "custom_components/ohme/coordinator.py",
"snippet": "class OhmeAdvancedSettingsCoordinator(DataUpdateCoordinator):\n \"\"\"Coordinator to pull CT clamp reading.\"\"\"\n\n def __init__(self, hass):\n \"\"\"Initialise coordinator.\"\"\"\n super().__init__(\n hass,\n _LOGGER,\n name=\"Ohme Advanced Settings\",\n update_interval=timedelta(minutes=1),\n )\n self._client = hass.data[DOMAIN][DATA_CLIENT]\n\n async def _async_update_data(self):\n \"\"\"Fetch data from API endpoint.\"\"\"\n try:\n return await self._client.async_get_advanced_settings()\n\n except BaseException:\n raise UpdateFailed(\"Error communicating with API\")"
},
{
"identifier": "charge_graph_in_slot",
"path": "custom_components/ohme/utils.py",
"snippet": "def charge_graph_in_slot(charge_start, points, skip_format=False):\n \"\"\"Are we currently in a charge slot?\"\"\"\n now = int(time())\n data = points if skip_format else _format_charge_graph(charge_start, points)\n\n # Loop through every value, skipping the last\n for idx in range(0, len(data) - 1):\n # This is our current point\n if data[idx][\"t\"] < now and data[idx + 1][\"t\"] > now:\n # If the delta line we are on is steeper than 10,\n # we are in a charge slot.\n if data[idx + 1][\"y\"] - data[idx][\"y\"] > 10:\n return True\n break\n\n return False"
}
] | import logging
from homeassistant.components.binary_sensor import (
BinarySensorDeviceClass,
BinarySensorEntity
)
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from homeassistant.core import HomeAssistant, callback
from homeassistant.helpers.entity import generate_entity_id
from homeassistant.util.dt import (utcnow)
from .const import DOMAIN, DATA_COORDINATORS, COORDINATOR_CHARGESESSIONS, COORDINATOR_ADVANCED, DATA_CLIENT
from .coordinator import OhmeChargeSessionsCoordinator, OhmeAdvancedSettingsCoordinator
from .utils import charge_graph_in_slot | 823 | """Platform for sensor integration."""
from __future__ import annotations
_LOGGER = logging.getLogger(__name__)
async def async_setup_entry(
hass: core.HomeAssistant,
config_entry: config_entries.ConfigEntry,
async_add_entities,
):
"""Setup sensors and configure coordinator."""
client = hass.data[DOMAIN][DATA_CLIENT]
| """Platform for sensor integration."""
from __future__ import annotations
_LOGGER = logging.getLogger(__name__)
async def async_setup_entry(
hass: core.HomeAssistant,
config_entry: config_entries.ConfigEntry,
async_add_entities,
):
"""Setup sensors and configure coordinator."""
client = hass.data[DOMAIN][DATA_CLIENT] | coordinator = hass.data[DOMAIN][DATA_COORDINATORS][COORDINATOR_CHARGESESSIONS] | 1 | 2023-12-24 20:59:18+00:00 | 2k |
Almas-Ali/SpyIP | spyip/backend.py | [
{
"identifier": "TooManyRequests",
"path": "spyip/exceptions.py",
"snippet": "class TooManyRequests(Exception):\n pass"
},
{
"identifier": "ConnectionTimeout",
"path": "spyip/exceptions.py",
"snippet": "class ConnectionTimeout(Exception):\n pass"
},
{
"identifier": "StatusError",
"path": "spyip/exceptions.py",
"snippet": "class StatusError(Exception):\n pass"
},
{
"identifier": "IPResponse",
"path": "spyip/models.py",
"snippet": "class IPResponse(BaseModel):\n \"\"\"\n Example response from API:\n\n {\n \"status\": \"success\",\n \"continent\": \"Asia\",\n \"continentCode\": \"AS\",\n \"country\": \"India\",\n \"countryCode\": \"IN\",\n \"region\": \"DL\",\n \"regionName\": \"National Capital Territory of Delhi\",\n \"city\": \"New Delhi\",\n \"district\": \"\",\n \"zip\": \"110001\",\n \"lat\": 28.6139,\n \"lon\": 77.209,\n \"timezone\": \"Asia/Kolkata\",\n \"offset\": 19800,\n \"currency\": \"INR\",\n \"isp\": \"Google LLC\",\n \"org\": \"Google LLC\",\n \"as\": \"AS15169 Google LLC\",\n \"asname\": \"GOOGLE\",\n \"mobile\": false,\n \"proxy\": false,\n \"hosting\": true,\n \"query\": \"142.250.193.206\",\n }\n \"\"\"\n\n status: str = Field(..., description='Status of the request.')\n continent: str = Field(..., description='Continent name.')\n continentCode: str = Field(..., description='Continent code.')\n country: str = Field(..., description='Country name.')\n countryCode: str = Field(..., description='Country code.')\n region: str = Field(..., description='Region code.')\n regionName: str = Field(..., description='Region name.')\n city: str = Field(..., description='City name.')\n district: str = Field(..., description='District name.')\n zip_: str = Field(..., description='Zip code.')\n lat: float = Field(..., description='Latitude.')\n lon: float = Field(..., description='Longitude.')\n timezone: str = Field(..., description='Timezone.')\n offset: int = Field(..., description='Offset.')\n currency: str = Field(..., description='Currency.')\n isp: str = Field(..., description='ISP name.')\n org: str = Field(..., description='Organization name.')\n as_: str = Field(..., description='AS number and name.')\n asname: str = Field(..., description='AS name.')\n mobile: bool = Field(..., description='Mobile status.')\n proxy: bool = Field(..., description='Proxy status.')\n hosting: bool = Field(..., description='Hosting status.')\n query: str = Field(..., description='IP address.')\n\n class Config:\n def alias_generator(x):\n return x.replace('_', '')\n\n populate_by_name = True\n # fields = { # Alias for reserved keywords\n # \"as_\": \"as\",\n # \"zip_\": \"zip\",\n # }\n\n @field_validator('status')\n def check_status(cls, v):\n if v != 'success':\n raise ValueError('Invalid IP address.')\n return v\n\n def json(self, **kwargs) -> str:\n return self.model_dump_json(**kwargs)"
},
{
"identifier": "DNSResponse",
"path": "spyip/models.py",
"snippet": "class DNSResponse(BaseModel):\n \"\"\"\n Example response from API:\n \"dns\": {\n \"ip\": \"74.125.73.83\",\n \"geo\": \"United States - Google\"\n }\n \"\"\"\n\n ip: str = Field(..., description='IP address.')\n geo: str = Field(..., description='Geo location.')\n\n def json(self, **kwargs) -> str:\n return self.model_dump_json(**kwargs)"
}
] | from typing import List, Union
from .exceptions import (
TooManyRequests,
ConnectionTimeout,
StatusError,
)
from .models import (
IPResponse,
DNSResponse,
)
import asyncio
import random
import string
import httpx | 1,207 |
def get_random_string(length: int = 32) -> str:
"""Generate a random string of fixed length."""
letters = string.ascii_lowercase + string.digits
return ''.join(random.sample(letters, length))
# API endpoints for IP address lookup
trace_me_url = 'http://ip-api.com/json/'
trace_ip_url = 'http://ip-api.com/json/%(query)s'
trace_dns_url = f'http://{get_random_string(32)}.edns.ip-api.com/json/'
trace_ip_batch_url = 'http://ip-api.com/batch'
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'en-US,en;q=0.5',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
}
def trace_me(
timeout: int = 5,
lang: str = 'en',
|
def get_random_string(length: int = 32) -> str:
"""Generate a random string of fixed length."""
letters = string.ascii_lowercase + string.digits
return ''.join(random.sample(letters, length))
# API endpoints for IP address lookup
trace_me_url = 'http://ip-api.com/json/'
trace_ip_url = 'http://ip-api.com/json/%(query)s'
trace_dns_url = f'http://{get_random_string(32)}.edns.ip-api.com/json/'
trace_ip_batch_url = 'http://ip-api.com/batch'
headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'en-US,en;q=0.5',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
}
def trace_me(
timeout: int = 5,
lang: str = 'en', | ) -> Union[IPResponse, None]: | 3 | 2023-12-31 19:43:38+00:00 | 2k |
leopedroso45/Stable-Diffusion-ImageGen | tests/test_process_task.py | [
{
"identifier": "check_cuda_and_clear_cache",
"path": "sevsd/process_task.py",
"snippet": "def check_cuda_and_clear_cache():\n r\"\"\"\n Clears the CUDA cache if available, otherwise performs garbage collection.\n This function is called to manage memory usage, particularly when working with large models or multiple image generations.\n \"\"\"\n if torch.cuda.is_available():\n torch.cuda.empty_cache()\n else:\n gc.collect()"
},
{
"identifier": "process_task",
"path": "sevsd/process_task.py",
"snippet": "def process_task(job, pipeline, executor, path, parallel_exec=True):\n r\"\"\"\n Processes a single image generation job using the specified pipeline and execution parameters.\n\n This function handles the generation of one or more images based on a given job description. It supports both parallel and sequential execution modes. Generated images are saved to the specified path.\n\n Parameters:\n job (dict): A dictionary containing details for the image generation task. It includes 'prompt' and optionally 'negative_prompt'.\n pipeline (callable): The Stable Diffusion pipeline callable used for generating images.\n executor (dict): A dictionary containing execution parameters such as 'num_of_exec', 'cfg_scale', and 'inference_steps'.\n path (str): The directory path where generated images will be saved.\n parallel_exec (bool, optional): If True, generates all specified images in parallel. Defaults to True.\n\n The function saves each generated image with a unique timestamp in the specified path and prints the save location. In case of any exceptions, they are caught and printed.\n\n Example:\n job = {\n \"prompt\": \"A scenic landscape\",\n \"negative_prompt\": \"blurred image, black and white, watermarked image\"\n }\n executor = {\n \"num_of_exec\": 2,\n \"cfg_scale\": 7,\n \"inference_steps\": 50\n }\n pipeline = setup_pipeline(\"CompVis/stable-diffusion-v1-4\")\n process_task(job, pipeline, executor, \"./generated-images\", parallel_exec=False)\n\n Note:\n This function also handles CUDA cache clearing and garbage collection for memory management.\n \"\"\"\n \n def call_generate_image():\n images = generate_image(job, pipeline, executor, parallel_exec)\n if images is not None:\n for image in images:\n timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S%f\")\n image_path = f\"{path}/generated_image_{timestamp}.png\"\n image.save(image_path)\n print(f\"[sevsd] - image saved at {image_path}\")\n else:\n print(\"[sevsd] - image generation failed due to memory constraints.\")\n check_cuda_and_clear_cache()\n \n try:\n path = check_os_path(path)\n if job is not None:\n if parallel_exec is not True:\n num_images = executor.get(\"num_of_exec\", 1)\n for _ in range(num_images):\n call_generate_image()\n else:\n call_generate_image()\n except Exception as e:\n print(f\"[sevsd] - exception: {e}\")\n finally:\n check_cuda_and_clear_cache()"
},
{
"identifier": "check_os_path",
"path": "sevsd/process_task.py",
"snippet": "def check_os_path(path):\n r\"\"\"\n Checks if the given path exists, and if not, creates the necessary directories.\n This function ensures that the output path for saving images is available.\n\n Parameters:\n path (str): The directory path to check and create if necessary.\n\n Returns:\n str: The verified or created directory path.\n \"\"\"\n if not os.path.exists(path):\n os.makedirs(path)\n print(f\"[sevsd] - created path: {path}\")\n return path"
}
] | import unittest
import sys
from unittest.mock import patch, MagicMock
from sevsd.process_task import check_cuda_and_clear_cache, process_task, check_os_path | 991 | sys.path.append('../')
class TestProcessTask(unittest.TestCase):
@patch('sevsd.process_task.generate_image')
def test_process_task(self, mock_generate_image):
mock_image = MagicMock()
mock_image.save = MagicMock()
mock_generate_image.return_value = [mock_image]
fake_job = {"prompt": "prompt", "details": (None, 50, 1, 7.5)}
fake_pipeline = MagicMock()
fake_executor = {"num_of_exec": 1, "cfg_scale": 7}
fake_path = "test_path"
| sys.path.append('../')
class TestProcessTask(unittest.TestCase):
@patch('sevsd.process_task.generate_image')
def test_process_task(self, mock_generate_image):
mock_image = MagicMock()
mock_image.save = MagicMock()
mock_generate_image.return_value = [mock_image]
fake_job = {"prompt": "prompt", "details": (None, 50, 1, 7.5)}
fake_pipeline = MagicMock()
fake_executor = {"num_of_exec": 1, "cfg_scale": 7}
fake_path = "test_path"
| process_task(fake_job, fake_pipeline, fake_executor, fake_path, parallel_exec=True) | 1 | 2023-12-28 16:19:12+00:00 | 2k |
RepoBench v1.1 (Python)
Introduction
This dataset presents the Python portion of RepoBench v1.1 (ICLR 2024). The data encompasses a collection from GitHub, spanning the period from October 6th to December 31st, 2023. With a commitment to data integrity, we've implemented a deduplication process based on file content against the Stack v2 dataset (coming soon), aiming to mitigate data leakage and memorization concerns.
Resources and Links
FAQs
Q: What do the features in the dataset mean?
A: Imagine you're coding in Python and you want to write the next line of your code. The dataset provides you with the following information:

- `repo_name` (string): the name of the repository
- `file_path` (string): the path of the current file
- `context` (list): the cross-file code snippets that might be helpful for writing the next line:
  - `identifier` (string): the identifier of the code snippet
  - `path` (string): the path of the code snippet
  - `snippet` (string): the code snippet
- `import_statement` (string): the import statement of the current file
- `cropped_code` (string): the cropped code of the current file (up to the previous 120 lines)
- `all_code` (string): the entire code of the current file (not cropped)
- `next_line` (string): the next line of the code (this serves as the target)
- `gold_snippet_index` (int): the index of the gold snippet in the context (which will be used in the next line, just for reference; you should not use this for next line prediction)
- `created_at` (string): the creation time of the repository
- `level` (string): the level of next line completion, which is measured by the number of tokens for the whole prompt (including all the context, the import statement, the cropped code and some necessary separator tokens)
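For a concrete picture of these fields, here is a minimal, illustrative sketch (it assumes the `datasets` library is installed and simply loads this dataset as described later in this card):

```python
from datasets import load_dataset

# load the Python subset of RepoBench v1.1 (dataset id taken from this card)
dataset = load_dataset("tianyang/repobench_python_v1.1")

# take the first example of the `cross_file_first` split (splits are described below)
example = dataset["cross_file_first"][0]

print(example["repo_name"])     # repository the file comes from
print(example["file_path"])     # path of the file being completed
print(len(example["context"]))  # number of cross-file snippets available
print(example["next_line"])     # the target line to predict
print(example["level"])         # prompt-length bucket, e.g. "2k"
```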
Q: How is the level defined?

A: The level is determined by the number of tokens for the whole prompt (including all the context, the import statement, the cropped code and some necessary separator tokens). The token number is calculated with the GPT-4 tokenizer using tiktoken. The following table shows the level definition:
| Level | Prompt Length (Number of Tokens) |
|-------|----------------------------------|
| 2k    | 640 - 1,600                      |
| 4k    | 1,600 - 3,600                    |
| 8k    | 3,600 - 7,200                    |
| 12k   | 7,200 - 10,800                   |
| 16k   | 10,800 - 14,400                  |
| 24k   | 14,400 - 21,600                  |
| 32k   | 21,600 - 28,800                  |
| 64k   | 28,800 - 57,600                  |
| 128k  | 57,600 - 100,000                 |
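As a rough, unofficial sketch, a constructed prompt can be mapped to one of these levels with tiktoken's GPT-4 encoding; the official counts also include the separator tokens mentioned above, so treat this as an approximation:

```python
import tiktoken

# GPT-4 tokenizer, as used for the level definition in this card
encoding = tiktoken.encoding_for_model("gpt-4")

# upper bounds (in tokens) for each level, taken from the table above
LEVEL_UPPER_BOUNDS = [
    (1_600, "2k"), (3_600, "4k"), (7_200, "8k"), (10_800, "12k"),
    (14_400, "16k"), (21_600, "24k"), (28_800, "32k"),
    (57_600, "64k"), (100_000, "128k"),
]

def approximate_level(prompt: str) -> str:
    """Map a fully constructed prompt string to its length bucket (approximate)."""
    num_tokens = len(encoding.encode(prompt))
    if num_tokens < 640:
        return "below 2k"  # shorter than the smallest bucket in the table
    for upper_bound, level in LEVEL_UPPER_BOUNDS:
        if num_tokens <= upper_bound:
            return level
    return "above 128k"  # longer than the largest bucket in the table
```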
Q: What do the different splits mean?

A: The dataset is split into three parts:

- `cross_file_first`: the next line of code utilizes content from a cross-file code snippet and it is its first usage within the current file.
- `cross_file_random`: the next line of code utilizes content from a cross-file code snippet and it is NOT its first usage within the current file.
- `in_file`: the next line of code does not utilize content from a cross-file code snippet.
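Assuming the dataset has been loaded as shown in the loading question below, the splits and their sizes can be listed with a couple of lines (purely illustrative):

```python
# `dataset` is the DatasetDict returned by load_dataset("tianyang/repobench_python_v1.1")
for split_name, split in dataset.items():
    print(split_name, len(split))
# expected split names: cross_file_first, cross_file_random, in_file
```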
Q: How to construct the prompt for next line prediction?
A: We hereby provide the official implementation for constructing prompts. Please note that the methods described below are not necessarily the optimal way of construction. Reordering, retrieval augmentation, or employing different cropping/construction techniques could potentially lead to varying degrees of improvement. Ensure that your model evaluations are conducted in a fair manner.
```python
import re


def construct_prompt(
    data: dict,
    language: str = "python",
    tokenizer=None,
    max_token_nums: int = 15800,
) -> str:
    """
    Construct the prompt for next line prediction.

    :param data: data point from the dataset
    :param language: the language of the code
    :param tokenizer: the tokenizer of the evaluation model
    :param max_token_nums: the maximum number of tokens constraint for the prompt

    :return: the constructed prompt
    """
    # comment symbol for different languages
    comment_symbol = "#" if language == "python" else "//"

    # construct the cross-file prompt and in-file prompt separately
    # cross-file prompt
    cross_file_prompt = f"{comment_symbol} Repo Name: {data['repo_name']}\n"
    for snippet in data['context']:
        cross_file_prompt += f"{comment_symbol} Path: {snippet['path']}\n{snippet['snippet']}" + "\n\n"

    # in-file prompt
    in_file_prompt = f"{comment_symbol} Path: {data['file_path']}\n{data['import_statement']}\n{data['cropped_code']}\n"

    # if we assign the tokenizer and the max_token_nums, we will truncate
    # the cross-file prompt to meet the constraint
    if tokenizer is not None and max_token_nums is not None:
        cross_file_prompt_token_nums = len(tokenizer.encode(cross_file_prompt))
        in_file_prompt_token_nums = len(tokenizer.encode(in_file_prompt))

        exceed_token_nums = cross_file_prompt_token_nums + in_file_prompt_token_nums - max_token_nums

        if exceed_token_nums > 0:
            # split the cross-file prompt into lines
            cross_file_prompt_lines = cross_file_prompt.split("\n")
            # drop lines from the end until the exceeding token number is less than 0
            for i in range(len(cross_file_prompt_lines) - 1, -1, -1):
                exceed_token_nums -= len(tokenizer.encode(cross_file_prompt_lines[i]))
                if exceed_token_nums < 0:
                    break

            # join the lines back
            cross_file_prompt = "\n".join(cross_file_prompt_lines[:i]) + "\n\n"

    # combine the cross-file prompt and in-file prompt
    prompt = cross_file_prompt + in_file_prompt

    # normalize some empty lines
    prompt = re.sub(r'\n{4,}', '\n\n', prompt)

    return prompt
```
Q: How to load the dataset?
A: You can simply use the following code to load the dataset:
```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench_python_v1.1")
```
To construct the prompt for next line prediction, you can refer to the official implementation provided in the previous question and use the `construct_prompt` function to construct the prompt, for example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")

prompt = construct_prompt(dataset['cross_file_first'][0], tokenizer=tokenizer, max_token_nums=15800)
```
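Continuing that example, one possible way to produce a prediction and compare it with the ground-truth `next_line` is sketched below; the decoding settings and the exact-match check are illustrative choices, not the official evaluation protocol:

```python
import torch

# tokenize the constructed prompt and greedily generate a short continuation
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )

# keep only the newly generated tokens and take the first non-empty line
generated = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
predicted_line = next((line for line in generated.splitlines() if line.strip()), "")

# compare against the ground-truth next line of the same example
target_line = dataset["cross_file_first"][0]["next_line"]
print("prediction :", predicted_line.strip())
print("target     :", target_line.strip())
print("exact match:", predicted_line.strip() == target_line.strip())
```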
Q: How often will the dataset be updated?
A: We plan to update the dataset every three months, but there might be slight delays considering the time required for data crawling and our own schedules. If you require updated data, please feel free to contact us, and we can coordinate the timing and expedite the process.
Q: What models should I use to evaluate the dataset?
A: RepoBench is designed to evaluate base models, not those that have been instruction fine-tuned. Please use base models for evaluation.
Q: I am training a new model but the knowledge cutoff date is after the dataset's. Can you provide me with the latest data?
A: Sure! We are happy to provide you with the latest data (even customized data with specific requirements). Please feel free to contact us.
Q: Can I opt out?

A: Yes, you can opt your repository out of the dataset. Please check Am I in RepoBench? We will upload the raw data of the repository information we crawled at least 15 days before the dataset creation and release. We will respect your decision and remove your repository from the dataset if you opt out.
Citation
If you find RepoBench useful in your research, please consider citing the paper using the following BibTeX entry:
```bibtex
@misc{liu2023repobench,
  title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
  author={Tianyang Liu and Canwen Xu and Julian McAuley},
  year={2024},
  url={https://arxiv.org/abs/2306.03091},
  booktitle={International Conference on Learning Representations}
}
```
Your interest and contributions to RepoBench are immensely valued. Happy coding! 🚀