# Conditional flow for switch scenario
This example demonstrates a conditional flow for a switch scenario.
By following this example, you will learn how to create a conditional flow using `activate config`.
## Flow description
In this flow, the scenario is the search function of an online mall. We use `activate config` to implement switch logic: the flow determines user intent from the input query, processes it dynamically, and generates user-facing output.
- The `classify_with_llm` node analyzes user intent based on the input query and returns one of the following results: "product_recommendation", "order_search", or "product_info".
- The `class_check` node generates the correctly formatted user intent.
- The `product_recommendation`, `order_search`, and `product_info` nodes are configured with activate config and are only executed when the output from `class_check` meets the specified conditions.
- The `generate_response` node generates user-facing output.
For example, when the input query is "When will my order be shipped?", the LLM node classifies the user intent as "order_search". As a result, both the `product_info` and `product_recommendation` nodes are bypassed; only the `order_search` node is executed, and the flow then generates the output.
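As a sketch of how such a conditional node is wired up in `flow.dag.yaml`, the `activate` config attaches a condition to the node; the node runs only when the condition holds. The node names match this flow, but the source paths and inputs below are illustrative assumptions:

```yaml
# Illustrative fragment of flow.dag.yaml; source path and inputs are assumptions.
- name: order_search
  type: llm
  source:
    type: code
    path: order_search.jinja2
  inputs:
    query: ${inputs.query}
  activate:
    when: ${class_check.output}
    is: order_search
```

The `product_recommendation` and `product_info` nodes would carry the same `activate` block with a different `is:` value.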

## Prerequisites
Install the prompt flow SDK and other dependencies:
```bash
pip install -r requirements.txt
```
## Setup connection
Prepare your Azure OpenAI resource by following these [instructions](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Note that this example uses the [chat API](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), so please use a `gpt-35-turbo` or `gpt-4` model deployment.
Create the connection if you haven't done so already. Ensure you have put your Azure OpenAI endpoint key in the [azure_openai.yml](../../../connections/azure_openai.yml) file.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base>
```
Note that in [flow.dag.yaml](flow.dag.yaml) we are using a connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
## Run flow
- Test flow
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs query="When will my order be shipped?"
```
- Create run with multiple lines of data
```bash
# create a random run name
run_name="conditional_flow_for_switch_"$(openssl rand -hex 12)
# create run
pf run create --flow . --data ./data.jsonl --column-mapping query='${data.query}' --stream --name $run_name
```
- List and show run metadata
```bash
# list created run
pf run list
# show specific run detail
pf run show --name $run_name
# show output
pf run show-details --name $run_name
# visualize run in browser
pf run visualize --name $run_name
```
--- File: promptflow/examples/flows/standard/conditional-flow-for-switch/README.md ---
import os
from promptflow import tool
from promptflow.connections import CustomConnection
from intent import extract_intent
@tool
def extract_intent_tool(chat_prompt, connection: CustomConnection) -> str:
# set environment variables
for key, value in dict(connection).items():
os.environ[key] = value
# call the entry function
return extract_intent(
chat_prompt=chat_prompt,
)
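The loop above promotes every field of the connection into process environment variables so that `extract_intent` can read them. A minimal standalone illustration of that pattern (the connection keys and values here are made up, and no promptflow dependency is needed):

```python
import os

# Hypothetical connection contents; a real CustomConnection would supply these.
connection = {"api_key": "fake-key", "api_base": "https://example.invalid"}

for key, value in connection.items():
    # Downstream code can now read e.g. os.environ["api_key"].
    os.environ[key] = value

print(os.environ["api_base"])
```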
--- File: promptflow/examples/flows/standard/customer-intent-extraction/extract_intent_tool.py ---
import logging
import re
from typing import List
class Settings:
divide_file = {
"py": r"(?<!.)(class|def)",
}
divide_func = {
"py": r"((\n {,6})|^)(class|def)\s+(\S+(?=\())\s*(\([^)]*\))?\s*(->[^:]*:|:) *"
}
class Divider:
language = 'py'
@classmethod
def divide_file(cls, text) -> List[str]:
matches = list(re.finditer(Settings.divide_file[Divider.language], text))
splitted_content = []
min_pos = matches[0].start() if len(matches) > 0 else len(text)
for i in range(len(matches)):
start = matches[i].start()
end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
splitted_content.append(text[start:end])
if min_pos != 0:
splitted_content.insert(0, text[0:min_pos])
return splitted_content
@classmethod
def divide_half(cls, text) -> List[str]:
"""
Divide the content into two parts, but ensure that the function body is not split.
"""
_, pos = Divider.get_functions_and_pos(text)
if len(pos) > 1: # Divide the code into two parts and every part start with a function.
i = len(pos) // 2
return [text[0:pos[i][0]], text[pos[i][0]:]]
if len(pos) == 1: # Divide the code into two parts, [function define + body, other body].
body = text[pos[0][1]:]
body_lines = body.split('\n')
body_ten_lines = '\n'.join(body_lines[0:10])
return [text[0:pos[0][1]] + body_ten_lines, body[len(body_ten_lines):]]
return [text]
@classmethod
def get_functions_and_pos(cls, text):
matches = re.finditer(Settings.divide_func[Divider.language], text)
functions = []
pos = []
for match in matches:
matched_text = match.group().replace('\n', '')
func = re.sub(r' +', ' ', matched_text).replace(' :', ':')
func = re.sub(r'[\s,]+\)', ')', func)
func = re.sub(r'\([\s,]+', '(', func)
functions.append(func.strip())
pos.append((match.start(), match.end()))
return functions, pos
@classmethod
def combine(cls, divided: List[str]):
return ''.join(divided)
@classmethod
def merge_doc2code(cls, docstring: str, origin_code: str) -> str:
funcs1, pos1 = Divider.get_functions_and_pos(docstring)
funcs2, pos2 = Divider.get_functions_and_pos(origin_code)
pattern = r'""".*?"""'
code = origin_code if len(funcs2) == 0 else origin_code[0:pos2[0][0]]
pos1.append((len(docstring), len(docstring))) # avoid index out of range
pos2.append((len(origin_code), len(origin_code))) # avoid index out of range
for i2 in range(len(funcs2)): # add docstring for each function in origin_code
part_full_code = origin_code[pos2[i2][0]:pos2[i2 + 1][0]]
try:
i1 = funcs1.index(funcs2[i2])
except ValueError:
logging.warning(f"No docstring found for {funcs2[i2]}")
code += part_full_code
continue
new_doc = re.findall(pattern, docstring[pos1[i1][1]:pos1[i1 + 1][0]], re.DOTALL)
if new_doc:
func_line = origin_code[pos2[i2][0]:pos2[i2][1]].replace('\n', '')
empty_line_num = (len(func_line) - len(func_line.lstrip()) + 4)
func_body = origin_code[pos2[i2][1]:pos2[i2 + 1][0]]
code_doc = list(re.finditer(pattern, func_body, re.DOTALL))
format_new_doc = Divider.format_indentation(new_doc[0], empty_line_num)
is_replace_doc = len(code_doc) > 0 and (re.sub(r'\s+', '', func_body[0:code_doc[0].start()]) == '')
if is_replace_doc:
code += part_full_code.replace(code_doc[0].group(), format_new_doc.strip(), 1)
else:
code += origin_code[pos2[i2][0]:pos2[i2][1]] + '\n' + format_new_doc + '\n' + origin_code[
pos2[i2][1]:
pos2[i2 + 1][0]]
else:
code += part_full_code
return code
@classmethod
def format_indentation(cls, text, empty_line_num):
lines = text.splitlines()
last_line_space_num = len(lines[-1]) - len(lines[-1].lstrip())
need_add_space = max(empty_line_num - last_line_space_num, 0) * ' '
lines[0] = last_line_space_num * ' ' + lines[0].lstrip() # Align the first row to the last row
indented_lines = [(need_add_space + line).rstrip() for line in lines]
indented_string = '\n'.join(indented_lines)
return indented_string
@classmethod
def has_class_or_func(cls, text):
funcs, _ = Divider.get_functions_and_pos(text)
return len(funcs) > 0
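To illustrate what `divide_file` produces, here is a self-contained sketch using the same regex as `Settings.divide_file["py"]` (the helper is reproduced standalone so the snippet runs without the class above):

```python
import re

# Same pattern as Settings.divide_file["py"]: match `class`/`def` at line start.
DIVIDE_FILE_RE = r"(?<!.)(class|def)"

def divide_file(text):
    matches = list(re.finditer(DIVIDE_FILE_RE, text))
    parts = []
    min_pos = matches[0].start() if matches else len(text)
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        parts.append(text[m.start():end])
    if min_pos != 0:
        parts.insert(0, text[:min_pos])  # keep any leading imports/comments
    return parts

source = "import os\n\ndef a():\n    pass\n\ndef b():\n    pass\n"
parts = divide_file(source)
```

The leading `import os` block is kept as its own chunk, and each top-level `def` starts a new chunk.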
--- File: promptflow/examples/flows/standard/gen-docstring/divider.py ---
from promptflow import tool
@tool
def prepare_example():
return [
{
"question": "What is 37593 * 67?",
"code": "{\n \"code\": \"print(37593 * 67)\"\n}",
"answer": "2512641",
},
{
"question": "What is the value of x in the equation 2x + 3 = 11?",
"code": "{\n \"code\": \"print((11-3)/2)\"\n}",
"answer": "4",
},
{
"question": "How many of the integers between 0 and 99 inclusive are divisible by 8?",
"code": "{\n \"code\": \"count = 0\\nfor i in range(100):\\n \
if i % 8 == 0:\\n count += 1\\nprint(count)\"\n}",
"answer": "10",
},
{
"question": "Janet's ducks lay 16 eggs per day. \
She eats three for breakfast every morning and bakes muffins for her friends every day with four.\
She sells the remainder at the farmers' market daily for $2 per fresh duck egg. \
How much in dollars does she make every day at the farmers' market?",
"code": "{\n \"code\": \"print((16-3-4)*2)\"\n}",
"answer": "18",
},
{
"question": "What is the sum of the powers of 3 (3^i) that are smaller than 100?",
"code": "{\n \"code\": \"sum = 0\\ni = 0\n\
while 3**i < 100:\\n sum += 3**i\\n i += 1\\nprint(sum)\"\n}",
"answer": "40",
},
{
"question": "Carla is downloading a 200 GB file. She can download 2 GB/minute, \
but 40% of the way through the download, the download fails.\
Then Carla has to restart the download from the beginning. \
How long did it take her to download the file in minutes?",
"code": "{\n \"code\": \"print(200/2*1.4)\"\n}",
"answer": "140",
},
{
"question": "What is the sum of the 10 first positive integers?",
"code": "{\n \"code\": \"print(sum(range(1,11)))\"\n}",
"answer": "55",
}
]
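Each `code` field above is itself a JSON document whose `code` key holds a runnable snippet. A small sketch of how a consumer might parse and execute one of them (using the first example; capturing stdout is one possible approach, not necessarily what the flow itself does):

```python
import contextlib
import io
import json

# The "code" field from the first example above: a JSON string wrapping a snippet.
example_code = "{\n  \"code\": \"print(37593 * 67)\"\n}"

payload = json.loads(example_code)      # the string parses as JSON
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(payload["code"])               # run the generated snippet
result = buffer.getvalue().strip()
```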
--- File: promptflow/examples/flows/standard/maths-to-code/math_example.py ---
import json
from promptflow import tool
@tool
def convert_to_dict(input_str: str):
try:
return json.loads(input_str)
except Exception as e:
print("The input is not valid, error: {}".format(e))
return {"category": "None", "evidence": "None"}
--- File: promptflow/examples/flows/standard/web-classification/convert_to_dict.py ---
my_tool_package.tools.tool_with_dynamic_list_input.my_tool:
function: my_tool
inputs:
input_prefix:
type:
- string
input_text:
type:
- list
dynamic_list:
func_path: my_tool_package.tools.tool_with_dynamic_list_input.my_list_func
func_kwargs:
- name: prefix # argument name to be passed to the function
type:
- string
# if optional is not specified, default to false.
      # this is for UX pre-validation. If optional is false but no input is provided, the UX can throw an error in advance.
optional: true
reference: ${inputs.input_prefix} # dynamic reference to another input parameter
- name: size # another argument name to be passed to the function
type:
- int
optional: true
default: 10
    # enum and dynamic list may need the settings below.
# allow user to enter input value manually, default false.
allow_manual_entry: true
# allow user to select multiple values, default false.
is_multi_select: true
endpoint_name:
type:
- string
dynamic_list:
func_path: my_tool_package.tools.tool_with_dynamic_list_input.list_endpoint_names
func_kwargs:
- name: prefix
type:
- string
optional: true
reference: ${inputs.input_prefix}
allow_manual_entry: false
is_multi_select: false
module: my_tool_package.tools.tool_with_dynamic_list_input
name: My Tool with Dynamic List Input
description: This is my tool with dynamic list input
type: python
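For reference, a dynamic list function such as the `my_list_func` referenced by `func_path` conventionally returns a list of dicts, each with at least a `value` key (and optionally `display_value`). A hedged sketch of what such a function might look like; the body is an assumption for illustration, only the return shape matters:

```python
from typing import Dict, List, Union

def my_list_func(prefix: str = "", size: int = 10) -> List[Dict[str, Union[str, int]]]:
    """Hypothetical implementation: generate `size` options tagged with `prefix`."""
    result = []
    for i in range(size):
        result.append({
            "value": f"{prefix}option_{i}",      # actual value passed to the tool
            "display_value": f"Option {i}",      # label shown in the UX (optional)
        })
    return result

options = my_list_func(prefix="demo_", size=3)
```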
--- File: promptflow/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_dynamic_list_input.yaml ---
---
resources: examples/tutorials/flow-deploy/create-service-with-flow
---
# Create service with flow
This example shows how to create a simple service with a flow.
You can create your own service by utilizing `flow-as-function`.
This folder contains an example of how to build a service with a flow.
See [here](./simple_score.py) for a minimal service example.
The output of score.py is a JSON-serialized dictionary; you can use a JSON parser to parse the output.
## 1. Start the service and put it in the background
```bash
nohup python simple_score.py &
# Note: added this to run in our CI pipeline, not needed for user.
sleep 10
```
## 2. Test the service with a request
Execute the following command to send a request that runs the flow.
```bash
curl -X POST http://127.0.0.1:5000/score --header "Content-Type: application/json" --data '{"flow_input": "some_flow_input", "node_input": "some_node_input"}'
```
Sample output of the request:
```json
{
"output": {
"value": "some_flow_input"
}
}
```
Reference [here](./simple_score.py) for more.
--- File: promptflow/examples/tutorials/flow-deploy/create-service-with-flow/README.md ---
<jupyter_start><jupyter_text>Use Flow as Component in Pipeline**Requirements** - In order to benefit from this tutorial, you will need:- A basic understanding of Machine Learning- An Azure account with an active subscription - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)- An Azure ML workspace with a compute cluster - [Configure workspace](../../configuration.ipynb)- A Python environment- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section**Learning Objectives** - By the end of this tutorial, you should be able to:- Connect to your AML workspace from the Python SDK- Create a `Pipeline` with a component loaded from `flow.dag.yaml`**Motivations** - This notebook explains how to run a pipeline using a flow loaded as a component. 1. Connect to Azure Machine Learning WorkspaceThe [workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run. 1.1 Import the required libraries<jupyter_code># import required libraries
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
from azure.ai.ml import MLClient, load_component, Input
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.dsl import pipeline<jupyter_output><empty_output><jupyter_text>1.2 Configure credentialWe are using `DefaultAzureCredential` to get access to workspace. `DefaultAzureCredential` should be capable of handling most Azure SDK authentication scenarios. Reference for more available credentials if it does not work for you: [configure credential example](../../configuration.ipynb), [azure-identity reference doc](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity?view=azure-python).<jupyter_code>try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
credential = InteractiveBrowserCredential()<jupyter_output><empty_output><jupyter_text>1.3 Get a handle to the workspaceWe use a config file to connect to the workspace. The Azure ML workspace should be configured with a compute cluster. [Check this notebook to configure a workspace](../../configuration.ipynb)<jupyter_code># Get a handle to workspace
ml_client = MLClient.from_config(credential=credential)
# Retrieve an already attached Azure Machine Learning Compute.
cluster_name = "cpu-cluster"
print(ml_client.compute.get(cluster_name))<jupyter_output><empty_output><jupyter_text>2. Load flow as componentWe assume a flow has already been authored with the Promptflow SDK/CLI/portal. We can then load its flow DAG YAML as a component, just like regular component specs.<jupyter_code>flow_component = load_component("../../flows/standard/web-classification/flow.dag.yaml")<jupyter_output><empty_output><jupyter_text>3. Pipeline job 3.1 Build pipeline<jupyter_code>data_input = Input(
path="../../flows/standard/web-classification/data.jsonl", type=AssetTypes.URI_FILE
)
@pipeline()
def pipeline_func_with_flow(data):
flow_node = flow_component(
data=data,
url="${data.url}",
connections={
"summarize_text_content": {
"connection": "azure_open_ai_connection",
"deployment_name": "gpt-35-turbo",
},
"classify_with_llm": {
"connection": "azure_open_ai_connection",
"deployment_name": "gpt-35-turbo",
},
},
)
flow_node.compute = "cpu-cluster"
# create pipeline instance
pipeline_job = pipeline_func_with_flow(data=data_input)<jupyter_output><empty_output><jupyter_text>3.2 Submit pipeline job<jupyter_code># submit job to workspace
pipeline_job = ml_client.jobs.create_or_update(
pipeline_job, experiment_name="pipeline_samples"
)
pipeline_job
# Wait until the job completes
ml_client.jobs.stream(pipeline_job.name)<jupyter_output><empty_output>
--- File: promptflow/examples/tutorials/flow-in-pipeline/pipeline.ipynb ---
name: release-env
channels:
- defaults
- conda-forge
dependencies:
- python=3.8
- pip
- pip:
- setuptools
- twine==4.0.0
- azure-storage-blob==12.16.0
--- File: promptflow/scripts/distributing/configs/promptflow-tools-release-env.yaml ---
<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
<?define ProductVersion="$(env.CLI_VERSION)" ?>
<?define ProductName = "promptflow" ?>
<?define ProductDescription = "Command-line tools for prompt flow." ?>
<?define ProductAuthor = "Microsoft Corporation" ?>
<?define ProductResources = ".\resources\" ?>
<?define UpgradeCode32 = "8b748161-e07a-48f2-8cdf-401480df4694" ?>
<?if $(var.Platform) = "x64" ?>
<?define PromptflowCliRegistryGuid = "0efd984f-9eec-425b-b230-a3994b69649a" ?>
<?define PromptflowServiceGuid = "d4e99207-77be-4bdf-a430-b08632c5aa2b" ?>
<?define PromptflowSystemPathGuid = "4c321045-d4e0-4446-bda4-8c19eaa42af1" ?>
<?define ProgramFilesFolder = "ProgramFiles64Folder" ?>
<?define RemovePromptflowFolderGuid = "ee843aa5-2b72-4958-be84-53dbac17efc7" ?>
<?define UpgradeCode = "772aa21f-f8d4-4771-b910-1dbce3f1920c" ?>
<?define Architecture = "64-bit" ?>
<?elseif $(var.Platform) = "x86" ?>
<?define PromptflowCliRegistryGuid = "7c2c792d-c395-44a1-8222-8e4ea006abb9" ?>
<?define PromptflowServiceGuid = "f706b208-a15d-4ae7-9185-cfcc43656570" ?>
<?define PromptflowSystemPathGuid = "9661fe6a-ff48-4e7c-a60d-fc34c2d06ef3" ?>
<?define ProgramFilesFolder = "ProgramFilesFolder" ?>
<?define RemovePromptflowFolderGuid = "588ca5e1-38c6-4659-8b38-762df7ed5b28" ?>
<?define UpgradeCode = $(var.UpgradeCode32) ?>
<?define Architecture = "32-bit" ?>
<?else ?>
<?error Unsupported platform "$(var.Platform)" ?>
<?endif ?>
<Product Id="*" Name="$(var.ProductName) ($(var.Architecture))" Language="1033" Version="$(var.ProductVersion)" Manufacturer="$(var.ProductAuthor)" UpgradeCode="$(var.UpgradeCode)">
<Package InstallerVersion="200" Compressed="yes" InstallScope="perUser" />
<Upgrade Id="$(var.UpgradeCode)">
<UpgradeVersion Property="WIX_UPGRADE_DETECTED" Maximum="$(var.ProductVersion)" IncludeMaximum="no" MigrateFeatures="yes" />
<UpgradeVersion Property="WIX_DOWNGRADE_DETECTED" Minimum="$(var.ProductVersion)" IncludeMinimum="no" OnlyDetect="yes" />
</Upgrade>
<InstallExecuteSequence>
<RemoveExistingProducts After="InstallExecute" />
</InstallExecuteSequence>
<!-- New product architectures should upgrade the original x86 product - even of the same version. -->
<?if $(var.UpgradeCode) != $(var.UpgradeCode32) ?>
<Upgrade Id="$(var.UpgradeCode32)">
<UpgradeVersion Property="WIX_X86_UPGRADE_DETECTED" Maximum="$(var.ProductVersion)" IncludeMaximum="yes" MigrateFeatures="yes" />
<UpgradeVersion Property="WIX_X86_DOWNGRADE_DETECTED" Minimum="$(var.ProductVersion)" IncludeMinimum="no" OnlyDetect="yes" />
</Upgrade>
<Condition Message="A newer version of $(var.ProductName) is already installed.">NOT (WIX_DOWNGRADE_DETECTED OR WIX_X86_DOWNGRADE_DETECTED)</Condition>
<?else ?>
<Condition Message="A newer version of $(var.ProductName) is already installed.">NOT WIX_DOWNGRADE_DETECTED</Condition>
<?endif ?>
<Media Id="1" Cabinet="promptflow.cab" EmbedCab="yes" CompressionLevel="high" />
<Icon Id="PromptflowIcon" SourceFile="$(var.ProductResources)logo32.ico" />
<Property Id="ARPPRODUCTICON" Value="PromptflowIcon" />
<Property Id="ARPHELPLINK" Value="https://microsoft.github.io/promptflow/how-to-guides/quick-start.html" />
<Property Id="ARPURLINFOABOUT" Value="https://microsoft.github.io/promptflow/how-to-guides/quick-start.html" />
<Property Id="ARPURLUPDATEINFO" Value="https://microsoft.github.io/promptflow/how-to-guides/quick-start.html" />
<Property Id="MSIFASTINSTALL" Value="7" />
<Property Id="ApplicationFolderName" Value="promptflow" />
<Property Id="WixAppFolder" Value="WixPerUserFolder" />
<Feature Id="ProductFeature" Title="promptflow" Level="1" AllowAdvertise="no">
<ComponentGroupRef Id="ProductComponents" />
</Feature>
<!--Custom action to propagate path env variable change-->
<CustomActionRef Id="WixBroadcastEnvironmentChange" />
<!-- User Interface -->
<WixVariable Id="WixUILicenseRtf" Value="$(var.ProductResources)CLI_LICENSE.rtf"/>
<UIRef Id="WixUI_ErrorProgressText"/>
<!-- Show message to restart any terminals only if the PATH is changed -->
<CustomAction Id="Set_WIXUI_EXITDIALOGOPTIONALTEXT" Property="WIXUI_EXITDIALOGOPTIONALTEXT" Value="Please close and reopen any active terminal window to use prompt flow." />
<InstallUISequence>
<Custom Action="Set_WIXUI_EXITDIALOGOPTIONALTEXT" After="CostFinalize">NOT Installed AND NOT WIX_UPGRADE_DETECTED</Custom>
</InstallUISequence>
<CustomAction Id="StartPromptFlowService"
Directory="APPLICATIONFOLDER"
Execute="deferred"
ExeCommand="wscript.exe promptflow_service.vbs"
Return="asyncNoWait" />
<InstallExecuteSequence>
<Custom Action="StartPromptFlowService" Before="InstallFinalize">NOT Installed OR WIX_UPGRADE_DETECTED</Custom>
</InstallExecuteSequence>
</Product>
<Fragment>
<Directory Id="TARGETDIR" Name="SourceDir">
<Directory Id="$(var.ProgramFilesFolder)">
<Directory Id="APPLICATIONFOLDER" Name="promptflow" />
</Directory>
<Directory Id="StartupFolder" />
</Directory>
<UIRef Id="WixUI_Advanced" />
</Fragment>
<Fragment>
<ComponentGroup Id="PromptflowCliSettingsGroup">
<Component Id="RemovePromptflowFolder" Directory="APPLICATIONFOLDER" Guid="$(var.RemovePromptflowFolderGuid)">
<RemoveFolder Id="APPLICATIONFOLDER" On="uninstall" />
</Component>
<Component Id="PromptflowSystemPath" Directory="APPLICATIONFOLDER" Guid="$(var.PromptflowSystemPathGuid)">
<Environment Id="PromptflowAddedToPATH"
Name="PATH"
Value="[APPLICATIONFOLDER]"
Permanent="no"
Part="first"
Action="set"
System="no" />
<CreateFolder />
</Component>
<Component Id="promptflow_service.vbs" Directory="APPLICATIONFOLDER" Guid="$(var.PromptflowServiceGuid)">
<File Id="promptflow_service.vbs" Source="scripts\promptflow_service.vbs" KeyPath="yes" Checksum="yes"/>
</Component>
<Component Id="ApplicationShortcut" Directory="StartupFolder" Guid="$(var.PromptflowCliRegistryGuid)">
<Shortcut Id="ApplicationStartMenuShortcut"
Name="Prompt flow service"
Description="Prompt Flow Service"
Target="[#promptflow_service.vbs]"
WorkingDirectory="APPLICATIONFOLDER"
Advertise="no">
<Icon Id="PromptflowServiceIcon" SourceFile="$(var.ProductResources)logo32.ico" />
</Shortcut>
<RemoveFile Id="CleanUpShortCut" Directory="StartupFolder" Name="Prompt flow service" On="uninstall"/>
<RegistryKey Root="HKCU" Key="Software\Microsoft\$(var.ProductName)" Action="createAndRemoveOnUninstall">
<RegistryValue Name="installed" Type="integer" Value="1" />
<RegistryValue Name="version" Type="string" Value="$(var.ProductVersion)" KeyPath="yes"/>
</RegistryKey>
</Component>
</ComponentGroup>
<ComponentGroup Id="ProductComponents">
<ComponentGroupRef Id="PromptflowCliComponentGroup"/>
<ComponentGroupRef Id="PromptflowCliSettingsGroup"/>
</ComponentGroup>
</Fragment>
</Wix>
--- File: promptflow/scripts/installer/windows/product.wxs ---
FROM mcr.microsoft.com/azureml/promptflow/promptflow-runtime:latest
COPY ./requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
--- File: promptflow/scripts/runtime_mgmt/runtime-env/context/Dockerfile ---
{{ package_name }}.tools.{{ tool_name }}.{{ function_name }}:
function: {{ function_name }}
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: {{ package_name }}.tools.{{ tool_name }}
name: Hello World Tool
description: This is hello world tool
type: python
--- File: promptflow/scripts/tool/templates/tool.yaml.j2 ---
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import tool
from promptflow.tools.common import render_jinja_template
@tool
def render_template_jinja2(template: str, **kwargs) -> str:
return render_jinja_template(template, trim_blocks=True, keep_trailing_newline=True, **kwargs)
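To see what the `trim_blocks=True` and `keep_trailing_newline=True` flags do, here is a small jinja2 sketch (this assumes jinja2 is installed and that `render_jinja_template` in `promptflow.tools.common` forwards these flags to the underlying jinja2 template, which the call above suggests):

```python
from jinja2 import Template

# trim_blocks removes the newline right after each {% ... %} block tag,
# so the rendered loop body does not pick up stray blank lines.
tmpl = "{% for item in items %}\n- {{ item }}\n{% endfor %}\n"

rendered = Template(
    tmpl, trim_blocks=True, keep_trailing_newline=True
).render(items=["a", "b"])
```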
--- File: promptflow/src/promptflow-tools/promptflow/tools/template_rendering.py ---
import pytest
from promptflow.contracts.multimedia import Image
from promptflow.tools.common import ChatAPIInvalidFunctions, validate_functions, process_function_call, \
parse_chat, find_referenced_image_set, preprocess_template_string, convert_to_chat_list, ChatInputList
class TestCommon:
@pytest.mark.parametrize(
"functions, error_message",
[
([], "functions cannot be an empty list"),
(["str"],
"is not a dict. Here is a valid function example"),
([{"name": "func1"}], "does not have 'parameters' property"),
([{"name": "func1", "parameters": "param1"}],
"should be described as a JSON Schema object"),
([{"name": "func1", "parameters": {"type": "int", "properties": {}}}],
"parameters 'type' should be 'object'"),
([{"name": "func1", "parameters": {"type": "object", "properties": []}}],
"should be described as a JSON Schema object"),
],
)
def test_chat_api_invalid_functions(self, functions, error_message):
error_codes = "UserError/ToolValidationError/ChatAPIInvalidFunctions"
with pytest.raises(ChatAPIInvalidFunctions) as exc_info:
validate_functions(functions)
assert error_message in exc_info.value.message
assert exc_info.value.error_codes == error_codes.split("/")
@pytest.mark.parametrize(
"function_call, error_message",
[
("123", "function_call parameter '123' must be a dict"),
({"name1": "get_current_weather"},
'function_call parameter {"name1": "get_current_weather"} must '
'contain "name" field'),
],
)
def test_chat_api_invalid_function_call(self, function_call, error_message):
error_codes = "UserError/ToolValidationError/ChatAPIInvalidFunctions"
with pytest.raises(ChatAPIInvalidFunctions) as exc_info:
process_function_call(function_call)
assert error_message in exc_info.value.message
assert exc_info.value.error_codes == error_codes.split("/")
@pytest.mark.parametrize(
"chat_str, images, expected_result",
[
("system:\nthis is my function:\ndef hello", None, [
{'role': 'system', 'content': 'this is my function:\ndef hello'}]),
("#system:\nthis is my ##function:\ndef hello", None, [
{'role': 'system', 'content': 'this is my ##function:\ndef hello'}]),
(" \n system:\nthis is my function:\ndef hello", None, [
{'role': 'system', 'content': 'this is my function:\ndef hello'}]),
(" \n # system:\nthis is my function:\ndef hello", None, [
{'role': 'system', 'content': 'this is my function:\ndef hello'}]),
("user:\nhi\nassistant:\nanswer\nfunction:\nname:\nn\ncontent:\nc", None, [
{'role': 'user', 'content': 'hi'},
{'role': 'assistant', 'content': 'answer'},
{'role': 'function', 'name': 'n', 'content': 'c'}]),
("#user :\nhi\n #assistant:\nanswer\n# function:\n##name:\nn\n##content:\nc", None, [
{'role': 'user', 'content': 'hi'},
{'role': 'assistant', 'content': 'answer'},
{'role': 'function', 'name': 'n', 'content': 'c'}]),
("\nsystem:\nfirst\n\nsystem:\nsecond", None, [
{'role': 'system', 'content': 'first'}, {'role': 'system', 'content': 'second'}]),
("\n#system:\nfirst\n\n#system:\nsecond", None, [
{'role': 'system', 'content': 'first'}, {'role': 'system', 'content': 'second'}]),
("\n#system:\nfirst\n#assistant:\n#user:\nsecond", None, [
{'role': 'system', 'content': 'first'},
{'role': 'assistant', 'content': ''},
{'role': 'user', 'content': 'second'}
]),
# todo: enable this test case after we support image_url officially
# ("#user:\ntell me about the images\nImage(1edf82c2)\nImage(9b65b0f4)", [
# Image("image1".encode()), Image("image2".encode(), "image/png", "https://image_url")], [
# {'role': 'user', 'content': [
# {'type': 'text', 'text': 'tell me about the images'},
# {'type': 'image_url', 'image_url': {'url': 'data:image/*;base64,aW1hZ2Ux'}},
# {'type': 'image_url', 'image_url': 'https://image_url'}]},
# ])
]
)
def test_success_parse_role_prompt(self, chat_str, images, expected_result):
actual_result = parse_chat(chat_str, images)
assert actual_result == expected_result
@pytest.mark.parametrize(
"chat_str, expected_result",
[
("\n#system:\n##name:\nAI \n content:\nfirst\n\n#user:\nsecond", [
{'role': 'system', 'name': 'AI', 'content': 'first'}, {'role': 'user', 'content': 'second'}]),
("\nuser:\nname:\n\nperson\n content:\n", [
{'role': 'user', 'name': 'person', 'content': ''}]),
("\nsystem:\nname:\n\n content:\nfirst", [
{'role': 'system', 'content': 'name:\n\n content:\nfirst'}]),
("\nsystem:\nname:\n\n", [
{'role': 'system', 'content': 'name:'}])
]
)
def test_parse_chat_with_name_in_role_prompt(self, chat_str, expected_result):
actual_result = parse_chat(chat_str)
assert actual_result == expected_result
@pytest.mark.parametrize(
"kwargs, expected_result",
[
({}, set()),
({"image_1": Image("image1".encode()), "image_2": Image("image2".encode()), "t1": "text"}, {
Image("image1".encode()), Image("image2".encode())
}),
({"images": [Image("image1".encode()), Image("image2".encode())]}, {
Image("image1".encode()), Image("image2".encode())
}),
({"image_1": Image("image1".encode()), "image_2": Image("image1".encode())}, {
Image("image1".encode())
}),
({"images": {"image_1": Image("image1".encode()), "image_2": Image("image2".encode())}}, {
Image("image1".encode()), Image("image2".encode())
})
]
)
def test_find_referenced_image_set(self, kwargs, expected_result):
actual_result = find_referenced_image_set(kwargs)
assert actual_result == expected_result
@pytest.mark.parametrize(
"input_string, expected_output",
[
("", "\n{{img1}}\n"),
("", "\n{{img1}}\n\n{{img2}}\n"),
("No image here", "No image here"),
(" Some text ", "\n{{img1}}\n Some text \n{{img2}}\n"),
],
)
def test_preprocess_template_string(self, input_string, expected_output):
actual_result = preprocess_template_string(input_string)
assert actual_result == expected_output
@pytest.mark.parametrize(
"input_data, expected_output",
[
({}, {}),
({"key": "value"}, {"key": "value"}),
(["item1", "item2"], ChatInputList(["item1", "item2"])),
({"key": ["item1", "item2"]}, {"key": ChatInputList(["item1", "item2"])}),
(["item1", ["nested_item1", "nested_item2"]],
ChatInputList(["item1", ChatInputList(["nested_item1", "nested_item2"])])),
],
)
def test_convert_to_chat_list(self, input_data, expected_output):
actual_result = convert_to_chat_list(input_data)
assert actual_result == expected_output
# === file: promptflow/src/promptflow-tools/tests/test_common.py ===
include promptflow/azure/resources/*
include promptflow/_sdk/_serving/static/*
recursive-include promptflow/_cli/data *
recursive-include promptflow/_sdk/data *
# === file: promptflow/src/promptflow/MANIFEST.in ===
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import argparse
import json
from typing import Callable, Dict, List, Optional, Tuple
from promptflow._cli._params import (
add_param_all_results,
add_param_archived_only,
add_param_columns_mapping,
add_param_connections,
add_param_environment_variables,
add_param_include_archived,
add_param_max_results,
add_param_output_format,
add_param_run_name,
add_param_set,
add_param_yes,
add_parser_build,
base_params,
)
from promptflow._cli._utils import (
_output_result_list_with_format,
activate_action,
confirm,
exception_handler,
list_of_dict_to_dict,
list_of_dict_to_nested_dict,
pretty_print_dataframe_as_table,
)
from promptflow._sdk._constants import MAX_SHOW_DETAILS_RESULTS, get_list_view_type
from promptflow._sdk._load_functions import load_run
from promptflow._sdk._pf_client import PFClient
from promptflow._sdk._run_functions import _create_run
from promptflow._sdk._utils import safe_parse_object_list
from promptflow._sdk.entities import Run
from promptflow.exceptions import UserErrorException
def add_run_parser(subparsers):
run_parser = subparsers.add_parser("run", description="A CLI tool to manage runs for prompt flow.", help="pf run")
subparsers = run_parser.add_subparsers()
add_run_create(subparsers)
# add_run_cancel(subparsers)
add_run_update(subparsers)
add_run_stream(subparsers)
add_run_list(subparsers)
add_run_show(subparsers)
add_run_show_details(subparsers)
add_run_show_metrics(subparsers)
add_run_visualize(subparsers)
add_run_archive(subparsers)
add_run_restore(subparsers)
add_run_delete(subparsers)
add_parser_build(subparsers, "run")
run_parser.set_defaults(action="run")
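The `pf run` command tree built above follows argparse's nested-subparser pattern: a top-level `run` parser whose own subparsers hold the sub-actions, with `set_defaults` recording which branch was taken. A minimal standalone sketch (hypothetical `list` sub-action only):

```python
import argparse

parser = argparse.ArgumentParser(prog="pf")
subparsers = parser.add_subparsers()

run_parser = subparsers.add_parser("run", help="pf run")
run_parser.set_defaults(action="run")

# The nested subparsers store the chosen sub-action under dest="sub_action".
run_subparsers = run_parser.add_subparsers(dest="sub_action")
run_subparsers.add_parser("list", help="List runs.")

args = parser.parse_args(["run", "list"])
print(args.action, args.sub_action)  # run list
```

The real code achieves the same `sub_action` wiring through the `activate_action` helper's `action_param_name` argument.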
def add_run_create_common(subparsers, add_param_list, epilog: Optional[str] = None):
# pf run create --file batch_run.yaml [--stream]
add_param_file = lambda parser: parser.add_argument( # noqa: E731
"-f",
"--file",
dest="file",
type=str,
help="Local path to the YAML file containing the run definition. "
"Reference https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json for the schema.",
)
add_param_stream = lambda parser: parser.add_argument( # noqa: E731
"-s",
"--stream",
action="store_true",
default=False,
help="Indicates whether to stream the run's logs to the console.",
)
add_param_flow = lambda parser: parser.add_argument( # noqa: E731
"--flow",
type=str,
help="Local path to the flow directory. "
"If --file is provided, this path should be relative to the file.",
)
add_param_variant = lambda parser: parser.add_argument( # noqa: E731
"--variant", type=str, help="Node & variant name in format of ${node_name.variant_name}."
)
add_param_run = lambda parser: parser.add_argument( # noqa: E731
"--run",
type=str,
help="Name of an existing run referenced by the current run. "
"For example, you can run an evaluation flow against an existing run.",
)
add_param_name = lambda parser: parser.add_argument("-n", "--name", type=str, help="Name of the run.") # noqa: E731
add_params = [
add_param_file,
add_param_stream,
add_param_flow,
add_param_variant,
add_param_run,
add_param_name,
add_param_columns_mapping,
# add env var overwrite
add_param_environment_variables,
add_param_connections,
add_param_set,
] + base_params
add_params.extend(add_param_list)
create_parser = activate_action(
name="create",
description=None,
epilog=epilog or "pf run create --file <local-path-to-yaml> [--stream]",
add_params=add_params,
subparsers=subparsers,
help_message="Create a run.",
action_param_name="sub_action",
)
return create_parser
def add_run_create(subparsers):
epilog = """
Examples:
# Create a run with YAML file:
pf run create -f <yaml-filename>
# Create a run with YAML file and replace another data in the YAML file:
pf run create -f <yaml-filename> --data <path-to-new-data-file-relative-to-yaml-file>
# Create a run from flow directory and reference a run:
pf run create --flow <path-to-flow-directory> --data <path-to-data-file> --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run <run-name> --variant "${summarize_text_content.variant_0}" --stream # noqa: E501
# Create a run from an existing run record folder
pf run create --source <path-to-run-folder>
"""
# data for pf has different help doc than pfazure
def add_param_data(parser):
parser.add_argument(
"--data",
type=str,
help="Local path to the data file. If --file is provided, this path should be relative to the file.",
)
def add_param_source(parser):
parser.add_argument("--source", type=str, help="Local path to the existing run record folder.")
add_run_create_common(subparsers, [add_param_data, add_param_source], epilog=epilog)
def add_run_cancel(subparsers):
epilog = """
Example:
# Cancel a run:
pf run cancel --name <name>
"""
add_params = [add_param_run_name] + base_params
activate_action(
name="cancel",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Cancel a run.",
action_param_name="sub_action",
)
def add_run_update(subparsers):
epilog = """
Example:
# Update a run metadata:
pf run update --name <name> --set display_name="<display-name>" description="<description>" tags.key="<value>"
"""
add_params = [
add_param_run_name,
add_param_set,
] + base_params
activate_action(
name="update",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Update a run metadata, including display name, description and tags.",
action_param_name="sub_action",
)
def add_run_stream(subparsers):
epilog = """
Example:
# Stream run logs:
pf run stream --name <name>
"""
add_params = [add_param_run_name] + base_params
activate_action(
name="stream",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Stream run logs to the console.",
action_param_name="sub_action",
)
def add_run_list(subparsers):
epilog = """
Examples:
# List runs status:
pf run list
# List most recent 10 runs status:
pf run list --max-results 10
# List active and archived runs status:
pf run list --include-archived
# List archived runs status only:
pf run list --archived-only
# List all runs status:
pf run list --all-results
# List all runs status as table:
pf run list --output table
"""
add_params = [
add_param_max_results,
add_param_all_results,
add_param_archived_only,
add_param_include_archived,
add_param_output_format,
] + base_params
activate_action(
name="list",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="List runs.",
action_param_name="sub_action",
)
def add_run_show(subparsers):
epilog = """
Example:
# Show the status of a run:
pf run show --name <name>
"""
add_params = [add_param_run_name] + base_params
activate_action(
name="show",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Show details for a run.",
action_param_name="sub_action",
)
def add_run_show_details(subparsers):
epilog = """
Example:
# View input(s) and output(s) of a run:
pf run show-details --name <name>
"""
add_param_max_results = lambda parser: parser.add_argument( # noqa: E731
"-r",
"--max-results",
dest="max_results",
type=int,
default=MAX_SHOW_DETAILS_RESULTS,
help=f"Number of lines to show. Default is {MAX_SHOW_DETAILS_RESULTS}.",
)
add_params = [add_param_max_results, add_param_run_name, add_param_all_results] + base_params
activate_action(
name="show-details",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Preview a run's input(s) and output(s).",
action_param_name="sub_action",
)
def add_run_show_metrics(subparsers):
epilog = """
Example:
# View metrics of a run:
pf run show-metrics --name <name>
"""
add_params = [add_param_run_name] + base_params
activate_action(
name="show-metrics",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Print run metrics to the console.",
action_param_name="sub_action",
)
def add_run_visualize(subparsers):
epilog = """
Examples:
# Visualize a run:
pf run visualize -n <name>
# Visualize runs:
pf run visualize --names "<name1,name2>"
pf run visualize --names "<name1>, <name2>"
"""
add_param_name = lambda parser: parser.add_argument( # noqa: E731
"-n", "--names", type=str, required=True, help="Name of the runs, comma separated."
)
add_param_html_path = lambda parser: parser.add_argument( # noqa: E731
"--html-path", type=str, default=None, help=argparse.SUPPRESS
)
add_params = [add_param_name, add_param_html_path] + base_params
activate_action(
name="visualize",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Visualize a run.",
action_param_name="sub_action",
)
def add_run_delete(subparsers):
epilog = """
Example:
# Caution: pf run delete is irreversible.
# This operation will delete the run permanently from your local disk.
# Both run entity and output data will be deleted.
# Delete a run:
pf run delete -n "<name>"
"""
add_params = [add_param_run_name, add_param_yes] + base_params
activate_action(
name="delete",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Delete a run irreversibly.",
action_param_name="sub_action",
)
def add_run_archive(subparsers):
epilog = """
Example:
# Archive a run:
pf run archive --name <name>
"""
add_params = [add_param_run_name] + base_params
activate_action(
name="archive",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Archive a run.",
action_param_name="sub_action",
)
def add_run_restore(subparsers):
epilog = """
Example:
# Restore an archived run:
pf run restore --name <name>
"""
add_params = [add_param_run_name] + base_params
activate_action(
name="restore",
description=None,
epilog=epilog,
add_params=add_params,
subparsers=subparsers,
help_message="Restore an archived run.",
action_param_name="sub_action",
)
def dispatch_run_commands(args: argparse.Namespace):
if args.sub_action == "create":
create_run(create_func=_create_run, args=args)
elif args.sub_action == "update":
update_run(name=args.name, params=args.params_override)
elif args.sub_action == "stream":
stream_run(name=args.name)
elif args.sub_action == "list":
list_runs(
max_results=args.max_results,
all_results=args.all_results,
archived_only=args.archived_only,
include_archived=args.include_archived,
output=args.output,
)
elif args.sub_action == "show":
show_run(name=args.name)
elif args.sub_action == "show-details":
show_run_details(name=args.name, max_results=args.max_results, all_results=args.all_results)
elif args.sub_action == "show-metrics":
show_run_metrics(name=args.name)
elif args.sub_action == "visualize":
visualize_run(names=args.names, html_path=args.html_path)
elif args.sub_action == "archive":
archive_run(name=args.name)
elif args.sub_action == "restore":
restore_run(name=args.name)
elif args.sub_action == "export":
export_run(args)
elif args.sub_action == "delete":
delete_run(args.name, args.yes)
else:
raise ValueError(f"Unrecognized command: {args.sub_action}")
def _parse_metadata_args(params: List[Dict[str, str]]) -> Tuple[Optional[str], Optional[str], Optional[Dict[str, str]]]:
display_name, description, tags = None, None, {}
for param in params:
for k, v in param.items():
if k == "display_name":
if display_name is not None:
raise ValueError("Duplicate argument: 'display_name'.")
display_name = v
elif k == "description":
if description is not None:
raise ValueError("Duplicate argument: 'description'.")
description = v
elif k.startswith("tags."):
tag_key = k.replace("tags.", "")
if tag_key in tags:
raise ValueError(f"Duplicate argument: 'tags.{tag_key}'.")
tags[tag_key] = v
if len(tags) == 0:
tags = None
return display_name, description, tags
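The `--set` metadata contract enforced above can be exercised in isolation. Here is a self-contained replica of the logic (no promptflow imports; names are illustrative):

```python
def parse_metadata_args(params):
    # params is a list of dicts, one per `--set` token, e.g. [{"display_name": "x"}].
    display_name, description, tags = None, None, {}
    for param in params:
        for k, v in param.items():
            if k == "display_name":
                if display_name is not None:
                    raise ValueError("Duplicate argument: 'display_name'.")
                display_name = v
            elif k == "description":
                if description is not None:
                    raise ValueError("Duplicate argument: 'description'.")
                description = v
            elif k.startswith("tags."):
                tag_key = k[len("tags."):]
                if tag_key in tags:
                    raise ValueError(f"Duplicate argument: 'tags.{tag_key}'.")
                tags[tag_key] = v
    # An empty tags dict collapses to None, mirroring the source.
    return display_name, description, (tags or None)

name, desc, tags = parse_metadata_args([{"display_name": "demo"}, {"tags.env": "dev"}])
print(name, desc, tags)  # demo None {'env': 'dev'}
```

Duplicate `display_name`, `description`, or tag keys raise immediately, so conflicting `--set` tokens fail fast instead of silently overwriting each other.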
@exception_handler("Update run")
def update_run(name: str, params: List[Dict[str, str]]) -> None:
# params_override can have multiple items when the user specifies with
# `--set key1=value1 key2=value2`, so we need to merge them first.
display_name, description, tags = _parse_metadata_args(params)
pf_client = PFClient()
run = pf_client.runs.update(
name=name,
display_name=display_name,
description=description,
tags=tags,
)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Stream run")
def stream_run(name: str) -> None:
pf_client = PFClient()
run = pf_client.runs.stream(name=name)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("List runs")
def list_runs(
max_results: int,
all_results: bool,
archived_only: bool,
include_archived: bool,
output,
):
pf_client = PFClient()
# aligned behaviour with v2 SDK, all_results will overwrite max_results
if all_results:
max_results = None
runs = pf_client.runs.list(
max_results=max_results,
list_view_type=get_list_view_type(archived_only=archived_only, include_archived=include_archived),
)
# hide additional info and debug info in run list for better user experience
parser = lambda run: run._to_dict(exclude_additional_info=True, exclude_debug_info=True) # noqa: E731
json_list = safe_parse_object_list(
obj_list=runs,
parser=parser,
message_generator=lambda x: f"Error parsing run {x.name!r}, skipped.",
)
_output_result_list_with_format(result_list=json_list, output_format=output)
return runs
@exception_handler("Show run")
def show_run(name: str) -> None:
pf_client = PFClient()
run = pf_client.runs.get(name=name)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Show run details")
def show_run_details(name: str, max_results: int, all_results: bool) -> None:
pf_client = PFClient()
details = pf_client.runs.get_details(name=name, max_results=max_results, all_results=all_results)
pretty_print_dataframe_as_table(details)
@exception_handler("Show run metrics")
def show_run_metrics(name: str) -> None:
pf_client = PFClient()
metrics = pf_client.runs.get_metrics(name=name)
print(json.dumps(metrics, indent=4))
@exception_handler("Visualize run")
def visualize_run(names: str, html_path: Optional[str] = None) -> None:
run_names = [name.strip() for name in names.split(",")]
pf_client = PFClient()
pf_client.runs.visualize(run_names, html_path=html_path)
@exception_handler("Archive run")
def archive_run(name: str) -> None:
pf_client = PFClient()
run = pf_client.runs.archive(name=name)
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Restore run")
def restore_run(name: str) -> None:
pf_client = PFClient()
run = pf_client.runs.restore(name=name)
print(json.dumps(run._to_dict(), indent=4))
def _parse_kv_pair(kv_pairs: str) -> Dict[str, str]:
result = {}
for kv_pair in kv_pairs.split(","):
kv_pair = kv_pair.strip()
if "=" not in kv_pair:
raise ValueError(f"Invalid key-value pair: {kv_pair}")
key, value = kv_pair.split("=", 1)
result[key] = value
return result
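`_parse_kv_pair` splits each pair on the first `=` only, so values may themselves contain `=` (useful for connection strings). A quick standalone check of that behavior, with the logic copied from above:

```python
def parse_kv_pair(kv_pairs: str) -> dict:
    result = {}
    for pair in kv_pairs.split(","):
        pair = pair.strip()
        if "=" not in pair:
            raise ValueError(f"Invalid key-value pair: {pair}")
        key, value = pair.split("=", 1)  # split on the first '=' only
        result[key] = value
    return result

print(parse_kv_pair("a=1, conn=key=secret"))  # {'a': '1', 'conn': 'key=secret'}
```

A token without any `=` is rejected outright rather than treated as a bare flag.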
@exception_handler("Create run")
def create_run(create_func: Callable, args):
file = args.file
flow = args.flow
run_source = getattr(args, "source", None) # source is only available for pf args, not pfazure.
data = args.data
column_mapping = args.column_mapping
variant = args.variant
name = args.name
run = args.run
stream = args.stream
environment_variables = args.environment_variables
connections = args.connections
params_override = args.params_override or []
if environment_variables:
environment_variables = list_of_dict_to_dict(environment_variables)
if connections:
connections = list_of_dict_to_nested_dict(connections)
if column_mapping:
column_mapping = list_of_dict_to_dict(column_mapping)
if file:
for param_key, param in {
"name": name,
"flow": flow,
"variant": variant,
"data": data,
"column_mapping": column_mapping,
"run": run,
"environment_variables": environment_variables,
"connections": connections,
}.items():
if not param:
continue
params_override.append({param_key: param})
run = load_run(source=file, params_override=params_override)
elif flow:
run_data = {
"name": name,
"flow": flow,
"data": data,
"column_mapping": column_mapping,
"run": run,
"variant": variant,
"environment_variables": environment_variables,
"connections": connections,
}
# remove empty fields
run_data = {k: v for k, v in run_data.items() if v is not None}
run = Run._load(data=run_data, params_override=params_override)
elif run_source:
display_name, description, tags = _parse_metadata_args(params_override)
processed_params = {
"display_name": display_name,
"description": description,
"tags": tags,
}
run = Run._load_from_source(source=run_source, params_override=processed_params)
else:
raise UserErrorException("To create a run, one of [file, flow, source] must be specified.")
run = create_func(run=run, stream=stream)
if stream:
print("\n")  # print a blank line to separate streamed logs from the run info
print(json.dumps(run._to_dict(), indent=4))
@exception_handler("Delete run")
def delete_run(name: str, skip_confirm: bool = False) -> None:
if confirm("Are you sure you want to delete the run irreversibly?", skip_confirm):
pf_client = PFClient()
pf_client.runs.delete(name=name)
else:
print("The delete operation was canceled.")
def export_run(args):
raise NotImplementedError()
# === file: promptflow/src/promptflow/promptflow/_cli/_pf/_run.py ===
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: {{ connection }}
type: azure_open_ai
api_key: "<user-input>"
api_base: "<user-input>"
api_type: "azure"
# === file: promptflow/src/promptflow/promptflow/_cli/data/chat_flow/template/azure_openai.yaml.jinja2 ===
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import inspect
from typing import Callable
class MetricLoggerManager:
_instance = None
def __init__(self):
self._metric_loggers = []
@staticmethod
def get_instance() -> "MetricLoggerManager":
if MetricLoggerManager._instance is None:
MetricLoggerManager._instance = MetricLoggerManager()
return MetricLoggerManager._instance
def log_metric(self, key, value, variant_id=None):
for logger in self._metric_loggers:
if len(inspect.signature(logger).parameters) == 2:
logger(key, value) # If the logger only accepts two parameters, we don't pass variant_id
else:
logger(key, value, variant_id)
def add_metric_logger(self, logger_func: Callable):
existing_logger = next((logger for logger in self._metric_loggers if logger is logger_func), None)
if existing_logger:
return
if not callable(logger_func):
return
sign = inspect.signature(logger_func)
# We accept two kinds of metric loggers:
# def log_metric(k, v)
# def log_metric(k, v, variant_id)
if len(sign.parameters) not in [2, 3]:
return
self._metric_loggers.append(logger_func)
def remove_metric_logger(self, logger_func: Callable):
self._metric_loggers.remove(logger_func)
def log_metric(key, value, variant_id=None):
"""Log a metric for current promptflow run.
:param key: Metric name.
:type key: str
:param value: Metric value.
:type value: float
:param variant_id: Variant id for the metric.
:type variant_id: str
"""
MetricLoggerManager.get_instance().log_metric(key, value, variant_id)
def add_metric_logger(logger_func: Callable):
MetricLoggerManager.get_instance().add_metric_logger(logger_func)
def remove_metric_logger(logger_func: Callable):
MetricLoggerManager.get_instance().remove_metric_logger(logger_func)
# === file: promptflow/src/promptflow/promptflow/_core/metric_logger.py ===
# flake8: noqa
"""Imports used by the mlflow promptflow flavor.
DO NOT change the module names in the "__all__" list. If an interface has changed in the source code, wrap it
here and keep the original function/module names the same as before; otherwise mldesigner will be broken by this
change.
"""
from promptflow._sdk._constants import DAG_FILE_NAME
from promptflow._sdk._serving.flow_invoker import FlowInvoker
from promptflow._sdk._submitter import remove_additional_includes
from promptflow._sdk._utils import _merge_local_code_and_additional_includes
from promptflow._sdk.entities._flow import Flow
__all__ = [
"Flow",
"FlowInvoker",
"remove_additional_includes",
"_merge_local_code_and_additional_includes",
"DAG_FILE_NAME",
]
# === file: promptflow/src/promptflow/promptflow/_sdk/_mlflow.py ===
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import logging
from logging.handlers import RotatingFileHandler
from flask import Blueprint, Flask, jsonify
from werkzeug.exceptions import HTTPException
from promptflow._sdk._constants import HOME_PROMPT_FLOW_DIR, PF_SERVICE_LOG_FILE
from promptflow._sdk._service import Api
from promptflow._sdk._service.apis.connection import api as connection_api
from promptflow._sdk._service.apis.run import api as run_api
from promptflow._sdk._service.apis.telemetry import api as telemetry_api
from promptflow._sdk._service.utils.utils import FormattedException
from promptflow._sdk._utils import get_promptflow_sdk_version, read_write_by_user
def heartbeat():
response = {"promptflow": get_promptflow_sdk_version()}
return jsonify(response)
def create_app():
app = Flask(__name__)
app.add_url_rule("/heartbeat", view_func=heartbeat)
with app.app_context():
api_v1 = Blueprint("Prompt Flow Service", __name__, url_prefix="/v1.0")
# Registers resources from namespace for current instance of api
api = Api(api_v1, title="Prompt Flow Service", version="1.0")
api.add_namespace(connection_api)
api.add_namespace(run_api)
api.add_namespace(telemetry_api)
app.register_blueprint(api_v1)
# Disable flask-restx set X-Fields in header. https://flask-restx.readthedocs.io/en/latest/mask.html#usage
app.config["RESTX_MASK_SWAGGER"] = False
# Enable log
app.logger.setLevel(logging.INFO)
log_file = HOME_PROMPT_FLOW_DIR / PF_SERVICE_LOG_FILE
log_file.touch(mode=read_write_by_user(), exist_ok=True)
# Create a rotating file handler with a max size of 1 MB, keeping up to 1 backup file
handler = RotatingFileHandler(filename=log_file, maxBytes=1_000_000, backupCount=1)
formatter = logging.Formatter("[%(asctime)s][%(name)s][%(levelname)s] - %(message)s")
handler.setFormatter(formatter)
app.logger.addHandler(handler)
# Basic error handler
@api.errorhandler(Exception)
def handle_exception(e):
"""When any error occurs on the server, return a formatted error message."""
from dataclasses import asdict
if isinstance(e, HTTPException):
return asdict(FormattedException(e), dict_factory=lambda x: {k: v for (k, v) in x if v}), e.code
app.logger.error(e, exc_info=True, stack_info=True)
formatted_exception = FormattedException(e)
return (
asdict(formatted_exception, dict_factory=lambda x: {k: v for (k, v) in x if v}),
formatted_exception.status_code,
)
return app, api
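The rotating-file logging configured in `create_app` can be reproduced standalone (a temporary path and demo logger name are used here purely as assumptions):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_file = os.path.join(tempfile.mkdtemp(), "pfs.log")

# Same rotation policy as the service: rotate at ~1 MB, keep 1 backup file.
handler = RotatingFileHandler(filename=log_file, maxBytes=1_000_000, backupCount=1)
handler.setFormatter(logging.Formatter("[%(asctime)s][%(name)s][%(levelname)s] - %(message)s"))

logger = logging.getLogger("pfs-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("heartbeat ok")
handler.close()  # flush and release the log file

with open(log_file) as f:
    content = f.read()
print("heartbeat ok" in content)  # True
```

Once the file exceeds `maxBytes`, the handler renames it to `pfs.log.1` and starts a fresh file, so the service log never grows unbounded.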
# === file: promptflow/src/promptflow/promptflow/_sdk/_service/app.py ===
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from enum import Enum
from promptflow._sdk._serving.extension.default_extension import AppExtension
class ExtensionType(Enum):
"""Extension type used to identify which extension to load in serving app."""
Default = "local"
AzureML = "azureml"
class ExtensionFactory:
"""ExtensionFactory is used to create extension based on extension type."""
@staticmethod
def create_extension(logger, **kwargs) -> AppExtension:
"""Create extension based on extension type."""
extension_type_str = kwargs.get("extension_type", ExtensionType.Default.value)
if not extension_type_str:
extension_type_str = ExtensionType.Default.value
extension_type = ExtensionType(extension_type_str.lower())
if extension_type == ExtensionType.AzureML:
from promptflow._sdk._serving.extension.azureml_extension import AzureMLExtension
return AzureMLExtension(logger=logger, **kwargs)
else:
from promptflow._sdk._serving.extension.default_extension import DefaultAppExtension
return DefaultAppExtension(logger=logger, **kwargs)
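The case-insensitive type resolution at the heart of `create_extension` boils down to the following standalone sketch (the extension classes themselves are omitted):

```python
from enum import Enum

class ExtensionType(Enum):
    Default = "local"
    AzureML = "azureml"

def resolve_extension_type(extension_type_str=None):
    # Fall back to the default for None/empty input, then match case-insensitively
    # by value lookup on the enum.
    if not extension_type_str:
        extension_type_str = ExtensionType.Default.value
    return ExtensionType(extension_type_str.lower())

print(resolve_extension_type(), resolve_extension_type("AzureML"))
# ExtensionType.Default ExtensionType.AzureML
```

An unknown string (e.g. `"k8s"`) raises `ValueError` from the enum value lookup, which surfaces misconfiguration early.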
# === file: promptflow/src/promptflow/promptflow/_sdk/_serving/extension/extension_factory.py ===
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# this file is a middle layer between the local SDK and executor.
import contextlib
import logging
from pathlib import Path
from types import GeneratorType
from typing import Any, Mapping, Union
from promptflow._internal import ConnectionManager
from promptflow._sdk._constants import PROMPT_FLOW_DIR_NAME
from promptflow._sdk._utils import dump_flow_result, parse_variant
from promptflow._sdk.entities._flow import FlowContext, ProtectedFlow
from promptflow._sdk.operations._local_storage_operations import LoggerOperations
from promptflow._utils.context_utils import _change_working_dir
from promptflow._utils.exception_utils import ErrorResponse
from promptflow._utils.multimedia_utils import persist_multimedia_data
from promptflow.batch._csharp_executor_proxy import CSharpExecutorProxy
from promptflow.contracts.flow import Flow as ExecutableFlow
from promptflow.contracts.run_info import Status
from promptflow.exceptions import UserErrorException
from promptflow.executor._result import LineResult
from promptflow.storage._run_storage import DefaultRunStorage
from ..._utils.async_utils import async_run_allowing_running_loop
from ..._utils.logger_utils import get_cli_sdk_logger
from ..entities._eager_flow import EagerFlow
from .utils import (
SubmitterHelper,
print_chat_output,
resolve_generator,
show_node_log_and_output,
variant_overwrite_context,
)
logger = get_cli_sdk_logger()
class TestSubmitter:
def __init__(self, flow: Union[ProtectedFlow, EagerFlow], flow_context: FlowContext, client=None):
self.flow = flow
self.entry = flow.entry if isinstance(flow, EagerFlow) else None
self._origin_flow = flow
self._dataplane_flow = None
self.flow_context = flow_context
# TODO: remove this
self._variant = flow_context.variant
from .._pf_client import PFClient
self._client = client if client else PFClient()
@property
def dataplane_flow(self):
if not self._dataplane_flow:
self._dataplane_flow = ExecutableFlow.from_yaml(flow_file=self.flow.path, working_dir=self.flow.code)
return self._dataplane_flow
@contextlib.contextmanager
def init(self):
if isinstance(self.flow, EagerFlow):
flow_content_manager = self._eager_flow_init
else:
flow_content_manager = self._dag_flow_init
with flow_content_manager() as submitter:
yield submitter
@contextlib.contextmanager
def _eager_flow_init(self):
# no variant overwrite for eager flow
# no connection overwrite for eager flow
# TODO(2897147): support additional includes
with _change_working_dir(self.flow.code):
self._tuning_node = None
self._node_variant = None
yield self
self._dataplane_flow = None
@contextlib.contextmanager
def _dag_flow_init(self):
if self.flow_context.variant:
tuning_node, node_variant = parse_variant(self.flow_context.variant)
else:
tuning_node, node_variant = None, None
with variant_overwrite_context(
flow_path=self._origin_flow.code,
tuning_node=tuning_node,
variant=node_variant,
connections=self.flow_context.connections,
overrides=self.flow_context.overrides,
) as temp_flow:
# TODO execute flow test in a separate process.
with _change_working_dir(temp_flow.code):
self.flow = temp_flow
self._tuning_node = tuning_node
self._node_variant = node_variant
yield self
self.flow = self._origin_flow
self._dataplane_flow = None
self._tuning_node = None
self._node_variant = None
def resolve_data(
self, node_name: str = None, inputs: dict = None, chat_history_name: str = None, dataplane_flow=None
):
"""
Resolve input to flow/node test inputs.
Raise a user error when required inputs are missing, and log a warning when unknown inputs appear.
:param node_name: Node name.
:type node_name: str
:param inputs: Inputs of flow/node test.
:type inputs: dict
:param chat_history_name: Chat history name.
:type chat_history_name: str
:return: Dict of flow inputs, Dict of reference node output.
:rtype: dict, dict
"""
from promptflow.contracts.flow import InputValueType
# TODO: only store dataplane flow in context resolver
dataplane_flow = dataplane_flow or self.dataplane_flow
inputs = (inputs or {}).copy()
flow_inputs, dependency_nodes_outputs, merged_inputs = {}, {}, {}
missing_inputs = []
# Using default value of inputs as flow input
if node_name:
node = next(filter(lambda item: item.name == node_name, dataplane_flow.nodes), None)
if not node:
raise UserErrorException(f"Cannot find {node_name} in the flow.")
for name, value in node.inputs.items():
if value.value_type == InputValueType.NODE_REFERENCE:
input_name = (
f"{value.value}.{value.section}.{value.property}"
if value.property
else f"{value.value}.{value.section}"
)
if input_name in inputs:
dependency_input = inputs.pop(input_name)
elif name in inputs:
dependency_input = inputs.pop(name)
else:
missing_inputs.append(name)
continue
if value.property:
dependency_nodes_outputs[value.value] = dependency_nodes_outputs.get(value.value, {})
if isinstance(dependency_input, dict) and value.property in dependency_input:
dependency_nodes_outputs[value.value][value.property] = dependency_input[value.property]
elif dependency_input:
dependency_nodes_outputs[value.value][value.property] = dependency_input
else:
dependency_nodes_outputs[value.value] = dependency_input
merged_inputs[name] = dependency_input
elif value.value_type == InputValueType.FLOW_INPUT:
input_name = f"{value.prefix}{value.value}"
if input_name in inputs:
flow_input = inputs.pop(input_name)
elif name in inputs:
flow_input = inputs.pop(name)
else:
flow_input = dataplane_flow.inputs[value.value].default
if flow_input is None:
missing_inputs.append(name)
continue
flow_inputs[value.value] = flow_input
merged_inputs[name] = flow_input
else:
flow_inputs[name] = inputs.pop(name) if name in inputs else value.value
merged_inputs[name] = flow_inputs[name]
else:
for name, value in dataplane_flow.inputs.items():
if name in inputs:
flow_inputs[name] = inputs.pop(name)
merged_inputs[name] = flow_inputs[name]
else:
if value.default is None:
# When the flow is a chat flow and chat_history has no default value, set an empty list for it
if chat_history_name and name == chat_history_name:
flow_inputs[name] = []
else:
missing_inputs.append(name)
else:
flow_inputs[name] = value.default
merged_inputs[name] = flow_inputs[name]
prefix = node_name or "flow"
if missing_inputs:
raise UserErrorException(f'Required input(s) {missing_inputs} are missing for "{prefix}".')
if inputs:
logger.warning(f"Unknown input(s) of {prefix}: {inputs}")
flow_inputs.update(inputs)
merged_inputs.update(inputs)
logger.info(f"{prefix} input(s): {merged_inputs}")
return flow_inputs, dependency_nodes_outputs
def flow_test(
self,
inputs: Mapping[str, Any],
environment_variables: dict = None,
stream_log: bool = True,
allow_generator_output: bool = False, # TODO: remove this
connections: dict = None, # executable connections dict, to avoid http call each time in chat mode
stream_output: bool = True,
):
from promptflow.executor.flow_executor import execute_flow
if not connections:
connections = SubmitterHelper.resolve_connections(flow=self.flow, client=self._client)
credential_list = ConnectionManager(connections).get_secret_list()
# resolve environment variables
environment_variables = SubmitterHelper.load_and_resolve_environment_variables(
flow=self.flow, environment_variables=environment_variables, client=self._client
)
environment_variables = environment_variables if environment_variables else {}
SubmitterHelper.init_env(environment_variables=environment_variables)
with LoggerOperations(
file_path=self.flow.code / PROMPT_FLOW_DIR_NAME / "flow.log",
stream=stream_log,
credential_list=credential_list,
):
storage = DefaultRunStorage(base_dir=self.flow.code, sub_dir=Path(".promptflow/intermediate"))
line_result = execute_flow(
flow_file=self.flow.path,
working_dir=self.flow.code,
output_dir=Path(".promptflow/output"),
connections=connections,
inputs=inputs,
enable_stream_output=stream_output,
allow_generator_output=allow_generator_output,
entry=self.entry,
storage=storage,
)
if isinstance(line_result.output, dict):
generator_outputs = self._get_generator_outputs(line_result.output)
if generator_outputs:
logger.info(f"Some streaming outputs in the result, {generator_outputs.keys()}")
return line_result
def node_test(
self,
node_name: str,
flow_inputs: Mapping[str, Any],
dependency_nodes_outputs: Mapping[str, Any],
environment_variables: dict = None,
stream: bool = True,
):
from promptflow.executor import FlowExecutor
connections = SubmitterHelper.resolve_connections(flow=self.flow, client=self._client)
credential_list = ConnectionManager(connections).get_secret_list()
# resolve environment variables
environment_variables = SubmitterHelper.load_and_resolve_environment_variables(
flow=self.flow, environment_variables=environment_variables, client=self._client
)
SubmitterHelper.init_env(environment_variables=environment_variables)
with LoggerOperations(
file_path=self.flow.code / PROMPT_FLOW_DIR_NAME / f"{node_name}.node.log",
stream=stream,
credential_list=credential_list,
):
storage = DefaultRunStorage(base_dir=self.flow.code, sub_dir=Path(".promptflow/intermediate"))
result = FlowExecutor.load_and_exec_node(
self.flow.path,
node_name,
flow_inputs=flow_inputs,
dependency_nodes_outputs=dependency_nodes_outputs,
connections=connections,
working_dir=self.flow.code,
storage=storage,
)
return result
def _chat_flow(self, inputs, chat_history_name, environment_variables: dict = None, show_step_output=False):
"""
Interact with Chat Flow. Do the following:
1. Combine chat_history and user input as the input for each round of the chat flow.
2. Each round of chat is executed once flow test.
3. Prefix the output for distinction.
"""
from colorama import Fore, init
@contextlib.contextmanager
def change_logger_level(level):
origin_level = logger.level
logger.setLevel(level)
yield
logger.setLevel(origin_level)
init(autoreset=True)
chat_history = []
generator_record = {}
input_name = next(
filter(lambda key: self.dataplane_flow.inputs[key].is_chat_input, self.dataplane_flow.inputs.keys())
)
output_name = next(
filter(
lambda key: self.dataplane_flow.outputs[key].is_chat_output,
self.dataplane_flow.outputs.keys(),
)
)
# Pass connections to avoid duplicate calculation (especially http call)
connections = SubmitterHelper.resolve_connections(flow=self.flow, client=self._client)
while True:
try:
print(f"{Fore.GREEN}User: ", end="")
input_value = input()
if not input_value.strip():
continue
except (KeyboardInterrupt, EOFError):
print("Terminate the chat.")
break
inputs = inputs or {}
inputs[input_name] = input_value
inputs[chat_history_name] = chat_history
with change_logger_level(level=logging.WARNING):
chat_inputs, _ = self.resolve_data(inputs=inputs)
flow_result = self.flow_test(
inputs=chat_inputs,
environment_variables=environment_variables,
stream_log=False,
allow_generator_output=True,
connections=connections,
stream_output=True,
)
self._raise_error_when_test_failed(flow_result, show_trace=True)
show_node_log_and_output(flow_result.node_run_infos, show_step_output, generator_record)
print(f"{Fore.YELLOW}Bot: ", end="")
print_chat_output(flow_result.output[output_name], generator_record)
flow_result = resolve_generator(flow_result, generator_record)
flow_outputs = {k: v for k, v in flow_result.output.items()}
history = {"inputs": {input_name: input_value}, "outputs": flow_outputs}
chat_history.append(history)
dump_flow_result(flow_folder=self._origin_flow.code, flow_result=flow_result, prefix="chat")
@staticmethod
def _raise_error_when_test_failed(test_result, show_trace=False):
from promptflow.executor._result import LineResult
test_status = test_result.run_info.status if isinstance(test_result, LineResult) else test_result.status
if test_status == Status.Failed:
error_dict = test_result.run_info.error if isinstance(test_result, LineResult) else test_result.error
error_response = ErrorResponse.from_error_dict(error_dict)
user_execution_error = error_response.get_user_execution_error_info()
error_message = error_response.message
stack_trace = user_execution_error.get("traceback", "")
error_type = user_execution_error.get("type", "Exception")
if show_trace:
print(stack_trace)
raise UserErrorException(f"{error_type}: {error_message}")
@staticmethod
def _get_generator_outputs(outputs):
outputs = outputs or {}
        return {key: output for key, output in outputs.items() if isinstance(output, GeneratorType)}
class TestSubmitterViaProxy(TestSubmitter):
def __init__(self, flow: ProtectedFlow, flow_context: FlowContext, client=None):
super().__init__(flow, flow_context, client)
def flow_test(
self,
inputs: Mapping[str, Any],
environment_variables: dict = None,
stream_log: bool = True,
allow_generator_output: bool = False,
connections: dict = None, # executable connections dict, to avoid http call each time in chat mode
stream_output: bool = True,
):
from promptflow._constants import LINE_NUMBER_KEY
if not connections:
connections = SubmitterHelper.resolve_used_connections(
flow=self.flow,
tools_meta=CSharpExecutorProxy.get_tool_metadata(
flow_file=self.flow.flow_dag_path,
working_dir=self.flow.code,
),
client=self._client,
)
credential_list = ConnectionManager(connections).get_secret_list()
# resolve environment variables
environment_variables = SubmitterHelper.load_and_resolve_environment_variables(
flow=self.flow, environment_variables=environment_variables, client=self._client
)
environment_variables = environment_variables if environment_variables else {}
SubmitterHelper.init_env(environment_variables=environment_variables)
log_path = self.flow.code / PROMPT_FLOW_DIR_NAME / "flow.log"
with LoggerOperations(
file_path=log_path,
stream=stream_log,
credential_list=credential_list,
):
try:
storage = DefaultRunStorage(base_dir=self.flow.code, sub_dir=Path(".promptflow/intermediate"))
flow_executor: CSharpExecutorProxy = async_run_allowing_running_loop(
CSharpExecutorProxy.create,
self.flow.path,
self.flow.code,
connections=connections,
storage=storage,
log_path=log_path,
)
line_result: LineResult = async_run_allowing_running_loop(
flow_executor.exec_line_async, inputs, index=0
)
line_result.output = persist_multimedia_data(
line_result.output, base_dir=self.flow.code, sub_dir=Path(".promptflow/output")
)
if line_result.aggregation_inputs:
# Convert inputs of aggregation to list type
flow_inputs = {k: [v] for k, v in inputs.items()}
aggregation_inputs = {k: [v] for k, v in line_result.aggregation_inputs.items()}
aggregation_results = async_run_allowing_running_loop(
flow_executor.exec_aggregation_async, flow_inputs, aggregation_inputs
)
line_result.node_run_infos.update(aggregation_results.node_run_infos)
line_result.run_info.metrics = aggregation_results.metrics
if isinstance(line_result.output, dict):
# Remove line_number from output
line_result.output.pop(LINE_NUMBER_KEY, None)
generator_outputs = self._get_generator_outputs(line_result.output)
if generator_outputs:
logger.info(f"Some streaming outputs in the result, {generator_outputs.keys()}")
return line_result
finally:
async_run_allowing_running_loop(flow_executor.destroy)
def exec_with_inputs(self, inputs):
from promptflow._constants import LINE_NUMBER_KEY
connections = SubmitterHelper.resolve_used_connections(
flow=self.flow,
tools_meta=CSharpExecutorProxy.get_tool_metadata(
flow_file=self.flow.path,
working_dir=self.flow.code,
),
client=self._client,
)
storage = DefaultRunStorage(base_dir=self.flow.code, sub_dir=Path(".promptflow/intermediate"))
flow_executor = CSharpExecutorProxy.create(
flow_file=self.flow.path,
working_dir=self.flow.code,
connections=connections,
storage=storage,
)
try:
# validate inputs
flow_inputs, _ = self.resolve_data(inputs=inputs, dataplane_flow=self.dataplane_flow)
line_result = async_run_allowing_running_loop(flow_executor.exec_line_async, inputs, index=0)
# line_result = flow_executor.exec_line(inputs, index=0)
if isinstance(line_result.output, dict):
# Remove line_number from output
line_result.output.pop(LINE_NUMBER_KEY, None)
return line_result
finally:
flow_executor.destroy()
# --- promptflow/src/promptflow/promptflow/_sdk/_submitter/test_submitter.py ---
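The dotted-name lookup in `resolve_data` above (a node input that references another node's output, e.g. `classify_with_llm.output`, is resolved first by its dotted reference name and then by the plain parameter name) can be sketched as follows. The function and sample names here are illustrative only, not part of promptflow's API:

```python
# Minimal sketch of the two-step lookup used in resolve_data above:
# try the dotted reference name first, then fall back to the parameter name.
def pop_dependency_input(inputs: dict, param_name: str, ref_name: str):
    if ref_name in inputs:
        return inputs.pop(ref_name)
    if param_name in inputs:
        return inputs.pop(param_name)
    raise KeyError(f"Required input(s) ['{param_name}'] are missing.")

inputs = {"classify_with_llm.output": "order_search"}
value = pop_dependency_input(inputs, "classification", "classify_with_llm.output")
print(value)   # -> order_search
print(inputs)  # -> {} (resolved inputs are popped so leftovers can be warned about)
```

Popping resolved entries mirrors how the real method leaves only unknown inputs behind, which it then reports with a warning.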
#! /bin/bash
CONDA_ENV_PATH="$(conda info --base)/envs/{{env.conda_env_name}}"
export PATH="$CONDA_ENV_PATH/bin:$PATH"
{% if connection_yaml_paths %}
{% if show_comment %}
# hack: for some unknown reason, without this ls, the connection creation will fail
{% endif %}
ls
ls /connections
{% endif %}
{% for connection_yaml_path in connection_yaml_paths %}
pf connection create --file /{{ connection_yaml_path }}
{% endfor %}
echo "start promptflow serving with worker_num: 8, worker_threads: 1"
cd /flow
gunicorn -w 8 --threads 1 -b "0.0.0.0:8080" --timeout 300 "promptflow._sdk._serving.app:create_app()"
# --- promptflow/src/promptflow/promptflow/_sdk/data/docker/runit/promptflow-serve/run.jinja2 ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# isort: skip_file
# skip to avoid circular import
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
from ._connection import (
AzureContentSafetyConnection,
AzureOpenAIConnection,
CognitiveSearchConnection,
CustomConnection,
OpenAIConnection,
SerpConnection,
QdrantConnection,
WeaviateConnection,
FormRecognizerConnection,
CustomStrongTypeConnection,
)
from ._run import Run
from ._validation import ValidationResult
from ._flow import FlowContext
__all__ = [
# region: Connection
"AzureContentSafetyConnection",
"AzureOpenAIConnection",
"OpenAIConnection",
"CustomConnection",
"CustomStrongTypeConnection",
"CognitiveSearchConnection",
"SerpConnection",
"QdrantConnection",
"WeaviateConnection",
"FormRecognizerConnection",
# endregion
# region Run
"Run",
"ValidationResult",
# endregion
# region Flow
"FlowContext",
# endregion
]
# --- promptflow/src/promptflow/promptflow/_sdk/entities/__init__.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import re
from typing import List
from promptflow._sdk._constants import AZURE_WORKSPACE_REGEX_FORMAT, MAX_LIST_CLI_RESULTS
from promptflow._sdk._telemetry import ActivityType, WorkspaceTelemetryMixin, monitor_operation
from promptflow._sdk._utils import interactive_credential_disabled, is_from_cli, is_github_codespaces, print_red_error
from promptflow._sdk.entities._connection import _Connection
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow.azure._utils.gerneral import get_arm_token
logger = get_cli_sdk_logger()
class LocalAzureConnectionOperations(WorkspaceTelemetryMixin):
def __init__(self, connection_provider, **kwargs):
self._subscription_id, self._resource_group, self._workspace_name = self._extract_workspace(connection_provider)
self._credential = kwargs.pop("credential", None) or self._get_credential()
super().__init__(
subscription_id=self._subscription_id,
resource_group_name=self._resource_group,
workspace_name=self._workspace_name,
**kwargs,
)
# Lazy init client as ml_client initialization require workspace read permission
self._pfazure_client = None
self._user_agent = kwargs.pop("user_agent", None)
@property
def _client(self):
if self._pfazure_client is None:
from promptflow.azure._pf_client import PFClient as PFAzureClient
self._pfazure_client = PFAzureClient(
# TODO: disable interactive credential when starting as a service
credential=self._credential,
subscription_id=self._subscription_id,
resource_group_name=self._resource_group,
workspace_name=self._workspace_name,
user_agent=self._user_agent,
)
return self._pfazure_client
@classmethod
def _get_credential(cls):
from azure.ai.ml._azure_environments import AzureEnvironments, EndpointURLS, _get_cloud, _get_default_cloud_name
from azure.identity import DefaultAzureCredential, DeviceCodeCredential
if is_from_cli():
try:
# Try getting token for cli without interactive login
cloud_name = _get_default_cloud_name()
if cloud_name != AzureEnvironments.ENV_DEFAULT:
cloud = _get_cloud(cloud=cloud_name)
authority = cloud.get(EndpointURLS.ACTIVE_DIRECTORY_ENDPOINT)
credential = DefaultAzureCredential(authority=authority, exclude_shared_token_cache_credential=True)
else:
credential = DefaultAzureCredential()
get_arm_token(credential=credential)
except Exception:
print_red_error(
"Please run 'az login' or 'az login --use-device-code' to set up account. "
"See https://docs.microsoft.com/cli/azure/authenticate-azure-cli for more details."
)
exit(1)
if interactive_credential_disabled():
return DefaultAzureCredential(exclude_interactive_browser_credential=True)
if is_github_codespaces():
# For code spaces, append device code credential as the fallback option.
credential = DefaultAzureCredential()
credential.credentials = (*credential.credentials, DeviceCodeCredential())
return credential
return DefaultAzureCredential(exclude_interactive_browser_credential=False)
@classmethod
def _extract_workspace(cls, connection_provider):
match = re.match(AZURE_WORKSPACE_REGEX_FORMAT, connection_provider)
if not match or len(match.groups()) != 5:
raise ValueError(
"Malformed connection provider string, expected azureml:/subscriptions/<subscription_id>/"
"resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/"
f"workspaces/<workspace_name>, got {connection_provider}"
)
subscription_id = match.group(1)
resource_group = match.group(3)
workspace_name = match.group(5)
return subscription_id, resource_group, workspace_name
@monitor_operation(activity_name="pf.connections.azure.list", activity_type=ActivityType.PUBLICAPI)
def list(
self,
max_results: int = MAX_LIST_CLI_RESULTS,
all_results: bool = False,
) -> List[_Connection]:
"""List connections.
        :return: List of connection objects.
:rtype: List[~promptflow.sdk.entities._connection._Connection]
"""
if max_results != MAX_LIST_CLI_RESULTS or all_results:
logger.warning(
"max_results and all_results are not supported for workspace connection and will be ignored."
)
return self._client._connections.list()
@monitor_operation(activity_name="pf.connections.azure.get", activity_type=ActivityType.PUBLICAPI)
def get(self, name: str, **kwargs) -> _Connection:
"""Get a connection entity.
:param name: Name of the connection.
:type name: str
:return: connection object retrieved from the database.
:rtype: ~promptflow.sdk.entities._connection._Connection
"""
with_secrets = kwargs.get("with_secrets", False)
if with_secrets:
# Do not use pfazure_client here as it requires workspace read permission
# Get secrets from arm only requires workspace listsecrets permission
from promptflow.azure.operations._arm_connection_operations import ArmConnectionOperations
return ArmConnectionOperations._direct_get(
name, self._subscription_id, self._resource_group, self._workspace_name, self._credential
)
return self._client._connections.get(name)
@monitor_operation(activity_name="pf.connections.azure.delete", activity_type=ActivityType.PUBLICAPI)
def delete(self, name: str) -> None:
"""Delete a connection entity.
:param name: Name of the connection.
:type name: str
"""
raise NotImplementedError(
"Delete workspace connection is not supported in promptflow, "
"please manage it in workspace portal, az ml cli or AzureML SDK."
)
@monitor_operation(activity_name="pf.connections.azure.create_or_update", activity_type=ActivityType.PUBLICAPI)
def create_or_update(self, connection: _Connection, **kwargs):
"""Create or update a connection.
        :param connection: Connection object to create or update.
:type connection: ~promptflow.sdk.entities._connection._Connection
"""
raise NotImplementedError(
"Create or update workspace connection is not supported in promptflow, "
"please manage it in workspace portal, az ml cli or AzureML SDK."
)
# --- promptflow/src/promptflow/promptflow/_sdk/operations/_local_azure_connection_operations.py ---
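`_extract_workspace` above relies on `AZURE_WORKSPACE_REGEX_FORMAT` (defined in `promptflow._sdk._constants`, not shown here) producing five capture groups, of which groups 1, 3, and 5 carry the subscription, resource group, and workspace name. A standalone sketch with an assumed, equivalent pattern:

```python
import re

# Assumed stand-in for AZURE_WORKSPACE_REGEX_FORMAT: five capture groups,
# with groups 1, 3 and 5 holding the values of interest.
WORKSPACE_PATTERN = (
    r"^azureml:/+subscriptions/([^/]+)/resource(g|G)roups/([^/]+)/"
    r"(providers/Microsoft\.MachineLearningServices/)?workspaces/([^/]+)$"
)

def extract_workspace(connection_provider: str):
    match = re.match(WORKSPACE_PATTERN, connection_provider)
    if not match or len(match.groups()) != 5:
        raise ValueError(f"Malformed connection provider string: {connection_provider}")
    return match.group(1), match.group(3), match.group(5)

print(extract_workspace(
    "azureml://subscriptions/sub-1/resourceGroups/rg-1/"
    "providers/Microsoft.MachineLearningServices/workspaces/ws-1"
))  # -> ('sub-1', 'rg-1', 'ws-1')
```

The optional `providers/...` group (group 4) lets the pattern accept both the long ARM resource ID form and a shortened `azureml://` form.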
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import re
class CredentialScrubber:
"""Scrub sensitive information in string."""
PLACE_HOLDER = "**data_scrubbed**"
LENGTH_THRESHOLD = 2
def __init__(self):
self.default_regex_set = set(
[
r"(?<=sig=)[^\s;&]+", # Replace signature.
r"(?<=key=)[^\s;&]+", # Replace key.
]
)
self.default_str_set = set()
self.custom_regex_set = set()
self.custom_str_set = set()
def scrub(self, input: str):
"""Replace sensitive information in input string with PLACE_HOLDER.
For example, for input string: "print accountkey=accountKey", the output will be:
"print accountkey=**data_scrubbed**"
"""
output = input
regex_set = self.default_regex_set.union(self.custom_regex_set)
for regex in regex_set:
output = re.sub(regex, self.PLACE_HOLDER, output, flags=re.IGNORECASE)
str_set = self.default_str_set.union(self.custom_str_set)
for s in str_set:
output = output.replace(s, self.PLACE_HOLDER)
return output
def add_regex(self, pattern: str):
# policy: http://policheck.azurewebsites.net/Pages/TermInfo.aspx?LCID=9&TermID=79458
"""Add regex pattern to checklist."""
self.custom_regex_set.add(pattern)
def add_str(self, s: str):
"""Add string to checklist.
Only scrub string with length > LENGTH_THRESHOLD.
"""
if s is None:
return
if len(s) <= self.LENGTH_THRESHOLD:
return
self.custom_str_set.add(s)
def clear(self):
"""Clear custom regex and string set."""
self.custom_regex_set = set()
self.custom_str_set = set()
# --- promptflow/src/promptflow/promptflow/_utils/credential_scrubber.py ---
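A standalone sketch of the scrubbing behavior defined by `CredentialScrubber` above, using only its two default lookbehind patterns plus an optional custom-string list (the helper name is illustrative):

```python
import re

PLACE_HOLDER = "**data_scrubbed**"
# Default patterns from the class above: everything after "sig=" or "key="
# up to whitespace, ";" or "&" is replaced.
DEFAULT_PATTERNS = [r"(?<=sig=)[^\s;&]+", r"(?<=key=)[^\s;&]+"]

def scrub(text: str, extra_strings=()) -> str:
    for pattern in DEFAULT_PATTERNS:
        text = re.sub(pattern, PLACE_HOLDER, text, flags=re.IGNORECASE)
    for s in extra_strings:
        if s and len(s) > 2:  # mirrors LENGTH_THRESHOLD: skip very short strings
            text = text.replace(s, PLACE_HOLDER)
    return text

print(scrub("connect with accountkey=abc123&timeout=30"))
# -> connect with accountkey=**data_scrubbed**&timeout=30
```

Note that the `key=` lookbehind also fires inside longer names such as `accountkey=`, which is why the docstring example in the class works.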
from io import StringIO
from os import PathLike
from typing import IO, AnyStr, Dict, Optional, Union
from ruamel.yaml import YAML, YAMLError
from promptflow._constants import DEFAULT_ENCODING
from promptflow._utils._errors import YamlParseError
def load_yaml(source: Optional[Union[AnyStr, PathLike, IO]]) -> Dict:
# null check - just return an empty dict.
# Certain CLI commands rely on this behavior to produce a resource
# via CLI, which is then populated through CLArgs.
"""Load a local YAML file or a readable stream object.
.. note::
1. For a local file yaml
.. code-block:: python
yaml_path = "path/to/yaml"
content = load_yaml(yaml_path)
2. For a readable stream object
.. code-block:: python
with open("path/to/yaml", "r", encoding="utf-8") as f:
content = load_yaml(f)
:param source: The relative or absolute path to the local file, or a readable stream object.
:type source: str
:return: A dictionary representation of the local file's contents.
:rtype: Dict
"""
if source is None:
return {}
# pylint: disable=redefined-builtin
input = None
must_open_file = False
try: # check source type by duck-typing it as an IOBase
readable = source.readable()
if not readable: # source is misformatted stream or file
            msg = "File Permissions Error: The already-open input file is not readable."
raise Exception(msg)
# source is an already-open stream or file, we can read() from it directly.
input = source
except AttributeError:
# source has no writable() function, assume it's a string or file path.
must_open_file = True
if must_open_file: # If supplied a file path, open it.
try:
input = open(source, "r", encoding=DEFAULT_ENCODING)
except OSError: # FileNotFoundError introduced in Python 3
msg = "No such file or directory: {}"
raise Exception(msg.format(source))
# input should now be a readable file or stream. Parse it.
cfg = {}
try:
yaml = YAML()
yaml.preserve_quotes = True
cfg = yaml.load(input)
except YAMLError as e:
msg = f"Error while parsing yaml file: {source} \n\n {str(e)}"
raise Exception(msg)
finally:
if must_open_file:
input.close()
return cfg
def load_yaml_string(yaml_string: str):
"""Load a yaml string.
.. code-block:: python
yaml_string = "some yaml string"
object = load_yaml_string(yaml_string)
:param yaml_string: A yaml string.
:type yaml_string: str
"""
yaml = YAML()
yaml.preserve_quotes = True
return yaml.load(yaml_string)
def dump_yaml(*args, **kwargs):
"""Dump data to a yaml string or stream.
.. note::
1. Dump to a yaml string
.. code-block:: python
data = {"key": "value"}
yaml_string = dump_yaml(data)
2. Dump to a stream
.. code-block:: python
data = {"key": "value"}
with open("path/to/yaml", "w", encoding="utf-8") as f:
dump_yaml(data, f)
"""
yaml = YAML()
yaml.default_flow_style = False
# when using with no stream parameter but just the data, dump to yaml string and return
if len(args) == 1:
string_stream = StringIO()
yaml.dump(args[0], string_stream, **kwargs)
output_string = string_stream.getvalue()
string_stream.close()
return output_string
# when using with stream parameter, dump to stream. e.g.:
# open('test.yaml', 'w', encoding='utf-8') as f:
# dump_yaml(data, f)
elif len(args) == 2:
return yaml.dump(*args, **kwargs)
else:
raise YamlParseError("Only 1 or 2 positional arguments are allowed for dump yaml util function.")
# --- promptflow/src/promptflow/promptflow/_utils/yaml_utils.py ---
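The interesting part of `load_yaml` above is its duck-typing dispatch: it probes `source.readable()` and treats an `AttributeError` as "this is a path, open it". The same idea can be shown with stdlib `json` standing in for `ruamel.yaml` (the helper name is illustrative):

```python
import io
import json

def load_json(source):
    """Same dispatch idea as load_yaml above, with stdlib json:
    accept an already-open readable stream, or a file path to open."""
    if source is None:
        return {}
    try:
        if not source.readable():          # duck-type as a stream
            raise OSError("stream is not readable")
        return json.load(source)
    except AttributeError:                 # no .readable(): treat as a path
        with open(source, "r", encoding="utf-8") as f:
            return json.load(f)

print(load_json(io.StringIO('{"key": "value"}')))  # -> {'key': 'value'}
print(load_json(None))                             # -> {}
```

The `None`-returns-empty-dict behavior is deliberate in the original: some CLI commands create an empty resource and populate it from arguments afterwards.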
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import TYPE_CHECKING
from azure.core.configuration import Configuration
from azure.core.pipeline import policies
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Optional
VERSION = "unknown"
class AzureMachineLearningDesignerServiceClientConfiguration(Configuration):
"""Configuration for AzureMachineLearningDesignerServiceClient.
Note that all parameters used to create this instance are saved as instance
attributes.
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
"""
def __init__(
self,
api_version="1.0.0", # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> None
super(AzureMachineLearningDesignerServiceClientConfiguration, self).__init__(**kwargs)
self.api_version = api_version
kwargs.setdefault('sdk_moniker', 'azuremachinelearningdesignerserviceclient/{}'.format(VERSION))
self._configure(**kwargs)
def _configure(
self,
**kwargs # type: Any
):
# type: (...) -> None
self.user_agent_policy = kwargs.get('user_agent_policy') or policies.UserAgentPolicy(**kwargs)
self.headers_policy = kwargs.get('headers_policy') or policies.HeadersPolicy(**kwargs)
self.proxy_policy = kwargs.get('proxy_policy') or policies.ProxyPolicy(**kwargs)
self.logging_policy = kwargs.get('logging_policy') or policies.NetworkTraceLoggingPolicy(**kwargs)
self.http_logging_policy = kwargs.get('http_logging_policy') or policies.HttpLoggingPolicy(**kwargs)
self.retry_policy = kwargs.get('retry_policy') or policies.RetryPolicy(**kwargs)
self.custom_hook_policy = kwargs.get('custom_hook_policy') or policies.CustomHookPolicy(**kwargs)
self.redirect_policy = kwargs.get('redirect_policy') or policies.RedirectPolicy(**kwargs)
self.authentication_policy = kwargs.get('authentication_policy')
# --- promptflow/src/promptflow/promptflow/azure/_restclient/flow/_configuration.py ---
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from msrest import Serializer
from .. import models as _models
from .._vendor import _convert_request, _format_url_section
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
# fmt: off
def build_create_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if experiment_id is not None:
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_list_flows_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id', None) # type: Optional[str]
owned_only = kwargs.pop('owned_only', None) # type: Optional[bool]
flow_type = kwargs.pop('flow_type', None) # type: Optional[Union[str, "_models.FlowType"]]
list_view_type = kwargs.pop('list_view_type', None) # type: Optional[Union[str, "_models.ListViewType"]]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if experiment_id is not None:
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
if owned_only is not None:
query_parameters['ownedOnly'] = _SERIALIZER.query("owned_only", owned_only, 'bool')
if flow_type is not None:
query_parameters['flowType'] = _SERIALIZER.query("flow_type", flow_type, 'str')
if list_view_type is not None:
query_parameters['listViewType'] = _SERIALIZER.query("list_view_type", list_view_type, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_clone_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/clone')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_create_flow_from_sample_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/fromsample')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if experiment_id is not None:
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_update_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="PUT",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_patch_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="PATCH",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


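# NOTE (illustrative, not part of the generated client): each builder above
# expands its URL template via `_format_url_section`. A minimal self-contained
# sketch of that expansion, using only the standard library, looks like this:

```python
from urllib.parse import quote


def format_url_section(template, **path_format_arguments):
    # Percent-encode each path value, then substitute it into the {placeholder}s.
    quoted = {key: quote(str(value), safe="") for key, value in path_format_arguments.items()}
    return template.format(**quoted)


url = format_url_section(
    "/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}"
    "/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}"
    "/Flows/{flowId}",
    subscriptionId="00000000-0000-0000-0000-000000000000",
    resourceGroupName="my-rg",
    workspaceName="my-ws",
    flowId="my flow",
)
# Spaces and reserved characters in path values are percent-encoded ("my%20flow").
```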
def build_submit_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id') # type: str
endpoint_name = kwargs.pop('endpoint_name', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/submit')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
if endpoint_name is not None:
query_parameters['endpointName'] = _SERIALIZER.query("endpoint_name", endpoint_name, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_run_status_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/{flowRunId}/status')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if experiment_id is not None:
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_run_info_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_child_runs_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
index = kwargs.pop('index', None) # type: Optional[int]
start_index = kwargs.pop('start_index', None) # type: Optional[int]
end_index = kwargs.pop('end_index', None) # type: Optional[int]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/childRuns')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if index is not None:
query_parameters['index'] = _SERIALIZER.query("index", index, 'int')
if start_index is not None:
query_parameters['startIndex'] = _SERIALIZER.query("start_index", start_index, 'int')
if end_index is not None:
query_parameters['endIndex'] = _SERIALIZER.query("end_index", end_index, 'int')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_node_runs_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
node_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
index = kwargs.pop('index', None) # type: Optional[int]
start_index = kwargs.pop('start_index', None) # type: Optional[int]
end_index = kwargs.pop('end_index', None) # type: Optional[int]
aggregation = kwargs.pop('aggregation', False) # type: Optional[bool]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/nodeRuns/{nodeName}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
"nodeName": _SERIALIZER.url("node_name", node_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if index is not None:
query_parameters['index'] = _SERIALIZER.query("index", index, 'int')
if start_index is not None:
query_parameters['startIndex'] = _SERIALIZER.query("start_index", start_index, 'int')
if end_index is not None:
query_parameters['endIndex'] = _SERIALIZER.query("end_index", end_index, 'int')
if aggregation is not None:
query_parameters['aggregation'] = _SERIALIZER.query("aggregation", aggregation, 'bool')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


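# NOTE (illustrative, not part of the generated client): builders such as
# build_get_flow_node_runs_request only serialize a query parameter when the
# caller supplied it, except `aggregation`, whose False default is still sent
# because the guard checks `is not None`. A stdlib sketch of that pattern:

```python
from urllib.parse import urlencode


def build_node_runs_query(index=None, start_index=None, end_index=None, aggregation=False):
    # Optional parameters left as None are skipped entirely.
    params = {}
    if index is not None:
        params["index"] = index
    if start_index is not None:
        params["startIndex"] = start_index
    if end_index is not None:
        params["endIndex"] = end_index
    # `aggregation` defaults to False (not None), so it always appears.
    if aggregation is not None:
        params["aggregation"] = str(aggregation).lower()
    return urlencode(params)


query = build_node_runs_query(start_index=0, end_index=24)
```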
def build_get_flow_node_run_base_path_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
node_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/nodeRuns/{nodeName}/basePath')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
"nodeName": _SERIALIZER.url("node_name", node_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)


def build_clone_flow_from_flow_run_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/clone')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_list_bulk_tests_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/bulkTests')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if experiment_id is not None:
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_bulk_test_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_test_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/bulkTests/{bulkTestId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"bulkTestId": _SERIALIZER.url("bulk_test_id", bulk_test_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)


def build_get_samples_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
use_snapshot = kwargs.pop('use_snapshot', False) # type: Optional[bool]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/samples')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if use_snapshot is not None:
query_parameters['useSnapshot'] = _SERIALIZER.query("use_snapshot", use_snapshot, 'bool')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_evaluate_flow_samples_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
use_snapshot = kwargs.pop('use_snapshot', False) # type: Optional[bool]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/evaluateSamples')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if use_snapshot is not None:
query_parameters['useSnapshot'] = _SERIALIZER.query("use_snapshot", use_snapshot, 'bool')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_deploy_reserved_environment_variable_names_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/DeployReservedEnvironmentVariableNames')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)


def build_deploy_flow_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
async_call = kwargs.pop('async_call', False) # type: Optional[bool]
msi_token = kwargs.pop('msi_token', False) # type: Optional[bool]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/deploy')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if async_call is not None:
query_parameters['asyncCall'] = _SERIALIZER.query("async_call", async_call, 'bool')
if msi_token is not None:
query_parameters['msiToken'] = _SERIALIZER.query("msi_token", msi_token, 'bool')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_run_log_content_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/logContent')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)


def build_cancel_flow_run_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "text/plain, application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/runs/{flowRunId}/cancel')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)


def build_cancel_flow_test_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "text/plain, application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/flowTests/{flowRunId}/cancel')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
"flowRunId": _SERIALIZER.url("flow_run_id", flow_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)


def build_cancel_bulk_test_run_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
bulk_test_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "text/plain, application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/bulkTests/{bulkTestRunId}/cancel')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"bulkTestRunId": _SERIALIZER.url("bulk_test_run_id", bulk_test_run_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)


def build_get_flow_snapshot_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/FlowSnapshot')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)


def build_get_connection_override_settings_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
runtime_name = kwargs.pop('runtime_name', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/connectionOverride')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if runtime_name is not None:
query_parameters['runtimeName'] = _SERIALIZER.query("runtime_name", runtime_name, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_inputs_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/flowInputs')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)


def build_load_as_component_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/LoadAsComponent')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)


def build_get_flow_tools_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id') # type: str
flow_runtime_name = kwargs.pop('flow_runtime_name', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/flowTools')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if flow_runtime_name is not None:
query_parameters['flowRuntimeName'] = _SERIALIZER.query("flow_runtime_name", flow_runtime_name, 'str')
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_setup_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_delete_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="DELETE",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)


def build_get_flow_session_status_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
experiment_id = kwargs.pop('experiment_id') # type: str
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions/status')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"flowId": _SERIALIZER.url("flow_id", flow_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
query_parameters['experimentId'] = _SERIALIZER.query("experiment_id", experiment_id, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)

# fmt: on


class FlowsOperations(object):
"""FlowsOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models

def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config

@distributed_trace
def create_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
experiment_id=None, # type: Optional[str]
body=None, # type: Optional["_models.CreateFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowDto"
"""create_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param experiment_id:
:type experiment_id: str
:param body:
:type body: ~flow.models.CreateFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowDto, or the result of cls(response)
:rtype: ~flow.models.FlowDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowRequest')
else:
_json = None
request = build_create_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
experiment_id=experiment_id,
template_url=self.create_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows'} # type: ignore

@distributed_trace
def list_flows(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
experiment_id=None, # type: Optional[str]
owned_only=None, # type: Optional[bool]
flow_type=None, # type: Optional[Union[str, "_models.FlowType"]]
list_view_type=None, # type: Optional[Union[str, "_models.ListViewType"]]
**kwargs # type: Any
):
# type: (...) -> List["_models.FlowBaseDto"]
"""list_flows.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param experiment_id:
:type experiment_id: str
:param owned_only:
:type owned_only: bool
:param flow_type:
:type flow_type: str or ~flow.models.FlowType
:param list_view_type:
:type list_view_type: str or ~flow.models.ListViewType
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of FlowBaseDto, or the result of cls(response)
:rtype: list[~flow.models.FlowBaseDto]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.FlowBaseDto"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_flows_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
experiment_id=experiment_id,
owned_only=owned_only,
flow_type=flow_type,
list_view_type=list_view_type,
template_url=self.list_flows.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[FlowBaseDto]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_flows.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows'} # type: ignore

@distributed_trace
def clone_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
body=None, # type: Optional["_models.CreateFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowDto"
"""clone_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:param body:
:type body: ~flow.models.CreateFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowDto, or the result of cls(response)
:rtype: ~flow.models.FlowDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowRequest')
else:
_json = None
request = build_clone_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
content_type=content_type,
experiment_id=experiment_id,
json=_json,
template_url=self.clone_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
clone_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/clone'} # type: ignore

@distributed_trace
def create_flow_from_sample(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
experiment_id=None, # type: Optional[str]
body=None, # type: Optional["_models.CreateFlowFromSampleRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowDto"
"""create_flow_from_sample.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param experiment_id:
:type experiment_id: str
:param body:
:type body: ~flow.models.CreateFlowFromSampleRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowDto, or the result of cls(response)
:rtype: ~flow.models.FlowDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowFromSampleRequest')
else:
_json = None
request = build_create_flow_from_sample_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
experiment_id=experiment_id,
template_url=self.create_flow_from_sample.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_flow_from_sample.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/fromsample'} # type: ignore

@distributed_trace
def update_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
body=None, # type: Optional["_models.UpdateFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> str
"""update_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:param body:
:type body: ~flow.models.UpdateFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'UpdateFlowRequest')
else:
_json = None
request = build_update_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
content_type=content_type,
experiment_id=experiment_id,
json=_json,
template_url=self.update_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
update_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}'} # type: ignore

@distributed_trace
def patch_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
body=None, # type: Optional["_models.PatchFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> str
"""patch_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:param body:
:type body: ~flow.models.PatchFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json-patch+json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'PatchFlowRequest')
else:
_json = None
request = build_patch_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
content_type=content_type,
experiment_id=experiment_id,
json=_json,
template_url=self.patch_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
patch_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}'} # type: ignore

@distributed_trace
def get_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.FlowDto"
"""get_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowDto, or the result of cls(response)
:rtype: ~flow.models.FlowDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
experiment_id=experiment_id,
template_url=self.get_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}'} # type: ignore

@distributed_trace
def submit_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
experiment_id, # type: str
endpoint_name=None, # type: Optional[str]
body=None, # type: Optional["_models.SubmitFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowRunResult"
"""submit_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param experiment_id:
:type experiment_id: str
:param endpoint_name:
:type endpoint_name: str
:param body:
:type body: ~flow.models.SubmitFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowRunResult, or the result of cls(response)
:rtype: ~flow.models.FlowRunResult
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowRunResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'SubmitFlowRequest')
else:
_json = None
request = build_submit_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
experiment_id=experiment_id,
json=_json,
endpoint_name=endpoint_name,
template_url=self.submit_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowRunResult', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
submit_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/submit'} # type: ignore

@distributed_trace
def get_flow_run_status(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
experiment_id=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowRunResult"
"""get_flow_run_status.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param experiment_id:
:type experiment_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowRunResult, or the result of cls(response)
:rtype: ~flow.models.FlowRunResult
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowRunResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_run_status_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
experiment_id=experiment_id,
template_url=self.get_flow_run_status.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowRunResult', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_run_status.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/{flowRunId}/status'} # type: ignore

@distributed_trace
def get_flow_run_info(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
experiment_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.FlowRunInfo"
"""get_flow_run_info.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param flow_run_id:
:type flow_run_id: str
:param experiment_id:
:type experiment_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowRunInfo, or the result of cls(response)
:rtype: ~flow.models.FlowRunInfo
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowRunInfo"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_run_info_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
experiment_id=experiment_id,
template_url=self.get_flow_run_info.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowRunInfo', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_run_info.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}'} # type: ignore
@distributed_trace
def get_flow_child_runs(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
index=None, # type: Optional[int]
start_index=None, # type: Optional[int]
end_index=None, # type: Optional[int]
**kwargs # type: Any
):
# type: (...) -> List[Any]
"""get_flow_child_runs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:param index: Index of the child run to fetch.
:type index: int
:param start_index: Start index of the child runs to fetch.
:type start_index: int
:param end_index: End index of the child runs to fetch.
:type end_index: int
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of any, or the result of cls(response)
:rtype: list[any]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[Any]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_child_runs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
index=index,
start_index=start_index,
end_index=end_index,
template_url=self.get_flow_child_runs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[object]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_child_runs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/childRuns'} # type: ignore
@distributed_trace
def get_flow_node_runs(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
node_name, # type: str
index=None, # type: Optional[int]
start_index=None, # type: Optional[int]
end_index=None, # type: Optional[int]
aggregation=False, # type: Optional[bool]
**kwargs # type: Any
):
# type: (...) -> List[Any]
"""get_flow_node_runs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:param node_name: The node name.
:type node_name: str
:param index: Index of the node run to fetch.
:type index: int
:param start_index: Start index of the node runs to fetch.
:type start_index: int
:param end_index: End index of the node runs to fetch.
:type end_index: int
:param aggregation: Whether to fetch aggregation node runs.
:type aggregation: bool
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of any, or the result of cls(response)
:rtype: list[any]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[Any]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_node_runs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
node_name=node_name,
index=index,
start_index=start_index,
end_index=end_index,
aggregation=aggregation,
template_url=self.get_flow_node_runs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[object]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_node_runs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/nodeRuns/{nodeName}'} # type: ignore
@distributed_trace
def get_flow_node_run_base_path(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
node_name, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.FlowRunBasePath"
"""get_flow_node_run_base_path.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:param node_name: The node name.
:type node_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowRunBasePath, or the result of cls(response)
:rtype: ~flow.models.FlowRunBasePath
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowRunBasePath"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_node_run_base_path_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
node_name=node_name,
template_url=self.get_flow_node_run_base_path.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowRunBasePath', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_node_run_base_path.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/nodeRuns/{nodeName}/basePath'} # type: ignore
@distributed_trace
def clone_flow_from_flow_run(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
experiment_id, # type: str
body=None, # type: Optional["_models.CreateFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowDto"
"""clone_flow_from_flow_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:param experiment_id: The experiment id.
:type experiment_id: str
:param body: The create flow request body.
:type body: ~flow.models.CreateFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowDto, or the result of cls(response)
:rtype: ~flow.models.FlowDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowRequest')
else:
_json = None
request = build_clone_flow_from_flow_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
content_type=content_type,
experiment_id=experiment_id,
json=_json,
template_url=self.clone_flow_from_flow_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
clone_flow_from_flow_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/clone'} # type: ignore
@distributed_trace
def list_bulk_tests(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> List["_models.BulkTestDto"]
"""list_bulk_tests.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param experiment_id: The experiment id.
:type experiment_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of BulkTestDto, or the result of cls(response)
:rtype: list[~flow.models.BulkTestDto]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.BulkTestDto"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_bulk_tests_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
experiment_id=experiment_id,
template_url=self.list_bulk_tests.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[BulkTestDto]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_bulk_tests.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/bulkTests'} # type: ignore
@distributed_trace
def get_bulk_test(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
bulk_test_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.BulkTestDto"
"""get_bulk_test.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param bulk_test_id: The bulk test id.
:type bulk_test_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: BulkTestDto, or the result of cls(response)
:rtype: ~flow.models.BulkTestDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.BulkTestDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_bulk_test_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
bulk_test_id=bulk_test_id,
template_url=self.get_bulk_test.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('BulkTestDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_bulk_test.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/bulkTests/{bulkTestId}'} # type: ignore
@distributed_trace
def get_samples(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
use_snapshot=False, # type: Optional[bool]
**kwargs # type: Any
):
# type: (...) -> Dict[str, "_models.FlowSampleDto"]
"""get_samples.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param use_snapshot: Whether to use the snapshot version of the samples.
:type use_snapshot: bool
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to FlowSampleDto, or the result of cls(response)
:rtype: dict[str, ~flow.models.FlowSampleDto]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, "_models.FlowSampleDto"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_samples_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
use_snapshot=use_snapshot,
template_url=self.get_samples.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('{FlowSampleDto}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_samples.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/samples'} # type: ignore
@distributed_trace
def get_evaluate_flow_samples(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
use_snapshot=False, # type: Optional[bool]
**kwargs # type: Any
):
# type: (...) -> Dict[str, "_models.FlowSampleDto"]
"""get_evaluate_flow_samples.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param use_snapshot: Whether to use the snapshot version of the samples.
:type use_snapshot: bool
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to FlowSampleDto, or the result of cls(response)
:rtype: dict[str, ~flow.models.FlowSampleDto]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, "_models.FlowSampleDto"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_evaluate_flow_samples_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
use_snapshot=use_snapshot,
template_url=self.get_evaluate_flow_samples.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('{FlowSampleDto}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_evaluate_flow_samples.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/evaluateSamples'} # type: ignore
@distributed_trace
def get_flow_deploy_reserved_environment_variable_names(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List[str]
"""get_flow_deploy_reserved_environment_variable_names.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of str, or the result of cls(response)
:rtype: list[str]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List[str]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_deploy_reserved_environment_variable_names_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_flow_deploy_reserved_environment_variable_names.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[str]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_deploy_reserved_environment_variable_names.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/DeployReservedEnvironmentVariableNames'} # type: ignore
@distributed_trace
def deploy_flow(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
async_call=False, # type: Optional[bool]
msi_token=False, # type: Optional[bool]
body=None, # type: Optional["_models.DeployFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> str
"""deploy_flow.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param async_call: Whether to deploy the flow asynchronously.
:type async_call: bool
:param msi_token: Whether to use an MSI token.
:type msi_token: bool
:param body: The deploy flow request body.
:type body: ~flow.models.DeployFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'DeployFlowRequest')
else:
_json = None
request = build_deploy_flow_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
async_call=async_call,
msi_token=msi_token,
template_url=self.deploy_flow.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
deploy_flow.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/deploy'} # type: ignore
@distributed_trace
def get_flow_run_log_content(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""get_flow_run_log_content.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_run_log_content_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
template_url=self.get_flow_run_log_content.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_run_log_content.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/runs/{flowRunId}/logContent'} # type: ignore
@distributed_trace
def cancel_flow_run(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""cancel_flow_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_cancel_flow_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_run_id=flow_run_id,
template_url=self.cancel_flow_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
cancel_flow_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/runs/{flowRunId}/cancel'} # type: ignore
@distributed_trace
def cancel_flow_test(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
flow_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""cancel_flow_test.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id: The flow id.
:type flow_id: str
:param flow_run_id: The flow run id.
:type flow_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_cancel_flow_test_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
flow_run_id=flow_run_id,
template_url=self.cancel_flow_test.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
cancel_flow_test.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/flowTests/{flowRunId}/cancel'} # type: ignore
@distributed_trace
def cancel_bulk_test_run(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
bulk_test_run_id, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""cancel_bulk_test_run.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param bulk_test_run_id: The bulk test run id.
:type bulk_test_run_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_cancel_bulk_test_run_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
bulk_test_run_id=bulk_test_run_id,
template_url=self.cancel_bulk_test_run.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
cancel_bulk_test_run.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/bulkTests/{bulkTestRunId}/cancel'} # type: ignore
@distributed_trace
def get_flow_snapshot(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
body=None, # type: Optional["_models.CreateFlowRequest"]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowSnapshot"
"""get_flow_snapshot.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param body:
:type body: ~flow.models.CreateFlowRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowSnapshot, or the result of cls(response)
:rtype: ~flow.models.FlowSnapshot
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowSnapshot"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowRequest')
else:
_json = None
request = build_get_flow_snapshot_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self.get_flow_snapshot.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowSnapshot', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_snapshot.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/FlowSnapshot'} # type: ignore
@distributed_trace
def get_connection_override_settings(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
runtime_name=None, # type: Optional[str]
body=None, # type: Optional["_models.FlowGraphReference"]
**kwargs # type: Any
):
# type: (...) -> List["_models.ConnectionOverrideSetting"]
"""get_connection_override_settings.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param runtime_name:
:type runtime_name: str
:param body:
:type body: ~flow.models.FlowGraphReference
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of ConnectionOverrideSetting, or the result of cls(response)
:rtype: list[~flow.models.ConnectionOverrideSetting]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.ConnectionOverrideSetting"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'FlowGraphReference')
else:
_json = None
request = build_get_connection_override_settings_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
runtime_name=runtime_name,
template_url=self.get_connection_override_settings.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[ConnectionOverrideSetting]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_connection_override_settings.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/connectionOverride'} # type: ignore
@distributed_trace
def get_flow_inputs(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
body=None, # type: Optional["_models.FlowGraphReference"]
**kwargs # type: Any
):
# type: (...) -> Dict[str, "_models.FlowInputDefinition"]
"""get_flow_inputs.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param body:
:type body: ~flow.models.FlowGraphReference
:keyword callable cls: A custom type or function that will be passed the direct response
:return: dict mapping str to FlowInputDefinition, or the result of cls(response)
:rtype: dict[str, ~flow.models.FlowInputDefinition]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Dict[str, "_models.FlowInputDefinition"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'FlowGraphReference')
else:
_json = None
request = build_get_flow_inputs_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self.get_flow_inputs.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('{FlowInputDefinition}', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_inputs.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/flowInputs'} # type: ignore
@distributed_trace
def load_as_component(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
body=None, # type: Optional["_models.LoadFlowAsComponentRequest"]
**kwargs # type: Any
):
# type: (...) -> str
"""load_as_component.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param body:
:type body: ~flow.models.LoadFlowAsComponentRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'LoadFlowAsComponentRequest')
else:
_json = None
request = build_load_as_component_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
content_type=content_type,
json=_json,
template_url=self.load_as_component.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
load_as_component.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/LoadAsComponent'} # type: ignore
@distributed_trace
def get_flow_tools(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
flow_runtime_name=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> "_models.FlowToolsDto"
"""get_flow_tools.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:param flow_runtime_name:
:type flow_runtime_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowToolsDto, or the result of cls(response)
:rtype: ~flow.models.FlowToolsDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowToolsDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_tools_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
experiment_id=experiment_id,
flow_runtime_name=flow_runtime_name,
template_url=self.get_flow_tools.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowToolsDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_tools.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/flowTools'} # type: ignore
@distributed_trace
def setup_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
body=None, # type: Optional["_models.SetupFlowSessionRequest"]
**kwargs # type: Any
):
# type: (...) -> Any
"""setup_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:param body:
:type body: ~flow.models.SetupFlowSessionRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'SetupFlowSessionRequest')
else:
_json = None
request = build_setup_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
content_type=content_type,
experiment_id=experiment_id,
json=_json,
template_url=self.setup_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
setup_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions'} # type: ignore
@distributed_trace
def delete_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
**kwargs # type: Any
):
# type: (...) -> Any
"""delete_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
experiment_id=experiment_id,
template_url=self.delete_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
delete_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions'} # type: ignore
@distributed_trace
def get_flow_session_status(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
flow_id, # type: str
experiment_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.FlowSessionDto"
"""get_flow_session_status.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param flow_id:
:type flow_id: str
:param experiment_id:
:type experiment_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: FlowSessionDto, or the result of cls(response)
:rtype: ~flow.models.FlowSessionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.FlowSessionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_session_status_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
flow_id=flow_id,
experiment_id=experiment_id,
template_url=self.get_flow_session_status.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('FlowSessionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_session_status.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/Flows/{flowId}/sessions/status'} # type: ignore
# --- end of file: promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flows_operations.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from typing import Dict
from azure.ai.ml._scope_dependent_operations import (
OperationConfig,
OperationsContainer,
OperationScope,
_ScopeDependentOperations,
)
from promptflow._sdk._utils import safe_parse_object_list
from promptflow._sdk.entities._connection import _Connection
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow.azure._entities._workspace_connection_spec import WorkspaceConnectionSpec
from promptflow.azure._restclient.flow_service_caller import FlowServiceCaller
logger = get_cli_sdk_logger()
class ConnectionOperations(_ScopeDependentOperations):
"""ConnectionOperations.
You should not instantiate this class directly. Instead, you should
    create a PFClient instance that instantiates it for you and
attaches it as an attribute.
"""
def __init__(
self,
operation_scope: OperationScope,
operation_config: OperationConfig,
all_operations: OperationsContainer,
credential,
service_caller: FlowServiceCaller,
**kwargs: Dict,
):
super(ConnectionOperations, self).__init__(operation_scope, operation_config)
self._all_operations = all_operations
self._service_caller = service_caller
self._credential = credential
def create_or_update(self, connection, **kwargs):
rest_conn = connection._to_rest_object()
        # create or update the connection
rest_conn_result = self._service_caller.create_connection(
subscription_id=self._operation_scope.subscription_id,
resource_group_name=self._operation_scope.resource_group_name,
workspace_name=self._operation_scope.workspace_name,
connection_name=connection.name,
body=rest_conn,
)
return _Connection._from_mt_rest_object(rest_conn_result)
def get(self, name, **kwargs):
rest_conn = self._service_caller.get_connection(
subscription_id=self._operation_scope.subscription_id,
resource_group_name=self._operation_scope.resource_group_name,
workspace_name=self._operation_scope.workspace_name,
connection_name=name,
**kwargs,
)
return _Connection._from_mt_rest_object(rest_conn)
def delete(self, name, **kwargs):
return self._service_caller.delete_connection(
subscription_id=self._operation_scope.subscription_id,
resource_group_name=self._operation_scope.resource_group_name,
workspace_name=self._operation_scope.workspace_name,
connection_name=name,
**kwargs,
)
def list(self, **kwargs):
rest_connections = self._service_caller.list_connections(
subscription_id=self._operation_scope.subscription_id,
resource_group_name=self._operation_scope.resource_group_name,
workspace_name=self._operation_scope.workspace_name,
**kwargs,
)
return safe_parse_object_list(
obj_list=rest_connections,
parser=_Connection._from_mt_rest_object,
message_generator=lambda x: f"Failed to load connection {x.connection_name}, skipped.",
)
def list_connection_specs(self, **kwargs):
results = self._service_caller.list_connection_specs(
subscription_id=self._operation_scope.subscription_id,
resource_group_name=self._operation_scope.resource_group_name,
workspace_name=self._operation_scope.workspace_name,
**kwargs,
)
return [WorkspaceConnectionSpec._from_rest_object(spec) for spec in results]
# --- end of file: promptflow/src/promptflow/promptflow/azure/operations/_connection_operations.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from promptflow._sdk._constants import VIS_JS_BUNDLE_FILENAME
@dataclass
class RunDetail:
flow_runs: List[dict]
node_runs: List[dict]
@dataclass
class RunMetadata:
name: str
display_name: str
create_time: str
flow_path: str
output_path: str
tags: Optional[List[Dict[str, str]]]
lineage: Optional[str]
metrics: Optional[Dict[str, Any]]
dag: Optional[str]
flow_tools_json: Optional[dict]
mode: Optional[str] = ""
@dataclass
class VisualizationConfig:
    # use camelCase names here to fit the contract requirement from JS
availableIDEList: List[str]
@dataclass
class RunVisualization:
detail: List[RunDetail]
metadata: List[RunMetadata]
config: List[VisualizationConfig]
@dataclass
class VisualizationRender:
data: dict
js_path: str = VIS_JS_BUNDLE_FILENAME
def __post_init__(self):
self.data = json.dumps(json.dumps(self.data)) # double json.dumps to match JS requirements
# --- end of file: promptflow/src/promptflow/promptflow/contracts/_run_management.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
import contextvars
import inspect
import threading
from concurrent import futures
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Dict, List, Optional, Tuple
from promptflow._core.flow_execution_context import FlowExecutionContext
from promptflow._core.tools_manager import ToolsManager
from promptflow._utils.logger_utils import flow_logger
from promptflow._utils.utils import set_context
from promptflow.contracts.flow import Node
from promptflow.executor._dag_manager import DAGManager
from promptflow.executor._errors import LineExecutionTimeoutError, NoNodeExecutedError
RUN_FLOW_NODES_LINEARLY = 1
DEFAULT_CONCURRENCY_BULK = 2
DEFAULT_CONCURRENCY_FLOW = 16
class FlowNodesScheduler:
def __init__(
self,
tools_manager: ToolsManager,
inputs: Dict,
nodes_from_invoker: List[Node],
node_concurrency: int,
context: FlowExecutionContext,
) -> None:
self._tools_manager = tools_manager
self._future_to_node: Dict[Future, Node] = {}
self._node_concurrency = min(node_concurrency, DEFAULT_CONCURRENCY_FLOW)
        flow_logger.info(f"Start to run {len(nodes_from_invoker)} nodes with concurrency level {self._node_concurrency}.")
self._dag_manager = DAGManager(nodes_from_invoker, inputs)
self._context = context
def wait_within_timeout(self, execution_event: threading.Event, timeout: int):
flow_logger.info(f"Timeout task is scheduled to wait for {timeout} seconds.")
signal = execution_event.wait(timeout=timeout)
if signal:
flow_logger.info("Timeout task is cancelled because the execution is finished.")
else:
            flow_logger.warning(f"Timeout task timed out after waiting for {timeout} seconds.")
def execute(
self,
line_timeout_sec: Optional[int] = None,
) -> Tuple[dict, dict]:
parent_context = contextvars.copy_context()
with ThreadPoolExecutor(
max_workers=self._node_concurrency, initializer=set_context, initargs=(parent_context,)
) as executor:
self._execute_nodes(executor)
timeout_task = None
event = threading.Event()
if line_timeout_sec is not None:
timeout_task = executor.submit(self.wait_within_timeout, event, line_timeout_sec)
try:
while not self._dag_manager.completed():
if not self._future_to_node:
raise NoNodeExecutedError("No nodes are ready for execution, but the flow is not completed.")
tasks_to_wait = list(self._future_to_node.keys())
if timeout_task is not None:
tasks_to_wait.append(timeout_task)
completed_futures_with_wait, _ = futures.wait(tasks_to_wait, return_when=futures.FIRST_COMPLETED)
completed_futures = [f for f in completed_futures_with_wait if f in self._future_to_node]
self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))
for each_future in completed_futures:
del self._future_to_node[each_future]
if timeout_task and timeout_task.done():
raise LineExecutionTimeoutError(self._context._line_number, line_timeout_sec)
self._execute_nodes(executor)
except Exception as e:
err_msg = "Flow execution has failed."
if isinstance(e, LineExecutionTimeoutError):
err_msg = f"Line execution timeout after {line_timeout_sec} seconds."
self._context.cancel_node_runs(err_msg)
node_names = ",".join(node.name for node in self._future_to_node.values())
flow_logger.error(f"{err_msg} Cancelling all running nodes: {node_names}.")
for unfinished_future in self._future_to_node.keys():
                    # We can't cancel running tasks here; only pending tasks can be cancelled.
unfinished_future.cancel()
                # Even if we raise the exception here, we still need to wait for all running jobs to finish before exiting.
raise e
finally:
# Cancel timeout task no matter the execution is finished or failed.
event.set()
for node in self._dag_manager.bypassed_nodes:
self._dag_manager.completed_nodes_outputs[node] = None
return self._dag_manager.completed_nodes_outputs, self._dag_manager.bypassed_nodes
def _execute_nodes(self, executor: ThreadPoolExecutor):
# Skip nodes and update node run info until there are no nodes to bypass
nodes_to_bypass = self._dag_manager.pop_bypassable_nodes()
while nodes_to_bypass:
for node in nodes_to_bypass:
self._context.bypass_node(node)
nodes_to_bypass = self._dag_manager.pop_bypassable_nodes()
# Submit nodes that are ready to run
nodes_to_exec = self._dag_manager.pop_ready_nodes()
if nodes_to_exec:
self._submit_nodes(executor, nodes_to_exec)
def _collect_outputs(self, completed_futures: List[Future]):
completed_nodes_outputs = {}
for each_future in completed_futures:
each_node_result = each_future.result()
each_node = self._future_to_node[each_future]
completed_nodes_outputs[each_node.name] = each_node_result
return completed_nodes_outputs
def _submit_nodes(self, executor: ThreadPoolExecutor, nodes):
for each_node in nodes:
future = executor.submit(self._exec_single_node_in_thread, (each_node, self._dag_manager))
self._future_to_node[future] = each_node
def _exec_single_node_in_thread(self, args: Tuple[Node, DAGManager]):
node, dag_manager = args
        # We are using the same run tracker and cache manager for all threads, which may not be thread safe.
        # But for the bulk run scenario, we've been doing this for a long time, and it works well.
context = self._context
f = self._tools_manager.get_tool(node.name)
kwargs = dag_manager.get_node_valid_inputs(node, f)
if inspect.iscoroutinefunction(f):
# TODO: Run async functions in flow level event loop
result = asyncio.run(context.invoke_tool_async(node, f, kwargs=kwargs))
else:
result = context.invoke_tool(node, f, kwargs=kwargs)
return result
# --- end of file: promptflow/src/promptflow/promptflow/executor/_flow_nodes_scheduler.py ---
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from functools import partial
from pathlib import Path
from typing import Union
from promptflow._utils.multimedia_utils import _process_recursively, get_file_reference_encoder
from promptflow.contracts.multimedia import Image
from promptflow.contracts.run_info import FlowRunInfo
from promptflow.contracts.run_info import RunInfo as NodeRunInfo
class AbstractRunStorage:
def persist_node_run(self, run_info: NodeRunInfo):
"""Write the node run info to somewhere immediately after the node is executed.
:param run_info: The run info of the node.
:type run_info: ~promptflow.contracts.run_info.RunInfo
"""
raise NotImplementedError("AbstractRunStorage is an abstract class, no implementation for persist_node_run.")
def persist_flow_run(self, run_info: FlowRunInfo):
"""Write the flow run info to somewhere immediately after one line data is executed for the flow.
        :param run_info: The run info of the flow.
        :type run_info: ~promptflow.contracts.run_info.FlowRunInfo
"""
raise NotImplementedError("AbstractRunStorage is an abstract class, no implementation for persist_flow_run.")
class DummyRunStorage(AbstractRunStorage):
def persist_node_run(self, run_info: NodeRunInfo):
"""Dummy implementation for persist_node_run
:param run_info: The run info of the node.
:type run_info: ~promptflow.contracts.run_info.RunInfo
"""
pass
def persist_flow_run(self, run_info: FlowRunInfo):
"""Dummy implementation for persist_flow_run
:param run_info: The run info of the node.
:type run_info: ~promptflow.contracts.run_info.RunInfo
"""
pass
class DefaultRunStorage(AbstractRunStorage):
def __init__(self, base_dir: Path = None, sub_dir: Path = None):
"""Initialize the default run storage.
:param base_dir: The base directory to store the multimedia data.
:type base_dir: Path
:param sub_dir: The sub directory to store the multimedia data.
:type sub_dir: Path
"""
self._base_dir = base_dir
self._sub_dir = sub_dir
def persist_run_info(self, run_info: Union[FlowRunInfo, NodeRunInfo]):
"""Persist the multimedia data in run info after execution.
:param run_info: The run info of the node or flow.
:type run_info: ~promptflow.contracts.run_info.RunInfo or ~promptflow.contracts.run_info.FlowRunInfo
"""
# Persist and convert images in inputs to path dictionaries.
# This replaces any image objects with their corresponding file path dictionaries.
if run_info.inputs:
run_info.inputs = self._persist_and_convert_images_to_path_dicts(run_info.inputs)
# Persist and convert images in output to path dictionaries.
# This replaces any image objects with their corresponding file path dictionaries.
if run_info.output:
serialized_output = self._persist_and_convert_images_to_path_dicts(run_info.output)
run_info.output = serialized_output
run_info.result = serialized_output
# Persist and convert images in api_calls to path dictionaries.
# The `inplace=True` parameter is used here to ensure that the original list structure holding generator outputs
# is maintained. This allows us to keep tracking the list as it dynamically changes when the generator is
# consumed. It is crucial to process the api_calls list in place to avoid losing the reference to the list that
# holds the generator items, which is essential for tracing generator execution.
if run_info.api_calls:
run_info.api_calls = self._persist_and_convert_images_to_path_dicts(run_info.api_calls, inplace=True)
def persist_node_run(self, run_info: NodeRunInfo):
"""Persist the multimedia data in node run info after the node is executed.
This method now delegates to the shared persist_run_info method.
:param run_info: The run info of the node.
:type run_info: NodeRunInfo
"""
self.persist_run_info(run_info)
def persist_flow_run(self, run_info: FlowRunInfo):
"""Persist the multimedia data in flow run info after one line data is executed for the flow.
This method now delegates to the shared persist_run_info method.
:param run_info: The run info of the flow.
:type run_info: FlowRunInfo
"""
self.persist_run_info(run_info)
def _persist_and_convert_images_to_path_dicts(self, value, inplace=False):
"""Persist image objects within a Python object to disk and convert them to path dictionaries.
This function recursively processes a given Python object, which can be a list, a dictionary, or a nested
combination of these, searching for image objects. Each image object encountered is serialized and saved to
disk in a pre-defined location using the `_base_dir` and `_sub_dir` attributes. The image object within the
original data structure is then replaced with a dictionary that indicates the file path of the serialized
image, following the format: `{'data:image/<ext>;path': '.promptflow/intermediate/<image_uuid>.<ext>'}`.
The operation can be performed in-place on the original object or on a new copy, depending on the value of
the `inplace` parameter. When `inplace` is set to `True`, the original object is modified; when set to `False`,
a new object with the converted path dictionaries is returned.
:param value: The Python object to be processed, potentially containing image objects.
:type value: Any
:param inplace: Whether to modify the original object in place (True) or to create a new object with converted
path dictionaries (False).
:type inplace: bool
:return: The original object with converted path dictionaries if `inplace` is True, otherwise a new object with
the conversions.
:rtype: Any
"""
if self._base_dir:
pfbytes_file_reference_encoder = get_file_reference_encoder(
folder_path=self._base_dir,
relative_path=self._sub_dir,
)
else:
pfbytes_file_reference_encoder = None
serialization_funcs = {Image: partial(Image.serialize, **{"encoder": pfbytes_file_reference_encoder})}
return _process_recursively(value, process_funcs=serialization_funcs, inplace=inplace)
| promptflow/src/promptflow/promptflow/storage/_run_storage.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/storage/_run_storage.py",
"repo_id": "promptflow",
"token_count": 2371
} | 49 |
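`_persist_and_convert_images_to_path_dicts` delegates to `_process_recursively`, which walks nested lists and dicts and replaces objects of registered types via per-type serialization functions. A minimal sketch of that process-and-replace pattern, assuming a toy `FakeImage` type in place of the real `Image` contract:

```python
def process_recursively(value, process_funcs, inplace=False):
    # Replace any value whose type has a registered processor;
    # otherwise recurse into lists and dicts.
    for typ, func in process_funcs.items():
        if isinstance(value, typ):
            return func(value)
    if isinstance(value, list):
        items = [process_recursively(v, process_funcs, inplace) for v in value]
        if inplace:
            value[:] = items  # keep the original list object alive
            return value
        return items
    if isinstance(value, dict):
        items = {k: process_recursively(v, process_funcs, inplace) for k, v in value.items()}
        if inplace:
            value.update(items)
            return value
        return items
    return value

class FakeImage:
    def __init__(self, ext):
        self.ext = ext

serializers = {
    FakeImage: lambda img: {f"data:image/{img.ext};path": f"intermediate/x.{img.ext}"}
}
data = {"a": [FakeImage("png"), 1], "b": "text"}
print(process_recursively(data, serializers))
```

The `inplace` branch mirrors why the real storage passes `inplace=True` for `api_calls`: mutating the original list keeps references to it valid while generators append to it.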
import pytest
from promptflow.contracts.run_info import Status
from promptflow.executor import FlowExecutor
from ..utils import (
get_yaml_file,
)
SAMPLE_FLOW = "web_classification_no_variants"
SAMPLE_EVAL_FLOW = "classification_accuracy_evaluation"
SAMPLE_FLOW_WITH_PARTIAL_FAILURE = "python_tool_partial_failure"
SAMPLE_FLOW_WITH_LANGCHAIN_TRACES = "flow_with_langchain_traces"
expected_stack_traces = {
"sync_tools_failures": """Traceback (most recent call last):
sync_fail.py", line 11, in raise_an_exception
raise_exception(s)
sync_fail.py", line 5, in raise_exception
raise Exception(msg)
Exception: In raise_exception: dummy_input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
sync_fail.py", line 13, in raise_an_exception
raise Exception(f"In tool raise_an_exception: {s}") from e
Exception: In tool raise_an_exception: dummy_input
""".split("\n"),
"async_tools_failures": """Traceback (most recent call last):
async_fail.py", line 11, in raise_an_exception_async
await raise_exception_async(s)
async_fail.py", line 5, in raise_exception_async
raise Exception(msg)
Exception: In raise_exception_async: dummy_input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
in raise_an_exception_async
raise Exception(f"In tool raise_an_exception_async: {s}") from e
Exception: In tool raise_an_exception_async: dummy_input
""".split("\n"),
}
@pytest.mark.e2etest
class TestExecutorFailures:
@pytest.mark.parametrize(
"flow_folder, node_name, message",
[
("sync_tools_failures", "sync_fail", "In tool raise_an_exception: dummy_input"),
("async_tools_failures", "async_fail", "In tool raise_an_exception_async: dummy_input"),
],
)
def test_executor_exec_node_fail(self, flow_folder, node_name, message):
yaml_file = get_yaml_file(flow_folder)
run_info = FlowExecutor.load_and_exec_node(yaml_file, node_name)
assert run_info.output is None
assert run_info.status == Status.Failed
assert isinstance(run_info.api_calls, list)
assert len(run_info.api_calls) == 1
assert run_info.node == node_name
assert run_info.system_metrics["duration"] >= 0
assert run_info.error is not None
assert f"Execution failure in '{node_name}'" in run_info.error["message"]
assert len(run_info.error["additionalInfo"]) == 1
user_error_info_dict = run_info.error["additionalInfo"][0]
assert "ToolExecutionErrorDetails" == user_error_info_dict["type"]
user_error_info = user_error_info_dict["info"]
assert message == user_error_info["message"]
# Make sure the stack trace is as expected
stacktrace = user_error_info["traceback"].split("\n")
expected_stack_trace = expected_stack_traces[flow_folder]
assert len(stacktrace) == len(expected_stack_trace)
for expected_item, actual_item in zip(expected_stack_trace, stacktrace):
assert expected_item in actual_item
@pytest.mark.parametrize(
"flow_folder, failed_node_name, message",
[
("sync_tools_failures", "sync_fail", "In tool raise_an_exception: dummy_input"),
("async_tools_failures", "async_fail", "In tool raise_an_exception_async: dummy_input"),
],
)
def test_executor_exec_line_fail(self, flow_folder, failed_node_name, message):
yaml_file = get_yaml_file(flow_folder)
executor = FlowExecutor.create(yaml_file, {}, raise_ex=False)
line_result = executor.exec_line({})
run_info = line_result.run_info
assert run_info.output is None
assert run_info.status == Status.Failed
assert isinstance(run_info.api_calls, list)
assert len(run_info.api_calls) == 1
assert run_info.system_metrics["duration"] >= 0
assert run_info.error is not None
assert f"Execution failure in '{failed_node_name}'" in run_info.error["message"]
assert len(run_info.error["additionalInfo"]) == 1
user_error_info_dict = run_info.error["additionalInfo"][0]
assert "ToolExecutionErrorDetails" == user_error_info_dict["type"]
user_error_info = user_error_info_dict["info"]
assert message == user_error_info["message"]
# Make sure the stack trace is as expected
stacktrace = user_error_info["traceback"].split("\n")
expected_stack_trace = expected_stack_traces[flow_folder]
assert len(stacktrace) == len(expected_stack_trace)
for expected_item, actual_item in zip(expected_stack_trace, stacktrace):
assert expected_item in actual_item
| promptflow/src/promptflow/tests/executor/e2etests/test_executor_execution_failures.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/e2etests/test_executor_execution_failures.py",
"repo_id": "promptflow",
"token_count": 1879
} | 50 |
inputs:
text:
type: string
outputs:
output:
type: string
reference: ${custom_llm_tool_with_duplicated_inputs.output}
nodes:
- name: custom_llm_tool_with_duplicated_inputs
type: custom_llm
source:
type: package_with_prompt
tool: custom_llm_tool.TestCustomLLMTool.call
path: ./prompt_with_duplicated_inputs.jinja2
inputs:
connection: azure_open_ai_connection
api: completion
text: ${inputs.text}
| promptflow/src/promptflow/tests/executor/package_tools/custom_llm_tool_with_duplicated_inputs/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/package_tools/custom_llm_tool_with_duplicated_inputs/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 181
} | 51 |
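The flow DAG above wires node inputs with `${inputs.text}`-style references. A hypothetical sketch of how such a reference string could be resolved against flow inputs and upstream node outputs (the real promptflow resolver is more general; this only handles the two-part `${scope.name}` form):

```python
import re

REF = re.compile(r"^\$\{(\w+)\.(\w+)\}$")

def resolve(value, inputs, node_outputs):
    # '${inputs.x}' -> flow input x; '${node.output}' -> that node's output;
    # anything else is treated as a literal.
    m = REF.match(value)
    if not m:
        return value
    scope, name = m.groups()
    if scope == "inputs":
        return inputs[name]
    return node_outputs[scope][name]

print(resolve("${inputs.text}", {"text": "hi"}, {}))  # hi
print(resolve("completion", {}, {}))                  # completion
```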
import threading
import pytest
from promptflow._core.operation_context import OperationContext
from promptflow._version import VERSION
from promptflow.contracts.run_mode import RunMode
def set_run_mode(context: OperationContext, run_mode: RunMode):
"""This method simulates the runtime.execute_request()
It is aimed to set the run_mode into operation context.
"""
context.run_mode = run_mode.name if run_mode is not None else ""
@pytest.mark.unittest
class TestOperationContext:
def test_get_user_agent(self):
operation_context = OperationContext()
assert operation_context.get_user_agent() == f"promptflow/{VERSION}"
operation_context.user_agent = "test_agent/0.0.2"
assert operation_context.get_user_agent() == f"test_agent/0.0.2 promptflow/{VERSION}"
@pytest.mark.parametrize(
"run_mode, expected",
[
(RunMode.Test, "Test"),
(RunMode.SingleNode, "SingleNode"),
(RunMode.Batch, "Batch"),
],
)
def test_run_mode(self, run_mode, expected):
context = OperationContext()
set_run_mode(context, run_mode)
assert context.run_mode == expected
def test_context_dict(self):
context = OperationContext()
context.run_mode = "Flow"
context.user_agent = "test_agent/0.0.2"
context.none_value = None
context_dict = context.get_context_dict()
assert context_dict["run_mode"] == "Flow"
assert context_dict["user_agent"] == "test_agent/0.0.2"
assert context_dict["none_value"] is None
def test_setattr(self):
context = OperationContext()
context.run_mode = "Flow"
assert context["run_mode"] == "Flow"
def test_setattr_non_primitive(self):
# Test set non-primitive type
context = OperationContext()
with pytest.raises(TypeError):
context.foo = [1, 2, 3]
def test_getattr(self):
context = OperationContext()
context["run_mode"] = "Flow"
assert context.run_mode == "Flow"
def test_getattr_missing(self):
context = OperationContext()
with pytest.raises(AttributeError):
context.foo
def test_delattr(self):
# test that delattr works as expected
context = OperationContext()
context.foo = "bar"
del context.foo
assert "foo" not in context
# test that delattr raises AttributeError for non-existent name
with pytest.raises(AttributeError):
del context.baz
def test_append_user_agent(self):
context = OperationContext()
user_agent = ' ' + context.user_agent if 'user_agent' in context else ''
context.append_user_agent("test_agent/0.0.2")
assert context.user_agent == "test_agent/0.0.2" + user_agent
context.append_user_agent("test_agent/0.0.3")
assert context.user_agent == "test_agent/0.0.2 test_agent/0.0.3" + user_agent
def test_get_instance(self):
context1 = OperationContext.get_instance()
context2 = OperationContext.get_instance()
assert context1 is context2
def test_set_batch_input_source_from_inputs_mapping_run(self):
input_mapping = {"input1": "${run.outputs.output1}", "input2": "${run.outputs.output2}"}
context = OperationContext()
context.set_batch_input_source_from_inputs_mapping(input_mapping)
assert context.batch_input_source == "Run"
def test_set_batch_input_source_from_inputs_mapping_data(self):
input_mapping = {"url": "${data.url}"}
context = OperationContext()
context.set_batch_input_source_from_inputs_mapping(input_mapping)
assert context.batch_input_source == "Data"
def test_set_batch_input_source_from_inputs_mapping_none(self):
input_mapping = None
context = OperationContext()
assert not hasattr(context, "batch_input_source")
context.set_batch_input_source_from_inputs_mapping(input_mapping)
assert context.batch_input_source == "Data"
def test_set_batch_input_source_from_inputs_mapping_empty(self):
input_mapping = {}
context = OperationContext()
assert not hasattr(context, "batch_input_source")
context.set_batch_input_source_from_inputs_mapping(input_mapping)
assert context.batch_input_source == "Data"
def test_different_thread_have_different_instance(self):
# create a list to store the OperationContext instances from each thread
instances = []
# define a function that gets the OperationContext instance and appends it to the list
def get_instance():
instance = OperationContext.get_instance()
instances.append(instance)
# create two threads and run the function in each thread
thread1 = threading.Thread(target=get_instance)
thread2 = threading.Thread(target=get_instance)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
# assert that the list has two elements and they are different objects
assert len(instances) == 2
assert instances[0] is not instances[1]
| promptflow/src/promptflow/tests/executor/unittests/_core/test_operation_context.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/_core/test_operation_context.py",
"repo_id": "promptflow",
"token_count": 2081
} | 52 |
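The last test above checks that each thread gets its own `OperationContext` singleton. A minimal sketch of that per-thread singleton pattern using `threading.local` (class name hypothetical):

```python
import threading

class Context(dict):
    _local = threading.local()

    @classmethod
    def get_instance(cls):
        # Lazily create one Context per thread; repeated calls in the
        # same thread return the same object.
        if not hasattr(cls._local, "ctx"):
            cls._local.ctx = cls()
        return cls._local.ctx

seen = []

def worker():
    seen.append(Context.get_instance())

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert Context.get_instance() is Context.get_instance()  # same thread -> same object
assert seen[0] is not seen[1]                            # different threads -> different objects
```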
import re
import sys
import time
from io import StringIO
from logging import WARNING, Logger, StreamHandler
import pytest
from promptflow._utils.thread_utils import RepeatLogTimer
from promptflow._utils.utils import generate_elapsed_time_messages
class DummyException(Exception):
pass
@pytest.mark.skipif(sys.platform == "darwin", reason="Skip on Mac")
@pytest.mark.unittest
class TestRepeatLogTimer:
def test_context_manager(self):
s = StringIO()
logger = Logger("test_repeat_log_timer")
logger.addHandler(StreamHandler(s))
interval_seconds = 1
start_time = time.perf_counter()
with RepeatLogTimer(
interval_seconds=interval_seconds,
logger=logger,
level=WARNING,
log_message_function=generate_elapsed_time_messages,
args=("Test", start_time, interval_seconds, None),
):
time.sleep(10.5)
logs = s.getvalue().split("\n")
logs = [log for log in logs if log]
log_pattern = re.compile(
r"^Test has been running for [0-9]+ seconds, thread None cannot be found in sys._current_frames, "
r"maybe it has been terminated due to unexpected errors.$"
)
assert logs, "Logs are empty."
for log in logs:
assert re.match(log_pattern, log), f"The wrong log: {log}"
| promptflow/src/promptflow/tests/executor/unittests/_utils/test_thread_utils.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/_utils/test_thread_utils.py",
"repo_id": "promptflow",
"token_count": 561
} | 53 |
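The test above drives `RepeatLogTimer` as a context manager that fires a log callback on a fixed interval until the block exits. A minimal sketch of such a repeating timer built on `threading.Event` (this is an illustration under stated assumptions, not the promptflow implementation):

```python
import threading
import time

class RepeatTimer:
    def __init__(self, interval, func):
        self.interval = interval
        self.func = func
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait returns False on timeout, so the callback fires
        # every `interval` seconds until the stop event is set.
        while not self._stop.wait(self.interval):
            self.func()

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

calls = []
with RepeatTimer(0.05, lambda: calls.append(time.perf_counter())):
    time.sleep(0.3)
print(len(calls))
```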
import pytest
from promptflow.contracts.types import AssistantDefinition, Secret, PromptTemplate, FilePath
from promptflow.executor._assistant_tool_invoker import AssistantToolInvoker
@pytest.mark.unittest
def test_secret():
secret = Secret('my_secret')
secret.set_secret_name('secret_name')
assert secret.secret_name == 'secret_name'
@pytest.mark.unittest
def test_prompt_template():
prompt = PromptTemplate('my_prompt')
assert isinstance(prompt, str)
assert str(prompt) == 'my_prompt'
@pytest.mark.unittest
def test_file_path():
file_path = FilePath('my_file_path')
assert isinstance(file_path, str)
@pytest.mark.unittest
def test_assistant_definition():
data = {"model": "model", "instructions": "instructions", "tools": []}
assistant_definition = AssistantDefinition.deserialize(data)
assert isinstance(assistant_definition, AssistantDefinition)
assert assistant_definition.model == "model"
assert assistant_definition.instructions == "instructions"
assert assistant_definition.tools == []
assert assistant_definition.serialize() == data
assert isinstance(assistant_definition.init_tool_invoker(), AssistantToolInvoker)
| promptflow/src/promptflow/tests/executor/unittests/contracts/test_types.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/contracts/test_types.py",
"repo_id": "promptflow",
"token_count": 382
} | 54 |
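The `Secret` and `PromptTemplate` contract types exercised above behave as strings with a little extra state. A sketch of the idea behind `Secret` as a `str` subclass tagged with the connection field it came from (simplified, not the actual contract definition):

```python
class Secret(str):
    # A str subclass gains a __dict__, so instances can carry the name of
    # the secret field they were loaded from while still acting as strings.
    def set_secret_name(self, name):
        self.secret_name = name

s = Secret("my_secret")
s.set_secret_name("secret_name")
print(s, s.secret_name)  # my_secret secret_name
```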
import json
from datetime import datetime
import pytest
from promptflow._utils.dataclass_serializer import serialize
from promptflow.contracts.run_info import FlowRunInfo, RunInfo, Status
from promptflow.storage.run_records import LineRunRecord, NodeRunRecord
@pytest.mark.unittest
def test_line_record():
start_time = datetime(2023, 7, 12)
end_time = datetime(2023, 7, 13)
flow_run_info = FlowRunInfo(
run_id=None,
status=Status.Completed,
error=None,
inputs=None,
output=None,
metrics=None,
request=None,
parent_run_id=None,
root_run_id=None,
source_run_id=None,
flow_id=None,
start_time=start_time,
end_time=end_time,
index=0,
variant_id=None,
)
line_record = LineRunRecord.from_run_info(flow_run_info)
assert line_record.line_number == 0
assert line_record.start_time == start_time.isoformat()
assert line_record.end_time == end_time.isoformat()
assert line_record.status == Status.Completed.value
assert line_record.run_info == serialize(flow_run_info)
@pytest.mark.unittest
def test_line_serialize():
start_time = datetime(2023, 7, 12)
end_time = datetime(2023, 7, 13)
flow_run_info = FlowRunInfo(
run_id=None,
status=Status.Completed,
error=None,
inputs=None,
output=None,
metrics=None,
request=None,
parent_run_id=None,
root_run_id=None,
source_run_id=None,
flow_id=None,
start_time=start_time,
end_time=end_time,
index=0,
variant_id=None,
)
line_record = LineRunRecord.from_run_info(flow_run_info)
result = line_record.serialize()
expected_result = json.dumps(line_record.__dict__)
assert result == expected_result
@pytest.mark.unittest
def test_node_record():
start_time = datetime(2023, 7, 12)
end_time = datetime(2023, 7, 13)
node_run_info = RunInfo(
node=None,
run_id=None,
flow_run_id=None,
status=Status.Completed,
inputs=None,
output=None,
metrics=None,
error=None,
parent_run_id=None,
start_time=start_time,
end_time=end_time,
index=0,
)
node_record = NodeRunRecord.from_run_info(node_run_info)
assert node_record.line_number == 0
assert node_record.start_time == start_time.isoformat()
assert node_record.end_time == end_time.isoformat()
assert node_record.status == Status.Completed.value
assert node_record.run_info == serialize(node_run_info)
@pytest.mark.unittest
def test_node_serialize():
start_time = datetime(2023, 7, 12)
end_time = datetime(2023, 7, 13)
node_run_info = RunInfo(
node=None,
run_id=None,
flow_run_id=None,
status=Status.Completed,
inputs=None,
output=None,
metrics=None,
error=None,
parent_run_id=None,
start_time=start_time,
end_time=end_time,
index=0,
)
node_record = NodeRunRecord.from_run_info(node_run_info)
result = node_record.serialize()
expected_result = json.dumps(node_record.__dict__)
assert result == expected_result
| promptflow/src/promptflow/tests/executor/unittests/storage/test_run_records.py/0 | {
"file_path": "promptflow/src/promptflow/tests/executor/unittests/storage/test_run_records.py",
"repo_id": "promptflow",
"token_count": 1479
} | 55 |
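The run-record tests assert that timestamps are stored as ISO 8601 strings and that `serialize()` is a JSON dump of the record's fields. A minimal sketch of that shape (field set trimmed; the real records also embed the serialized run info):

```python
import json
from dataclasses import dataclass
from datetime import datetime

@dataclass
class NodeRecord:
    line_number: int
    start_time: str
    end_time: str
    status: str

    @classmethod
    def from_run(cls, index, start, end, status):
        # Store timestamps as ISO strings so the record is JSON-safe.
        return cls(index, start.isoformat(), end.isoformat(), status)

    def serialize(self):
        return json.dumps(self.__dict__)

rec = NodeRecord.from_run(0, datetime(2023, 7, 12), datetime(2023, 7, 13), "Completed")
print(rec.serialize())
```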
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import contextlib
import os
import shutil
import sys
import tempfile
import uuid
from logging import Logger
from pathlib import Path
from typing import Callable
from unittest.mock import MagicMock, patch
import pydash
import pytest
from promptflow import load_run
from promptflow._constants import PF_USER_AGENT
from promptflow._core.operation_context import OperationContext
from promptflow._sdk._configuration import Configuration
from promptflow._sdk._errors import RunNotFoundError
from promptflow._sdk._telemetry import (
ActivityType,
PromptFlowSDKLogHandler,
get_appinsights_log_handler,
get_telemetry_logger,
is_telemetry_enabled,
log_activity,
)
from promptflow._sdk._utils import ClientUserAgentUtil, call_from_extension
from promptflow._utils.utils import environment_variable_overwrite, parse_ua_to_dict
from .._azure_utils import DEFAULT_TEST_TIMEOUT, PYTEST_TIMEOUT_METHOD
@contextlib.contextmanager
def cli_consent_config_overwrite(val):
config = Configuration.get_instance()
original_consent = config.get_telemetry_consent()
config.set_telemetry_consent(val)
try:
yield
finally:
if original_consent:
config.set_telemetry_consent(original_consent)
else:
config.set_telemetry_consent(True)
@contextlib.contextmanager
def extension_consent_config_overwrite(val):
config = Configuration.get_instance()
original_consent = config.get_config(key=Configuration.EXTENSION_COLLECT_TELEMETRY)
config.set_config(key=Configuration.EXTENSION_COLLECT_TELEMETRY, value=val)
try:
yield
finally:
if original_consent:
config.set_config(key=Configuration.EXTENSION_COLLECT_TELEMETRY, value=original_consent)
else:
config.set_config(key=Configuration.EXTENSION_COLLECT_TELEMETRY, value=True)
RUNS_DIR = "./tests/test_configs/runs"
FLOWS_DIR = "./tests/test_configs/flows"
@pytest.mark.timeout(timeout=DEFAULT_TEST_TIMEOUT, method=PYTEST_TIMEOUT_METHOD)
@pytest.mark.usefixtures("mock_set_headers_with_user_aml_token", "single_worker_thread_pool", "vcr_recording")
@pytest.mark.e2etest
class TestTelemetry:
def test_logging_handler(self):
# override environment variable
with cli_consent_config_overwrite(True):
handler = get_appinsights_log_handler()
assert isinstance(handler, PromptFlowSDKLogHandler)
assert handler._is_telemetry_enabled is True
with cli_consent_config_overwrite(False):
handler = get_appinsights_log_handler()
assert isinstance(handler, PromptFlowSDKLogHandler)
assert handler._is_telemetry_enabled is False
def test_call_from_extension(self):
from promptflow._core.operation_context import OperationContext
assert call_from_extension() is False
with environment_variable_overwrite(PF_USER_AGENT, "prompt-flow-extension/1.0.0"):
assert call_from_extension() is True
# remove extension ua in context
context = OperationContext.get_instance()
context.user_agent = context.user_agent.replace("prompt-flow-extension/1.0.0", "")
def test_custom_event(self, pf):
from promptflow._sdk._telemetry.logging_handler import PromptFlowSDKLogHandler
def log_event(*args, **kwargs):
record = args[0]
assert record.custom_dimensions is not None
logger = get_telemetry_logger()
handler = logger.handlers[0]
assert isinstance(handler, PromptFlowSDKLogHandler)
envelope = handler.log_record_to_envelope(record)
custom_dimensions = pydash.get(envelope, "data.baseData.properties")
assert isinstance(custom_dimensions, dict)
# Note: need privacy review if we add new fields.
if "start" in record.message:
assert custom_dimensions.keys() == {
"request_id",
"activity_name",
"activity_type",
"subscription_id",
"resource_group_name",
"workspace_name",
"level",
"python_version",
"user_agent",
"installation_id",
"first_call",
"from_ci",
}
elif "complete" in record.message:
assert custom_dimensions.keys() == {
"request_id",
"activity_name",
"activity_type",
"subscription_id",
"resource_group_name",
"workspace_name",
"completion_status",
"duration_ms",
"level",
"python_version",
"user_agent",
"installation_id",
"first_call",
"from_ci",
}
else:
raise ValueError("Invalid message: {}".format(record.message))
assert record.message.startswith("pfazure.runs.get")
with patch.object(PromptFlowSDKLogHandler, "emit") as mock_logger:
mock_logger.side_effect = log_event
# mock_error_logger.side_effect = log_event
try:
pf.runs.get("not_exist")
except RunNotFoundError:
pass
def test_default_logging_behavior(self):
assert is_telemetry_enabled() is True
# default enable telemetry
logger = get_telemetry_logger()
handler = logger.handlers[0]
assert isinstance(handler, PromptFlowSDKLogHandler)
assert handler._is_telemetry_enabled is True
def test_close_logging_handler(self):
with cli_consent_config_overwrite(False):
logger = get_telemetry_logger()
handler = logger.handlers[0]
assert isinstance(handler, PromptFlowSDKLogHandler)
assert handler._is_telemetry_enabled is False
with extension_consent_config_overwrite(False):
with environment_variable_overwrite(PF_USER_AGENT, "prompt-flow-extension/1.0.0"):
logger = get_telemetry_logger()
handler = logger.handlers[0]
assert isinstance(handler, PromptFlowSDKLogHandler)
assert handler._is_telemetry_enabled is False
# default enable telemetry
logger = get_telemetry_logger()
handler = logger.handlers[0]
assert isinstance(handler, PromptFlowSDKLogHandler)
assert handler._is_telemetry_enabled is True
def test_cached_logging_handler(self):
# should get same logger & handler instance if called multiple times
logger = get_telemetry_logger()
handler = next((h for h in logger.handlers if isinstance(h, PromptFlowSDKLogHandler)), None)
another_logger = get_telemetry_logger()
another_handler = next((h for h in another_logger.handlers if isinstance(h, PromptFlowSDKLogHandler)), None)
assert logger is another_logger
assert handler is another_handler
def test_sdk_telemetry_ua(self, pf):
from promptflow import PFClient
from promptflow.azure import PFClient as PFAzureClient
# log activity will pick correct ua
def assert_ua(*args, **kwargs):
ua = pydash.get(kwargs, "extra.custom_dimensions.user_agent", None)
ua_dict = parse_ua_to_dict(ua)
assert ua_dict.keys() == {"promptflow-sdk"}
logger = MagicMock()
logger.info = MagicMock()
logger.info.side_effect = assert_ua
# clear user agent before test
context = OperationContext.get_instance()
context.user_agent = ""
# get telemetry logger from SDK should not have extension ua
# start a clean local SDK client
with environment_variable_overwrite(PF_USER_AGENT, ""):
PFClient()
user_agent = ClientUserAgentUtil.get_user_agent()
ua_dict = parse_ua_to_dict(user_agent)
assert ua_dict.keys() == {"promptflow-sdk"}
# Call log_activity
with log_activity(logger, "test_activity", activity_type=ActivityType.PUBLICAPI):
# Perform some activity
pass
# start a clean Azure SDK client
with environment_variable_overwrite(PF_USER_AGENT, ""):
PFAzureClient(
ml_client=pf._ml_client,
subscription_id=pf._ml_client.subscription_id,
resource_group_name=pf._ml_client.resource_group_name,
workspace_name=pf._ml_client.workspace_name,
)
user_agent = ClientUserAgentUtil.get_user_agent()
ua_dict = parse_ua_to_dict(user_agent)
assert ua_dict.keys() == {"promptflow-sdk"}
# Call log_activity
with log_activity(logger, "test_activity", activity_type=ActivityType.PUBLICAPI):
# Perform some activity
pass
PFAzureClient(
ml_client=pf._ml_client,
subscription_id=pf._ml_client.subscription_id,
resource_group_name=pf._ml_client.resource_group_name,
workspace_name=pf._ml_client.workspace_name,
user_agent="a/1.0.0",
)
user_agent = ClientUserAgentUtil.get_user_agent()
ua_dict = parse_ua_to_dict(user_agent)
assert ua_dict.keys() == {"promptflow-sdk", "a"}
context = OperationContext.get_instance()
context.user_agent = ""
def test_inner_function_call(self, pf, runtime: str, randstr: Callable[[str], str]):
request_ids = set()
first_sdk_calls = []
def check_inner_call(*args, **kwargs):
if "extra" in kwargs:
request_id = pydash.get(kwargs, "extra.custom_dimensions.request_id")
first_sdk_call = pydash.get(kwargs, "extra.custom_dimensions.first_call")
request_ids.add(request_id)
first_sdk_calls.append(first_sdk_call)
with patch.object(Logger, "info") as mock_logger:
mock_logger.side_effect = check_inner_call
run = load_run(
source=f"{RUNS_DIR}/run_with_env.yaml",
params_override=[{"runtime": runtime}],
)
run.name = randstr("name")
pf.runs.create_or_update(run=run)
# only 1 request id
assert len(request_ids) == 1
# only the first and last SDK calls are public (top-level) calls
assert first_sdk_calls[0] is True
assert first_sdk_calls[-1] is True
assert set(first_sdk_calls[1:-1]) == {False}
def test_different_request_id(self):
from promptflow import PFClient
pf = PFClient()
request_ids = set()
first_sdk_calls = []
def check_inner_call(*args, **kwargs):
if "extra" in kwargs:
request_id = pydash.get(kwargs, "extra.custom_dimensions.request_id")
first_sdk_call = pydash.get(kwargs, "extra.custom_dimensions.first_call")
request_ids.add(request_id)
first_sdk_calls.append(first_sdk_call)
with patch.object(Logger, "info") as mock_logger:
mock_logger.side_effect = check_inner_call
run = load_run(
source=f"{RUNS_DIR}/run_with_env.yaml",
)
# create 2 times will get 2 request ids
run.name = str(uuid.uuid4())
pf.runs.create_or_update(run=run)
run.name = str(uuid.uuid4())
pf.runs.create_or_update(run=run)
# two create calls yield two distinct request ids
assert len(request_ids) == 2
# the first and last SDK calls are public (top-level) calls
assert first_sdk_calls[0] is True
assert first_sdk_calls[-1] is True
def test_scrub_fields(self):
from promptflow import PFClient
pf = PFClient()
from promptflow._sdk._telemetry.logging_handler import PromptFlowSDKLogHandler
def log_event(*args, **kwargs):
record = args[0]
assert record.custom_dimensions is not None
logger = get_telemetry_logger()
handler = logger.handlers[0]
assert isinstance(handler, PromptFlowSDKLogHandler)
envelope = handler.log_record_to_envelope(record)
# device name removed
assert "ai.cloud.roleInstance" not in envelope.tags
assert "ai.device.id" not in envelope.tags
# role name should be scrubbed or kept in whitelist
assert envelope.tags["ai.cloud.role"] in [os.path.basename(sys.argv[0]), "***"]
with patch.object(PromptFlowSDKLogHandler, "emit") as mock_logger:
mock_logger.side_effect = log_event
# mock_error_logger.side_effect = log_event
try:
pf.runs.get("not_exist")
except RunNotFoundError:
pass
def test_different_event_for_node_run(self):
from promptflow import PFClient
pf = PFClient()
from promptflow._sdk._telemetry.logging_handler import PromptFlowSDKLogHandler
def assert_node_run(*args, **kwargs):
record = args[0]
assert record.msg.startswith("pf.flows.node_test")
assert record.custom_dimensions["activity_name"] == "pf.flows.node_test"
def assert_flow_test(*args, **kwargs):
record = args[0]
assert record.msg.startswith("pf.flows.test")
assert record.custom_dimensions["activity_name"] == "pf.flows.test"
with tempfile.TemporaryDirectory() as temp_dir:
shutil.copytree((Path(FLOWS_DIR) / "print_env_var").resolve().as_posix(), temp_dir, dirs_exist_ok=True)
with patch.object(PromptFlowSDKLogHandler, "emit") as mock_logger:
mock_logger.side_effect = assert_node_run
pf.flows.test(temp_dir, node="print_env", inputs={"key": "API_BASE"})
with patch.object(PromptFlowSDKLogHandler, "emit") as mock_logger:
mock_logger.side_effect = assert_flow_test
pf.flows.test(temp_dir, inputs={"key": "API_BASE"})
| promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_telemetry.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_telemetry.py",
"repo_id": "promptflow",
"token_count": 6615
} | 56 |
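Several assertions above rely on `parse_ua_to_dict` turning a space-separated user-agent string into a `{name: version}` mapping. A sketch of that helper's apparent contract (an assumption based on how the tests use it, not the promptflow source):

```python
def parse_ua_to_dict(ua):
    # "promptflow-sdk/1.0.0 a/2.0" -> {"promptflow-sdk": "1.0.0", "a": "2.0"}
    result = {}
    for part in ua.split():
        if "/" in part:
            name, _, version = part.partition("/")
            result[name] = version
    return result

print(parse_ua_to_dict("promptflow-sdk/1.0.0 a/2.0"))
```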
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import shutil
from pathlib import Path
from tempfile import TemporaryDirectory
from unittest.mock import Mock
import pytest
from promptflow._sdk.entities import Run
from promptflow._utils.flow_utils import get_flow_lineage_id
from promptflow.exceptions import UserErrorException
PROMPTFLOW_ROOT = Path(__file__) / "../../../.."
TEST_ROOT = Path(__file__).parent.parent.parent
MODEL_ROOT = TEST_ROOT / "test_configs/e2e_samples"
CONNECTION_FILE = (PROMPTFLOW_ROOT / "connections.json").resolve().absolute().as_posix()
FLOWS_DIR = "./tests/test_configs/flows"
RUNS_DIR = "./tests/test_configs/runs"
DATAS_DIR = "./tests/test_configs/datas"
@pytest.mark.unittest
class TestRun:
def test_input_mapping_types(self, pf):
data_path = f"{DATAS_DIR}/webClassification3.jsonl"
flow_path = Path(f"{FLOWS_DIR}/flow_with_dict_input")
# run with dict inputs
run = Run(
flow=flow_path,
data=data_path,
column_mapping=dict(key={"a": 1}),
)
rest_run = run._to_rest_object()
assert rest_run.inputs_mapping == {"key": '{"a": 1}'}
# run with list inputs
run = Run(
flow=flow_path,
data=data_path,
column_mapping=dict(key=["a", "b"]),
)
rest_run = run._to_rest_object()
assert rest_run.inputs_mapping == {"key": '["a", "b"]'}
# unsupported inputs
run = Run(
flow=flow_path,
data=data_path,
column_mapping=dict(key=Mock()),
)
with pytest.raises(UserErrorException):
run._to_rest_object()
run = Run(flow=flow_path, data=data_path, column_mapping="str")
with pytest.raises(UserErrorException):
run._to_rest_object()
def test_flow_id(self):
# same flow id for same flow in same GIT repo
flow_path = Path(f"{FLOWS_DIR}/flow_with_dict_input")
# run with dict inputs
session_id1 = get_flow_lineage_id(flow_path)
session_id2 = get_flow_lineage_id(flow_path)
assert session_id1 == session_id2
# same lineage id for the same path on the same device; a copied flow gets a different id
with TemporaryDirectory() as tmp_dir:
shutil.copytree(f"{FLOWS_DIR}/flow_with_dict_input", f"{tmp_dir}/flow_with_dict_input")
session_id3 = get_flow_lineage_id(f"{tmp_dir}/flow_with_dict_input")
session_id4 = get_flow_lineage_id(f"{tmp_dir}/flow_with_dict_input")
assert session_id3 == session_id4
assert session_id3 != session_id1
| promptflow/src/promptflow/tests/sdk_cli_azure_test/unittests/test_run_entity.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/unittests/test_run_entity.py",
"repo_id": "promptflow",
"token_count": 1208
} | 57 |
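The `_to_rest_object` assertions above show dict and list column-mapping values being JSON-encoded into strings, while unsupported value types raise a user error. A minimal sketch of that encoding rule — `serialize_mapping` is a hypothetical helper name (the real conversion lives inside `Run._to_rest_object` and raises `UserErrorException`):

```python
import json

def serialize_mapping(column_mapping):
    # The mapping itself must be a dict (a plain string is rejected,
    # mirroring the last assertion in the test above).
    if not isinstance(column_mapping, dict):
        raise ValueError(f"column_mapping must be a dict, got {type(column_mapping)}")
    result = {}
    for key, value in column_mapping.items():
        if isinstance(value, (dict, list)):
            # dict/list values become JSON strings, e.g. {"a": 1} -> '{"a": 1}'
            result[key] = json.dumps(value)
        elif isinstance(value, str):
            result[key] = value
        else:
            raise ValueError(f"unsupported mapping value for {key!r}: {type(value)}")
    return result

assert serialize_mapping({"key": {"a": 1}}) == {"key": '{"a": 1}'}
assert serialize_mapping({"key": ["a", "b"]}) == {"key": '["a", "b"]'}
```

Note that `json.dumps` with default separators produces exactly the `'{"a": 1}'` form the test asserts against.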
from pathlib import Path
import pytest
from ruamel.yaml import YAML
from promptflow import PFClient
from promptflow._sdk._constants import ExperimentStatus, RunStatus
from promptflow._sdk._load_functions import load_common
from promptflow._sdk.entities._experiment import (
CommandNode,
Experiment,
ExperimentData,
ExperimentInput,
ExperimentTemplate,
FlowNode,
)
TEST_ROOT = Path(__file__).parent.parent.parent
EXP_ROOT = TEST_ROOT / "test_configs/experiments"
FLOW_ROOT = TEST_ROOT / "test_configs/flows"
yaml = YAML(typ="safe")
@pytest.mark.e2etest
@pytest.mark.usefixtures("setup_experiment_table")
class TestExperiment:
def test_experiment_from_template(self):
template_path = EXP_ROOT / "basic-no-script-template" / "basic.exp.yaml"
# Load template and create experiment
template = load_common(ExperimentTemplate, source=template_path)
experiment = Experiment.from_template(template)
# Assert experiment parts are resolved
assert len(experiment.nodes) == 2
assert all(isinstance(n, FlowNode) for n in experiment.nodes)
assert len(experiment.data) == 1
assert isinstance(experiment.data[0], ExperimentData)
assert len(experiment.inputs) == 1
assert isinstance(experiment.inputs[0], ExperimentInput)
# Assert type is resolved
assert experiment.inputs[0].default == 1
# Pop schema and resolve path
expected = dict(yaml.load(open(template_path, "r", encoding="utf-8").read()))
expected.pop("$schema")
expected["data"][0]["path"] = (FLOW_ROOT / "web_classification" / "data.jsonl").absolute().as_posix()
expected["nodes"][0]["path"] = (experiment._output_dir / "snapshots" / "main").absolute().as_posix()
expected["nodes"][1]["path"] = (experiment._output_dir / "snapshots" / "eval").absolute().as_posix()
experiment_dict = experiment._to_dict()
assert experiment_dict["data"][0].items() == expected["data"][0].items()
assert experiment_dict["nodes"][0].items() == expected["nodes"][0].items()
assert experiment_dict["nodes"][1].items() == expected["nodes"][1].items()
assert experiment_dict.items() >= expected.items()
def test_experiment_from_template_with_script_node(self):
template_path = EXP_ROOT / "basic-script-template" / "basic-script.exp.yaml"
# Load template and create experiment
template = load_common(ExperimentTemplate, source=template_path)
experiment = Experiment.from_template(template)
# Assert command node load correctly
assert len(experiment.nodes) == 4
expected = dict(yaml.load(open(template_path, "r", encoding="utf-8").read()))
experiment_dict = experiment._to_dict()
assert isinstance(experiment.nodes[0], CommandNode)
assert isinstance(experiment.nodes[1], FlowNode)
assert isinstance(experiment.nodes[2], FlowNode)
assert isinstance(experiment.nodes[3], CommandNode)
gen_data_snapshot_path = experiment._output_dir / "snapshots" / "gen_data"
echo_snapshot_path = experiment._output_dir / "snapshots" / "echo"
expected["nodes"][0]["code"] = gen_data_snapshot_path.absolute().as_posix()
expected["nodes"][3]["code"] = echo_snapshot_path.absolute().as_posix()
expected["nodes"][3]["environment_variables"] = {}
assert experiment_dict["nodes"][0].items() == expected["nodes"][0].items()
assert experiment_dict["nodes"][3].items() == expected["nodes"][3].items()
# Assert snapshots
assert gen_data_snapshot_path.exists()
file_count = len(list(gen_data_snapshot_path.rglob("*")))
assert file_count == 1
assert (gen_data_snapshot_path / "generate_data.py").exists()
# Assert no file exists in echo path
assert echo_snapshot_path.exists()
file_count = len(list(echo_snapshot_path.rglob("*")))
assert file_count == 0
def test_experiment_create_and_get(self):
template_path = EXP_ROOT / "basic-no-script-template" / "basic.exp.yaml"
# Load template and create experiment
template = load_common(ExperimentTemplate, source=template_path)
experiment = Experiment.from_template(template)
client = PFClient()
exp = client._experiments.create_or_update(experiment)
assert len(client._experiments.list()) > 0
exp_get = client._experiments.get(name=exp.name)
assert exp_get._to_dict() == exp._to_dict()
@pytest.mark.usefixtures("use_secrets_config_file", "recording_injection", "setup_local_connection")
def test_experiment_start(self):
template_path = EXP_ROOT / "basic-no-script-template" / "basic.exp.yaml"
# Load template and create experiment
template = load_common(ExperimentTemplate, source=template_path)
experiment = Experiment.from_template(template)
client = PFClient()
exp = client._experiments.create_or_update(experiment)
exp = client._experiments.start(exp.name)
assert exp.status == ExperimentStatus.TERMINATED
# Assert main run
assert len(exp.node_runs["main"]) > 0
main_run = client.runs.get(name=exp.node_runs["main"][0]["name"])
assert main_run.status == RunStatus.COMPLETED
assert main_run.variant == "${summarize_text_content.variant_0}"
assert main_run.display_name == "main"
assert len(exp.node_runs["eval"]) > 0
# Assert eval run and metrics
eval_run = client.runs.get(name=exp.node_runs["eval"][0]["name"])
assert eval_run.status == RunStatus.COMPLETED
assert eval_run.display_name == "eval"
metrics = client.runs.get_metrics(name=eval_run.name)
assert "accuracy" in metrics
@pytest.mark.usefixtures("use_secrets_config_file", "recording_injection", "setup_local_connection")
def test_experiment_with_script_start(self):
template_path = EXP_ROOT / "basic-script-template" / "basic-script.exp.yaml"
# Load template and create experiment
template = load_common(ExperimentTemplate, source=template_path)
experiment = Experiment.from_template(template)
client = PFClient()
exp = client._experiments.create_or_update(experiment)
exp = client._experiments.start(exp.name)
assert exp.status == ExperimentStatus.TERMINATED
assert len(exp.node_runs) == 4
for key, val in exp.node_runs.items():
assert val[0]["status"] == RunStatus.COMPLETED, f"Node {key} run failed"
| promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_experiment.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_experiment.py",
"repo_id": "promptflow",
"token_count": 2569
} | 58 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path
from unittest.mock import patch
import pytest
from promptflow._cli._pf._connection import validate_and_interactive_get_secrets
from promptflow._sdk._constants import SCRUBBED_VALUE, CustomStrongTypeConnectionConfigs
from promptflow._sdk._load_functions import _load_env_to_connection
from promptflow._sdk.entities._connection import (
AzureContentSafetyConnection,
AzureOpenAIConnection,
CognitiveSearchConnection,
CustomConnection,
FormRecognizerConnection,
OpenAIConnection,
QdrantConnection,
SerpConnection,
WeaviateConnection,
_Connection,
)
from promptflow._utils.yaml_utils import load_yaml
from promptflow.exceptions import UserErrorException
TEST_ROOT = Path(__file__).parent.parent.parent
CONNECTION_ROOT = TEST_ROOT / "test_configs/connections"
@pytest.mark.unittest
class TestConnection:
@pytest.mark.parametrize(
"file_name, class_name, init_param, expected",
[
(
"azure_openai_connection.yaml",
AzureOpenAIConnection,
{
"name": "my_azure_open_ai_connection",
"api_type": "azure",
"api_version": "2023-07-01-preview",
"api_key": "<to-be-replaced>",
"api_base": "aoai-api-endpoint",
},
{
"module": "promptflow.connections",
"type": "azure_open_ai",
},
),
(
"openai_connection.yaml",
OpenAIConnection,
{
"name": "my_open_ai_connection",
"api_key": "<to-be-replaced>",
"organization": "org",
},
{
"module": "promptflow.connections",
"type": "open_ai",
},
),
(
"openai_connection_base_url.yaml",
OpenAIConnection,
{
"name": "my_open_ai_connection",
"api_key": "<to-be-replaced>",
"organization": "org",
"base_url": "custom_base_url",
},
{
"module": "promptflow.connections",
"type": "open_ai",
},
),
(
"custom_connection.yaml",
CustomConnection,
{
"name": "my_custom_connection",
"configs": {"key1": "test1"},
"secrets": {"key2": "test2"},
},
{
"module": "promptflow.connections",
"type": "custom",
},
),
(
"azure_content_safety_connection.yaml",
AzureContentSafetyConnection,
{
"name": "my_azure_content_safety_connection",
"api_key": "<to-be-replaced>",
"endpoint": "endpoint",
"api_version": "2023-04-30-preview",
"api_type": "Content Safety",
},
{
"module": "promptflow.connections",
"type": "azure_content_safety",
},
),
(
"cognitive_search_connection.yaml",
CognitiveSearchConnection,
{
"name": "my_cognitive_search_connection",
"api_key": "<to-be-replaced>",
"api_base": "endpoint",
"api_version": "2023-07-01-Preview",
},
{
"module": "promptflow.connections",
"type": "cognitive_search",
},
),
(
"serp_connection.yaml",
SerpConnection,
{
"name": "my_serp_connection",
"api_key": "<to-be-replaced>",
},
{
"module": "promptflow.connections",
"type": "serp",
},
),
(
"form_recognizer_connection.yaml",
FormRecognizerConnection,
{
"name": "my_form_recognizer_connection",
"api_key": "<to-be-replaced>",
"endpoint": "endpoint",
"api_version": "2023-07-31",
"api_type": "Form Recognizer",
},
{
"module": "promptflow.connections",
"type": "form_recognizer",
},
),
(
"qdrant_connection.yaml",
QdrantConnection,
{
"name": "my_qdrant_connection",
"api_key": "<to-be-replaced>",
"api_base": "endpoint",
},
{
"module": "promptflow_vectordb.connections",
"type": "qdrant",
},
),
(
"weaviate_connection.yaml",
WeaviateConnection,
{
"name": "my_weaviate_connection",
"api_key": "<to-be-replaced>",
"api_base": "endpoint",
},
{
"module": "promptflow_vectordb.connections",
"type": "weaviate",
},
),
],
)
def test_connection_load_dump(self, file_name, class_name, init_param, expected):
conn = _Connection._load(data=load_yaml(CONNECTION_ROOT / file_name))
expected = {**expected, **init_param}
assert dict(conn._to_dict()) == expected
assert class_name(**init_param)._to_dict() == expected
def test_connection_load_from_env(self):
connection = _load_env_to_connection(source=CONNECTION_ROOT / ".env", params_override=[{"name": "env_conn"}])
assert connection._to_dict() == {
"name": "env_conn",
"module": "promptflow.connections",
"type": "custom",
"configs": {},
"secrets": {"aaa": "bbb", "ccc": "ddd"},
}
assert (
connection.__str__()
== """name: env_conn
module: promptflow.connections
type: custom
configs: {}
secrets:
aaa: bbb
ccc: ddd
"""
)
def test_connection_load_from_env_file_bad_case(self):
# Test file not found
with pytest.raises(FileNotFoundError) as e:
_load_env_to_connection(source=CONNECTION_ROOT / "mock.env", params_override=[{"name": "env_conn"}])
assert "not found" in str(e.value)
# Test file empty
with pytest.raises(Exception) as e:
_load_env_to_connection(source=CONNECTION_ROOT / "empty.env", params_override=[{"name": "env_conn"}])
assert "Load nothing" in str(e.value)
def test_to_execution_connection_dict(self):
# Assert custom connection build
connection = CustomConnection(name="test_connection", configs={"a": "1"}, secrets={"b": "2"})
assert connection._to_execution_connection_dict() == {
"module": "promptflow.connections",
"secret_keys": ["b"],
"type": "CustomConnection",
"value": {"a": "1", "b": "2"},
}
# Assert strong type - AzureOpenAI
connection = AzureOpenAIConnection(
name="test_connection_1",
type="AzureOpenAI",
api_key="test_key",
api_base="test_base",
api_type="azure",
api_version="2023-07-01-preview",
)
assert connection._to_execution_connection_dict() == {
"module": "promptflow.connections",
"secret_keys": ["api_key"],
"type": "AzureOpenAIConnection",
"value": {
"api_base": "test_base",
"api_key": "test_key",
"api_type": "azure",
"api_version": "2023-07-01-preview",
},
}
# Assert strong type - OpenAI
connection = OpenAIConnection(
name="test_connection_1",
        type="OpenAI",
api_key="test_key",
organization="test_org",
)
assert connection._to_execution_connection_dict() == {
"module": "promptflow.connections",
"secret_keys": ["api_key"],
"type": "OpenAIConnection",
"value": {"api_key": "test_key", "organization": "test_org"},
}
def test_validate_and_interactive_get_secrets(self):
# Path 1: Create
connection = CustomConnection(
name="test_connection",
secrets={"key1": SCRUBBED_VALUE, "key2": "", "key3": "<no-change>", "key4": "<user-input>", "key5": "**"},
)
with patch("promptflow._cli._pf._connection.get_secret_input", new=lambda prompt: "test_value"):
validate_and_interactive_get_secrets(connection, is_update=False)
assert connection.secrets == {
"key1": "test_value",
"key2": "test_value",
"key3": "test_value",
"key4": "test_value",
"key5": "test_value",
}
# Path 2: Update
# Scrubbed value will be filled in _validate_and_encrypt_secrets for update, so no changes here.
connection = CustomConnection(
name="test_connection",
secrets={"key1": SCRUBBED_VALUE, "key2": "", "key3": "<no-change>", "key4": "<user-input>", "key5": "**"},
)
with patch("promptflow._cli._pf._connection.get_secret_input", new=lambda prompt: "test_value"):
validate_and_interactive_get_secrets(connection, is_update=True)
assert connection.secrets == {
"key1": SCRUBBED_VALUE,
"key2": "",
"key3": "<no-change>",
"key4": "test_value",
"key5": "**",
}
def test_validate_and_encrypt_secrets(self):
# Path 1: Create
connection = CustomConnection(
name="test_connection",
secrets={"key1": SCRUBBED_VALUE, "key2": "", "key3": "<no-change>", "key4": "<user-input>", "key5": "**"},
)
with pytest.raises(Exception) as e:
connection._validate_and_encrypt_secrets()
assert "secrets ['key1', 'key2', 'key3', 'key4', 'key5'] value invalid, please fill them" in str(e.value)
# Path 2: Update
connection._secrets = {"key1": "val1", "key2": "val2", "key4": "val4", "key5": "*"}
# raise error for key3 as original value missing.
# raise error for key5 as original value still scrubbed.
# raise error for key4 even if it was in _secrets, because it requires <user-input>.
with pytest.raises(Exception) as e:
connection._validate_and_encrypt_secrets()
assert "secrets ['key3', 'key4', 'key5'] value invalid, please fill them" in str(e.value)
def test_convert_to_custom_strong_type(self, install_custom_tool_pkg):
module_name = "my_tool_package.tools.my_tool_2"
custom_conn_type = "MyFirstConnection"
import importlib
module = importlib.import_module(module_name)
# Connection created by custom strong type connection template for package tool
connection = CustomConnection(
name="test_connection",
configs={
"a": "1",
CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY: module_name,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY: custom_conn_type,
},
secrets={"b": "2"},
)
res = connection._convert_to_custom_strong_type()
assert isinstance(res, module.MyFirstConnection)
assert res.secrets == {"b": "2"}
# Connection created by custom connection template for script tool
connection = CustomConnection(name="test_connection", configs={"a": "1"}, secrets={"b": "2"})
res = connection._convert_to_custom_strong_type(module=module, to_class=custom_conn_type)
assert isinstance(res, module.MyFirstConnection)
assert res.configs == {"a": "1"}
# Connection created with custom connection type in portal for package tool
connection._convert_to_custom_strong_type(module=module_name, to_class=custom_conn_type)
assert isinstance(res, module.MyFirstConnection)
assert res.configs == {"a": "1"}
# Invalid module
module_name = "not_existing_module"
with pytest.raises(ModuleNotFoundError, match=r".*No module named 'not_existing_module'*"):
connection._convert_to_custom_strong_type(module=module_name, to_class=custom_conn_type)
module_name = None
with pytest.raises(
UserErrorException,
match=r".*Failed to convert to custom strong type connection because of invalid module or class*",
):
connection._convert_to_custom_strong_type(module=module_name, to_class=custom_conn_type)
custom_conn_type = None
with pytest.raises(
UserErrorException,
match=r".*Failed to convert to custom strong type connection because of invalid module or class*",
):
connection._convert_to_custom_strong_type(module=module_name, to_class=custom_conn_type)
| promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_connection.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_connection.py",
"repo_id": "promptflow",
"token_count": 7157
} | 59 |
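The secret-handling tests above distinguish create from update: on create, every scrubbed or sentinel value is re-prompted; on update, only the explicit `<user-input>` sentinel forces a prompt, and scrubbed placeholders are filled from storage later. A simplified sketch of that rule — the constants and helper names here are illustrative, not the SDK's API:

```python
USER_INPUT = "<user-input>"  # sentinel that always forces a re-prompt

def _is_scrubbed(value):
    # Assumption: empty strings, runs of '*', and the <no-change>
    # sentinel all behave as scrubbed placeholders.
    return value in ("", "<no-change>") or set(value) == {"*"}

def resolve_secrets(secrets, is_update, prompt):
    resolved = {}
    for key, value in secrets.items():
        if value == USER_INPUT:
            resolved[key] = prompt(key)  # always re-prompt
        elif _is_scrubbed(value):
            # Create: no stored value exists, so prompt now.
            # Update: keep the placeholder; storage fills it in later.
            resolved[key] = value if is_update else prompt(key)
        else:
            resolved[key] = value
    return resolved

# Create path: everything scrubbed gets prompted.
out = resolve_secrets({"a": "******", "b": USER_INPUT, "c": "real"},
                      is_update=False, prompt=lambda k: "typed")
assert out == {"a": "typed", "b": "typed", "c": "real"}
```

On the update path the same call with `is_update=True` leaves `"a"` as its placeholder and only re-prompts `"b"`, matching the second half of `test_validate_and_interactive_get_secrets`.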
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import subprocess
import sys
from time import sleep
import pytest
import requests
from promptflow._sdk._service.entry import main
from promptflow._sdk._service.utils.utils import get_port_from_config, get_random_port, kill_exist_service
@pytest.mark.e2etest
class TestPromptflowServiceCLI:
def _run_pfs_command(self, *args):
"""Run a pfs command with the given arguments."""
origin_argv = sys.argv
try:
sys.argv = ["pfs"] + list(args)
main()
finally:
sys.argv = origin_argv
def _test_start_service(self, port=None, force=False):
command = f"pfs start --port {port}" if port else "pfs start"
if force:
command = f"{command} --force"
start_pfs = subprocess.Popen(command, shell=True)
# Wait for service to be started
sleep(5)
assert self._is_service_healthy()
start_pfs.terminate()
start_pfs.wait(10)
def _is_service_healthy(self, port=None):
port = port or get_port_from_config()
response = requests.get(f"http://localhost:{port}/heartbeat")
return response.status_code == 200
def test_start_service(self):
try:
# start pfs by pf.yaml
self._test_start_service()
# Start pfs by specified port
random_port = get_random_port()
self._test_start_service(port=random_port, force=True)
# Force start pfs
start_pfs = subprocess.Popen("pfs start", shell=True)
# Wait for service to be started
sleep(5)
self._test_start_service(force=True)
# previous pfs is killed
assert start_pfs.poll() is not None
finally:
port = get_port_from_config()
kill_exist_service(port=port)
def test_show_service_status(self, capsys):
with pytest.raises(SystemExit):
self._run_pfs_command("show-status")
start_pfs = subprocess.Popen("pfs start", shell=True)
# Wait for service to be started
sleep(5)
self._run_pfs_command("show-status")
output, _ = capsys.readouterr()
assert str(get_port_from_config()) in output
start_pfs.terminate()
start_pfs.wait(10)
| promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_cli.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_cli.py",
"repo_id": "promptflow",
"token_count": 1061
} | 60 |
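`_is_service_healthy` above probes the `/heartbeat` route with `requests`. An equivalent stdlib-only probe, assuming the same route and port semantics, might look like:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_service_healthy(port, timeout=3.0):
    # Returns True only when the heartbeat endpoint answers 200;
    # connection refused / timeout both count as unhealthy.
    try:
        with urlopen(f"http://localhost:{port}/heartbeat", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

A caller would poll this after spawning the service, much like the `sleep(5)` + health-check sequence in `_test_start_service`.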
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/FormRecognizerConnection.schema.json
name: my_form_recognizer_connection
type: form_recognizer
api_key: "<to-be-replaced>"
endpoint: "endpoint"
api_version: "2023-07-31"
api_type: Form Recognizer
| promptflow/src/promptflow/tests/test_configs/connections/form_recognizer_connection.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/connections/form_recognizer_connection.yaml",
"repo_id": "promptflow",
"token_count": 96
} | 61 |
{"text":"data_0000"}
{"text":"data_0001"}
{"text":"data_0002"}
{"text":"data_0003"}
{"text":"data_0004"}
{"text":"data_0005"}
{"text":"data_0006"}
{"text":"data_0007"}
{"text":"data_0008"}
{"text":"data_0009"}
{"text":"data_0010"}
{"text":"data_0011"}
{"text":"data_0012"}
{"text":"data_0013"}
{"text":"data_0014"}
{"text":"data_0015"}
{"text":"data_0016"}
{"text":"data_0017"}
{"text":"data_0018"}
{"text":"data_0019"}
{"text":"data_0020"}
{"text":"data_0021"}
{"text":"data_0022"}
{"text":"data_0023"}
{"text":"data_0024"}
{"text":"data_0025"}
{"text":"data_0026"}
{"text":"data_0027"}
{"text":"data_0028"}
{"text":"data_0029"}
{"text":"data_0030"}
{"text":"data_0031"}
{"text":"data_0032"}
{"text":"data_0033"}
{"text":"data_0034"}
{"text":"data_0035"}
{"text":"data_0036"}
{"text":"data_0037"}
{"text":"data_0038"}
{"text":"data_0039"}
{"text":"data_0040"}
{"text":"data_0041"}
{"text":"data_0042"}
{"text":"data_0043"}
{"text":"data_0044"}
{"text":"data_0045"}
{"text":"data_0046"}
{"text":"data_0047"}
{"text":"data_0048"}
{"text":"data_0049"}
{"text":"data_0050"}
{"text":"data_0051"}
{"text":"data_0052"}
{"text":"data_0053"}
{"text":"data_0054"}
{"text":"data_0055"}
{"text":"data_0056"}
{"text":"data_0057"}
{"text":"data_0058"}
{"text":"data_0059"}
{"text":"data_0060"}
{"text":"data_0061"}
{"text":"data_0062"}
{"text":"data_0063"}
{"text":"data_0064"}
{"text":"data_0065"}
{"text":"data_0066"}
{"text":"data_0067"}
{"text":"data_0068"}
{"text":"data_0069"}
{"text":"data_0070"}
{"text":"data_0071"}
{"text":"data_0072"}
{"text":"data_0073"}
{"text":"data_0074"}
{"text":"data_0075"}
{"text":"data_0076"}
{"text":"data_0077"}
{"text":"data_0078"}
{"text":"data_0079"}
{"text":"data_0080"}
{"text":"data_0081"}
{"text":"data_0082"}
{"text":"data_0083"}
{"text":"data_0084"}
{"text":"data_0085"}
{"text":"data_0086"}
{"text":"data_0087"}
{"text":"data_0088"}
{"text":"data_0089"}
{"text":"data_0090"}
{"text":"data_0091"}
{"text":"data_0092"}
{"text":"data_0093"}
{"text":"data_0094"}
{"text":"data_0095"}
{"text":"data_0096"}
{"text":"data_0097"}
{"text":"data_0098"}
{"text":"data_0099"}
{"text":"data_0100"}
{"text":"data_0101"}
{"text":"data_0102"}
{"text":"data_0103"}
{"text":"data_0104"}
{"text":"data_0105"}
{"text":"data_0106"}
{"text":"data_0107"}
{"text":"data_0108"}
{"text":"data_0109"}
{"text":"data_0110"}
{"text":"data_0111"}
{"text":"data_0112"}
{"text":"data_0113"}
{"text":"data_0114"}
{"text":"data_0115"}
{"text":"data_0116"}
{"text":"data_0117"}
{"text":"data_0118"}
{"text":"data_0119"}
{"text":"data_0120"}
{"text":"data_0121"}
{"text":"data_0122"}
{"text":"data_0123"}
{"text":"data_0124"}
{"text":"data_0125"}
{"text":"data_0126"}
{"text":"data_0127"}
{"text":"data_0128"}
{"text":"data_0129"}
{"text":"data_0130"}
{"text":"data_0131"}
{"text":"data_0132"}
{"text":"data_0133"}
{"text":"data_0134"}
{"text":"data_0135"}
{"text":"data_0136"}
{"text":"data_0137"}
{"text":"data_0138"}
{"text":"data_0139"}
{"text":"data_0140"}
{"text":"data_0141"}
{"text":"data_0142"}
{"text":"data_0143"}
{"text":"data_0144"}
{"text":"data_0145"}
{"text":"data_0146"}
{"text":"data_0147"}
{"text":"data_0148"}
{"text":"data_0149"}
{"text":"data_0150"}
{"text":"data_0151"}
{"text":"data_0152"}
{"text":"data_0153"}
{"text":"data_0154"}
{"text":"data_0155"}
{"text":"data_0156"}
{"text":"data_0157"}
{"text":"data_0158"}
{"text":"data_0159"}
{"text":"data_0160"}
{"text":"data_0161"}
{"text":"data_0162"}
{"text":"data_0163"}
{"text":"data_0164"}
{"text":"data_0165"}
{"text":"data_0166"}
{"text":"data_0167"}
{"text":"data_0168"}
{"text":"data_0169"}
{"text":"data_0170"}
{"text":"data_0171"}
{"text":"data_0172"}
{"text":"data_0173"}
{"text":"data_0174"}
{"text":"data_0175"}
{"text":"data_0176"}
{"text":"data_0177"}
{"text":"data_0178"}
{"text":"data_0179"}
{"text":"data_0180"}
{"text":"data_0181"}
{"text":"data_0182"}
{"text":"data_0183"}
{"text":"data_0184"}
{"text":"data_0185"}
{"text":"data_0186"}
{"text":"data_0187"}
{"text":"data_0188"}
{"text":"data_0189"}
{"text":"data_0190"}
{"text":"data_0191"}
{"text":"data_0192"}
{"text":"data_0193"}
{"text":"data_0194"}
{"text":"data_0195"}
{"text":"data_0196"}
{"text":"data_0197"}
{"text":"data_0198"}
{"text":"data_0199"}
{"text":"data_0200"}
{"text":"data_0201"}
{"text":"data_0202"}
{"text":"data_0203"}
{"text":"data_0204"}
{"text":"data_0205"}
{"text":"data_0206"}
{"text":"data_0207"}
{"text":"data_0208"}
{"text":"data_0209"}
{"text":"data_0210"}
{"text":"data_0211"}
{"text":"data_0212"}
{"text":"data_0213"}
{"text":"data_0214"}
{"text":"data_0215"}
{"text":"data_0216"}
{"text":"data_0217"}
{"text":"data_0218"}
{"text":"data_0219"}
{"text":"data_0220"}
{"text":"data_0221"}
{"text":"data_0222"}
{"text":"data_0223"}
{"text":"data_0224"}
{"text":"data_0225"}
{"text":"data_0226"}
{"text":"data_0227"}
{"text":"data_0228"}
{"text":"data_0229"}
{"text":"data_0230"}
{"text":"data_0231"}
{"text":"data_0232"}
{"text":"data_0233"}
{"text":"data_0234"}
{"text":"data_0235"}
{"text":"data_0236"}
{"text":"data_0237"}
{"text":"data_0238"}
{"text":"data_0239"}
{"text":"data_0240"}
{"text":"data_0241"}
{"text":"data_0242"}
{"text":"data_0243"}
{"text":"data_0244"}
{"text":"data_0245"}
{"text":"data_0246"}
{"text":"data_0247"}
{"text":"data_0248"}
{"text":"data_0249"}
{"text":"data_0250"}
{"text":"data_0251"}
{"text":"data_0252"}
{"text":"data_0253"}
{"text":"data_0254"}
{"text":"data_0255"}
{"text":"data_0256"}
{"text":"data_0257"}
{"text":"data_0258"}
{"text":"data_0259"}
{"text":"data_0260"}
{"text":"data_0261"}
{"text":"data_0262"}
{"text":"data_0263"}
{"text":"data_0264"}
{"text":"data_0265"}
{"text":"data_0266"}
{"text":"data_0267"}
{"text":"data_0268"}
{"text":"data_0269"}
{"text":"data_0270"}
{"text":"data_0271"}
{"text":"data_0272"}
{"text":"data_0273"}
{"text":"data_0274"}
{"text":"data_0275"}
{"text":"data_0276"}
{"text":"data_0277"}
{"text":"data_0278"}
{"text":"data_0279"}
{"text":"data_0280"}
{"text":"data_0281"}
{"text":"data_0282"}
{"text":"data_0283"}
{"text":"data_0284"}
{"text":"data_0285"}
{"text":"data_0286"}
{"text":"data_0287"}
{"text":"data_0288"}
{"text":"data_0289"}
{"text":"data_0290"}
{"text":"data_0291"}
{"text":"data_0292"}
{"text":"data_0293"}
{"text":"data_0294"}
{"text":"data_0295"}
{"text":"data_0296"}
{"text":"data_0297"}
{"text":"data_0298"}
{"text":"data_0299"}
{"text":"data_0300"}
{"text":"data_0301"}
{"text":"data_0302"}
{"text":"data_0303"}
{"text":"data_0304"}
{"text":"data_0305"}
{"text":"data_0306"}
{"text":"data_0307"}
{"text":"data_0308"}
{"text":"data_0309"}
{"text":"data_0310"}
{"text":"data_0311"}
{"text":"data_0312"}
{"text":"data_0313"}
{"text":"data_0314"}
{"text":"data_0315"}
{"text":"data_0316"}
{"text":"data_0317"}
{"text":"data_0318"}
{"text":"data_0319"}
{"text":"data_0320"}
{"text":"data_0321"}
{"text":"data_0322"}
{"text":"data_0323"}
{"text":"data_0324"}
{"text":"data_0325"}
{"text":"data_0326"}
{"text":"data_0327"}
{"text":"data_0328"}
{"text":"data_0329"}
{"text":"data_0330"}
{"text":"data_0331"}
{"text":"data_0332"}
{"text":"data_0333"}
{"text":"data_0334"}
{"text":"data_0335"}
{"text":"data_0336"}
{"text":"data_0337"}
{"text":"data_0338"}
{"text":"data_0339"}
{"text":"data_0340"}
{"text":"data_0341"}
{"text":"data_0342"}
{"text":"data_0343"}
{"text":"data_0344"}
{"text":"data_0345"}
{"text":"data_0346"}
{"text":"data_0347"}
{"text":"data_0348"}
{"text":"data_0349"}
{"text":"data_0350"}
{"text":"data_0351"}
{"text":"data_0352"}
{"text":"data_0353"}
{"text":"data_0354"}
{"text":"data_0355"}
{"text":"data_0356"}
{"text":"data_0357"}
{"text":"data_0358"}
{"text":"data_0359"}
{"text":"data_0360"}
{"text":"data_0361"}
{"text":"data_0362"}
{"text":"data_0363"}
{"text":"data_0364"}
{"text":"data_0365"}
{"text":"data_0366"}
{"text":"data_0367"}
{"text":"data_0368"}
{"text":"data_0369"}
{"text":"data_0370"}
{"text":"data_0371"}
{"text":"data_0372"}
{"text":"data_0373"}
{"text":"data_0374"}
{"text":"data_0375"}
{"text":"data_0376"}
{"text":"data_0377"}
{"text":"data_0378"}
{"text":"data_0379"}
{"text":"data_0380"}
{"text":"data_0381"}
{"text":"data_0382"}
{"text":"data_0383"}
{"text":"data_0384"}
{"text":"data_0385"}
{"text":"data_0386"}
{"text":"data_0387"}
{"text":"data_0388"}
{"text":"data_0389"}
{"text":"data_0390"}
{"text":"data_0391"}
{"text":"data_0392"}
{"text":"data_0393"}
{"text":"data_0394"}
{"text":"data_0395"}
{"text":"data_0396"}
{"text":"data_0397"}
{"text":"data_0398"}
{"text":"data_0399"}
{"text":"data_0400"}
{"text":"data_0401"}
{"text":"data_0402"}
{"text":"data_0403"}
{"text":"data_0404"}
{"text":"data_0405"}
{"text":"data_0406"}
{"text":"data_0407"}
{"text":"data_0408"}
{"text":"data_0409"}
{"text":"data_0410"}
{"text":"data_0411"}
{"text":"data_0412"}
{"text":"data_0413"}
{"text":"data_0414"}
{"text":"data_0415"}
{"text":"data_0416"}
{"text":"data_0417"}
{"text":"data_0418"}
{"text":"data_0419"}
{"text":"data_0420"}
{"text":"data_0421"}
{"text":"data_0422"}
{"text":"data_0423"}
{"text":"data_0424"}
{"text":"data_0425"}
{"text":"data_0426"}
{"text":"data_0427"}
{"text":"data_0428"}
{"text":"data_0429"}
{"text":"data_0430"}
{"text":"data_0431"}
{"text":"data_0432"}
{"text":"data_0433"}
{"text":"data_0434"}
{"text":"data_0435"}
{"text":"data_0436"}
{"text":"data_0437"}
{"text":"data_0438"}
{"text":"data_0439"}
{"text":"data_0440"}
{"text":"data_0441"}
{"text":"data_0442"}
{"text":"data_0443"}
{"text":"data_0444"}
{"text":"data_0445"}
{"text":"data_0446"}
{"text":"data_0447"}
{"text":"data_0448"}
{"text":"data_0449"}
{"text":"data_0450"}
{"text":"data_0451"}
{"text":"data_0452"}
{"text":"data_0453"}
{"text":"data_0454"}
{"text":"data_0455"}
{"text":"data_0456"}
{"text":"data_0457"}
{"text":"data_0458"}
{"text":"data_0459"}
{"text":"data_0460"}
{"text":"data_0461"}
{"text":"data_0462"}
{"text":"data_0463"}
{"text":"data_0464"}
{"text":"data_0465"}
{"text":"data_0466"}
{"text":"data_0467"}
{"text":"data_0468"}
{"text":"data_0469"}
{"text":"data_0470"}
{"text":"data_0471"}
{"text":"data_0472"}
{"text":"data_0473"}
{"text":"data_0474"}
{"text":"data_0475"}
{"text":"data_0476"}
{"text":"data_0477"}
{"text":"data_0478"}
{"text":"data_0479"}
{"text":"data_0480"}
{"text":"data_0481"}
{"text":"data_0482"}
{"text":"data_0483"}
{"text":"data_0484"}
{"text":"data_0485"}
{"text":"data_0486"}
{"text":"data_0487"}
{"text":"data_0488"}
{"text":"data_0489"}
{"text":"data_0490"}
{"text":"data_0491"}
{"text":"data_0492"}
{"text":"data_0493"}
{"text":"data_0494"}
{"text":"data_0495"}
{"text":"data_0496"}
{"text":"data_0497"}
{"text":"data_0498"}
{"text":"data_0499"}
{"text":"data_0500"}
{"text":"data_0501"}
{"text":"data_0502"}
{"text":"data_0503"}
{"text":"data_0504"}
{"text":"data_0505"}
{"text":"data_0506"}
{"text":"data_0507"}
{"text":"data_0508"}
{"text":"data_0509"}
{"text":"data_0510"}
{"text":"data_0511"}
{"text":"data_0512"}
{"text":"data_0513"}
{"text":"data_0514"}
{"text":"data_0515"}
{"text":"data_0516"}
{"text":"data_0517"}
{"text":"data_0518"}
{"text":"data_0519"}
{"text":"data_0520"}
{"text":"data_0521"}
{"text":"data_0522"}
{"text":"data_0523"}
{"text":"data_0524"}
{"text":"data_0525"}
{"text":"data_0526"}
{"text":"data_0527"}
{"text":"data_0528"}
{"text":"data_0529"}
{"text":"data_0530"}
{"text":"data_0531"}
{"text":"data_0532"}
{"text":"data_0533"}
{"text":"data_0534"}
{"text":"data_0535"}
{"text":"data_0536"}
{"text":"data_0537"}
{"text":"data_0538"}
{"text":"data_0539"}
{"text":"data_0540"}
{"text":"data_0541"}
{"text":"data_0542"}
{"text":"data_0543"}
{"text":"data_0544"}
{"text":"data_0545"}
{"text":"data_0546"}
{"text":"data_0547"}
{"text":"data_0548"}
{"text":"data_0549"}
{"text":"data_0550"}
{"text":"data_0551"}
{"text":"data_0552"}
{"text":"data_0553"}
{"text":"data_0554"}
{"text":"data_0555"}
{"text":"data_0556"}
{"text":"data_0557"}
{"text":"data_0558"}
{"text":"data_0559"}
{"text":"data_0560"}
{"text":"data_0561"}
{"text":"data_0562"}
{"text":"data_0563"}
{"text":"data_0564"}
{"text":"data_0565"}
{"text":"data_0566"}
{"text":"data_0567"}
{"text":"data_0568"}
{"text":"data_0569"}
{"text":"data_0570"}
{"text":"data_0571"}
{"text":"data_0572"}
{"text":"data_0573"}
{"text":"data_0574"}
{"text":"data_0575"}
{"text":"data_0576"}
{"text":"data_0577"}
{"text":"data_0578"}
{"text":"data_0579"}
{"text":"data_0580"}
{"text":"data_0581"}
{"text":"data_0582"}
{"text":"data_0583"}
{"text":"data_0584"}
{"text":"data_0585"}
{"text":"data_0586"}
{"text":"data_0587"}
{"text":"data_0588"}
{"text":"data_0589"}
{"text":"data_0590"}
{"text":"data_0591"}
{"text":"data_0592"}
{"text":"data_0593"}
{"text":"data_0594"}
{"text":"data_0595"}
{"text":"data_0596"}
{"text":"data_0597"}
{"text":"data_0598"}
{"text":"data_0599"}
{"text":"data_0600"}
{"text":"data_0601"}
{"text":"data_0602"}
{"text":"data_0603"}
{"text":"data_0604"}
{"text":"data_0605"}
{"text":"data_0606"}
{"text":"data_0607"}
{"text":"data_0608"}
{"text":"data_0609"}
{"text":"data_0610"}
{"text":"data_0611"}
{"text":"data_0612"}
{"text":"data_0613"}
{"text":"data_0614"}
{"text":"data_0615"}
{"text":"data_0616"}
{"text":"data_0617"}
{"text":"data_0618"}
{"text":"data_0619"}
{"text":"data_0620"}
{"text":"data_0621"}
{"text":"data_0622"}
{"text":"data_0623"}
{"text":"data_0624"}
{"text":"data_0625"}
{"text":"data_0626"}
{"text":"data_0627"}
{"text":"data_0628"}
{"text":"data_0629"}
{"text":"data_0630"}
{"text":"data_0631"}
{"text":"data_0632"}
{"text":"data_0633"}
{"text":"data_0634"}
{"text":"data_0635"}
{"text":"data_0636"}
{"text":"data_0637"}
{"text":"data_0638"}
{"text":"data_0639"}
{"text":"data_0640"}
{"text":"data_0641"}
{"text":"data_0642"}
{"text":"data_0643"}
{"text":"data_0644"}
{"text":"data_0645"}
{"text":"data_0646"}
{"text":"data_0647"}
{"text":"data_0648"}
{"text":"data_0649"}
{"text":"data_0650"}
{"text":"data_0651"}
{"text":"data_0652"}
{"text":"data_0653"}
{"text":"data_0654"}
{"text":"data_0655"}
{"text":"data_0656"}
{"text":"data_0657"}
{"text":"data_0658"}
{"text":"data_0659"}
{"text":"data_0660"}
{"text":"data_0661"}
{"text":"data_0662"}
{"text":"data_0663"}
{"text":"data_0664"}
{"text":"data_0665"}
{"text":"data_0666"}
{"text":"data_0667"}
{"text":"data_0668"}
{"text":"data_0669"}
{"text":"data_0670"}
{"text":"data_0671"}
{"text":"data_0672"}
{"text":"data_0673"}
{"text":"data_0674"}
{"text":"data_0675"}
{"text":"data_0676"}
{"text":"data_0677"}
{"text":"data_0678"}
{"text":"data_0679"}
{"text":"data_0680"}
{"text":"data_0681"}
{"text":"data_0682"}
{"text":"data_0683"}
{"text":"data_0684"}
{"text":"data_0685"}
{"text":"data_0686"}
{"text":"data_0687"}
{"text":"data_0688"}
{"text":"data_0689"}
{"text":"data_0690"}
{"text":"data_0691"}
{"text":"data_0692"}
{"text":"data_0693"}
{"text":"data_0694"}
{"text":"data_0695"}
{"text":"data_0696"}
{"text":"data_0697"}
{"text":"data_0698"}
{"text":"data_0699"}
{"text":"data_0700"}
{"text":"data_0701"}
{"text":"data_0702"}
{"text":"data_0703"}
{"text":"data_0704"}
{"text":"data_0705"}
{"text":"data_0706"}
{"text":"data_0707"}
{"text":"data_0708"}
{"text":"data_0709"}
{"text":"data_0710"}
{"text":"data_0711"}
{"text":"data_0712"}
{"text":"data_0713"}
{"text":"data_0714"}
{"text":"data_0715"}
{"text":"data_0716"}
{"text":"data_0717"}
{"text":"data_0718"}
{"text":"data_0719"}
{"text":"data_0720"}
{"text":"data_0721"}
{"text":"data_0722"}
{"text":"data_0723"}
{"text":"data_0724"}
{"text":"data_0725"}
{"text":"data_0726"}
{"text":"data_0727"}
{"text":"data_0728"}
{"text":"data_0729"}
{"text":"data_0730"}
{"text":"data_0731"}
{"text":"data_0732"}
{"text":"data_0733"}
{"text":"data_0734"}
{"text":"data_0735"}
{"text":"data_0736"}
{"text":"data_0737"}
{"text":"data_0738"}
{"text":"data_0739"}
{"text":"data_0740"}
{"text":"data_0741"}
{"text":"data_0742"}
{"text":"data_0743"}
{"text":"data_0744"}
{"text":"data_0745"}
{"text":"data_0746"}
{"text":"data_0747"}
{"text":"data_0748"}
{"text":"data_0749"}
{"text":"data_0750"}
{"text":"data_0751"}
{"text":"data_0752"}
{"text":"data_0753"}
{"text":"data_0754"}
{"text":"data_0755"}
{"text":"data_0756"}
{"text":"data_0757"}
{"text":"data_0758"}
{"text":"data_0759"}
{"text":"data_0760"}
{"text":"data_0761"}
{"text":"data_0762"}
{"text":"data_0763"}
{"text":"data_0764"}
{"text":"data_0765"}
{"text":"data_0766"}
{"text":"data_0767"}
{"text":"data_0768"}
{"text":"data_0769"}
{"text":"data_0770"}
{"text":"data_0771"}
{"text":"data_0772"}
{"text":"data_0773"}
{"text":"data_0774"}
{"text":"data_0775"}
{"text":"data_0776"}
{"text":"data_0777"}
{"text":"data_0778"}
{"text":"data_0779"}
{"text":"data_0780"}
{"text":"data_0781"}
{"text":"data_0782"}
{"text":"data_0783"}
{"text":"data_0784"}
{"text":"data_0785"}
{"text":"data_0786"}
{"text":"data_0787"}
{"text":"data_0788"}
{"text":"data_0789"}
{"text":"data_0790"}
{"text":"data_0791"}
{"text":"data_0792"}
{"text":"data_0793"}
{"text":"data_0794"}
{"text":"data_0795"}
{"text":"data_0796"}
{"text":"data_0797"}
{"text":"data_0798"}
{"text":"data_0799"}
{"text":"data_0800"}
{"text":"data_0801"}
{"text":"data_0802"}
{"text":"data_0803"}
{"text":"data_0804"}
{"text":"data_0805"}
{"text":"data_0806"}
{"text":"data_0807"}
{"text":"data_0808"}
{"text":"data_0809"}
{"text":"data_0810"}
{"text":"data_0811"}
{"text":"data_0812"}
{"text":"data_0813"}
{"text":"data_0814"}
{"text":"data_0815"}
{"text":"data_0816"}
{"text":"data_0817"}
{"text":"data_0818"}
{"text":"data_0819"}
{"text":"data_0820"}
{"text":"data_0821"}
{"text":"data_0822"}
{"text":"data_0823"}
{"text":"data_0824"}
{"text":"data_0825"}
{"text":"data_0826"}
{"text":"data_0827"}
{"text":"data_0828"}
{"text":"data_0829"}
{"text":"data_0830"}
{"text":"data_0831"}
{"text":"data_0832"}
{"text":"data_0833"}
{"text":"data_0834"}
{"text":"data_0835"}
{"text":"data_0836"}
{"text":"data_0837"}
{"text":"data_0838"}
{"text":"data_0839"}
{"text":"data_0840"}
{"text":"data_0841"}
{"text":"data_0842"}
{"text":"data_0843"}
{"text":"data_0844"}
{"text":"data_0845"}
{"text":"data_0846"}
{"text":"data_0847"}
{"text":"data_0848"}
{"text":"data_0849"}
{"text":"data_0850"}
{"text":"data_0851"}
{"text":"data_0852"}
{"text":"data_0853"}
{"text":"data_0854"}
{"text":"data_0855"}
{"text":"data_0856"}
{"text":"data_0857"}
{"text":"data_0858"}
{"text":"data_0859"}
{"text":"data_0860"}
{"text":"data_0861"}
{"text":"data_0862"}
{"text":"data_0863"}
{"text":"data_0864"}
{"text":"data_0865"}
{"text":"data_0866"}
{"text":"data_0867"}
{"text":"data_0868"}
{"text":"data_0869"}
{"text":"data_0870"}
{"text":"data_0871"}
{"text":"data_0872"}
{"text":"data_0873"}
{"text":"data_0874"}
{"text":"data_0875"}
{"text":"data_0876"}
{"text":"data_0877"}
{"text":"data_0878"}
{"text":"data_0879"}
{"text":"data_0880"}
{"text":"data_0881"}
{"text":"data_0882"}
{"text":"data_0883"}
{"text":"data_0884"}
{"text":"data_0885"}
{"text":"data_0886"}
{"text":"data_0887"}
{"text":"data_0888"}
{"text":"data_0889"}
{"text":"data_0890"}
{"text":"data_0891"}
{"text":"data_0892"}
{"text":"data_0893"}
{"text":"data_0894"}
{"text":"data_0895"}
{"text":"data_0896"}
{"text":"data_0897"}
{"text":"data_0898"}
{"text":"data_0899"}
{"text":"data_0900"}
{"text":"data_0901"}
{"text":"data_0902"}
{"text":"data_0903"}
{"text":"data_0904"}
{"text":"data_0905"}
{"text":"data_0906"}
{"text":"data_0907"}
{"text":"data_0908"}
{"text":"data_0909"}
{"text":"data_0910"}
{"text":"data_0911"}
{"text":"data_0912"}
{"text":"data_0913"}
{"text":"data_0914"}
{"text":"data_0915"}
{"text":"data_0916"}
{"text":"data_0917"}
{"text":"data_0918"}
{"text":"data_0919"}
{"text":"data_0920"}
{"text":"data_0921"}
{"text":"data_0922"}
{"text":"data_0923"}
{"text":"data_0924"}
{"text":"data_0925"}
{"text":"data_0926"}
{"text":"data_0927"}
{"text":"data_0928"}
{"text":"data_0929"}
{"text":"data_0930"}
{"text":"data_0931"}
{"text":"data_0932"}
{"text":"data_0933"}
{"text":"data_0934"}
{"text":"data_0935"}
{"text":"data_0936"}
{"text":"data_0937"}
{"text":"data_0938"}
{"text":"data_0939"}
{"text":"data_0940"}
{"text":"data_0941"}
{"text":"data_0942"}
{"text":"data_0943"}
{"text":"data_0944"}
{"text":"data_0945"}
{"text":"data_0946"}
{"text":"data_0947"}
{"text":"data_0948"}
{"text":"data_0949"}
{"text":"data_0950"}
{"text":"data_0951"}
{"text":"data_0952"}
{"text":"data_0953"}
{"text":"data_0954"}
{"text":"data_0955"}
{"text":"data_0956"}
{"text":"data_0957"}
{"text":"data_0958"}
{"text":"data_0959"}
{"text":"data_0960"}
{"text":"data_0961"}
{"text":"data_0962"}
{"text":"data_0963"}
{"text":"data_0964"}
{"text":"data_0965"}
{"text":"data_0966"}
{"text":"data_0967"}
{"text":"data_0968"}
{"text":"data_0969"}
{"text":"data_0970"}
{"text":"data_0971"}
{"text":"data_0972"}
{"text":"data_0973"}
{"text":"data_0974"}
{"text":"data_0975"}
{"text":"data_0976"}
{"text":"data_0977"}
{"text":"data_0978"}
{"text":"data_0979"}
{"text":"data_0980"}
{"text":"data_0981"}
{"text":"data_0982"}
{"text":"data_0983"}
{"text":"data_0984"}
{"text":"data_0985"}
{"text":"data_0986"}
{"text":"data_0987"}
{"text":"data_0988"}
{"text":"data_0989"}
{"text":"data_0990"}
{"text":"data_0991"}
{"text":"data_0992"}
{"text":"data_0993"}
{"text":"data_0994"}
{"text":"data_0995"}
{"text":"data_0996"}
{"text":"data_0997"}
{"text":"data_0998"}
{"text":"data_0999"}
{"text":"data_1000"}
{"text":"data_1001"}
{"text":"data_1002"}
{"text":"data_1003"}
{"text":"data_1004"}
{"text":"data_1005"}
{"text":"data_1006"}
{"text":"data_1007"}
{"text":"data_1008"}
{"text":"data_1009"}
{"text":"data_1010"}
{"text":"data_1011"}
{"text":"data_1012"}
{"text":"data_1013"}
{"text":"data_1014"}
{"text":"data_1015"}
{"text":"data_1016"}
{"text":"data_1017"}
{"text":"data_1018"}
{"text":"data_1019"}
{"text":"data_1020"}
{"text":"data_1021"}
{"text":"data_1022"}
{"text":"data_1023"}
{"text":"data_1024"}
{"text":"data_1025"}
{"text":"data_1026"}
{"text":"data_1027"}
{"text":"data_1028"}
{"text":"data_1029"}
{"text":"data_1030"}
{"text":"data_1031"}
{"text":"data_1032"}
{"text":"data_1033"}
{"text":"data_1034"}
{"text":"data_1035"}
{"text":"data_1036"}
{"text":"data_1037"}
{"text":"data_1038"}
{"text":"data_1039"}
{"text":"data_1040"}
{"text":"data_1041"}
{"text":"data_1042"}
{"text":"data_1043"}
{"text":"data_1044"}
{"text":"data_1045"}
{"text":"data_1046"}
{"text":"data_1047"}
{"text":"data_1048"}
{"text":"data_1049"}
{"text":"data_1050"}
{"text":"data_1051"}
{"text":"data_1052"}
{"text":"data_1053"}
{"text":"data_1054"}
{"text":"data_1055"}
{"text":"data_1056"}
{"text":"data_1057"}
{"text":"data_1058"}
{"text":"data_1059"}
{"text":"data_1060"}
{"text":"data_1061"}
{"text":"data_1062"}
{"text":"data_1063"}
{"text":"data_1064"}
{"text":"data_1065"}
{"text":"data_1066"}
{"text":"data_1067"}
{"text":"data_1068"}
{"text":"data_1069"}
{"text":"data_1070"}
{"text":"data_1071"}
{"text":"data_1072"}
{"text":"data_1073"}
{"text":"data_1074"}
{"text":"data_1075"}
{"text":"data_1076"}
{"text":"data_1077"}
{"text":"data_1078"}
{"text":"data_1079"}
{"text":"data_1080"}
{"text":"data_1081"}
{"text":"data_1082"}
{"text":"data_1083"}
{"text":"data_1084"}
{"text":"data_1085"}
{"text":"data_1086"}
{"text":"data_1087"}
{"text":"data_1088"}
{"text":"data_1089"}
{"text":"data_1090"}
{"text":"data_1091"}
{"text":"data_1092"}
{"text":"data_1093"}
{"text":"data_1094"}
{"text":"data_1095"}
{"text":"data_1096"}
{"text":"data_1097"}
{"text":"data_1098"}
{"text":"data_1099"}
{"text":"data_1100"}
{"text":"data_1101"}
{"text":"data_1102"}
{"text":"data_1103"}
{"text":"data_1104"}
{"text":"data_1105"}
{"text":"data_1106"}
{"text":"data_1107"}
{"text":"data_1108"}
{"text":"data_1109"}
{"text":"data_1110"}
{"text":"data_1111"}
{"text":"data_1112"}
{"text":"data_1113"}
{"text":"data_1114"}
{"text":"data_1115"}
{"text":"data_1116"}
{"text":"data_1117"}
{"text":"data_1118"}
{"text":"data_1119"}
{"text":"data_1120"}
{"text":"data_1121"}
{"text":"data_1122"}
{"text":"data_1123"}
{"text":"data_1124"}
{"text":"data_1125"}
{"text":"data_1126"}
{"text":"data_1127"}
{"text":"data_1128"}
{"text":"data_1129"}
{"text":"data_1130"}
{"text":"data_1131"}
{"text":"data_1132"}
{"text":"data_1133"}
{"text":"data_1134"}
{"text":"data_1135"}
{"text":"data_1136"}
{"text":"data_1137"}
{"text":"data_1138"}
{"text":"data_1139"}
{"text":"data_1140"}
{"text":"data_1141"}
{"text":"data_1142"}
{"text":"data_1143"}
{"text":"data_1144"}
{"text":"data_1145"}
{"text":"data_1146"}
{"text":"data_1147"}
{"text":"data_1148"}
{"text":"data_1149"}
{"text":"data_1150"}
{"text":"data_1151"}
{"text":"data_1152"}
{"text":"data_1153"}
{"text":"data_1154"}
{"text":"data_1155"}
{"text":"data_1156"}
{"text":"data_1157"}
{"text":"data_1158"}
{"text":"data_1159"}
{"text":"data_1160"}
{"text":"data_1161"}
{"text":"data_1162"}
{"text":"data_1163"}
{"text":"data_1164"}
{"text":"data_1165"}
{"text":"data_1166"}
{"text":"data_1167"}
{"text":"data_1168"}
{"text":"data_1169"}
{"text":"data_1170"}
{"text":"data_1171"}
{"text":"data_1172"}
{"text":"data_1173"}
{"text":"data_1174"}
{"text":"data_1175"}
{"text":"data_1176"}
{"text":"data_1177"}
{"text":"data_1178"}
{"text":"data_1179"}
{"text":"data_1180"}
{"text":"data_1181"}
{"text":"data_1182"}
{"text":"data_1183"}
{"text":"data_1184"}
{"text":"data_1185"}
{"text":"data_1186"}
{"text":"data_1187"}
{"text":"data_1188"}
{"text":"data_1189"}
{"text":"data_1190"}
{"text":"data_1191"}
{"text":"data_1192"}
{"text":"data_1193"}
{"text":"data_1194"}
{"text":"data_1195"}
{"text":"data_1196"}
{"text":"data_1197"}
{"text":"data_1198"}
{"text":"data_1199"}
{"text":"data_1200"}
{"text":"data_1201"}
{"text":"data_1202"}
{"text":"data_1203"}
{"text":"data_1204"}
{"text":"data_1205"}
{"text":"data_1206"}
{"text":"data_1207"}
{"text":"data_1208"}
{"text":"data_1209"}
{"text":"data_1210"}
{"text":"data_1211"}
{"text":"data_1212"}
{"text":"data_1213"}
{"text":"data_1214"}
{"text":"data_1215"}
{"text":"data_1216"}
{"text":"data_1217"}
{"text":"data_1218"}
{"text":"data_1219"}
{"text":"data_1220"}
{"text":"data_1221"}
{"text":"data_1222"}
{"text":"data_1223"}
{"text":"data_1224"}
{"text":"data_1225"}
{"text":"data_1226"}
{"text":"data_1227"}
{"text":"data_1228"}
{"text":"data_1229"}
{"text":"data_1230"}
{"text":"data_1231"}
{"text":"data_1232"}
{"text":"data_1233"}
{"text":"data_1234"}
{"text":"data_1235"}
{"text":"data_1236"}
{"text":"data_1237"}
{"text":"data_1238"}
{"text":"data_1239"}
{"text":"data_1240"}
{"text":"data_1241"}
{"text":"data_1242"}
{"text":"data_1243"}
{"text":"data_1244"}
{"text":"data_1245"}
{"text":"data_1246"}
{"text":"data_1247"}
{"text":"data_1248"}
{"text":"data_1249"}
{"text":"data_1250"}
{"text":"data_1251"}
{"text":"data_1252"}
{"text":"data_1253"}
{"text":"data_1254"}
{"text":"data_1255"}
{"text":"data_1256"}
{"text":"data_1257"}
{"text":"data_1258"}
{"text":"data_1259"}
{"text":"data_1260"}
{"text":"data_1261"}
{"text":"data_1262"}
{"text":"data_1263"}
{"text":"data_1264"}
{"text":"data_1265"}
{"text":"data_1266"}
{"text":"data_1267"}
{"text":"data_1268"}
{"text":"data_1269"}
{"text":"data_1270"}
{"text":"data_1271"}
{"text":"data_1272"}
{"text":"data_1273"}
{"text":"data_1274"}
{"text":"data_1275"}
{"text":"data_1276"}
{"text":"data_1277"}
{"text":"data_1278"}
{"text":"data_1279"}
{"text":"data_1280"}
{"text":"data_1281"}
{"text":"data_1282"}
{"text":"data_1283"}
{"text":"data_1284"}
{"text":"data_1285"}
{"text":"data_1286"}
{"text":"data_1287"}
{"text":"data_1288"}
{"text":"data_1289"}
{"text":"data_1290"}
{"text":"data_1291"}
{"text":"data_1292"}
{"text":"data_1293"}
{"text":"data_1294"}
{"text":"data_1295"}
{"text":"data_1296"}
{"text":"data_1297"}
{"text":"data_1298"}
{"text":"data_1299"}
{"text":"data_1300"}
{"text":"data_1301"}
{"text":"data_1302"}
{"text":"data_1303"}
{"text":"data_1304"}
{"text":"data_1305"}
{"text":"data_1306"}
{"text":"data_1307"}
{"text":"data_1308"}
{"text":"data_1309"}
{"text":"data_1310"}
{"text":"data_1311"}
{"text":"data_1312"}
{"text":"data_1313"}
{"text":"data_1314"}
{"text":"data_1315"}
{"text":"data_1316"}
{"text":"data_1317"}
{"text":"data_1318"}
{"text":"data_1319"}
{"text":"data_1320"}
{"text":"data_1321"}
{"text":"data_1322"}
{"text":"data_1323"}
{"text":"data_1324"}
{"text":"data_1325"}
{"text":"data_1326"}
{"text":"data_1327"}
{"text":"data_1328"}
{"text":"data_1329"}
{"text":"data_1330"}
{"text":"data_1331"}
{"text":"data_1332"}
{"text":"data_1333"}
{"text":"data_1334"}
{"text":"data_1335"}
{"text":"data_1336"}
{"text":"data_1337"}
{"text":"data_1338"}
{"text":"data_1339"}
{"text":"data_1340"}
{"text":"data_1341"}
{"text":"data_1342"}
{"text":"data_1343"}
{"text":"data_1344"}
{"text":"data_1345"}
{"text":"data_1346"}
{"text":"data_1347"}
{"text":"data_1348"}
{"text":"data_1349"}
{"text":"data_1350"}
{"text":"data_1351"}
{"text":"data_1352"}
{"text":"data_1353"}
{"text":"data_1354"}
{"text":"data_1355"}
{"text":"data_1356"}
{"text":"data_1357"}
{"text":"data_1358"}
{"text":"data_1359"}
{"text":"data_1360"}
{"text":"data_1361"}
{"text":"data_1362"}
{"text":"data_1363"}
{"text":"data_1364"}
{"text":"data_1365"}
{"text":"data_1366"}
{"text":"data_1367"}
{"text":"data_1368"}
{"text":"data_1369"}
{"text":"data_1370"}
{"text":"data_1371"}
{"text":"data_1372"}
{"text":"data_1373"}
{"text":"data_1374"}
{"text":"data_1375"}
{"text":"data_1376"}
{"text":"data_1377"}
{"text":"data_1378"}
{"text":"data_1379"}
{"text":"data_1380"}
{"text":"data_1381"}
{"text":"data_1382"}
{"text":"data_1383"}
{"text":"data_1384"}
{"text":"data_1385"}
{"text":"data_1386"}
{"text":"data_1387"}
{"text":"data_1388"}
{"text":"data_1389"}
{"text":"data_1390"}
{"text":"data_1391"}
{"text":"data_1392"}
{"text":"data_1393"}
{"text":"data_1394"}
{"text":"data_1395"}
{"text":"data_1396"}
{"text":"data_1397"}
{"text":"data_1398"}
{"text":"data_1399"}
{"text":"data_1400"}
{"text":"data_1401"}
{"text":"data_1402"}
{"text":"data_1403"}
{"text":"data_1404"}
{"text":"data_1405"}
{"text":"data_1406"}
{"text":"data_1407"}
{"text":"data_1408"}
{"text":"data_1409"}
{"text":"data_1410"}
{"text":"data_1411"}
{"text":"data_1412"}
{"text":"data_1413"}
{"text":"data_1414"}
{"text":"data_1415"}
{"text":"data_1416"}
{"text":"data_1417"}
{"text":"data_1418"}
{"text":"data_1419"}
{"text":"data_1420"}
{"text":"data_1421"}
{"text":"data_1422"}
{"text":"data_1423"}
{"text":"data_1424"}
{"text":"data_1425"}
{"text":"data_1426"}
{"text":"data_1427"}
{"text":"data_1428"}
{"text":"data_1429"}
{"text":"data_1430"}
{"text":"data_1431"}
{"text":"data_1432"}
{"text":"data_1433"}
{"text":"data_1434"}
{"text":"data_1435"}
{"text":"data_1436"}
{"text":"data_1437"}
{"text":"data_1438"}
{"text":"data_1439"}
{"text":"data_1440"}
{"text":"data_1441"}
{"text":"data_1442"}
{"text":"data_1443"}
{"text":"data_1444"}
{"text":"data_1445"}
{"text":"data_1446"}
{"text":"data_1447"}
{"text":"data_1448"}
{"text":"data_1449"}
{"text":"data_1450"}
{"text":"data_1451"}
{"text":"data_1452"}
{"text":"data_1453"}
{"text":"data_1454"}
{"text":"data_1455"}
{"text":"data_1456"}
{"text":"data_1457"}
{"text":"data_1458"}
{"text":"data_1459"}
{"text":"data_1460"}
{"text":"data_1461"}
{"text":"data_1462"}
{"text":"data_1463"}
{"text":"data_1464"}
{"text":"data_1465"}
{"text":"data_1466"}
{"text":"data_1467"}
{"text":"data_1468"}
{"text":"data_1469"}
{"text":"data_1470"}
{"text":"data_1471"}
{"text":"data_1472"}
{"text":"data_1473"}
{"text":"data_1474"}
{"text":"data_1475"}
{"text":"data_1476"}
{"text":"data_1477"}
{"text":"data_1478"}
{"text":"data_1479"}
{"text":"data_1480"}
{"text":"data_1481"}
{"text":"data_1482"}
{"text":"data_1483"}
{"text":"data_1484"}
{"text":"data_1485"}
{"text":"data_1486"}
{"text":"data_1487"}
{"text":"data_1488"}
{"text":"data_1489"}
{"text":"data_1490"}
{"text":"data_1491"}
{"text":"data_1492"}
{"text":"data_1493"}
{"text":"data_1494"}
{"text":"data_1495"}
{"text":"data_1496"}
{"text":"data_1497"}
{"text":"data_1498"}
{"text":"data_1499"}
{"text":"data_1500"}
{"text":"data_1501"}
{"text":"data_1502"}
{"text":"data_1503"}
{"text":"data_1504"}
{"text":"data_1505"}
{"text":"data_1506"}
{"text":"data_1507"}
{"text":"data_1508"}
{"text":"data_1509"}
{"text":"data_1510"}
{"text":"data_1511"}
{"text":"data_1512"}
{"text":"data_1513"}
{"text":"data_1514"}
{"text":"data_1515"}
{"text":"data_1516"}
{"text":"data_1517"}
{"text":"data_1518"}
{"text":"data_1519"}
{"text":"data_1520"}
{"text":"data_1521"}
{"text":"data_1522"}
{"text":"data_1523"}
{"text":"data_1524"}
{"text":"data_1525"}
{"text":"data_1526"}
{"text":"data_1527"}
{"text":"data_1528"}
{"text":"data_1529"}
{"text":"data_1530"}
{"text":"data_1531"}
{"text":"data_1532"}
{"text":"data_1533"}
{"text":"data_1534"}
{"text":"data_1535"}
{"text":"data_1536"}
{"text":"data_1537"}
{"text":"data_1538"}
{"text":"data_1539"}
{"text":"data_1540"}
{"text":"data_1541"}
{"text":"data_1542"}
{"text":"data_1543"}
{"text":"data_1544"}
{"text":"data_1545"}
{"text":"data_1546"}
{"text":"data_1547"}
{"text":"data_1548"}
{"text":"data_1549"}
{"text":"data_1550"}
{"text":"data_1551"}
{"text":"data_1552"}
{"text":"data_1553"}
{"text":"data_1554"}
{"text":"data_1555"}
{"text":"data_1556"}
{"text":"data_1557"}
{"text":"data_1558"}
{"text":"data_1559"}
{"text":"data_1560"}
{"text":"data_1561"}
{"text":"data_1562"}
{"text":"data_1563"}
{"text":"data_1564"}
{"text":"data_1565"}
{"text":"data_1566"}
{"text":"data_1567"}
{"text":"data_1568"}
{"text":"data_1569"}
{"text":"data_1570"}
{"text":"data_1571"}
{"text":"data_1572"}
{"text":"data_1573"}
{"text":"data_1574"}
{"text":"data_1575"}
{"text":"data_1576"}
{"text":"data_1577"}
{"text":"data_1578"}
{"text":"data_1579"}
{"text":"data_1580"}
{"text":"data_1581"}
{"text":"data_1582"}
{"text":"data_1583"}
{"text":"data_1584"}
{"text":"data_1585"}
{"text":"data_1586"}
{"text":"data_1587"}
{"text":"data_1588"}
{"text":"data_1589"}
{"text":"data_1590"}
{"text":"data_1591"}
{"text":"data_1592"}
{"text":"data_1593"}
{"text":"data_1594"}
{"text":"data_1595"}
{"text":"data_1596"}
{"text":"data_1597"}
{"text":"data_1598"}
{"text":"data_1599"}
{"text":"data_1600"}
{"text":"data_1601"}
{"text":"data_1602"}
{"text":"data_1603"}
{"text":"data_1604"}
{"text":"data_1605"}
{"text":"data_1606"}
{"text":"data_1607"}
{"text":"data_1608"}
{"text":"data_1609"}
{"text":"data_1610"}
{"text":"data_1611"}
{"text":"data_1612"}
{"text":"data_1613"}
{"text":"data_1614"}
{"text":"data_1615"}
{"text":"data_1616"}
{"text":"data_1617"}
{"text":"data_1618"}
{"text":"data_1619"}
{"text":"data_1620"}
{"text":"data_1621"}
{"text":"data_1622"}
{"text":"data_1623"}
{"text":"data_1624"}
{"text":"data_1625"}
{"text":"data_1626"}
{"text":"data_1627"}
{"text":"data_1628"}
{"text":"data_1629"}
{"text":"data_1630"}
{"text":"data_1631"}
{"text":"data_1632"}
{"text":"data_1633"}
{"text":"data_1634"}
{"text":"data_1635"}
{"text":"data_1636"}
{"text":"data_1637"}
{"text":"data_1638"}
{"text":"data_1639"}
{"text":"data_1640"}
{"text":"data_1641"}
{"text":"data_1642"}
{"text":"data_1643"}
{"text":"data_1644"}
{"text":"data_1645"}
{"text":"data_1646"}
{"text":"data_1647"}
{"text":"data_1648"}
{"text":"data_1649"}
{"text":"data_1650"}
{"text":"data_1651"}
{"text":"data_1652"}
{"text":"data_1653"}
{"text":"data_1654"}
{"text":"data_1655"}
{"text":"data_1656"}
{"text":"data_1657"}
{"text":"data_1658"}
{"text":"data_1659"}
{"text":"data_1660"}
{"text":"data_1661"}
{"text":"data_1662"}
{"text":"data_1663"}
{"text":"data_1664"}
{"text":"data_1665"}
{"text":"data_1666"}
{"text":"data_1667"}
{"text":"data_1668"}
{"text":"data_1669"}
{"text":"data_1670"}
{"text":"data_1671"}
{"text":"data_1672"}
{"text":"data_1673"}
{"text":"data_1674"}
{"text":"data_1675"}
{"text":"data_1676"}
{"text":"data_1677"}
{"text":"data_1678"}
{"text":"data_1679"}
{"text":"data_1680"}
{"text":"data_1681"}
{"text":"data_1682"}
{"text":"data_1683"}
{"text":"data_1684"}
{"text":"data_1685"}
{"text":"data_1686"}
{"text":"data_1687"}
{"text":"data_1688"}
{"text":"data_1689"}
{"text":"data_1690"}
{"text":"data_1691"}
{"text":"data_1692"}
{"text":"data_1693"}
{"text":"data_1694"}
{"text":"data_1695"}
{"text":"data_1696"}
{"text":"data_1697"}
{"text":"data_1698"}
{"text":"data_1699"}
{"text":"data_1700"}
{"text":"data_1701"}
{"text":"data_1702"}
{"text":"data_1703"}
{"text":"data_1704"}
{"text":"data_1705"}
{"text":"data_1706"}
{"text":"data_1707"}
{"text":"data_1708"}
{"text":"data_1709"}
{"text":"data_1710"}
{"text":"data_1711"}
{"text":"data_1712"}
{"text":"data_1713"}
{"text":"data_1714"}
{"text":"data_1715"}
{"text":"data_1716"}
{"text":"data_1717"}
{"text":"data_1718"}
{"text":"data_1719"}
{"text":"data_1720"}
{"text":"data_1721"}
{"text":"data_1722"}
{"text":"data_1723"}
{"text":"data_1724"}
{"text":"data_1725"}
{"text":"data_1726"}
{"text":"data_1727"}
{"text":"data_1728"}
{"text":"data_1729"}
{"text":"data_1730"}
{"text":"data_1731"}
{"text":"data_1732"}
{"text":"data_1733"}
{"text":"data_1734"}
{"text":"data_1735"}
{"text":"data_1736"}
{"text":"data_1737"}
{"text":"data_1738"}
{"text":"data_1739"}
{"text":"data_1740"}
{"text":"data_1741"}
{"text":"data_1742"}
{"text":"data_1743"}
{"text":"data_1744"}
{"text":"data_1745"}
{"text":"data_1746"}
{"text":"data_1747"}
{"text":"data_1748"}
{"text":"data_1749"}
{"text":"data_1750"}
{"text":"data_1751"}
{"text":"data_1752"}
{"text":"data_1753"}
{"text":"data_1754"}
{"text":"data_1755"}
{"text":"data_1756"}
{"text":"data_1757"}
{"text":"data_1758"}
{"text":"data_1759"}
{"text":"data_1760"}
{"text":"data_1761"}
{"text":"data_1762"}
{"text":"data_1763"}
{"text":"data_1764"}
{"text":"data_1765"}
{"text":"data_1766"}
{"text":"data_1767"}
{"text":"data_1768"}
{"text":"data_1769"}
{"text":"data_1770"}
{"text":"data_1771"}
{"text":"data_1772"}
{"text":"data_1773"}
{"text":"data_1774"}
{"text":"data_1775"}
{"text":"data_1776"}
{"text":"data_1777"}
{"text":"data_1778"}
{"text":"data_1779"}
{"text":"data_1780"}
{"text":"data_1781"}
{"text":"data_1782"}
{"text":"data_1783"}
{"text":"data_1784"}
{"text":"data_1785"}
{"text":"data_1786"}
{"text":"data_1787"}
{"text":"data_1788"}
{"text":"data_1789"}
{"text":"data_1790"}
{"text":"data_1791"}
{"text":"data_1792"}
{"text":"data_1793"}
{"text":"data_1794"}
{"text":"data_1795"}
{"text":"data_1796"}
{"text":"data_1797"}
{"text":"data_1798"}
{"text":"data_1799"}
{"text":"data_1800"}
{"text":"data_1801"}
{"text":"data_1802"}
{"text":"data_1803"}
{"text":"data_1804"}
{"text":"data_1805"}
{"text":"data_1806"}
{"text":"data_1807"}
{"text":"data_1808"}
{"text":"data_1809"}
{"text":"data_1810"}
{"text":"data_1811"}
{"text":"data_1812"}
{"text":"data_1813"}
{"text":"data_1814"}
{"text":"data_1815"}
{"text":"data_1816"}
{"text":"data_1817"}
{"text":"data_1818"}
{"text":"data_1819"}
{"text":"data_1820"}
{"text":"data_1821"}
{"text":"data_1822"}
{"text":"data_1823"}
{"text":"data_1824"}
{"text":"data_1825"}
{"text":"data_1826"}
{"text":"data_1827"}
{"text":"data_1828"}
{"text":"data_1829"}
{"text":"data_1830"}
{"text":"data_1831"}
{"text":"data_1832"}
{"text":"data_1833"}
{"text":"data_1834"}
{"text":"data_1835"}
{"text":"data_1836"}
{"text":"data_1837"}
{"text":"data_1838"}
{"text":"data_1839"}
{"text":"data_1840"}
{"text":"data_1841"}
{"text":"data_1842"}
{"text":"data_1843"}
{"text":"data_1844"}
{"text":"data_1845"}
{"text":"data_1846"}
{"text":"data_1847"}
{"text":"data_1848"}
{"text":"data_1849"}
{"text":"data_1850"}
{"text":"data_1851"}
{"text":"data_1852"}
{"text":"data_1853"}
{"text":"data_1854"}
{"text":"data_1855"}
{"text":"data_1856"}
{"text":"data_1857"}
{"text":"data_1858"}
{"text":"data_1859"}
{"text":"data_1860"}
{"text":"data_1861"}
{"text":"data_1862"}
{"text":"data_1863"}
{"text":"data_1864"}
{"text":"data_1865"}
{"text":"data_1866"}
{"text":"data_1867"}
{"text":"data_1868"}
{"text":"data_1869"}
{"text":"data_1870"}
{"text":"data_1871"}
{"text":"data_1872"}
{"text":"data_1873"}
{"text":"data_1874"}
{"text":"data_1875"}
{"text":"data_1876"}
{"text":"data_1877"}
{"text":"data_1878"}
{"text":"data_1879"}
{"text":"data_1880"}
{"text":"data_1881"}
{"text":"data_1882"}
{"text":"data_1883"}
{"text":"data_1884"}
{"text":"data_1885"}
{"text":"data_1886"}
{"text":"data_1887"}
{"text":"data_1888"}
{"text":"data_1889"}
{"text":"data_1890"}
{"text":"data_1891"}
{"text":"data_1892"}
{"text":"data_1893"}
{"text":"data_1894"}
{"text":"data_1895"}
{"text":"data_1896"}
{"text":"data_1897"}
{"text":"data_1898"}
{"text":"data_1899"}
{"text":"data_1900"}
{"text":"data_1901"}
{"text":"data_1902"}
{"text":"data_1903"}
{"text":"data_1904"}
{"text":"data_1905"}
{"text":"data_1906"}
{"text":"data_1907"}
{"text":"data_1908"}
{"text":"data_1909"}
{"text":"data_1910"}
{"text":"data_1911"}
{"text":"data_1912"}
{"text":"data_1913"}
{"text":"data_1914"}
{"text":"data_1915"}
{"text":"data_1916"}
{"text":"data_1917"}
{"text":"data_1918"}
{"text":"data_1919"}
{"text":"data_1920"}
{"text":"data_1921"}
{"text":"data_1922"}
{"text":"data_1923"}
{"text":"data_1924"}
{"text":"data_1925"}
{"text":"data_1926"}
{"text":"data_1927"}
{"text":"data_1928"}
{"text":"data_1929"}
{"text":"data_1930"}
{"text":"data_1931"}
{"text":"data_1932"}
{"text":"data_1933"}
{"text":"data_1934"}
{"text":"data_1935"}
{"text":"data_1936"}
{"text":"data_1937"}
{"text":"data_1938"}
{"text":"data_1939"}
{"text":"data_1940"}
{"text":"data_1941"}
{"text":"data_1942"}
{"text":"data_1943"}
{"text":"data_1944"}
{"text":"data_1945"}
{"text":"data_1946"}
{"text":"data_1947"}
{"text":"data_1948"}
{"text":"data_1949"}
{"text":"data_1950"}
{"text":"data_1951"}
{"text":"data_1952"}
{"text":"data_1953"}
{"text":"data_1954"}
{"text":"data_1955"}
{"text":"data_1956"}
{"text":"data_1957"}
{"text":"data_1958"}
{"text":"data_1959"}
{"text":"data_1960"}
{"text":"data_1961"}
{"text":"data_1962"}
{"text":"data_1963"}
{"text":"data_1964"}
{"text":"data_1965"}
{"text":"data_1966"}
{"text":"data_1967"}
{"text":"data_1968"}
{"text":"data_1969"}
{"text":"data_1970"}
{"text":"data_1971"}
{"text":"data_1972"}
{"text":"data_1973"}
{"text":"data_1974"}
{"text":"data_1975"}
{"text":"data_1976"}
{"text":"data_1977"}
{"text":"data_1978"}
{"text":"data_1979"}
{"text":"data_1980"}
{"text":"data_1981"}
{"text":"data_1982"}
{"text":"data_1983"}
{"text":"data_1984"}
{"text":"data_1985"}
{"text":"data_1986"}
{"text":"data_1987"}
{"text":"data_1988"}
{"text":"data_1989"}
{"text":"data_1990"}
{"text":"data_1991"}
{"text":"data_1992"}
{"text":"data_1993"}
{"text":"data_1994"}
{"text":"data_1995"}
{"text":"data_1996"}
{"text":"data_1997"}
{"text":"data_1998"}
{"text":"data_1999"}
{"text":"data_2000"}
{"text":"data_2001"}
{"text":"data_2002"}
{"text":"data_2003"}
{"text":"data_2004"}
{"text":"data_2005"}
{"text":"data_2006"}
{"text":"data_2007"}
{"text":"data_2008"}
{"text":"data_2009"}
{"text":"data_2010"}
{"text":"data_2011"}
{"text":"data_2012"}
{"text":"data_2013"}
{"text":"data_2014"}
{"text":"data_2015"}
{"text":"data_2016"}
{"text":"data_2017"}
{"text":"data_2018"}
{"text":"data_2019"}
{"text":"data_2020"}
{"text":"data_2021"}
{"text":"data_2022"}
{"text":"data_2023"}
{"text":"data_2024"}
{"text":"data_2025"}
{"text":"data_2026"}
{"text":"data_2027"}
{"text":"data_2028"}
{"text":"data_2029"}
{"text":"data_2030"}
{"text":"data_2031"}
{"text":"data_2032"}
{"text":"data_2033"}
{"text":"data_2034"}
{"text":"data_2035"}
{"text":"data_2036"}
{"text":"data_2037"}
{"text":"data_2038"}
{"text":"data_2039"}
{"text":"data_2040"}
{"text":"data_2041"}
{"text":"data_2042"}
{"text":"data_2043"}
{"text":"data_2044"}
{"text":"data_2045"}
{"text":"data_2046"}
{"text":"data_2047"}
{"text":"data_2048"}
{"text":"data_2049"}
{"text":"data_2050"}
{"text":"data_2051"}
{"text":"data_2052"}
{"text":"data_2053"}
{"text":"data_2054"}
{"text":"data_2055"}
{"text":"data_2056"}
{"text":"data_2057"}
{"text":"data_2058"}
{"text":"data_2059"}
{"text":"data_2060"}
{"text":"data_2061"}
{"text":"data_2062"}
{"text":"data_2063"}
{"text":"data_2064"}
{"text":"data_2065"}
{"text":"data_2066"}
{"text":"data_2067"}
{"text":"data_2068"}
{"text":"data_2069"}
{"text":"data_2070"}
{"text":"data_2071"}
{"text":"data_2072"}
{"text":"data_2073"}
{"text":"data_2074"}
{"text":"data_2075"}
{"text":"data_2076"}
{"text":"data_2077"}
{"text":"data_2078"}
{"text":"data_2079"}
{"text":"data_2080"}
{"text":"data_2081"}
{"text":"data_2082"}
{"text":"data_2083"}
{"text":"data_2084"}
{"text":"data_2085"}
{"text":"data_2086"}
{"text":"data_2087"}
{"text":"data_2088"}
{"text":"data_2089"}
{"text":"data_2090"}
{"text":"data_2091"}
{"text":"data_2092"}
{"text":"data_2093"}
{"text":"data_2094"}
{"text":"data_2095"}
{"text":"data_2096"}
{"text":"data_2097"}
{"text":"data_2098"}
{"text":"data_2099"}
{"text":"data_2100"}
{"text":"data_2101"}
{"text":"data_2102"}
{"text":"data_2103"}
{"text":"data_2104"}
{"text":"data_2105"}
{"text":"data_2106"}
{"text":"data_2107"}
{"text":"data_2108"}
{"text":"data_2109"}
{"text":"data_2110"}
{"text":"data_2111"}
{"text":"data_2112"}
{"text":"data_2113"}
{"text":"data_2114"}
{"text":"data_2115"}
{"text":"data_2116"}
{"text":"data_2117"}
{"text":"data_2118"}
{"text":"data_2119"}
{"text":"data_2120"}
{"text":"data_2121"}
{"text":"data_2122"}
{"text":"data_2123"}
{"text":"data_2124"}
{"text":"data_2125"}
{"text":"data_2126"}
{"text":"data_2127"}
{"text":"data_2128"}
{"text":"data_2129"}
{"text":"data_2130"}
{"text":"data_2131"}
{"text":"data_2132"}
{"text":"data_2133"}
{"text":"data_2134"}
{"text":"data_2135"}
{"text":"data_2136"}
{"text":"data_2137"}
{"text":"data_2138"}
{"text":"data_2139"}
{"text":"data_2140"}
{"text":"data_2141"}
{"text":"data_2142"}
{"text":"data_2143"}
{"text":"data_2144"}
{"text":"data_2145"}
{"text":"data_2146"}
{"text":"data_2147"}
{"text":"data_2148"}
{"text":"data_2149"}
{"text":"data_2150"}
{"text":"data_2151"}
{"text":"data_2152"}
{"text":"data_2153"}
{"text":"data_2154"}
{"text":"data_2155"}
{"text":"data_2156"}
{"text":"data_2157"}
{"text":"data_2158"}
{"text":"data_2159"}
{"text":"data_2160"}
{"text":"data_2161"}
{"text":"data_2162"}
{"text":"data_2163"}
{"text":"data_2164"}
{"text":"data_2165"}
{"text":"data_2166"}
{"text":"data_2167"}
{"text":"data_2168"}
{"text":"data_2169"}
{"text":"data_2170"}
{"text":"data_2171"}
{"text":"data_2172"}
{"text":"data_2173"}
{"text":"data_2174"}
{"text":"data_2175"}
{"text":"data_2176"}
{"text":"data_2177"}
{"text":"data_2178"}
{"text":"data_2179"}
{"text":"data_2180"}
{"text":"data_2181"}
{"text":"data_2182"}
{"text":"data_2183"}
{"text":"data_2184"}
{"text":"data_2185"}
{"text":"data_2186"}
{"text":"data_2187"}
{"text":"data_2188"}
{"text":"data_2189"}
{"text":"data_2190"}
{"text":"data_2191"}
{"text":"data_2192"}
{"text":"data_2193"}
{"text":"data_2194"}
{"text":"data_2195"}
{"text":"data_2196"}
{"text":"data_2197"}
{"text":"data_2198"}
{"text":"data_2199"}
{"text":"data_2200"}
{"text":"data_2201"}
{"text":"data_2202"}
{"text":"data_2203"}
{"text":"data_2204"}
{"text":"data_2205"}
{"text":"data_2206"}
{"text":"data_2207"}
{"text":"data_2208"}
{"text":"data_2209"}
{"text":"data_2210"}
{"text":"data_2211"}
{"text":"data_2212"}
{"text":"data_2213"}
{"text":"data_2214"}
{"text":"data_2215"}
{"text":"data_2216"}
{"text":"data_2217"}
{"text":"data_2218"}
{"text":"data_2219"}
{"text":"data_2220"}
{"text":"data_2221"}
{"text":"data_2222"}
{"text":"data_2223"}
{"text":"data_2224"}
{"text":"data_2225"}
{"text":"data_2226"}
{"text":"data_2227"}
{"text":"data_2228"}
{"text":"data_2229"}
{"text":"data_2230"}
{"text":"data_2231"}
{"text":"data_2232"}
{"text":"data_2233"}
{"text":"data_2234"}
{"text":"data_2235"}
{"text":"data_2236"}
{"text":"data_2237"}
{"text":"data_2238"}
{"text":"data_2239"}
{"text":"data_2240"}
{"text":"data_2241"}
{"text":"data_2242"}
{"text":"data_2243"}
{"text":"data_2244"}
{"text":"data_2245"}
{"text":"data_2246"}
{"text":"data_2247"}
{"text":"data_2248"}
{"text":"data_2249"}
{"text":"data_2250"}
{"text":"data_2251"}
{"text":"data_2252"}
{"text":"data_2253"}
{"text":"data_2254"}
{"text":"data_2255"}
{"text":"data_2256"}
{"text":"data_2257"}
{"text":"data_2258"}
{"text":"data_2259"}
{"text":"data_2260"}
{"text":"data_2261"}
{"text":"data_2262"}
{"text":"data_2263"}
{"text":"data_2264"}
{"text":"data_2265"}
{"text":"data_2266"}
{"text":"data_2267"}
{"text":"data_2268"}
{"text":"data_2269"}
{"text":"data_2270"}
{"text":"data_2271"}
{"text":"data_2272"}
{"text":"data_2273"}
{"text":"data_2274"}
{"text":"data_2275"}
{"text":"data_2276"}
{"text":"data_2277"}
{"text":"data_2278"}
{"text":"data_2279"}
{"text":"data_2280"}
{"text":"data_2281"}
{"text":"data_2282"}
{"text":"data_2283"}
{"text":"data_2284"}
{"text":"data_2285"}
{"text":"data_2286"}
{"text":"data_2287"}
{"text":"data_2288"}
{"text":"data_2289"}
{"text":"data_2290"}
{"text":"data_2291"}
{"text":"data_2292"}
{"text":"data_2293"}
{"text":"data_2294"}
{"text":"data_2295"}
{"text":"data_2296"}
{"text":"data_2297"}
{"text":"data_2298"}
{"text":"data_2299"}
{"text":"data_2300"}
{"text":"data_2301"}
{"text":"data_2302"}
{"text":"data_2303"}
{"text":"data_2304"}
{"text":"data_2305"}
{"text":"data_2306"}
{"text":"data_2307"}
{"text":"data_2308"}
{"text":"data_2309"}
{"text":"data_2310"}
{"text":"data_2311"}
{"text":"data_2312"}
{"text":"data_2313"}
{"text":"data_2314"}
{"text":"data_2315"}
{"text":"data_2316"}
{"text":"data_2317"}
{"text":"data_2318"}
{"text":"data_2319"}
{"text":"data_2320"}
{"text":"data_2321"}
{"text":"data_2322"}
{"text":"data_2323"}
{"text":"data_2324"}
{"text":"data_2325"}
{"text":"data_2326"}
{"text":"data_2327"}
{"text":"data_2328"}
{"text":"data_2329"}
{"text":"data_2330"}
{"text":"data_2331"}
{"text":"data_2332"}
{"text":"data_2333"}
{"text":"data_2334"}
{"text":"data_2335"}
{"text":"data_2336"}
{"text":"data_2337"}
{"text":"data_2338"}
{"text":"data_2339"}
{"text":"data_2340"}
{"text":"data_2341"}
{"text":"data_2342"}
{"text":"data_2343"}
{"text":"data_2344"}
{"text":"data_2345"}
{"text":"data_2346"}
{"text":"data_2347"}
{"text":"data_2348"}
{"text":"data_2349"}
{"text":"data_2350"}
{"text":"data_2351"}
{"text":"data_2352"}
{"text":"data_2353"}
{"text":"data_2354"}
{"text":"data_2355"}
{"text":"data_2356"}
{"text":"data_2357"}
{"text":"data_2358"}
{"text":"data_2359"}
{"text":"data_2360"}
{"text":"data_2361"}
{"text":"data_2362"}
{"text":"data_2363"}
{"text":"data_2364"}
{"text":"data_2365"}
{"text":"data_2366"}
{"text":"data_2367"}
{"text":"data_2368"}
{"text":"data_2369"}
{"text":"data_2370"}
{"text":"data_2371"}
{"text":"data_2372"}
{"text":"data_2373"}
{"text":"data_2374"}
{"text":"data_2375"}
{"text":"data_2376"}
{"text":"data_2377"}
{"text":"data_2378"}
{"text":"data_2379"}
{"text":"data_2380"}
{"text":"data_2381"}
{"text":"data_2382"}
{"text":"data_2383"}
{"text":"data_2384"}
{"text":"data_2385"}
{"text":"data_2386"}
{"text":"data_2387"}
{"text":"data_2388"}
{"text":"data_2389"}
{"text":"data_2390"}
{"text":"data_2391"}
{"text":"data_2392"}
{"text":"data_2393"}
{"text":"data_2394"}
{"text":"data_2395"}
{"text":"data_2396"}
{"text":"data_2397"}
{"text":"data_2398"}
{"text":"data_2399"}
{"text":"data_2400"}
{"text":"data_2401"}
{"text":"data_2402"}
{"text":"data_2403"}
{"text":"data_2404"}
{"text":"data_2405"}
{"text":"data_2406"}
{"text":"data_2407"}
{"text":"data_2408"}
{"text":"data_2409"}
{"text":"data_2410"}
{"text":"data_2411"}
{"text":"data_2412"}
{"text":"data_2413"}
{"text":"data_2414"}
{"text":"data_2415"}
{"text":"data_2416"}
{"text":"data_2417"}
{"text":"data_2418"}
{"text":"data_2419"}
{"text":"data_2420"}
{"text":"data_2421"}
{"text":"data_2422"}
{"text":"data_2423"}
{"text":"data_2424"}
{"text":"data_2425"}
{"text":"data_2426"}
{"text":"data_2427"}
{"text":"data_2428"}
{"text":"data_2429"}
{"text":"data_2430"}
{"text":"data_2431"}
{"text":"data_2432"}
{"text":"data_2433"}
{"text":"data_2434"}
{"text":"data_2435"}
{"text":"data_2436"}
{"text":"data_2437"}
{"text":"data_2438"}
{"text":"data_2439"}
{"text":"data_2440"}
{"text":"data_2441"}
{"text":"data_2442"}
{"text":"data_2443"}
{"text":"data_2444"}
{"text":"data_2445"}
{"text":"data_2446"}
{"text":"data_2447"}
{"text":"data_2448"}
{"text":"data_2449"}
{"text":"data_2450"}
{"text":"data_2451"}
{"text":"data_2452"}
{"text":"data_2453"}
{"text":"data_2454"}
{"text":"data_2455"}
{"text":"data_2456"}
{"text":"data_2457"}
{"text":"data_2458"}
{"text":"data_2459"}
{"text":"data_2460"}
{"text":"data_2461"}
{"text":"data_2462"}
{"text":"data_2463"}
{"text":"data_2464"}
{"text":"data_2465"}
{"text":"data_2466"}
{"text":"data_2467"}
{"text":"data_2468"}
{"text":"data_2469"}
{"text":"data_2470"}
{"text":"data_2471"}
{"text":"data_2472"}
{"text":"data_2473"}
{"text":"data_2474"}
{"text":"data_2475"}
{"text":"data_2476"}
{"text":"data_2477"}
{"text":"data_2478"}
{"text":"data_2479"}
{"text":"data_2480"}
{"text":"data_2481"}
{"text":"data_2482"}
{"text":"data_2483"}
{"text":"data_2484"}
{"text":"data_2485"}
{"text":"data_2486"}
{"text":"data_2487"}
{"text":"data_2488"}
{"text":"data_2489"}
{"text":"data_2490"}
{"text":"data_2491"}
{"text":"data_2492"}
{"text":"data_2493"}
{"text":"data_2494"}
{"text":"data_2495"}
{"text":"data_2496"}
{"text":"data_2497"}
{"text":"data_2498"}
{"text":"data_2499"}
{"text":"data_2500"}
{"text":"data_2501"}
{"text":"data_2502"}
{"text":"data_2503"}
{"text":"data_2504"}
{"text":"data_2505"}
{"text":"data_2506"}
{"text":"data_2507"}
{"text":"data_2508"}
{"text":"data_2509"}
{"text":"data_2510"}
{"text":"data_2511"}
{"text":"data_2512"}
{"text":"data_2513"}
{"text":"data_2514"}
{"text":"data_2515"}
{"text":"data_2516"}
{"text":"data_2517"}
{"text":"data_2518"}
{"text":"data_2519"}
{"text":"data_2520"}
{"text":"data_2521"}
{"text":"data_2522"}
{"text":"data_2523"}
{"text":"data_2524"}
{"text":"data_2525"}
{"text":"data_2526"}
{"text":"data_2527"}
{"text":"data_2528"}
{"text":"data_2529"}
{"text":"data_2530"}
{"text":"data_2531"}
{"text":"data_2532"}
{"text":"data_2533"}
{"text":"data_2534"}
{"text":"data_2535"}
{"text":"data_2536"}
{"text":"data_2537"}
{"text":"data_2538"}
{"text":"data_2539"}
{"text":"data_2540"}
{"text":"data_2541"}
{"text":"data_2542"}
{"text":"data_2543"}
{"text":"data_2544"}
{"text":"data_2545"}
{"text":"data_2546"}
{"text":"data_2547"}
{"text":"data_2548"}
{"text":"data_2549"}
{"text":"data_2550"}
{"text":"data_2551"}
{"text":"data_2552"}
{"text":"data_2553"}
{"text":"data_2554"}
{"text":"data_2555"}
{"text":"data_2556"}
{"text":"data_2557"}
{"text":"data_2558"}
{"text":"data_2559"}
{"text":"data_2560"}
{"text":"data_2561"}
{"text":"data_2562"}
{"text":"data_2563"}
{"text":"data_2564"}
{"text":"data_2565"}
{"text":"data_2566"}
{"text":"data_2567"}
{"text":"data_2568"}
{"text":"data_2569"}
{"text":"data_2570"}
{"text":"data_2571"}
{"text":"data_2572"}
{"text":"data_2573"}
{"text":"data_2574"}
{"text":"data_2575"}
{"text":"data_2576"}
{"text":"data_2577"}
{"text":"data_2578"}
{"text":"data_2579"}
{"text":"data_2580"}
{"text":"data_2581"}
{"text":"data_2582"}
{"text":"data_2583"}
{"text":"data_2584"}
{"text":"data_2585"}
{"text":"data_2586"}
{"text":"data_2587"}
{"text":"data_2588"}
{"text":"data_2589"}
{"text":"data_2590"}
{"text":"data_2591"}
{"text":"data_2592"}
{"text":"data_2593"}
{"text":"data_2594"}
{"text":"data_2595"}
{"text":"data_2596"}
{"text":"data_2597"}
{"text":"data_2598"}
{"text":"data_2599"}
{"text":"data_2600"}
{"text":"data_2601"}
{"text":"data_2602"}
{"text":"data_2603"}
{"text":"data_2604"}
{"text":"data_2605"}
{"text":"data_2606"}
{"text":"data_2607"}
{"text":"data_2608"}
{"text":"data_2609"}
{"text":"data_2610"}
{"text":"data_2611"}
{"text":"data_2612"}
{"text":"data_2613"}
{"text":"data_2614"}
{"text":"data_2615"}
{"text":"data_2616"}
{"text":"data_2617"}
{"text":"data_2618"}
{"text":"data_2619"}
{"text":"data_2620"}
{"text":"data_2621"}
{"text":"data_2622"}
{"text":"data_2623"}
{"text":"data_2624"}
{"text":"data_2625"}
{"text":"data_2626"}
{"text":"data_2627"}
{"text":"data_2628"}
{"text":"data_2629"}
{"text":"data_2630"}
{"text":"data_2631"}
{"text":"data_2632"}
{"text":"data_2633"}
{"text":"data_2634"}
{"text":"data_2635"}
{"text":"data_2636"}
{"text":"data_2637"}
{"text":"data_2638"}
{"text":"data_2639"}
{"text":"data_2640"}
{"text":"data_2641"}
{"text":"data_2642"}
{"text":"data_2643"}
{"text":"data_2644"}
{"text":"data_2645"}
{"text":"data_2646"}
{"text":"data_2647"}
{"text":"data_2648"}
{"text":"data_2649"}
{"text":"data_2650"}
{"text":"data_2651"}
{"text":"data_2652"}
{"text":"data_2653"}
{"text":"data_2654"}
{"text":"data_2655"}
{"text":"data_2656"}
{"text":"data_2657"}
{"text":"data_2658"}
{"text":"data_2659"}
{"text":"data_2660"}
{"text":"data_2661"}
{"text":"data_2662"}
{"text":"data_2663"}
{"text":"data_2664"}
{"text":"data_2665"}
{"text":"data_2666"}
{"text":"data_2667"}
{"text":"data_2668"}
{"text":"data_2669"}
{"text":"data_2670"}
{"text":"data_2671"}
{"text":"data_2672"}
{"text":"data_2673"}
{"text":"data_2674"}
{"text":"data_2675"}
{"text":"data_2676"}
{"text":"data_2677"}
{"text":"data_2678"}
{"text":"data_2679"}
{"text":"data_2680"}
{"text":"data_2681"}
{"text":"data_2682"}
{"text":"data_2683"}
{"text":"data_2684"}
{"text":"data_2685"}
{"text":"data_2686"}
{"text":"data_2687"}
{"text":"data_2688"}
{"text":"data_2689"}
{"text":"data_2690"}
{"text":"data_2691"}
{"text":"data_2692"}
{"text":"data_2693"}
{"text":"data_2694"}
{"text":"data_2695"}
{"text":"data_2696"}
{"text":"data_2697"}
{"text":"data_2698"}
{"text":"data_2699"}
{"text":"data_2700"}
{"text":"data_2701"}
{"text":"data_2702"}
{"text":"data_2703"}
{"text":"data_2704"}
{"text":"data_2705"}
{"text":"data_2706"}
{"text":"data_2707"}
{"text":"data_2708"}
{"text":"data_2709"}
{"text":"data_2710"}
{"text":"data_2711"}
{"text":"data_2712"}
{"text":"data_2713"}
{"text":"data_2714"}
{"text":"data_2715"}
{"text":"data_2716"}
{"text":"data_2717"}
{"text":"data_2718"}
{"text":"data_2719"}
{"text":"data_2720"}
{"text":"data_2721"}
{"text":"data_2722"}
{"text":"data_2723"}
{"text":"data_2724"}
{"text":"data_2725"}
{"text":"data_2726"}
{"text":"data_2727"}
{"text":"data_2728"}
{"text":"data_2729"}
{"text":"data_2730"}
{"text":"data_2731"}
{"text":"data_2732"}
{"text":"data_2733"}
{"text":"data_2734"}
{"text":"data_2735"}
{"text":"data_2736"}
{"text":"data_2737"}
{"text":"data_2738"}
{"text":"data_2739"}
{"text":"data_2740"}
{"text":"data_2741"}
{"text":"data_2742"}
{"text":"data_2743"}
{"text":"data_2744"}
{"text":"data_2745"}
{"text":"data_2746"}
{"text":"data_2747"}
{"text":"data_2748"}
{"text":"data_2749"}
{"text":"data_2750"}
{"text":"data_2751"}
{"text":"data_2752"}
{"text":"data_2753"}
{"text":"data_2754"}
{"text":"data_2755"}
{"text":"data_2756"}
{"text":"data_2757"}
{"text":"data_2758"}
{"text":"data_2759"}
{"text":"data_2760"}
{"text":"data_2761"}
{"text":"data_2762"}
{"text":"data_2763"}
{"text":"data_2764"}
{"text":"data_2765"}
{"text":"data_2766"}
{"text":"data_2767"}
{"text":"data_2768"}
{"text":"data_2769"}
{"text":"data_2770"}
{"text":"data_2771"}
{"text":"data_2772"}
{"text":"data_2773"}
{"text":"data_2774"}
{"text":"data_2775"}
{"text":"data_2776"}
{"text":"data_2777"}
{"text":"data_2778"}
{"text":"data_2779"}
{"text":"data_2780"}
{"text":"data_2781"}
{"text":"data_2782"}
{"text":"data_2783"}
{"text":"data_2784"}
{"text":"data_2785"}
{"text":"data_2786"}
{"text":"data_2787"}
{"text":"data_2788"}
{"text":"data_2789"}
{"text":"data_2790"}
{"text":"data_2791"}
{"text":"data_2792"}
{"text":"data_2793"}
{"text":"data_2794"}
{"text":"data_2795"}
{"text":"data_2796"}
{"text":"data_2797"}
{"text":"data_2798"}
{"text":"data_2799"}
{"text":"data_2800"}
{"text":"data_2801"}
{"text":"data_2802"}
{"text":"data_2803"}
{"text":"data_2804"}
{"text":"data_2805"}
{"text":"data_2806"}
{"text":"data_2807"}
{"text":"data_2808"}
{"text":"data_2809"}
{"text":"data_2810"}
{"text":"data_2811"}
{"text":"data_2812"}
{"text":"data_2813"}
{"text":"data_2814"}
{"text":"data_2815"}
{"text":"data_2816"}
{"text":"data_2817"}
{"text":"data_2818"}
{"text":"data_2819"}
{"text":"data_2820"}
{"text":"data_2821"}
{"text":"data_2822"}
{"text":"data_2823"}
{"text":"data_2824"}
{"text":"data_2825"}
{"text":"data_2826"}
{"text":"data_2827"}
{"text":"data_2828"}
{"text":"data_2829"}
{"text":"data_2830"}
{"text":"data_2831"}
{"text":"data_2832"}
{"text":"data_2833"}
{"text":"data_2834"}
{"text":"data_2835"}
{"text":"data_2836"}
{"text":"data_2837"}
{"text":"data_2838"}
{"text":"data_2839"}
{"text":"data_2840"}
{"text":"data_2841"}
{"text":"data_2842"}
{"text":"data_2843"}
{"text":"data_2844"}
{"text":"data_2845"}
{"text":"data_2846"}
{"text":"data_2847"}
{"text":"data_2848"}
{"text":"data_2849"}
{"text":"data_2850"}
{"text":"data_2851"}
{"text":"data_2852"}
{"text":"data_2853"}
{"text":"data_2854"}
{"text":"data_2855"}
{"text":"data_2856"}
{"text":"data_2857"}
{"text":"data_2858"}
{"text":"data_2859"}
{"text":"data_2860"}
{"text":"data_2861"}
{"text":"data_2862"}
{"text":"data_2863"}
{"text":"data_2864"}
{"text":"data_2865"}
{"text":"data_2866"}
{"text":"data_2867"}
{"text":"data_2868"}
{"text":"data_2869"}
{"text":"data_2870"}
{"text":"data_2871"}
{"text":"data_2872"}
{"text":"data_2873"}
{"text":"data_2874"}
{"text":"data_2875"}
{"text":"data_2876"}
{"text":"data_2877"}
{"text":"data_2878"}
{"text":"data_2879"}
{"text":"data_2880"}
{"text":"data_2881"}
{"text":"data_2882"}
{"text":"data_2883"}
{"text":"data_2884"}
{"text":"data_2885"}
{"text":"data_2886"}
{"text":"data_2887"}
{"text":"data_2888"}
{"text":"data_2889"}
{"text":"data_2890"}
{"text":"data_2891"}
{"text":"data_2892"}
{"text":"data_2893"}
{"text":"data_2894"}
{"text":"data_2895"}
{"text":"data_2896"}
{"text":"data_2897"}
{"text":"data_2898"}
{"text":"data_2899"}
{"text":"data_2900"}
{"text":"data_2901"}
{"text":"data_2902"}
{"text":"data_2903"}
{"text":"data_2904"}
{"text":"data_2905"}
{"text":"data_2906"}
{"text":"data_2907"}
{"text":"data_2908"}
{"text":"data_2909"}
{"text":"data_2910"}
{"text":"data_2911"}
{"text":"data_2912"}
{"text":"data_2913"}
{"text":"data_2914"}
{"text":"data_2915"}
{"text":"data_2916"}
{"text":"data_2917"}
{"text":"data_2918"}
{"text":"data_2919"}
{"text":"data_2920"}
{"text":"data_2921"}
{"text":"data_2922"}
{"text":"data_2923"}
{"text":"data_2924"}
{"text":"data_2925"}
{"text":"data_2926"}
{"text":"data_2927"}
{"text":"data_2928"}
{"text":"data_2929"}
{"text":"data_2930"}
{"text":"data_2931"}
{"text":"data_2932"}
{"text":"data_2933"}
{"text":"data_2934"}
{"text":"data_2935"}
{"text":"data_2936"}
{"text":"data_2937"}
{"text":"data_2938"}
{"text":"data_2939"}
{"text":"data_2940"}
{"text":"data_2941"}
{"text":"data_2942"}
{"text":"data_2943"}
{"text":"data_2944"}
{"text":"data_2945"}
{"text":"data_2946"}
{"text":"data_2947"}
{"text":"data_2948"}
{"text":"data_2949"}
{"text":"data_2950"}
{"text":"data_2951"}
{"text":"data_2952"}
{"text":"data_2953"}
{"text":"data_2954"}
{"text":"data_2955"}
{"text":"data_2956"}
{"text":"data_2957"}
{"text":"data_2958"}
{"text":"data_2959"}
{"text":"data_2960"}
{"text":"data_2961"}
{"text":"data_2962"}
{"text":"data_2963"}
{"text":"data_2964"}
{"text":"data_2965"}
{"text":"data_2966"}
{"text":"data_2967"}
{"text":"data_2968"}
{"text":"data_2969"}
{"text":"data_2970"}
{"text":"data_2971"}
{"text":"data_2972"}
{"text":"data_2973"}
{"text":"data_2974"}
{"text":"data_2975"}
{"text":"data_2976"}
{"text":"data_2977"}
{"text":"data_2978"}
{"text":"data_2979"}
{"text":"data_2980"}
{"text":"data_2981"}
{"text":"data_2982"}
{"text":"data_2983"}
{"text":"data_2984"}
{"text":"data_2985"}
{"text":"data_2986"}
{"text":"data_2987"}
{"text":"data_2988"}
{"text":"data_2989"}
{"text":"data_2990"}
{"text":"data_2991"}
{"text":"data_2992"}
{"text":"data_2993"}
{"text":"data_2994"}
{"text":"data_2995"}
{"text":"data_2996"}
{"text":"data_2997"}
{"text":"data_2998"}
{"text":"data_2999"}
{"text":"data_3000"}
{"text":"data_3001"}
{"text":"data_3002"}
{"text":"data_3003"}
{"text":"data_3004"}
{"text":"data_3005"}
{"text":"data_3006"}
{"text":"data_3007"}
{"text":"data_3008"}
{"text":"data_3009"}
{"text":"data_3010"}
{"text":"data_3011"}
{"text":"data_3012"}
{"text":"data_3013"}
{"text":"data_3014"}
{"text":"data_3015"}
{"text":"data_3016"}
{"text":"data_3017"}
{"text":"data_3018"}
{"text":"data_3019"}
{"text":"data_3020"}
{"text":"data_3021"}
{"text":"data_3022"}
{"text":"data_3023"}
{"text":"data_3024"}
{"text":"data_3025"}
{"text":"data_3026"}
{"text":"data_3027"}
{"text":"data_3028"}
{"text":"data_3029"}
{"text":"data_3030"}
{"text":"data_3031"}
{"text":"data_3032"}
{"text":"data_3033"}
{"text":"data_3034"}
{"text":"data_3035"}
{"text":"data_3036"}
{"text":"data_3037"}
{"text":"data_3038"}
{"text":"data_3039"}
{"text":"data_3040"}
{"text":"data_3041"}
{"text":"data_3042"}
{"text":"data_3043"}
{"text":"data_3044"}
{"text":"data_3045"}
{"text":"data_3046"}
{"text":"data_3047"}
{"text":"data_3048"}
{"text":"data_3049"}
{"text":"data_3050"}
{"text":"data_3051"}
{"text":"data_3052"}
{"text":"data_3053"}
{"text":"data_3054"}
{"text":"data_3055"}
{"text":"data_3056"}
{"text":"data_3057"}
{"text":"data_3058"}
{"text":"data_3059"}
{"text":"data_3060"}
{"text":"data_3061"}
{"text":"data_3062"}
{"text":"data_3063"}
{"text":"data_3064"}
{"text":"data_3065"}
{"text":"data_3066"}
{"text":"data_3067"}
{"text":"data_3068"}
{"text":"data_3069"}
{"text":"data_3070"}
{"text":"data_3071"}
{"text":"data_3072"}
{"text":"data_3073"}
{"text":"data_3074"}
{"text":"data_3075"}
{"text":"data_3076"}
{"text":"data_3077"}
{"text":"data_3078"}
{"text":"data_3079"}
{"text":"data_3080"}
{"text":"data_3081"}
{"text":"data_3082"}
{"text":"data_3083"}
{"text":"data_3084"}
{"text":"data_3085"}
{"text":"data_3086"}
{"text":"data_3087"}
{"text":"data_3088"}
{"text":"data_3089"}
{"text":"data_3090"}
{"text":"data_3091"}
{"text":"data_3092"}
{"text":"data_3093"}
{"text":"data_3094"}
{"text":"data_3095"}
{"text":"data_3096"}
{"text":"data_3097"}
{"text":"data_3098"}
{"text":"data_3099"}
{"text":"data_3100"}
{"text":"data_3101"}
{"text":"data_3102"}
{"text":"data_3103"}
{"text":"data_3104"}
{"text":"data_3105"}
{"text":"data_3106"}
{"text":"data_3107"}
{"text":"data_3108"}
{"text":"data_3109"}
{"text":"data_3110"}
{"text":"data_3111"}
{"text":"data_3112"}
{"text":"data_3113"}
{"text":"data_3114"}
{"text":"data_3115"}
{"text":"data_3116"}
{"text":"data_3117"}
{"text":"data_3118"}
{"text":"data_3119"}
{"text":"data_3120"}
{"text":"data_3121"}
{"text":"data_3122"}
{"text":"data_3123"}
{"text":"data_3124"}
{"text":"data_3125"}
{"text":"data_3126"}
{"text":"data_3127"}
{"text":"data_3128"}
{"text":"data_3129"}
{"text":"data_3130"}
{"text":"data_3131"}
{"text":"data_3132"}
{"text":"data_3133"}
{"text":"data_3134"}
{"text":"data_3135"}
{"text":"data_3136"}
{"text":"data_3137"}
{"text":"data_3138"}
{"text":"data_3139"}
{"text":"data_3140"}
{"text":"data_3141"}
{"text":"data_3142"}
{"text":"data_3143"}
{"text":"data_3144"}
{"text":"data_3145"}
{"text":"data_3146"}
{"text":"data_3147"}
{"text":"data_3148"}
{"text":"data_3149"}
{"text":"data_3150"}
{"text":"data_3151"}
{"text":"data_3152"}
{"text":"data_3153"}
{"text":"data_3154"}
{"text":"data_3155"}
{"text":"data_3156"}
{"text":"data_3157"}
{"text":"data_3158"}
{"text":"data_3159"}
{"text":"data_3160"}
{"text":"data_3161"}
{"text":"data_3162"}
{"text":"data_3163"}
{"text":"data_3164"}
{"text":"data_3165"}
{"text":"data_3166"}
{"text":"data_3167"}
{"text":"data_3168"}
{"text":"data_3169"}
{"text":"data_3170"}
{"text":"data_3171"}
{"text":"data_3172"}
{"text":"data_3173"}
{"text":"data_3174"}
{"text":"data_3175"}
{"text":"data_3176"}
{"text":"data_3177"}
{"text":"data_3178"}
{"text":"data_3179"}
{"text":"data_3180"}
{"text":"data_3181"}
{"text":"data_3182"}
{"text":"data_3183"}
{"text":"data_3184"}
{"text":"data_3185"}
{"text":"data_3186"}
{"text":"data_3187"}
{"text":"data_3188"}
{"text":"data_3189"}
{"text":"data_3190"}
{"text":"data_3191"}
{"text":"data_3192"}
{"text":"data_3193"}
{"text":"data_3194"}
{"text":"data_3195"}
{"text":"data_3196"}
{"text":"data_3197"}
{"text":"data_3198"}
{"text":"data_3199"}
{"text":"data_3200"}
{"text":"data_3201"}
{"text":"data_3202"}
{"text":"data_3203"}
{"text":"data_3204"}
{"text":"data_3205"}
{"text":"data_3206"}
{"text":"data_3207"}
{"text":"data_3208"}
{"text":"data_3209"}
{"text":"data_3210"}
{"text":"data_3211"}
{"text":"data_3212"}
{"text":"data_3213"}
{"text":"data_3214"}
{"text":"data_3215"}
{"text":"data_3216"}
{"text":"data_3217"}
{"text":"data_3218"}
{"text":"data_3219"}
{"text":"data_3220"}
{"text":"data_3221"}
{"text":"data_3222"}
{"text":"data_3223"}
{"text":"data_3224"}
{"text":"data_3225"}
{"text":"data_3226"}
{"text":"data_3227"}
{"text":"data_3228"}
{"text":"data_3229"}
{"text":"data_3230"}
{"text":"data_3231"}
{"text":"data_3232"}
{"text":"data_3233"}
{"text":"data_3234"}
{"text":"data_3235"}
{"text":"data_3236"}
{"text":"data_3237"}
{"text":"data_3238"}
{"text":"data_3239"}
{"text":"data_3240"}
{"text":"data_3241"}
{"text":"data_3242"}
{"text":"data_3243"}
{"text":"data_3244"}
{"text":"data_3245"}
{"text":"data_3246"}
{"text":"data_3247"}
{"text":"data_3248"}
{"text":"data_3249"}
{"text":"data_3250"}
{"text":"data_3251"}
{"text":"data_3252"}
{"text":"data_3253"}
{"text":"data_3254"}
{"text":"data_3255"}
{"text":"data_3256"}
{"text":"data_3257"}
{"text":"data_3258"}
{"text":"data_3259"}
{"text":"data_3260"}
{"text":"data_3261"}
{"text":"data_3262"}
{"text":"data_3263"}
{"text":"data_3264"}
{"text":"data_3265"}
{"text":"data_3266"}
{"text":"data_3267"}
{"text":"data_3268"}
{"text":"data_3269"}
{"text":"data_3270"}
{"text":"data_3271"}
{"text":"data_3272"}
{"text":"data_3273"}
{"text":"data_3274"}
{"text":"data_3275"}
{"text":"data_3276"}
{"text":"data_3277"}
{"text":"data_3278"}
{"text":"data_3279"}
{"text":"data_3280"}
{"text":"data_3281"}
{"text":"data_3282"}
{"text":"data_3283"}
{"text":"data_3284"}
{"text":"data_3285"}
{"text":"data_3286"}
{"text":"data_3287"}
{"text":"data_3288"}
{"text":"data_3289"}
{"text":"data_3290"}
{"text":"data_3291"}
{"text":"data_3292"}
{"text":"data_3293"}
{"text":"data_3294"}
{"text":"data_3295"}
{"text":"data_3296"}
{"text":"data_3297"}
{"text":"data_3298"}
{"text":"data_3299"}
{"text":"data_3300"}
{"text":"data_3301"}
{"text":"data_3302"}
{"text":"data_3303"}
{"text":"data_3304"}
{"text":"data_3305"}
{"text":"data_3306"}
{"text":"data_3307"}
{"text":"data_3308"}
{"text":"data_3309"}
{"text":"data_3310"}
{"text":"data_3311"}
{"text":"data_3312"}
{"text":"data_3313"}
{"text":"data_3314"}
{"text":"data_3315"}
{"text":"data_3316"}
{"text":"data_3317"}
{"text":"data_3318"}
{"text":"data_3319"}
{"text":"data_3320"}
{"text":"data_3321"}
{"text":"data_3322"}
{"text":"data_3323"}
{"text":"data_3324"}
{"text":"data_3325"}
{"text":"data_3326"}
{"text":"data_3327"}
{"text":"data_3328"}
{"text":"data_3329"}
{"text":"data_3330"}
{"text":"data_3331"}
{"text":"data_3332"}
{"text":"data_3333"}
{"text":"data_3334"}
{"text":"data_3335"}
{"text":"data_3336"}
{"text":"data_3337"}
{"text":"data_3338"}
{"text":"data_3339"}
{"text":"data_3340"}
{"text":"data_3341"}
{"text":"data_3342"}
{"text":"data_3343"}
{"text":"data_3344"}
{"text":"data_3345"}
{"text":"data_3346"}
{"text":"data_3347"}
{"text":"data_3348"}
{"text":"data_3349"}
{"text":"data_3350"}
{"text":"data_3351"}
{"text":"data_3352"}
{"text":"data_3353"}
{"text":"data_3354"}
{"text":"data_3355"}
{"text":"data_3356"}
{"text":"data_3357"}
{"text":"data_3358"}
{"text":"data_3359"}
{"text":"data_3360"}
{"text":"data_3361"}
{"text":"data_3362"}
{"text":"data_3363"}
{"text":"data_3364"}
{"text":"data_3365"}
{"text":"data_3366"}
{"text":"data_3367"}
{"text":"data_3368"}
{"text":"data_3369"}
{"text":"data_3370"}
{"text":"data_3371"}
{"text":"data_3372"}
{"text":"data_3373"}
{"text":"data_3374"}
{"text":"data_3375"}
{"text":"data_3376"}
{"text":"data_3377"}
{"text":"data_3378"}
{"text":"data_3379"}
{"text":"data_3380"}
{"text":"data_3381"}
{"text":"data_3382"}
{"text":"data_3383"}
{"text":"data_3384"}
{"text":"data_3385"}
{"text":"data_3386"}
{"text":"data_3387"}
{"text":"data_3388"}
{"text":"data_3389"}
{"text":"data_3390"}
{"text":"data_3391"}
{"text":"data_3392"}
{"text":"data_3393"}
{"text":"data_3394"}
{"text":"data_3395"}
{"text":"data_3396"}
{"text":"data_3397"}
{"text":"data_3398"}
{"text":"data_3399"}
{"text":"data_3400"}
{"text":"data_3401"}
{"text":"data_3402"}
{"text":"data_3403"}
{"text":"data_3404"}
{"text":"data_3405"}
{"text":"data_3406"}
{"text":"data_3407"}
{"text":"data_3408"}
{"text":"data_3409"}
{"text":"data_3410"}
{"text":"data_3411"}
{"text":"data_3412"}
{"text":"data_3413"}
{"text":"data_3414"}
{"text":"data_3415"}
{"text":"data_3416"}
{"text":"data_3417"}
{"text":"data_3418"}
{"text":"data_3419"}
{"text":"data_3420"}
{"text":"data_3421"}
{"text":"data_3422"}
{"text":"data_3423"}
{"text":"data_3424"}
{"text":"data_3425"}
{"text":"data_3426"}
{"text":"data_3427"}
{"text":"data_3428"}
{"text":"data_3429"}
{"text":"data_3430"}
{"text":"data_3431"}
{"text":"data_3432"}
{"text":"data_3433"}
{"text":"data_3434"}
{"text":"data_3435"}
{"text":"data_3436"}
{"text":"data_3437"}
{"text":"data_3438"}
{"text":"data_3439"}
{"text":"data_3440"}
{"text":"data_3441"}
{"text":"data_3442"}
{"text":"data_3443"}
{"text":"data_3444"}
{"text":"data_3445"}
{"text":"data_3446"}
{"text":"data_3447"}
{"text":"data_3448"}
{"text":"data_3449"}
{"text":"data_3450"}
{"text":"data_3451"}
{"text":"data_3452"}
{"text":"data_3453"}
{"text":"data_3454"}
{"text":"data_3455"}
{"text":"data_3456"}
{"text":"data_3457"}
{"text":"data_3458"}
{"text":"data_3459"}
{"text":"data_3460"}
{"text":"data_3461"}
{"text":"data_3462"}
{"text":"data_3463"}
{"text":"data_3464"}
{"text":"data_3465"}
{"text":"data_3466"}
{"text":"data_3467"}
{"text":"data_3468"}
{"text":"data_3469"}
{"text":"data_3470"}
{"text":"data_3471"}
{"text":"data_3472"}
{"text":"data_3473"}
{"text":"data_3474"}
{"text":"data_3475"}
{"text":"data_3476"}
{"text":"data_3477"}
{"text":"data_3478"}
{"text":"data_3479"}
{"text":"data_3480"}
{"text":"data_3481"}
{"text":"data_3482"}
{"text":"data_3483"}
{"text":"data_3484"}
{"text":"data_3485"}
{"text":"data_3486"}
{"text":"data_3487"}
{"text":"data_3488"}
{"text":"data_3489"}
{"text":"data_3490"}
{"text":"data_3491"}
{"text":"data_3492"}
{"text":"data_3493"}
{"text":"data_3494"}
{"text":"data_3495"}
{"text":"data_3496"}
{"text":"data_3497"}
{"text":"data_3498"}
{"text":"data_3499"}
{"text":"data_3500"}
{"text":"data_3501"}
{"text":"data_3502"}
{"text":"data_3503"}
{"text":"data_3504"}
{"text":"data_3505"}
{"text":"data_3506"}
{"text":"data_3507"}
{"text":"data_3508"}
{"text":"data_3509"}
{"text":"data_3510"}
{"text":"data_3511"}
{"text":"data_3512"}
{"text":"data_3513"}
{"text":"data_3514"}
{"text":"data_3515"}
{"text":"data_3516"}
{"text":"data_3517"}
{"text":"data_3518"}
{"text":"data_3519"}
{"text":"data_3520"}
{"text":"data_3521"}
{"text":"data_3522"}
{"text":"data_3523"}
{"text":"data_3524"}
{"text":"data_3525"}
{"text":"data_3526"}
{"text":"data_3527"}
{"text":"data_3528"}
{"text":"data_3529"}
{"text":"data_3530"}
{"text":"data_3531"}
{"text":"data_3532"}
{"text":"data_3533"}
{"text":"data_3534"}
{"text":"data_3535"}
{"text":"data_3536"}
{"text":"data_3537"}
{"text":"data_3538"}
{"text":"data_3539"}
{"text":"data_3540"}
{"text":"data_3541"}
{"text":"data_3542"}
{"text":"data_3543"}
{"text":"data_3544"}
{"text":"data_3545"}
{"text":"data_3546"}
{"text":"data_3547"}
{"text":"data_3548"}
{"text":"data_3549"}
{"text":"data_3550"}
{"text":"data_3551"}
{"text":"data_3552"}
{"text":"data_3553"}
{"text":"data_3554"}
{"text":"data_3555"}
{"text":"data_3556"}
{"text":"data_3557"}
{"text":"data_3558"}
{"text":"data_3559"}
{"text":"data_3560"}
{"text":"data_3561"}
{"text":"data_3562"}
{"text":"data_3563"}
{"text":"data_3564"}
{"text":"data_3565"}
{"text":"data_3566"}
{"text":"data_3567"}
{"text":"data_3568"}
{"text":"data_3569"}
{"text":"data_3570"}
{"text":"data_3571"}
{"text":"data_3572"}
{"text":"data_3573"}
{"text":"data_3574"}
{"text":"data_3575"}
{"text":"data_3576"}
{"text":"data_3577"}
{"text":"data_3578"}
{"text":"data_3579"}
{"text":"data_3580"}
{"text":"data_3581"}
{"text":"data_3582"}
{"text":"data_3583"}
{"text":"data_3584"}
{"text":"data_3585"}
{"text":"data_3586"}
{"text":"data_3587"}
{"text":"data_3588"}
{"text":"data_3589"}
{"text":"data_3590"}
{"text":"data_3591"}
{"text":"data_3592"}
{"text":"data_3593"}
{"text":"data_3594"}
{"text":"data_3595"}
{"text":"data_3596"}
{"text":"data_3597"}
{"text":"data_3598"}
{"text":"data_3599"}
{"text":"data_3600"}
{"text":"data_3601"}
{"text":"data_3602"}
{"text":"data_3603"}
{"text":"data_3604"}
{"text":"data_3605"}
{"text":"data_3606"}
{"text":"data_3607"}
{"text":"data_3608"}
{"text":"data_3609"}
{"text":"data_3610"}
{"text":"data_3611"}
{"text":"data_3612"}
{"text":"data_3613"}
{"text":"data_3614"}
{"text":"data_3615"}
{"text":"data_3616"}
{"text":"data_3617"}
{"text":"data_3618"}
{"text":"data_3619"}
{"text":"data_3620"}
{"text":"data_3621"}
{"text":"data_3622"}
{"text":"data_3623"}
{"text":"data_3624"}
{"text":"data_3625"}
{"text":"data_3626"}
{"text":"data_3627"}
{"text":"data_3628"}
{"text":"data_3629"}
{"text":"data_3630"}
{"text":"data_3631"}
{"text":"data_3632"}
{"text":"data_3633"}
{"text":"data_3634"}
{"text":"data_3635"}
{"text":"data_3636"}
{"text":"data_3637"}
{"text":"data_3638"}
{"text":"data_3639"}
{"text":"data_3640"}
{"text":"data_3641"}
{"text":"data_3642"}
{"text":"data_3643"}
{"text":"data_3644"}
{"text":"data_3645"}
{"text":"data_3646"}
{"text":"data_3647"}
{"text":"data_3648"}
{"text":"data_3649"}
{"text":"data_3650"}
{"text":"data_3651"}
{"text":"data_3652"}
{"text":"data_3653"}
{"text":"data_3654"}
{"text":"data_3655"}
{"text":"data_3656"}
{"text":"data_3657"}
{"text":"data_3658"}
{"text":"data_3659"}
{"text":"data_3660"}
{"text":"data_3661"}
{"text":"data_3662"}
{"text":"data_3663"}
{"text":"data_3664"}
{"text":"data_3665"}
{"text":"data_3666"}
{"text":"data_3667"}
{"text":"data_3668"}
{"text":"data_3669"}
{"text":"data_3670"}
{"text":"data_3671"}
{"text":"data_3672"}
{"text":"data_3673"}
{"text":"data_3674"}
{"text":"data_3675"}
{"text":"data_3676"}
{"text":"data_3677"}
{"text":"data_3678"}
{"text":"data_3679"}
{"text":"data_3680"}
{"text":"data_3681"}
{"text":"data_3682"}
{"text":"data_3683"}
{"text":"data_3684"}
{"text":"data_3685"}
{"text":"data_3686"}
{"text":"data_3687"}
{"text":"data_3688"}
{"text":"data_3689"}
{"text":"data_3690"}
{"text":"data_3691"}
{"text":"data_3692"}
{"text":"data_3693"}
{"text":"data_3694"}
{"text":"data_3695"}
{"text":"data_3696"}
{"text":"data_3697"}
{"text":"data_3698"}
{"text":"data_3699"}
{"text":"data_3700"}
{"text":"data_3701"}
{"text":"data_3702"}
{"text":"data_3703"}
{"text":"data_3704"}
{"text":"data_3705"}
{"text":"data_3706"}
{"text":"data_3707"}
{"text":"data_3708"}
{"text":"data_3709"}
{"text":"data_3710"}
{"text":"data_3711"}
{"text":"data_3712"}
{"text":"data_3713"}
{"text":"data_3714"}
{"text":"data_3715"}
{"text":"data_3716"}
{"text":"data_3717"}
{"text":"data_3718"}
{"text":"data_3719"}
{"text":"data_3720"}
{"text":"data_3721"}
{"text":"data_3722"}
{"text":"data_3723"}
{"text":"data_3724"}
{"text":"data_3725"}
{"text":"data_3726"}
{"text":"data_3727"}
{"text":"data_3728"}
{"text":"data_3729"}
{"text":"data_3730"}
{"text":"data_3731"}
{"text":"data_3732"}
{"text":"data_3733"}
{"text":"data_3734"}
{"text":"data_3735"}
{"text":"data_3736"}
{"text":"data_3737"}
{"text":"data_3738"}
{"text":"data_3739"}
{"text":"data_3740"}
{"text":"data_3741"}
{"text":"data_3742"}
{"text":"data_3743"}
{"text":"data_3744"}
{"text":"data_3745"}
{"text":"data_3746"}
{"text":"data_3747"}
{"text":"data_3748"}
{"text":"data_3749"}
{"text":"data_3750"}
{"text":"data_3751"}
{"text":"data_3752"}
{"text":"data_3753"}
{"text":"data_3754"}
{"text":"data_3755"}
{"text":"data_3756"}
{"text":"data_3757"}
{"text":"data_3758"}
{"text":"data_3759"}
{"text":"data_3760"}
{"text":"data_3761"}
{"text":"data_3762"}
{"text":"data_3763"}
{"text":"data_3764"}
{"text":"data_3765"}
{"text":"data_3766"}
{"text":"data_3767"}
{"text":"data_3768"}
{"text":"data_3769"}
{"text":"data_3770"}
{"text":"data_3771"}
{"text":"data_3772"}
{"text":"data_3773"}
{"text":"data_3774"}
{"text":"data_3775"}
{"text":"data_3776"}
{"text":"data_3777"}
{"text":"data_3778"}
{"text":"data_3779"}
{"text":"data_3780"}
{"text":"data_3781"}
{"text":"data_3782"}
{"text":"data_3783"}
{"text":"data_3784"}
{"text":"data_3785"}
{"text":"data_3786"}
{"text":"data_3787"}
{"text":"data_3788"}
{"text":"data_3789"}
{"text":"data_3790"}
{"text":"data_3791"}
{"text":"data_3792"}
{"text":"data_3793"}
{"text":"data_3794"}
{"text":"data_3795"}
{"text":"data_3796"}
{"text":"data_3797"}
{"text":"data_3798"}
{"text":"data_3799"}
{"text":"data_3800"}
{"text":"data_3801"}
{"text":"data_3802"}
{"text":"data_3803"}
{"text":"data_3804"}
{"text":"data_3805"}
{"text":"data_3806"}
{"text":"data_3807"}
{"text":"data_3808"}
{"text":"data_3809"}
{"text":"data_3810"}
{"text":"data_3811"}
{"text":"data_3812"}
{"text":"data_3813"}
{"text":"data_3814"}
{"text":"data_3815"}
{"text":"data_3816"}
{"text":"data_3817"}
{"text":"data_3818"}
{"text":"data_3819"}
{"text":"data_3820"}
{"text":"data_3821"}
{"text":"data_3822"}
{"text":"data_3823"}
{"text":"data_3824"}
{"text":"data_3825"}
{"text":"data_3826"}
{"text":"data_3827"}
{"text":"data_3828"}
{"text":"data_3829"}
{"text":"data_3830"}
{"text":"data_3831"}
{"text":"data_3832"}
{"text":"data_3833"}
{"text":"data_3834"}
{"text":"data_3835"}
{"text":"data_3836"}
{"text":"data_3837"}
{"text":"data_3838"}
{"text":"data_3839"}
{"text":"data_3840"}
{"text":"data_3841"}
{"text":"data_3842"}
{"text":"data_3843"}
{"text":"data_3844"}
{"text":"data_3845"}
{"text":"data_3846"}
{"text":"data_3847"}
{"text":"data_3848"}
{"text":"data_3849"}
{"text":"data_3850"}
{"text":"data_3851"}
{"text":"data_3852"}
{"text":"data_3853"}
{"text":"data_3854"}
{"text":"data_3855"}
{"text":"data_3856"}
{"text":"data_3857"}
{"text":"data_3858"}
{"text":"data_3859"}
{"text":"data_3860"}
{"text":"data_3861"}
{"text":"data_3862"}
{"text":"data_3863"}
{"text":"data_3864"}
{"text":"data_3865"}
{"text":"data_3866"}
{"text":"data_3867"}
{"text":"data_3868"}
{"text":"data_3869"}
{"text":"data_3870"}
{"text":"data_3871"}
{"text":"data_3872"}
{"text":"data_3873"}
{"text":"data_3874"}
{"text":"data_3875"}
{"text":"data_3876"}
{"text":"data_3877"}
{"text":"data_3878"}
{"text":"data_3879"}
{"text":"data_3880"}
{"text":"data_3881"}
{"text":"data_3882"}
{"text":"data_3883"}
{"text":"data_3884"}
{"text":"data_3885"}
{"text":"data_3886"}
{"text":"data_3887"}
{"text":"data_3888"}
{"text":"data_3889"}
{"text":"data_3890"}
{"text":"data_3891"}
{"text":"data_3892"}
{"text":"data_3893"}
{"text":"data_3894"}
{"text":"data_3895"}
{"text":"data_3896"}
{"text":"data_3897"}
{"text":"data_3898"}
{"text":"data_3899"}
{"text":"data_3900"}
{"text":"data_3901"}
{"text":"data_3902"}
{"text":"data_3903"}
{"text":"data_3904"}
{"text":"data_3905"}
{"text":"data_3906"}
{"text":"data_3907"}
{"text":"data_3908"}
{"text":"data_3909"}
{"text":"data_3910"}
{"text":"data_3911"}
{"text":"data_3912"}
{"text":"data_3913"}
{"text":"data_3914"}
{"text":"data_3915"}
{"text":"data_3916"}
{"text":"data_3917"}
{"text":"data_3918"}
{"text":"data_3919"}
{"text":"data_3920"}
{"text":"data_3921"}
{"text":"data_3922"}
{"text":"data_3923"}
{"text":"data_3924"}
{"text":"data_3925"}
{"text":"data_3926"}
{"text":"data_3927"}
{"text":"data_3928"}
{"text":"data_3929"}
{"text":"data_3930"}
{"text":"data_3931"}
{"text":"data_3932"}
{"text":"data_3933"}
{"text":"data_3934"}
{"text":"data_3935"}
{"text":"data_3936"}
{"text":"data_3937"}
{"text":"data_3938"}
{"text":"data_3939"}
{"text":"data_3940"}
{"text":"data_3941"}
{"text":"data_3942"}
{"text":"data_3943"}
{"text":"data_3944"}
{"text":"data_3945"}
{"text":"data_3946"}
{"text":"data_3947"}
{"text":"data_3948"}
{"text":"data_3949"}
{"text":"data_3950"}
{"text":"data_3951"}
{"text":"data_3952"}
{"text":"data_3953"}
{"text":"data_3954"}
{"text":"data_3955"}
{"text":"data_3956"}
{"text":"data_3957"}
{"text":"data_3958"}
{"text":"data_3959"}
{"text":"data_3960"}
{"text":"data_3961"}
{"text":"data_3962"}
{"text":"data_3963"}
{"text":"data_3964"}
{"text":"data_3965"}
{"text":"data_3966"}
{"text":"data_3967"}
{"text":"data_3968"}
{"text":"data_3969"}
{"text":"data_3970"}
{"text":"data_3971"}
{"text":"data_3972"}
{"text":"data_3973"}
{"text":"data_3974"}
{"text":"data_3975"}
{"text":"data_3976"}
{"text":"data_3977"}
{"text":"data_3978"}
{"text":"data_3979"}
{"text":"data_3980"}
{"text":"data_3981"}
{"text":"data_3982"}
{"text":"data_3983"}
{"text":"data_3984"}
{"text":"data_3985"}
{"text":"data_3986"}
{"text":"data_3987"}
{"text":"data_3988"}
{"text":"data_3989"}
{"text":"data_3990"}
{"text":"data_3991"}
{"text":"data_3992"}
{"text":"data_3993"}
{"text":"data_3994"}
{"text":"data_3995"}
{"text":"data_3996"}
{"text":"data_3997"}
{"text":"data_3998"}
{"text":"data_3999"}
{"text":"data_4000"}
{"text":"data_4001"}
{"text":"data_4002"}
{"text":"data_4003"}
{"text":"data_4004"}
{"text":"data_4005"}
{"text":"data_4006"}
{"text":"data_4007"}
{"text":"data_4008"}
{"text":"data_4009"}
{"text":"data_4010"}
{"text":"data_4011"}
{"text":"data_4012"}
{"text":"data_4013"}
{"text":"data_4014"}
{"text":"data_4015"}
{"text":"data_4016"}
{"text":"data_4017"}
{"text":"data_4018"}
{"text":"data_4019"}
{"text":"data_4020"}
{"text":"data_4021"}
{"text":"data_4022"}
{"text":"data_4023"}
{"text":"data_4024"}
{"text":"data_4025"}
{"text":"data_4026"}
{"text":"data_4027"}
{"text":"data_4028"}
{"text":"data_4029"}
{"text":"data_4030"}
{"text":"data_4031"}
{"text":"data_4032"}
{"text":"data_4033"}
{"text":"data_4034"}
{"text":"data_4035"}
{"text":"data_4036"}
{"text":"data_4037"}
{"text":"data_4038"}
{"text":"data_4039"}
{"text":"data_4040"}
{"text":"data_4041"}
{"text":"data_4042"}
{"text":"data_4043"}
{"text":"data_4044"}
{"text":"data_4045"}
{"text":"data_4046"}
{"text":"data_4047"}
{"text":"data_4048"}
{"text":"data_4049"}
{"text":"data_4050"}
{"text":"data_4051"}
{"text":"data_4052"}
{"text":"data_4053"}
{"text":"data_4054"}
{"text":"data_4055"}
{"text":"data_4056"}
{"text":"data_4057"}
{"text":"data_4058"}
{"text":"data_4059"}
{"text":"data_4060"}
{"text":"data_4061"}
{"text":"data_4062"}
{"text":"data_4063"}
{"text":"data_4064"}
{"text":"data_4065"}
{"text":"data_4066"}
{"text":"data_4067"}
{"text":"data_4068"}
{"text":"data_4069"}
{"text":"data_4070"}
{"text":"data_4071"}
{"text":"data_4072"}
{"text":"data_4073"}
{"text":"data_4074"}
{"text":"data_4075"}
{"text":"data_4076"}
{"text":"data_4077"}
{"text":"data_4078"}
{"text":"data_4079"}
{"text":"data_4080"}
{"text":"data_4081"}
{"text":"data_4082"}
{"text":"data_4083"}
{"text":"data_4084"}
{"text":"data_4085"}
{"text":"data_4086"}
{"text":"data_4087"}
{"text":"data_4088"}
{"text":"data_4089"}
{"text":"data_4090"}
{"text":"data_4091"}
{"text":"data_4092"}
{"text":"data_4093"}
{"text":"data_4094"}
{"text":"data_4095"}
{"text":"data_4096"}
{"text":"data_4097"}
{"text":"data_4098"}
{"text":"data_4099"}
{"text":"data_4100"}
{"text":"data_4101"}
{"text":"data_4102"}
{"text":"data_4103"}
{"text":"data_4104"}
{"text":"data_4105"}
{"text":"data_4106"}
{"text":"data_4107"}
{"text":"data_4108"}
{"text":"data_4109"}
{"text":"data_4110"}
{"text":"data_4111"}
{"text":"data_4112"}
{"text":"data_4113"}
{"text":"data_4114"}
{"text":"data_4115"}
{"text":"data_4116"}
{"text":"data_4117"}
{"text":"data_4118"}
{"text":"data_4119"}
{"text":"data_4120"}
{"text":"data_4121"}
{"text":"data_4122"}
{"text":"data_4123"}
{"text":"data_4124"}
{"text":"data_4125"}
{"text":"data_4126"}
{"text":"data_4127"}
{"text":"data_4128"}
{"text":"data_4129"}
{"text":"data_4130"}
{"text":"data_4131"}
{"text":"data_4132"}
{"text":"data_4133"}
{"text":"data_4134"}
{"text":"data_4135"}
{"text":"data_4136"}
{"text":"data_4137"}
{"text":"data_4138"}
{"text":"data_4139"}
{"text":"data_4140"}
{"text":"data_4141"}
{"text":"data_4142"}
{"text":"data_4143"}
{"text":"data_4144"}
{"text":"data_4145"}
{"text":"data_4146"}
{"text":"data_4147"}
{"text":"data_4148"}
{"text":"data_4149"}
{"text":"data_4150"}
{"text":"data_4151"}
{"text":"data_4152"}
{"text":"data_4153"}
{"text":"data_4154"}
{"text":"data_4155"}
{"text":"data_4156"}
{"text":"data_4157"}
{"text":"data_4158"}
{"text":"data_4159"}
{"text":"data_4160"}
{"text":"data_4161"}
{"text":"data_4162"}
{"text":"data_4163"}
{"text":"data_4164"}
{"text":"data_4165"}
{"text":"data_4166"}
{"text":"data_4167"}
{"text":"data_4168"}
{"text":"data_4169"}
{"text":"data_4170"}
{"text":"data_4171"}
{"text":"data_4172"}
{"text":"data_4173"}
{"text":"data_4174"}
{"text":"data_4175"}
{"text":"data_4176"}
{"text":"data_4177"}
{"text":"data_4178"}
{"text":"data_4179"}
{"text":"data_4180"}
{"text":"data_4181"}
{"text":"data_4182"}
{"text":"data_4183"}
{"text":"data_4184"}
{"text":"data_4185"}
{"text":"data_4186"}
{"text":"data_4187"}
{"text":"data_4188"}
{"text":"data_4189"}
{"text":"data_4190"}
{"text":"data_4191"}
{"text":"data_4192"}
{"text":"data_4193"}
{"text":"data_4194"}
{"text":"data_4195"}
{"text":"data_4196"}
{"text":"data_4197"}
{"text":"data_4198"}
{"text":"data_4199"}
{"text":"data_4200"}
{"text":"data_4201"}
{"text":"data_4202"}
{"text":"data_4203"}
{"text":"data_4204"}
{"text":"data_4205"}
{"text":"data_4206"}
{"text":"data_4207"}
{"text":"data_4208"}
{"text":"data_4209"}
{"text":"data_4210"}
{"text":"data_4211"}
{"text":"data_4212"}
{"text":"data_4213"}
{"text":"data_4214"}
{"text":"data_4215"}
{"text":"data_4216"}
{"text":"data_4217"}
{"text":"data_4218"}
{"text":"data_4219"}
{"text":"data_4220"}
{"text":"data_4221"}
{"text":"data_4222"}
{"text":"data_4223"}
{"text":"data_4224"}
{"text":"data_4225"}
{"text":"data_4226"}
{"text":"data_4227"}
{"text":"data_4228"}
{"text":"data_4229"}
{"text":"data_4230"}
{"text":"data_4231"}
{"text":"data_4232"}
{"text":"data_4233"}
{"text":"data_4234"}
{"text":"data_4235"}
{"text":"data_4236"}
{"text":"data_4237"}
{"text":"data_4238"}
{"text":"data_4239"}
{"text":"data_4240"}
{"text":"data_4241"}
{"text":"data_4242"}
{"text":"data_4243"}
{"text":"data_4244"}
{"text":"data_4245"}
{"text":"data_4246"}
{"text":"data_4247"}
{"text":"data_4248"}
{"text":"data_4249"}
{"text":"data_4250"}
{"text":"data_4251"}
{"text":"data_4252"}
{"text":"data_4253"}
{"text":"data_4254"}
{"text":"data_4255"}
{"text":"data_4256"}
{"text":"data_4257"}
{"text":"data_4258"}
{"text":"data_4259"}
{"text":"data_4260"}
{"text":"data_4261"}
{"text":"data_4262"}
{"text":"data_4263"}
{"text":"data_4264"}
{"text":"data_4265"}
{"text":"data_4266"}
{"text":"data_4267"}
{"text":"data_4268"}
{"text":"data_4269"}
{"text":"data_4270"}
{"text":"data_4271"}
{"text":"data_4272"}
{"text":"data_4273"}
{"text":"data_4274"}
{"text":"data_4275"}
{"text":"data_4276"}
{"text":"data_4277"}
{"text":"data_4278"}
{"text":"data_4279"}
{"text":"data_4280"}
{"text":"data_4281"}
{"text":"data_4282"}
{"text":"data_4283"}
{"text":"data_4284"}
{"text":"data_4285"}
{"text":"data_4286"}
{"text":"data_4287"}
{"text":"data_4288"}
{"text":"data_4289"}
{"text":"data_4290"}
{"text":"data_4291"}
{"text":"data_4292"}
{"text":"data_4293"}
{"text":"data_4294"}
{"text":"data_4295"}
{"text":"data_4296"}
{"text":"data_4297"}
{"text":"data_4298"}
{"text":"data_4299"}
{"text":"data_4300"}
{"text":"data_4301"}
{"text":"data_4302"}
{"text":"data_4303"}
{"text":"data_4304"}
{"text":"data_4305"}
{"text":"data_4306"}
{"text":"data_4307"}
{"text":"data_4308"}
{"text":"data_4309"}
{"text":"data_4310"}
{"text":"data_4311"}
{"text":"data_4312"}
{"text":"data_4313"}
{"text":"data_4314"}
{"text":"data_4315"}
{"text":"data_4316"}
{"text":"data_4317"}
{"text":"data_4318"}
{"text":"data_4319"}
{"text":"data_4320"}
{"text":"data_4321"}
{"text":"data_4322"}
{"text":"data_4323"}
{"text":"data_4324"}
{"text":"data_4325"}
{"text":"data_4326"}
{"text":"data_4327"}
{"text":"data_4328"}
{"text":"data_4329"}
{"text":"data_4330"}
{"text":"data_4331"}
{"text":"data_4332"}
{"text":"data_4333"}
{"text":"data_4334"}
{"text":"data_4335"}
{"text":"data_4336"}
{"text":"data_4337"}
{"text":"data_4338"}
{"text":"data_4339"}
{"text":"data_4340"}
{"text":"data_4341"}
{"text":"data_4342"}
{"text":"data_4343"}
{"text":"data_4344"}
{"text":"data_4345"}
{"text":"data_4346"}
{"text":"data_4347"}
{"text":"data_4348"}
{"text":"data_4349"}
{"text":"data_4350"}
{"text":"data_4351"}
{"text":"data_4352"}
{"text":"data_4353"}
{"text":"data_4354"}
{"text":"data_4355"}
{"text":"data_4356"}
{"text":"data_4357"}
{"text":"data_4358"}
{"text":"data_4359"}
{"text":"data_4360"}
{"text":"data_4361"}
{"text":"data_4362"}
{"text":"data_4363"}
{"text":"data_4364"}
{"text":"data_4365"}
{"text":"data_4366"}
{"text":"data_4367"}
{"text":"data_4368"}
{"text":"data_4369"}
{"text":"data_4370"}
{"text":"data_4371"}
{"text":"data_4372"}
{"text":"data_4373"}
{"text":"data_4374"}
{"text":"data_4375"}
{"text":"data_4376"}
{"text":"data_4377"}
{"text":"data_4378"}
{"text":"data_4379"}
{"text":"data_4380"}
{"text":"data_4381"}
{"text":"data_4382"}
{"text":"data_4383"}
{"text":"data_4384"}
{"text":"data_4385"}
{"text":"data_4386"}
{"text":"data_4387"}
{"text":"data_4388"}
{"text":"data_4389"}
{"text":"data_4390"}
{"text":"data_4391"}
{"text":"data_4392"}
{"text":"data_4393"}
{"text":"data_4394"}
{"text":"data_4395"}
{"text":"data_4396"}
{"text":"data_4397"}
{"text":"data_4398"}
{"text":"data_4399"}
{"text":"data_4400"}
{"text":"data_4401"}
{"text":"data_4402"}
{"text":"data_4403"}
{"text":"data_4404"}
{"text":"data_4405"}
{"text":"data_4406"}
{"text":"data_4407"}
{"text":"data_4408"}
{"text":"data_4409"}
{"text":"data_4410"}
{"text":"data_4411"}
{"text":"data_4412"}
{"text":"data_4413"}
{"text":"data_4414"}
{"text":"data_4415"}
{"text":"data_4416"}
{"text":"data_4417"}
{"text":"data_4418"}
{"text":"data_4419"}
{"text":"data_4420"}
{"text":"data_4421"}
{"text":"data_4422"}
{"text":"data_4423"}
{"text":"data_4424"}
{"text":"data_4425"}
{"text":"data_4426"}
{"text":"data_4427"}
{"text":"data_4428"}
{"text":"data_4429"}
{"text":"data_4430"}
{"text":"data_4431"}
{"text":"data_4432"}
{"text":"data_4433"}
{"text":"data_4434"}
{"text":"data_4435"}
{"text":"data_4436"}
{"text":"data_4437"}
{"text":"data_4438"}
{"text":"data_4439"}
{"text":"data_4440"}
{"text":"data_4441"}
{"text":"data_4442"}
{"text":"data_4443"}
{"text":"data_4444"}
{"text":"data_4445"}
{"text":"data_4446"}
{"text":"data_4447"}
{"text":"data_4448"}
{"text":"data_4449"}
{"text":"data_4450"}
{"text":"data_4451"}
{"text":"data_4452"}
{"text":"data_4453"}
{"text":"data_4454"}
{"text":"data_4455"}
{"text":"data_4456"}
{"text":"data_4457"}
{"text":"data_4458"}
{"text":"data_4459"}
{"text":"data_4460"}
{"text":"data_4461"}
{"text":"data_4462"}
{"text":"data_4463"}
{"text":"data_4464"}
{"text":"data_4465"}
{"text":"data_4466"}
{"text":"data_4467"}
{"text":"data_4468"}
{"text":"data_4469"}
{"text":"data_4470"}
{"text":"data_4471"}
{"text":"data_4472"}
{"text":"data_4473"}
{"text":"data_4474"}
{"text":"data_4475"}
{"text":"data_4476"}
{"text":"data_4477"}
{"text":"data_4478"}
{"text":"data_4479"}
{"text":"data_4480"}
{"text":"data_4481"}
{"text":"data_4482"}
{"text":"data_4483"}
{"text":"data_4484"}
{"text":"data_4485"}
{"text":"data_4486"}
{"text":"data_4487"}
{"text":"data_4488"}
{"text":"data_4489"}
{"text":"data_4490"}
{"text":"data_4491"}
{"text":"data_4492"}
{"text":"data_4493"}
{"text":"data_4494"}
{"text":"data_4495"}
{"text":"data_4496"}
{"text":"data_4497"}
{"text":"data_4498"}
{"text":"data_4499"}
{"text":"data_4500"}
{"text":"data_4501"}
{"text":"data_4502"}
{"text":"data_4503"}
{"text":"data_4504"}
{"text":"data_4505"}
{"text":"data_4506"}
{"text":"data_4507"}
{"text":"data_4508"}
{"text":"data_4509"}
{"text":"data_4510"}
{"text":"data_4511"}
{"text":"data_4512"}
{"text":"data_4513"}
{"text":"data_4514"}
{"text":"data_4515"}
{"text":"data_4516"}
{"text":"data_4517"}
{"text":"data_4518"}
{"text":"data_4519"}
{"text":"data_4520"}
{"text":"data_4521"}
{"text":"data_4522"}
{"text":"data_4523"}
{"text":"data_4524"}
{"text":"data_4525"}
{"text":"data_4526"}
{"text":"data_4527"}
{"text":"data_4528"}
{"text":"data_4529"}
{"text":"data_4530"}
{"text":"data_4531"}
{"text":"data_4532"}
{"text":"data_4533"}
{"text":"data_4534"}
{"text":"data_4535"}
{"text":"data_4536"}
{"text":"data_4537"}
{"text":"data_4538"}
{"text":"data_4539"}
{"text":"data_4540"}
{"text":"data_4541"}
{"text":"data_4542"}
{"text":"data_4543"}
{"text":"data_4544"}
{"text":"data_4545"}
{"text":"data_4546"}
{"text":"data_4547"}
{"text":"data_4548"}
{"text":"data_4549"}
{"text":"data_4550"}
{"text":"data_4551"}
{"text":"data_4552"}
{"text":"data_4553"}
{"text":"data_4554"}
{"text":"data_4555"}
{"text":"data_4556"}
{"text":"data_4557"}
{"text":"data_4558"}
{"text":"data_4559"}
{"text":"data_4560"}
{"text":"data_4561"}
{"text":"data_4562"}
{"text":"data_4563"}
{"text":"data_4564"}
{"text":"data_4565"}
{"text":"data_4566"}
{"text":"data_4567"}
{"text":"data_4568"}
{"text":"data_4569"}
{"text":"data_4570"}
{"text":"data_4571"}
{"text":"data_4572"}
{"text":"data_4573"}
{"text":"data_4574"}
{"text":"data_4575"}
{"text":"data_4576"}
{"text":"data_4577"}
{"text":"data_4578"}
{"text":"data_4579"}
{"text":"data_4580"}
{"text":"data_4581"}
{"text":"data_4582"}
{"text":"data_4583"}
{"text":"data_4584"}
{"text":"data_4585"}
{"text":"data_4586"}
{"text":"data_4587"}
{"text":"data_4588"}
{"text":"data_4589"}
{"text":"data_4590"}
{"text":"data_4591"}
{"text":"data_4592"}
{"text":"data_4593"}
{"text":"data_4594"}
{"text":"data_4595"}
{"text":"data_4596"}
{"text":"data_4597"}
{"text":"data_4598"}
{"text":"data_4599"}
{"text":"data_4600"}
{"text":"data_4601"}
{"text":"data_4602"}
{"text":"data_4603"}
{"text":"data_4604"}
{"text":"data_4605"}
{"text":"data_4606"}
{"text":"data_4607"}
{"text":"data_4608"}
{"text":"data_4609"}
{"text":"data_4610"}
{"text":"data_4611"}
{"text":"data_4612"}
{"text":"data_4613"}
{"text":"data_4614"}
{"text":"data_4615"}
{"text":"data_4616"}
{"text":"data_4617"}
{"text":"data_4618"}
{"text":"data_4619"}
{"text":"data_4620"}
{"text":"data_4621"}
{"text":"data_4622"}
{"text":"data_4623"}
{"text":"data_4624"}
{"text":"data_4625"}
{"text":"data_4626"}
{"text":"data_4627"}
{"text":"data_4628"}
{"text":"data_4629"}
{"text":"data_4630"}
{"text":"data_4631"}
{"text":"data_4632"}
{"text":"data_4633"}
{"text":"data_4634"}
{"text":"data_4635"}
{"text":"data_4636"}
{"text":"data_4637"}
{"text":"data_4638"}
{"text":"data_4639"}
{"text":"data_4640"}
{"text":"data_4641"}
{"text":"data_4642"}
{"text":"data_4643"}
{"text":"data_4644"}
{"text":"data_4645"}
{"text":"data_4646"}
{"text":"data_4647"}
{"text":"data_4648"}
{"text":"data_4649"}
{"text":"data_4650"}
{"text":"data_4651"}
{"text":"data_4652"}
{"text":"data_4653"}
{"text":"data_4654"}
{"text":"data_4655"}
{"text":"data_4656"}
{"text":"data_4657"}
{"text":"data_4658"}
{"text":"data_4659"}
{"text":"data_4660"}
{"text":"data_4661"}
{"text":"data_4662"}
{"text":"data_4663"}
{"text":"data_4664"}
{"text":"data_4665"}
{"text":"data_4666"}
{"text":"data_4667"}
{"text":"data_4668"}
{"text":"data_4669"}
{"text":"data_4670"}
{"text":"data_4671"}
{"text":"data_4672"}
{"text":"data_4673"}
{"text":"data_4674"}
{"text":"data_4675"}
{"text":"data_4676"}
{"text":"data_4677"}
{"text":"data_4678"}
{"text":"data_4679"}
{"text":"data_4680"}
{"text":"data_4681"}
{"text":"data_4682"}
{"text":"data_4683"}
{"text":"data_4684"}
{"text":"data_4685"}
{"text":"data_4686"}
{"text":"data_4687"}
{"text":"data_4688"}
{"text":"data_4689"}
{"text":"data_4690"}
{"text":"data_4691"}
{"text":"data_4692"}
{"text":"data_4693"}
{"text":"data_4694"}
{"text":"data_4695"}
{"text":"data_4696"}
{"text":"data_4697"}
{"text":"data_4698"}
{"text":"data_4699"}
{"text":"data_4700"}
{"text":"data_4701"}
{"text":"data_4702"}
{"text":"data_4703"}
{"text":"data_4704"}
{"text":"data_4705"}
{"text":"data_4706"}
{"text":"data_4707"}
{"text":"data_4708"}
{"text":"data_4709"}
{"text":"data_4710"}
{"text":"data_4711"}
{"text":"data_4712"}
{"text":"data_4713"}
{"text":"data_4714"}
{"text":"data_4715"}
{"text":"data_4716"}
{"text":"data_4717"}
{"text":"data_4718"}
{"text":"data_4719"}
{"text":"data_4720"}
{"text":"data_4721"}
{"text":"data_4722"}
{"text":"data_4723"}
{"text":"data_4724"}
{"text":"data_4725"}
{"text":"data_4726"}
{"text":"data_4727"}
{"text":"data_4728"}
{"text":"data_4729"}
{"text":"data_4730"}
{"text":"data_4731"}
{"text":"data_4732"}
{"text":"data_4733"}
{"text":"data_4734"}
{"text":"data_4735"}
{"text":"data_4736"}
{"text":"data_4737"}
{"text":"data_4738"}
{"text":"data_4739"}
{"text":"data_4740"}
{"text":"data_4741"}
{"text":"data_4742"}
{"text":"data_4743"}
{"text":"data_4744"}
{"text":"data_4745"}
{"text":"data_4746"}
{"text":"data_4747"}
{"text":"data_4748"}
{"text":"data_4749"}
{"text":"data_4750"}
{"text":"data_4751"}
{"text":"data_4752"}
{"text":"data_4753"}
{"text":"data_4754"}
{"text":"data_4755"}
{"text":"data_4756"}
{"text":"data_4757"}
{"text":"data_4758"}
{"text":"data_4759"}
{"text":"data_4760"}
{"text":"data_4761"}
{"text":"data_4762"}
{"text":"data_4763"}
{"text":"data_4764"}
{"text":"data_4765"}
{"text":"data_4766"}
{"text":"data_4767"}
{"text":"data_4768"}
{"text":"data_4769"}
{"text":"data_4770"}
{"text":"data_4771"}
{"text":"data_4772"}
{"text":"data_4773"}
{"text":"data_4774"}
{"text":"data_4775"}
{"text":"data_4776"}
{"text":"data_4777"}
{"text":"data_4778"}
{"text":"data_4779"}
{"text":"data_4780"}
{"text":"data_4781"}
{"text":"data_4782"}
{"text":"data_4783"}
{"text":"data_4784"}
{"text":"data_4785"}
{"text":"data_4786"}
{"text":"data_4787"}
{"text":"data_4788"}
{"text":"data_4789"}
{"text":"data_4790"}
{"text":"data_4791"}
{"text":"data_4792"}
{"text":"data_4793"}
{"text":"data_4794"}
{"text":"data_4795"}
{"text":"data_4796"}
{"text":"data_4797"}
{"text":"data_4798"}
{"text":"data_4799"}
{"text":"data_4800"}
{"text":"data_4801"}
{"text":"data_4802"}
{"text":"data_4803"}
{"text":"data_4804"}
{"text":"data_4805"}
{"text":"data_4806"}
{"text":"data_4807"}
{"text":"data_4808"}
{"text":"data_4809"}
{"text":"data_4810"}
{"text":"data_4811"}
{"text":"data_4812"}
{"text":"data_4813"}
{"text":"data_4814"}
{"text":"data_4815"}
{"text":"data_4816"}
{"text":"data_4817"}
{"text":"data_4818"}
{"text":"data_4819"}
{"text":"data_4820"}
{"text":"data_4821"}
{"text":"data_4822"}
{"text":"data_4823"}
{"text":"data_4824"}
{"text":"data_4825"}
{"text":"data_4826"}
{"text":"data_4827"}
{"text":"data_4828"}
{"text":"data_4829"}
{"text":"data_4830"}
{"text":"data_4831"}
{"text":"data_4832"}
{"text":"data_4833"}
{"text":"data_4834"}
{"text":"data_4835"}
{"text":"data_4836"}
{"text":"data_4837"}
{"text":"data_4838"}
{"text":"data_4839"}
{"text":"data_4840"}
{"text":"data_4841"}
{"text":"data_4842"}
{"text":"data_4843"}
{"text":"data_4844"}
{"text":"data_4845"}
{"text":"data_4846"}
{"text":"data_4847"}
{"text":"data_4848"}
{"text":"data_4849"}
{"text":"data_4850"}
{"text":"data_4851"}
{"text":"data_4852"}
{"text":"data_4853"}
{"text":"data_4854"}
{"text":"data_4855"}
{"text":"data_4856"}
{"text":"data_4857"}
{"text":"data_4858"}
{"text":"data_4859"}
{"text":"data_4860"}
{"text":"data_4861"}
{"text":"data_4862"}
{"text":"data_4863"}
{"text":"data_4864"}
{"text":"data_4865"}
{"text":"data_4866"}
{"text":"data_4867"}
{"text":"data_4868"}
{"text":"data_4869"}
{"text":"data_4870"}
{"text":"data_4871"}
{"text":"data_4872"}
{"text":"data_4873"}
{"text":"data_4874"}
{"text":"data_4875"}
{"text":"data_4876"}
{"text":"data_4877"}
{"text":"data_4878"}
{"text":"data_4879"}
{"text":"data_4880"}
{"text":"data_4881"}
{"text":"data_4882"}
{"text":"data_4883"}
{"text":"data_4884"}
{"text":"data_4885"}
{"text":"data_4886"}
{"text":"data_4887"}
{"text":"data_4888"}
{"text":"data_4889"}
{"text":"data_4890"}
{"text":"data_4891"}
{"text":"data_4892"}
{"text":"data_4893"}
{"text":"data_4894"}
{"text":"data_4895"}
{"text":"data_4896"}
{"text":"data_4897"}
{"text":"data_4898"}
{"text":"data_4899"}
{"text":"data_4900"}
{"text":"data_4901"}
{"text":"data_4902"}
{"text":"data_4903"}
{"text":"data_4904"}
{"text":"data_4905"}
{"text":"data_4906"}
{"text":"data_4907"}
{"text":"data_4908"}
{"text":"data_4909"}
{"text":"data_4910"}
{"text":"data_4911"}
{"text":"data_4912"}
{"text":"data_4913"}
{"text":"data_4914"}
{"text":"data_4915"}
{"text":"data_4916"}
{"text":"data_4917"}
{"text":"data_4918"}
{"text":"data_4919"}
{"text":"data_4920"}
{"text":"data_4921"}
{"text":"data_4922"}
{"text":"data_4923"}
{"text":"data_4924"}
{"text":"data_4925"}
{"text":"data_4926"}
{"text":"data_4927"}
{"text":"data_4928"}
{"text":"data_4929"}
{"text":"data_4930"}
{"text":"data_4931"}
{"text":"data_4932"}
{"text":"data_4933"}
{"text":"data_4934"}
{"text":"data_4935"}
{"text":"data_4936"}
{"text":"data_4937"}
{"text":"data_4938"}
{"text":"data_4939"}
{"text":"data_4940"}
{"text":"data_4941"}
{"text":"data_4942"}
{"text":"data_4943"}
{"text":"data_4944"}
{"text":"data_4945"}
{"text":"data_4946"}
{"text":"data_4947"}
{"text":"data_4948"}
{"text":"data_4949"}
{"text":"data_4950"}
{"text":"data_4951"}
{"text":"data_4952"}
{"text":"data_4953"}
{"text":"data_4954"}
{"text":"data_4955"}
{"text":"data_4956"}
{"text":"data_4957"}
{"text":"data_4958"}
{"text":"data_4959"}
{"text":"data_4960"}
{"text":"data_4961"}
{"text":"data_4962"}
{"text":"data_4963"}
{"text":"data_4964"}
{"text":"data_4965"}
{"text":"data_4966"}
{"text":"data_4967"}
{"text":"data_4968"}
{"text":"data_4969"}
{"text":"data_4970"}
{"text":"data_4971"}
{"text":"data_4972"}
{"text":"data_4973"}
{"text":"data_4974"}
{"text":"data_4975"}
{"text":"data_4976"}
{"text":"data_4977"}
{"text":"data_4978"}
{"text":"data_4979"}
{"text":"data_4980"}
{"text":"data_4981"}
{"text":"data_4982"}
{"text":"data_4983"}
{"text":"data_4984"}
{"text":"data_4985"}
{"text":"data_4986"}
{"text":"data_4987"}
{"text":"data_4988"}
{"text":"data_4989"}
{"text":"data_4990"}
{"text":"data_4991"}
{"text":"data_4992"}
{"text":"data_4993"}
{"text":"data_4994"}
{"text":"data_4995"}
{"text":"data_4996"}
{"text":"data_4997"}
{"text":"data_4998"}
{"text":"data_4999"}
| promptflow/src/promptflow/tests/test_configs/datas/load_data_cases/10k/5k.1.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/datas/load_data_cases/10k/5k.1.jsonl",
"repo_id": "promptflow",
"token_count": 44913
} | 62 |
[
{
"expected_node_count": 3,
"expected_outputs": {
"output": "Node A not executed. Node B not executed."
},
"expected_bypassed_nodes": [
"nodeA",
"nodeB"
]
}
] | promptflow/src/promptflow/tests/test_configs/flows/activate_condition_always_met/expected_result.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/activate_condition_always_met/expected_result.json",
"repo_id": "promptflow",
"token_count": 140
} | 63 |
from promptflow import tool


@tool
def line_process(groundtruth: str, prediction: str):
    # Concatenate the ground truth and the prediction into a single string.
    processed_result = groundtruth + prediction
    return processed_result
| promptflow/src/promptflow/tests/test_configs/flows/aggregation_node_failed/line_process.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/aggregation_node_failed/line_process.py",
"repo_id": "promptflow",
"token_count": 45
} | 64 |
# Chat with Calorie Assistant
This sample demonstrates how the Prompt flow assistant tool facilitates calorie calculations by considering your location, the duration of your exercise, and the type of sport. Currently, two sports are supported: jogging and swimming.
Tools used in this flow:
- `add_message_and_run` tool, an assistant tool provisioned with the following inner functions:
  - `get_current_location()`: get the current city
  - `get_temperature(location)`: get the temperature of the city
  - `get_calorie_by_jogging(duration, temperature)`: calculate calories for a jogging exercise
  - `get_calorie_by_swimming(duration, temperature)`: calculate calories for a swimming exercise
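A minimal sketch of what these inner functions might look like. The function names follow the list above, but the bodies — the stub city, stub temperature, and the calorie formula — are illustrative assumptions, not the sample's actual implementation:

```python
def get_current_location() -> str:
    # Stub: the real flow would detect the user's city.
    return "Seattle"

def get_temperature(location: str) -> float:
    # Stub: a real implementation would query a weather API.
    return 10.0

def get_calorie_by_jogging(duration: float, temperature: float) -> float:
    # Assumed formula: ~7 kcal per minute of jogging (duration in hours),
    # with a 10% bump in cold weather. Purely illustrative.
    factor = 1.1 if temperature < 15 else 1.0
    return duration * 60 * 7 * factor

# The assistant chains the calls: location -> temperature -> calories.
city = get_current_location()
calories = get_calorie_by_jogging(duration=0.5, temperature=get_temperature(city))
print(round(calories))
```

Note the dependency order: `get_temperature` needs the output of `get_current_location`, and the calorie functions need the temperature — this is the inter-tool dependency the flow has to respect.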
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```sh
pip install -r requirements.txt
```
## What you will learn
In this flow, you will understand how assistant tools within PromptFlow are triggered by user prompts. The assistant tool decides which internal functions or tools to invoke based on the input provided. Your responsibility involves implementing each of these tools and registering them in the `assistant_definition`. Additionally, be aware that the tools may have dependencies on each other, affecting the order and manner of their invocation.
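One way to picture the dispatch described above: the assistant emits a tool call by name, and the runtime resolves it against the registered functions, with the output of one tool feeding the next. A minimal sketch (the `TOOLS` table and `invoke` helper are hypothetical illustrations, not promptflow's actual wiring):

```python
# Hypothetical name-to-callable registry mimicking assistant tool dispatch.
TOOLS = {
    "get_current_location": lambda: "Seattle",
    "get_temperature": lambda location: 18.0 if location == "Seattle" else 20.0,
}


def invoke(name: str, **kwargs):
    # Resolve the tool the assistant asked for and run it.
    return TOOLS[name](**kwargs)


# Dependent tools compose: the output of one call feeds the next.
city = invoke("get_current_location")
temp = invoke("get_temperature", location=city)
```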
## Getting started
### 1. Create assistant connection (openai)
Go to the "Prompt flow" "Connections" tab. Click the "Create" button, select one of the connection types supported by the assistant tool, and fill in the configurations.
Currently, only the "OpenAI" connection type is supported for the assistant tool. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/openai.yml --set api_key=<your_api_key>
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2. Create or get assistant/thread
Navigate to the OpenAI Assistant page and create an assistant if you haven't already. Once created, click on the 'Test' button to enter the assistant's playground. Make sure to note down the assistant_id.
**[Optional]** Start a chat session to create thread automatically. Keep track of the thread_id.
### 3. Run the flow
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow . --interactive --multi-modal --user-agent "prompt-flow-extension/1.8.0 (win32; x64) VSCode/1.85.1"
```
| promptflow/src/promptflow/tests/test_configs/flows/chat-with-assistant-no-file/README.md/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/chat-with-assistant-no-file/README.md",
"repo_id": "promptflow",
"token_count": 663
} | 65 |
[
{
"expected_node_count": 9,
"expected_outputs":{
"investigation_method": {
"first": "Skip job info extractor",
"second": "Execute incident info extractor"
}
},
"expected_bypassed_nodes":["job_info_extractor", "icm_retriever"]
},
{
"expected_node_count": 9,
"expected_outputs":{
"investigation_method": {
"first": "Execute job info extractor",
"second": "Skip incident info extractor"
}
},
"expected_bypassed_nodes":["incident_info_extractor", "icm_retriever", "kql_tsg_retriever", "tsg_retriever", "investigation_steps", "retriever_summary"]
},
{
"expected_node_count": 9,
"expected_outputs":{
"investigation_method": {
"first": "Skip job info extractor",
"second": "Execute incident info extractor"
}
},
"expected_bypassed_nodes":["job_info_extractor", "kql_tsg_retriever", "tsg_retriever"]
}
] | promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_activate/expected_result.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_activate/expected_result.json",
"repo_id": "promptflow",
"token_count": 554
} | 66 |
[
{
"expected_node_count": 3,
"expected_outputs":{
"output":{
"double": 2,
"square": ""
}
},
"expected_bypassed_nodes":["square"]
},
{
"expected_node_count": 3,
"expected_outputs":{
"output":{
"double": 4,
"square": ""
}
},
"expected_bypassed_nodes":["square"]
},
{
"expected_node_count": 3,
"expected_outputs":{
"output":{
"double": null,
"square": 9
}
},
"expected_bypassed_nodes":["double"]
},
{
"expected_node_count": 3,
"expected_outputs":{
"output":{
"double": null,
"square": 16
}
},
"expected_bypassed_nodes":["double"]
}
] | promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_aggregate_bypassed/expected_result.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_aggregate_bypassed/expected_result.json",
"repo_id": "promptflow",
"token_count": 567
} | 67 |
from promptflow import tool
from promptflow.connections import CustomConnection
@tool
def get_val(key, conn: CustomConnection):
    # log the input and validate that it was parsed into a dict
print(key)
if not isinstance(key, dict):
raise TypeError(f"key must be a dict, got {type(key)}")
return {"value": f"{key}: {type(key)}"}
| promptflow/src/promptflow/tests/test_configs/flows/flow_with_dict_input_with_variant/print_val.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_dict_input_with_variant/print_val.py",
"repo_id": "promptflow",
"token_count": 109
} | 68 |
{
"data": "code_first_input.csv"
} | promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/data_inputs.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/data_inputs.json",
"repo_id": "promptflow",
"token_count": 19
} | 69 |
import sys
from promptflow import tool
@tool
def get_val(key):
    # print the input, then emit user and error logs
print(key)
print("user log")
print("error log", file=sys.stderr) | promptflow/src/promptflow/tests/test_configs/flows/flow_with_user_output/print_val.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_user_output/print_val.py",
"repo_id": "promptflow",
"token_count": 64
} | 70 |
$schema: https://azuremlschemas.azureedge.net/latest/flow.schema.json
name: classification_accuracy_eval
display_name: Classification Accuracy Evaluation
type: evaluate
path: azureml://datastores/workspaceworkingdirectory/paths/Users/wanhan/a/flow.dag.yaml
description: Measuring the performance of a classification system by comparing its outputs to groundtruth.
properties:
promptflow.stage: prod
promptflow.details.type: markdown
promptflow.details.source: README.md
promptflow.batch_inputs: samples.json
| promptflow/src/promptflow/tests/test_configs/flows/meta_files/remote_flow_short_path.meta.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/meta_files/remote_flow_short_path.meta.yaml",
"repo_id": "promptflow",
"token_count": 150
} | 71 |
[{"idx": 5}, {"idx": 5}] | promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/samples_all_timeout.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/samples_all_timeout.json",
"repo_id": "promptflow",
"token_count": 14
} | 72 |
inputs:
text:
type: string
outputs:
output_text:
type: string
reference: ${print_input.output}
nodes:
- name: print_input
type: python
source:
type: code
path: print_input.py
inputs:
text: ${inputs.text}
| promptflow/src/promptflow/tests/test_configs/flows/print_input_flow/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/print_input_flow/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 99
} | 73 |
{"mod": 2, "mod_2": 5}
{"mod": 2, "mod_2": 5}
{"mod": 2, "mod_2": 5}
{"mod": 2, "mod_2": 5}
{"mod": 2, "mod_2": 5}
{"mod": 2, "mod_2": 5}
{"mod": 2, "mod_2": 5} | promptflow/src/promptflow/tests/test_configs/flows/python_tool_partial_failure/inputs/data.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/python_tool_partial_failure/inputs/data.jsonl",
"repo_id": "promptflow",
"token_count": 90
} | 74 |
from promptflow import tool
@tool
def passthrough(x: str):
return x
| promptflow/src/promptflow/tests/test_configs/flows/script_with_import/dummy_utils/util_tool.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/script_with_import/dummy_utils/util_tool.py",
"repo_id": "promptflow",
"token_count": 26
} | 75 |
inputs:
num:
type: int
outputs:
content:
type: string
reference: ${divide_num.output}
aggregate_content:
type: string
reference: ${aggregate_num.output}
nodes:
- name: divide_num
type: python
source:
type: code
path: divide_num.py
inputs:
num: ${inputs.num}
- name: aggregate_num
type: python
source:
type: code
path: aggregate_num.py
inputs:
num: ${divide_num.output}
aggregation: True
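A possible body for `aggregate_num.py`: an aggregation node runs once per batch, and promptflow passes the referenced node output in as a list of per-line values. The averaging below is an assumed implementation, and the `@tool` decorator is omitted so the snippet runs standalone:

```python
def aggregate_num(num: list) -> float:
    # `num` holds one divide_num output per input line; average them.
    # In the real tool file this function would carry the @tool decorator.
    return sum(num) / len(num)
```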
| promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool_and_aggregate/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool_and_aggregate/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 183
} | 76 |
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
  uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000?api-version=2023-08-01-preview
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic",
"tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3630'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.017'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?api-version=2023-04-01-preview&count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1372'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.070'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- python-requests/2.31.0
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/connections/azure_open_ai_connection/listsecrets?api-version=2023-04-01-preview
response:
body:
string: '{"tags": null, "location": null, "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/connections/azure_open_ai_connection",
"name": "azure_open_ai_connection", "type": "Microsoft.MachineLearningServices/workspaces/connections",
"properties": {"authType": "ApiKey", "credentials": {"key": "_"}, "group":
"AzureAI", "category": "AzureOpenAI", "expiryTime": null, "target": "_", "createdByWorkspaceArmId":
null, "isSharedToAll": false, "sharedUserList": [], "metadata": {"azureml.flow.connection_type":
"AzureOpenAI", "azureml.flow.module": "promptflow.connections", "ApiType":
"azure", "ApiVersion": "2023-07-01-preview", "ResourceId": null, "DeploymentApiVersion":
"2023-10-01-preview"}}, "systemData": {"createdAt": "2023-08-22T10:15:34.5762053Z",
"createdBy": "[email protected]", "createdByType": "User", "lastModifiedAt":
"2023-08-22T10:15:34.5762053Z", "lastModifiedBy": "[email protected]",
"lastModifiedByType": "User"}}'
headers:
cache-control:
- no-cache
content-length:
- '1246'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.072'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- python-requests/2.31.0
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/connections/custom_connection/listsecrets?api-version=2023-04-01-preview
response:
body:
string: '{"tags": null, "location": null, "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/connections/custom_connection",
"name": "custom_connection", "type": "Microsoft.MachineLearningServices/workspaces/connections",
"properties": {"authType": "CustomKeys", "credentials": {"keys": {}}, "group":
"AzureAI", "category": "CustomKeys", "expiryTime": null, "target": "_", "createdByWorkspaceArmId":
null, "isSharedToAll": false, "sharedUserList": [], "metadata": {"azureml.flow.connection_type":
"Custom", "azureml.flow.module": "promptflow.connections"}}, "systemData":
{"createdAt": "2023-06-19T20:56:12.0353964Z", "createdBy": "[email protected]",
"createdByType": "User", "lastModifiedAt": "2023-06-19T20:56:12.0353964Z",
"lastModifiedBy": "[email protected]", "lastModifiedByType": "User"}}'
headers:
cache-control:
- no-cache
content-length:
- '1275'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.075'
status:
code: 200
message: OK
version: 1
| promptflow/src/promptflow/tests/test_configs/recordings/test_arm_connection_operations_TestArmConnectionOperations_test_get_connection.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_arm_connection_operations_TestArmConnectionOperations_test_get_connection.yaml",
"repo_id": "promptflow",
"token_count": 3426
} | 77 |
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus2euap", "tags": {}, "etag": null, "kind": "Default", "sku": {"name":
"Basic", "tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3601'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.019'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-09-22T05:26:30.7527337+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-09-22T05:26:31.3199607+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1378'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.628'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-09-22T05:26:30.7527337+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-09-22T05:26:31.3199607+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1233'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.069'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.114'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:28:55 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '49'
content-md5:
- quXiEreYvPinSj0HsaNa/g==
content-type:
- application/octet-stream
last-modified:
- Tue, 26 Dec 2023 02:27:07 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Tue, 26 Dec 2023 02:27:07 GMT
x-ms-meta-name:
- bcc45cd4-c343-4bd0-8bdd-cecfafea742d
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- be87bd84-d39b-442b-af5a-e8c209d6d10c
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:28:56 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-09-22T05:26:30.7527337+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-09-22T05:26:31.3199607+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1233'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.068'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.106'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:29:00 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '245'
content-md5:
- F+JA0a3CxcLYZ0ANRdlZbA==
content-type:
- application/octet-stream
last-modified:
- Tue, 26 Dec 2023 02:27:07 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Tue, 26 Dec 2023 02:27:07 GMT
x-ms-meta-name:
- 5541d425-c3dc-4f2e-b818-956634d8a470
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:29:01 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-09-22T05:26:30.7527337+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-09-22T05:26:31.3199607+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1233'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.058'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.085'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:29:09 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '49'
content-md5:
- quXiEreYvPinSj0HsaNa/g==
content-type:
- application/octet-stream
last-modified:
- Tue, 26 Dec 2023 02:27:07 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Tue, 26 Dec 2023 02:27:07 GMT
x-ms-meta-name:
- bcc45cd4-c343-4bd0-8bdd-cecfafea742d
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- be87bd84-d39b-442b-af5a-e8c209d6d10c
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:29:10 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-09-22T05:26:30.7527337+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-09-22T05:26:31.3199607+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1233'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.097'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.083'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:29:13 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '245'
content-md5:
- F+JA0a3CxcLYZ0ANRdlZbA==
content-type:
- application/octet-stream
last-modified:
- Tue, 26 Dec 2023 02:27:07 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Tue, 26 Dec 2023 02:27:07 GMT
x-ms-meta-name:
- 5541d425-c3dc-4f2e-b818-956634d8a470
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Fri, 05 Jan 2024 08:29:14 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/print_env_var/flow.dag.yaml
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
version: 1
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic",
"tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3630'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.022'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1372'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
vary:
- Accept-Encoding
x-cache:
- CONFIG_NOCACHE
x-content-type-options:
- nosniff
x-request-time:
- '0.055'
status:
code: 200
message: OK
- request:
body: '{"filters": [{"field": "type", "operator": "eq", "values": ["runs"]}, {"field":
"annotations/archived", "operator": "eq", "values": ["false"]}, {"field": "properties/runType",
"operator": "contains", "values": ["azureml.promptflow.FlowRun", "azureml.promptflow.EvaluationRun",
"azureml.promptflow.PairwiseEvaluationRun"]}], "freeTextSearch": "", "order":
[{"direction": "Desc", "field": "properties/creationContext/createdTime"}],
"pageSize": 10, "skip": 0, "includeTotalResultCount": true, "searchBuilder":
"AppendPrefix"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '523'
Content-Type:
- application/json
User-Agent:
- python-requests/2.31.0
method: POST
uri: https://eastus.api.azureml.ms/index/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/entities
response:
body:
string: '{"totalCount": 146558, "value": [{"relevancyScore": 0.28867513, "entityResourceName":
"promptflow-eastus", "highlights": {}, "usage": {"totalCount": 0}, "schemaId":
"974ab09e-bfc2-56a6-9be4-97bcfe3d33ca", "entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/2283afa6-deed-415c-aac4-490205cf23dd",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"b7989f3e-5409-4e57-adc3-34ac2a615527", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {},
"parameters": {}, "settings": {}, "modifiedTime": "2023-11-29T08:58:33.4780914Z",
"retainForLifetimeOfWorkspace": false, "error": {"code": null, "errorCodeHierarchy":
null, "message": null, "time": null, "componentName": null, "severity": null,
"detailsUri": null, "referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:08.3614863", "computeDurationMilliseconds": 8361.4863, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:58:12.0723876+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.b7989f3e-5409-4e57-adc3-34ac2a615527", "targetName":
null, "runName": null, "experimentName": "web_classification", "runId": "b7989f3e-5409-4e57-adc3-34ac2a615527",
"parentRunId": null, "rootRunId": "b7989f3e-5409-4e57-adc3-34ac2a615527",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "d30efbeb-f81d-4cfa-b5cc-a0570a049009", "runUuid": "2283afa6-deed-415c-aac4-490205cf23dd",
"parentRunUuid": null, "runNumber": 1701248292, "startTime": "2023-11-29T08:58:25.0794582Z",
"endTime": "2023-11-29T08:58:33.4409445Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "4dd8f4d5f44dfeb817d3438cf84bd739215d87afd9458597",
"azureml.promptflow.flow_lineage_id": "af1a6951de9be2ce13d3b58b23dbd8b6a0cd8fd4918ad9cb22b28fb8395fbcb0",
"azureml.promptflow.node_variant": "${summarize_text_content.variant_0}",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/7c59ad67129b552538ca09362171cc15/web_classification/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml:webClassification1:1", "azureml.promptflow.inputs_mapping":
"{\"url\":\"${data.url}\"}", "azureml.promptflow.snapshot_id": "7d0b4037-9946-4054-928b-a2266f333eb5",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.total_tokens":
"858"}, "actionUris": {}, "duration": "00:00:08.3614863", "durationMilliseconds":
8361.4863}, "internal": {}, "updateSequence": 5, "type": "runs", "version":
null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "entityObjectId":
"2283afa6-deed-415c-aac4-490205cf23dd", "resourceType": "Workspace", "relationships":
[{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_b7989f3e-5409-4e57-adc3-34ac2a615527_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_b7989f3e-5409-4e57-adc3-34ac2a615527_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/fa3a6d13-5bab-490d-8372-6836b44385bd",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"9232169c-e77a-4a7b-8410-00390061c303", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.fetch_text_content_from_url.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.prepare_examples.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.summarize_text_content.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.classify_with_llm.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.convert_to_dict.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:57:38.7208614Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:07.6293709", "computeDurationMilliseconds": 7629.3709, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:57:17.6541225+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.9232169c-e77a-4a7b-8410-00390061c303", "targetName":
null, "runName": null, "experimentName": "web_classification", "runId": "9232169c-e77a-4a7b-8410-00390061c303",
"parentRunId": null, "rootRunId": "9232169c-e77a-4a7b-8410-00390061c303",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "d30efbeb-f81d-4cfa-b5cc-a0570a049009", "runUuid": "fa3a6d13-5bab-490d-8372-6836b44385bd",
"parentRunUuid": null, "runNumber": 1701248237, "startTime": "2023-11-29T08:57:31.0659361Z",
"endTime": "2023-11-29T08:57:38.695307Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "4dd8f4d5f44dfeb817d3438cf84bd739215d87afd9458597",
"azureml.promptflow.flow_lineage_id": "af1a6951de9be2ce13d3b58b23dbd8b6a0cd8fd4918ad9cb22b28fb8395fbcb0",
"azureml.promptflow.node_variant": "${summarize_text_content.variant_0}",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/7c59ad67129b552538ca09362171cc15/web_classification/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml:/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/data/webClassification1/versions/1",
"azureml.promptflow.inputs_mapping": "{\"url\":\"${data.url}\"}", "azureml.promptflow.snapshot_id":
"363446e5-2e0b-49fe-a9ac-894be75cd2fc", "_azureml.evaluation_run": "promptflow.BatchRun",
"azureml.promptflow.total_tokens": "784"}, "actionUris": {}, "duration": "00:00:07.6293709",
"durationMilliseconds": 7629.3709}, "internal": {}, "updateSequence": 5, "type":
"runs", "version": null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5",
"entityObjectId": "fa3a6d13-5bab-490d-8372-6836b44385bd", "resourceType":
"Workspace", "relationships": [{"relationType": "CreatedBy", "targetEntityId":
null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_9232169c-e77a-4a7b-8410-00390061c303_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_9232169c-e77a-4a7b-8410-00390061c303_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/45bc17a9-381d-4be1-8b9f-6bc7afdf3b5e",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"740436db-a664-47ec-843a-322888b7dcda", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.print_env.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:57:15.6923897Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:19.9853113", "computeDurationMilliseconds": 19985.3113, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:56:41.8575651+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.740436db-a664-47ec-843a-322888b7dcda", "targetName":
null, "runName": null, "experimentName": "print_env_var", "runId": "740436db-a664-47ec-843a-322888b7dcda",
"parentRunId": null, "rootRunId": "740436db-a664-47ec-843a-322888b7dcda",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "6a87c3ae-5a75-4c5d-9eb9-5203b0062282", "runUuid": "45bc17a9-381d-4be1-8b9f-6bc7afdf3b5e",
"parentRunUuid": null, "runNumber": 1701248201, "startTime": "2023-11-29T08:56:55.6708784Z",
"endTime": "2023-11-29T08:57:15.6561897Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "62adccc385dd5d078797bdd0d2e1c55e120f3d5216885b81",
"azureml.promptflow.flow_lineage_id": "f1efdb93dcf9b3c17e246e7bcf0e2c7398d7bc289f8dd2c3d8f808eacc63c31f",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/3360ae705933fb90bcd290241ca0ece9/print_env_var/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl",
"azureml.promptflow.snapshot_id": "95d3a45f-4a34-4ce2-a5ac-a9dae21618fd",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.total_tokens":
"0"}, "actionUris": {}, "duration": "00:00:19.9853113", "durationMilliseconds":
19985.3113}, "internal": {}, "updateSequence": 5, "type": "runs", "version":
null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "entityObjectId":
"45bc17a9-381d-4be1-8b9f-6bc7afdf3b5e", "resourceType": "Workspace", "relationships":
[{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_740436db-a664-47ec-843a-322888b7dcda_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_740436db-a664-47ec-843a-322888b7dcda_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/b38b90c3-8766-4d1f-8ea4-4a1863d7aa34",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"my_display_name_variant_0_202311290856", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.print_env.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:56:40.3364816Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:21.0025046", "computeDurationMilliseconds": 21002.5046, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:56:05.6175926+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.95e3cd29-751d-4697-82c1-02c876845d6e", "targetName":
null, "runName": null, "experimentName": "print_env_var", "runId": "95e3cd29-751d-4697-82c1-02c876845d6e",
"parentRunId": null, "rootRunId": "95e3cd29-751d-4697-82c1-02c876845d6e",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "6a87c3ae-5a75-4c5d-9eb9-5203b0062282", "runUuid": "b38b90c3-8766-4d1f-8ea4-4a1863d7aa34",
"parentRunUuid": null, "runNumber": 1701248165, "startTime": "2023-11-29T08:56:19.2722436Z",
"endTime": "2023-11-29T08:56:40.2747482Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "62adccc385dd5d078797bdd0d2e1c55e120f3d5216885b81",
"azureml.promptflow.flow_lineage_id": "f1efdb93dcf9b3c17e246e7bcf0e2c7398d7bc289f8dd2c3d8f808eacc63c31f",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/3360ae705933fb90bcd290241ca0ece9/print_env_var/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl",
"azureml.promptflow.snapshot_id": "96d73260-c033-464f-af3a-bd14c40ec7ea",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.total_tokens":
"0"}, "actionUris": {}, "duration": "00:00:21.0025046", "durationMilliseconds":
21002.5046}, "internal": {}, "updateSequence": 5, "type": "runs", "version":
null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "entityObjectId":
"b38b90c3-8766-4d1f-8ea4-4a1863d7aa34", "resourceType": "Workspace", "relationships":
[{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_95e3cd29-751d-4697-82c1-02c876845d6e_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_95e3cd29-751d-4697-82c1-02c876845d6e_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/9e2d06ac-cc88-4ae0-bc67-50e4dc53e7de",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"0730e814-465b-46a8-9378-ac4d33f9925d", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.print_env.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:56:02.4358462Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:19.9540921", "computeDurationMilliseconds": 19954.0921, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:55:29.1814714+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.0730e814-465b-46a8-9378-ac4d33f9925d", "targetName":
null, "runName": null, "experimentName": "print_env_var", "runId": "0730e814-465b-46a8-9378-ac4d33f9925d",
"parentRunId": null, "rootRunId": "0730e814-465b-46a8-9378-ac4d33f9925d",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "6a87c3ae-5a75-4c5d-9eb9-5203b0062282", "runUuid": "9e2d06ac-cc88-4ae0-bc67-50e4dc53e7de",
"parentRunUuid": null, "runNumber": 1701248129, "startTime": "2023-11-29T08:55:42.4538208Z",
"endTime": "2023-11-29T08:56:02.4079129Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "62adccc385dd5d078797bdd0d2e1c55e120f3d5216885b81",
"azureml.promptflow.flow_lineage_id": "f1efdb93dcf9b3c17e246e7bcf0e2c7398d7bc289f8dd2c3d8f808eacc63c31f",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/3360ae705933fb90bcd290241ca0ece9/print_env_var/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl",
"azureml.promptflow.snapshot_id": "253cf774-f77d-405e-b6c8-936d8167e6c2",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.total_tokens":
"0"}, "actionUris": {}, "duration": "00:00:19.9540921", "durationMilliseconds":
19954.0921}, "internal": {}, "updateSequence": 5, "type": "runs", "version":
null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "entityObjectId":
"9e2d06ac-cc88-4ae0-bc67-50e4dc53e7de", "resourceType": "Workspace", "relationships":
[{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_0730e814-465b-46a8-9378-ac4d33f9925d_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_0730e814-465b-46a8-9378-ac4d33f9925d_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/4d3fae7b-440b-4160-a222-446d3fd7d9fd",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"49405488-bade-4678-83ae-54eaadf36812", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.fetch_text_content_from_url.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.prepare_examples.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.summarize_text_content.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.classify_with_llm.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.convert_to_dict.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:55:16.7524129Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:09.8764537", "computeDurationMilliseconds": 9876.4537, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:54:53.2174798+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.49405488-bade-4678-83ae-54eaadf36812", "targetName":
null, "runName": null, "experimentName": "web_classification", "runId": "49405488-bade-4678-83ae-54eaadf36812",
"parentRunId": null, "rootRunId": "49405488-bade-4678-83ae-54eaadf36812",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "d30efbeb-f81d-4cfa-b5cc-a0570a049009", "runUuid": "4d3fae7b-440b-4160-a222-446d3fd7d9fd",
"parentRunUuid": null, "runNumber": 1701248093, "startTime": "2023-11-29T08:55:06.8461037Z",
"endTime": "2023-11-29T08:55:16.7225574Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "4dd8f4d5f44dfeb817d3438cf84bd739215d87afd9458597",
"azureml.promptflow.flow_lineage_id": "af1a6951de9be2ce13d3b58b23dbd8b6a0cd8fd4918ad9cb22b28fb8395fbcb0",
"azureml.promptflow.node_variant": "${summarize_text_content.variant_0}",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/7c59ad67129b552538ca09362171cc15/web_classification/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/107bd3498e44deb2dccc53d2208d32b2/webClassification1.jsonl",
"azureml.promptflow.inputs_mapping": "{\"url\":\"${data.url}\"}", "azureml.promptflow.snapshot_id":
"87e7369f-31cd-47c9-99d5-30d88d86b5ac", "_azureml.evaluation_run": "promptflow.BatchRun",
"azureml.promptflow.total_tokens": "854"}, "actionUris": {}, "duration": "00:00:09.8764537",
"durationMilliseconds": 9876.4537}, "internal": {}, "updateSequence": 5, "type":
"runs", "version": null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5",
"entityObjectId": "4d3fae7b-440b-4160-a222-446d3fd7d9fd", "resourceType":
"Workspace", "relationships": [{"relationType": "CreatedBy", "targetEntityId":
null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_49405488-bade-4678-83ae-54eaadf36812_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_49405488-bade-4678-83ae-54eaadf36812_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/50710cad-1dea-406e-a6c2-104892f5a554",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"b4c70ce2-98a4-49df-b537-ffa9c9249340", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.hello_world.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:54:35.9414902Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:04.1365326", "computeDurationMilliseconds": 4136.5326, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:54:18.2213481+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.b4c70ce2-98a4-49df-b537-ffa9c9249340", "targetName":
null, "runName": null, "experimentName": "simple_hello_world", "runId": "b4c70ce2-98a4-49df-b537-ffa9c9249340",
"parentRunId": null, "rootRunId": "b4c70ce2-98a4-49df-b537-ffa9c9249340",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "e57eb79e-3e83-45f1-810c-ee22c20b2ebd", "runUuid": "50710cad-1dea-406e-a6c2-104892f5a554",
"parentRunUuid": null, "runNumber": 1701248058, "startTime": "2023-11-29T08:54:31.7697586Z",
"endTime": "2023-11-29T08:54:35.9062912Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "simple_hello_world", "azureml.promptflow.flow_lineage_id":
"simple_hello_world", "azureml.promptflow.flow_definition_resource_id": "azureml://registries/promptflow-preview/models/simple_hello_world/versions/202311241",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/79f088fae0e502653c43146c9682f425/simple_hello_world.jsonl",
"azureml.promptflow.inputs_mapping": "{\"name\":\"${data.name}\"}", "azureml.promptflow.snapshot_id":
"6df06b41-50a3-49d4-8d88-c745050f2a78", "_azureml.evaluation_run": "promptflow.BatchRun",
"azureml.promptflow.total_tokens": "0"}, "actionUris": {}, "duration": "00:00:04.1365326",
"durationMilliseconds": 4136.5326}, "internal": {}, "updateSequence": 5, "type":
"runs", "version": null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5",
"entityObjectId": "50710cad-1dea-406e-a6c2-104892f5a554", "resourceType":
"Workspace", "relationships": [{"relationType": "CreatedBy", "targetEntityId":
null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_b4c70ce2-98a4-49df-b537-ffa9c9249340_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_b4c70ce2-98a4-49df-b537-ffa9c9249340_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/cb83b450-eb49-424a-b2ee-e08fa742b6a7",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"b8188c7f-fd92-4e64-bb9c-608731bc0279", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.hello_world.completed":
{"count": 1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:54:09.8110604Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:04.2052292", "computeDurationMilliseconds": 4205.2292, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:53:51.9217991+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.b8188c7f-fd92-4e64-bb9c-608731bc0279", "targetName":
null, "runName": null, "experimentName": "9370c311-e0d5-44a5-8802-7974f3eacb76",
"runId": "b8188c7f-fd92-4e64-bb9c-608731bc0279", "parentRunId": null, "rootRunId":
"b8188c7f-fd92-4e64-bb9c-608731bc0279", "runType": "azureml.promptflow.FlowRun",
"runTypeV2": {"orchestrator": null, "traits": {}, "attribution": "PromptFlow",
"computeType": "MIR_v2"}, "scriptName": null, "experimentId": "2c90f6b2-c0f5-42b7-a11e-f6d43ee205be",
"runUuid": "cb83b450-eb49-424a-b2ee-e08fa742b6a7", "parentRunUuid": null,
"runNumber": 1701248031, "startTime": "2023-11-29T08:54:05.5637286Z", "endTime":
"2023-11-29T08:54:09.7689578Z", "computeRequest": null, "compute": null, "userProperties":
{"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "9370c311-e0d5-44a5-8802-7974f3eacb76", "azureml.promptflow.flow_lineage_id":
"9370c311-e0d5-44a5-8802-7974f3eacb76", "azureml.promptflow.flow_definition_resource_id":
"azureml://locations/eastus/workspaces/00000/flows/9370c311-e0d5-44a5-8802-7974f3eacb76",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/79f088fae0e502653c43146c9682f425/simple_hello_world.jsonl",
"azureml.promptflow.inputs_mapping": "{\"name\":\"${data.name}\"}", "azureml.promptflow.snapshot_id":
"1ce3b7c6-dc4e-4a48-92bf-36be28eb6084", "_azureml.evaluation_run": "promptflow.BatchRun",
"azureml.promptflow.total_tokens": "0"}, "actionUris": {}, "duration": "00:00:04.2052292",
"durationMilliseconds": 4205.2292}, "internal": {}, "updateSequence": 5, "type":
"runs", "version": null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5",
"entityObjectId": "cb83b450-eb49-424a-b2ee-e08fa742b6a7", "resourceType":
"Workspace", "relationships": [{"relationType": "CreatedBy", "targetEntityId":
null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_b8188c7f-fd92-4e64-bb9c-608731bc0279_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_b8188c7f-fd92-4e64-bb9c-608731bc0279_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/82d60584-a4dd-463f-a1e2-8f7ce1ba46f5",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"9113bb55-3f3e-4a49-92c7-515bed79fa66", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.grade.completed":
{"count": 1, "lastValue": 3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.calculate_accuracy.completed": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}, "accuracy": {"count":
1, "lastValue": 1.0, "minimumValue": 1.0, "maximumValue": 1.0, "metricType":
"azureml.v1.scalar"}}, "parameters": {}, "settings": {}, "modifiedTime": "2023-11-29T08:47:37.5128714Z",
"retainForLifetimeOfWorkspace": false, "error": {"code": null, "errorCodeHierarchy":
null, "message": null, "time": null, "componentName": null, "severity": null,
"detailsUri": null, "referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:05.1390317", "computeDurationMilliseconds": 5139.0317, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:47:18.9653925+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.9113bb55-3f3e-4a49-92c7-515bed79fa66", "targetName":
null, "runName": null, "experimentName": "eval_classification_accuracy", "runId":
"9113bb55-3f3e-4a49-92c7-515bed79fa66", "parentRunId": null, "rootRunId":
"9113bb55-3f3e-4a49-92c7-515bed79fa66", "runType": "azureml.promptflow.FlowRun",
"runTypeV2": {"orchestrator": null, "traits": {}, "attribution": "PromptFlow",
"computeType": "MIR_v2"}, "scriptName": null, "experimentId": "7bdec279-f99c-4ed3-b0b8-dd75698b8fd0",
"runUuid": "82d60584-a4dd-463f-a1e2-8f7ce1ba46f5", "parentRunUuid": null,
"runNumber": 1701247638, "startTime": "2023-11-29T08:47:32.3452199Z", "endTime":
"2023-11-29T08:47:37.4842516Z", "computeRequest": null, "compute": null, "userProperties":
{"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "f8e4236a4e78e7f7125bbd811ec7976cb330412723a530f8",
"azureml.promptflow.flow_lineage_id": "26c575d863a85371ef937096728441d8c68c3e737b5a1bfeae5ac8f3b9ccb048",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/1aa3064d06f6170abbc488cc35c713b9/eval-classification-accuracy/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl",
"azureml.promptflow.input_run_id": "fe987530-fca6-4741-81c0-f28a2701aa6d",
"azureml.promptflow.inputs_mapping": "{\"groundtruth\":\"${data.answer}\",\"prediction\":\"${run.outputs.category}\"}",
"azureml.promptflow.snapshot_id": "12e71b5e-ae43-4776-89a5-9d7144c3902e",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.total_tokens":
"0"}, "actionUris": {}, "duration": "00:00:05.1390317", "durationMilliseconds":
5139.0317}, "internal": {}, "updateSequence": 5, "type": "runs", "version":
null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "entityObjectId":
"82d60584-a4dd-463f-a1e2-8f7ce1ba46f5", "resourceType": "Workspace", "relationships":
[{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_9113bb55-3f3e-4a49-92c7-515bed79fa66_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_9113bb55-3f3e-4a49-92c7-515bed79fa66_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]},
{"relevancyScore": 0.28867513, "entityResourceName": "promptflow-eastus",
"highlights": {}, "usage": {"totalCount": 0}, "schemaId": "974ab09e-bfc2-56a6-9be4-97bcfe3d33ca",
"entityId": "azureml://location/eastus/workspaceId/3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/type/runs/objectId/688f1eff-a2b1-49f6-a89b-ae00bfc61e1c",
"kind": "Unversioned", "annotations": {"archived": false, "tags": {}, "displayName":
"fe987530-fca6-4741-81c0-f28a2701aa6d", "status": "Completed", "primaryMetricName":
null, "estimatedCost": null, "primaryMetricSummary": null, "metrics": {"__pf__.nodes.fetch_text_content_from_url.completed":
{"count": 1, "lastValue": 3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.prepare_examples.completed": {"count":
1, "lastValue": 3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.summarize_text_content.completed": {"count":
1, "lastValue": 3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.classify_with_llm.completed": {"count":
1, "lastValue": 3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType":
"azureml.v1.scalar"}, "__pf__.nodes.convert_to_dict.completed": {"count":
1, "lastValue": 3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType":
"azureml.v1.scalar"}, "__pf__.lines.completed": {"count": 1, "lastValue":
3.0, "minimumValue": 3.0, "maximumValue": 3.0, "metricType": "azureml.v1.scalar"},
"__pf__.lines.failed": {"count": 1, "lastValue": 0.0, "minimumValue": 0.0,
"maximumValue": 0.0, "metricType": "azureml.v1.scalar"}}, "parameters": {},
"settings": {}, "modifiedTime": "2023-11-29T08:46:39.9700654Z", "retainForLifetimeOfWorkspace":
false, "error": {"code": null, "errorCodeHierarchy": null, "message": null,
"time": null, "componentName": null, "severity": null, "detailsUri": null,
"referenceCode": null}, "resourceMetricSummary": {"gpuUtilizationPercentLastHour":
null, "gpuMemoryUtilizationPercentLastHour": null, "gpuEnergyJoules": null,
"resourceMetricNames": null}, "jobCost": {"chargedCpuCoreSeconds": null, "chargedCpuMemoryMegabyteSeconds":
null, "chargedGpuSeconds": null, "chargedNodeUtilizationSeconds": null}, "computeDuration":
"00:00:13.6812734", "computeDurationMilliseconds": 13681.2734, "effectiveStartTimeUtc":
null, "name": null, "description": null}, "properties": {"updatedTime": "0001-01-01T00:00:00+00:00",
"creationContext": {"createdTime": "2023-11-29T08:46:12.5308144+00:00", "createdBy":
{"userObjectId": "4e60fbf3-0338-41a8-bed5-fc341be556f8", "userTenantId": "00000000-0000-0000-0000-000000000000",
"userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587"}, "creationSource": null},
"dataContainerId": "dcid.fe987530-fca6-4741-81c0-f28a2701aa6d", "targetName":
null, "runName": null, "experimentName": "web_classification", "runId": "fe987530-fca6-4741-81c0-f28a2701aa6d",
"parentRunId": null, "rootRunId": "fe987530-fca6-4741-81c0-f28a2701aa6d",
"runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null,
"traits": {}, "attribution": "PromptFlow", "computeType": "MIR_v2"}, "scriptName":
null, "experimentId": "d30efbeb-f81d-4cfa-b5cc-a0570a049009", "runUuid": "688f1eff-a2b1-49f6-a89b-ae00bfc61e1c",
"parentRunUuid": null, "runNumber": 1701247572, "startTime": "2023-11-29T08:46:26.2582802Z",
"endTime": "2023-11-29T08:46:39.9395536Z", "computeRequest": null, "compute":
null, "userProperties": {"azureml.promptflow.runtime_name": "demo-mir", "azureml.promptflow.runtime_version":
"20231011.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "4dd8f4d5f44dfeb817d3438cf84bd739215d87afd9458597",
"azureml.promptflow.flow_lineage_id": "af1a6951de9be2ce13d3b58b23dbd8b6a0cd8fd4918ad9cb22b28fb8395fbcb0",
"azureml.promptflow.node_variant": "${summarize_text_content.variant_0}",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/7c59ad67129b552538ca09362171cc15/web_classification/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl",
"azureml.promptflow.inputs_mapping": "{\"url\":\"${data.url}\"}", "azureml.promptflow.snapshot_id":
"ab880004-3c1e-4e56-937c-8208cf567021", "_azureml.evaluation_run": "promptflow.BatchRun",
"azureml.promptflow.total_tokens": "2522"}, "actionUris": {}, "duration":
"00:00:13.6812734", "durationMilliseconds": 13681.2734}, "internal": {}, "updateSequence":
5, "type": "runs", "version": null, "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5",
"entityObjectId": "688f1eff-a2b1-49f6-a89b-ae00bfc61e1c", "resourceType":
"Workspace", "relationships": [{"relationType": "CreatedBy", "targetEntityId":
null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_fe987530-fca6-4741-81c0-f28a2701aa6d_output_data_debug_info/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"},
{"relationType": "CreatedBy", "targetEntityId": null, "assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_fe987530-fca6-4741-81c0-f28a2701aa6d_output_data_flow_outputs/versions/1",
"entityType": "data", "direction": "Output", "entityContainerId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"}]}],
"nextSkip": 10, "entityContainerIdsToEntityContainerMetadata": {"3e123da1-f9a5-4c91-9234-8d9ffbb39ff5":
{"resourceId": "3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "subscriptionId": "96aede12-2f73-41cb-b983-6d11a904839b",
"resourceGroup": "promptflow", "resourceName": "promptflow-eastus", "entityContainerType":
"Workspace", "regions": [{"regionName": "eastus", "isPrimaryRegion": true}],
"tenantId": "00000000-0000-0000-0000-000000000000", "immutableResourceId":
"3e123da1-f9a5-4c91-9234-8d9ffbb39ff5", "isPublicResource": false}}, "resourcesNotQueriedReasons":
{}, "numberOfEntityContainersNotQueried": 0, "fanoutData": {"Multitenant":
{"nextSkip": 10, "isShardDone": false, "didShardFail": false, "totalCount":
146558, "resourceIdsOnShardThisPage": ["3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"]}},
"regionalFanoutState": {"shardFanoutStates": [{"shardId": "Multitenant", "nextSkip":
10, "isPlanExecutionDone": false, "didPlanExecutionFail": false, "totalCount":
146558, "resourceIdsOnShardThisPage": ["3e123da1-f9a5-4c91-9234-8d9ffbb39ff5"]}],
"firstPageStartTime": null}, "shardErrors": {}, "canSupportSkip": true}'
headers:
connection:
- keep-alive
content-length:
- '47750'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.158'
status:
code: 200
message: OK
version: 1
| promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_list_runs.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_list_runs.yaml",
"repo_id": "promptflow",
"token_count": 26972
} | 79 |
flow: ../flows/classification_accuracy_evaluation
column_mapping:
groundtruth: "${data.answer}"
prediction: "${run.outputs.category}"
run: flow_run_20230629_101205 # ./sample_bulk_run.yaml
# run config: env related
environment_variables: .env
# optional
connections:
node_1:
connection: test_llm_connection
deployment_name: gpt-35-turbo
| promptflow/src/promptflow/tests/test_configs/runs/illegal/missing_data.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/runs/illegal/missing_data.yaml",
"repo_id": "promptflow",
"token_count": 133
} | 80 |
from setuptools import find_packages, setup

PACKAGE_NAME = "tool_package"

setup(
name=PACKAGE_NAME,
version="0.0.1",
description="This is my tools package",
packages=find_packages(),
entry_points={
"package_tools": ["tool_func = tool_package.utils:list_package_tools"],
},
install_requires=[
"promptflow",
"promptflow-tools"
]
) | promptflow/src/promptflow/tests/test_configs/tools/tool_package/setup.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/tools/tool_package/setup.py",
"repo_id": "promptflow",
"token_count": 159
} | 81 |
name: node_wrong_reference
inputs:
text:
type: string
outputs:
result:
type: string
reference: ${second_node}
nodes:
- name: first_node
type: python
source:
type: code
path: test.py
inputs:
text: ${inputs.text}
aggregation: true
- name: second_node
type: python
source:
type: code
path: test.py
inputs:
text: ${third_node}
aggregation: true
| promptflow/src/promptflow/tests/test_configs/wrong_flows/wrong_node_reference/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/wrong_flows/wrong_node_reference/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 158
} | 82 |
{
"version": "0.2",
"language": "en",
"languageId": "python",
"dictionaries": [
"powershell",
"python",
"go",
"css",
"html",
"bash",
"npm",
"softwareTerms",
"en_us",
"en-gb"
],
"ignorePaths": [
"**/*.js",
"**/*.pyc",
"**/*.log",
"**/*.jsonl",
"**/*.xml",
"**/*.txt",
".gitignore",
"scripts/docs/_build/**",
"src/promptflow/promptflow/azure/_restclient/flow/**",
"src/promptflow/promptflow/azure/_restclient/swagger.json",
"src/promptflow/tests/**",
"src/promptflow-tools/tests/**",
"**/flow.dag.yaml",
"**/setup.py",
"scripts/installer/curl_install_pypi/**",
"scripts/installer/windows/**",
"src/promptflow/promptflow/_sdk/_service/pfsvc.py"
],
"words": [
"aoai",
"amlignore",
"mldesigner",
"faiss",
"serp",
"azureml",
"mlflow",
"vnet",
"openai",
"pfazure",
"eastus",
"azureai",
"vectordb",
"Qdrant",
"Weaviate",
"env",
"e2etests",
"e2etest",
"tablefmt",
"logprobs",
"logit",
"hnsw",
"chatml",
"UNLCK",
"KHTML",
"numlines",
"azurecr",
"centralus",
"Policheck",
"azuremlsdktestpypi",
"rediraffe",
"pydata",
"ROBOCOPY",
"undoc",
"retriable",
"pfcli",
"pfutil",
"mgmt",
"wsid",
"westus",
"msrest",
"cref",
"msal",
"pfbytes",
"Apim",
"junit",
"nunit",
"astext",
"Likert",
"pfsvc"
],
"ignoreWords": [
"openmpi",
"ipynb",
"xdist",
"pydash",
"tqdm",
"rtype",
"epocs",
"fout",
"funcs",
"todos",
"fstring",
"creds",
"zipp",
"gmtime",
"pyjwt",
"nbconvert",
"nbformat",
"pypandoc",
"dotenv",
"miniconda",
"datas",
"tcgetpgrp",
"yamls",
"fmt",
"serpapi",
"genutils",
"metadatas",
"tiktoken",
"bfnrt",
"orelse",
"thead",
"sympy",
"ghactions",
"esac",
"MSRC",
"pycln",
"strictyaml",
"psutil",
"getch",
"tcgetattr",
"TCSADRAIN",
"stringio",
"jsonify",
"werkzeug",
"continuumio",
"pydantic",
"iterrows",
"dtype",
"fillna",
"nlines",
"aggr",
"tcsetattr",
"pysqlite",
"AADSTS700082",
"Pyinstaller",
"runsvdir",
"runsv",
"levelno",
"LANCZOS",
"Mobius",
"ruamel",
"gunicorn",
"pkill",
"pgrep",
"Hwfoxydrg",
"llms",
"vcrpy",
"uionly",
"llmops",
"Abhishek",
"restx",
"httpx",
"tiiuae",
"nohup",
"metagenai",
"WBITS",
"laddr",
"nrows",
"Dumpable",
"XCLASS"
],
"flagWords": [
"Prompt Flow"
],
"allowCompoundWords": true
}
| promptflow/.cspell.json/0 | {
"file_path": "promptflow/.cspell.json",
"repo_id": "promptflow",
"token_count": 1564
} | 0 |
# Support
## How to file issues and get help
This project uses GitHub Issues to track bugs and feature requests. Please search the existing
issues before filing new issues to avoid duplicates. For new issues, file your bug or
feature request as a new Issue.
## Microsoft Support Policy
Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
| promptflow/SUPPORT.md/0 | {
"file_path": "promptflow/SUPPORT.md",
"repo_id": "promptflow",
"token_count": 84
} | 1 |
# Dev Setup
## Set up process
- First, create a new [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) environment, specifying Python version 3.9:
`conda create -n <env_name> python=3.9`.
- Activate the env you created.
- Set the environment variable `PYTHONPATH` in your new conda environment.
`conda env config vars set PYTHONPATH=<path-to-src>\promptflow`.
Once you have set the environment variable, you have to reactivate your environment.
`conda activate <env_name>`.
- In root folder, run `python scripts/building/dev_setup.py --promptflow-extra-deps azure` to install the package and dependencies.
## How to run tests
### Set up your secrets
`dev-connections.json.example` is a connection template provided in `src/promptflow`. You can follow these steps to use this template to configure your connections for the test cases:
1. `cd ./src/promptflow`
2. Run the command `cp dev-connections.json.example connections.json`;
3. Replace the values in the json file with your connection info;
4. Set the environment `PROMPTFLOW_CONNECTIONS='connections.json'`;
After the setup process above is finished, you can use the `pytest` command to run tests. For example, in the root folder you can:
### Run tests via command
- Run all tests under a folder: `pytest src/promptflow/tests -v`
- Run a single test: ` pytest src/promptflow/tests/promptflow_test/e2etests/test_executor.py::TestExecutor::test_executor_basic_flow -v`
### Run tests in VSCode
1. Set up your Python interpreter
- Open the Command Palette (Ctrl+Shift+P) and select `Python: Select Interpreter`.

- Select existing conda env which you created previously.

2. Set up your test framework and directory
- Open the Command Palette (Ctrl+Shift+P) and select `Python: Configure Tests`.

- Select `pytest` as test framework.

- Select `Root directory` as test directory.

3. Exclude specific test folders.
You can exclude specific test folders if you are missing their extra dependencies, to prevent VS Code's test discovery from failing.
For example, if you don't have azure dependency, you can exclude `sdk_cli_azure_test`.
Open `.vscode/settings.json` and add `"--ignore=src/promptflow/tests/sdk_cli_azure_test"` to `"python.testing.pytestArgs"`.

4. Click the `Run Test` button on the left

### Run tests in PyCharm
1. Set up your PyCharm Python interpreter

2. Select existing conda env which you created previously

3. Run test, right-click the test name to run, or click the green arrow button on the left.

### Record and replay tests
Please refer to [Replay End-to-End Tests](./replay-e2e-test.md) to learn how to record and replay tests.
## How to write docstrings
Clear and consistent API documentation is crucial for the usability and maintainability of our codebase. Please refer to [API Documentation Guidelines](./documentation_guidelines.md) to learn how to write docstrings when developing the project.
## How to write tests
- Put all test data/configs under `src/promptflow/tests/test_configs`.
- Write unit tests:
- Flow run: `src/promptflow/tests/sdk_cli_test/unittest/`
- Flow run in azure: `src/promptflow/tests/sdk_cli_azure_test/unittest/`
- Write e2e tests:
- Flow run: `src/promptflow/tests/sdk_cli_test/e2etests/`
- Flow run in azure: `src/promptflow/tests/sdk_cli_azure_test/e2etests/`
- Test file name and the test case name all start with `test_`.
- A basic test example, see [test_connection.py](../../src/promptflow/tests/sdk_cli_test/e2etests/test_connection.py).
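As a sketch of these conventions, here is a minimal self-contained pytest example (the helper and assertion are illustrative, not taken from the repository):

```python
# test_example.py: both the file name and the test function name start
# with "test_" so pytest can discover them.


def normalize_category(raw: str) -> str:
    # Toy helper standing in for real flow logic.
    return raw.strip().lower()


def test_normalize_category():
    assert normalize_category("  App ") == "app"
```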
### Test structure
Currently all tests are under `src/promptflow/tests/` folder:
- tests/
- promptflow/
- sdk_cli_test/
- e2etests/
- unittests/
- sdk_cli_azure_test/
- e2etests/
- unittests/
- test_configs/
- connections/
- datas/
- flows/
- runs/
- wrong_flows/
- wrong_tools/
When you want to add tests for a new feature, you can add a new test file, say an e2e test file `test_construction.py`, under `tests/promptflow/**/e2etests/`.
Once the project gets more complicated, or whenever you find it necessary to add a new test folder and test configs for a specific feature, feel free to split `promptflow` into more folders, for example:
- tests/
- (Test folder name)/
- e2etests/
- test_xxx.py
- unittests/
- test_xxx.py
- test_configs/
- (Data or config folder name)/
| promptflow/docs/dev/dev_setup.md/0 | {
"file_path": "promptflow/docs/dev/dev_setup.md",
"repo_id": "promptflow",
"token_count": 1669
} | 2 |
# Create and Use Tool Package
In this document, we will guide you through the process of developing your own tool package, offering detailed steps and advice on how to utilize your creation.
A custom tool is a prompt flow tool developed by yourself. If you find it useful, you can follow this guidance to make it a tool package. This will enable you to conveniently reuse it, share it with your team, or distribute it to anyone in the world.
After successful installation of the package, your custom "tool" will show up in VSCode extension as below:

## Create your own tool package
Your tool package should be a python package. To try it quickly, just use [my-tools-package 0.0.1](https://pypi.org/project/my-tools-package/) and skip this section.
### Prerequisites
Create a new conda environment using Python 3.9 or 3.10. Run the command below to install the PromptFlow dependencies:
```
pip install promptflow
```
Install Pytest packages for running tests:
```
pip install pytest pytest-mock
```
Clone the PromptFlow repository from GitHub using the following command:
```
git clone https://github.com/microsoft/promptflow.git
```
### Create custom tool package
Run below command under the root folder to create your tool project quickly:
```
python <promptflow github repo>\scripts\tool\generate_tool_package_template.py --destination <your-tool-project> --package-name <your-package-name> --tool-name <your-tool-name> --function-name <your-tool-function-name>
```
For example:
```
python D:\proj\github\promptflow\scripts\tool\generate_tool_package_template.py --destination hello-world-proj --package-name hello-world --tool-name hello_world_tool --function-name get_greeting_message
```
This auto-generated script will create one tool for you. The parameters _destination_ and _package-name_ are mandatory. The parameters _tool-name_ and _function-name_ are optional. If left unfilled, the _tool-name_ will default to _hello_world_tool_, and the _function-name_ will default to _tool-name_.
The command will generate the tool project as follows with one tool `hello_world_tool.py` in it:
```
hello-world-proj/
│
├── hello_world/
│ ├── tools/
│ │ ├── __init__.py
│ │ ├── hello_world_tool.py
│ │ └── utils.py
│ ├── yamls/
│ │ └── hello_world_tool.yaml
│ └── __init__.py
│
├── tests/
│ ├── __init__.py
│ └── test_hello_world_tool.py
│
├── MANIFEST.in
│
└── setup.py
```
The points outlined below explain the purpose of each folder/file in the package. If you aim to develop multiple tools within your package, please make sure to closely examine points 2 and 5.
1. **hello-world-proj**: This is the source directory. All of your project's source code should be placed in this directory.
2. **hello-world/tools**: This directory contains the individual tools for your project. Your tool package can contain either one tool or many tools. When adding a new tool, you should create another *_tool.py under the `tools` folder.
3. **hello-world/tools/hello_world_tool.py**: Develop your tool within the def function. Use the `@tool` decorator to identify the function as a tool.
> [!Note] There are two ways to write a tool. The default and recommended way is the function implemented way. You can also use the class implementation way, referring to [my_tool_2.py](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/my_tool_2.py) as an example.
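As a minimal sketch of the function implementation way (the try/except fallback is only so the snippet runs without promptflow installed; in a real tool you would import `tool` from promptflow directly):

```python
try:
    from promptflow import tool
except ImportError:
    # Fallback no-op decorator so this sketch runs without promptflow installed.
    def tool(func):
        return func


@tool
def get_greeting_message(name: str) -> str:
    # A function-style tool: plain typed inputs, plain return value.
    return f"Hello, {name}! This is my hello world tool."
```

Calling `get_greeting_message("Prompt Flow")` returns the greeting string, which is how prompt flow will invoke the tool with the inputs declared in its YAML.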
4. **hello-world/tools/utils.py**: This file implements the tool list method, which collects all the tools defined. It is required to have this tool list method, as it allows the User Interface (UI) to retrieve your tools and display them within the UI.
> [!Note] There's no need to create your own list method if you maintain the existing folder structure. You can simply use the auto-generated list method provided in the `utils.py` file.
5. **hello_world/yamls/hello_world_tool.yaml**: Tool YAMLs define the metadata of the tools. The tool list method, as outlined in `utils.py`, fetches these tool YAMLs.
> [!Note] If you create a new tool, don't forget to also create the corresponding tool YAML. You can run below command under your tool project to auto generate your tool YAML. You may want to specify `-n` for `name` and `-d` for `description`, which would be displayed as the tool name and tooltip in prompt flow UI.
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> -n <tool_name> -d <tool_description>
```
For example:
```
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m hello_world.tools.hello_world_tool -o hello_world\yamls\hello_world_tool.yaml -n "Hello World Tool" -d "This is my hello world tool."
```
To populate your tool module, adhere to the pattern \<package_name\>.tools.\<tool_name\>, which represents the folder path to your tool within the package.
6. **tests**: This directory contains all your tests, though they are not required for creating your custom tool package. When adding a new tool, you can also create corresponding tests and place them in this directory. Run below command under your tool project:
```
pytest tests
```
7. **MANIFEST.in**: This file is used to determine which files to include in the distribution of the project. Tool YAML files should be included in MANIFEST.in so that your tool YAMLs would be packaged and your tools can show in the UI.
> [!Note] There's no need to update this file if you maintain the existing folder structure.
8. **setup.py**: This file contains metadata about your project like the name, version, author, and more. Additionally, the entry point is automatically configured for you in the `generate_tool_package_template.py` script. In Python, configuring the entry point in `setup.py` helps establish the primary execution point for a package, streamlining its integration with other software.
The `package_tools` entry point together with the tool list method are used to retrieve all the tools and display them in the UI.
```python
entry_points={
"package_tools": ["<your_tool_name> = <list_module>:<list_method>"],
},
```
> [!Note] There's no need to update this file if you maintain the existing folder structure.
## Build and share the tool package
Execute the following command in the tool package root directory to build your tool package:
```
python setup.py sdist bdist_wheel
```
This will generate a tool package `<your-package>-0.0.1.tar.gz` and corresponding `whl file` inside the `dist` folder.
Create an account on PyPI if you don't already have one, and install `twine` package by running `pip install twine`.
Upload your package to PyPI by running `twine upload dist/*`. This will prompt you for your PyPI username and password and then upload your package to PyPI. Once your package is uploaded to PyPI, others can install it using pip by running `pip install your-package-name`. Make sure to replace `your-package-name` with the name of your package as it appears on PyPI.
If you only want to put it on Test PyPI, upload your package by running `twine upload --repository-url https://test.pypi.org/legacy/ dist/*`. Once your package is uploaded to Test PyPI, others can install it using pip by running `pip install --index-url https://test.pypi.org/simple/ your-package-name`.
## Use your tool from VSCode Extension
* Step1: Install [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow).
* Step2: Go to terminal and install your tool package in conda environment of the extension. Assume your conda env name is `prompt-flow`.
```
(local_test) PS D:\projects\promptflow\tool-package-quickstart> conda activate prompt-flow
(prompt-flow) PS D:\projects\promptflow\tool-package-quickstart> pip install .\dist\my_tools_package-0.0.1-py3-none-any.whl
```
* Step3: Go to the extension and open one flow folder. Click `flow.dag.yaml` and preview the flow. Next, click the `+` button and you will see your tools. You may need to reload the window to clear the previous cache if you don't see your tool in the list.

## FAQs
### Why is my custom tool not showing up in the UI?
Confirm that the tool YAML files are included in your custom tool package. You can add the YAML files to [MANIFEST.in](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/MANIFEST.in) and include the package data in [setup.py](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/setup.py).
Alternatively, you can test your tool package using the script below to ensure that you've packaged your tool YAML files and configured the package tool entry point correctly.
1. Make sure to install the tool package in your conda environment before executing this script.
2. Create a python file anywhere and copy the content below into it.
```python
import importlib
import importlib.metadata

PACKAGE_TOOLS_ENTRY = "package_tools"
def test():
"""List all package tools information using the `package-tools` entry point.
This function iterates through all entry points registered under the group "package_tools."
For each tool, it imports the associated module to ensure its validity and then prints
information about the tool.
Note:
- Make sure your package is correctly packed to appear in the list.
- The module is imported to validate its presence and correctness.
Example of tool information printed:
----identifier
{'module': 'module_name', 'package': 'package_name', 'package_version': 'package_version', ...}
"""
entry_points = importlib.metadata.entry_points()
if isinstance(entry_points, list):
entry_points = entry_points.select(group=PACKAGE_TOOLS_ENTRY)
else:
entry_points = entry_points.get(PACKAGE_TOOLS_ENTRY, [])
for entry_point in entry_points:
list_tool_func = entry_point.load()
package_tools = list_tool_func()
for identifier, tool in package_tools.items():
importlib.import_module(tool["module"]) # Import the module to ensure its validity
print(f"----{identifier}\n{tool}")
if __name__ == "__main__":
test()
```
3. Run this script in your conda environment. This will return the metadata of all tools installed in your local environment, and you should verify that your tools are listed.
### Why am I unable to upload package to PyPI?
* Make sure that the entered username and password of your PyPI account are accurate.
* If you encounter a `403 Forbidden Error`, it's likely due to a naming conflict with an existing package. You will need to choose a different name. Package names must be unique on PyPI to avoid confusion and conflicts among users. Before creating a new package, it's recommended to search PyPI (https://pypi.org/) to verify that your chosen name is not already taken. If the name you want is unavailable, consider selecting an alternative name or a variation that clearly differentiates your package from the existing one.
## Advanced features
- [Add a Tool Icon](add-a-tool-icon.md)
- [Add Category and Tags for Tool](add-category-and-tags-for-tool.md)
- [Create and Use Your Own Custom Strong Type Connection](create-your-own-custom-strong-type-connection.md)
- [Customize an LLM Tool](customize_an_llm_tool.md)
- [Use File Path as Tool Input](use-file-path-as-tool-input.md)
- [Create a Dynamic List Tool Input](create-dynamic-list-tool-input.md)
- [Create Cascading Tool Inputs](create-cascading-tool-inputs.md)
| promptflow/docs/how-to-guides/develop-a-tool/create-and-use-tool-package.md/0 | {
"file_path": "promptflow/docs/how-to-guides/develop-a-tool/create-and-use-tool-package.md",
"repo_id": "promptflow",
"token_count": 3697
} | 3 |
# Run and evaluate a flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
After you have developed and tested the flow in [init and test a flow](../init-and-test-a-flow.md), this guide will help you learn how to run a flow with a larger dataset and then evaluate the flow you have created.
## Create a batch run
Since you have run your flow successfully with a small set of data, you might want to test whether it performs well on a larger dataset; to do so, you can run a batch test and check the outputs.
A batch test allows you to run your flow with a large dataset and generate an output for each data row. The run results are recorded in a local database, so you can use [pf commands](../../reference/pf-command-reference.md) to view the run results at any time (e.g. `pf run list`).
Let's create a run with the flow [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification). It is a flow demonstrating multi-class classification with an LLM. Given a URL, it will classify the URL into one web category with just a few shots, simple summarization and classification prompts.
To begin with the guide, you need:
- Git clone the sample repository(above flow link) and set the working directory to `<path-to-the-sample-repo>/examples/flows/`.
- Make sure you have already created the necessary connection following [Create necessary connections](../quick-start.md#create-necessary-connections).
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Create the run with flow and data, can add `--stream` to stream the run.
```sh
pf run create --flow standard/web-classification --data standard/web-classification/data.jsonl --column-mapping url='${data.url}' --stream
```
Note `column-mapping` is a mapping from flow input name to specified values, see more details in [Use column mapping](https://aka.ms/pf/column-mapping).
You can also name the run by specifying `--name my_first_run` in above command, otherwise the run name will be generated in a certain pattern which has timestamp inside.

With a run name, you can easily view or visualize the run details using below commands:
```sh
pf run show-details -n my_first_run
```

```sh
pf run visualize -n my_first_run
```

More details can be found with `pf run --help`
:::
:::{tab-item} SDK
:sync: SDK
```python
from promptflow import PFClient
# Please protect the entry point by using `if __name__ == '__main__':`,
# otherwise it would cause unintended side effect when promptflow spawn worker processes.
# Ref: https://docs.python.org/3/library/multiprocessing.html#the-spawn-and-forkserver-start-methods
if __name__ == "__main__":
# PFClient can help manage your runs and connections.
pf = PFClient()
# Set flow path and run input data
flow = "standard/web-classification" # set the flow directory
data= "standard/web-classification/data.jsonl" # set the data file
# create a run, stream it until it's finished
base_run = pf.run(
flow=flow,
data=data,
stream=True,
# map the url field from the data to the url input of the flow
column_mapping={"url": "${data.url}"},
)
```

```python
# get the inputs/outputs details of a finished run.
details = pf.get_details(base_run)
details.head(10)
```

```python
# visualize the run in a web browser
pf.visualize(base_run)
```

Feel free to check [Promptflow Python Library Reference](../../reference/python-library-reference/promptflow.md) for all SDK public interfaces.
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Use the code lens action on the top of the yaml editor to trigger batch run

Click the bulk test button on the top of the visual editor to trigger a batch run.

:::
::::
We also have a more detailed documentation [Manage runs](../manage-runs.md) demonstrating how to manage your finished runs with CLI, SDK and VS Code Extension.
## Evaluate your flow
You can use an evaluation method to evaluate your flow. The evaluation methods are also flows which use Python or LLM etc., to calculate metrics like accuracy, relevance score. Please refer to [Develop evaluation flow](../develop-a-flow/develop-evaluation-flow.md) to learn how to develop an evaluation flow.
In this guide, we use [eval-classification-accuracy](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-classification-accuracy) flow to evaluate. This is a flow illustrating how to evaluate the performance of a classification system. It involves comparing each prediction to the groundtruth and assigns a `Correct` or `Incorrect` grade, and aggregating the results to produce metrics such as `accuracy`, which reflects how good the system is at classifying the data.
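Conceptually, the grading and aggregation in such an evaluation flow boil down to something like the following sketch (node names and fields in the real flow differ):

```python
def grade(prediction: str, groundtruth: str) -> str:
    # Line-level node: compare each prediction to the groundtruth.
    return "Correct" if prediction.lower() == groundtruth.lower() else "Incorrect"


def aggregate(grades: list) -> dict:
    # Aggregation node: turn per-line grades into a single accuracy metric.
    return {"accuracy": round(grades.count("Correct") / len(grades), 2)}


pairs = [("App", "App"), ("News", "Academic"), ("App", "App")]
grades = [grade(p, g) for p, g in pairs]
print(aggregate(grades))  # {'accuracy': 0.67}
```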
### Run evaluation flow against run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
**Evaluate the finished flow run**
After the run is finished, you can evaluate it with the command below. Compared with the normal run create command, note there are two extra arguments:
- `column-mapping`: A mapping from flow input name to specified data values. Reference [here](https://aka.ms/pf/column-mapping) for detailed information.
- `run`: The run name of the flow run to be evaluated.
More details can be found in [Use column mapping](https://aka.ms/pf/column-mapping).
```sh
pf run create --flow evaluation/eval-classification-accuracy --data standard/web-classification/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run my_first_run --stream
```
Same as the previous run, you can specify the evaluation run name with `--name my_first_eval_run` in above command.
You can also stream or view the run details with:
```sh
pf run stream -n my_first_eval_run # same as "--stream" in command "run create"
pf run show-details -n my_first_eval_run
pf run show-metrics -n my_first_eval_run
```
Since now you have two different runs `my_first_run` and `my_first_eval_run`, you can visualize the two runs at the same time with below command.
```sh
pf run visualize -n "my_first_run,my_first_eval_run"
```
A web browser will be opened to show the visualization result.

:::
:::{tab-item} SDK
:sync: SDK
**Evaluate the finished flow run**
After the run is finished, you can evaluate it with the code below. Compared with the normal run creation, note there are two extra arguments:
- `column-mapping`: A dictionary represents sources of the input data that are needed for the evaluation method. The sources can be from the flow run output or from your test dataset.
- If the data column is in your test dataset, then it is specified as `${data.<column_name>}`.
- If the data column is from your flow output, then it is specified as `${run.outputs.<output_name>}`.
- `run`: The run name or run instance of the flow run to be evaluated.
More details can be found in [Use column mapping](https://aka.ms/pf/column-mapping).
```python
import json

from promptflow import PFClient
# PFClient can help manage your runs and connections.
pf = PFClient()
# set eval flow path
eval_flow = "evaluation/eval-classification-accuracy"
data= "standard/web-classification/data.jsonl"
# run the flow with existing run
eval_run = pf.run(
flow=eval_flow,
data=data,
run=base_run,
    column_mapping={  # map data columns and base run outputs to the eval flow inputs
"groundtruth": "${data.answer}",
"prediction": "${run.outputs.category}",
}
)
# stream the run until it's finished
pf.stream(eval_run)
# get the inputs/outputs details of a finished run.
details = pf.get_details(eval_run)
details.head(10)
# view the metrics of the eval run
metrics = pf.get_metrics(eval_run)
print(json.dumps(metrics, indent=4))
# visualize both the base run and the eval run
pf.visualize([base_run, eval_run])
```
A web browser will be opened to show the visualization result.

:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
There are actions to trigger local batch runs. To perform an evaluation, you can use the "run against existing runs" action.


:::
::::
## Next steps
Learn more about:
- [Tune prompts with variants](../tune-prompts-with-variants.md)
- [Deploy a flow](../deploy-a-flow/index.md)
- [Manage runs](../manage-runs.md)
- [Python library reference](../../reference/python-library-reference/promptflow.md)
```{toctree}
:maxdepth: 1
:hidden:
use-column-mapping
```
| promptflow/docs/how-to-guides/run-and-evaluate-a-flow/index.md/0 | {
"file_path": "promptflow/docs/how-to-guides/run-and-evaluate-a-flow/index.md",
"repo_id": "promptflow",
"token_count": 2884
} | 4 |
# Embedding
## Introduction
OpenAI's embedding models convert text into dense vector representations for various NLP tasks. See the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings) for more information.
## Prerequisite
Create OpenAI resources:
- **OpenAI**
Sign up account [OpenAI website](https://openai.com/)
Login and [Find personal API key](https://platform.openai.com/account/api-keys)
- **Azure OpenAI (AOAI)**
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
## **Connections**
Setup connections to provide resources in embedding tool.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|-----------------------------------------------------------------------|----------|
| input | string | the input text to embed | Yes |
| connection | string | the connection for the embedding tool use to provide resources | Yes |
| model/deployment_name | string | instance of the text-embedding engine to use. Fill in model name if you use OpenAI connection, or deployment name if use Azure OpenAI connection. | Yes |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| list | The vector representations for inputs |
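Downstream, the returned vectors are typically compared with cosine similarity. A minimal sketch in plain Python (the 3-dimensional vectors are illustrative; real embeddings have hundreds or thousands of dimensions):

```python
import math


def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.25]
print(round(cosine_similarity(v1, v2), 4))
```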
The following is an example response returned by the embedding tool:
<details>
<summary>Output</summary>
```
[-0.005744616035372019,
-0.007096089422702789,
-0.00563855143263936,
-0.005272455979138613,
-0.02355326898396015,
0.03955197334289551,
-0.014260607771575451,
-0.011810848489403725,
-0.023170066997408867,
-0.014739611186087132,
...]
```
</details> | promptflow/docs/reference/tools-reference/embedding_tool.md/0 | {
"file_path": "promptflow/docs/reference/tools-reference/embedding_tool.md",
"repo_id": "promptflow",
"token_count": 851
} | 5 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: "<user-input>"
api_base: "aoai-api-endpoint"
api_type: "azure"
| promptflow/examples/connections/azure_openai.yml/0 | {
"file_path": "promptflow/examples/connections/azure_openai.yml",
"repo_id": "promptflow",
"token_count": 89
} | 6 |
system:
You are an assistant to calculate the answer to the provided math problems.
Please return the final numerical answer only, without any accompanying reasoning or explanation.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| promptflow/examples/flows/chat/chat-math-variant/chat.jinja2/0 | {
"file_path": "promptflow/examples/flows/chat/chat-math-variant/chat.jinja2",
"repo_id": "promptflow",
"token_count": 87
} | 7 |
import os
from typing import Iterable, List, Optional
from dataclasses import dataclass
from faiss import Index
import faiss
import pickle
import numpy as np
from .oai import OAIEmbedding as Embedding
@dataclass
class SearchResultEntity:
text: str = None
vector: List[float] = None
score: float = None
original_entity: dict = None
metadata: dict = None
INDEX_FILE_NAME = "index.faiss"
DATA_FILE_NAME = "index.pkl"
class FAISSIndex:
def __init__(self, index: Index, embedding: Embedding) -> None:
self.index = index
self.docs = {} # id -> doc, doc is (text, metadata)
self.embedding = embedding
def insert_batch(
self, texts: Iterable[str], metadatas: Optional[List[dict]] = None
) -> None:
documents = []
vectors = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
vector = self.embedding.generate(text)
documents.append((text, metadata))
vectors.append(vector)
self.index.add(np.array(vectors, dtype=np.float32))
self.docs.update(
{i: doc for i, doc in enumerate(documents, start=len(self.docs))}
)
def query(self, text: str, top_k: int = 10) -> List[SearchResultEntity]:
vector = self.embedding.generate(text)
scores, indices = self.index.search(np.array([vector], dtype=np.float32), top_k)
docs = []
for j, i in enumerate(indices[0]):
if i == -1: # This happens when not enough docs are returned.
continue
doc = self.docs[i]
docs.append(
SearchResultEntity(text=doc[0], metadata=doc[1], score=scores[0][j])
)
return docs
def save(self, path: str) -> None:
faiss.write_index(self.index, os.path.join(path, INDEX_FILE_NAME))
# dump docs to pickle file
with open(os.path.join(path, DATA_FILE_NAME), "wb") as f:
pickle.dump(self.docs, f)
def load(self, path: str) -> None:
self.index = faiss.read_index(os.path.join(path, INDEX_FILE_NAME))
with open(os.path.join(path, DATA_FILE_NAME), "rb") as f:
self.docs = pickle.load(f)
| promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/index.py/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/index.py",
"repo_id": "promptflow",
"token_count": 1014
} | 8 |
# All the values should be string type, please use "123" instead of 123 or "True" instead of True.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<open-ai-api-key>"
organization: ""
# Note:
# The connection information will be stored in a local database with api_key encrypted for safety.
# Prompt flow will ONLY use the connection information (incl. keys) when instructed by you, e.g. manage connections, use connections to run flow etc.
| promptflow/examples/flows/chat/chat-with-pdf/openai.yaml/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/openai.yaml",
"repo_id": "promptflow",
"token_count": 158
} | 9 |
import random
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import bs4
import requests
from promptflow import tool
session = requests.Session()
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def get_page_sentence(page, count: int = 10):
# find all paragraphs
paragraphs = page.split("\n")
paragraphs = [p.strip() for p in paragraphs if p.strip()]
# find all sentence
sentences = []
for p in paragraphs:
sentences += p.split(". ")
sentences = [s.strip() + "." for s in sentences if s.strip()]
# get first `count` number of sentences
return " ".join(sentences[:count])
def fetch_text_content_from_url(url: str, count: int = 10):
# Send a request to the URL
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.35"
}
delay = random.uniform(0, 0.5)
time.sleep(delay)
response = session.get(url, headers=headers)
if response.status_code == 200:
# Parse the HTML content using BeautifulSoup
soup = bs4.BeautifulSoup(response.text, "html.parser")
page_content = [p_ul.get_text().strip() for p_ul in soup.find_all("p") + soup.find_all("ul")]
page = ""
for content in page_content:
if len(content.split(" ")) > 2:
page += decode_str(content)
if not content.endswith("\n"):
page += "\n"
text = get_page_sentence(page, count=count)
return (url, text)
else:
msg = (
f"Get url failed with status code {response.status_code}.\nURL: {url}\nResponse: "
f"{response.text[:100]}"
)
print(msg)
return (url, "No available content")
except Exception as e:
print("Get url failed with error: {}".format(e))
return (url, "No available content")
@tool
def search_result_from_url(url_list: list, count: int = 10):
results = []
partial_func_of_fetch_text_content_from_url = partial(fetch_text_content_from_url, count=count)
with ThreadPoolExecutor(max_workers=5) as executor:
futures = executor.map(partial_func_of_fetch_text_content_from_url, url_list)
        for future in futures:
            results.append(future)
return results
| promptflow/examples/flows/chat/chat-with-wikipedia/search_result_from_url.py/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-wikipedia/search_result_from_url.py",
"repo_id": "promptflow",
"token_count": 1117
} | 10 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
entities:
type: list
default:
- software engineer
- CEO
ground_truth:
type: string
default: '"CEO, Software Engineer, Finance Manager"'
outputs:
match_cnt:
type: object
reference: ${match.output}
nodes:
- name: cleansing
type: python
source:
type: code
path: cleansing.py
inputs:
entities_str: ${inputs.ground_truth}
- name: match
type: python
source:
type: code
path: match.py
inputs:
answer: ${inputs.entities}
ground_truth: ${cleansing.output}
- name: log_metrics
type: python
source:
type: code
path: log_metrics.py
inputs:
match_counts: ${match.output}
aggregation: true
environment:
  python_requirements_txt: requirements.txt
 | promptflow/examples/flows/evaluation/eval-entity-match-rate/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-entity-match-rate/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 313
} | 11 |
user:
# Instructions
* There are many chatbots that can answer users' questions based on context drawn from different sources, such as search results or snippets from books/papers. They try to understand the user's question and then gather context by searching search engines, databases, or books/papers for relevant content. They then answer the question based on their understanding of the question and the context.
* Perceived intelligence is the degree to which a bot can impress the user with its responses, by showing originality, insight, creativity, knowledge, and adaptability. Perceived intelligence can be influenced by various factors, such as the content, tone, style, and structure of the bot's responses, the relevance, coherence, and accuracy of the information the bot provides, the creativity, originality, and wit of the bot's expressions, the depth, breadth, and insight of the bot's knowledge, and the ability of the bot to adapt, learn, and use feedback.
* Your goal is to score the answer for given question and context from 1 to 10 based on perceived intelligence described above:
* Score 10 means the answer is excellent for perceived intelligence
* Score 1 means the answer is poor for perceived intelligence
* Score 5 means the answer is normal for perceived intelligence
* Just respond with the score, nothing else.
# Real work
## Question
{{question}}
## Answer
{{answer}}
## Context
{{context}}
## Score
 | promptflow/examples/flows/evaluation/eval-perceived-intelligence/gpt_perceived_intelligence.md/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-perceived-intelligence/gpt_perceived_intelligence.md",
"repo_id": "promptflow",
"token_count": 315
} | 12 |
from promptflow import tool
@tool
def select_metrics(metrics: str) -> dict:
supported_metrics = ('gpt_coherence', 'gpt_similarity', 'gpt_fluency', 'gpt_relevance', 'gpt_groundedness',
'f1_score', 'ada_similarity')
user_selected_metrics = [metric.strip() for metric in metrics.split(',') if metric]
metric_selection_dict = {}
for metric in supported_metrics:
if metric in user_selected_metrics:
metric_selection_dict[metric] = True
else:
metric_selection_dict[metric] = False
return metric_selection_dict
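The tool's behavior can be demonstrated with a standalone sketch (the promptflow `@tool` decorator is omitted and the supported-metric set is shortened here for illustration):

```python
# Mirror of select_metrics' selection logic, runnable on its own.
SUPPORTED_METRICS = ("gpt_coherence", "gpt_similarity", "f1_score")


def select_metrics(metrics: str) -> dict:
    user_selected = [m.strip() for m in metrics.split(",") if m.strip()]
    # Map every supported metric to whether the user asked for it.
    return {m: m in user_selected for m in SUPPORTED_METRICS}


print(select_metrics("f1_score, gpt_coherence"))
# → {'gpt_coherence': True, 'gpt_similarity': False, 'f1_score': True}
```

Unsupported names in the input string are silently ignored, which keeps downstream activate-config checks simple.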
| promptflow/examples/flows/evaluation/eval-qna-non-rag/select_metrics.py/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-qna-non-rag/select_metrics.py",
"repo_id": "promptflow",
"token_count": 244
} | 13 |
# Integrations Folder
This folder contains flow examples contributed by various contributors. Each flow example should have a README.md file that provides a comprehensive introduction to the flow and includes contact information for the flow owner.
# Guideline for README.md of flows
To ensure consistency and clarity, please follow the guidelines below when creating the README.md file for your flow example. You can also refer to the [README.md](../standard/web-classification/README.md) file in the [web-classification](../standard/web-classification) flow example as a reference.
Note: The sample README.md above may not include contact information because it is a shared example, and people can open issues in this repository if they have questions about it. For integration samples, **please make sure to include contact information in your README.md file**.
## Introduction (Required)
Provide a detailed description of the flow, including its components, inputs, outputs, and any dependencies. Explain how the flow works and what problem it solves. This section should give users a clear understanding of the flow's functionality and how it can be used.
## Tools Used in this Flow (Required)
List all the tools (functions) used in the flow. This can include both standard tools provided by prompt flow and any custom tools created specifically for the flow. Include a brief description of each tool and its purpose within the flow.
## Pre-requisites (Required)
List any pre-requisites that are required to run the flow. This can include any specific versions of prompt flow or other dependencies. If there are any specific configurations or settings that need to be applied, make sure to mention them in this section.
## Getting Started (Required)
Provide step-by-step instructions on how to get started with the flow. This should include any necessary setup or configuration steps, such as installing dependencies or setting up connections. If there are specific requirements or prerequisites, make sure to mention them in this section.
## Usage Examples
Include usage examples that demonstrate how to run the flow and provide input data. This can include command-line instructions or code snippets. Show users how to execute the flow and explain the expected output or results.
## Troubleshooting
If there are any known issues or troubleshooting tips related to the flow, include them in this section. Provide solutions or workarounds for common problems that users may encounter. This will help users troubleshoot issues on their own and reduce the need for support.
## Contribution Guidelines
If you would like to encourage other users to contribute to your flow or provide guidelines for contributing to the integration folder, include a section with contribution guidelines. This can include instructions on how to submit pull requests, guidelines for code formatting, or any other relevant information.
## Contact (Required)
Specify the flow owner and provide contact information in the README.md file. This can include an email address, GitHub username, or any other preferred method of contact. By including this information, users will be able to reach out to the owner with any questions or issues related to the flow.
# Conclusion
By following these guidelines, you can create a well-structured and informative README.md file for your flow example. This will help users understand and utilize your flow effectively. If you have any further questions or need assistance, please don't hesitate to reach out. Happy contributing!
| promptflow/examples/flows/integrations/README.md/0 | {
"file_path": "promptflow/examples/flows/integrations/README.md",
"repo_id": "promptflow",
"token_count": 747
} | 14 |
# Autonomous Agent
This is a flow showcasing how to construct an AutoGPT agent with promptflow that autonomously figures out how to apply
the given functions to solve its goal. In this sample, the goal is film trivia: providing accurate and up-to-date
information about movies, directors, actors, and more.
The flow infers the next function to execute and the user intent with an LLM, then runs that function to generate an
observation. The observation is used as an augmented prompt that feeds the next LLM inference loop, until the inferred
function is the signal that all objectives are finished. The function set used in the flow contains a Wikipedia search
function that can search the web to find answers about current events and a PythonREPL function that can run Python
code in a REPL.
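The loop just described can be sketched as a toy, standalone program (here `infer_next_step` is a scripted stand-in for the LLM, and the function names and observations are illustrative, not the real tools):

```python
# Toy sketch of the AutoGPT inner loop; the "LLM" is a scripted stub.
def infer_next_step(history):
    plan = ["wikipedia_search", "python_repl", "finish"]  # scripted decisions
    return plan[len(history)]


def run_function(name):
    observations = {
        "wikipedia_search": "Director born in 1970.",
        "python_repl": "Age computed: 54.",
    }
    return observations[name]


history = []
while True:
    step = infer_next_step(history)
    if step == "finish":  # the inferred function signals all objectives are done
        break
    observation = run_function(step)
    history.append((step, observation))  # fed back as the augmented prompt next round

print([name for name, _ in history])  # → ['wikipedia_search', 'python_repl']
```

In the real flow the plan is not fixed: each round the LLM sees the accumulated observations and chooses the next function itself.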
For the sample input about a movie introduction, the AutoGPT usually runs 4 rounds to finish the task. The first round
is searching for the movie name, the second round is searching for the movie director, the third round is calculating
the director's age, and the last round outputs the finishing signal. It takes 30s~40s to finish the task, but may take
longer if you use "gpt-3.5" or encounter the Azure OpenAI rate limit. You could use "gpt-4" or go to
https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.
Note: This is just a sample introducing how to use promptflow to build a simple AutoGPT. You can go to
https://github.com/Significant-Gravitas/Auto-GPT to get more concepts about AutoGPT.
## What you will learn
In this flow, you will learn
- how to use prompt tool.
- how to compose an AutoGPT flow using functions.
## Prerequisites
Install prompt-flow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting Started
### 1 Create Azure OpenAI or OpenAI connection
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Note that you need to use "2023-07-01-preview" as Azure OpenAI connection API version when using function calling.
See <a href='https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling' target='_blank'>How to use function calling with Azure OpenAI Service</a> for more details.
### 2. Configure the flow with your connection
`flow.dag.yaml` is already configured with connection named `open_ai_connection`. It is recommended to use "gpt-4" model for stable performance. Using "gpt-3.5-turbo" may lead to the model getting stuck in the agent inner loop due to its suboptimal and unstable performance.
### 3. Test flow with single line data
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
```
### 4. Run with multi-line data
```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --column-mapping name='${data.name}' role='${data.role}' goals='${data.goals}' --stream
```
You can also skip providing `column-mapping` if provided data has same column name as the flow.
Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI.
## Disclaimer
LLM systems are susceptible to prompt injection, and you can gain a deeper understanding of this issue in the [technical blog](https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/). As an illustration, the PythonREPL function might execute harmful code if provided with a malicious prompt within the provided sample. Furthermore, we cannot guarantee that implementing AST validations solely within the PythonREPL function will reliably elevate the sample's security to an enterprise level. We kindly remind you to refrain from utilizing this in a production environment.
 | promptflow/examples/flows/standard/autonomous-agent/README.md/0 | {
"file_path": "promptflow/examples/flows/standard/autonomous-agent/README.md",
"repo_id": "promptflow",
"token_count": 1005
} | 15 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
text:
type: string
default: Python Hello World!
outputs:
output:
type: string
reference: ${llm.output}
nodes:
- name: hello_prompt
type: prompt
inputs:
text: ${inputs.text}
source:
type: code
path: hello.jinja2
- name: llm
type: llm
inputs:
prompt: ${hello_prompt.output}
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: '120'
source:
type: code
path: hello.jinja2
connection: open_ai_connection
api: chat
node_variants: {}
environment:
  python_requirements_txt: requirements.txt
 | promptflow/examples/flows/standard/basic-with-builtin-llm/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/flows/standard/basic-with-builtin-llm/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 313
} | 16 |
{"query": "When will my order be shipped?"}
{"query": "Can you help me find information about this T-shirt?"}
{"query": "Can you recommend me a useful prompt tool?"}
 | promptflow/examples/flows/standard/conditional-flow-for-switch/data.jsonl/0 | {
"file_path": "promptflow/examples/flows/standard/conditional-flow-for-switch/data.jsonl",
"repo_id": "promptflow",
"token_count": 45
} | 17 |
# Flow with symlinks
Users sometimes need to reference common files or folders; this sample demos how to solve that problem using symlinks.
However, symlinks have the following limitations, so it is recommended to use **additional include** instead.
Learn more: [flow-with-additional-includes](../flow-with-additional-includes/README.md)
1. For Windows users, creating symlinks requires the Administrator role by default.
2. For Windows users, directly copying a folder that contains symlinks will deep-copy the linked contents to the destination.
3. You need to update the git config to support symlinks.
**Notes**:
- For Windows user, please grant user permission to [create symbolic links without administrator role](https://learn.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/create-symbolic-links).
1. Open your `Local Security Policy`
2. Find `Local Policies` -> `User Rights Assignment` -> `Create symbolic links`
3. Add your user name to this policy, then reboot the computer.
**Attention**:
- For git operations, need to set: `git config core.symlinks true`
## Tools used in this flow
- LLM Tool
- Python Tool
## What you will learn
In this flow, you will learn
- how to use symlinks in the flow
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting Started
### 1. Create symbolic links in the flow
```bash
python ./create_symlinks.py
```
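The repository ships its own `create_symlinks.py`; as a rough, self-contained sketch of the idea (the folder and file names below are illustrative, not the actual script), creating the links boils down to:

```python
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # pretend this is the shared flow folder being referenced
    shared = Path(tmp) / "web-classification"
    shared.mkdir()
    (shared / "convert_to_dict.py").write_text("# shared tool\n")

    # link every shared file into the flow-with-symlinks folder
    flow_dir = Path(tmp) / "flow-with-symlinks"
    flow_dir.mkdir()
    for f in shared.iterdir():
        os.symlink(f, flow_dir / f.name)  # on Windows this may need extra privileges

    linked = sorted(p.name for p in flow_dir.iterdir())

print(linked)  # → ['convert_to_dict.py']
```

Each linked file then resolves back to the shared copy, so edits in the shared folder are picked up by the flow without duplication.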
### 2. Test & run the flow with symlinks
In this sample, the flow references some files in the [web-classification](../web-classification/README.md) flow and assumes you already have the required connection set up.
You can execute this flow or submit it to cloud.
#### Test flow with single line data
```bash
# test flow with default input value in flow.dag.yaml
pf flow test --flow .
# test flow with input
pf flow test --flow . --inputs url=https://www.youtube.com/watch?v=o5ZQyXaAv1g answer=Channel evidence=Url
# test node in the flow
pf flow test --flow . --node convert_to_dict --inputs classify_with_llm.output='{"category": "App", "evidence": "URL"}'
```
#### Run with multi-line data
```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream
# create run using yaml file
pf run create --file run.yml --stream
```
You can also skip providing `column-mapping` if provided data has same column name as the flow.
Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI.
#### Submit run to cloud
``` bash
# create run
pfazure run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream --subscription <your_subscription_id> -g <your_resource_group_name> -w <your_workspace_name>
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
pfazure run create --file run.yml --stream
```
| promptflow/examples/flows/standard/flow-with-symlinks/README.md/0 | {
"file_path": "promptflow/examples/flows/standard/flow-with-symlinks/README.md",
"repo_id": "promptflow",
"token_count": 870
} | 18 |
import ast
import asyncio
import logging
import os
import sys
from typing import Union, List
from promptflow import tool
from azure_open_ai import ChatLLM
from divider import Divider
from prompt import docstring_prompt, PromptLimitException
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
def get_imports(content):
tree = ast.parse(content)
import_statements = []
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for n in node.names:
import_statements.append(f"import {n.name}")
elif isinstance(node, ast.ImportFrom):
module_name = node.module
for n in node.names:
import_statements.append(f"from {module_name} import {n.name}")
return import_statements
async def async_generate_docstring(divided: List[str]):
llm = ChatLLM()
divided = list(reversed(divided))
all_divided = []
# If too many imports result in tokens exceeding the limit, please set an empty string.
modules = '' # '\n'.join(get_imports(divided[-1]))
modules_tokens = llm.count_tokens(modules)
if modules_tokens > 300:
logging.warning(f'Too many imports, the number of tokens is {modules_tokens}')
if modules_tokens > 500:
logging.warning(f'Too many imports, the number of tokens is {modules_tokens}, will set an empty string.')
modules = ''
# Divide the code into two parts if the global class/function is too long.
while len(divided):
item = divided.pop()
try:
llm.validate_tokens(llm.create_prompt(docstring_prompt(code=item, module=modules)))
except PromptLimitException as e:
logging.warning(e.message + ', will divide the code into two parts.')
divided_tmp = Divider.divide_half(item)
if len(divided_tmp) > 1:
divided.extend(list(reversed(divided_tmp)))
continue
except Exception as e:
logging.warning(e)
all_divided.append(item)
tasks = []
last_code = ''
for item in all_divided:
if Divider.has_class_or_func(item):
tasks.append(llm.async_query(docstring_prompt(last_code=last_code, code=item, module=modules)))
        else:  # If the code has no function or class, there is no need to generate a docstring.
tasks.append(asyncio.sleep(0))
last_code = item
res_doc = await asyncio.gather(*tasks)
new_code = []
for i in range(len(all_divided)):
        if isinstance(res_doc[i], str):
new_code.append(Divider.merge_doc2code(res_doc[i], all_divided[i]))
else:
new_code.append(all_divided[i])
return new_code
@tool
def generate_docstring(divided: List[str],
connection: Union[AzureOpenAIConnection, OpenAIConnection] = None,
model: str = None):
if isinstance(connection, AzureOpenAIConnection):
os.environ["OPENAI_API_KEY"] = connection.api_key
os.environ["OPENAI_API_BASE"] = connection.api_base
os.environ["OPENAI_API_VERSION"] = connection.api_version
os.environ["API_TYPE"] = connection.api_type
elif isinstance(connection, OpenAIConnection):
os.environ["OPENAI_API_KEY"] = connection.api_key
os.environ["ORGANIZATION"] = connection.organization
if model:
os.environ["MODEL"] = model
if sys.platform.startswith("win"):
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
return asyncio.run(async_generate_docstring(divided))
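For a quick check of what `get_imports` extracts, here is a standalone copy of that helper (duplicated so the snippet runs on its own) applied to a small source string:

```python
import ast


# Standalone copy of the get_imports helper above.
def get_imports(content):
    tree = ast.parse(content)
    import_statements = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for n in node.names:
                import_statements.append(f"import {n.name}")
        elif isinstance(node, ast.ImportFrom):
            for n in node.names:
                import_statements.append(f"from {node.module} import {n.name}")
    return import_statements


print(get_imports("import os\nfrom typing import List, Dict"))
# → ['import os', 'from typing import List', 'from typing import Dict']
```

Note that a multi-name `from` import is expanded into one statement per name, which is what lets the tool count and trim import tokens individually.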
| promptflow/examples/flows/standard/gen-docstring/generate_docstring_tool.py/0 | {
"file_path": "promptflow/examples/flows/standard/gen-docstring/generate_docstring_tool.py",
"repo_id": "promptflow",
"token_count": 1511
} | 19 |
<jupyter_start><jupyter_code># Setup execution path and pf client
import os
import promptflow
root = os.path.join(os.getcwd(), "../")
flow_path = os.path.join(root, "named-entity-recognition")
data_path = os.path.join(flow_path, "data.jsonl")
eval_match_rate_flow_path = os.path.join(root, "../evaluation/eval-entity-match-rate")
pf = promptflow.PFClient()
# Run flow against test data
run = pf.run(
flow=flow_path,
data=data_path,
column_mapping={
"text": "${data.text}",
"entity_type": "${data.entity_type}"
},
display_name="ner_bulk_run",
tags={"unittest": "true"},
stream=True)
# Show output of flow run
pf.get_details(run)
# Evaluate the match rate of the entity recognition result of the flow run
eval = pf.run(
flow=eval_match_rate_flow_path,
run=run,
data=data_path,
column_mapping={
"entities": "${run.outputs.entities}",
"ground_truth": "${data.results}"
},
display_name="eval_match_rate",
tags={"unittest": "true"},
stream=True)
pf.get_details(eval)
# Get metrics of the evaluation flow run
pf.get_metrics(eval)
# Visualize the flow run and evaluation run with HTML
pf.visualize([run, eval])<jupyter_output><empty_output>
 | promptflow/examples/flows/standard/named-entity-recognition/NER-test.ipynb/0 | {
"file_path": "promptflow/examples/flows/standard/named-entity-recognition/NER-test.ipynb",
"repo_id": "promptflow",
"token_count": 494
} | 20 |
from promptflow import tool
@tool
def prepare_examples():
return [
{
"url": "https://play.google.com/store/apps/details?id=com.spotify.music",
"text_content": "Spotify is a free music and podcast streaming app with millions of songs, albums, and "
"original podcasts. It also offers audiobooks, so users can enjoy thousands of stories. "
"It has a variety of features such as creating and sharing music playlists, discovering "
"new music, and listening to popular and exclusive podcasts. It also has a Premium "
"subscription option which allows users to download and listen offline, and access "
"ad-free music. It is available on all devices and has a variety of genres and artists "
"to choose from.",
"category": "App",
"evidence": "Both",
},
{
"url": "https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw",
"text_content": "NFL Sunday Ticket is a service offered by Google LLC that allows users to watch NFL "
"games on YouTube. It is available in 2023 and is subject to the terms and privacy policy "
"of Google LLC. It is also subject to YouTube's terms of use and any applicable laws.",
"category": "Channel",
"evidence": "URL",
},
{
"url": "https://arxiv.org/abs/2303.04671",
"text_content": "Visual ChatGPT is a system that enables users to interact with ChatGPT by sending and "
"receiving not only languages but also images, providing complex visual questions or "
"visual editing instructions, and providing feedback and asking for corrected results. "
"It incorporates different Visual Foundation Models and is publicly available. Experiments "
"show that Visual ChatGPT opens the door to investigating the visual roles of ChatGPT with "
"the help of Visual Foundation Models.",
"category": "Academic",
"evidence": "Text content",
},
{
"url": "https://ab.politiaromana.ro/",
"text_content": "There is no content available for this text.",
"category": "None",
"evidence": "None",
},
]
| promptflow/examples/flows/standard/web-classification/prepare_examples.py/0 | {
"file_path": "promptflow/examples/flows/standard/web-classification/prepare_examples.py",
"repo_id": "promptflow",
"token_count": 899
} | 21 |
from enum import Enum
from promptflow import tool
class UserType(str, Enum):
STUDENT = "student"
TEACHER = "teacher"
@tool
def my_tool(user_type: Enum, student_id: str = "", teacher_id: str = "") -> str:
"""This is a dummy function to support cascading inputs.
:param user_type: user type, student or teacher.
:param student_id: student id.
:param teacher_id: teacher id.
:return: id of the user.
If user_type is student, return student_id.
If user_type is teacher, return teacher_id.
"""
if user_type == UserType.STUDENT:
return student_id
elif user_type == UserType.TEACHER:
return teacher_id
else:
raise Exception("Invalid user.")
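A standalone usage sketch of the cascading-input dispatch (the promptflow `@tool` decorator is omitted so the snippet runs on its own; the IDs are made up):

```python
from enum import Enum


class UserType(str, Enum):
    STUDENT = "student"
    TEACHER = "teacher"


# Mirror of my_tool's dispatch logic above.
def my_tool(user_type, student_id: str = "", teacher_id: str = "") -> str:
    if user_type == UserType.STUDENT:
        return student_id
    if user_type == UserType.TEACHER:
        return teacher_id
    raise Exception("Invalid user.")


print(my_tool(UserType.STUDENT, student_id="s-001"))  # → s-001
print(my_tool(UserType.TEACHER, teacher_id="t-042"))  # → t-042
```

In the UI, `user_type` acts as the switch: only the matching ID input is shown and used.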
| promptflow/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_cascading_inputs.py/0 | {
"file_path": "promptflow/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_cascading_inputs.py",
"repo_id": "promptflow",
"token_count": 267
} | 22 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomStrongTypeConnection.schema.json
name: "my_custom_connection"
type: custom
custom_type: MyCustomConnection
module: my_tool_package.tools.tool_with_custom_strong_type_connection
package: my-tools-package
package_version: 0.0.5
configs:
api_base: "This is a fake api base." # String type. The api base.
secrets: # must-have
api_key: "to_replace_with_api_key" # Secret type. The api key get from "https://xxx.com".
| promptflow/examples/tools/use-cases/custom-strong-type-connection-package-tool-showcase/my_custom_connection.yml/0 | {
"file_path": "promptflow/examples/tools/use-cases/custom-strong-type-connection-package-tool-showcase/my_custom_connection.yml",
"repo_id": "promptflow",
"token_count": 171
} | 23 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
input:
type: string
default: Microsoft
outputs:
output:
type: string
reference: ${Tool_with_FilePath_Input.output}
nodes:
- name: Tool_with_FilePath_Input
type: python
source:
type: package
tool: my_tool_package.tools.tool_with_file_path_input.my_tool
inputs:
input_text: ${inputs.input}
input_file: hello_method.py
| promptflow/examples/tools/use-cases/filepath-input-tool-showcase/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/tools/use-cases/filepath-input-tool-showcase/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 177
} | 24 |
---
resources: examples/connections/azure_openai.yml, examples/flows/standard/web-classification
---
# Distribute flow as executable app
This example demos how to package a flow as an executable app.
We will use [web-classification](../../../flows/standard/web-classification/README.md) as example in this tutorial.
Please ensure that you have installed all the required dependencies. You can refer to the "Prerequisites" section in the README of the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) for a comprehensive list of prerequisites and installation instructions. And we recommend you to add a `requirements.txt` to indicate all the required dependencies for each flow.
[Pyinstaller](https://pyinstaller.org/en/stable/installation.html) is a popular tool used for converting Python applications into standalone executables. It allows you to package your Python scripts into a single executable file, which can be run on a target machine without requiring the Python interpreter to be installed.
[Streamlit](https://docs.streamlit.io/library/get-started) is an open-source Python library used for creating web applications quickly and easily. It's designed for data scientists and engineers who want to turn data scripts into shareable web apps with minimal effort.
We use Pyinstaller to package the flow and Streamlit to create custom web apps. Prior to distributing the workflow, kindly ensure that you have installed them.
In this example, we use PyInstaller version 5.13.2 and Streamlit version 1.26.0 within a Python 3.10.8 environment.
## Build a flow as executable format
Note that all dependent connections must be created before building as executable.
```bash
# create connection if not created before
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Use the command below to build a flow as executable format app:
```shell
pf flow build --source ../../../flows/standard/web-classification --output target --format executable
```
## Executable format folder structure
Exported files & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files.
- connections: the folder contains yaml files to create all related connections.
- app.py: the entry file is included as the entry point for the bundled application.
- app.spec: the spec file tells PyInstaller how to process your script.
- main.py: it will start Streamlit service and be called by the entry file.
- settings.json: a json file to store the settings of the executable application.
- build: a folder contains various log and working files.
- dist: a folder contains the executable application.
- README.md: Simple introduction of the files.
### A template script of the entry file
PyInstaller reads a spec file or Python script written by you. It analyzes your code to discover every other module and library your script needs in order to execute. Then it collects copies of all those files, including the active Python interpreter, and puts them with your script in a single folder, or optionally in a single executable file.
We provide a Python entry script named `app.py` as the entry point for the bundled app, which enables you to serve a flow folder as an endpoint.
```python
import os
import sys
from promptflow._cli._pf._connection import create_connection
from streamlit.web import cli as st_cli
from streamlit.runtime import exists
from main import start
def is_yaml_file(file_path):
_, file_extension = os.path.splitext(file_path)
return file_extension.lower() in ('.yaml', '.yml')
def create_connections(directory_path) -> None:
for root, dirs, files in os.walk(directory_path):
for file in files:
file_path = os.path.join(root, file)
if is_yaml_file(file_path):
create_connection(file_path)
if __name__ == "__main__":
create_connections(os.path.join(os.path.dirname(__file__), "connections"))
if exists():
start()
else:
main_script = os.path.join(os.path.dirname(__file__), "main.py")
sys.argv = ["streamlit", "run", main_script, "--global.developmentMode=false"]
st_cli.main(prog_name="streamlit")
```
### A template script of the spec file
The spec file tells PyInstaller how to process your script. It encodes the script names and most of the options you give to the pyinstaller command. The spec file is actually executable Python code. PyInstaller builds the app by executing the contents of the spec file.
To streamline this process, we offer a `app.spec` spec file that bundles the application into a single file. For additional information on spec files, you can refer to the [Using Spec Files](https://pyinstaller.org/en/stable/spec-files.html).
Please replace {{streamlit_runtime_interpreter_path}} with the path of streamlit runtime interpreter in your environment.
```spec
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_data_files
from PyInstaller.utils.hooks import copy_metadata
datas = [('connections', 'connections'), ('flow', 'flow'), ('settings.json', '.'), ('main.py', '.'), ('{{streamlit_runtime_interpreter_path}}', './streamlit/runtime')]
datas += collect_data_files('streamlit')
datas += copy_metadata('streamlit')
datas += collect_data_files('keyrings.alt', include_py_files=True)
datas += copy_metadata('keyrings.alt')
block_cipher = None
a = Analysis(
['app.py', 'main.py'],
pathex=[],
binaries=[],
datas=datas,
hiddenimports=['bs4'],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='app',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
```
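One way to obtain the path to substitute for `{{streamlit_runtime_interpreter_path}}` is to locate the installed package directory. The helper below is a generic sketch (run it against `"streamlit.runtime"` in your build environment; the demo here uses a stdlib package so it works everywhere):

```python
import importlib.util
import os


def package_dir(name: str) -> str:
    """Return the directory of an importable package or module."""
    spec = importlib.util.find_spec(name)
    return os.path.dirname(spec.origin)


# In your build environment, substitute this value into app.spec:
# print(package_dir("streamlit.runtime"))
print(os.path.isdir(package_dir("json")))  # demo with a stdlib package → True
```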
### The bundled application using Pyinstaller
Once you've built a flow in executable format following [Build a flow as executable format](#build-a-flow-as-executable-format),
two folders named `build` and `dist` will be created within your specified output directory, denoted as <your-output-dir>. The `build` folder houses various log and working files, while the `dist` folder contains the `app` executable application.
#### Connections
If the service involves connections, all related connections will be exported as yaml files and recreated in the executable package.
Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
type: open_ai
name: open_ai_connection
module: promptflow.connections
api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference
```
## Test the endpoint
Finally, You can distribute the bundled application `app` to other people. They can execute your program by double clicking the executable file, e.g. `app.exe` in Windows system or running the binary file, e.g. `app` in Linux system.
The development server has a built-in web page they can use to test the flow by opening 'http://localhost:8501' in a browser. If the flow is served successfully, the process will stay alive until it is killed manually.
To your users, the app is self-contained. They do not need to install any particular version of Python or any modules. They do not need to have Python installed at all.
**Note**: The executable generated is not cross-platform. One platform (e.g. Windows) packaged executable can't run on others (Mac, Linux).
## Known issues
1. Note that Python 3.10.0 contains a bug that makes it unsupported by PyInstaller. PyInstaller also does not work with beta releases of Python 3.13.
| promptflow/examples/tutorials/flow-deploy/distribute-flow-as-executable-app/README.md/0 | {
"file_path": "promptflow/examples/tutorials/flow-deploy/distribute-flow-as-executable-app/README.md",
"repo_id": "promptflow",
"token_count": 2383
} | 25 |
<svg width="512" height="512" viewBox="0 0 512 512" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_699_15212)">
<path fill-rule="evenodd" clip-rule="evenodd" d="M237 39.0408V461.693C237 469.397 228.655 474.208 221.988 470.346L151.918 429.764C130.306 417.247 117 394.164 117 369.19V148.892C117 123.917 130.306 100.834 151.918 88.3177L237 39.0408Z" fill="url(#paint0_linear_699_15212)"/>
<path d="M395.075 127.51L237 39V167.541L283.451 192.041L395.075 127.51Z" fill="url(#paint1_linear_699_15212)"/>
<path d="M395.075 127.51L237 39V167.541L283.451 192.041L395.075 127.51Z" fill="url(#paint2_linear_699_15212)"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M255.5 231.426C255.5 217.184 263.073 204.017 275.382 196.854L395 127.248V216.101C395 241.03 381.742 264.078 360.193 276.611L270.528 328.76C263.861 332.637 255.5 327.828 255.5 320.116L255.5 231.426Z" fill="url(#paint3_linear_699_15212)"/>
</g>
<defs>
<linearGradient id="paint0_linear_699_15212" x1="196.286" y1="183.041" x2="270.786" y2="92.5087" gradientUnits="userSpaceOnUse">
<stop stop-color="#3272ED"/>
<stop offset="1" stop-color="#AF7BD6"/>
</linearGradient>
<linearGradient id="paint1_linear_699_15212" x1="457.98" y1="131.313" x2="260.351" y2="133.014" gradientUnits="userSpaceOnUse">
<stop stop-color="#DA7ED0"/>
<stop offset="0.05" stop-color="#B77BD4"/>
<stop offset="0.11" stop-color="#9079DA"/>
<stop offset="0.18" stop-color="#6E77DF"/>
<stop offset="0.25" stop-color="#5175E3"/>
<stop offset="0.33" stop-color="#3973E7"/>
<stop offset="0.42" stop-color="#2772E9"/>
<stop offset="0.54" stop-color="#1A71EB"/>
<stop offset="0.813361" stop-color="#1371EC"/>
<stop offset="1" stop-color="#064495"/>
</linearGradient>
<linearGradient id="paint2_linear_699_15212" x1="210.18" y1="4.19164" x2="307.181" y2="175.949" gradientUnits="userSpaceOnUse">
<stop stop-color="#712575"/>
<stop offset="0.09" stop-color="#9A2884"/>
<stop offset="0.18" stop-color="#BF2C92"/>
<stop offset="0.27" stop-color="#DA2E9C"/>
<stop offset="0.34" stop-color="#EB30A2"/>
<stop offset="0.4" stop-color="#F131A5"/>
<stop offset="0.5" stop-color="#EC30A3"/>
<stop offset="0.61" stop-color="#DF2F9E"/>
<stop offset="0.72" stop-color="#C92D96"/>
<stop offset="0.83" stop-color="#AA2A8A"/>
<stop offset="0.95" stop-color="#83267C"/>
<stop offset="1" stop-color="#712575"/>
</linearGradient>
<linearGradient id="paint3_linear_699_15212" x1="308" y1="260.041" x2="307.043" y2="133.204" gradientUnits="userSpaceOnUse">
<stop stop-color="#1D5CD6"/>
<stop offset="1" stop-color="#787BE5"/>
</linearGradient>
<clipPath id="clip0_699_15212">
<rect width="512" height="512" fill="white"/>
</clipPath>
</defs>
</svg>
| promptflow/scripts/docs/_static/logo.svg/0 | {
"file_path": "promptflow/scripts/docs/_static/logo.svg",
"repo_id": "promptflow",
"token_count": 1155
} | 26 |
@echo off
setlocal
SET PF_INSTALLER=MSI
set MAIN_EXE=%~dp0.\pfcli.exe
"%MAIN_EXE%" pf %* | promptflow/scripts/installer/windows/scripts/pf.bat/0 | {
"file_path": "promptflow/scripts/installer/windows/scripts/pf.bat",
"repo_id": "promptflow",
"token_count": 49
} | 27 |
import io
import re
from pathlib import Path
import panflute
import pypandoc
from .readme_step import ReadmeStepsManage
def strip_comments(code):
code = str(code)
code = re.sub(r"(?m)^ *#.*\n?", "", code) # remove comments
splits = [ll.rstrip() for ll in code.splitlines() if ll.strip()] # remove empty
splits_no_interactive = [
split
for split in splits
if "interactive" not in split
and "pf flow serve" not in split
and "pf connection delete" not in split
] # remove --interactive and pf flow serve and pf export docker
text = "\n".join([ll.rstrip() for ll in splits_no_interactive])
# replacements
text = text.replace("<your_api_key>", "$aoai_api_key")
text = text.replace("<your_api_base>", "$aoai_api_endpoint")
text = text.replace("<your_subscription_id>", "$test_workspace_sub_id")
text = text.replace("<your_resource_group_name>", "$test_workspace_rg")
text = text.replace("<your_workspace_name>", "$test_workspace_name")
return text
def prepare(doc):
doc.full_text = ""
def action(elem, doc):
if isinstance(elem, panflute.CodeBlock) and "bash" in elem.classes:
doc.full_text = "\n".join([doc.full_text, strip_comments(elem.text)])
def readme_parser(filename: str):
real_filename = Path(ReadmeStepsManage.git_base_dir()) / filename
data = pypandoc.convert_file(str(real_filename), "json")
f = io.StringIO(data)
doc = panflute.load(f)
panflute.run_filter(action, prepare, doc=doc)
return doc.full_text
| promptflow/scripts/readme/ghactions_driver/readme_parse.py/0 | {
"file_path": "promptflow/scripts/readme/ghactions_driver/readme_parse.py",
"repo_id": "promptflow",
"token_count": 609
} | 28 |
{% extends "workflow_skeleton.yml.jinja2" %}
{% block steps %}
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Azure Login
uses: azure/login@v1
with:
creds: ${{ '{{' }} secrets.AZURE_CREDENTIALS }}
- name: Setup Python 3.9 environment
uses: actions/setup-python@v4
with:
python-version: "3.9"
- name: Prepare requirements
run: |
python -m pip install --upgrade pip
pip install -r ${{ '{{' }} github.workspace }}/examples/requirements.txt
pip install -r ${{ '{{' }} github.workspace }}/examples/dev_requirements.txt
- name: Create Aoai Connection
run: pf connection create -f ${{ '{{' }} github.workspace }}/examples/connections/azure_openai.yml --set api_key="${{ '{{' }} secrets.AOAI_API_KEY_TEST }}" api_base="${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }}"
- name: Test Notebook
working-directory: {{ gh_working_dir }}
run: |
papermill -k python {{ name }}.ipynb {{ name }}.output.ipynb
- name: Upload artifact
if: ${{ '{{' }} always() }}
uses: actions/upload-artifact@v3
with:
name: artifact
path: {{ gh_working_dir }}
{% endblock steps %}
 | promptflow/scripts/readme/ghactions_driver/workflow_templates/basic_workflow.yml.jinja2/0 | {
"file_path": "promptflow/scripts/readme/ghactions_driver/workflow_templates/basic_workflow.yml.jinja2",
"repo_id": "promptflow",
"token_count": 478
} | 29 |
import argparse
import base64
import os
import io
from PIL import Image
SUPPORT_IMAGE_TYPES = ["png", "jpg", "jpeg", "bmp"]
def get_image_size(image_path):
with Image.open(image_path) as img:
width, height = img.size
return width, height
def get_image_storage_size(image_path):
file_size_bytes = os.path.getsize(image_path)
file_size_mb = file_size_bytes / (1024 * 1024)
return file_size_mb
def image_to_data_url(image_path):
with open(image_path, "rb") as image_file:
# Create a BytesIO object from the image file
image_data = io.BytesIO(image_file.read())
# Open the image and resize it
img = Image.open(image_data)
if img.size != (16, 16):
img = img.resize((16, 16), Image.Resampling.LANCZOS)
# Save the resized image to a data URL
buffered = io.BytesIO()
img.save(buffered, format="PNG")
img_str = base64.b64encode(buffered.getvalue())
data_url = 'data:image/png;base64,' + img_str.decode('utf-8')
return data_url
def create_html_file(data_uri, output_path):
html_content = '<html>\n<body>\n<img src="{}" alt="My Image">\n</body>\n</html>'.format(data_uri)
with open(output_path, 'w') as file:
file.write(html_content)
def check_image_type(image_path):
file_extension = image_path.lower().split('.')[-1]
if file_extension not in SUPPORT_IMAGE_TYPES:
raise ValueError("Only png, jpg or bmp image types are supported.")
def check_image_type_and_generate_data_url(image_path):
check_image_type(image_path)
return image_to_data_url(image_path)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--image-path",
type=str,
required=True,
help="Your image input path",
)
parser.add_argument(
"--output",
"-o",
type=str,
required=True,
help="Your image output path",
)
args = parser.parse_args()
data_url = check_image_type_and_generate_data_url(args.image_path)
print("Your image data uri: \n{}".format(data_url))
create_html_file(data_url, args.output)
| promptflow/scripts/tool/convert_image_to_data_url.py/0 | {
"file_path": "promptflow/scripts/tool/convert_image_to_data_url.py",
"repo_id": "promptflow",
"token_count": 895
} | 30 |
import argparse
from utils.secret_manager import get_secret_client, upload_secret
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--tenant_id",
type=str,
required=True,
help="The tenant id of the service principal",
)
parser.add_argument(
"--client_id",
type=str,
required=True,
help="The client id of the service principal",
)
parser.add_argument(
"--client_secret",
type=str,
required=True,
help="The client secret of the service principal",
)
parser.add_argument(
"--secret_name",
type=str,
required=True,
)
parser.add_argument(
"--secret_value",
type=str,
required=True,
)
args = parser.parse_args()
secret_client = get_secret_client(
args.tenant_id, args.client_id, args.client_secret
)
upload_secret(secret_client, args.secret_name, args.secret_value)
| promptflow/scripts/tool/upload_tool_secret.py/0 | {
"file_path": "promptflow/scripts/tool/upload_tool_secret.py",
"repo_id": "promptflow",
"token_count": 448
} | 31 |
from .aoai import AzureOpenAI # noqa: F401
from .openai import OpenAI # noqa: F401
from .serpapi import SerpAPI # noqa: F401
| promptflow/src/promptflow-tools/promptflow/tools/__init__.py/0 | {
"file_path": "promptflow/src/promptflow-tools/promptflow/tools/__init__.py",
"repo_id": "promptflow",
"token_count": 48
} | 32 |
promptflow.tools.open_model_llm.OpenModelLLM.call:
name: Open Model LLM
description: Use an open model from the Azure Model catalog, deployed to an AzureML Online Endpoint for LLM Chat or Completion API calls.
icon: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==
type: custom_llm
module: promptflow.tools.open_model_llm
class_name: OpenModelLLM
function: call
inputs:
endpoint_name:
type:
- string
dynamic_list:
func_path: promptflow.tools.open_model_llm.list_endpoint_names
allow_manual_entry: true # Allow the user to clear this field
is_multi_select: false
deployment_name:
default: ''
type:
- string
dynamic_list:
func_path: promptflow.tools.open_model_llm.list_deployment_names
func_kwargs:
- name: endpoint
type:
- string
optional: true
reference: ${inputs.endpoint}
allow_manual_entry: true
is_multi_select: false
api:
enum:
- chat
- completion
type:
- string
temperature:
default: 1.0
type:
- double
max_new_tokens:
default: 500
type:
- int
top_p:
default: 1.0
advanced: true
type:
- double
model_kwargs:
default: "{}"
advanced: true
type:
- object
| promptflow/src/promptflow-tools/promptflow/tools/yamls/open_model_llm.yaml/0 | {
"file_path": "promptflow/src/promptflow-tools/promptflow/tools/yamls/open_model_llm.yaml",
"repo_id": "promptflow",
"token_count": 1312
} | 33 |