While how LLMs work may be elusive to many developers, how LLM apps work is not: they essentially involve a series of calls to external services such as LLMs, databases, and search engines, plus intermediate data processing, all glued together. LLM apps are thus merely Directed Acyclic Graphs (DAGs) of function calls. These DAGs are flows in prompt flow.
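To make this concrete, here is a minimal, hypothetical sketch of an LLM app as a DAG of function calls executed in topological order. The node names and functions are invented for illustration; prompt flow's executor is far more capable:

```python
from graphlib import TopologicalSorter

# Hypothetical node functions standing in for the real tool calls (LLM, search, ...).
def fetch(url: str) -> str:
    return "text of " + url

def summarize(text: str) -> str:
    return text.split("/")[-1]

def classify(summary: str) -> str:
    return "App" if "xbox" in summary else "None"

# The DAG: each node lists the upstream nodes it depends on.
deps = {"fetch": set(), "summarize": {"fetch"}, "classify": {"summarize"}}
funcs = {"fetch": fetch, "summarize": summarize, "classify": classify}

# Execute in topological order, feeding each node its upstream output.
outputs = {}
flow_input = "https://example.com/store/xbox"
for node in TopologicalSorter(deps).static_order():
    upstream = deps[node]
    arg = outputs[next(iter(upstream))] if upstream else flow_input
    outputs[node] = funcs[node](arg)
print(outputs["classify"])
```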
# Flows
A flow in prompt flow is a DAG of functions (we call them [tools](./concept-tools.md)). These functions/tools are connected via input/output dependencies and executed based on the topology by the prompt flow executor.
A flow is represented as a YAML file and can be visualized with our [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Here is an example:

## Flow types
Prompt flow has three flow types:
- **Standard flow** and **Chat flow**: these two are for you to develop your LLM application. The primary difference is the additional support chat flow provides for chat applications: you can define `chat_history`, `chat_input`, and `chat_output` for your flow, and prompt flow offers a chat-like experience (including conversation history) during development of the flow. It also provides a sample chat application for deployment purposes.
- **Evaluation flow** is for you to test/evaluate the quality of your LLM application (standard/chat flow). It usually runs on the outputs of a standard/chat flow and computes metrics that can be used to determine whether the standard/chat flow performs well. E.g., is the answer accurate? Is the answer fact-based?
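As a sketch of the kind of computation an evaluation flow performs, a hypothetical accuracy metric over standard-flow outputs might look like this (the function and data are invented for illustration):

```python
# Hypothetical accuracy metric over (prediction, ground truth) pairs,
# the kind of metric an evaluation flow computes on standard/chat flow outputs.
def accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    matched = sum(p == g for p, g in zip(predictions, ground_truth))
    return matched / len(ground_truth)

score = accuracy(["App", "Movie", "None"], ["App", "Movie", "PDF"])
print(score)  # 2 of 3 predictions match
```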
## When to use standard flow vs. chat flow?
As a general guideline, if you are building a chatbot that needs to maintain conversation history, try chat flow. In most other cases, standard flow should serve your needs.
Our examples should also give you an idea of when to use what:
- [examples/flows/standard](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard)
- [examples/flows/chat](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat)
## Next steps
- [Quick start](../how-to-guides/quick-start.md)
- [Initialize and test a flow](../how-to-guides/init-and-test-a-flow.md)
- [Run and evaluate a flow](../how-to-guides/run-and-evaluate-a-flow/index.md)
- [Tune prompts using variants](../how-to-guides/tune-prompts-with-variants.md)
# Develop standard flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
From this document, you can learn how to develop a standard flow by writing a flow YAML from scratch. You can
find additional information about the flow YAML schema in [Flow YAML Schema](../../reference/flow-yaml-schema-reference.md).
## Flow input data
The flow input data is the data that you want to process in your flow.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can add a flow input in the `inputs` section of the flow YAML.
```yaml
inputs:
  url:
    type: string
    default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
When unfolding the Inputs section on the authoring page, you can set and view your flow inputs, including the input schema (name and type)
and the input value.

:::
::::
For the Web Classification sample shown in the screenshot above, the flow input is a URL of string type.
For more input types in a Python tool, please refer to [Input types](../../reference/tools-reference/python-tool.md#types).
## Develop the flow using different tools
In one flow, you can consume different kinds of tools. We now support built-in tools like
[LLM](../../reference/tools-reference/llm-tool.md), [Python](../../reference/tools-reference/python-tool.md) and
[Prompt](../../reference/tools-reference/prompt-tool.md), and
third-party tools like [Serp API](../../reference/tools-reference/serp-api-tool.md),
[Vector Search](../../reference/tools-reference/vector_db_lookup_tool.md), etc.
### Add tools as needed
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can add a tool node in the `nodes` section of the flow YAML. For example, the YAML below shows how to add a Python tool node to the flow.
```yaml
nodes:
- name: fetch_text_content_from_url
  type: python
  source:
    type: code
    path: fetch_text_content_from_url.py
  inputs:
    url: ${inputs.url}
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
By selecting the tool card at the very top, you'll add a new tool node to the flow.

:::
::::
### Edit tool
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can edit the tool by simply opening the source file and making edits. For example, we provide a simple Python tool code below.
```python
from promptflow import tool


# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(input1: str) -> str:
    return 'hello ' + input1
```
We also provide an LLM tool prompt below.
```jinja
Please summarize the following text in one paragraph. 100 words.
Do not add any information that is not in the text.
Text: {{text}}
Summary:
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
When a new tool node is added to the flow, it is appended at the bottom of the flatten view with a random name by default.
At the top of each tool node card, there's a toolbar for adjusting the tool node: you can move it up or down, delete it, or rename it.
For a Python tool node, you can edit the tool code by clicking the code file. For an LLM tool node, you can edit the
tool prompt by clicking the prompt file and adjust input parameters like connection, api, etc.

:::
::::
### Create connection
Please refer to the [Create necessary connections](../quick-start.md#create-necessary-connections) for details.
## Chain your flow - link nodes together
Before linking nodes together, you need to define and expose an interface.
### Define LLM node interface
An LLM node has only one output: the completion given by the LLM provider.
As for inputs, we offer a templating strategy that can help you create parametric prompts that accept different input
values. Instead of fixed text, enclose your input name in `{{}}` so it can be replaced on the fly. We use Jinja as our
templating language. For example:
```jinja
Your task is to classify a given url into one of the following types:
Movie, App, Academic, Channel, Profile, PDF or None based on the text content information.
The classification will be based on the url, the webpage text content summary, or both.
Here are a few examples:
{% for ex in examples %}
URL: {{ex.url}}
Text content: {{ex.text_content}}
OUTPUT:
{"category": "{{ex.category}}", "evidence": "{{ex.evidence}}"}
{% endfor %}
For a given URL : {{url}}, and text content: {{text_content}}.
Classify above url to complete the category and indicate evidence.
OUTPUT:
```
### Define Python node interface
A Python node might have multiple inputs and outputs. Define the inputs and outputs as shown below.
If you have multiple outputs, remember to return a dictionary so that downstream nodes can reference each key separately.
For example:
```python
import json

from promptflow import tool


@tool
def convert_to_dict(input_str: str, input_str2: str) -> dict:
    try:
        print(input_str2)
        return json.loads(input_str)
    except Exception as e:
        print("input is not valid, error: {}".format(e))
        return {"category": "None", "evidence": "None"}
```
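To see why the dictionary return matters, here is a stand-alone version of the same parsing logic (without the promptflow decorator, so it runs anywhere) with a downstream consumer picking out individual keys:

```python
import json

def convert_to_dict(input_str: str) -> dict:
    # Same parsing logic as the tool above, minus the @tool decorator.
    try:
        return json.loads(input_str)
    except Exception:
        return {"category": "None", "evidence": "None"}

result = convert_to_dict('{"category": "App", "evidence": "Both"}')
# A downstream node referencing ${convert_to_dict.output.category} would receive:
print(result["category"])
# Invalid JSON falls back to the default dictionary:
print(convert_to_dict("not json")["category"])
```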
### Link nodes together
After the interface is defined, you can use:
- `${inputs.key}` to link with a flow input.
- `${upstream_node_name.output}` to link with a single-output upstream node.
- `${upstream_node_name.output.key}` to link with a multi-output upstream node.
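A rough, hypothetical sketch of how such reference strings could be parsed (not prompt flow's actual implementation) clarifies the three forms:

```python
import re

def parse_reference(ref: str) -> list[str]:
    # Strip the ${...} wrapper and split on dots:
    #   ${inputs.url}                    -> ['inputs', 'url']
    #   ${node.output}                   -> ['node', 'output']
    #   ${node.output.key}               -> ['node', 'output', 'key']
    m = re.fullmatch(r"\$\{([\w.]+)\}", ref)
    if not m:
        raise ValueError(f"not a reference: {ref}")
    return m.group(1).split(".")

print(parse_reference("${inputs.url}"))
print(parse_reference("${prepare_examples.output}"))
print(parse_reference("${convert_to_dict.output.evidence}"))
```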
Below are common scenarios for linking nodes together.
### Scenario 1 - Link LLM node with flow input and single-output upstream node
After you add a new LLM node and edit the prompt file as in [Define LLM node interface](#define-llm-node-interface),
three inputs called `url`, `examples` and `text_content` are created in the inputs section.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can link the LLM node's `url` input with the flow input by `${inputs.url}`.
And you can link `examples` to the upstream `prepare_examples` node and `text_content` to the `summarize_text_content` node
by `${prepare_examples.output}` and `${summarize_text_content.output}` respectively.
```yaml
- name: classify_with_llm
  type: llm
  source:
    type: code
    path: classify_with_llm.jinja2
  inputs:
    deployment_name: text-davinci-003
    suffix: ""
    max_tokens: 128
    temperature: 0.2
    top_p: 1
    echo: false
    presence_penalty: 0
    frequency_penalty: 0
    best_of: 1
    url: ${inputs.url} # Link with flow input
    examples: ${prepare_examples.output} # Link LLM node with single-output upstream node
    text_content: ${summarize_text_content.output} # Link LLM node with single-output upstream node
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
In the value drop-down, select `${inputs.url}`, `${prepare_examples.output}` and `${summarize_text_content.output}`, then
you'll see in the graph view that the newly created LLM node is linked to the flow input, upstream `prepare_examples` and `summarize_text_content` node.

:::
::::
When running the flow, the `url` input of the node will be replaced by the flow input on the fly, and the `examples` and
`text_content` inputs will be replaced by the `prepare_examples` and `summarize_text_content` node outputs on the fly.
### Scenario 2 - Link LLM node with multi-output upstream node
Suppose we want to link the newly created LLM node with the `convert_to_dict` Python node, whose output is a dictionary with two keys: `category` and `evidence`.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can link `text_content` to the `evidence` output of the upstream `convert_to_dict` node by `${convert_to_dict.output.evidence}` like below:
```yaml
- name: classify_with_llm
  type: llm
  source:
    type: code
    path: classify_with_llm.jinja2
  inputs:
    deployment_name: text-davinci-003
    suffix: ""
    max_tokens: 128
    temperature: 0.2
    top_p: 1
    echo: false
    presence_penalty: 0
    frequency_penalty: 0
    best_of: 1
    text_content: ${convert_to_dict.output.evidence} # Link LLM node with multi-output upstream node
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
In the value drop-down, select `${convert_to_dict.output}`, then manually append `evidence`. You'll then see in the graph
view that the newly created LLM node is linked to the upstream `convert_to_dict` node.

:::
::::
When running the flow, the `text_content` input of the node will be replaced by the `evidence` value from the `convert_to_dict` node's output dictionary on the fly.
### Scenario 3 - Link Python node with upstream node/flow input
After you add a new Python node and edit the code file as in [Define Python node interface](#define-python-node-interface),
two inputs called `input_str` and `input_str2` are created in the inputs section. The linkage is the same as for an LLM node:
use `${inputs.input_name}` to link with a flow input or `${upstream_node_name.output}` to link with an upstream node.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```yaml
- name: prepare_examples
  type: python
  source:
    type: code
    path: prepare_examples.py
  inputs:
    input_str: ${inputs.url} # Link Python node with flow input
    input_str2: ${fetch_text_content_from_url.output} # Link Python node with single-output upstream node
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension

:::
::::
When running the flow, the `input_str` input of the node will be replaced by the flow input on the fly, and the `input_str2`
input will be replaced by the `fetch_text_content_from_url` node output on the fly.
## Set flow output
When the flow is complicated, instead of checking outputs on each node, you can set flow output and check outputs of
multiple nodes in one place. Moreover, flow output helps:
- Check bulk test results in one single table.
- Define evaluation interface mapping.
- Set deployment response schema.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can add flow outputs in the `outputs` section of the flow YAML. The linkage is the same as for an LLM node:
use `${convert_to_dict.output.category}` to link the `category` flow output with the `category` value of the upstream node
`convert_to_dict`.
```yaml
outputs:
  category:
    type: string
    reference: ${convert_to_dict.output.category}
  evidence:
    type: string
    reference: ${convert_to_dict.output.evidence}
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
First define the flow output schema, then select in the drop-down the node whose output you want to set as a flow output.
Since `convert_to_dict` has a dictionary output with two keys, `category` and `evidence`, you need to manually append
`category` and `evidence` to each reference. Then run the flow; after a while, you can check the flow output in a table.

:::
::::
# Initialize and test a flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
From this document, you can learn how to initialize a flow and test it.
## Initialize flow
Initializing a flow creates a flow folder with the code/prompts and the YAML definition of the flow.
### Initialize flow from scratch
Promptflow can [create three types of flow folder](https://promptflow.azurewebsites.net/concepts/concept-flows.html#flow-types):
- standard: Basic structure of flow folder.
- chat: Chat flow is designed for conversational application development, building upon the capabilities of standard flow and providing enhanced support for chat inputs/outputs and chat history management.
- evaluation: Evaluation flows are special types of flows that assess how well the outputs of a flow align with specific criteria and goals.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```bash
# Create a flow
pf flow init --flow <flow-name>
# Create a chat flow
pf flow init --flow <flow-name> --type chat
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Use VS Code explorer pane > directory icon > right click > the "New flow in this directory" action. Follow the popped out dialog to initialize your flow in the target folder.

Alternatively, you can use the "Create new flow" action on the prompt flow pane > quick access section to create a new flow

:::
::::
Structure of flow folder:
- **flow.dag.yaml**: The flow definition with inputs/outputs, nodes, tools and variants for authoring purpose.
- **.promptflow/flow.tools.json**: It contains the tool metadata referenced in `flow.dag.yaml`.
- **Source code files (.py, .jinja2)**: User managed, the code scripts referenced by tools.
- **requirements.txt**: Python package dependencies for this flow.

### Create from existing code
Promptflow CLI can generate the YAML definitions needed for prompt flow from an existing folder, using the tool script and prompt templates.
Pass the path of the tool script to `--entry`, and pass the prompt template dict to `--prompt-template`, where the key is the input name of the tool and the value is the path to the prompt template.
```bash
# Create a flow in existing folder
pf flow init --flow <flow-name> --entry <tool-script-path> --function <tool-function-name> --prompt-template <prompt-param-name>=<prompt-tempate-path>
```
Take [customer-intent-extraction](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/customer-intent-extraction) for example, which demonstrates how to convert langchain code into a prompt flow.

In this case, the promptflow CLI generates `flow.dag.yaml`, `.promptflow/flow.tools.json` and `extract_intent_tool.py`; the last is a Python tool in the flow.

## Test a flow
:::{admonition} Note
Testing a flow will NOT create a batch run record, so you can't use commands like `pf run show-details` to get the run information. If you want to persist the run record, see [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md)
:::
Promptflow also provides ways to test the initialized flow or a flow node, helping you quickly test your flow.
### Visual editor in VS Code for prompt flow
::::{tab-set}
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Open the flow.dag.yaml file of your flow. On the top of the yaml editor you can find the "Visual editor" action. Use it to open the Visual editor with GUI support.

:::
::::
### Test flow
You can use the CLI or the VS Code extension to test the flow.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```bash
# Test flow
pf flow test --flow <flow-name>
# Test flow with specified variant
pf flow test --flow <flow-name> --variant '${<node-name>.<variant-name>}'
```
The log and result of flow test will be displayed in the terminal.

Promptflow CLI will generate test logs and outputs in `.promptflow`:
- **flow.detail.json**: Detailed info of the flow test, including the result of each node.
- **flow.log**: The log of the flow test.
- **flow.output.json**: The result of the flow test.

:::
:::{tab-item} SDK
:sync: SDK
The return value of `test` function is the flow outputs.
```python
from promptflow import PFClient
pf_client = PFClient()
# Test flow
inputs = {"<flow_input_name>": "<flow_input_value>"} # The inputs of the flow.
flow_result = pf_client.test(flow="<flow_folder_path>", inputs=inputs)
print(f"Flow outputs: {flow_result}")
```
The log and result of flow test will be displayed in the terminal.

Promptflow CLI will generate test logs and outputs in `.promptflow`:
- **flow.detail.json**: Detailed info of the flow test, including the result of each node.
- **flow.log**: The log of the flow test.
- **flow.output.json**: The result of the flow test.

:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
You can use the action either on the default yaml editor or the visual editor to trigger flow test. See the snapshots below:


:::
::::
### Test a single node in the flow
You can test a single Python node in the flow. It will use the data you provide, or the node's default values, as input, and execute only the specified node with that input.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can execute this command to test a flow node.
```bash
# Test flow node
pf flow test --flow <flow-name> --node <node-name>
```
The log and result of the flow node test will be displayed in the terminal, and the details of the node test will be generated to `.promptflow/flow-<node-name>.node.detail.json`.
:::
:::{tab-item} SDK
:sync: SDK
You can execute the snippet below to test a single node. The return value of the `test` function is the node outputs.
```python
from promptflow import PFClient

pf_client = PFClient()

# Test a node in the flow
inputs = {"<node_input_name>": "<node_input_value>"}  # The inputs of the node.
node_result = pf_client.test(flow="<flow_folder_path>", inputs=inputs, node="<node_name>")
print(f"Node outputs: {node_result}")
```
The log and result of the flow node test will be displayed in the terminal, and the details of the node test will be generated to `.promptflow/flow-<node-name>.node.detail.json`.
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
The prompt flow extension provides inline actions in both the default YAML editor and the visual editor to trigger single node runs.


:::
::::
### Test with interactive mode
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Promptflow CLI provides a way to start an interactive chat session for a chat flow. Use the command below to start an interactive chat session:
```bash
# Chat in the flow
pf flow test --flow <flow-name> --interactive
```
After executing this command, you can interact with the chat flow in the terminal. Press **Enter** to send a message to the chat flow, and quit with **Ctrl+C**.
Promptflow CLI distinguishes the outputs of different roles by color: <span style="color:Green">User input</span>, <span style="color:Gold">Bot output</span>, <span style="color:Blue">Flow script output</span>, <span style="color:Cyan">Node output</span>.
The following uses this [chat flow](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/basic-chat) to show interactive mode.

:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
If a flow contains chat inputs or chat outputs in the flow interface, there will be a selection when triggering flow test. You can select the interactive mode if you want to.


:::
::::
When the [LLM node](https://promptflow.azurewebsites.net/tools-reference/llm-tool.html) in the chat flow is connected to the flow output, the promptflow SDK streams the results of the LLM node.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The flow result will be streamed in the terminal as shown below.

:::
:::{tab-item} SDK
:sync: SDK
For a streaming LLM node, the corresponding value returned by the `test` function is a generator; you can consume the result this way:
```python
from promptflow import PFClient

pf_client = PFClient()

# Test flow
inputs = {"<flow_input_name>": "<flow_input_value>"}  # The inputs of the flow.
flow_result = pf_client.test(flow="<flow_folder_path>", inputs=inputs)
for item in flow_result["<LLM_node_output_name>"]:
    print(item)
```
:::
::::
### Debug a single node in the flow
You can debug a single Python node in VS Code with the extension.
::::{tab-set}
:::{tab-item} VS Code Extension
:sync: VS Code Extension
The extension supports breakpoints and debugging for the Python steps in your flow. Just set breakpoints and use the debug actions on either the default YAML editor or the visual editor.


:::
::::
## Next steps
- [Add conditional control to a flow](./add-conditional-control-to-a-flow.md)
# pfazure
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
Manage prompt flow resources on Azure with the prompt flow CLI.
| Command | Description |
| --- | --- |
| [pfazure flow](#pfazure-flow) | Manage flows. |
| [pfazure run](#pfazure-run) | Manage runs. |
## pfazure flow
Manage flows.
| Command | Description |
| --- | --- |
| [pfazure flow create](#pfazure-flow-create) | Create a flow. |
| [pfazure flow list](#pfazure-flow-list) | List flows in a workspace. |
### pfazure flow create
Create a flow in Azure AI from a local flow folder.
```bash
pfazure flow create [--flow]
[--set]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--flow`
Local path to the flow directory.
`--set`
Update an object by specifying a property path and value to set.
- `display_name`: Flow display name that will be created in remote. Default to be flow folder name + timestamp if not specified.
- `type`: Flow type. Default to be "standard" if not specified. Available types are: "standard", "evaluation", "chat".
- `description`: Flow description. e.g. "--set description=\<description\>."
- `tags`: Flow tags. e.g. "--set tags.key1=value1 tags.key2=value2."
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure flow list
List remote flows on Azure AI.
```bash
pfazure flow list [--max-results]
[--include-others]
[--type]
[--output]
[--archived-only]
[--include-archived]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--max-results -r`
Max number of results to return. Default is 50, upper bound is 100.
`--include-others`
Include flows created by other owners. By default only flows created by the current user are returned.
`--type`
Filter flows by type. Available types are: "standard", "evaluation", "chat".
`--archived-only`
List archived flows only.
`--include-archived`
List archived flows and active flows.
`--output -o`
Output format. Allowed values: `json`, `table`. Default: `json`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
## pfazure run
Manage prompt flow runs.
| Command | Description |
| --- | --- |
| [pfazure run create](#pfazure-run-create) | Create a run. |
| [pfazure run list](#pfazure-run-list) | List runs in a workspace. |
| [pfazure run show](#pfazure-run-show) | Show details for a run. |
| [pfazure run stream](#pfazure-run-stream) | Stream run logs to the console. |
| [pfazure run show-details](#pfazure-run-show-details) | Show a run details. |
| [pfazure run show-metrics](#pfazure-run-show-metrics) | Show run metrics. |
| [pfazure run visualize](#pfazure-run-visualize) | Visualize a run. |
| [pfazure run archive](#pfazure-run-archive) | Archive a run. |
| [pfazure run restore](#pfazure-run-restore) | Restore a run. |
| [pfazure run update](#pfazure-run-update) | Update a run. |
| [pfazure run download](#pfazure-run-download) | Download a run. |
### pfazure run create
Create a run.
```bash
pfazure run create [--file]
[--flow]
[--data]
[--column-mapping]
[--run]
[--variant]
[--stream]
[--environment-variables]
[--connections]
[--set]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--file -f`
Local path to the YAML file containing the prompt flow run specification; can be overwritten by other parameters. Reference [here](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json) for YAML schema.
`--flow`
Local path to the flow directory.
`--data`
Local path to the data file or remote data. e.g. azureml:name:version.
`--column-mapping`
Inputs column mapping, use `${data.xx}` to refer to data columns, use `${run.inputs.xx}` to refer to referenced run's data columns, and `${run.outputs.xx}` to refer to run outputs columns.
`--run`
Referenced flow run name. For example, you can run an evaluation flow against an existing run. For example, "pfazure run create --flow evaluation_flow_dir --run existing_bulk_run --column-mapping url='${data.url}'".
`--variant`
Node & variant name in format of `${node_name.variant_name}`.
`--stream -s`
Indicates whether to stream the run's logs to the console.
default value: False
`--environment-variables`
Environment variables to set by specifying a property path and value. Example:
`--environment-variable key1='${my_connection.api_key}' key2='value2'`. The value reference
to connection keys will be resolved to the actual value, and all environment variables
specified will be set into os.environ.
`--connections`
Overwrite node level connections with provided value.
Example: `--connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo`
`--set`
Update an object by specifying a property path and value to set.
Example: `--set property1.property2=<value>`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
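The `--column-mapping` behavior described above can be sketched as a simple substitution over one data row. This is a hypothetical illustration of the resolution rules for `${data.xx}` and `${run.outputs.xx}`, not the actual implementation (and it omits `${run.inputs.xx}` for brevity):

```python
import re

def resolve_mapping(mapping: dict, data_row: dict, run_outputs: dict) -> dict:
    # Resolve ${data.xx} against the input data row and ${run.outputs.xx}
    # against the referenced run's output columns.
    sources = {"data": data_row, "run.outputs": run_outputs}

    def lookup(ref: str) -> str:
        prefix, _, key = ref.rpartition(".")
        return str(sources[prefix][key])

    return {k: re.sub(r"\$\{([\w.]+)\}", lambda m: lookup(m.group(1)), v)
            for k, v in mapping.items()}

row = {"url": "https://example.com"}
outputs = {"category": "App"}
resolved = resolve_mapping(
    {"url": "${data.url}", "prediction": "${run.outputs.category}"}, row, outputs)
print(resolved)
```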
### pfazure run list
List runs in a workspace.
```bash
pfazure run list [--archived-only]
[--include-archived]
[--max-results]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--archived-only`
List archived runs only.
default value: False
`--include-archived`
List archived runs and active runs.
default value: False
`--max-results -r`
Max number of results to return. Default is 50, upper bound is 100.
default value: 50
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show
Show details for a run.
```bash
pfazure run show --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run stream
Stream run logs to the console.
```bash
pfazure run stream --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show-details
Show a run details.
```bash
pfazure run show-details --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show-metrics
Show run metrics.
```bash
pfazure run show-metrics --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run visualize
Visualize a run.
```bash
pfazure run visualize --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run archive
Archive a run.
```bash
pfazure run archive --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run restore
Restore a run.
```bash
pfazure run restore --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run update
Update a run's metadata, such as `display name`, `description` and `tags`.
```bash
pfazure run update --name
[--set display_name="<value>" description="<value>" tags.key="<value>"]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Examples
Set `display name`, `description` and `tags`:
```bash
pfazure run update --name <run_name> --set display_name="<value>" description="<value>" tags.key="<value>"
```
#### Parameters
`--name -n`
Name of the run.
`--set`
Set meta information of the run, like `display_name`, `description` or `tags`. Example: `--set <key>=<value>`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run download
Download a run's metadata, such as `input`, `output`, `snapshot` and `artifact`. After the download finishes, you can use `pf run create --source <run-info-local-folder>` to register this run as a local run record; then you can use commands like `pf run show/visualize` to inspect it just like a run created from a local flow.
```bash
pfazure run download --name
[--output]
[--overwrite]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Examples
Download run data to a local folder:
```bash
pfazure run download --name <name> --output <output-folder-path>
```
#### Parameters
`--name -n`
Name of the run.
`--output -o`
Output folder path to store the downloaded run data. Defaults to `~/.promptflow/.runs` if not specified.
`--overwrite`
Overwrite the existing run data if the output folder already exists. Defaults to `False` if not specified.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.

---
*Source: `promptflow/docs/reference/pfazure-command-reference.md`*
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
is_chat_input: true
default: What is ChatGPT?
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
nodes:
- inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: "256"
temperature: "0.7"
chat_history: ${inputs.chat_history}
question: ${inputs.question}
name: chat
type: llm
source:
type: code
path: chat.jinja2
api: chat
connection: open_ai_connection
node_variants: {}
environment:
python_requirements_txt: requirements.txt

---
*Source: `promptflow/examples/flows/chat/basic-chat/flow.dag.yaml`*
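In the flow definition above, references such as `${inputs.question}` and `${chat.output}` wire flow inputs, nodes, and outputs together, while plain values like `gpt-35-turbo` are literals. A minimal sketch of telling the two apart (the regex and function names here are illustrative, not promptflow's actual implementation):

```python
import re

# Matches references like ${inputs.question} or ${chat.output}
REF_PATTERN = re.compile(r"^\$\{(\w+)\.(\w+)\}$")

def parse_reference(value: str):
    """Split a '${source.field}' reference into its two parts.

    Returns (source, field), or None when the value is a plain literal.
    """
    match = REF_PATTERN.match(value)
    if match is None:
        return None  # plain literal like "gpt-35-turbo"
    return match.group(1), match.group(2)

print(parse_reference("${inputs.question}"))  # ('inputs', 'question')
print(parse_reference("${chat.output}"))      # ('chat', 'output')
print(parse_reference("gpt-35-turbo"))        # None
```

With this split, a runner can resolve `inputs.*` references from the flow inputs and `<node>.output` references from upstream node results.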
{
"package": {},
"code": {
"setup_env.py": {
"type": "python",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection",
"OpenAIConnection"
]
},
"config": {
"type": [
"object"
]
}
},
"source": "setup_env.py",
"function": "setup_env"
},
"download_tool.py": {
"type": "python",
"inputs": {
"url": {
"type": [
"string"
]
},
"env_ready_signal": {
"type": [
"string"
]
}
},
"source": "download_tool.py",
"function": "download_tool"
},
"build_index_tool.py": {
"type": "python",
"inputs": {
"pdf_path": {
"type": [
"string"
]
}
},
"source": "build_index_tool.py",
"function": "build_index_tool"
},
"find_context_tool.py": {
"type": "python",
"inputs": {
"question": {
"type": [
"string"
]
},
"index_path": {
"type": [
"string"
]
}
},
"source": "find_context_tool.py",
"function": "find_context_tool"
},
"qna_tool.py": {
"type": "python",
"inputs": {
"prompt": {
"type": [
"string"
]
},
"history": {
"type": [
"list"
]
}
},
"source": "qna_tool.py",
"function": "qna_tool"
},
"rewrite_question_tool.py": {
"type": "python",
"inputs": {
"question": {
"type": [
"string"
]
},
"history": {
"type": [
"list"
]
},
"env_ready_signal": {
"type": [
"string"
]
}
},
"source": "rewrite_question_tool.py",
"function": "rewrite_question_tool"
}
}
}

---
*Source: `promptflow/examples/flows/chat/chat-with-pdf/.promptflow/flow.tools.json`*
# Chat with PDF using Azure AI

This is a simple flow that allows you to ask questions about the content of a PDF file and get answers.
You can run the flow with a URL to a PDF file and a question as arguments.
Once launched, it downloads the PDF and builds an index of the content. Then, when you ask a question, it looks up the index to retrieve relevant content and posts the question along with that content to an OpenAI chat model (gpt-3.5-turbo or gpt-4) to get an answer.

## 0. Install dependencies

```python
%pip install -r requirements.txt
```

## 1. Connect to Azure Machine Learning Workspace

```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

try:
    credential = DefaultAzureCredential()
    # Check if the given credential can get a token successfully.
    credential.get_token("https://management.azure.com/.default")
except Exception:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()
```

### 1.1 Get familiar with the primary interface - PFClient

```python
import promptflow.azure as azure

# Get a handle to the workspace
pf = azure.PFClient.from_config(credential=credential)
```

### 1.2 Create necessary connections

Connection in prompt flow is for managing settings of your application behaviors, including how to talk to different services (Azure OpenAI, for example).
Prepare your Azure OpenAI resource following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Please go to the [workspace portal](https://ml.azure.com/), click `Prompt flow` -> `Connections` -> `Create`, then follow the instructions to create your own connections. Learn more about [connections](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-connections?view=azureml-api-2).

```python
conn_name = "open_ai_connection"

# TODO integrate with azure.ai sdk
# currently we only support creating connections in the Azure ML Studio UI
# raise Exception(f"Please create {conn_name} connection in Azure ML Studio.")
```

## 2. Run a flow with setting (context size 2K)

```python
flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"

config_2k_context = {
    "EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
    "CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
    "PROMPT_TOKEN_LIMIT": 2000,
    "MAX_COMPLETION_TOKENS": 256,
    "VERBOSE": True,
    "CHUNK_SIZE": 1024,
    "CHUNK_OVERLAP": 32,
}

column_mapping = {
    "question": "${data.question}",
    "pdf_url": "${data.pdf_url}",
    "chat_history": "${data.chat_history}",
    "config": config_2k_context,
}
run_2k_context = pf.run(
    flow=flow_path,
    data=data_path,
    column_mapping=column_mapping,
    display_name="chat_with_pdf_2k_context",
    tags={"chat_with_pdf": "", "1st_round": ""},
)
pf.stream(run_2k_context)

print(run_2k_context)

detail = pf.get_details(run_2k_context)
detail
```

## 3. Evaluate the "groundedness"

The [eval-groundedness flow](../../evaluation/eval-groundedness/) uses a ChatGPT/GPT4 model to grade the answers generated by the chat-with-pdf flow.

```python
eval_groundedness_flow_path = "../../evaluation/eval-groundedness/"
eval_groundedness_2k_context = pf.run(
    flow=eval_groundedness_flow_path,
    run=run_2k_context,
    column_mapping={
        "question": "${run.inputs.question}",
        "answer": "${run.outputs.answer}",
        "context": "${run.outputs.context}",
    },
    display_name="eval_groundedness_2k_context",
)
pf.stream(eval_groundedness_2k_context)

print(eval_groundedness_2k_context)
```

## 4. Try a different configuration and evaluate again - experimentation

```python
flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"

config_3k_context = {
    "EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
    "CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
    "PROMPT_TOKEN_LIMIT": 3000,  # different from 2k context
    "MAX_COMPLETION_TOKENS": 256,
    "VERBOSE": True,
    "CHUNK_SIZE": 1024,
    "CHUNK_OVERLAP": 32,
}

column_mapping = {
    "question": "${data.question}",
    "pdf_url": "${data.pdf_url}",
    "chat_history": "${data.chat_history}",
    "config": config_3k_context,
}
run_3k_context = pf.run(
    flow=flow_path,
    data=data_path,
    column_mapping=column_mapping,
    display_name="chat_with_pdf_3k_context",
    tags={"chat_with_pdf": "", "2nd_round": ""},
)
pf.stream(run_3k_context)

print(run_3k_context)

detail = pf.get_details(run_3k_context)
detail

eval_groundedness_3k_context = pf.run(
    flow=eval_groundedness_flow_path,
    run=run_3k_context,
    column_mapping={
        "question": "${run.inputs.question}",
        "answer": "${run.outputs.answer}",
        "context": "${run.outputs.context}",
    },
    display_name="eval_groundedness_3k_context",
)
pf.stream(eval_groundedness_3k_context)

print(eval_groundedness_3k_context)

pf.get_details(eval_groundedness_3k_context)

pf.visualize([eval_groundedness_2k_context, eval_groundedness_3k_context])
```

---
*Source: `promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf-azure.ipynb`*
You're a smart assistant that can answer questions based on the provided context and the previous conversation history between you and the human.
Use the context to answer the question at the end. Note that the context is ordered by importance: e.g. context #1 is more important than #2.
Try as much as you can to answer based on the provided context; if you cannot derive the answer from the context, say you don't know.
Answer in the same language as the question.
# Context
{% for i, c in context %}
## Context #{{i+1}}
{{c.text}}
{% endfor %}
# Question
{{question}}

---
*Source: `promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna_prompt.md`*
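The loop `{% for i, c in context %}` in the template above expects `context` to be a sequence of (index, chunk) pairs where each chunk exposes a `.text` attribute; since the template renders `{{i+1}}`, a zero-based index produces `## Context #1`, `## Context #2`, and so on. A hypothetical sketch of preparing that input (the `Chunk` class and helper name are illustrative, not the flow's actual code):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str

def prepare_context(snippets):
    """Pair each retrieved snippet with its zero-based rank so the
    template can render '## Context #1', '## Context #2', ..."""
    return list(enumerate(Chunk(s) for s in snippets))

context = prepare_context(["BERT is a language model.", "It was released in 2018."])
print(context[0][0], context[0][1].text)  # 0 BERT is a language model.
```

The ordering matters: the template tells the model that earlier contexts are more important, so snippets should be passed in descending relevance order.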
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: eval_groundedness_default_20230820_200152_009000
flow: ../../evaluation/eval-groundedness
run: chat_with_pdf_default_20230820_162219_559000
column_mapping:
question: ${run.inputs.question}
answer: ${run.outputs.answer}
context: ${run.outputs.context}

---
*Source: `promptflow/examples/flows/chat/chat-with-pdf/eval_run.yaml`*
system:
You are an AI assistant reading the transcript of a conversation between an AI and a human. Given an input question and the conversation history, infer the user's real intent.
The conversation history is provided in case it supplies context (e.g. "What is this?" where "this" is defined in a previous turn).
Return the output as the query to use for the next round user message.
user:
EXAMPLE
Conversation history:
Human: I want to find the best restaurants nearby, could you recommend some?
AI: Sure, I can help you with that. Here are some of the best restaurants nearby: Rock Bar.
Human: How do I get to Rock Bar?
Output: directions to Rock Bar
END OF EXAMPLE
EXAMPLE
Conversation history:
Human: I want to find the best restaurants nearby, could you recommend some?
AI: Sure, I can help you with that. Here are some of the best restaurants nearby: Rock Bar.
Human: How do I get to Rock Bar?
AI: To get to Rock Bar, you need to go to the 52nd floor of the Park A. You can take the subway to Station A and walk for about 8 minutes from exit A53. Alternatively, you can take the train to S Station and walk for about 12 minutes from the south exit3.
Human: Show me more restaurants.
Output: best restaurants nearby
END OF EXAMPLE
Conversation history (for reference only):
{% for item in chat_history %}
Human: {{item.inputs.question}}
AI: {{item.outputs.answer}}
{% endfor %}
Human: {{question}}
Output:

---
*Source: `promptflow/examples/flows/chat/chat-with-wikipedia/extract_query_from_question.jinja2`*
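The template above iterates `chat_history` via `item.inputs.question` and `item.outputs.answer`, so each history entry needs nested `inputs`/`outputs` records (Jinja2 attribute lookup falls back to dict item access). A minimal sketch of maintaining that structure — the helper name is illustrative:

```python
def append_turn(chat_history, question, answer):
    """Record one Q/A turn in the shape the template reads:
    item.inputs.question and item.outputs.answer."""
    chat_history.append(
        {
            "inputs": {"question": question},
            "outputs": {"answer": answer},
        }
    )
    return chat_history

history = []
append_turn(history, "Who wrote Hamlet?", "William Shakespeare.")
print(history[0]["inputs"]["question"])  # Who wrote Hamlet?
```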
# Basic Eval
This example shows how to create a basic evaluation flow.
Tools used in this flow:
- `python` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a point-based evaluation flow, where you can calculate point-wise metrics.
- the way to log metrics: use `from promptflow import log_metric`
- see the file [aggregate.py](aggregate.py).
### 1. Test flow with single line data
Testing flow/node:
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs groundtruth=ABC prediction=ABC
# test node with inputs
pf flow test --flow . --node line_process --inputs groundtruth=ABC prediction=ABC
```
### 2. Create flow run with multi-line data
Create a run that evaluates your flow against multiple lines of data:
```bash
pf run create --flow . --data ./data.jsonl --column-mapping groundtruth='${data.groundtruth}' prediction='${data.prediction}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Reference [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.

---
*Source: `promptflow/examples/flows/evaluation/eval-basic/README.md`*
from promptflow import tool
@tool
def grade(groundtruth: str, prediction: str):
return "Correct" if groundtruth.lower() == prediction.lower() else "Incorrect"

---
*Source: `promptflow/examples/flows/evaluation/eval-classification-accuracy/grade.py`*
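The `grade` tool above returns `"Correct"` or `"Incorrect"` per line; downstream, an aggregation node typically turns those per-line grades into a single accuracy metric. A minimal sketch of such aggregation, without the promptflow decorator (illustrative, not the flow's actual aggregation code):

```python
def accuracy(grades):
    """Fraction of lines graded 'Correct' (case-insensitive)."""
    if not grades:
        return 0.0
    correct = sum(1 for g in grades if g.lower() == "correct")
    return round(correct / len(grades), 2)

print(accuracy(["Correct", "Incorrect", "Correct", "Correct"]))  # 0.75
```

In a real evaluation flow this value would be reported via `log_metric` so it shows up alongside the run.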
system:
You are an AI assistant. You will be given the definition of an evaluation metric for assessing the quality of an answer in a question-answering task. Your job is to compute an accurate evaluation score using the provided evaluation metric.
user:
Fluency measures the quality of individual sentences in the answer, and whether they are well-written and grammatically correct. Consider the quality of individual sentences when evaluating fluency. Given the question and answer, score the fluency of the answer between one and five stars using the following rating scale:
One star: the answer completely lacks fluency
Two stars: the answer mostly lacks fluency
Three stars: the answer is partially fluent
Four stars: the answer is mostly fluent
Five stars: the answer has perfect fluency
This rating value should always be an integer between 1 and 5. So the rating produced should be 1 or 2 or 3 or 4 or 5.
question: What did you have for breakfast today?
answer: Breakfast today, me eating cereal and orange juice very good.
stars: 1
question: How do you feel when you travel alone?
answer: Alone travel, nervous, but excited also. I feel adventure and like its time.
stars: 2
question: When was the last time you went on a family vacation?
answer: Last family vacation, it took place in last summer. We traveled to a beach destination, very fun.
stars: 3
question: What is your favorite thing about your job?
answer: My favorite aspect of my job is the chance to interact with diverse people. I am constantly learning from their experiences and stories.
stars: 4
question: Can you describe your morning routine?
answer: Every morning, I wake up at 6 am, drink a glass of water, and do some light stretching. After that, I take a shower and get dressed for work. Then, I have a healthy breakfast, usually consisting of oatmeal and fruits, before leaving the house around 7:30 am.
stars: 5
question: {{question}}
answer: {{answer}}
stars:

---
*Source: `promptflow/examples/flows/evaluation/eval-qna-non-rag/gpt_fluency_prompt.jinja2`*
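The fluency template above ends with `stars:`, so the model is expected to complete it with a single integer from 1 to 5. A hedged sketch of parsing and validating that completion (the function and regex are illustrative):

```python
import re

def parse_stars(completion: str) -> int:
    """Extract the first integer from the model's completion and
    check it against the 1-5 rating scale."""
    match = re.search(r"\d+", completion)
    if match is None:
        raise ValueError(f"no rating found in: {completion!r}")
    stars = int(match.group())
    if not 1 <= stars <= 5:
        raise ValueError(f"rating out of range: {stars}")
    return stars

print(parse_stars("4"))          # 4
print(parse_stars(" stars: 5"))  # 5
```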
system:
You are a helpful assistant.
user:
Your task is to check and rate whether the factual information in the chatbot's reply is fully grounded in the retrieved documents.
You will be given a question, chatbot's response to the question, a chat history between this chatbot and human, and a list of retrieved documents in json format.
The chatbot must base its response exclusively on factual information extracted from the retrieved documents, utilizing paraphrasing, summarization, or inference techniques. When the chatbot responds with information that is not mentioned in or cannot be inferred from the retrieved documents, we refer to it as a grounding issue.
To rate the groundedness of the chat response, follow the steps below:
1. Review the chat history to better understand the question and the chat response
2. Look for all the factual information in chatbot's response
3. Compare the factual information in chatbot's response with the retrieved documents. Check if there are any facts that are not in the retrieved documents at all, or that contradict or distort the facts in the retrieved documents. If there are, write them down. If there are none, leave it blank. Note that some facts may be implied or suggested by the retrieved documents, but not explicitly stated. In that case, use your best judgment to decide if the fact is grounded or not.
For example, if the retrieved documents mention that a film was nominated for 12 awards, and chatbot's reply states the same, you can consider that fact as grounded, as it is directly taken from the retrieved documents.
However, if the retrieved documents do not mention the film won any awards at all, and chatbot reply states that the film won some awards, you should consider that fact as not grounded.
4. Rate how well grounded the chatbot response is on a Likert scale from 1 to 5, judging whether the chatbot response has no ungrounded facts (higher is better).
5: agree strongly
4: agree
3: neither agree or disagree
2: disagree
1: disagree strongly
If the chatbot response used information from outside sources, or made claims that are not backed up by the retrieved documents, give it a low score.
5. Your answer should follow the format:
<Quality reasoning:> [insert reasoning here]
<Quality score: [insert score here]/5>
Your answer must end with <Input for Labeling End>.
# Question
{{ question }}
# Chat Response
{{ answer }}
# Chat History
# Documents
---BEGIN RETRIEVED DOCUMENTS---
{{ FullBody }}
---END RETRIEVED DOCUMENTS---

---
*Source: `promptflow/examples/flows/evaluation/eval-qna-rag-metrics/rag_groundedness_prompt.jinja2`*
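The groundedness prompt above pins the output to the form `<Quality score: [insert score here]/5>`, which makes the score machine-readable. A minimal sketch of extracting it downstream (the regex and function name are illustrative):

```python
import re

SCORE_PATTERN = re.compile(r"<Quality score:\s*(\d+)\s*/\s*5>")

def extract_quality_score(reply: str):
    """Pull the 1-5 groundedness score out of the model's reply,
    or None when the reply does not follow the format."""
    match = SCORE_PATTERN.search(reply)
    if match is None:
        return None
    return int(match.group(1))

reply = (
    "<Quality reasoning:> All facts match the documents.\n"
    "<Quality score: 5/5>\n"
    "<Input for Labeling End>"
)
print(extract_quality_score(reply))  # 5
```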
{
"projectFileVersion": "2022-10-01-preview",
"stringIndexType": "Utf16CodeUnit",
"metadata": {
"projectKind": "Conversation",
"settings": {
"confidenceThreshold": 0,
"normalizeCasing": false
},
"projectName": "MediaPlayer",
"multilingual": false,
"description": "",
"language": "en-us"
},
"assets": {
"projectKind": "Conversation",
"intents": [
{
"category": "None"
},
{
"category": "PlayMedia"
},
{
"category": "UpdateVolume"
}
],
"entities": [],
"utterances": [
{
"text": "Put the volume at maximum.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Mute the audio.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Decrease the sound.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Can you adjust the volume to a comfortable level?",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Volume up to 80%.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Set the volume to 50%.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Make it quieter.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Lower the volume.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Turn up the sound.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Increase the volume.",
"language": "en-us",
"intent": "UpdateVolume",
"entities": [],
"dataset": "Train"
},
{
"text": "Play the next episode of my podcast.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Play a random podcast.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Start playing the song \"Shape of You\" by Ed Sheeran.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Play the latest album by Guns n Roses.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Play some classical music.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Can you play a relaxing playlist?",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Shuffle my playlist.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Play track number 5.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Play my favorite song.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
},
{
"text": "Play Eric Clapton.",
"language": "en-us",
"intent": "PlayMedia",
"entities": [],
"dataset": "Train"
}
]
}
}

---
*Source: `promptflow/examples/flows/integrations/azure-ai-language/multi_intent_conversational_language_understanding/MediaPlayer.json`*
Goals:
{{goals}}

---
*Source: `promptflow/examples/flows/standard/autonomous-agent/user_prompt.jinja2`*
# Basic standard flow
A basic standard flow using custom python tool that calls Azure OpenAI with connection info stored in environment variables.
Tools used in this flow:
- `prompt` tool
- custom `python` tool
Connections used in this flow:
- None
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Run flow
- Prepare your Azure OpenAI resource following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
- Setup environment variables
Ensure you have put your Azure OpenAI endpoint key in the [.env](.env) file. You can create one by referring to this [example file](.env.example).
```bash
cat .env
```
- Test flow/node
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs text="Java Hello World!"
# test node with inputs
pf flow test --flow . --node llm --inputs prompt="Write a simple Hello World program that displays the greeting message when executed."
```
- Create run with multiple lines data
```bash
# using environment from .env file (loaded in user code: hello.py)
pf run create --flow . --data ./data.jsonl --column-mapping text='${data.text}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Reference [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
- List and show run meta
```bash
# list created run
pf run list
# get a sample run name
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_variant_0")) | .name'| head -n 1 | tr -d '"')
# show specific run detail
pf run show --name $name
# show output
pf run show-details --name $name
# visualize run in browser
pf run visualize --name $name
```
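As noted above, the flow's user code (`hello.py`) loads environment variables from the `.env` file. A minimal sketch of such loading logic, assuming a simple `KEY=VALUE` line format (real code may use a library like `python-dotenv` instead):

```python
import os

def load_env(path: str = ".env"):
    """Parse KEY=VALUE lines into os.environ, skipping blanks and comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Example: write a throwaway .env-style file and load it
with open("demo.env", "w") as f:
    f.write("AZURE_OPENAI_API_BASE=https://example.openai.azure.com/\n")
load_env("demo.env")
print(os.environ["AZURE_OPENAI_API_BASE"])
```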
## Run flow with connection
Storing connection info in `.env` as plaintext is not safe. We recommend using `pf connection` to guard secrets like `api_key` from leaking.
- Show or create `open_ai_connection`
```bash
# create connection from `azure_openai.yml` file
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
# check if connection exists
pf connection show -n open_ai_connection
```
- Test using connection secret specified in environment variables
**Note**: we used `'` to wrap the value since it supports raw values without escaping in PowerShell & bash. For the Windows command prompt, you may remove the `'` to avoid it becoming part of the value.
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow . --environment-variables AZURE_OPENAI_API_KEY='${open_ai_connection.api_key}' AZURE_OPENAI_API_BASE='${open_ai_connection.api_base}'
```
- Create run using connection secret binding specified in environment variables, see [run.yml](run.yml)
```bash
# create run
pf run create --flow . --data ./data.jsonl --stream --environment-variables AZURE_OPENAI_API_KEY='${open_ai_connection.api_key}' AZURE_OPENAI_API_BASE='${open_ai_connection.api_base}' --column-mapping text='${data.text}'
# create run using yaml file
pf run create --file run.yml --stream
# show outputs
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_variant_0")) | .name'| head -n 1 | tr -d '"')
pf run show-details --name $name
```
## Run flow in cloud with connection
- Assume we already have a connection named `open_ai_connection` in workspace.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
```
- Create run
```bash
# run with environment variable reference connection in azureml workspace
pfazure run create --flow . --data ./data.jsonl --environment-variables AZURE_OPENAI_API_KEY='${open_ai_connection.api_key}' AZURE_OPENAI_API_BASE='${open_ai_connection.api_base}' --column-mapping text='${data.text}' --stream
# run using yaml file
pfazure run create --file run.yml --stream
```
- List and show run meta
```bash
# list created run
pfazure run list -r 3
# get a sample run name
name=$(pfazure run list -r 100 | jq '.[] | select(.name | contains("basic_variant_0")) | .name'| head -n 1 | tr -d '"')
# show specific run detail
pfazure run show --name $name
# show output
pfazure run show-details --name $name
# visualize run in browser
pfazure run visualize --name $name
```

---
*Source: `promptflow/examples/flows/standard/basic/README.md`*
from promptflow import tool
from divider import Divider
@tool
def divide_code(file_content: str):
# Divide the code into several parts according to the global import/class/function.
divided = Divider.divide_file(file_content)
return divided

---
*Source: `promptflow/examples/flows/standard/gen-docstring/divide_code_tool.py`*
{"question": "What is the sum of 5 and 3?", "answer": "8"}
{"question": "Subtract 7 from 10.", "answer": "3"}
{"question": "Multiply 6 by 4.", "answer": "24"}
{"question": "Divide 20 by 5.", "answer": "4"}
{"question": "What is the square of 7?", "answer": "49"}
{"question": "What is the square root of 81?", "answer": "9"}
{"question": "If a rectangle has a length of 10 and width of 5, what is the area?", "answer": "50"}
{"question": "A circle has a radius of 7, what is the area? (Use 3.14 for pi)", "answer": "153.86"}
{"question": "Solve for x in the equation 2x + 3 = 9.", "answer": "3"}
{"question": "What is the value of x if 5x = 25?", "answer": "5"}
{"question": "A car travels 200 miles in 4 hours. What is the average speed of the car?", "answer": "50"}
{"question": "A car travels at a speed of 60 mph. How long will it take to travel 180 miles?", "answer": "3"}
{"question": "If a car travels at a speed of 40 mph for 2 hours, how far will it travel?","answer": "80"}
{"question":"A rectangle has length = 10 cm and width = 5 cm. What is its area?", "answer":"50"}
{"question":"A circle has radius = 7 cm. What is its circumference? (Use pi =3.14)", "answer":"43.96"}
{"question":"A triangle has base =10 cm and height =5 cm. What is its area?", "answer":"25"}
{"question":"What is the slope of the line that passes through (2,3) and (4,7)?", "answer":"2"}
{"question":"The distance between A and B is 2000km, A is moving towards B with speed 80km/hour, meanwhile B is moving towards A with speed 120km/hour, how many hours later A and B can meet?", "answer":"10"}
{"question":"The lengths of the two perpendicular sides of a right triangle are 6cm and 8cm. What is the length of the hypotenuse?", "answer": "10"}
{"question":"A is running with average speed 10km/hour, A already run half hour. B start to chase A along the same route with average speed 15km/hour, how many hours B will take to meet A?", "answer":"1"}

---
*Source: `promptflow/examples/flows/standard/maths-to-code/math_data.jsonl`*
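Data files like `math_data.jsonl` above are JSON Lines: one JSON object per line. A minimal sketch of parsing that format (an inline sample is used here instead of reading from disk):

```python
import json

sample = """\
{"question": "What is the sum of 5 and 3?", "answer": "8"}
{"question": "Multiply 6 by 4.", "answer": "24"}
"""

def load_jsonl(text: str):
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

rows = load_jsonl(sample)
print(len(rows), rows[0]["answer"])  # 2 8
```

Each parsed row maps directly onto flow inputs, which is what `--column-mapping question='${data.question}'` style bindings rely on.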
system:
Your task is to classify a given url into one of the following categories:
Movie, App, Academic, Channel, Profile, PDF or None based on the text content information.
The classification will be based on the url, the webpage text content summary, or both.
user:
The value of "category" must be one of "Movie", "App", "Academic", "Channel", "Profile", "PDF" or "None".
The value of "evidence" must be one of "Url", "Text content" or "Both".
Here are a few examples:
{% for ex in examples %}
URL: {{ex.url}}
Text content: {{ex.text_content}}
OUTPUT:
{"category": "{{ex.category}}", "evidence": "{{ex.evidence}}"}
{% endfor %}
For a given URL and text content, classify the url to complete the category and indicate evidence:
URL: {{url}}
Text content: {{text_content}}.
OUTPUT:

---
*Source: `promptflow/examples/flows/standard/web-classification/classify_with_llm.jinja2`*
my_tool_package.tools.tool_with_custom_strong_type_connection.my_tool:
description: This is my tool with custom strong type connection.
function: my_tool
inputs:
connection:
custom_type:
- MyCustomConnection
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.tool_with_custom_strong_type_connection
name: Tool With Custom Strong Type Connection
type: python
| promptflow/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_custom_strong_type_connection.yaml/0 | {
"file_path": "promptflow/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_custom_strong_type_connection.yaml",
"repo_id": "promptflow",
"token_count": 155
} | 21 |
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs: {}
outputs:
user_id:
type: string
reference: ${Tool_with_Cascading_Inputs.output}
nodes:
- name: Tool_with_Cascading_Inputs
type: python
source:
type: package
tool: my_tool_package.tools.tool_with_cascading_inputs.my_tool
inputs:
user_type: student
student_id: "student_id"
| promptflow/examples/tools/use-cases/cascading-inputs-tool-showcase/flow.dag.yaml/0 | {
"file_path": "promptflow/examples/tools/use-cases/cascading-inputs-tool-showcase/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 177
} | 22 |
Welcome to {{ website_name }}!
{% if user_name %}
Hello, {{ user_name }}!
{% else %}
Hello there!
{% endif %} | promptflow/examples/tools/use-cases/custom_llm_tool_showcase/prompt_template.jinja2/0 | {
"file_path": "promptflow/examples/tools/use-cases/custom_llm_tool_showcase/prompt_template.jinja2",
"repo_id": "promptflow",
"token_count": 48
} | 23 |
#! /bin/bash
set -e
program_name=$0
function usage {
echo "usage: $program_name [-i|-image_tag|--image_tag]"
echo " -i|-image_tag|--image_tag specify container image tag"
echo " -r|-registry|--registry specify container registry name, for example 'xx.azurecr.io'"
echo " -n|-name|--name specify app name to produce a unique FQDN as AppName.azurewebsites.net."
echo " -l|-location|--location specify app location, default to 'eastus'"
echo " -sku|--sku specify app sku, default to 'F1'(free)"
echo " -g|-resource_group|--resource_group specify app resource group"
echo " -subscription|--subscription specify app subscription, default using az account subscription"
echo " -v|-verbose|--verbose specify verbose mode"
echo " -p|-path|--path specify folder path to be deployed"
exit 1
}
if [ "$1" == "-help" ] || [ "$1" == "-h" ]; then
usage
exit 0
fi
location="eastus"
sku="F1"
verbose=false
####################### Parse and validate args ############################
while [ $# -gt 0 ]; do
case "$1" in
-i|-image_tag|--image_tag)
image_tag="$2"
;;
-r|-registry|--registry)
registry_name="$2"
;;
-n|-name|--name)
name="$2"
;;
-l|-location|--location)
location="$2"
;;
-sku|--sku)
sku="$2"
;;
-g|-resource_group|--resource_group)
resource_group="$2"
;;
-subscription|--subscription)
subscription="$2"
;;
-v|-verbose|--verbose)
verbose=true
;;
-p|-path|--path)
path="$2"
;;
*)
printf "***************************\n"
printf "* Error: Invalid argument.*\n"
printf "***************************\n"
exit 1
esac
shift
shift
done
# fail if image_tag not provided
if [ -z "$image_tag" ]; then
printf "***************************\n"
printf "* Error: image_tag is required.*\n"
printf "***************************\n"
exit 1
fi
# check if : in image_tag
if [[ $image_tag == *":"* ]]; then
echo "image_tag: $image_tag"
else
version="v$(date '+%Y%m%d-%H%M%S')"
image_tag="$image_tag:$version"
echo "image_tag: $image_tag"
fi
# fail if registry_name not provided
if [ -z "$registry_name" ]; then
printf "***************************\n"
printf "* Error: registry is required.*\n"
printf "***************************\n"
exit 1
fi
# fail if name not provided
if [ -z "$name" ]; then
printf "***************************\n"
printf "* Error: name is required.*\n"
printf "***************************\n"
exit 1
fi
# fail if resource_group not provided
if [ -z "$resource_group" ]; then
printf "***************************\n"
printf "* Error: resource_group is required.*\n"
printf "***************************\n"
exit 1
fi
# fail if path not provided
if [ -z "$path" ]; then
printf "***************************\n"
printf "* Error: path is required.*\n"
printf "***************************\n"
exit 1
fi
####################### Build and push image ############################
echo "Change working directory to $path"
cd "$path"
docker build -t "$image_tag" .
if [[ $registry_name == *"azurecr.io" ]]; then
echo "Trying to login to $registry_name..."
az acr login -n "$registry_name"
acr_image_tag=$registry_name/$image_tag
echo "ACR image tag: $acr_image_tag"
docker tag "$image_tag" "$acr_image_tag"
image_tag=$acr_image_tag
else
printf "***************************************************\n"
printf "* WARN: Make sure you have docker account login!!!*\n"
printf "***************************************************\n"
docker_image_tag=$registry_name/$image_tag
echo "Docker image tag: $docker_image_tag"
docker tag "$image_tag" "$docker_image_tag"
image_tag=$docker_image_tag
fi
echo "Start pushing image...$image_tag"
docker push "$image_tag"
####################### Create and config app ############################
function append_to_command {
command=$1
if [ -n "$subscription" ]; then
command="$command --subscription $subscription"
fi
if $verbose; then
command="$command --debug"
fi
echo "$command"
}
# Check and create resource group if not exist
result=$(az group exists --name "$resource_group")
if [ "$result" = "false" ]; then
echo "Creating resource group...$resource_group"
command="az group create --name $resource_group -l $location"
command=$(append_to_command "$command")
eval "$command"
fi
# Create service plan
service_plan_name=$name"_service_plan"
echo "Creating service plan...$service_plan_name"
command="az appservice plan create --name $service_plan_name --sku $sku --location $location --is-linux -g $resource_group"
command=$(append_to_command "$command")
echo "$command"
eval "$command"
# Create app
echo "Creating app...$name"
command="az webapp create --name $name -p $service_plan_name --deployment-container-image-name $image_tag --startup-file 'bash start.sh' -g $resource_group"
command=$(append_to_command "$command")
echo "$command"
eval "$command"
# Config environment variable
echo "Config app...$name"
command="az webapp config appsettings set -g $resource_group --name $name --settings USER_AGENT=promptflow-appservice @settings.json "
command=$(append_to_command "$command")
echo "$command"
eval "$command"
echo "Please go to https://portal.azure.com/ to configure environment variables and restart the app: $name at (Settings>Configuration) or (Settings>Environment variables)"
echo "Reach deployment logs at (Deployment>Deployment Center) and app logs at (Monitoring>Log stream)"
echo "Reach advanced deployment tools at https://$name.scm.azurewebsites.net/"
echo "Reach more details about app service at https://learn.microsoft.com/en-us/azure/app-service/"
| promptflow/examples/tutorials/flow-deploy/azure-app-service/deploy.sh/0 | {
"file_path": "promptflow/examples/tutorials/flow-deploy/azure-app-service/deploy.sh",
"repo_id": "promptflow",
"token_count": 2250
} | 24 |
---
resources: examples/connections/azure_openai.yml, examples/flows/chat/basic-chat, examples/flows/chat/chat-math-variant, examples/flows/evaluation/eval-chat-math
---
# Tutorial: How prompt flow helps on quality improvement
This tutorial is designed to enhance your understanding of improving flow quality through prompt tuning and evaluation.
Embark on a journey to overcome the inherent randomness of large language models (LLMs) and enhance output reliability through **prompt fine-tuning** with this comprehensive tutorial. Explore how prompt flow can simplify this process, enabling you to swiftly build high-quality, LLM-native apps.
Prompt fine-tuning involves optimizing the input prompts given to an LLM. This strategic adjustment helps the model to focus on specific information needed for a task, thereby improving the accuracy and reliability of the LLM's responses.
When we talk about "high quality", it's not just about accuracy. It's equally important to strike a balance between the accuracy and the token cost of the LLM. Spend just 15 minutes with us to discover how prompt flow expedites the process of prompt tuning, testing, and evaluation, guiding you towards finding the ideal prompt **(accuracy ↑, token ↓)**.
<img src="./media/realcase.png" alt="comparison result" width=60%>
## Video tutorial
Before practicing, you can watch this video for a quick overview. It shows how to use the **prompt flow VS Code extension** to develop your chat flow, fine-tune the prompt, batch test the flow, and evaluate its quality.
<a href="http://www.youtube.com/watch?feature=player_embedded&v=gcIe6nk2gA4
" target="_blank"><img src="./media/Screenshot-video.png"
alt="video demo" border="5" /></a>
## Hands-on practice
* Option 1 - VS Code Extension: [Install the prompt flow extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) in VS Code and follow the [video tutorial](https://youtu.be/gcIe6nk2gA4) above for a guided practice.
* Option 2 - CLI: Follow the steps below to gain hands-on experience with the prompt flow CLI.
It's time to put theory into practice! Execute our sample and witness the effects.
### Prerequisite
Before moving ahead, ensure you've completed the [Quick Start](../../../README.md#get-started-with-prompt-flow-⚡) guidance. Ensure you have the following setup:
* [Install prompt flow](../../../README.md#installation)
* [Setup a connection for your API key](../../../README.md#quick-start-⚡)
> ℹ️ For quick testing, this tutorial uses CLI commands.
Clone the promptflow repository to your local machine:
```shell
git clone https://github.com/microsoft/promptflow.git
```
Set up the sample `open_ai_connection` connection:
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Next, let's get started with customizing the flow for a specific task.
### Customize the flow for a specific task
In the `promptflow/examples/flows/chat` folder, you can see a `basic-chat` folder, which contains a chat template flow the same as the one you created in the [Quick Start](../../../README.md#get-started-with-prompt-flow-⚡) guidance. We'll use this flow as a starting point to build a math problem solver.
```bash
cd ../../flows/chat/basic-chat/
```
To enable your chatbot flow to solve math problems, you need to instruct the LLM about the task and target in the prompt. Open `chat.jinja2`, update the prompt as below:
```jinja
system:
You are an assistant to calculate the answer to the provided math problems.
Please return the final numerical answer only, without any accompanying reasoning or explanation.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
```
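To see what this template produces, here is a minimal plain-Python sketch that mimics the Jinja rendering (in prompt flow the template engine does this; the function below is purely illustrative):

```python
# Illustrative sketch only: mimics how the Jinja template above expands
# chat_history into alternating user/assistant turns before the new question.
def render_chat_prompt(chat_history, question):
    lines = [
        "system:",
        "You are an assistant to calculate the answer to the provided math problems.",
        "Please return the final numerical answer only, without any accompanying reasoning or explanation.",
    ]
    for item in chat_history:  # each item holds the inputs/outputs of one prior turn
        lines += ["user:", item["inputs"]["question"]]
        lines += ["assistant:", item["outputs"]["answer"]]
    lines += ["user:", question]
    return "\n".join(lines)
```

For example, with one prior turn `{"inputs": {"question": "1+1=?"}, "outputs": {"answer": "2"}}` and the new question `"2+2=?"`, the rendered prompt contains the system instruction, the earlier user/assistant exchange, and ends with the new user question.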
Before running, check your connection settings in the `flow.dag.yaml` file. The default connection name is `open_ai_connection`, and the default model is `gpt-3.5-turbo`. If you have a different connection name or model, please modify the `flow.dag.yaml` file accordingly.
><details>
><summary>(click to toggle details) For example, if you use Azure Open AI, please modify the `flow.dag.yaml` file to specify your connection and deployment</summary>
>
> Replace the 'nodes:' section with the following content, specify the 'connection' to your Azure Open AI connection, and specify the 'deployment_name' to the model deployment you'd like to use.
> ```yaml
>nodes:
>- name: chat
> type: llm
> source:
> type: code
> path: chat.jinja2
> inputs:
> deployment_name: <your_azure_open_ai_deployment_name> #specify your deployment name
> max_tokens: '256'
> temperature: '0'
> chat_history: ${inputs.chat_history}
> question: ${inputs.question}
> api: chat
> connection: <your_azure_open_ai_connection_name> #specify your azure openai connection name
> ```
</details>
Go back to the `promptflow/examples/flows/chat` path, run the following command to test the flow with a simple math problem:
```bash
cd ..
pf flow test --flow ./basic-chat --inputs question="1+1=?"
```
This will yield the following output:
```json
{
"answer": "2"
}
```
Sometimes, the question may be challenging. Now, let's test it with a complex math problem, such as:
```bash
pf flow test --flow ./basic-chat --inputs question="We are allowed to remove exactly one integer from the list $$-1,0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,$$and then we choose two distinct integers at random from the remaining list. What number should we remove if we wish to maximize the probability that the sum of the two chosen numbers is 10?"
```
The output is:
```json
{
"answer": "-1"
}
```
However, the correct answer is 5, so the output is incorrect! (Don't be surprised if you got the correct answer; given the randomness of LLMs, you may get different answers across multiple tries.) This indicates that we need to evaluate the performance further. Therefore, in the next step, we will test the flow with more math problems to better evaluate the quality.
### Evaluate the quality of your prompt
With prompt flow, you can quickly trigger a batch-run to test your prompt with a larger dataset, and evaluate the quality of the answers.
There is a `data.jsonl` file in the `promptflow/examples/flows/chat/chat-math-variant` folder, which is a dataset containing 20 test data entries (a subset of [the Math Dataset](https://github.com/hendrycks/math/)). It includes the input question, the ground-truth numerical answer, and the reasoning (raw_answer). Here's one example:
```json
{
"question": "Determine the number of ways to arrange the letters of the word PROOF.",
"answer": "60",
"raw_answer": "There are two O's and five total letters, so the answer is $\\dfrac{5!}{2!} = \\boxed{60}$."
}
```
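If you want to inspect the dataset programmatically, a JSONL file can be read with the Python standard library alone. This is a hedged sketch; the field names follow the sample entry above:

```python
import json

def load_jsonl(path):
    """Load a JSONL dataset: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Example usage (assumes the file exists at this relative path):
# entries = load_jsonl("chat-math-variant/data.jsonl")
# print(entries[0]["question"], "->", entries[0]["answer"])
```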
Run the following command to test your prompt with this dataset:
First, set the environment variable `base_run_name` to specify the run name.
```bash
base_run_name="base_run"
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
set base_run_name=base_run
```
</details>
>ℹ️ The default model is `gpt-3.5-turbo`; let's try `gpt-4` to see if it gets better results. Use `--connections <node_name>.connection=<connection_name>...` to specify.
```bash
pf run create --flow ./basic-chat --data ./chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --connections chat.connection=open_ai_connection chat.model=gpt-4 --stream --name $base_run_name
```
>ℹ️ For Azure Open AI, run the following command instead:
> ```shell
> pf run create --flow ./basic-chat --data ./chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --connections chat.connection=azure_open_ai_connection chat.deployment_name=gpt-4 --stream --name $base_run_name
> ```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
pf run create --flow ./basic-chat --data ./chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --connections chat.connection=open_ai_connection chat.model=gpt-4 --stream --name %base_run_name%
```
</details>
> ℹ️ The run name must be unique. Please specify a new name in `--name`.
> If you see "Run 'base_run' already exists.", you can specify another name. But please remember the name you specified, because you'll need it in the next step.
When it completes, you can run the following command to see the details of results:
> Specify the run name of your completed run in `--name` argument:
```bash
pf run show-details --name $base_run_name
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
pf run show-details --name %base_run_name%
```
</details>
This can show the line by line input and output of the run:
```
+----+---------------+-----------------+---------------+---------------+
| | inputs.chat | inputs.question | inputs.line | outputs.ans |
| | _history | | _number | wer |
+====+===============+=================+===============+===============+
| 0 | [] | Compute $\dbi | 0 | 4368 |
| | | nom{16}{5}$. | | |
+----+---------------+-----------------+---------------+---------------+
| 1 | [] | Determine the | 1 | 60 |
| | | number of | | |
| | | ways to | | |
| | | arrange the | | |
| | | letters of | | |
| | | the word | | |
| | | PROOF. | | |
+----+---------------+-----------------+---------------+---------------+
| .. | ... | ... |... | ... |
```
Next, create an **evaluation run** to calculate the accuracy of the answers based on the previous run.
In the `promptflow/examples/flows/evaluation` folder, you can see an `eval-chat-math` folder, which represents an evaluation flow. We'll use this flow to evaluate the accuracy of the answers.
```bash
cd ../evaluation
```
Run the following command to create an evaluation run:
```bash
eval_run_name="eval_run"
pf run create --flow ./eval-chat-math --data ../chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run $base_run_name --name $eval_run_name
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
set eval_run_name=eval_run
pf run create --flow ./eval-chat-math --data ../chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --run %base_run_name% --name %eval_run_name% --stream
```
</details>
> If needed, specify the run name you want to evaluate in the `--run` argument, and specify this evaluation run name in the `--name` argument.
Then get metrics of the `eval_run`:
```bash
pf run show-metrics --name $eval_run_name
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
pf run show-metrics --name %eval_run_name%
```
</details>
You can visualize and compare the output line by line of `base_run` and `eval_run` in a web browser:
```bash
pf run visualize --name "$base_run_name,$eval_run_name"
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
pf run visualize --name "%base_run_name%,%eval_run_name%"
```
</details>
Because of the randomness of the LLM, the accuracy may vary. For example, in my run, the metrics are as follows:
```json
{
"accuracy": 0.35,
"error_rate": 0.65
}
```
Oops! The accuracy isn't satisfactory. It's time to fine-tune your prompt for higher quality!
### Fine-tuning your prompt and evaluate the improvement
In the `/chat` folder, you can see a `chat-math-variant` folder, which represents a flow with two additional prompt variants compared to the original one you customized based on the `basic-chat`.
In this sample flow, you'll find three Jinja files:
* `chat.jinja2` is the original prompt, the same as the one you customized in `basic-chat`.
* `chat_variant_1.jinja2` and `chat_variant_2.jinja2` are the 2 additional prompt variants.
We leverage the Chain of Thought (CoT) prompt engineering method to adjust the prompt. The goal is to activate the language model's reasoning capability on the questions by providing a few CoT examples.
<details>
<summary>Variant_1: 2 CoT examples</summary>
```jinja
system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer only and any accompanying reasoning or explanation separately as json format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
```
</details>
<details>
<summary>Variant_2 : 6 CoT examples.</summary>
```jinja
system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer only and any accompanying reasoning or explanation separately as json format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
user:
A club has 10 members, 5 boys and 5 girls. Two of the members are chosen at random. What is the probability that they are both girls?
assistant:
{"Chain of thought": "There are $\\binom{10}{2} = 45$ ways to choose two members of the group, and there are $\\binom{5}{2} = 10$ ways to choose two girls. Therefore, the probability that two members chosen at random are girls is $\\dfrac{10}{45} = \\boxed{\\dfrac{2}{9}}$.", "answer": "2/9"}
user:
Allison, Brian and Noah each have a 6-sided cube. All of the faces on Allison's cube have a 5. The faces on Brian's cube are numbered 1, 2, 3, 4, 5 and 6. Three of the faces on Noah's cube have a 2 and three of the faces have a 6. All three cubes are rolled. What is the probability that Allison's roll is greater than each of Brian's and Noah's? Express your answer as a common fraction.
assistant:
{"Chain of thought": "Since Allison will always roll a 5, we must calculate the probability that both Brian and Noah roll a 4 or lower. The probability of Brian rolling a 4 or lower is $\\frac{4}{6} = \\frac{2}{3}$ since Brian has a standard die. Noah, however, has a $\\frac{3}{6} = \\frac{1}{2}$ probability of rolling a 4 or lower, since the only way he can do so is by rolling one of his 3 sides that have a 2. So, the probability of both of these independent events occurring is $\\frac{2}{3} \\cdot \\frac{1}{2} = \\boxed{\\frac{1}{3}}$.", "answer": "1/3"}
user:
Compute $\\dbinom{50}{2}$.
assistant:
{"Chain of thought": "$\\dbinom{50}{2} = \\dfrac{50!}{2!48!}=\\dfrac{50\\times 49}{2\\times 1}=\\boxed{1225}.$", "answer": "1225"}
user:
The set $S = \\{1, 2, 3, \\ldots , 49, 50\\}$ contains the first $50$ positive integers. After the multiples of 2 and the multiples of 3 are removed, how many integers remain in the set $S$?
assistant:
{"Chain of thought": "The set $S$ contains $25$ multiples of 2 (that is, even numbers). When these are removed, the set $S$ is left with only the odd integers from 1 to 49. At this point, there are $50-25=25$ integers in $S$. We still need to remove the multiples of 3 from $S$.\n\nSince $S$ only contains odd integers after the multiples of 2 are removed, we must remove the odd multiples of 3 between 1 and 49. These are 3, 9, 15, 21, 27, 33, 39, 45, of which there are 8. Therefore, the number of integers remaining in the set $S$ is $25 - 8 = \\boxed{17}$.", "answer": "17"}
```
</details>
These two jinja files are specified in the `flow.dag.yaml` file, which defines the flow structure. You can see that the `chat` node has 3 variants, which point to these 3 Jinja files.
### Test and evaluate your prompt variants
The sample flow already includes the two additional prompt variants in the `chat` node, alongside the existing default one. In the `flow.dag.yaml` file, you can see the three variant definitions of the `chat` node, each pointing to one of the three Jinja files.
Run the CLI commands below to start the experiment: test all variants, evaluate them, and get the visualized comparison results.
> ℹ️ By default, the connection is set to `open_ai_connection` and the model is set to `gpt-4` for each variant, as specified in the `flow.dag.yaml` file. However, you have the flexibility to specify a different connection and model by adding `--connections chat.connection=<your_connection_name> chat.deployment_name=<model_name>` to the test run command.
Navigate to the `promptflow/examples/flows` folder
```bash
cd ..
```
Set the environment variable `base_run_name` and `eval_run_name` to specify the run name.
```bash
base_run_name="base_run_variant_"
eval_run_name="eval_run_variant_"
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
set base_run_name=base_run_variant_
set eval_run_name=eval_run_variant_
```
</details>
Run the following command to test and evaluate the variants:
```bash
# Test and evaluate variant_0:
# Test-run
pf run create --flow ./chat/chat-math-variant --data ./chat/chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --variant '${chat.variant_0}' --stream --name "${base_run_name}0"
# Evaluate-run
pf run create --flow ./evaluation/eval-chat-math --data ./chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run "${base_run_name}0" --name "${eval_run_name}0"
# Test and evaluate variant_1:
# Test-run
pf run create --flow ./chat/chat-math-variant --data ./chat/chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --variant '${chat.variant_1}' --stream --name "${base_run_name}1"
# Evaluate-run
pf run create --flow ./evaluation/eval-chat-math --data ./chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run "${base_run_name}1" --name "${eval_run_name}1"
# Test and evaluate variant_2:
# Test-run
pf run create --flow ./chat/chat-math-variant --data ./chat/chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --variant '${chat.variant_2}' --stream --name "${base_run_name}2"
# Evaluate-run
pf run create --flow ./evaluation/eval-chat-math --data ./chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run "${base_run_name}2" --name "${eval_run_name}2"
```
<!-- > If encounter the 'execution timeout' error, just try again. It might be caused by the LLM service congestion. -->
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
# Test and evaluate variant_0:
# Test-run
pf run create --flow ./chat/chat-math-variant --data ./chat/chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --variant '${chat.variant_0}' --stream --name %base_run_name%0
# Evaluate-run
pf run create --flow ./evaluation/eval-chat-math --data ./chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run %base_run_name%0 --name %eval_run_name%0
# Test and evaluate variant_1:
# Test-run
pf run create --flow ./chat/chat-math-variant --data ./chat/chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --variant '${chat.variant_1}' --stream --name %base_run_name%1
# Evaluate-run
pf run create --flow ./evaluation/eval-chat-math --data ./chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run %base_run_name%1 --name %eval_run_name%1
# Test and evaluate variant_2:
# Test-run
pf run create --flow ./chat/chat-math-variant --data ./chat/chat-math-variant/data.jsonl --column-mapping question='${data.question}' chat_history=[] --variant '${chat.variant_2}' --stream --name %base_run_name%2
# Evaluate-run
pf run create --flow ./evaluation/eval-chat-math --data ./chat/chat-math-variant/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --stream --run %base_run_name%2 --name %eval_run_name%2
```
</details>
Get metrics of the all evaluations:
```bash
pf run show-metrics --name "${eval_run_name}0"
pf run show-metrics --name "${eval_run_name}1"
pf run show-metrics --name "${eval_run_name}2"
```
You may get output similar to this:
```
# eval_variant_0_run
{
"accuracy": 0.3,
"error_rate": 0.7
}
# eval_variant_1_run
{
"accuracy": 0.9,
"error_rate": 0.1
}
# eval_variant_2_run
{
"accuracy": 0.9,
"error_rate": 0.1
}
```
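For intuition, accuracy and error rate here are simple line-level aggregates over the batch run. The sketch below approximates the computation; it is an illustrative approximation, not the actual eval-chat-math implementation:

```python
# Illustrative sketch: aggregate line-level matches into accuracy/error_rate
# metrics like those shown above. Real evaluation flows may also normalize
# or parse the predicted answers before comparing.
def compute_metrics(groundtruths, predictions):
    matched = sum(1 for g, p in zip(groundtruths, predictions) if g.strip() == p.strip())
    total = len(groundtruths)
    accuracy = matched / total if total else 0.0
    return {"accuracy": round(accuracy, 2), "error_rate": round(1.0 - accuracy, 2)}
```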
Visualize the results:
```bash
pf run visualize --name "${base_run_name}0,${eval_run_name}0,${base_run_name}1,${eval_run_name}1,${base_run_name}2,${eval_run_name}2"
```
<details>
<summary>For Windows CMD users, run the command in the toggle</summary>
```shell
pf run visualize --name "%base_run_name%0,%eval_run_name%0,%base_run_name%1,%eval_run_name%1,%base_run_name%2,%eval_run_name%2"
```
</details>
Click the HTML link to see the experiment results. Clicking a column in the **Output** table lets you view the snapshot of each line.
The snapshot of chat flow:

The snapshot of evaluation flow:

Excellent! Now you can compare their performances and token costs, and choose the prompt that best suits your needs. We can see that variant_1 and variant_2 have the same accuracy, but variant_1 has a lower token cost (only 2 few-shot examples rather than the 6 in variant_2). So variant_1 is the best choice for balancing quality and cost.
### Conclusion
Great! Now you can compare their performances and token costs to choose the prompt that best suits your needs. Upon comparison, we can observe that variant_1 and variant_2 have similar accuracy. However, variant_1 stands out as the better choice due to its lower token cost (2 few-shot examples vs. 6).
<img src="media/realcase.png" alt="comparison result" width=65%/>
It is evident that adding more CoT examples in the prompt does not necessarily improve the accuracy further. Instead, we should identify the optimal point where the number of shots maximizes accuracy while minimizing cost.
Just in a few steps, we identified that variant_1 strikes the ideal balance between quality and cost! This is where the value of prompt tuning and evaluation using prompt flow becomes apparent. With prompt flow, you can easily test and evaluate different prompt variants, enabling you to facilitate high quality LLM-native apps to production.
| promptflow/examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md/0 | {
"file_path": "promptflow/examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md",
"repo_id": "promptflow",
"token_count": 8166
} | 25 |
{
"releases":{
"internal":{
"promptflow-tools-release":{
"index": "internal-index-release"
},
"promptflow-tools-test":{
"index": "internal-index-test"
}
}
},
"targets": {
"internal-index-release": {
"storage_account": "azuremlsdktestpypi",
"packages_container": "repo",
"index_container": "wheels",
"blob_prefix": "promptflow",
"endpoint": "azuremlsdktestpypi.azureedge.net"
},
"internal-index-test": {
"storage_account": "azuremlsdktestpypi",
"packages_container": "repo",
"index_container": "wheels",
"blob_prefix": "test-promptflow",
"endpoint": "azuremlsdktestpypi.azureedge.net"
}
}
}
| promptflow/scripts/distributing/configs/distribution_settings.json/0 | {
"file_path": "promptflow/scripts/distributing/configs/distribution_settings.json",
"repo_id": "promptflow",
"token_count": 348
} | 26 |
# Install prompt flow MSI installer on Windows
Prompt flow is a suite of development tools designed to streamline the end-to-end development
cycle of LLM-based AI applications, and it can be installed locally on Windows computers.
For Windows, the prompt flow is installed via an MSI, which gives you access to the CLI
through the Windows Command Prompt (CMD) or PowerShell.
## Install or update
The MSI distributable is used for installing or updating the prompt flow on Windows.
You don't need to uninstall current versions before using the MSI installer because
the MSI updates any existing version.
::::{tab-set}
:::{tab-item} Microsoft Installer (MSI)
:sync: Microsoft Installer (MSI)
### Latest version
Download and install the latest release of the prompt flow.
When the installer asks if it can make changes to your computer, select the "Yes" box.
> [Latest release of the promptflow (64-bit)](https://aka.ms/installpromptflowwindowsx64)
### Specific version
If you prefer, you can download a specific version of the promptflow by using a URL.
To download the MSI installer for a specific version, change the version segment in the following URL:
https://promptflowartifact.blob.core.windows.net/msi-installer/promptflow-<version>.msi
:::
:::{tab-item} Microsoft Installer (MSI) with PowerShell
:sync: Microsoft Installer (MSI) with PowerShell
### PowerShell
To install the prompt flow using PowerShell, start PowerShell and
run the following command:
```PowerShell
$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://aka.ms/installpromptflowwindowsx64 -OutFile .\promptflow.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I promptflow.msi /quiet'; Remove-Item .\promptflow.msi
```
This will download and install the latest 64-bit installer of the prompt flow for Windows.
To install a specific version, replace the `-Uri` argument with the URL like below.
Here is an example of using the 64-bit installer of the promptflow version 1.0.0 in PowerShell:
```PowerShell
$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://promptflowartifact.blob.core.windows.net/msi-installer/promptflow-1.0.0.msi -OutFile .\promptflow.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I promptflow.msi /quiet'; Remove-Item .\promptflow.msi
```
:::
::::
## Run the prompt flow
You can now run the prompt flow with the `pf` or `pfazure` command from either Windows Command Prompt or PowerShell.
## Upgrade the prompt flow
Beginning with version 1.4.0, the prompt flow provides an in-tool command to upgrade to the latest version.
```commandline
pf upgrade
```
For prompt flow versions prior to 1.4.0, upgrade by reinstalling as described in Install the prompt flow.
## Uninstall
You can uninstall the prompt flow from the Windows "Apps and Features" list. To uninstall:
| Platform | Instructions |
|---|---|
| Windows 11 | Start > Settings > Apps > Installed apps |
| Windows 10 | Start > Settings > System > Apps & Features |
| Windows 8 and Windows 7 | Start > Control Panel > Programs > Uninstall a program |
Once on this screen, type __promptflow__ into the program search bar.
The program to uninstall is listed as __promptflow (64-bit)__.
Select this application, then select the `Uninstall` button.
## FAQ
### Where is the prompt flow installed?
In Windows, the 64-bit prompt flow installs in `C:\Users\**\AppData\Local\Apps\promptflow` by default.
### What version of the prompt flow is installed?
Type `pf --version` in a terminal window to know what version of the prompt flow is installed.
Your output looks like this:
```output
promptflow x.x.x
Executable '***\python.exe'
Python (Windows) 3.*.* | packaged by conda-forge | *
``` | promptflow/scripts/installer/windows/install_from_msi.md/0 | {
"file_path": "promptflow/scripts/installer/windows/install_from_msi.md",
"repo_id": "promptflow",
"token_count": 1064
} | 27 |
# Readme Workflow Generator
These tools are used to generate workflows from README.md and Python notebook files in the [examples](../../examples/) folder.
* Generated workflows will be placed in [.github/workflows/samples_*](../../.github/workflows/) folder.
* The script will also generate a new explanation [README.md](../../examples/README.md) for all the examples.
## 1. Install dependencies
```bash
pip install -r ../../examples/requirements.txt
pip install -r ../../examples/dev_requirements.txt
```
## 2. Generate workflows
### (Option 1) One Step Generation
At the **root** of the repository, run the following command:
```bash
python scripts/readme/readme.py
```
### (Option 2) Step by Step Generation
At the **root** of the repository, run the following command:
```bash
# Generate workflows from README.md files inside the examples folder
python scripts/readme/readme_generator.py -g "examples/flows/**/README.md"
# Generate workflows from python notebooks inside the examples folder
python scripts/readme/workflow_generator.py -g "examples/**/*.ipynb"
```
Multiple inputs are supported.
## 3. Options to control generations of examples [README.md](../../examples/README.md)
### 3.1 Notebook Workflow Generation
* Each workflow contains a metadata area; setting the `.metadata.description` field will display that message in the corresponding cell of the [README.md](../../examples/README.md) file.
* When `.metadata.no_readme_generation` is set to `true`, the script skips README generation for this notebook.
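For illustration, a notebook metadata fragment using both fields might look like the following (the exact placement inside the notebook JSON is assumed from the description above):

```json
{
  "metadata": {
    "description": "This sample shows how to run a basic flow.",
    "no_readme_generation": true
  }
}
```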
### 3.2 README.md Workflow Generation
* For README.md files, only `bash` cells will be collected and converted to a workflow; a README.md with no `bash` cells produces no workflow.
* Readme descriptions are collected from the first sentence in the README.md file, just below the title. The script collects the words before the first **.** of the first paragraph. Multi-line sentences are also supported.
* A supported description sentence: `This is a sample workflow for testing.`
* A not supported description sentence: `Please check www.microsoft.com for more details.`
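The description rule above ("words before the first **.** of the first paragraph") can be sketched as a small helper. This is a minimal illustration of the rule, not the actual script — which is also why sentences containing dots inside URLs are not supported:

```python
def extract_description(readme_text: str) -> str:
    """Sketch of the rule above: words before the first '.' of the first
    paragraph under the title; multi-line sentences are joined first."""
    lines = [ln.strip() for ln in readme_text.splitlines()[1:]]
    # First paragraph = the first run of consecutive non-empty lines after the title.
    paragraph = []
    for ln in lines:
        if ln:
            paragraph.append(ln)
        elif paragraph:
            break
    text = " ".join(paragraph)
    head, dot, _ = text.partition(".")
    return head + dot if dot else ""

sample = "# Sample flow\n\nThis is a sample workflow\nfor testing. More details."
print(extract_description(sample))  # This is a sample workflow for testing.
```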
| promptflow/scripts/readme/README.md/0 | {
"file_path": "promptflow/scripts/readme/README.md",
"repo_id": "promptflow",
"token_count": 567
} | 28 |
- name: {{ step_name }}
working-directory: ${{ '{{' }} github.workspace }}
run: |
python scripts/readme/extract_steps_from_readme.py -f {{ readme_name }} -o {{ working_dir }}
- name: Cat script
working-directory: {{ working_dir }}
run: |
cat bash_script.sh
- name: Run scripts against canary workspace (scheduled runs only)
if: github.event_name == 'schedule'
working-directory: {{ working_dir }}
run: |
export aoai_api_key=${{ '{{' }} secrets.AOAI_API_KEY_TEST }}
export aoai_api_endpoint=${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }}
export test_workspace_sub_id=${{ '{{' }} secrets.TEST_WORKSPACE_SUB_ID }}
export test_workspace_rg=${{ '{{' }} secrets.TEST_WORKSPACE_RG }}
export test_workspace_name=${{ '{{' }} secrets.TEST_WORKSPACE_NAME_CANARY }}
bash bash_script.sh
- name: Run scripts against production workspace
if: github.event_name != 'schedule'
working-directory: {{ working_dir }}
run: |
export aoai_api_key=${{ '{{' }} secrets.AOAI_API_KEY_TEST }}
export aoai_api_endpoint=${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }}
export test_workspace_sub_id=${{ '{{' }} secrets.TEST_WORKSPACE_SUB_ID }}
export test_workspace_rg=${{ '{{' }} secrets.TEST_WORKSPACE_RG }}
export test_workspace_name=${{ '{{' }} secrets.TEST_WORKSPACE_NAME_PROD }}
bash bash_script.sh
- name: Pip List for Debug
if: ${{ '{{' }} always() }}
working-directory: {{ working_dir }}
run: |
pip list
- name: Upload artifact
if: ${{ '{{' }} always() }}
uses: actions/upload-artifact@v3
with:
name: artifact
path: {{ working_dir }}/bash_script.sh | promptflow/scripts/readme/ghactions_driver/workflow_steps/step_extract_steps_and_run.yml.jinja2/0 | {
"file_path": "promptflow/scripts/readme/ghactions_driver/workflow_steps/step_extract_steps_and_run.yml.jinja2",
"repo_id": "promptflow",
"token_count": 625
} | 29 |
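In this template, `${{ '{{' }}` is a Jinja escape that emits a literal `${{`, so after rendering the steps become ordinary GitHub Actions expressions. For example, the first step above might render to something like the following (step name and paths are placeholder values for the Jinja variables):

```yaml
- name: Extract steps
  working-directory: ${{ github.workspace }}
  run: |
    python scripts/readme/extract_steps_from_readme.py -f README.md -o examples/flows/standard/basic
```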
We are pleased to announce the release of promptflow {{VERSION}}.
This release includes some new features, bug fixes, and improvements. We recommend that all users upgrade to this version.
See the [CHANGELOG](https://github.com/microsoft/promptflow/blob/release/promptflow/{{VERSION}}/src/promptflow/CHANGELOG.md) for a list of all the changes.
The release will be available via PyPI:
```bash
pip install --upgrade promptflow
```
Please report any issues with the release on the [promptflow issue tracker](https://github.com/microsoft/promptflow/issues).
Thanks to all the contributors who made this release possible.
| promptflow/scripts/release/promptflow-release-note.md/0 | {
"file_path": "promptflow/scripts/release/promptflow-release-note.md",
"repo_id": "promptflow",
"token_count": 169
} | 30 |
from promptflow import tool
from promptflow.connections import CustomConnection
@tool
def {{ function_name }}(connection: CustomConnection, input_text: str) -> str:
# Replace with your tool code.
# Usually connection contains configs to connect to an API.
# CustomConnection is a dict-like object. You can access its configs like: connection.api_key, connection.api_base
# Not all tools need a connection. You can remove it if you don't need it.
return "Hello " + input_text
| promptflow/scripts/tool/templates/tool.py.j2/0 | {
"file_path": "promptflow/scripts/tool/templates/tool.py.j2",
"repo_id": "promptflow",
"token_count": 133
} | 31 |
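Rendered with `function_name` set to, say, `my_tool`, the template above yields a function like the one below. The `CustomConnection` class here is a local stand-in (a dict with attribute access) so the sketch runs without promptflow installed:

```python
class CustomConnection(dict):
    """Local stand-in for promptflow.connections.CustomConnection."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as e:
            raise AttributeError(name) from e

def my_tool(connection: CustomConnection, input_text: str) -> str:
    # Connection configs (e.g. connection.api_key) would normally be used to call an API.
    return "Hello " + input_text

conn = CustomConnection(api_key="fake-key", api_base="https://example.invalid")
print(my_tool(conn, "world"))  # Hello world
```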
include promptflow/tools/yamls/*.yaml | promptflow/src/promptflow-tools/MANIFEST.in/0 | {
"file_path": "promptflow/src/promptflow-tools/MANIFEST.in",
"repo_id": "promptflow",
"token_count": 13
} | 32 |
import json
import sys
from enum import Enum
import requests
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import ToolProvider, tool
from promptflow.connections import SerpConnection
from promptflow.exceptions import PromptflowException
from promptflow.tools.exception import SerpAPIUserError, SerpAPISystemError
class SafeMode(str, Enum):
ACTIVE = "active"
OFF = "off"
class Engine(str, Enum):
GOOGLE = "google"
BING = "bing"
class SerpAPI(ToolProvider):
def __init__(self, connection: SerpConnection):
super().__init__()
self.connection = connection
def extract_error_message_from_json(self, error_data):
error_message = ""
# For rejected requests, e.g. when the api_key is not valid
if "error" in error_data:
error_message = error_data["error"]
return str(error_message)
def safe_extract_error_message(self, response):
default_error_message = f"SerpAPI search request failed: {response.text}"
try:
# Keep the same style as SerpAPIClient
error_data = json.loads(response.text)
print(f"Response text json: {json.dumps(error_data)}", file=sys.stderr)
error_message = self.extract_error_message_from_json(error_data)
error_message = error_message if len(error_message) > 0 else default_error_message
return error_message
except Exception as e:
# Swallow any exception when extracting the detailed error message
print(
f"Unexpected exception occurs while extract error message "
f"from response: {type(e).__name__}: {str(e)}",
file=sys.stderr,
)
return default_error_message
# flake8: noqa: C901
@tool
def search(
self,
query: str, # this is required
location: str = None,
safe: SafeMode = SafeMode.OFF,  # defaults to SafeMode.OFF
num: int = 10,
engine: Engine = Engine.GOOGLE, # this is required
):
from serpapi import SerpApiClient
# required parameters. https://serpapi.com/search-api.
params = {
"q": query,
"location": location,
"api_key": self.connection.api_key,
}
if isinstance(engine, Engine):
params["engine"] = engine.value
else:
params["engine"] = engine
if safe == SafeMode.ACTIVE:
# Ignore invalid values and safe="off" (the default)
# For bing and google, they use diff parameters
if params["engine"].lower() == "google":
params["safe"] = "Active"
else:
params["safeSearch"] = "Strict"
if int(num) > 0:
# to combine multiple engines together, we use "num" as the parameter for such purpose
if params["engine"].lower() == "google":
params["num"] = int(num)
else:
params["count"] = int(num)
search = SerpApiClient(params)
# get response
try:
response = search.get_response()
if response.status_code == requests.codes.ok:
# default output is json
return json.loads(response.text)
else:
# Step I: Try to get accurate error message at best
error_message = self.safe_extract_error_message(response)
# Step II: Construct PromptflowException
if response.status_code >= 500:
raise SerpAPISystemError(message=error_message)
else:
raise SerpAPIUserError(message=error_message)
except Exception as e:
# SerpApi is super robust. Set basic error handle
if not isinstance(e, PromptflowException):
print(f"Unexpected exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
error_message = f"SerpAPI search request failed: {type(e).__name__}: {str(e)}"
raise SerpAPISystemError(message=error_message)
raise
@tool
def search(
connection: SerpConnection,
query: str, # this is required
location: str = None,
safe: SafeMode = SafeMode.OFF,  # defaults to SafeMode.OFF
num: int = 10,
engine: Engine = Engine.GOOGLE, # this is required
):
return SerpAPI(connection).search(
query=query,
location=location,
safe=safe,
num=num,
engine=engine,
)
| promptflow/src/promptflow-tools/promptflow/tools/serpapi.py/0 | {
"file_path": "promptflow/src/promptflow-tools/promptflow/tools/serpapi.py",
"repo_id": "promptflow",
"token_count": 2097
} | 33 |
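The engine-specific parameter mapping in `search` above (the safe-search and result-count keys differ between Google and Bing) can be isolated into a pure function for offline testing. This is a sketch of that logic only, not a promptflow API:

```python
def build_params(query, api_key, engine="google", safe="off", num=10, location=None):
    """Mirror the request-parameter logic of SerpAPI.search."""
    params = {"q": query, "location": location, "api_key": api_key, "engine": engine}
    if safe == "active":
        # Google and Bing use different safe-search parameter names and values.
        params["safe" if engine == "google" else "safeSearch"] = (
            "Active" if engine == "google" else "Strict"
        )
    if int(num) > 0:
        # Google takes "num"; Bing takes "count".
        params["num" if engine == "google" else "count"] = int(num)
    return params

print(build_params("prompt flow", "key", engine="bing", safe="active"))
```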
import pytest
from promptflow.tools.aoai_gpt4v import AzureOpenAI
@pytest.fixture
def azure_openai_provider(azure_open_ai_connection) -> AzureOpenAI:
return AzureOpenAI(azure_open_ai_connection)
@pytest.mark.usefixtures("use_secrets_config_file")
@pytest.mark.skip("Skipping until we have an Azure OpenAI GPT-4 Vision deployment")
class TestAzureOpenAIGPT4V:
def test_openai_gpt4v_chat(self, azure_openai_provider, example_prompt_template_with_image, example_image):
result = azure_openai_provider.chat(
prompt=example_prompt_template_with_image,
deployment_name="gpt-4v",
max_tokens=480,
temperature=0,
question="which number did you see in this picture?",
image_input=example_image,
)
assert "10" == result
def test_openai_gpt4v_stream_chat(self, azure_openai_provider, example_prompt_template_with_image, example_image):
result = azure_openai_provider.chat(
prompt=example_prompt_template_with_image,
deployment_name="gpt-4v",
max_tokens=480,
temperature=0,
question="which number did you see in this picture?",
image_input=example_image,
)
answer = ""
while True:
try:
answer += next(result)
except StopIteration:
break
assert "10" == answer
| promptflow/src/promptflow-tools/tests/test_aoai_gptv.py/0 | {
"file_path": "promptflow/src/promptflow-tools/tests/test_aoai_gptv.py",
"repo_id": "promptflow",
"token_count": 651
} | 34 |
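The streaming test above accumulates generator output with `next()` until the stream is exhausted. The same pattern can be exercised with a plain generator, with no model call involved:

```python
def fake_stream():
    # Stand-in for a streaming chat response that yields text chunks.
    yield from ["The number ", "is ", "10"]

result = fake_stream()
answer = ""
while True:
    try:
        answer += next(result)
    except StopIteration:  # catch the specific end-of-stream signal, not a bare Exception
        break
print(answer)  # The number is 10
```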
# Release History
## 1.5.0 (Upcoming)
### Features Added
### Bugs Fixed
- [SDK/CLI] The inputs of node test allows the value of reference node output be passed directly in.
### Improvements
- [SDK/CLI] For `pf run delete`, `pf connection delete`, introducing an option to skip confirmation prompts.
## 1.4.0 (2024.01.22)
### Features Added
- [Executor] Calculate system_metrics recursively in api_calls.
- [Executor] Add flow root level api_calls, so that user can overview the aggregated metrics of a flow.
- [Executor] Add @trace decorator to make it possible to log traces for functions that are called by tools.
- [SDK/CLI][azure] Switch automatic runtime's session provision to system wait.
- [SDK/CLI] Add `--skip-open-browser` option to `pf flow serve` to skip opening browser.
- [SDK/CLI][azure] Support submit flow to sovereign cloud.
- [SDK/CLI] Support `pf run delete` to delete a run irreversibly.
- [SDK/CLI][azure] Automatically put requirements.txt to flow.dag.yaml if exists in flow snapshot.
- [SDK/CLI] Support `pf upgrade` to upgrade prompt flow to the latest version.
- [SDK/CLI] Support env variables in yaml file.
### Bugs Fixed
- Fix unaligned inputs & outputs or pandas exception during get details against run in Azure.
- Fix loose flow path validation for run schema.
- Fix "Without Import Data" in run visualize page results from invalid JSON value (`-Infinity`, `Infinity` and `NaN`).
- Fix "ValueError: invalid width -1" when show-details against long column(s) in narrow terminal window.
- Fix invalid tool code generated when initializing the script tool with icon.
### Improvements
- [SDK/CLI] For `pfazure flow create`:
- If used by non-msft tenant user, use user name instead of user object id in the remote flow folder path. (e.g. `Users/<user-name>/promptflow`).
- When flow has unknown attributes, log warning instead of raising error.
- Use local flow folder name and timestamp as the azure flow file share folder name.
- [SDK/CLI] For `pf/pfazure run create`, when run has unknown attribute, log warning instead of raising error.
- Replace `pyyaml` with `ruamel.yaml` to adopt YAML 1.2 specification.
## 1.3.0 (2023.12.27)
### Features Added
- [SDK/CLI] Support `pfazure run cancel` to cancel a run on Azure AI.
- Add support to configure prompt flow home directory via environment variable `PF_HOME_DIRECTORY`.
- Please set before importing `promptflow`, otherwise it won't take effect.
- [Executor] Handle KeyboardInterrupt in flow test so that the final state is Canceled.
### Bugs Fixed
- [SDK/CLI] Fix single node run doesn't work when consuming sub item of upstream node
### Improvements
- Change `ruamel.yaml` lower bound to 0.17.10.
- [SDK/CLI] Improve `pfazure run download` to handle large run data files.
- [Executor] Exit the process when all async tools are done or exceeded timeout after cancellation.
## 1.2.0 (2023.12.14)
### Features Added
- [SDK/CLI] Support `pfazure run download` to download run data from Azure AI.
- [SDK/CLI] Support `pf run create` to create a local run record from downloaded run data.
### Bugs Fixed
- [SDK/CLI] Removing telemetry warning when running commands.
- Empty node stdout & stderr to avoid large visualize HTML.
- Hide unnecessary fields in run list for better readability.
- Fix bug that ignores timeout lines in batch run status summary.
## 1.1.1 (2023.12.1)
### Bugs Fixed
- [SDK/CLI] Fix compatibility issue with `semantic-kernel==0.4.0.dev0` and `azure-ai-ml==1.12.0`.
- [SDK/CLI] Add back workspace information in CLI telemetry.
- [SDK/CLI] Disable the feature to customize user agent in CLI to avoid changes on operation context.
- Fix openai metrics calculator to adapt openai v1.
## 1.1.0 (2023.11.30)
### Features Added
- Add `pfazure flow show/list` to show or list flows from Azure AI.
- Display node status in run visualize page graph view.
- Add support for image input and output in prompt flow.
- [SDK/CLI] SDK/CLI will collect telemetry by default, user can use `pf config set telemetry.enabled=false` to opt out.
- Add `raise_on_error` for stream run API, by default we raise for failed run.
- Flow as function: consume a flow like a function with parameters mapped to flow inputs.
- Enable specifying the default output path for run.
- Use `pf config set run.output_path=<output-path>` to specify, and the run output path will be `<output-path>/<run-name>`.
- Introduce macro `${flow_directory}` for `run.output_path` in config, which will be replaced with corresponding flow directory.
- The flow directory cannot be set as run output path, which means `pf config set run.output_path='${flow_directory}'` is invalid; but you can use child folder, e.g. `pf config set run.output_path='${flow_directory}/.runs'`.
- Support pfazure run create with remote flow.
- For remote workspace flow: `pfazure run create --flow azureml:<flow-name>`
- For remote registry flow: `pfazure run create --flow azureml://registries/<registry-name>/models/<flow-name>/versions/<flow-version>`
- Support set logging level via environment variable `PF_LOGGING_LEVEL`, valid values includes `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`, default to `INFO`.
- Remove openai version restrictions
### Bugs Fixed
- [SDK/CLI] Fix node test with dict node input will raise "Required input(s) missing".
- [SDK/CLI] Will use run name as display name when display name not specified (used flow folder name before).
- [SDK/CLI] Fix pf flow build created unexpected layer of dist folder
- [SDK/CLI] Fix deploy prompt flow: connections value may be none
### Improvements
- Force 'az login' if using azureml connection provider in cli command.
- Add env variable 'PF_NO_INTERACTIVE_LOGIN' to disable interactive login if using azureml connection provider in promptflow sdk.
- Improved CLI invoke time.
- Bump `pydash` upper bound to 8.0.0.
- Bump `SQLAlchemy` upper bound to 3.0.0.
- Bump `flask` upper bound to 4.0.0, `flask-restx` upper bound to 2.0.0.
- Bump `ruamel.yaml` upper bound to 1.0.0.
## 1.0.0 (2023.11.09)
### Features Added
- [Executor] Add `enable_kwargs` tag in tools.json for customer python tool.
- [SDK/CLI] Support `pfazure flow create`. Create a flow on Azure AI from local flow folder.
- [SDK/CLI] Changed column mapping `${run.inputs.xx}`'s behavior, it will refer to run's data columns instead of run's inputs columns.
### Bugs Fixed
- [SDK/CLI] Keep original format in run output.jsonl.
- [Executor] Fix the bug that raise an error when an aggregation node references a bypassed node
### Improvements
- [Executor] Set the outputs of the bypassed nodes as None
## 0.1.0b8 (2023.10.26)
### Features Added
- [Executor] Add average execution time and estimated execution time to batch run logs
- [SDK/CLI] Support `pfazure run archive/restore/update`.
- [SDK/CLI] Support custom strong type connection.
- [SDK/CLI] Enable telemetry and won't collect by default, use `pf config set cli.telemetry_enabled=true` to opt in.
- [SDK/CLI] Exposed function `from promptflow import load_run` to load run object from local YAML file.
- [Executor] Support `ToolProvider` for script tools.
### Bugs Fixed
- **pf config set**:
- Fix bug for workspace `connection.provider=azureml` doesn't work as expected.
- [SDK/CLI] Fix the bug that using sdk/cli to submit batch run did not display the log correctly.
- [SDK/CLI] Fix encoding issues when input is non-English with `pf flow test`.
- [Executor] Fix the bug can't read file containing "Private Use" unicode character.
- [SDK/CLI] Fix string type data will be converted to integer/float.
- [SDK/CLI] Remove the max rows limitation of loading data.
- [SDK/CLI] Fix the bug --set not taking effect when creating run from file.
### Improvements
- [SDK/CLI] Experience improvements in `pf run visualize` page:
- Add column status.
- Support opening flow file by clicking run id.
## 0.1.0b7.post1 (2023.09.28)
### Bug Fixed
- Fix extra dependency bug when importing `promptflow` without `azure-ai-ml` installed.
## 0.1.0b7 (2023.09.27)
### Features Added
- **pf flow validate**: support validate flow
- **pf config set**: support set user-level promptflow config.
- Support workspace connection provider, usage: `pf config set connection.provider=azureml://subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace_name>`
- Support override openai connection's model when submitting a flow. For example: `pf run create --flow ./ --data ./data.jsonl --connection llm.model=xxx --column-mapping url='${data.url}'`
### Bugs Fixed
- [Flow build] Fix flow build file name and environment variable name when connection name contains space.
- Reserve `.promptflow` folder when dump run snapshot.
- Read/write log file with encoding specified.
- Avoid inconsistent error message when executor exits abnormally.
- Align inputs & outputs row number in case partial completed run will break `pfazure run show-details`.
- Fix bug that failed to parse portal url for run data when the form is an asset id.
- Fix the issue of process hanging for a long time when running the batch run.
### Improvements
- [Executor][Internal] Improve error message with more details and actionable information.
- [SDK/CLI] `pf/pfazure run show-details`:
- Add `--max-results` option to control the number of results to display.
- Add `--all-results` option to display all results.
- Add validation for azure `PFClient` constructor in case wrong parameter is passed.
## 0.1.0b6 (2023.09.15)
### Features Added
- [promptflow][Feature] Store token metrics in run properties
### Bugs Fixed
- Refine error message body for flow_validator.py
- Refine error message body for run_tracker.py
- [Executor][Internal] Add some unit test to improve code coverage of log/metric
- [SDK/CLI] Update portal link to remove flight.
- [Executor][Internal] Improve inputs mapping's error message.
- [API] Resolve warnings/errors of sphinx build
## 0.1.0b5 (2023.09.08)
### Features Added
- **pf run visualize**: support lineage graph & display name in visualize page
### Bugs Fixed
- Add missing requirement `psutil` in `setup.py`
## 0.1.0b4 (2023.09.04)
### Features added
- Support `pf flow build` commands
## 0.1.0b3 (2023.08.30)
- Minor bug fixes.
## 0.1.0b2 (2023.08.29)
- First preview version with major CLI & SDK features.
### Features added
- **pf flow**: init/test/serve/export
- **pf run**: create/update/stream/list/show/show-details/show-metrics/visualize/archive/restore/export
- **pf connection**: create/update/show/list/delete
- Azure AI support:
- **pfazure run**: create/list/stream/show/show-details/show-metrics/visualize
## 0.1.0b1 (2023.07.20)
- Stub version in Pypi.
| promptflow/src/promptflow/CHANGELOG.md/0 | {
"file_path": "promptflow/src/promptflow/CHANGELOG.md",
"repo_id": "promptflow",
"token_count": 3253
} | 35 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import inspect
import json
import shutil
from abc import ABC, abstractmethod
from ast import literal_eval
from enum import Enum
from pathlib import Path
from jinja2 import Environment, Template, meta
from promptflow._sdk._constants import DEFAULT_ENCODING
from promptflow._sdk.operations._flow_operations import FlowOperations
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow.contracts.flow import Flow as ExecutableFlow
from promptflow.exceptions import UserErrorException
logger = get_cli_sdk_logger()
TEMPLATE_PATH = Path(__file__).parent.parent / "data" / "entry_flow"
CHAT_FLOW_TEMPLATE_PATH = Path(__file__).parent.parent / "data" / "chat_flow" / "template"
TOOL_TEMPLATE_PATH = Path(__file__).parent.parent / "data" / "package_tool"
EXTRA_FILES_MAPPING = {"requirements.txt": "requirements_txt", ".gitignore": "gitignore"}
SERVE_TEMPLATE_PATH = Path(__file__).resolve().parent.parent.parent / "_sdk" / "data" / "executable"
class BaseGenerator(ABC):
@property
@abstractmethod
def tpl_file(self):
pass
@property
@abstractmethod
def entry_template_keys(self):
pass
def generate(self) -> str:
"""Generate content based on given template and actual value of template keys."""
with open(self.tpl_file, encoding=DEFAULT_ENCODING) as f:
entry_template = f.read()
entry_template = Template(entry_template, trim_blocks=True, lstrip_blocks=True)
return entry_template.render(**{key: getattr(self, key) for key in self.entry_template_keys})
def generate_to_file(self, target):
"""Generate content to a file based on given template and actual value of template keys."""
target = Path(target).resolve()
action = "Overwriting" if target.exists() else "Creating"
print(f"{action} {target.resolve()}...")
with open(target, "w", encoding=DEFAULT_ENCODING) as f:
f.write(self.generate())
class ToolPyGenerator(BaseGenerator):
def __init__(self, entry, function, function_obj):
self.function_import = f"from {Path(entry).stem} import {function}"
self.entry_function = function
self.tool_function = f"{function}_tool"
# TODO: support default for tool args
self.tool_arg_list = inspect.signature(function_obj).parameters.values()
@property
def tpl_file(self):
return TEMPLATE_PATH / "tool.py.jinja2"
@property
def entry_template_keys(self):
return ["function_import", "entry_function", "tool_function", "tool_arg_list"]
class ValueType(str, Enum):
INT = "int"
DOUBLE = "double"
BOOL = "bool"
STRING = "string"
LIST = "list"
OBJECT = "object"
@staticmethod
def from_type(t: type):
if t == int:
return ValueType.INT
if t == float:
return ValueType.DOUBLE
if t == bool:
return ValueType.BOOL
if t == str:
return ValueType.STRING
if t == list:
return ValueType.LIST
return ValueType.OBJECT
class ToolMetaGenerator(BaseGenerator):
def __init__(self, tool_py, function, function_obj, prompt_params):
self.tool_file = tool_py
self.tool_function = f"{function}_tool"
# TODO: support default for tool meta args
self.tool_meta_args = self.get_tool_meta_args(function_obj)
self._prompt_params = prompt_params
@property
def prompt_params(self):
from promptflow._core.tool_meta_generator import generate_prompt_meta_dict
prompt_objs = {}
for key, file_name in self._prompt_params.items():
file_path = Path(file_name)
if not file_path.exists():
logger.warning(
f'Cannot find the prompt template "{file_name}", creating an empty prompt file in the flow...'
)
with open(file_path, "w") as f:
f.write("{# please enter your prompt content in this file. #}")
with open(file_name, "r") as f:
content = f.read()
name = Path(file_name).stem
prompt_objs[key] = generate_prompt_meta_dict(name, content, prompt_only=True, source=file_name)
return prompt_objs
def get_tool_meta_args(self, function_obj):
func_params = inspect.signature(function_obj).parameters
# TODO: Support enum/union in the future
return {k: ValueType.from_type(v.annotation).value for k, v in func_params.items()}
@property
def tpl_file(self):
return TEMPLATE_PATH / "flow.tools.json.jinja2"
@property
def entry_template_keys(self):
return ["prompt_params", "tool_file", "tool_meta_args", "tool_function"]
class FlowDAGGenerator(BaseGenerator):
def __init__(self, tool_py, function, function_obj, prompt_params):
self.tool_file = tool_py
self.main_node_name = function
self.prompt_params = prompt_params
self.setup_sh = None
self.python_requirements_txt = None
self._prompt_inputs = None
self._func_params = None
self._function_obj = function_obj
# Abstract prompt param from tool meta args
self.flow_inputs = self.get_flow_inputs(prompt_params)
def get_flow_inputs(self, prompt_params):
"""Generate the flow inputs"""
flow_inputs = {
k: ValueType.from_type(v.annotation).value for k, v in self.func_params.items() if k not in prompt_params
}
for prompt_inputs in self.prompt_inputs.values():
flow_inputs.update(prompt_inputs)
return flow_inputs
@property
def tpl_file(self):
return TEMPLATE_PATH / "flow.dag.yaml.jinja2"
@property
def func_params(self):
"""Generate function inputs without prompt templates."""
if self._func_params is None:
self._func_params = {
k: v for k, v in inspect.signature(self._function_obj).parameters.items() if k not in self.prompt_params
}
return self._func_params
@property
def prompt_inputs(self):
"""Generate prompt inputs."""
if self._prompt_inputs is None:
self._prompt_inputs = {}
for prompt_name, file_name in self.prompt_params.items():
try:
with open(file_name, "r") as f:
env = Environment()
ast = env.parse(f.read())
variables = meta.find_undeclared_variables(ast)
self._prompt_inputs[prompt_name] = {item: "string" for item in variables or []}
except Exception as e:
logger.warning(f"Get the prompt input from {file_name} failed, {e}.")
return self._prompt_inputs
@property
def entry_template_keys(self):
return [
"flow_inputs",
"main_node_name",
"prompt_params",
"tool_file",
"setup_sh",
"python_requirements_txt",
"prompt_inputs",
"func_params",
]
def generate_to_file(self, target):
# Get requirements.txt and setup.sh from target folder.
requirements_file = "requirements.txt"
if (Path(target).parent / requirements_file).exists():
self.python_requirements_txt = requirements_file
setup_file = "setup.sh"
if (Path(target).parent / setup_file).exists():
self.setup_sh = setup_file
super().generate_to_file(target=target)
class FlowMetaYamlGenerator(BaseGenerator):
def __init__(self, flow_name):
self.flow_name = flow_name
@property
def tpl_file(self):
return TEMPLATE_PATH / "flow.meta.yaml.jinja2"
@property
def entry_template_keys(self):
return ["flow_name"]
class StreamlitFileReplicator:
def __init__(self, flow_name, flow_dag_path):
self.flow_name = flow_name
self.flow_dag_path = Path(flow_dag_path)
self.executable = ExecutableFlow.from_yaml(
flow_file=Path(self.flow_dag_path.name), working_dir=self.flow_dag_path.parent
)
self.is_chat_flow, self.chat_history_input_name, error_msg = FlowOperations._is_chat_flow(self.executable)
@property
def flow_inputs(self):
if self.is_chat_flow:
results = {}
for flow_input, value in self.executable.inputs.items():
if value.is_chat_input:
if value.type.value not in [ValueType.STRING.value, ValueType.LIST.value]:
raise UserErrorException(
f"Only support string or list type for chat input, but got {value.type.value}."
)
results.update({flow_input: (value.default, value.type.value)})
else:
results = {
flow_input: (value.default, value.type.value) for flow_input, value in self.executable.inputs.items()
}
return results
@property
def label(self):
return "Chat" if self.is_chat_flow else "Run"
@property
def py_file(self):
return SERVE_TEMPLATE_PATH / "main.py"
@property
def flow_path(self):
return self.flow_dag_path.as_posix()
@property
def chat_output_name(self):
try:
output_name = next(
filter(
lambda key: self.executable.outputs[key].is_chat_output,
self.executable.outputs.keys(),
)
)
except StopIteration:
output_name = None
return output_name
@property
def is_streaming(self):
return True if self.is_chat_flow else False
@property
def entry_template_keys(self):
return [
"flow_name",
"flow_path",
"is_chat_flow",
"chat_history_input_name",
"flow_inputs",
"label",
"chat_output_name",
"is_streaming",
]
def generate_to_file(self, target):
if Path(target).name == "main.py":
target = Path(target).resolve()
shutil.copy(self.py_file, target)
config_content = {key: getattr(self, key) for key in self.entry_template_keys}
with open(target.parent / "config.json", "w") as file:
json.dump(config_content, file, indent=4)
else:
shutil.copy(SERVE_TEMPLATE_PATH / Path(target).name, target)
class ChatFlowDAGGenerator(BaseGenerator):
def __init__(self, connection, deployment):
self.connection = connection
self.deployment = deployment
@property
def tpl_file(self):
return CHAT_FLOW_TEMPLATE_PATH / "flow.dag.yaml.jinja2"
@property
def entry_template_keys(self):
return ["connection", "deployment"]
class AzureOpenAIConnectionGenerator(BaseGenerator):
def __init__(self, connection):
self.connection = connection
@property
def tpl_file(self):
return CHAT_FLOW_TEMPLATE_PATH / "azure_openai.yaml.jinja2"
@property
def entry_template_keys(self):
return ["connection"]
class OpenAIConnectionGenerator(BaseGenerator):
def __init__(self, connection):
self.connection = connection
@property
def tpl_file(self):
return CHAT_FLOW_TEMPLATE_PATH / "openai.yaml.jinja2"
@property
def entry_template_keys(self):
return ["connection"]
def copy_extra_files(flow_path, extra_files, overwrite=False):
for file_name in extra_files:
extra_file_path = (
Path(__file__).parent.parent / "data" / "entry_flow" / EXTRA_FILES_MAPPING.get(file_name, file_name)
)
target_path = Path(flow_path) / file_name
if target_path.exists() and not overwrite:
continue
action = "Overwriting" if target_path.exists() else "Creating"
print(f"{action} {target_path.resolve()}...")
shutil.copy2(extra_file_path, target_path)
class ToolPackageGenerator(BaseGenerator):
def __init__(self, tool_name, icon=None, extra_info=None):
self.tool_name = tool_name
self._extra_info = extra_info
self.icon = icon
@property
def extra_info(self):
if self._extra_info:
extra_info = {}
for k, v in self._extra_info.items():
try:
extra_info[k] = literal_eval(v)
except Exception:
extra_info[k] = repr(v)
return extra_info
else:
return {}
@property
def tpl_file(self):
return TOOL_TEMPLATE_PATH / "tool.py.jinja2"
@property
def entry_template_keys(self):
return ["tool_name", "extra_info", "icon"]
class SetupGenerator(BaseGenerator):
def __init__(self, package_name, tool_name):
self.package_name = package_name
self.tool_name = tool_name
@property
def tpl_file(self):
return TOOL_TEMPLATE_PATH / "setup.py.jinja2"
@property
def entry_template_keys(self):
return ["package_name", "tool_name"]
class ToolPackageUtilsGenerator(BaseGenerator):
def __init__(self, package_name):
self.package_name = package_name
@property
def tpl_file(self):
return TOOL_TEMPLATE_PATH / "utils.py.jinja2"
@property
def entry_template_keys(self):
return ["package_name"]
class ToolReadmeGenerator(BaseGenerator):
def __init__(self, package_name, tool_name):
self.package_name = package_name
self.tool_name = tool_name
@property
def tpl_file(self):
return TOOL_TEMPLATE_PATH / "README.md.jinja2"
@property
def entry_template_keys(self):
return ["package_name", "tool_name"]
class InitGenerator(BaseGenerator):
@property
def tpl_file(self):
return TOOL_TEMPLATE_PATH / "init.py"
@property
def entry_template_keys(self):
pass
def generate(self) -> str:
with open(self.tpl_file) as f:
init_content = f.read()
return init_content
# File: promptflow/src/promptflow/promptflow/_cli/_pf/_init_entry_generators.py
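The `prompt_inputs` property above discovers template variables by parsing each prompt file with Jinja2's `meta.find_undeclared_variables`. A minimal standalone sketch of that mechanism (assuming `jinja2` is installed, as promptflow requires; `extract_prompt_inputs` is an illustrative name, not the promptflow API):

```python
from jinja2 import Environment, meta


def extract_prompt_inputs(template_text: str) -> dict:
    """Return a {variable_name: "string"} mapping for every undeclared
    variable referenced in a Jinja2 prompt template."""
    env = Environment()
    ast = env.parse(template_text)
    variables = meta.find_undeclared_variables(ast)
    return {name: "string" for name in variables}


inputs = extract_prompt_inputs("Answer {{ question }} using {{ context }}.")
```

Undeclared variables are exactly those a caller must supply at render time, which is why the generator treats them as flow inputs.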
from setuptools import find_packages, setup
PACKAGE_NAME = "{{ package_name }}"
setup(
name=PACKAGE_NAME,
version="0.0.1",
description="This is my tools package",
packages=find_packages(),
entry_points={
"package_tools": ["{{ package_name }} = {{ package_name }}.utils:list_package_tools"],
},
install_requires=[
"promptflow",
]
)
# File: promptflow/src/promptflow/promptflow/_cli/data/package_tool/setup.py.jinja2
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import sys
from contextvars import ContextVar
from datetime import datetime, timezone
from io import StringIO, TextIOBase
from typing import Dict
from promptflow._utils.logger_utils import flow_logger, logger, scrub_credentials
class NodeInfo:
def __init__(self, run_id: str, node_name: str, line_number: int):
self.run_id = run_id
self.node_name = node_name
self.line_number = line_number
def __str__(self) -> str:
return f"{self.node_name} in line {self.line_number} (index starts from 0)"
class NodeLogManager:
"""Replace sys.stdout and sys.stderr with NodeLogWriter.
This class intercepts logs written to stdout/stderr during node execution and saves them per run. For example:
with NodeLogManager() as log_manager:
print('test stdout')
print('test stderr', file=sys.stderr)
log_manager.get_logs() will return: {'stdout': 'test stdout\n', 'stderr': 'test stderr\n'}
"""
def __init__(self, record_datetime=True):
self.stdout_logger = NodeLogWriter(sys.stdout, record_datetime)
self.stderr_logger = NodeLogWriter(sys.stderr, record_datetime, is_stderr=True)
self.log_handler = None
def __enter__(self):
"""Replace sys.stdout and sys.stderr with NodeLogWriter."""
self._prev_stdout = sys.stdout
self._prev_stderr = sys.stderr
sys.stdout = self.stdout_logger
sys.stderr = self.stderr_logger
return self
def __exit__(self, *args):
"""Restore sys.stdout and sys.stderr."""
sys.stdout = self._prev_stdout
sys.stderr = self._prev_stderr
def set_node_context(self, run_id: str, node_name: str, line_number: int):
"""Set node context."""
self.stdout_logger.set_node_info(run_id, node_name, line_number)
self.stderr_logger.set_node_info(run_id, node_name, line_number)
def clear_node_context(self, run_id):
"""Clear node context."""
self.stdout_logger.clear_node_info(run_id)
self.stderr_logger.clear_node_info(run_id)
def get_logs(self, run_id) -> Dict[str, str]:
return {
"stdout": self.stdout_logger.get_log(run_id),
"stderr": self.stderr_logger.get_log(run_id),
}
class NodeLogWriter(TextIOBase):
"""Record node run logs."""
DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S%z"
def __init__(self, prev_stdout, record_datetime=True, is_stderr=False):
self.run_id_to_stdout = dict()
self._context = ContextVar("run_log_info", default=None)
self._prev_out = prev_stdout
self._record_datetime = record_datetime
self._is_stderr = is_stderr
def set_node_info(self, run_id: str, node_name: str, line_number: int = None):
"""Set node info to a context variable.
After set node info, write method will write to stringio associated with this node.
"""
run_log_info = NodeInfo(run_id, node_name, line_number)
self._context.set(run_log_info)
self.run_id_to_stdout.update({run_id: StringIO()})
def clear_node_info(self, run_id: str):
"""Clear context variable associated with run id."""
log_info: NodeInfo = self._context.get()
if log_info and log_info.run_id == run_id:
self._context.set(None)
if run_id in self.run_id_to_stdout:
self.run_id_to_stdout.pop(run_id)
def get_log(self, run_id: str) -> str:
"""Get log associated with run id."""
string_io: StringIO = self.run_id_to_stdout.get(run_id)
if string_io is None:
return None
return string_io.getvalue()
def write(self, s: str):
"""Override TextIO's write method and writes input string into a stringio
Credentials are scrubbed from the string before it is written.
The string is also recorded to flow/bulk logger.
If node info is not set, write to previous stdout.
"""
log_info: NodeInfo = self._context.get()
s = scrub_credentials(s) # Remove credential from string.
if log_info is None:
self._prev_out.write(s)
else:
self._write_to_flow_log(log_info, s)
stdout: StringIO = self.run_id_to_stdout.get(log_info.run_id)
if self._record_datetime and s != "\n":  # Do not add a datetime prefix to bare line breaks.
s = f"[{datetime.now(timezone.utc).strftime(self.DATETIME_FORMAT)}] {s}"
stdout.write(s)
def flush(self):
"""Override TextIO's flush method."""
node_info: NodeInfo = self._context.get()
if node_info is None:
self._prev_out.flush()
else:
string_io = self.run_id_to_stdout.get(node_info.run_id)
if string_io is not None:
string_io.flush()
def _write_to_flow_log(self, log_info: NodeInfo, s: str):
"""Save stdout log to flow_logger and stderr log to logger."""
# If user uses "print('log message.')" to log, then
# "write" method will be called twice and the second time input is only '\n'.
# For this case, should not log '\n' in flow_logger.
if s != "\n":
if self._is_stderr:
flow_log = f"[{str(log_info)}] stderr> " + s.rstrip("\n")
# Log stderr in all scenarios so we can diagnose problems.
logger.warning(flow_log)
else:
flow_log = f"[{str(log_info)}] stdout> " + s.rstrip("\n")
# Log stdout only in flow mode.
flow_logger.info(flow_log)
# File: promptflow/src/promptflow/promptflow/_core/log_manager.py
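`NodeLogWriter` above multiplexes `sys.stdout` across concurrently running nodes by keying a per-run `StringIO` buffer on a `ContextVar`, so each asyncio task or thread writes into its own buffer. A stripped-down sketch of the same redirect pattern (illustrative names, not the promptflow API):

```python
import sys
from contextvars import ContextVar
from io import StringIO, TextIOBase


class RunLogWriter(TextIOBase):
    """Route writes to a per-run buffer when a run id is set; else pass through."""

    def __init__(self, prev_stdout):
        self._prev = prev_stdout
        self._current_run = ContextVar("current_run", default=None)
        self._buffers = {}

    def set_run(self, run_id):
        # Bind this context to a run and create its capture buffer.
        self._current_run.set(run_id)
        self._buffers[run_id] = StringIO()

    def write(self, s):
        run_id = self._current_run.get()
        if run_id is None:
            return self._prev.write(s)  # no active run: behave like plain stdout
        return self._buffers[run_id].write(s)

    def get_log(self, run_id):
        buf = self._buffers.get(run_id)
        return buf.getvalue() if buf is not None else None


writer = RunLogWriter(sys.stdout)
writer.set_run("run-1")
print("captured", file=writer)
```

Because `ContextVar` values are per-context, two nodes printing at the same time never interleave into each other's buffers.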
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from os import PathLike
from pathlib import Path
from typing import IO, AnyStr, Optional, Union
from dotenv import dotenv_values
from .._utils.logger_utils import get_cli_sdk_logger
from .._utils.yaml_utils import load_yaml
from .entities import Run
from .entities._connection import CustomConnection, _Connection
from .entities._flow import Flow
logger = get_cli_sdk_logger()
def load_common(
cls,
source: Union[str, PathLike, IO[AnyStr]],
relative_origin: str = None,
params_override: Optional[list] = None,
**kwargs,
):
"""Private function to load a yaml file to an entity object.
:param cls: The entity class type.
:type cls: type[Resource]
:param source: A source of yaml.
:type source: Union[str, PathLike, IO[AnyStr]]
:param relative_origin: The origin of to be used when deducing
the relative locations of files referenced in the parsed yaml.
Must be provided, and is assumed to be assigned by other internal
functions that call this.
:type relative_origin: str
:param params_override: Fields to overwrite on top of the yaml file, defaults to None
:type params_override: list, optional
"""
if relative_origin is None:
if isinstance(source, (str, PathLike)):
relative_origin = source
else:
try:
relative_origin = source.name
except AttributeError:  # source is a stream without a name attribute
relative_origin = "./"
params_override = params_override or []
yaml_dict = load_yaml(source)
logger.debug(f"Resolve cls and type with {yaml_dict}, params_override {params_override}.")
# pylint: disable=protected-access
cls, type_str = cls._resolve_cls_and_type(data=yaml_dict, params_override=params_override)
try:
return cls._load(
data=yaml_dict,
yaml_path=relative_origin,
params_override=params_override,
**kwargs,
)
except Exception as e:
raise Exception(f"Load entity error: {e}") from e
def load_flow(
source: Union[str, PathLike, IO[AnyStr]],
*,
entry: str = None,
**kwargs,
) -> Flow:
"""Load flow from YAML file.
:param source: The local yaml source of a flow. Must be a path to a local file.
If the source is a path, it will be open and read.
An exception is raised if the file does not exist.
:type source: Union[PathLike, str]
:param entry: The entry function, only works when source is a code file.
:type entry: str
:return: A Flow object
:rtype: Flow
"""
return Flow.load(source, entry=entry, **kwargs)
def load_run(
source: Union[str, PathLike, IO[AnyStr]],
params_override: Optional[list] = None,
**kwargs,
) -> Run:
"""Load run from YAML file.
:param source: The local yaml source of a run. Must be a path to a local file.
If the source is a path, it will be open and read.
An exception is raised if the file does not exist.
:type source: Union[PathLike, str]
:param params_override: Fields to overwrite on top of the yaml file.
Format is [{"field1": "value1"}, {"field2": "value2"}]
:type params_override: List[Dict]
:return: A Run object
:rtype: Run
"""
data = load_yaml(source=source)
return Run._load(data=data, yaml_path=source, params_override=params_override, **kwargs)
def load_connection(
source: Union[str, PathLike, IO[AnyStr]],
**kwargs,
):
if Path(source).name.endswith(".env"):
return _load_env_to_connection(source, **kwargs)
return load_common(_Connection, source, **kwargs)
def _load_env_to_connection(
source,
params_override: Optional[list] = None,
**kwargs,
):
source = Path(source)
name = next((_dct["name"] for _dct in params_override if "name" in _dct), None)
if not name:
raise Exception("Please specify --name when creating connection from .env.")
if not source.exists():
raise FileNotFoundError(f"File {source.absolute().as_posix()!r} not found.")
try:
data = dict(dotenv_values(source))
if not data:
# Handle the special case where dotenv returns empty with no exception raised.
raise ValueError(
f"Load nothing from dotenv file {source.absolute().as_posix()!r}, "
"please make sure the file is not empty and readable."
)
return CustomConnection(name=name, secrets=data)
except Exception as e:
raise Exception(f"Load entity error: {e}") from e
# File: promptflow/src/promptflow/promptflow/_sdk/_load_functions.py
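`_load_env_to_connection` above relies on `dotenv_values` to read key/value pairs from a `.env` file into a `CustomConnection`'s secrets. A rough pure-stdlib stand-in for that parsing step (it ignores quoting edge cases and multi-line values that python-dotenv handles):

```python
def parse_env_text(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    data = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "=" not in line:
            continue  # skip malformed lines
        key, _, value = line.partition("=")
        data[key.strip()] = value.strip().strip('"').strip("'")
    return data


secrets = parse_env_text("API_KEY=abc123\n# comment\nENDPOINT='https://example'")
```

An empty result here is exactly the case the loader guards against with its "Load nothing from dotenv file" error.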
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from flask import jsonify, make_response, request
from flask_restx import fields
from promptflow._sdk._service import Namespace, Resource
from promptflow._sdk._service.utils.utils import build_pfs_user_agent, local_user_only
from promptflow._sdk._telemetry import ActivityCompletionStatus, ActivityType
from promptflow._utils.utils import camel_to_snake
from promptflow.exceptions import UserErrorException
api = Namespace("Telemetries", description="Telemetry Management")
class EventType:
START = "Start"
END = "End"
class AllowedActivityName:
FLOW_TEST = "pf.flow.test"
FLOW_NODE_TEST = "pf.flow.node_test"
GENERATE_TOOL_META = "pf.flow._generate_tools_meta"
REQUEST_ID_KEY = "x-ms-promptflow-request-id"
def _dict_camel_to_snake(data):
if isinstance(data, dict):
result = {}
for key, value in data.items():
result[camel_to_snake(key)] = _dict_camel_to_snake(value)
return result
else:
return data
def parse_activity_info(metadata, first_call, user_agent, request_id):
return {
"request_id": request_id,
"first_call": first_call,
"user_agent": user_agent,
**_dict_camel_to_snake(metadata),
}
def validate_metadata(value: dict) -> dict:
allowed_activity_names = [
AllowedActivityName.FLOW_TEST,
AllowedActivityName.FLOW_NODE_TEST,
AllowedActivityName.GENERATE_TOOL_META,
]
if value.get("activityName", None) not in allowed_activity_names:
raise UserErrorException(f"metadata.activityName must be one of {', '.join(allowed_activity_names)}.")
allowed_activity_types = [
ActivityType.INTERNALCALL,
ActivityType.PUBLICAPI,
]
if value.get("activityType") not in allowed_activity_types:
raise UserErrorException(f"metadata.activityType must be one of {', '.join(allowed_activity_types)}.")
return value
def validate_metadata_based_on_event_type(metadata: dict, event_type: str):
if event_type == EventType.END:
if not all(
key in metadata
for key in (
"completionStatus", # End event should have completionStatus
"durationMs", # End event should have durationMs
)
):
missing_fields = {"completionStatus", "durationMs"} - set(metadata.keys())
raise UserErrorException(f"Missing required fields in telemetry metadata: {', '.join(missing_fields)}")
if metadata.get("completionStatus") == ActivityCompletionStatus.FAILURE:
if not all(
key in metadata
for key in (
"errorCategory", # Failure event should have errorCategory
"errorType", # Failure event should have errorType
"errorTarget", # Failure event should have errorTarget
"errorMessage", # Failure event should have errorMessage
)
):
missing_fields = {"errorCategory", "errorType", "errorTarget", "errorMessage"} - set(metadata.keys())
raise UserErrorException(f"Missing required fields in telemetry payload: {', '.join(missing_fields)}")
def validate_event_type(value) -> str:
if value not in (EventType.START, EventType.END):
raise ValueError(f"Event type must be one of {EventType.START} and {EventType.END}.")
return value
metadata_model = api.model(
"Metadata",
{
"activityName": fields.String(
required=True,
description="The name of the activity.",
enum=[
AllowedActivityName.FLOW_TEST,
AllowedActivityName.FLOW_NODE_TEST,
AllowedActivityName.GENERATE_TOOL_META,
],
),
"activityType": fields.String(required=True, description="The type of the activity."),
"completionStatus": fields.String(
required=False,
description="The completion status of the activity.",
enum=[ActivityCompletionStatus.SUCCESS, ActivityCompletionStatus.FAILURE],
),
"durationMs": fields.Integer(required=False, description="The duration of the activity in milliseconds."),
"errorCategory": fields.String(required=False, description="The error category of the activity."),
"errorType": fields.String(required=False, description="The error type of the activity."),
"errorTarget": fields.String(required=False, description="The error target of the activity."),
"errorMessage": fields.String(required=False, description="The error message of the activity."),
"errorDetails": fields.String(required=False, description="The error details of the activity."),
},
)
telemetry_model = api.model(
"Telemetry",
{
"eventType": fields.String(
required=True,
description="The event type of the telemetry.",
enum=[EventType.START, EventType.END],
),
"timestamp": fields.DateTime(required=True, description="The timestamp of the telemetry."),
"firstCall": fields.Boolean(
required=False,
default=True,
description="Whether current activity is the first activity in the call chain.",
),
"metadata": fields.Nested(metadata_model),
},
)
@api.route("/")
class Telemetry(Resource):
@api.header(REQUEST_ID_KEY, type=str)
@api.response(code=200, description="Create telemetry record")
@api.response(code=400, description="Input payload validation failed")
@api.doc(description="Create telemetry record")
@api.expect(telemetry_model)
@local_user_only
@api.response(code=403, description="Telemetry is disabled or X-Remote-User is not set.")
def post(self):
from promptflow._sdk._telemetry import get_telemetry_logger, is_telemetry_enabled
from promptflow._sdk._telemetry.activity import log_activity_end, log_activity_start
if not is_telemetry_enabled():
return make_response(
jsonify(
{
"message": "Telemetry is disabled, you may re-enable it "
"via `pf config set telemetry.enabled=true`."
}
),
403,
)
request_id = request.headers.get(REQUEST_ID_KEY)
try:
validate_metadata_based_on_event_type(api.payload["metadata"], api.payload["eventType"])
except UserErrorException as exception:
return make_response(
jsonify({"errors": {"metadata": str(exception)}, "message": "Input payload validation failed"}), 400
)
activity_info = parse_activity_info(
metadata=api.payload["metadata"],
first_call=api.payload.get("firstCall", True),
user_agent=build_pfs_user_agent(),
request_id=request_id,
)
if api.payload["eventType"] == EventType.START:
log_activity_start(activity_info, get_telemetry_logger())
elif api.payload["eventType"] == EventType.END:
log_activity_end(activity_info, get_telemetry_logger())
return jsonify(
{
"status": ActivityCompletionStatus.SUCCESS,
}
)
# File: promptflow/src/promptflow/promptflow/_sdk/_service/apis/telemetry.py
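`_dict_camel_to_snake` above recursively renames telemetry payload keys from camelCase to snake_case via promptflow's `camel_to_snake` util. A self-contained sketch with a regex-based converter (the exact util implementation is an assumption here):

```python
import re


def camel_to_snake(name: str) -> str:
    """Convert e.g. 'durationMs' -> 'duration_ms'."""
    # Insert an underscore before every uppercase letter that is not at the start.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()


def dict_camel_to_snake(data):
    """Recursively snake-case the keys of nested dicts; leave other values alone."""
    if isinstance(data, dict):
        return {camel_to_snake(k): dict_camel_to_snake(v) for k, v in data.items()}
    return data


payload = dict_camel_to_snake({"activityName": "pf.flow.test", "durationMs": 12})
```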
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import os
from abc import ABC, abstractmethod
from pathlib import Path
from promptflow._constants import DEFAULT_ENCODING
from promptflow._sdk._configuration import Configuration
from promptflow._sdk._serving.blueprint.monitor_blueprint import construct_monitor_blueprint
from promptflow._sdk._serving.blueprint.static_web_blueprint import construct_staticweb_blueprint
from promptflow._sdk._serving.monitor.flow_monitor import FlowMonitor
from promptflow._utils.yaml_utils import load_yaml
from promptflow._version import VERSION
from promptflow.contracts.flow import Flow
USER_AGENT = f"promptflow-local-serving/{VERSION}"
DEFAULT_STATIC_PATH = Path(__file__).parent.parent / "static"
class AppExtension(ABC):
def __init__(self, logger, **kwargs):
self.logger = logger
@abstractmethod
def get_flow_project_path(self) -> str:
"""Get flow project path."""
pass
@abstractmethod
def get_flow_name(self) -> str:
"""Get flow name."""
pass
@abstractmethod
def get_connection_provider(self) -> str:
"""Get connection provider."""
pass
@abstractmethod
def get_blueprints(self):
"""Get blueprints for current extension."""
pass
def get_override_connections(self, flow: Flow) -> (dict, dict):
"""
Get override connections for current extension.
:param flow: The flow to execute.
:type flow: ~promptflow._sdk.entities._flow.Flow
:return: The override connections, first dict is for connection data override, second dict is for connection name override. # noqa: E501
:rtype: (dict, dict)
"""
return {}, {}
def raise_ex_on_invoker_initialization_failure(self, ex: Exception):
"""
Whether to raise an exception when flow invoker initialization fails.
:param ex: The exception when initializing flow invoker.
:type ex: Exception
:return: Whether to raise exception when initializing flow invoker failed.
"""
return True
def get_user_agent(self) -> str:
"""Get user agent used for current extension."""
return USER_AGENT
def get_credential(self):
"""Get credential for current extension."""
return None
def get_metrics_common_dimensions(self):
"""Get common dimensions for metrics if exist."""
return self._get_common_dimensions_from_env()
def get_flow_monitor(self) -> FlowMonitor:
"""Get flow monitor for current extension."""
# default no data collector, no app insights metric exporter
return FlowMonitor(self.logger, self.get_flow_name(), None, metrics_recorder=None)
def _get_mlflow_project_path(self, project_path: str):
# check whether it's mlflow model
mlflow_metadata_file = os.path.join(project_path, "MLmodel")
if os.path.exists(mlflow_metadata_file):
with open(mlflow_metadata_file, "r", encoding=DEFAULT_ENCODING) as fin:
mlflow_metadata = load_yaml(fin)
flow_entry = mlflow_metadata.get("flavors", {}).get("promptflow", {}).get("entry")
if flow_entry:  # only treat as an mlflow model when a promptflow entry is present
dag_path = os.path.join(project_path, flow_entry)
return str(Path(dag_path).parent.absolute())
return project_path
def _get_common_dimensions_from_env(self):
common_dimensions_str = os.getenv("PF_SERVING_METRICS_COMMON_DIMENSIONS", None)
if common_dimensions_str:
try:
common_dimensions = json.loads(common_dimensions_str)
return common_dimensions
except Exception as ex:
self.logger.warn(f"Failed to parse common dimensions with value={common_dimensions_str}: {ex}")
return {}
def _get_default_blueprints(self, static_folder=None):
static_web_blueprint = construct_staticweb_blueprint(static_folder)
monitor_blueprint = construct_monitor_blueprint(self.get_flow_monitor())
return [static_web_blueprint, monitor_blueprint]
class DefaultAppExtension(AppExtension):
"""default app extension for local serve."""
def __init__(self, logger, **kwargs):
self.logger = logger
static_folder = kwargs.get("static_folder", None)
self.static_folder = static_folder if static_folder else DEFAULT_STATIC_PATH
logger.info(f"Static_folder: {self.static_folder}")
app_config = kwargs.get("config", None) or {}
pf_config = Configuration(overrides=app_config)
logger.info(f"Promptflow config: {pf_config}")
self.connection_provider = pf_config.get_connection_provider()
def get_flow_project_path(self) -> str:
return os.getenv("PROMPTFLOW_PROJECT_PATH", ".")
def get_flow_name(self) -> str:
project_path = self.get_flow_project_path()
return Path(project_path).stem
def get_connection_provider(self) -> str:
return self.connection_provider
def get_blueprints(self):
return self._get_default_blueprints(self.static_folder)
# File: promptflow/src/promptflow/promptflow/_sdk/_serving/extension/default_extension.py
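`_get_common_dimensions_from_env` above tolerantly parses a JSON-valued environment variable into metric dimensions, falling back to `{}` on anything missing or malformed rather than failing serving startup. A standalone sketch of that pattern:

```python
import json
import os


def common_dimensions_from_env(var="PF_SERVING_METRICS_COMMON_DIMENSIONS"):
    """Read metric dimensions from a JSON-valued env var; {} on missing/invalid."""
    raw = os.getenv(var)
    if not raw:
        return {}
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}


os.environ["PF_SERVING_METRICS_COMMON_DIMENSIONS"] = '{"region": "eastus"}'
dims = common_dimensions_from_env()
```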
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# this file is a middle layer between the local SDK and executor, it'll have some similar logic with cloud PFS.
import datetime
from pathlib import Path
from typing import Union
from promptflow._constants import FlowLanguage
from promptflow._sdk._constants import FlowRunProperties
from promptflow._sdk._utils import parse_variant
from promptflow._sdk.entities._flow import ProtectedFlow
from promptflow._sdk.entities._run import Run
from promptflow._sdk.operations._local_storage_operations import LocalStorageOperations
from promptflow._sdk.operations._run_operations import RunOperations
from promptflow._utils.context_utils import _change_working_dir
from promptflow.batch import BatchEngine
from promptflow.contracts.run_info import Status
from promptflow.contracts.run_mode import RunMode
from promptflow.exceptions import UserErrorException, ValidationException
from ..._utils.logger_utils import LoggerFactory
from .._load_functions import load_flow
from ..entities._eager_flow import EagerFlow
from .utils import SubmitterHelper, variant_overwrite_context
logger = LoggerFactory.get_logger(name=__name__)
class RunSubmitter:
"""Submit run to executor."""
def __init__(self, run_operations: RunOperations):
self.run_operations = run_operations
def submit(self, run: Run, stream=False, **kwargs):
self._run_bulk(run=run, stream=stream, **kwargs)
return self.run_operations.get(name=run.name)
def _run_bulk(self, run: Run, stream=False, **kwargs):
# validate & resolve variant
if run.variant:
tuning_node, variant = parse_variant(run.variant)
else:
tuning_node, variant = None, None
if run.run is not None:
if isinstance(run.run, str):
run.run = self.run_operations.get(name=run.run)
elif not isinstance(run.run, Run):
error = TypeError(f"Referenced run must be a Run instance, got {type(run.run)}")
raise UserErrorException(message=str(error), error=error)
else:
# get the run again to make sure its status is latest
run.run = self.run_operations.get(name=run.run.name)
if run.run.status != Status.Completed.value:
error = ValueError(f"Referenced run {run.run.name} is not completed, got status {run.run.status}")
raise UserErrorException(message=str(error), error=error)
run.run.outputs = self.run_operations._get_outputs(run.run)
self._validate_inputs(run=run)
local_storage = LocalStorageOperations(run, stream=stream, run_mode=RunMode.Batch)
with local_storage.logger:
if local_storage.eager_mode:
flow_obj = load_flow(source=run.flow)
self._submit_bulk_run(flow=flow_obj, run=run, local_storage=local_storage)
else:
# running specified variant
with variant_overwrite_context(run.flow, tuning_node, variant, connections=run.connections) as flow:
self._submit_bulk_run(flow=flow, run=run, local_storage=local_storage)
@classmethod
def _validate_inputs(cls, run: Run):
if not run.run and not run.data:
error = ValidationException("Either run or data must be specified for flow run.")
raise UserErrorException(message=str(error), error=error)
def _submit_bulk_run(
self, flow: Union[ProtectedFlow, EagerFlow], run: Run, local_storage: LocalStorageOperations
) -> dict:
logger.info(f"Submitting run {run.name}, reach logs at {local_storage.logger.file_path}.")
run_id = run.name
if flow.language == FlowLanguage.CSharp:
connections = []
else:
with _change_working_dir(flow.code):
connections = SubmitterHelper.resolve_connections(flow=flow)
column_mapping = run.column_mapping
# resolve environment variables
run.environment_variables = SubmitterHelper.load_and_resolve_environment_variables(
flow=flow, environment_variables=run.environment_variables
)
SubmitterHelper.init_env(environment_variables=run.environment_variables)
# prepare data
input_dirs = self._resolve_input_dirs(run)
self._validate_column_mapping(column_mapping)
batch_result = None
status = Status.Failed.value
exception = None
# Create the run record in DB only once it is fully prepared to run in the executor.
run._dump() # pylint: disable=protected-access
try:
batch_engine = BatchEngine(
flow.path,
flow.code,
connections=connections,
entry=flow.entry if isinstance(flow, EagerFlow) else None,
storage=local_storage,
log_path=local_storage.logger.file_path,
)
batch_result = batch_engine.run(
input_dirs=input_dirs,
inputs_mapping=column_mapping,
output_dir=local_storage.outputs_folder,
run_id=run_id,
)
error_logs = []
if batch_result.failed_lines > 0:
# Log warning message when there are failed line run in bulk run.
error_logs.append(
f"{batch_result.failed_lines} out of {batch_result.total_lines} runs failed in batch run."
)
if batch_result.error_summary.aggr_error_dict:
# log warning message when there are failed aggregation nodes in bulk run.
aggregation_nodes = list(batch_result.error_summary.aggr_error_dict.keys())
error_logs.append(f"aggregation nodes {aggregation_nodes} failed in batch run.")
# update error log
if error_logs and run.properties.get(FlowRunProperties.OUTPUT_PATH, None):
error_logs.append(
f" Please check out {run.properties[FlowRunProperties.OUTPUT_PATH]} for more details."
)
if error_logs:
logger.warning("\n".join(error_logs))
# The bulk run is completed if the batch_engine.run successfully completed.
status = Status.Completed.value
except Exception as e:
# when run failed in executor, store the exception in result and dump to file
logger.warning(f"Run {run.name} failed when executing in executor with exception {e}.")
exception = e
# for user errors, swallow the stack trace and return the failed run, since users don't need it
if not isinstance(e, UserErrorException):
# for other errors, raise it to user to help debug root cause.
raise e
# won't raise the exception since it's already included in run object.
finally:
# persist snapshot and result
# snapshot: flow directory
local_storage.dump_snapshot(flow)
# persist inputs, outputs and metrics
local_storage.persist_result(batch_result)
# exceptions
local_storage.dump_exception(exception=exception, batch_result=batch_result)
# system metrics: token related
system_metrics = batch_result.system_metrics.to_dict() if batch_result else {}
self.run_operations.update(
name=run.name,
status=status,
end_time=datetime.datetime.now(),
system_metrics=system_metrics,
)
def _resolve_input_dirs(self, run: Run):
result = {"data": run.data if run.data else None}
if run.run is not None:
result.update(
{
"run.outputs": self.run_operations._get_outputs_path(run.run),
# to align with cloud behavior, run.inputs should refer to original data
"run.inputs": self.run_operations._get_data_path(run.run),
}
)
return {k: str(Path(v).resolve()) for k, v in result.items() if v is not None}
@classmethod
def _validate_column_mapping(cls, column_mapping: dict):
if not column_mapping:
return
if not isinstance(column_mapping, dict):
raise ValidationException(f"Column mapping must be a dict, got {type(column_mapping)}.")
all_static = True
for v in column_mapping.values():
if isinstance(v, str) and v.startswith("$"):
all_static = False
break
if all_static:
raise ValidationException(
"Column mapping must contain at least one mapping binding, "
f"current column mapping contains all static values: {column_mapping}"
)
| promptflow/src/promptflow/promptflow/_sdk/_submitter/run_submitter.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_submitter/run_submitter.py",
"repo_id": "promptflow",
"token_count": 3850
} | 42 |
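The `_validate_column_mapping` guard above rejects mappings that contain only static values, i.e. no `${...}` bindings. A standalone sketch of the same rule (using a plain `ValueError` instead of the SDK's `ValidationException`):

```python
def validate_column_mapping(column_mapping: dict) -> None:
    # Mirrors the SDK check: an empty mapping is a no-op, but a non-empty one
    # must contain at least one "$"-prefixed binding such as "${data.question}".
    if not column_mapping:
        return
    if not isinstance(column_mapping, dict):
        raise ValueError(f"Column mapping must be a dict, got {type(column_mapping)}.")
    has_binding = any(isinstance(v, str) and v.startswith("$") for v in column_mapping.values())
    if not has_binding:
        raise ValueError(
            "Column mapping must contain at least one mapping binding, "
            f"current column mapping contains all static values: {column_mapping}"
        )

validate_column_mapping({"question": "${data.question}", "mode": "fast"})  # accepted
try:
    validate_column_mapping({"mode": "fast"})
    rejected = False
except ValueError:
    rejected = True
```

A mapping mixing bindings and static values passes; an all-static mapping is rejected up front, before any batch work starts.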
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Run Details</title>
</head>
<body>
<div id="root"></div>
<script>
window.bulk_test_details_data = {{ data }}
</script>
<script src="{{ js_path }}"></script>
</body>
</html>
| promptflow/src/promptflow/promptflow/_sdk/data/visualize.j2/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/data/visualize.j2",
"repo_id": "promptflow",
"token_count": 154
} | 43 |
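The `visualize.j2` template above injects the run data and a JS bundle path into a static HTML page. A minimal sketch of that substitution, using plain string replacement instead of the Jinja2 engine the SDK actually uses (the `data` and `js_path` values here are made up for illustration):

```python
import json

# A trimmed-down stand-in for the two templated lines in visualize.j2.
TEMPLATE = (
    "<script>window.bulk_test_details_data = {{ data }}</script>\n"
    '<script src="{{ js_path }}"></script>'
)

def render(template: str, **context: str) -> str:
    # Good enough for a sketch; Jinja2 additionally handles escaping, loops, filters, etc.
    for key, value in context.items():
        template = template.replace("{{ " + key + " }}", value)
    return template

html = render(TEMPLATE, data=json.dumps({"runs": []}), js_path="bulkTestDetails.min.js")
print(html)
```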
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import contextlib
import glob
import json
import os
import shutil
import subprocess
import sys
from importlib.metadata import version
from os import PathLike
from pathlib import Path
from typing import Dict, Iterable, List, Tuple, Union
from promptflow._constants import LANGUAGE_KEY, FlowLanguage
from promptflow._sdk._constants import (
CHAT_HISTORY,
DEFAULT_ENCODING,
FLOW_TOOLS_JSON_GEN_TIMEOUT,
LOCAL_MGMT_DB_PATH,
PROMPT_FLOW_DIR_NAME,
)
from promptflow._sdk._load_functions import load_flow
from promptflow._sdk._submitter import TestSubmitter
from promptflow._sdk._submitter.utils import SubmitterHelper
from promptflow._sdk._telemetry import ActivityType, TelemetryMixin, monitor_operation
from promptflow._sdk._utils import (
_get_additional_includes,
_merge_local_code_and_additional_includes,
copy_tree_respect_template_and_ignore_file,
dump_flow_result,
generate_flow_tools_json,
generate_random_string,
logger,
parse_variant,
)
from promptflow._sdk.entities._eager_flow import EagerFlow
from promptflow._sdk.entities._flow import ProtectedFlow
from promptflow._sdk.entities._validation import ValidationResult
from promptflow._utils.context_utils import _change_working_dir
from promptflow._utils.yaml_utils import dump_yaml, load_yaml
from promptflow.exceptions import UserErrorException
class FlowOperations(TelemetryMixin):
"""FlowOperations."""
def __init__(self, client):
self._client = client
super().__init__()
@monitor_operation(activity_name="pf.flows.test", activity_type=ActivityType.PUBLICAPI)
def test(
self,
flow: Union[str, PathLike],
*,
inputs: dict = None,
variant: str = None,
node: str = None,
environment_variables: dict = None,
entry: str = None,
**kwargs,
) -> dict:
"""Test flow or node.
:param flow: path to flow directory to test
:type flow: Union[str, PathLike]
:param inputs: Input data for the flow test
:type inputs: dict
:param variant: Node & variant name in format of ${node_name.variant_name}, will use default variant
if not specified.
:type variant: str
:param node: If specified, it will only test this node, else it will test the flow.
:type node: str
:param environment_variables: Environment variables to set by specifying a property path and value.
Example: {"key1": "${my_connection.api_key}", "key2"="value2"}
The value reference to connection keys will be resolved to the actual value,
and all environment variables specified will be set into os.environ.
:type environment_variables: dict
:param entry: Entry function. Required when flow is script.
:type entry: str
:return: The result of flow or node
:rtype: dict
"""
result = self._test(
flow=flow,
inputs=inputs,
variant=variant,
node=node,
environment_variables=environment_variables,
entry=entry,
**kwargs,
)
dump_test_result = kwargs.get("dump_test_result", False)
if dump_test_result:
# Dump flow/node test info
flow = load_flow(flow)
if node:
dump_flow_result(flow_folder=flow.code, node_result=result, prefix=f"flow-{node}.node")
else:
if variant:
tuning_node, node_variant = parse_variant(variant)
prefix = f"flow-{tuning_node}-{node_variant}"
else:
prefix = "flow"
dump_flow_result(flow_folder=flow.code, flow_result=result, prefix=prefix)
additional_output_path = kwargs.get("detail", None)
if additional_output_path:
if not dump_test_result:
flow = load_flow(flow)
if node:
# detail and output
dump_flow_result(
flow_folder=flow.code,
node_result=result,
prefix=f"flow-{node}.node",
custom_path=additional_output_path,
)
# log
log_src_path = Path(flow.code) / PROMPT_FLOW_DIR_NAME / f"{node}.node.log"
log_dst_path = Path(additional_output_path) / f"{node}.node.log"
shutil.copy(log_src_path, log_dst_path)
else:
if variant:
tuning_node, node_variant = parse_variant(variant)
prefix = f"flow-{tuning_node}-{node_variant}"
else:
prefix = "flow"
# detail and output
dump_flow_result(
flow_folder=flow.code,
flow_result=result,
prefix=prefix,
custom_path=additional_output_path,
)
# log
log_src_path = Path(flow.code) / PROMPT_FLOW_DIR_NAME / "flow.log"
log_dst_path = Path(additional_output_path) / "flow.log"
shutil.copy(log_src_path, log_dst_path)
TestSubmitter._raise_error_when_test_failed(result, show_trace=node is not None)
return result.output
def _test(
self,
flow: Union[str, PathLike],
*,
inputs: dict = None,
variant: str = None,
node: str = None,
environment_variables: dict = None,
stream_log: bool = True,
stream_output: bool = True,
allow_generator_output: bool = True,
entry: str = None,
**kwargs,
):
"""Test flow or node.
:param flow: path to flow directory to test
:param inputs: Input data for the flow test
:param variant: Node & variant name in format of ${node_name.variant_name}, will use default variant
if not specified.
:param node: If specified, it will only test this node, else it will test the flow.
:param environment_variables: Environment variables to set by specifying a property path and value.
Example: {"key1": "${my_connection.api_key}", "key2"="value2"}
The value reference to connection keys will be resolved to the actual value,
and all environment variables specified will be set into os.environ.
:param stream_log: Whether streaming the log.
:param stream_output: Whether streaming the outputs.
:param allow_generator_output: Whether return streaming output when flow has streaming output.
:param entry: The entry function, only works when source is a code file.
:return: Executor result
"""
from promptflow._sdk._load_functions import load_flow
inputs = inputs or {}
flow = load_flow(flow, entry=entry)
if isinstance(flow, EagerFlow):
if variant or node:
logger.warning("variant and node are not supported for eager flow, will be ignored")
variant, node = None, None
else:
if entry:
logger.warning("entry is only supported for eager flow, will be ignored")
flow.context.variant = variant
from promptflow._constants import FlowLanguage
from promptflow._sdk._submitter.test_submitter import TestSubmitterViaProxy
if flow.language == FlowLanguage.CSharp:
with TestSubmitterViaProxy(flow=flow, flow_context=flow.context, client=self._client).init() as submitter:
is_chat_flow, chat_history_input_name, _ = self._is_chat_flow(submitter.dataplane_flow)
flow_inputs, dependency_nodes_outputs = submitter.resolve_data(
node_name=node, inputs=inputs, chat_history_name=chat_history_input_name
)
if node:
return submitter.node_test(
node_name=node,
flow_inputs=flow_inputs,
dependency_nodes_outputs=dependency_nodes_outputs,
environment_variables=environment_variables,
stream=True,
)
else:
return submitter.flow_test(
inputs=flow_inputs,
environment_variables=environment_variables,
stream_log=stream_log,
stream_output=stream_output,
allow_generator_output=allow_generator_output and is_chat_flow,
)
with TestSubmitter(flow=flow, flow_context=flow.context, client=self._client).init() as submitter:
if isinstance(flow, EagerFlow):
# TODO(2897153): support chat eager flow
is_chat_flow, chat_history_input_name = False, None
flow_inputs, dependency_nodes_outputs = inputs, None
else:
is_chat_flow, chat_history_input_name, _ = self._is_chat_flow(submitter.dataplane_flow)
flow_inputs, dependency_nodes_outputs = submitter.resolve_data(
node_name=node, inputs=inputs, chat_history_name=chat_history_input_name
)
if node:
return submitter.node_test(
node_name=node,
flow_inputs=flow_inputs,
dependency_nodes_outputs=dependency_nodes_outputs,
environment_variables=environment_variables,
stream=True,
)
else:
return submitter.flow_test(
inputs=flow_inputs,
environment_variables=environment_variables,
stream_log=stream_log,
stream_output=stream_output,
allow_generator_output=allow_generator_output and is_chat_flow,
)
@staticmethod
def _is_chat_flow(flow):
"""
Check if the flow is chat flow.
A flow qualifies as a chat flow if it has a chat_history input, exactly one chat input,
and exactly one chat output.
"""
chat_inputs = [item for item in flow.inputs.values() if item.is_chat_input]
chat_outputs = [item for item in flow.outputs.values() if item.is_chat_output]
chat_history_input_name = next(
iter([input_name for input_name, value in flow.inputs.items() if value.is_chat_history]), None
)
if (
not chat_history_input_name
and CHAT_HISTORY in flow.inputs
and flow.inputs[CHAT_HISTORY].is_chat_history is not False
):
chat_history_input_name = CHAT_HISTORY
is_chat_flow, error_msg = True, ""
if len(chat_inputs) != 1:
is_chat_flow = False
error_msg = "chat flow does not support multiple chat inputs"
elif len(chat_outputs) != 1:
is_chat_flow = False
error_msg = "chat flow does not support multiple chat outputs"
elif not chat_history_input_name:
is_chat_flow = False
error_msg = "chat_history is required in the inputs of chat flow"
return is_chat_flow, chat_history_input_name, error_msg
@monitor_operation(activity_name="pf.flows._chat", activity_type=ActivityType.INTERNALCALL)
def _chat(
self,
flow,
*,
inputs: dict = None,
variant: str = None,
environment_variables: dict = None,
**kwargs,
) -> List:
"""Interact with Chat Flow. Only chat flow supported.
:param flow: path to flow directory to chat
:param inputs: Input data for the flow to chat
:param environment_variables: Environment variables to set by specifying a property path and value.
Example: {"key1": "${my_connection.api_key}", "key2"="value2"}
The value reference to connection keys will be resolved to the actual value,
and all environment variables specified will be set into os.environ.
"""
from promptflow._sdk._load_functions import load_flow
flow = load_flow(flow)
flow.context.variant = variant
with TestSubmitter(flow=flow, flow_context=flow.context, client=self._client).init() as submitter:
is_chat_flow, chat_history_input_name, error_msg = self._is_chat_flow(submitter.dataplane_flow)
if not is_chat_flow:
raise UserErrorException(f"Only support chat flow in interactive mode, {error_msg}.")
info_msg = f"Welcome to chat flow, {submitter.dataplane_flow.name}."
print("=" * len(info_msg))
print(info_msg)
print("Press Enter to send your message.")
print("You can quit with ctrl+C.")
print("=" * len(info_msg))
submitter._chat_flow(
inputs=inputs,
chat_history_name=chat_history_input_name,
environment_variables=environment_variables,
show_step_output=kwargs.get("show_step_output", False),
)
@monitor_operation(activity_name="pf.flows._chat_with_ui", activity_type=ActivityType.INTERNALCALL)
def _chat_with_ui(self, script):
try:
import bs4 # noqa: F401
import streamlit_quill # noqa: F401
from streamlit.web import cli as st_cli
except ImportError as ex:
raise UserErrorException(
f"Please try 'pip install promptflow[executable]' to install the required dependencies, {ex.msg}."
)
sys.argv = [
"streamlit",
"run",
script,
"--global.developmentMode=false",
"--client.toolbarMode=viewer",
"--browser.gatherUsageStats=false",
]
st_cli.main()
def _build_environment_config(self, flow_dag_path: Path):
flow_info = load_yaml(flow_dag_path)
# standard env object:
# environment:
# image: xxx
# conda_file: xxx
# python_requirements_txt: xxx
# setup_sh: xxx
# TODO: deserialize dag with structured class here to avoid using so many magic strings
env_obj = flow_info.get("environment", {})
env_obj["sdk_version"] = version("promptflow")
# version 0.0.1 is the dev version of promptflow
if env_obj["sdk_version"] == "0.0.1":
del env_obj["sdk_version"]
if not env_obj.get("python_requirements_txt", None) and (flow_dag_path.parent / "requirements.txt").is_file():
env_obj["python_requirements_txt"] = "requirements.txt"
env_obj["conda_env_name"] = "promptflow-serve"
if "conda_file" in env_obj:
conda_file = flow_dag_path.parent / env_obj["conda_file"]
if conda_file.is_file():
conda_obj = load_yaml(conda_file)
if "name" in conda_obj:
env_obj["conda_env_name"] = conda_obj["name"]
return env_obj
@classmethod
def _refine_connection_name(cls, connection_name: str):
return connection_name.replace(" ", "_")
def _dump_connection(self, connection, output_path: Path):
# connection yaml should be a dict instead of ordered dict
connection_dict = connection._to_dict()
connection_yaml = {
"$schema": f"https://azuremlschemas.azureedge.net/promptflow/"
f"latest/{connection.__class__.__name__}.schema.json",
**connection_dict,
}
if connection.type == "Custom":
secret_dict = connection_yaml["secrets"]
else:
secret_dict = connection_yaml
connection_var_name = self._refine_connection_name(connection.name)
env_var_names = [f"{connection_var_name}_{secret_key}".upper() for secret_key in connection.secrets]
for secret_key, secret_env in zip(connection.secrets, env_var_names):
secret_dict[secret_key] = "${env:" + secret_env + "}"
for key in ["created_date", "last_modified_date"]:
if key in connection_yaml:
del connection_yaml[key]
key_order = ["$schema", "type", "name", "configs", "secrets", "module"]
sorted_connection_dict = {
key: connection_yaml[key]
for key in sorted(
connection_yaml.keys(),
key=lambda x: (0, key_order.index(x)) if x in key_order else (1, x),
)
}
with open(output_path, "w", encoding="utf-8") as f:
f.write(dump_yaml(sorted_connection_dict))
return env_var_names
def _migrate_connections(self, connection_names: List[str], output_dir: Path):
from promptflow._sdk._pf_client import PFClient
output_dir.mkdir(parents=True, exist_ok=True)
local_client = PFClient()
connection_paths, env_var_names = [], {}
for connection_name in connection_names:
connection = local_client.connections.get(name=connection_name, with_secrets=True)
connection_var_name = self._refine_connection_name(connection_name)
connection_paths.append(output_dir / f"{connection_var_name}.yaml")
for env_var_name in self._dump_connection(
connection,
connection_paths[-1],
):
if env_var_name in env_var_names:
raise RuntimeError(
f"environment variable name conflict: connection {connection_name} and "
f"{env_var_names[env_var_name]} on {env_var_name}"
)
env_var_names[env_var_name] = connection_name
return connection_paths, list(env_var_names.keys())
def _export_flow_connections(
self,
built_flow_dag_path: Path,
*,
output_dir: Path,
):
"""Export flow connections to yaml files.
:param built_flow_dag_path: path to built flow dag yaml file. Given this is a built flow, we can assume
that the flow involves no additional includes, symlink, or variant.
:param output_dir: output directory to export connections
"""
flow: ProtectedFlow = load_flow(built_flow_dag_path)
with _change_working_dir(flow.code):
if flow.language == FlowLanguage.CSharp:
from promptflow.batch import CSharpExecutorProxy
return self._migrate_connections(
connection_names=SubmitterHelper.get_used_connection_names(
tools_meta=CSharpExecutorProxy.get_tool_metadata(
flow_file=flow.flow_dag_path,
working_dir=flow.code,
),
flow_dag=flow.dag,
),
output_dir=output_dir,
)
else:
# TODO: avoid using executable here
from promptflow.contracts.flow import Flow as ExecutableFlow
executable = ExecutableFlow.from_yaml(flow_file=flow.path, working_dir=flow.code)
return self._migrate_connections(
connection_names=executable.get_connection_names(),
output_dir=output_dir,
)
def _build_flow(
self,
flow_dag_path: Path,
*,
output: Union[str, PathLike],
tuning_node: str = None,
node_variant: str = None,
update_flow_tools_json: bool = True,
):
# TODO: confirm if we need to import this
from promptflow._sdk._submitter import variant_overwrite_context
flow_copy_target = Path(output)
flow_copy_target.mkdir(parents=True, exist_ok=True)
# resolve additional includes and copy flow directory first to guarantee there is a final flow directory
# TODO: shall we pop "node_variants" unless keep-variants is specified?
with variant_overwrite_context(
flow_dag_path,
tuning_node=tuning_node,
variant=node_variant,
drop_node_variants=True,
) as temp_flow:
# TODO: avoid copy for twice
copy_tree_respect_template_and_ignore_file(temp_flow.code, flow_copy_target)
if update_flow_tools_json:
generate_flow_tools_json(flow_copy_target)
return flow_copy_target / flow_dag_path.name
def _export_to_docker(
self,
flow_dag_path: Path,
output_dir: Path,
*,
env_var_names: List[str],
connection_paths: List[Path],
flow_name: str,
is_csharp_flow: bool = False,
):
(output_dir / "settings.json").write_text(
data=json.dumps({env_var_name: "" for env_var_name in env_var_names}, indent=2),
encoding="utf-8",
)
environment_config = self._build_environment_config(flow_dag_path)
# TODO: make below strings constants
if is_csharp_flow:
source = Path(__file__).parent.parent / "data" / "docker_csharp"
else:
source = Path(__file__).parent.parent / "data" / "docker"
copy_tree_respect_template_and_ignore_file(
source=source,
target=output_dir,
render_context={
"env": environment_config,
"flow_name": f"{flow_name}-{generate_random_string(6)}",
"local_db_rel_path": LOCAL_MGMT_DB_PATH.relative_to(Path.home()).as_posix(),
"connection_yaml_paths": list(map(lambda x: x.relative_to(output_dir).as_posix(), connection_paths)),
},
)
def _build_as_executable(
self,
flow_dag_path: Path,
output_dir: Path,
*,
flow_name: str,
env_var_names: List[str],
):
try:
import bs4 # noqa: F401
import PyInstaller # noqa: F401
import streamlit
import streamlit_quill # noqa: F401
except ImportError as ex:
raise UserErrorException(
f"Please try 'pip install promptflow[executable]' to install the required dependencies, {ex.msg}."
)
from promptflow.contracts.flow import Flow as ExecutableFlow
(output_dir / "settings.json").write_text(
data=json.dumps({env_var_name: "" for env_var_name in env_var_names}, indent=2),
encoding="utf-8",
)
environment_config = self._build_environment_config(flow_dag_path)
hidden_imports = []
if (
environment_config.get("python_requirements_txt", None)
and (flow_dag_path.parent / "requirements.txt").is_file()
):
with open(flow_dag_path.parent / "requirements.txt", "r", encoding="utf-8") as file:
file_content = file.read()
hidden_imports = file_content.splitlines()
runtime_interpreter_path = (Path(streamlit.__file__).parent / "runtime").as_posix()
executable = ExecutableFlow.from_yaml(flow_file=Path(flow_dag_path.name), working_dir=flow_dag_path.parent)
flow_inputs = {
flow_input: (value.default, value.type.value)
for flow_input, value in executable.inputs.items()
if not value.is_chat_history
}
flow_inputs_params = ["=".join([flow_input, flow_input]) for flow_input, _ in flow_inputs.items()]
flow_inputs_params = ",".join(flow_inputs_params)
is_chat_flow, chat_history_input_name, _ = self._is_chat_flow(executable)
label = "Chat" if is_chat_flow else "Run"
copy_tree_respect_template_and_ignore_file(
source=Path(__file__).parent.parent / "data" / "executable",
target=output_dir,
render_context={
"hidden_imports": hidden_imports,
"flow_name": flow_name,
"runtime_interpreter_path": runtime_interpreter_path,
"flow_inputs": flow_inputs,
"flow_inputs_params": flow_inputs_params,
"flow_path": None,
"is_chat_flow": is_chat_flow,
"chat_history_input_name": chat_history_input_name,
"label": label,
},
)
self._run_pyinstaller(output_dir)
def _run_pyinstaller(self, output_dir):
with _change_working_dir(output_dir, mkdir=False):
subprocess.run(["pyinstaller", "app.spec"], check=True)
print("PyInstaller command executed successfully.")
@monitor_operation(activity_name="pf.flows.build", activity_type=ActivityType.PUBLICAPI)
def build(
self,
flow: Union[str, PathLike],
*,
output: Union[str, PathLike],
format: str = "docker",
variant: str = None,
**kwargs,
):
"""
Build flow to other format.
:param flow: path to the flow directory or flow dag to export
:type flow: Union[str, PathLike]
:param format: export format, support "docker" and "executable" only for now
:type format: str
:param output: output directory
:type output: Union[str, PathLike]
:param variant: node variant in format of {node_name}.{variant_name},
will use default variant if not specified.
:type variant: str
:return: no return
:rtype: None
"""
output_dir = Path(output).absolute()
output_dir.mkdir(parents=True, exist_ok=True)
flow: ProtectedFlow = load_flow(flow)
is_csharp_flow = flow.dag.get(LANGUAGE_KEY, "") == FlowLanguage.CSharp
if format not in ["docker", "executable"]:
raise ValueError(f"Unsupported export format: {format}")
if variant:
tuning_node, node_variant = parse_variant(variant)
else:
tuning_node, node_variant = None, None
flow_only = kwargs.pop("flow_only", False)
if flow_only:
output_flow_dir = output_dir
else:
output_flow_dir = output_dir / "flow"
new_flow_dag_path = self._build_flow(
flow_dag_path=flow.flow_dag_path,
output=output_flow_dir,
tuning_node=tuning_node,
node_variant=node_variant,
update_flow_tools_json=not is_csharp_flow,
)
if flow_only:
return
# use new flow dag path below as origin one may miss additional includes
connection_paths, env_var_names = self._export_flow_connections(
built_flow_dag_path=new_flow_dag_path,
output_dir=output_dir / "connections",
)
if format == "docker":
self._export_to_docker(
flow_dag_path=new_flow_dag_path,
output_dir=output_dir,
connection_paths=connection_paths,
flow_name=flow.name,
env_var_names=env_var_names,
is_csharp_flow=is_csharp_flow,
)
elif format == "executable":
self._build_as_executable(
flow_dag_path=new_flow_dag_path,
output_dir=output_dir,
flow_name=flow.name,
env_var_names=env_var_names,
)
@classmethod
@contextlib.contextmanager
def _resolve_additional_includes(cls, flow_dag_path: Path) -> Iterable[Path]:
# TODO: confirm if we need to import this
from promptflow._sdk._submitter import remove_additional_includes
# Eager flow may not contain a yaml file, skip resolving additional includes
def is_yaml_file(file_path):
_, file_extension = os.path.splitext(file_path)
return file_extension.lower() in (".yaml", ".yml")
if is_yaml_file(flow_dag_path) and _get_additional_includes(flow_dag_path):
# Merge the flow folder and additional includes to temp folder.
# TODO: support a flow_dag_path with a name different from flow.dag.yaml
with _merge_local_code_and_additional_includes(code_path=flow_dag_path.parent) as temp_dir:
remove_additional_includes(Path(temp_dir))
yield Path(temp_dir) / flow_dag_path.name
else:
yield flow_dag_path
@monitor_operation(activity_name="pf.flows.validate", activity_type=ActivityType.PUBLICAPI)
def validate(self, flow: Union[str, PathLike], *, raise_error: bool = False, **kwargs) -> ValidationResult:
"""
Validate flow.
:param flow: path to the flow directory or flow dag to export
:type flow: Union[str, PathLike]
:param raise_error: whether raise error when validation failed
:type raise_error: bool
:return: a validation result object
:rtype: ValidationResult
"""
flow_entity: ProtectedFlow = load_flow(source=flow)
# TODO: put off this if we do path existence check in FlowSchema on fields other than additional_includes
validation_result = flow_entity._validate()
source_path_mapping = {}
flow_tools, tools_errors = self._generate_tools_meta(
flow=flow_entity.flow_dag_path,
source_path_mapping=source_path_mapping,
)
flow_entity.tools_meta_path.write_text(
data=json.dumps(flow_tools, indent=4),
encoding=DEFAULT_ENCODING,
)
if tools_errors:
for source_name, message in tools_errors.items():
for yaml_path in source_path_mapping.get(source_name, []):
validation_result.append_error(
yaml_path=yaml_path,
message=message,
)
# flow in control plane is read-only, so resolve location makes sense even in SDK experience
validation_result.resolve_location_for_diagnostics(flow_entity.flow_dag_path.as_posix())
flow_entity._try_raise(
validation_result,
raise_error=raise_error,
)
return validation_result
@monitor_operation(activity_name="pf.flows._generate_tools_meta", activity_type=ActivityType.INTERNALCALL)
def _generate_tools_meta(
self,
flow: Union[str, PathLike],
*,
source_name: str = None,
source_path_mapping: Dict[str, List[str]] = None,
timeout: int = FLOW_TOOLS_JSON_GEN_TIMEOUT,
) -> Tuple[dict, dict]:
"""Generate flow tools meta for a specific flow or a specific node in the flow.
This is a private interface for vscode extension, so do not change the interface unless necessary.
Usage:
from promptflow import PFClient
PFClient().flows._generate_tools_meta(flow="flow.dag.yaml", source_name="convert_to_dict.py")
:param flow: path to the flow directory or flow dag to export
:type flow: Union[str, PathLike]
:param source_name: source name to generate tools meta. If not specified, generate tools meta for all sources.
:type source_name: str
:param source_path_mapping: If None, do nothing; if a dict, all referenced yaml
paths for each source will be recorded in it.
:type source_path_mapping: Dict[str, List[str]]
:param timeout: timeout for generating tools meta
:type timeout: int
:return: dict of tools meta and dict of tools errors
:rtype: Tuple[dict, dict]
"""
flow: ProtectedFlow = load_flow(source=flow)
if not isinstance(flow, ProtectedFlow):
# No tools meta for eager flow
return {}, {}
with self._resolve_additional_includes(flow.flow_dag_path) as new_flow_dag_path:
flow_tools = generate_flow_tools_json(
flow_directory=new_flow_dag_path.parent,
dump=False,
raise_error=False,
include_errors_in_output=True,
target_source=source_name,
used_packages_only=True,
source_path_mapping=source_path_mapping,
timeout=timeout,
)
flow_tools_meta = flow_tools.pop("code", {})
tools_errors = {}
nodes_with_error = [node_name for node_name, message in flow_tools_meta.items() if isinstance(message, str)]
for node_name in nodes_with_error:
tools_errors[node_name] = flow_tools_meta.pop(node_name)
additional_includes = _get_additional_includes(flow.flow_dag_path)
if additional_includes:
additional_files = {}
for include in additional_includes:
include_path = Path(include) if Path(include).is_absolute() else flow.code / include
if include_path.is_file():
file_name = Path(include).name
additional_files[Path(file_name)] = os.path.relpath(include_path, flow.code)
else:
if not Path(include).is_absolute():
include = flow.code / include
files = glob.glob(os.path.join(include, "**"), recursive=True)
additional_files.update(
{
Path(os.path.relpath(path, include.parent)): os.path.relpath(path, flow.code)
for path in files
}
)
for tool in flow_tools_meta.values():
source = tool.get("source", None)
if source and Path(source) in additional_files:
tool["source"] = additional_files[Path(source)]
flow_tools["code"] = flow_tools_meta
return flow_tools, tools_errors
| promptflow/src/promptflow/promptflow/_sdk/operations/_flow_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/operations/_flow_operations.py",
"repo_id": "promptflow",
"token_count": 15849
} | 44 |
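The `_is_chat_flow` heuristic above requires exactly one chat input, exactly one chat output, and a chat-history input (falling back to an input literally named `chat_history`). A self-contained sketch of the same decision, using plain dicts of flags instead of the SDK's input/output entities:

```python
CHAT_HISTORY = "chat_history"

def is_chat_flow(inputs: dict, outputs: dict):
    """inputs/outputs map names to flag dicts, e.g. {"question": {"is_chat_input": True}}."""
    chat_inputs = [n for n, v in inputs.items() if v.get("is_chat_input")]
    chat_outputs = [n for n, v in outputs.items() if v.get("is_chat_output")]
    history = next((n for n, v in inputs.items() if v.get("is_chat_history")), None)
    # Fall back to an input literally named "chat_history", unless it explicitly opts out.
    if history is None and CHAT_HISTORY in inputs and inputs[CHAT_HISTORY].get("is_chat_history") is not False:
        history = CHAT_HISTORY
    if len(chat_inputs) != 1:
        return False, history, "chat flow requires exactly one chat input"
    if len(chat_outputs) != 1:
        return False, history, "chat flow requires exactly one chat output"
    if not history:
        return False, history, "chat_history is required in the inputs of chat flow"
    return True, history, ""

ok, history, msg = is_chat_flow(
    inputs={"question": {"is_chat_input": True}, "chat_history": {}},
    outputs={"answer": {"is_chat_output": True}},
)
```

With a marked chat input, one chat output, and a `chat_history` input, the flow qualifies; dropping the history input fails with the corresponding error message.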
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
"""!!!Note: context in this file only used for command line related logics,
please avoid using them in service code!!!"""
import contextlib
import os
import sys
@contextlib.contextmanager
def _change_working_dir(path, mkdir=True):
"""Context manager for changing the current working directory"""
saved_path = os.getcwd()
if mkdir:
os.makedirs(path, exist_ok=True)
os.chdir(str(path))
try:
yield
finally:
os.chdir(saved_path)
@contextlib.contextmanager
def inject_sys_path(path):
original_sys_path = sys.path.copy()
sys.path.insert(0, str(path))
try:
yield
finally:
sys.path = original_sys_path
| promptflow/src/promptflow/promptflow/_utils/context_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/context_utils.py",
"repo_id": "promptflow",
"token_count": 285
} | 45 |
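Both context managers above are stdlib-only, so they can be exercised in isolation. A usage sketch (reproducing `_change_working_dir` verbatim) showing that the target directory is created on demand, entered, and the original working directory restored on exit:

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def _change_working_dir(path, mkdir=True):
    """Context manager for changing the current working directory"""
    saved_path = os.getcwd()
    if mkdir:
        os.makedirs(path, exist_ok=True)
    os.chdir(str(path))
    try:
        yield
    finally:
        os.chdir(saved_path)

before = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "a", "b")  # does not exist yet
    with _change_working_dir(target):
        inside = os.getcwd()              # created and entered here
after = os.getcwd()
```

The `finally` clause guarantees the old directory is restored even if the body raises; `inject_sys_path` follows the same save/restore pattern for `sys.path`.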
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import datetime
import json
import logging
from promptflow._constants import (LAST_HINT_TIME, LAST_CHECK_TIME, PF_VERSION_CHECK, CLI_PACKAGE_NAME,
HINT_INTERVAL_DAY, GET_PYPI_INTERVAL_DAY, LATEST_VERSION, CURRENT_VERSION)
from promptflow._sdk._constants import HOME_PROMPT_FLOW_DIR
HINT_ACTIVITY_NAME = ["pf.flows.test", "pf.runs.create_or_update", "pfazure.flows.create_or_update",
"pfazure.runs.create_or_update"]
logger = logging.getLogger(__name__)
def get_cached_versions():
from promptflow._sdk._utils import read_write_by_user
(HOME_PROMPT_FLOW_DIR / PF_VERSION_CHECK).touch(mode=read_write_by_user(), exist_ok=True)
with open(HOME_PROMPT_FLOW_DIR / PF_VERSION_CHECK, "r") as f:
try:
cached_versions = json.load(f)
except json.decoder.JSONDecodeError:
cached_versions = {}
return cached_versions
def dump_cached_versions(cached_versions):
with open(HOME_PROMPT_FLOW_DIR / PF_VERSION_CHECK, "w") as f:
json.dump(cached_versions, f)
def get_latest_version_from_pypi(package_name):
pypi_url = f"https://pypi.org/pypi/{package_name}/json"
try:
import requests
response = requests.get(pypi_url, timeout=3)
if response.status_code == 200:
data = response.json()
latest_version = data["info"]["version"]
return latest_version
else:
return None
except Exception as ex: # pylint: disable=broad-except
logger.debug(f"Failed to get the latest version from '{pypi_url}'. {str(ex)}")
return None
def check_latest_version():
""" Get the latest versions from a cached file"""
cached_versions = get_cached_versions()
last_check_time = datetime.datetime.strptime(cached_versions[LAST_CHECK_TIME], '%Y-%m-%d %H:%M:%S.%f') \
if LAST_CHECK_TIME in cached_versions else None
if last_check_time is None or (datetime.datetime.now() >
last_check_time + datetime.timedelta(days=GET_PYPI_INTERVAL_DAY)):
version = get_latest_version_from_pypi(CLI_PACKAGE_NAME)
if version is not None:
cached_versions[LATEST_VERSION] = version
cached_versions[LAST_CHECK_TIME] = str(datetime.datetime.now())
dump_cached_versions(cached_versions)
def hint_for_update():
"""
    Check if a new version of prompt flow is available every 7 days. If yes, log a debug message
    hinting the customer to upgrade the package.
"""
cached_versions = get_cached_versions()
last_hint_time = datetime.datetime.strptime(
cached_versions[LAST_HINT_TIME],
'%Y-%m-%d %H:%M:%S.%f'
) if LAST_HINT_TIME in cached_versions else None
if last_hint_time is None or (datetime.datetime.now() >
last_hint_time + datetime.timedelta(days=HINT_INTERVAL_DAY)):
from promptflow._sdk._utils import get_promptflow_sdk_version
cached_versions[CURRENT_VERSION] = get_promptflow_sdk_version()
if LATEST_VERSION in cached_versions:
from packaging.version import parse
if parse(cached_versions[CURRENT_VERSION]) < parse(cached_versions[LATEST_VERSION]):
cached_versions[LAST_HINT_TIME] = str(datetime.datetime.now())
message = (f"New prompt flow version available: promptflow-{cached_versions[LATEST_VERSION]}. Running "
f"'pf upgrade' to update CLI.")
logger.debug(message)
dump_cached_versions(cached_versions)
| promptflow/src/promptflow/promptflow/_utils/version_hint_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/version_hint_utils.py",
"repo_id": "promptflow",
"token_count": 1608
} | 46 |
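The version-hint utilities above gate both the PyPI lookup and the upgrade hint behind time intervals recorded in a JSON cache file. A minimal, self-contained sketch of that interval-gated pattern follows; the file name, key, and interval here are illustrative stand-ins, not promptflow's actual constants:

```python
import datetime
import json
import tempfile
from pathlib import Path

CHECK_INTERVAL_DAYS = 7  # illustrative stand-in for GET_PYPI_INTERVAL_DAY

def load_cache(path: Path) -> dict:
    # Tolerate a missing or corrupt cache file, like get_cached_versions above.
    try:
        return json.loads(path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def should_check(cache: dict, now: datetime.datetime) -> bool:
    # Skip the expensive action unless the recorded interval has elapsed.
    last = cache.get("last_check_time")
    if last is None:
        return True
    last_dt = datetime.datetime.strptime(last, "%Y-%m-%d %H:%M:%S.%f")
    return now > last_dt + datetime.timedelta(days=CHECK_INTERVAL_DAYS)

def record_check(cache: dict, path: Path, now: datetime.datetime) -> None:
    # str(datetime) round-trips through the strptime format used above.
    cache["last_check_time"] = str(now)
    path.write_text(json.dumps(cache))
```

The same cache file can hold several independent timestamps (as the real module does with `LAST_CHECK_TIME` and `LAST_HINT_TIME`), so the PyPI poll and the user-facing hint can run on different schedules.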
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from copy import deepcopy
from typing import TYPE_CHECKING
from azure.core import PipelineClient
from msrest import Deserializer, Serializer
from . import models
from ._configuration import AzureMachineLearningDesignerServiceClientConfiguration
from .operations import BulkRunsOperations, ConnectionOperations, ConnectionsOperations, FlowRunsAdminOperations, FlowRuntimesOperations, FlowRuntimesWorkspaceIndependentOperations, FlowSessionsOperations, FlowsOperations, FlowsProviderOperations, ToolsOperations
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Optional
from azure.core.rest import HttpRequest, HttpResponse
class AzureMachineLearningDesignerServiceClient(object):
"""AzureMachineLearningDesignerServiceClient.
:ivar bulk_runs: BulkRunsOperations operations
:vartype bulk_runs: flow.operations.BulkRunsOperations
:ivar connection: ConnectionOperations operations
:vartype connection: flow.operations.ConnectionOperations
:ivar connections: ConnectionsOperations operations
:vartype connections: flow.operations.ConnectionsOperations
:ivar flow_runs_admin: FlowRunsAdminOperations operations
:vartype flow_runs_admin: flow.operations.FlowRunsAdminOperations
:ivar flow_runtimes: FlowRuntimesOperations operations
:vartype flow_runtimes: flow.operations.FlowRuntimesOperations
:ivar flow_runtimes_workspace_independent: FlowRuntimesWorkspaceIndependentOperations
operations
:vartype flow_runtimes_workspace_independent:
flow.operations.FlowRuntimesWorkspaceIndependentOperations
:ivar flows: FlowsOperations operations
:vartype flows: flow.operations.FlowsOperations
:ivar flow_sessions: FlowSessionsOperations operations
:vartype flow_sessions: flow.operations.FlowSessionsOperations
:ivar flows_provider: FlowsProviderOperations operations
:vartype flows_provider: flow.operations.FlowsProviderOperations
:ivar tools: ToolsOperations operations
:vartype tools: flow.operations.ToolsOperations
:param base_url: Service URL. Default value is ''.
:type base_url: str
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
"""
def __init__(
self,
base_url="", # type: str
api_version="1.0.0", # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> None
self._config = AzureMachineLearningDesignerServiceClientConfiguration(api_version=api_version, **kwargs)
self._client = PipelineClient(base_url=base_url, config=self._config, **kwargs)
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
self._serialize = Serializer(client_models)
self._deserialize = Deserializer(client_models)
self._serialize.client_side_validation = False
self.bulk_runs = BulkRunsOperations(self._client, self._config, self._serialize, self._deserialize)
self.connection = ConnectionOperations(self._client, self._config, self._serialize, self._deserialize)
self.connections = ConnectionsOperations(self._client, self._config, self._serialize, self._deserialize)
self.flow_runs_admin = FlowRunsAdminOperations(self._client, self._config, self._serialize, self._deserialize)
self.flow_runtimes = FlowRuntimesOperations(self._client, self._config, self._serialize, self._deserialize)
self.flow_runtimes_workspace_independent = FlowRuntimesWorkspaceIndependentOperations(self._client, self._config, self._serialize, self._deserialize)
self.flows = FlowsOperations(self._client, self._config, self._serialize, self._deserialize)
self.flow_sessions = FlowSessionsOperations(self._client, self._config, self._serialize, self._deserialize)
self.flows_provider = FlowsProviderOperations(self._client, self._config, self._serialize, self._deserialize)
self.tools = ToolsOperations(self._client, self._config, self._serialize, self._deserialize)
def _send_request(
self,
request, # type: HttpRequest
**kwargs # type: Any
):
# type: (...) -> HttpResponse
"""Runs the network request through the client's chained policies.
>>> from azure.core.rest import HttpRequest
>>> request = HttpRequest("GET", "https://www.example.org/")
<HttpRequest [GET], url: 'https://www.example.org/'>
>>> response = client._send_request(request)
<HttpResponse: 200 OK>
For more information on this code flow, see https://aka.ms/azsdk/python/protocol/quickstart
:param request: The network request you want to make. Required.
:type request: ~azure.core.rest.HttpRequest
:keyword bool stream: Whether the response payload will be streamed. Defaults to False.
:return: The response of your network call. Does not do error handling on your response.
:rtype: ~azure.core.rest.HttpResponse
"""
request_copy = deepcopy(request)
request_copy.url = self._client.format_url(request_copy.url)
return self._client.send_request(request_copy, **kwargs)
def close(self):
# type: () -> None
self._client.close()
def __enter__(self):
# type: () -> AzureMachineLearningDesignerServiceClient
self._client.__enter__()
return self
def __exit__(self, *exc_details):
# type: (Any) -> None
self._client.__exit__(*exc_details)
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/_azure_machine_learning_designer_service_client.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/_azure_machine_learning_designer_service_client.py",
"repo_id": "promptflow",
"token_count": 2003
} | 47 |
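Two patterns in the client above are worth isolating: it delegates context management (`__enter__`/`__exit__`) to its inner `PipelineClient`, and `_send_request` deep-copies the request before rewriting its URL so the caller's object is never mutated. A stripped-down sketch of both, using stand-in classes rather than the real azure-core types:

```python
from copy import deepcopy

class _FakePipelineClient:
    """Stand-in for azure.core.PipelineClient (illustrative only)."""
    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")
        self.closed = False
    def format_url(self, url):
        return self.base_url + url
    def close(self):
        self.closed = True
    def __enter__(self):
        return self
    def __exit__(self, *exc_details):
        self.close()

class Request:
    def __init__(self, method, url):
        self.method = method
        self.url = url

class Client:
    def __init__(self, base_url=""):
        self._client = _FakePipelineClient(base_url)
    def _send_request(self, request):
        # Copy first: the caller's request object must stay untouched.
        request_copy = deepcopy(request)
        request_copy.url = self._client.format_url(request_copy.url)
        return request_copy  # a real client would run the pipeline here
    def __enter__(self):
        self._client.__enter__()
        return self
    def __exit__(self, *exc_details):
        self._client.__exit__(*exc_details)
```

Delegating rather than subclassing keeps the outer client's lifetime tied to the pipeline's without exposing the pipeline's full surface.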
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator_async import distributed_trace_async
from ... import models as _models
from ..._vendor import _convert_request
from ...operations._flow_sessions_operations import build_create_flow_session_request, build_delete_flow_session_request, build_get_flow_session_request, build_get_standby_pools_request, build_poll_operation_status_request
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class FlowSessionsOperations:
"""FlowSessionsOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace_async
async def create_flow_session(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
session_id: str,
body: Optional["_models.CreateFlowSessionRequest"] = None,
**kwargs: Any
) -> Any:
"""create_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param body:
:type body: ~flow.models.CreateFlowSessionRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowSessionRequest')
else:
_json = None
request = build_create_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
content_type=content_type,
json=_json,
template_url=self.create_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace_async
async def get_flow_session(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
session_id: str,
**kwargs: Any
) -> "_models.GetTrainingSessionDto":
"""get_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: GetTrainingSessionDto, or the result of cls(response)
:rtype: ~flow.models.GetTrainingSessionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.GetTrainingSessionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.get_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('GetTrainingSessionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace_async
async def delete_flow_session(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
session_id: str,
**kwargs: Any
) -> Any:
"""delete_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.delete_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
delete_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace_async
async def poll_operation_status(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
session_id: str,
action_type: Union[str, "_models.SetupFlowSessionAction"],
location: str,
operation_id: str,
api_version: Optional[str] = "1.0.0",
type: Optional[str] = None,
**kwargs: Any
) -> Any:
"""poll_operation_status.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param action_type:
:type action_type: str or ~flow.models.SetupFlowSessionAction
:param location:
:type location: str
:param operation_id:
:type operation_id: str
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
:param type:
:type type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_poll_operation_status_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
action_type=action_type,
location=location,
operation_id=operation_id,
api_version=api_version,
type=type,
template_url=self.poll_operation_status.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
poll_operation_status.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}'} # type: ignore
@distributed_trace_async
async def get_standby_pools(
self,
subscription_id: str,
resource_group_name: str,
workspace_name: str,
**kwargs: Any
) -> List["_models.StandbyPoolProperties"]:
"""get_standby_pools.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of StandbyPoolProperties, or the result of cls(response)
:rtype: list[~flow.models.StandbyPoolProperties]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.StandbyPoolProperties"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_standby_pools_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_standby_pools.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[StandbyPoolProperties]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_standby_pools.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools'} # type: ignore
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_flow_sessions_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/_flow_sessions_operations.py",
"repo_id": "promptflow",
"token_count": 6266
} | 48 |
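Every operation in the file above ends with the same post-processing recipe: map non-success status codes to typed exceptions via `error_map`, fall back to a generic `HttpResponseError`, then hand the deserialized body to an optional `cls` callback. A self-contained sketch of that recipe; the exception classes and `process_response` helper are stand-ins for the azure-core machinery, not its actual API:

```python
class HttpResponseError(Exception):
    pass

class ClientAuthenticationError(HttpResponseError):
    pass

class ResourceNotFoundError(HttpResponseError):
    pass

# Default mapping, mirroring the error_map dicts built in each operation.
DEFAULT_ERROR_MAP = {401: ClientAuthenticationError, 404: ResourceNotFoundError}

def map_error(status_code, error_map):
    exc_type = error_map.get(status_code)
    if exc_type is not None:
        raise exc_type(f"status {status_code}")

def process_response(status_code, body, expected=(200, 202), cls=None, error_map=None):
    # Callers can extend or override the default map, as kwargs.pop('error_map') allows.
    merged = {**DEFAULT_ERROR_MAP, **(error_map or {})}
    if status_code not in expected:
        map_error(status_code, merged)
        raise HttpResponseError(f"unexpected status {status_code}")
    deserialized = body  # a real client deserializes the payload here
    if cls:
        # cls lets callers intercept the raw response alongside the model.
        return cls(status_code, deserialized, {})
    return deserialized
```

The `cls` hook is why every generated method ends with `if cls: return cls(pipeline_response, deserialized, {})` — it gives callers access to the pipeline response without changing the method's normal return type.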
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from msrest import Serializer
from .. import models as _models
from .._vendor import _convert_request, _format_url_section
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
# fmt: off
def build_create_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_delete_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="DELETE",
url=url,
headers=header_parameters,
**kwargs
)
def build_poll_operation_status_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
action_type, # type: Union[str, "_models.SetupFlowSessionAction"]
location, # type: str
operation_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
api_version = kwargs.pop('api_version', "1.0.0") # type: Optional[str]
type = kwargs.pop('type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
"actionType": _SERIALIZER.url("action_type", action_type, 'str'),
"location": _SERIALIZER.url("location", location, 'str'),
"operationId": _SERIALIZER.url("operation_id", operation_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if api_version is not None:
query_parameters['api-version'] = _SERIALIZER.query("api_version", api_version, 'str')
if type is not None:
query_parameters['type'] = _SERIALIZER.query("type", type, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_get_standby_pools_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
# fmt: on
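# ---------------------------------------------------------------------------
# Illustrative sketch (not generated code): the request builders above all
# reduce to one recipe -- serialize each path parameter, substitute it into
# the URL template, then attach query and header parameters.  A minimal,
# stdlib-only analog of _format_url_section / _SERIALIZER.url:
from urllib.parse import quote


def _format_url_sketch(template, **path_args):
    # Percent-encode each value so e.g. session ids containing spaces or
    # slashes still yield a valid path segment.
    return template.format(**{k: quote(str(v), safe="") for k, v in path_args.items()})
# ---------------------------------------------------------------------------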
class FlowSessionsOperations(object):
"""FlowSessionsOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace
def create_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
body=None, # type: Optional["_models.CreateFlowSessionRequest"]
**kwargs # type: Any
):
# type: (...) -> Any
"""create_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param body:
:type body: ~flow.models.CreateFlowSessionRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowSessionRequest')
else:
_json = None
request = build_create_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
content_type=content_type,
json=_json,
template_url=self.create_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def get_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.GetTrainingSessionDto"
"""get_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: GetTrainingSessionDto, or the result of cls(response)
:rtype: ~flow.models.GetTrainingSessionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.GetTrainingSessionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.get_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('GetTrainingSessionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def delete_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> Any
"""delete_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.delete_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
delete_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def poll_operation_status(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
action_type, # type: Union[str, "_models.SetupFlowSessionAction"]
location, # type: str
operation_id, # type: str
api_version="1.0.0", # type: Optional[str]
type=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> Any
"""poll_operation_status.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param action_type:
:type action_type: str or ~flow.models.SetupFlowSessionAction
:param location:
:type location: str
:param operation_id:
:type operation_id: str
:param api_version: API version. The default value is "1.0.0".
:type api_version: str
:param type:
:type type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_poll_operation_status_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
action_type=action_type,
location=location,
operation_id=operation_id,
api_version=api_version,
type=type,
template_url=self.poll_operation_status.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
poll_operation_status.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}'} # type: ignore
@distributed_trace
def get_standby_pools(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.StandbyPoolProperties"]
"""get_standby_pools.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of StandbyPoolProperties, or the result of cls(response)
:rtype: list[~flow.models.StandbyPoolProperties]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.StandbyPoolProperties"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_standby_pools_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_standby_pools.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[StandbyPoolProperties]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_standby_pools.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools'} # type: ignore
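Every operation above follows the same mechanism: the `metadata['url']` template contains `{placeholder}` segments that are filled in from the method arguments before the request is sent (the real client delegates this to azure-core's `format_url`). A minimal standalone sketch of that substitution — the helper name here is hypothetical, for illustration only:

```python
# Hypothetical standalone helper -- the real SDK uses azure-core's format_url.
TEMPLATE = (
    "/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}"
    "/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}"
    "/FlowSessions/{sessionId}"
)

def format_session_url(subscription_id: str, resource_group_name: str,
                       workspace_name: str, session_id: str) -> str:
    # Substitute each {placeholder} with its corresponding argument.
    return TEMPLATE.format(
        subscriptionId=subscription_id,
        resourceGroupName=resource_group_name,
        workspaceName=workspace_name,
        sessionId=session_id,
    )

url = format_session_url("sub-1", "rg-1", "ws-1", "session-1")
```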
# File: promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_sessions_operations.py
import asyncio
import contextvars
import functools
import json
from pathlib import Path
from typing import Optional, Union
import httpx
from azure.core.exceptions import HttpResponseError
from azure.storage.blob.aio import BlobServiceClient
from promptflow._sdk._constants import DEFAULT_ENCODING, DownloadedRun
from promptflow._sdk._errors import DownloadInternalError, RunNotFoundError, RunOperationError
from promptflow._sdk.entities import Run
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow.exceptions import UserErrorException
logger = get_cli_sdk_logger()
class AsyncRunDownloader:
"""Download run results from the service asynchronously.
:param run: The run id.
:type run: str
:param run_ops: The run operations.
:type run_ops: ~promptflow.azure.operations.RunOperations
:param output_folder: The output folder to save the run results.
:type output_folder: Union[Path, str]
"""
IGNORED_PATTERN = ["__pycache__"]
def __init__(self, run: str, run_ops: "RunOperations", output_folder: Union[str, Path]) -> None:
self.run = run
self.run_ops = run_ops
self.datastore = run_ops._workspace_default_datastore
self.output_folder = Path(output_folder)
self.blob_service_client = self._init_blob_service_client()
self._use_flow_outputs = False # old runtime does not write debug_info output asset, use flow_outputs instead
def _init_blob_service_client(self):
logger.debug("Initializing blob service client.")
account_url = f"{self.datastore.account_name}.blob.{self.datastore.endpoint}"
return BlobServiceClient(account_url=account_url, credential=self.run_ops._credential)
async def download(self) -> str:
"""Download the run results asynchronously."""
error_msg_prefix = f"Failed to download run {self.run!r}"
try:
# pass verify=False to client to disable SSL verification.
# Source: https://github.com/encode/httpx/issues/1331
async with httpx.AsyncClient(verify=False) as client:
tasks = [
# put async functions in tasks to run in coroutines
self._download_artifacts_and_snapshot(client),
# below functions are actually synchronous functions in order to reuse code
# and use thread pool to avoid blocking the event loop
to_thread(self._download_run_metrics),
to_thread(self._download_run_logs),
]
await asyncio.gather(*tasks)
except RunNotFoundError as e:
raise RunOperationError(f"{error_msg_prefix}. Error: {e}") from e
except HttpResponseError as e:
if e.status_code == 403:
raise RunOperationError(
f"{error_msg_prefix}. User does not have permission to perform this operation on storage account "
f"{self.datastore.account_name!r} container {self.datastore.container_name!r}. "
f"Original azure blob error: {str(e)}"
)
else:
raise DownloadInternalError(f"{error_msg_prefix}. Error: {e}") from e
except Exception as e:
raise DownloadInternalError(f"{error_msg_prefix}. Error: {e}") from e
return self.output_folder.resolve().as_posix()
async def _download_artifacts_and_snapshot(self, httpx_client: httpx.AsyncClient):
run_data = await self._get_run_data_from_run_history(httpx_client)
logger.debug("Parsing run data from run history to get necessary information.")
# extract necessary information from run data
snapshot_id = run_data["runMetadata"]["properties"]["azureml.promptflow.snapshot_id"]
output_data = run_data["runMetadata"]["outputs"].get("debug_info", None)
if output_data is None:
logger.warning(
"Downloading run '%s' but the 'debug_info' output asset is not available, "
"possibly because the job ran on an old runtime version; trying to get the `flow_outputs` output asset instead.",
self.run,
)
self._use_flow_outputs = True
output_data = run_data["runMetadata"]["outputs"].get("flow_outputs", None)
output_asset_id = output_data["assetId"]
# save run metadata to run_metadata.json
logger.debug("Saving the run meta data.")
run_data = self.run_ops._refine_run_data_from_run_history(run_data)
run_data = Run._from_run_history_entity(run_data)
with open(self.output_folder / DownloadedRun.RUN_METADATA_FILE_NAME, "w", encoding=DEFAULT_ENCODING) as f:
json.dump(run_data._to_dict(), f, ensure_ascii=False)
async with self.blob_service_client:
container_name = self.datastore.container_name
logger.debug("Getting container client (%s) from workspace default datastore.", container_name)
container_client = self.blob_service_client.get_container_client(container_name)
async with container_client:
tasks = [
self._download_flow_artifacts(httpx_client, container_client, output_asset_id),
self._download_snapshot(httpx_client, container_client, snapshot_id),
]
await asyncio.gather(*tasks)
async def _get_run_data_from_run_history(self, client: httpx.AsyncClient):
"""Get the run data from the run history."""
logger.debug("Getting run data from run history.")
headers = self.run_ops._get_headers()
url = self.run_ops._run_history_endpoint_url + "/rundata"
payload = {
"runId": self.run,
"selectRunMetadata": True,
"selectRunDefinition": True,
"selectJobSpecification": True,
}
error_msg_prefix = "Failed to get run data from run history"
try:
response = await client.post(url, headers=headers, json=payload)
except Exception as e:
raise DownloadInternalError(f"{error_msg_prefix}. Error: {e}") from e
else:
if response.status_code == 200:
return response.json()
elif response.status_code == 404:
raise RunNotFoundError(f"{error_msg_prefix}. Run {self.run!r} not found.")
else:
raise DownloadInternalError(
f"{error_msg_prefix}. Code: {response.status_code}. Reason: {response.reason_phrase}"
)
def _download_run_metrics(
self,
):
"""Download the run metrics."""
logger.debug("Downloading run metrics.")
metrics = self.run_ops.get_metrics(self.run)
with open(self.output_folder / DownloadedRun.METRICS_FILE_NAME, "w", encoding=DEFAULT_ENCODING) as f:
json.dump(metrics, f, ensure_ascii=False)
logger.debug("Downloaded run metrics.")
async def _download_flow_artifacts(self, httpx_client: httpx.AsyncClient, container_client, output_data):
"""Download the output data."""
asset_path = await self._get_asset_path(httpx_client, output_data)
await self._download_blob_folder_from_asset_path(container_client, asset_path)
async def _download_blob_folder_from_asset_path(
self, container_client, asset_path: str, local_folder: Optional[Path] = None
):
"""Download the blob data from the data path."""
logger.debug("Downloading all blobs from data path prefix '%s'", asset_path)
if local_folder is None:
local_folder = self.output_folder
tasks = []
async for blob in container_client.list_blobs(name_starts_with=asset_path):
blob_client = container_client.get_blob_client(blob.name)
relative_path = Path(blob.name).relative_to(asset_path)
local_path = local_folder / relative_path
tasks.append(self._download_single_blob(blob_client, local_path))
await asyncio.gather(*tasks)
async def _download_single_blob(self, blob_client, local_path: Optional[Path] = None):
"""Download a single blob."""
if local_path is None:
local_path = Path(self.output_folder / blob_client.blob_name)
elif local_path.exists():
raise UserErrorException(f"Local file {local_path.resolve().as_posix()!r} already exists.")
# ignore some files
for item in self.IGNORED_PATTERN:
if item in blob_client.blob_name:
logger.warning(
"Ignoring file '%s' because it matches the ignored pattern '%s'", local_path.as_posix(), item
)
return None
logger.debug("Downloading blob '%s' to local path '%s'", blob_client.blob_name, local_path.resolve().as_posix())
local_path.parent.mkdir(parents=True, exist_ok=True)
async with blob_client:
with open(local_path, "wb") as f:
stream = await blob_client.download_blob()
async for chunk in stream.chunks():
f.write(chunk)
return local_path
async def _download_snapshot(self, httpx_client: httpx.AsyncClient, container_client, snapshot_id):
"""Download the flow snapshot."""
snapshot_urls = await self._get_flow_snapshot_urls(httpx_client, snapshot_id)
logger.debug("Downloading all snapshot blobs from snapshot urls.")
tasks = []
for url in snapshot_urls:
blob_name = url.split(self.datastore.container_name)[-1].lstrip("/")
blob_client = container_client.get_blob_client(blob_name)
relative_path = url.split(self.run)[-1].lstrip("/")
local_path = Path(self.output_folder / DownloadedRun.SNAPSHOT_FOLDER / relative_path)
tasks.append(self._download_single_blob(blob_client, local_path))
await asyncio.gather(*tasks)
async def _get_flow_snapshot_urls(self, httpx_client: httpx.AsyncClient, snapshot_id):
logger.debug("Getting flow snapshot blob urls from snapshot id by calling the content service.")
headers = self.run_ops._get_headers()
endpoint = self.run_ops._run_history_endpoint_url.replace("/history/v1.0", "/content/v2.0")
url = endpoint + "/snapshots/sas"
payload = {
"snapshotOrAssetId": snapshot_id,
}
error_msg_prefix = (
f"Failed to download flow snapshots with snapshot id {snapshot_id}, "
f"because the client failed to retrieve data from the content service"
)
try:
response = await httpx_client.post(url, headers=headers, json=payload)
except Exception as e:
raise DownloadInternalError(f"{error_msg_prefix}. Error: {e}") from e
else:
if response.status_code == 200:
return self._parse_snapshot_response(response.json())
elif response.status_code == 404:
raise DownloadInternalError(f"{error_msg_prefix}. Error: Snapshot id not found.")
else:
raise DownloadInternalError(
f"{error_msg_prefix}. Code: {response.status_code}. Reason: {response.reason_phrase}"
)
async def _get_asset_path(self, client: httpx.AsyncClient, asset_id):
"""Get the asset path from asset id."""
logger.debug("Getting asset path from asset id by calling the data service.")
headers = self.run_ops._get_headers()
endpoint = self.run_ops._run_history_endpoint_url.replace("/history", "/data")
url = endpoint + "/dataversion/getByAssetId"
payload = {
"value": asset_id,
}
error_msg_prefix = "Failed to download flow artifacts because the client failed to retrieve data from the data service"
try:
response = await client.post(url, headers=headers, json=payload)
except Exception as e:
raise DownloadInternalError(f"{error_msg_prefix}. Error: {e}") from e
if response.status_code != 200:
raise DownloadInternalError(
f"{error_msg_prefix}. Code: {response.status_code}. Reason: {response.reason_phrase}"
)
response_data = response.json()
data_path = response_data["dataVersion"]["dataUri"].split("/paths/")[-1]
if self._use_flow_outputs:
data_path = data_path.replace("flow_outputs", "flow_artifacts")
return data_path
def _parse_snapshot_response(self, response: dict):
"""Parse the snapshot response."""
urls = []
if response["absoluteUrl"]:
urls.append(response["absoluteUrl"])
for value in response["children"].values():
urls += self._parse_snapshot_response(value)
return urls
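The recursive walk above collects every `absoluteUrl` from the content service's nested tree. A standalone sketch of the same logic on a hypothetical payload (the payload shape is an assumption inferred from the keys the method reads):

```python
def parse_snapshot_response(response: dict) -> list:
    # Collect this node's URL (if any), then recurse into every child subtree.
    urls = []
    if response["absoluteUrl"]:
        urls.append(response["absoluteUrl"])
    for value in response["children"].values():
        urls += parse_snapshot_response(value)
    return urls

# Hypothetical payload shaped like the keys the method reads.
sample = {
    "absoluteUrl": None,  # folder node: no direct blob URL
    "children": {
        "flow.dag.yaml": {"absoluteUrl": "https://acct.blob/snap/flow.dag.yaml", "children": {}},
        "tools": {
            "absoluteUrl": None,
            "children": {
                "hello.py": {"absoluteUrl": "https://acct.blob/snap/tools/hello.py", "children": {}},
            },
        },
    },
}

urls = parse_snapshot_response(sample)
```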
def _download_run_logs(self):
"""Download the run logs."""
logger.debug("Downloading run logs.")
logs = self.run_ops._get_log(self.run)
with open(self.output_folder / DownloadedRun.LOGS_FILE_NAME, "w", encoding=DEFAULT_ENCODING) as f:
f.write(logs)
logger.debug("Downloaded run logs.")
@classmethod
def _from_run_operations(cls, run_ops: "RunOperations", run: str, output_folder: Union[str, Path]):
"""Create an instance from run operations."""
from azure.ai.ml.entities._datastore.azure_storage import AzureBlobDatastore
datastore = run_ops._workspace_default_datastore
if isinstance(datastore, AzureBlobDatastore):
return cls(run=run, run_ops=run_ops, output_folder=output_folder)
else:
raise UserErrorException(
f"Cannot download run {run!r} because the workspace default datastore is not supported. Supported ones "
f"are ['AzureBlobDatastore'], got {type(datastore).__name__!r}."
)
async def to_thread(func, /, *args, **kwargs):
# this is copied from asyncio.to_thread() in Python 3.9
# as it is not available in Python 3.8, which is the minimum supported version of promptflow
loop = asyncio.get_running_loop()
ctx = contextvars.copy_context()
func_call = functools.partial(ctx.run, func, *args, **kwargs)
return await loop.run_in_executor(None, func_call)
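The `to_thread` backport above can be exercised on its own. A minimal sketch (the backport is reproduced verbatim so the example is self-contained; `blocking_double` is a stand-in for any blocking call):

```python
import asyncio
import contextvars
import functools

async def to_thread(func, /, *args, **kwargs):
    # Same backport as above: run func in the default executor,
    # preserving the caller's contextvars context.
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()
    func_call = functools.partial(ctx.run, func, *args, **kwargs)
    return await loop.run_in_executor(None, func_call)

def blocking_double(n: int) -> int:
    return n * 2  # stands in for a blocking call (file IO, an HTTP request, ...)

async def main() -> int:
    # The event loop stays free while blocking_double runs on a worker thread.
    return await to_thread(blocking_double, 21)

result = asyncio.run(main())
```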
# File: promptflow/src/promptflow/promptflow/azure/operations/_async_run_downloader.py
from promptflow.exceptions import UserErrorException
class FailedToImportModule(UserErrorException):
pass
class FlowDefinitionError(UserErrorException):
pass
# File: promptflow/src/promptflow/promptflow/contracts/_errors.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from jinja2 import TemplateSyntaxError
from promptflow._utils.exception_utils import ExceptionPresenter, infer_error_code_from_class, remove_suffix
from promptflow.exceptions import (
ErrorTarget,
PromptflowException,
SystemErrorException,
UserErrorException,
ValidationException,
)
class InvalidCustomLLMTool(ValidationException):
"""Exception raised when package tool definition is wrong."""
pass
class ValueTypeUnresolved(ValidationException):
pass
class ToolValidationError(ValidationException):
def __init__(
self,
target: ErrorTarget = ErrorTarget.EXECUTOR,
**kwargs,
):
super().__init__(
target=target,
**kwargs,
)
class InvalidRequest(ValidationException):
def __init__(
self,
target: ErrorTarget = ErrorTarget.EXECUTOR,
**kwargs,
):
super().__init__(
target=target,
**kwargs,
)
class ConnectionNotFound(InvalidRequest):
pass
class InvalidBulkTestRequest(ValidationException):
def __init__(
self,
target: ErrorTarget = ErrorTarget.EXECUTOR,
**kwargs,
):
super().__init__(
target=target,
**kwargs,
)
class InvalidFlowRequest(ValidationException):
def __init__(
self,
target: ErrorTarget = ErrorTarget.EXECUTOR,
**kwargs,
):
super().__init__(
target=target,
**kwargs,
)
class NodeInputValidationError(InvalidFlowRequest):
pass
class DuplicateNodeName(InvalidFlowRequest):
pass
class EmptyOutputReference(InvalidFlowRequest):
pass
class OutputReferenceNotFound(InvalidFlowRequest):
pass
class InputReferenceNotFound(InvalidFlowRequest):
pass
class InputNotFound(InvalidFlowRequest):
pass
class InvalidAggregationInput(SystemErrorException):
pass
class InputNotFoundFromAncestorNodeOutput(SystemErrorException):
pass
class NoNodeExecutedError(SystemErrorException):
pass
class InputTypeError(InvalidFlowRequest):
pass
class InputParseError(InvalidFlowRequest):
pass
class InvalidConnectionType(InvalidFlowRequest):
pass
class NodeReferenceNotFound(InvalidFlowRequest):
pass
class NodeCircularDependency(InvalidFlowRequest):
pass
class InvalidNodeReference(InvalidFlowRequest):
pass
class NodeReferenceError(UserErrorException):
"""Exception raised when a node reference is not found or is unsupported."""
pass
class UnsupportedReference(NodeReferenceError):
pass
class InvalidReferenceProperty(NodeReferenceError):
pass
class OutputReferenceNotExist(NodeReferenceError):
pass
class NodeOutputNotFound(UserErrorException):
pass
class SingleNodeValidationError(UserErrorException):
pass
class LineExecutionTimeoutError(UserErrorException):
"""Exception raised when a single line execution times out."""
def __init__(self, line_number, timeout):
super().__init__(
message_format="Line {line_number} execution timeout for exceeding {timeout} seconds",
line_number=line_number,
timeout=timeout,
target=ErrorTarget.EXECUTOR,
)
class BatchExecutionTimeoutError(UserErrorException):
"""Exception raised when the batch timeout is exceeded."""
def __init__(self, line_number, timeout):
super().__init__(
message_format=(
"Line {line_number} execution terminated due to the "
"total batch run exceeding the batch timeout ({timeout}s)."
),
line_number=line_number,
timeout=timeout,
target=ErrorTarget.BATCH,
)
class ProcessCrashError(UserErrorException):
"""Exception raised when process crashed."""
def __init__(self, line_number):
super().__init__(message=f"Process crashed while executing line {line_number}.", target=ErrorTarget.EXECUTOR)
class ProcessTerminatedTimeout(SystemErrorException):
"""Exception raised when a process is not terminated within a period of time."""
def __init__(self, timeout):
super().__init__(message=f"Process has not terminated after {timeout} seconds", target=ErrorTarget.EXECUTOR)
class ProcessInfoObtainedTimeout(SystemErrorException):
"""Exception raised when process info is not obtained within a period of time."""
def __init__(self, timeout):
super().__init__(message=f"Failed to get process info after {timeout} seconds", target=ErrorTarget.EXECUTOR)
class SpawnedForkProcessManagerStartFailure(SystemErrorException):
"""Exception raised when failed to start spawned fork process manager."""
def __init__(self):
super().__init__(message="Failed to start spawned fork process manager", target=ErrorTarget.EXECUTOR)
class EmptyLLMApiMapping(UserErrorException):
"""Exception raised when connection_type_to_api_mapping is empty and the LLM node provider can't be inferred"""
def __init__(self):
super().__init__(
message="LLM api mapping is empty, please ensure 'promptflow-tools' package has been installed.",
target=ErrorTarget.EXECUTOR,
)
class ResolveToolError(PromptflowException):
"""Exception raised when tool load failed.
It is used to append the name of the failed node to the error message to improve the user experience.
It simply wraps the error thrown by the Resolve Tool phase.
It has the same additional_info and error_codes as inner error.
"""
def __init__(self, *, node_name: str, target: ErrorTarget = ErrorTarget.EXECUTOR, module: str = None):
self._node_name = node_name
super().__init__(target=target, module=module)
@property
def message(self):
if self.inner_exception:
error_type_and_message = f"({self.inner_exception.__class__.__name__}) {self.inner_exception}"
if isinstance(self.inner_exception, TemplateSyntaxError):
error_type_and_message = (
f"Jinja parsing failed at line {self.inner_exception.lineno}: {error_type_and_message}"
)
return remove_suffix(self._message, ".") + f": {error_type_and_message}"
return self._message
@property
def message_format(self):
return "Tool load failed in '{node_name}'."
@property
def message_parameters(self):
return {"node_name": self._node_name}
@property
def additional_info(self):
"""Get additional info from innererror when the innererror is PromptflowException"""
if isinstance(self.inner_exception, PromptflowException):
return self.inner_exception.additional_info
return None
@property
def error_codes(self):
"""The hierarchy of the error codes.
We follow the "Microsoft REST API Guidelines" to define error codes in a hierarchy style.
See the below link for details:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
Because ResolveToolError has no classification of its own,
its error_codes respect those of the inner error.
"""
if self.inner_exception:
return ExceptionPresenter.create(self.inner_exception).error_codes
return [infer_error_code_from_class(SystemErrorException), self.__class__.__name__]
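The message-wrapping behavior above can be illustrated with a minimal standalone class. The names below are hypothetical; the real `ResolveToolError` builds its message from `message_format` and the inner exception, but the pattern is the same:

```python
class WrappedToolError(Exception):
    # Minimal sketch of the wrapping pattern: keep our own base message,
    # then append "(InnerType) inner message" from the wrapped exception.
    def __init__(self, node_name: str, inner: Exception):
        self.node_name = node_name
        self.inner = inner
        super().__init__(self.message)

    @property
    def message(self) -> str:
        base = f"Tool load failed in '{self.node_name}'"
        return f"{base}: ({self.inner.__class__.__name__}) {self.inner}"

err = WrappedToolError("my_node", ValueError("bad input"))
```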
class UnsupportedAssistantToolType(ValidationException):
pass
class InvalidFlowFileError(UserErrorException):
pass
# File: promptflow/src/promptflow/promptflow/executor/_errors.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
class DuplicatedPrimaryKeyException(Exception):
pass
class NotFoundException(Exception):
pass
# File: promptflow/src/promptflow/promptflow/storage/_errors.py
import os
from dataclasses import is_dataclass
from pathlib import Path
from tempfile import mkdtemp
import pytest
from promptflow.batch._batch_engine import OUTPUT_FILE_NAME, BatchEngine
from promptflow.batch._result import BatchResult, LineResult
from promptflow.contracts.run_info import Status
from promptflow.executor._script_executor import ScriptExecutor
from promptflow.executor.flow_executor import FlowExecutor
from ..utils import (
EAGER_FLOW_ROOT,
get_bulk_inputs_from_jsonl,
get_entry_file,
get_flow_folder,
get_flow_inputs_file,
get_yaml_file,
load_jsonl,
)
SAMPLE_FLOW = "web_classification_no_variants"
SAMPLE_EVAL_FLOW = "classification_accuracy_evaluation"
SAMPLE_FLOW_WITH_PARTIAL_FAILURE = "python_tool_partial_failure"
def validate_batch_result(batch_result: BatchResult, flow_folder, output_dir, ensure_output):
assert isinstance(batch_result, BatchResult)
nlines = len(get_bulk_inputs_from_jsonl(flow_folder, root=EAGER_FLOW_ROOT))
assert batch_result.total_lines == nlines
assert batch_result.completed_lines == nlines
assert batch_result.start_time < batch_result.end_time
assert batch_result.system_metrics.duration > 0
outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
assert len(outputs) == nlines
for i, output in enumerate(outputs):
assert isinstance(output, dict)
assert "line_number" in output, f"line_number is not in {i}th output {output}"
assert output["line_number"] == i, f"line_number is not correct in {i}th output {output}"
assert ensure_output(output)
@pytest.mark.usefixtures("dev_connections")
@pytest.mark.e2etest
class TestEagerFlow:
@pytest.mark.parametrize(
"flow_folder, entry, inputs, ensure_output",
[
(
"dummy_flow_with_trace",
"my_flow",
{"text": "text", "models": ["model"]},
lambda x: x == "dummy_output"
),
(
"flow_with_dataclass_output",
"my_flow",
{"text": "text", "models": ["model"]},
lambda x: is_dataclass(x) and x.text == "text" and x.models == ["model"]
),
]
)
def test_flow_run(self, flow_folder, entry, inputs, ensure_output):
# Test submitting eager flow to script executor
flow_file = get_entry_file(flow_folder, root=EAGER_FLOW_ROOT)
executor = ScriptExecutor(flow_file=flow_file, entry=entry)
line_result = executor.exec_line(inputs=inputs, index=0)
assert isinstance(line_result, LineResult)
assert ensure_output(line_result.output)
# Test submitting eager flow to flow executor
working_dir = get_flow_folder(flow_folder, root=EAGER_FLOW_ROOT)
os.chdir(working_dir)
flow_file = get_yaml_file(flow_folder, root=EAGER_FLOW_ROOT)
executor = FlowExecutor.create(flow_file=flow_file, connections={})
line_result = executor.exec_line(inputs=inputs, index=0)
assert isinstance(line_result, LineResult)
assert ensure_output(line_result.output)
@pytest.mark.parametrize(
"flow_folder, inputs, ensure_output",
[
(
"dummy_flow_with_trace",
{"text": "text", "models": ["model"]},
lambda x: x == "dummy_output"
),
(
"flow_with_dataclass_output",
{"text": "text", "models": ["model"]},
lambda x: is_dataclass(x) and x.text == "text" and x.models == ["model"]
),
]
)
def test_flow_run_with_flow_yaml(self, flow_folder, inputs, ensure_output):
working_dir = get_flow_folder(flow_folder, root=EAGER_FLOW_ROOT)
os.chdir(working_dir)
flow_file = get_yaml_file(flow_folder, root=EAGER_FLOW_ROOT)
executor = FlowExecutor.create(flow_file=flow_file, connections={})
line_result = executor.exec_line(inputs=inputs, index=0)
assert isinstance(line_result, LineResult)
assert ensure_output(line_result.output)
def test_exec_line_with_invalid_case(self):
flow_file = get_entry_file("dummy_flow_with_exception", root=EAGER_FLOW_ROOT)
executor = ScriptExecutor(flow_file=flow_file, entry="my_flow")
line_result = executor.exec_line(inputs={"text": "text"}, index=0)
assert isinstance(line_result, LineResult)
assert line_result.output is None
assert line_result.run_info.status == Status.Failed
assert "dummy exception" in line_result.run_info.error["message"]
@pytest.mark.parametrize(
"flow_folder, inputs_mapping, entry, ensure_output",
[
(
"dummy_flow_with_trace",
{"text": "${data.text}", "models": "${data.models}"},
"my_flow",
lambda x: "output" in x and x["output"] == "dummy_output",
),
(
"flow_with_dataclass_output",
{"text": "${data.text}", "models": "${data.models}"},
"my_flow",
lambda x: x["text"] == "text" and isinstance(x["models"], list),
),
(
"flow_with_dataclass_output",
{},  # if inputs_mapping is empty, the inputs fall back to their default values
"my_flow",
lambda x: x["text"] == "default_text" and x["models"] == ["default_model"],
)
]
)
def test_batch_run(self, flow_folder, entry, inputs_mapping, ensure_output):
batch_engine = BatchEngine(
get_entry_file(flow_folder, root=EAGER_FLOW_ROOT),
get_flow_folder(flow_folder, root=EAGER_FLOW_ROOT),
entry=entry,
)
input_dirs = {"data": get_flow_inputs_file(flow_folder, root=EAGER_FLOW_ROOT)}
output_dir = Path(mkdtemp())
batch_result = batch_engine.run(input_dirs, inputs_mapping, output_dir)
validate_batch_result(batch_result, flow_folder, output_dir, ensure_output)
@pytest.mark.parametrize(
"flow_folder, inputs_mapping, ensure_output",
[
(
"dummy_flow_with_trace",
{"text": "${data.text}", "models": "${data.models}"},
lambda x: "output" in x and x["output"] == "dummy_output",
),
(
"flow_with_dataclass_output",
{"text": "${data.text}", "models": "${data.models}"},
lambda x: x["text"] == "text" and isinstance(x["models"], list),
),
]
)
def test_batch_run_with_flow_yaml(self, flow_folder, inputs_mapping, ensure_output):
batch_engine = BatchEngine(
get_yaml_file(flow_folder, root=EAGER_FLOW_ROOT),
get_flow_folder(flow_folder, root=EAGER_FLOW_ROOT),
)
input_dirs = {"data": get_flow_inputs_file(flow_folder, root=EAGER_FLOW_ROOT)}
output_dir = Path(mkdtemp())
batch_result = batch_engine.run(input_dirs, inputs_mapping, output_dir)
validate_batch_result(batch_result, flow_folder, output_dir, ensure_output)
def test_batch_run_with_invalid_case(self):
flow_folder = "dummy_flow_with_exception"
batch_engine = BatchEngine(
get_entry_file(flow_folder, root=EAGER_FLOW_ROOT),
get_flow_folder(flow_folder, root=EAGER_FLOW_ROOT),
entry="my_flow",
)
input_dirs = {"data": get_flow_inputs_file(flow_folder, root=EAGER_FLOW_ROOT)}
output_dir = Path(mkdtemp())
batch_result = batch_engine.run(input_dirs, {"text": "${data.text}"}, output_dir)
assert isinstance(batch_result, BatchResult)
nlines = len(get_bulk_inputs_from_jsonl(flow_folder, root=EAGER_FLOW_ROOT))
assert batch_result.total_lines == nlines
assert batch_result.failed_lines == nlines
assert batch_result.start_time < batch_result.end_time
assert batch_result.system_metrics.duration > 0
| promptflow/src/promptflow/tests/executor/e2etests/test_eager_flow.py |
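The batch tests above drive `BatchEngine.run` with an `inputs_mapping` whose values are `${data.<field>}` placeholders resolved against each input line, and an empty mapping falls back to the flow's defaults. A minimal sketch of that resolution, using a hypothetical `apply_inputs_mapping` helper that is not promptflow's actual mapping code:

```python
import re

def apply_inputs_mapping(line, mapping, defaults=None):
    """Hypothetical helper sketching the "${data.<field>}" resolution the
    batch tests exercise; not promptflow's actual implementation."""
    if not mapping:
        # An empty mapping falls back to the defaults, mirroring the
        # third parametrized case in test_batch_run above.
        return dict(defaults or {})
    resolved = {}
    for name, value in mapping.items():
        match = re.fullmatch(r"\$\{data\.(\w+)\}", str(value))
        # Placeholders are looked up in the data line; anything else is
        # passed through as a literal.
        resolved[name] = line[match.group(1)] if match else value
    return resolved
```

Under this sketch, the mapping `{"text": "${data.text}", "models": "${data.models}"}` applied to the line `{"text": "text", "models": ["model"]}` yields exactly the inputs the parametrized cases assert on.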
[
{
"text": "Hello"
},
{
"text": "Hello World!"
}
]
| promptflow/src/promptflow/tests/executor/package_tools/custom_llm_tool/samples.json |
import pytest
from promptflow._core.metric_logger import MetricLoggerManager, add_metric_logger, log_metric, remove_metric_logger
@pytest.mark.unittest
class TestMetricLogger:
def test_add_and_remove_metric_logger(self):
# define log metric function
metrics = {}
def _log_metric(key, value):
metrics[key] = value
def _log_metric_invalid(key, value, variant_id, extra_param):
metrics[key] = {variant_id: {value: extra_param}}
add_metric_logger(_log_metric)
assert MetricLoggerManager.get_instance()._metric_loggers == [_log_metric]
add_metric_logger(_log_metric)
assert MetricLoggerManager.get_instance()._metric_loggers == [_log_metric]
add_metric_logger(_log_metric_invalid)
assert MetricLoggerManager.get_instance()._metric_loggers == [_log_metric]
add_metric_logger("test")
assert MetricLoggerManager.get_instance()._metric_loggers == [_log_metric]
remove_metric_logger(_log_metric)
assert MetricLoggerManager.get_instance()._metric_loggers == []
def test_log_metric(self):
# define log metric function
metrics = {}
def _log_metric(key, value):
metrics[key] = value
def _log_metric_with_variant_id(key, value, variant_id):
metrics[key] = {variant_id: value}
add_metric_logger(_log_metric)
log_metric("test1", 1)
assert metrics == {"test1": 1}
add_metric_logger(_log_metric_with_variant_id)
log_metric("test2", 1, "line_0")
assert metrics == {"test1": 1, "test2": {"line_0": 1}}
| promptflow/src/promptflow/tests/executor/unittests/_core/test_metric_logger.py |
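The assertions above pin down the manager's registration rules: duplicate loggers are stored once, and non-callables or callables with the wrong arity are silently ignored. A self-contained sketch of that behavior (a simplified stand-in, not promptflow's `MetricLoggerManager`):

```python
import inspect

class MiniMetricLoggerManager:
    """Simplified stand-in for the registration rules the tests above
    assert; not promptflow's actual MetricLoggerManager."""

    def __init__(self):
        self._metric_loggers = []

    def add(self, logger):
        if not callable(logger):
            return  # non-callables are silently ignored
        n_params = len(inspect.signature(logger).parameters)
        if n_params not in (2, 3):  # (key, value) or (key, value, variant_id)
            return  # wrong arity is silently ignored too
        if logger not in self._metric_loggers:
            self._metric_loggers.append(logger)  # duplicates stored once

    def remove(self, logger):
        if logger in self._metric_loggers:
            self._metric_loggers.remove(logger)

    def log(self, key, value, variant_id=None):
        # Two-parameter loggers are always called; three-parameter
        # loggers only when a variant_id is supplied.
        for logger in self._metric_loggers:
            if len(inspect.signature(logger).parameters) == 2:
                logger(key, value)
            elif variant_id is not None:
                logger(key, value, variant_id)
```

Dispatching on arity via `inspect.signature` is what lets `log_metric("test2", 1, "line_0")` update both the plain and the variant-aware loggers in `test_log_metric`.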
import re
from pathlib import Path
from unittest.mock import MagicMock, mock_open, patch
import pytest
from promptflow._utils._errors import InvalidImageInput, LoadMultimediaDataError
from promptflow._utils.multimedia_utils import (
_create_image_from_base64,
_create_image_from_file,
_create_image_from_url,
_process_multimedia_dict_recursively,
_process_recursively,
convert_multimedia_data_to_base64,
create_image,
load_multimedia_data,
persist_multimedia_data,
resolve_multimedia_data_recursively,
)
from promptflow.contracts.flow import FlowInputDefinition
from promptflow.contracts.multimedia import Image
from promptflow.contracts.tool import ValueType
from ...utils import DATA_ROOT
TEST_IMAGE_PATH = DATA_ROOT / "logo.jpg"
@pytest.mark.unittest
class TestMultimediaUtils:
@pytest.mark.parametrize("image_path", ["logo.jpg", "logo.png", "logo.webp", "logo.gif"])
def test_create_image_from_base64(self, image_path):
image = _create_image_from_file(DATA_ROOT / image_path)
base64_str = image.to_base64()
image_from_base64 = _create_image_from_base64(base64_str)
assert str(image) == str(image_from_base64)
format = image_path.split(".")[-1]
mime_type = f"image/{format}" if format != "jpg" else "image/jpeg"
assert mime_type == image_from_base64._mime_type
@patch("requests.get")
def test_create_image_from_url_with_mime_type(self, mock_get):
url = "https://example.com/image.jpg"
content = b"image content"
mime_type = "image/jpeg"
mock_get.return_value = MagicMock(status_code=200, content=content)
image = _create_image_from_url(url, mime_type)
assert isinstance(image, Image)
assert image._mime_type == mime_type
assert image.source_url == url
@patch("requests.get")
def test_create_image_from_url_failure(self, mock_get):
url = "https://example.com/image.jpg"
message = "Failed to fetch image"
code = 404
mock_get.return_value = MagicMock(status_code=code, text=message)
with pytest.raises(InvalidImageInput) as ex:
_create_image_from_url(url)
expected_message = f"Failed to fetch image from URL: {url}. Error code: {code}. Error message: {message}."
assert str(ex.value) == expected_message
def test_create_image_with_dict(self, mocker):
## From path
image_dict = {"data:image/jpg;path": TEST_IMAGE_PATH}
image_from_path = create_image(image_dict)
assert image_from_path._mime_type == "image/jpg"
## From base64
image_dict = {"data:image/jpg;base64": image_from_path.to_base64()}
image_from_base64 = create_image(image_dict)
assert str(image_from_path) == str(image_from_base64)
assert image_from_base64._mime_type == "image/jpg"
## From url
mocker.patch("requests.get", return_value=mocker.Mock(content=image_from_path, status_code=200))
image_dict = {"data:image/jpg;url": ""}
image_from_url = create_image(image_dict)
assert str(image_from_path) == str(image_from_url)
assert image_from_url._mime_type == "image/jpg"
mocker.patch("requests.get", return_value=mocker.Mock(content=None, status_code=404))
with pytest.raises(InvalidImageInput) as ex:
create_image(image_dict)
assert "Failed to fetch image from URL" in ex.value.message_format
def test_create_image_with_string(self, mocker):
## From path
image_from_path = create_image(str(TEST_IMAGE_PATH))
assert image_from_path._mime_type == "image/jpeg"
# From base64
image_from_base64 = create_image(image_from_path.to_base64())
assert str(image_from_path) == str(image_from_base64)
assert image_from_base64._mime_type == "image/jpeg"
## From url
mocker.patch("promptflow._utils.multimedia_utils._is_url", return_value=True)
mocker.patch("promptflow._utils.multimedia_utils._is_base64", return_value=False)
mocker.patch("requests.get", return_value=mocker.Mock(content=image_from_path, status_code=200))
image_from_url = create_image("Test")
assert str(image_from_path) == str(image_from_url)
assert image_from_url._mime_type == "image/jpeg"
## From image
image_from_image = create_image(image_from_path)
assert str(image_from_path) == str(image_from_image)
def test_create_image_with_invalid_cases(self):
# Test invalid input type
with pytest.raises(InvalidImageInput) as ex:
create_image(0)
assert "Unsupported image input type" in ex.value.message_format
# Test invalid image dict
with pytest.raises(InvalidImageInput) as ex:
invalid_image_dict = {"invalid_image": "invalid_image"}
create_image(invalid_image_dict)
assert "Invalid image input format" in ex.value.message_format
# Test none or empty input value
with pytest.raises(InvalidImageInput) as ex:
create_image(None)
assert "Unsupported image input type" in ex.value.message_format
with pytest.raises(InvalidImageInput) as ex:
create_image("")
assert "The image input should not be empty." in ex.value.message_format
    def test_persist_multimedia_data(self, mocker):
image = _create_image_from_file(TEST_IMAGE_PATH)
mocker.patch("builtins.open", mock_open())
data = {"image": image, "images": [image, image, "other_data"], "other_data": "other_data"}
persisted_data = persist_multimedia_data(data, base_dir=Path(__file__).parent)
file_name = re.compile(r"^[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12}.jpeg$")
assert re.match(file_name, persisted_data["image"]["data:image/jpeg;path"])
assert re.match(file_name, persisted_data["images"][0]["data:image/jpeg;path"])
assert re.match(file_name, persisted_data["images"][1]["data:image/jpeg;path"])
    def test_convert_multimedia_data_to_base64(self):
image = _create_image_from_file(TEST_IMAGE_PATH)
data = {"image": image, "images": [image, image, "other_data"], "other_data": "other_data"}
base64_data = convert_multimedia_data_to_base64(data)
assert base64_data == {
"image": image.to_base64(),
"images": [image.to_base64(), image.to_base64(), "other_data"],
"other_data": "other_data",
}
base64_data = convert_multimedia_data_to_base64(data, with_type=True)
prefix = f"data:{image._mime_type};base64,"
assert base64_data == {
"image": prefix + image.to_base64(),
"images": [prefix + image.to_base64(), prefix + image.to_base64(), "other_data"],
"other_data": "other_data",
}
def test_load_multimedia_data(self):
# Case 1: Test normal node
inputs = {
"image": FlowInputDefinition(type=ValueType.IMAGE),
"images": FlowInputDefinition(type=ValueType.LIST),
"object": FlowInputDefinition(type=ValueType.OBJECT),
}
image_dict = {"data:image/jpg;path": str(TEST_IMAGE_PATH)}
line_inputs = {
"image": image_dict,
"images": [image_dict, image_dict],
"object": {"image": image_dict, "other_data": "other_data"},
}
updated_inputs = load_multimedia_data(inputs, line_inputs)
image = _create_image_from_file(TEST_IMAGE_PATH)
assert updated_inputs == {
"image": image,
"images": [image, image],
"object": {"image": image, "other_data": "other_data"},
}
# Case 2: Test aggregation node
line_inputs = {
"image": [image_dict, image_dict],
"images": [[image_dict, image_dict], [image_dict]],
"object": [{"image": image_dict, "other_data": "other_data"}, {"other_data": "other_data"}],
}
updated_inputs = load_multimedia_data(inputs, line_inputs)
assert updated_inputs == {
"image": [image, image],
"images": [[image, image], [image]],
"object": [{"image": image, "other_data": "other_data"}, {"other_data": "other_data"}],
}
# Case 3: Test invalid input type
with pytest.raises(LoadMultimediaDataError) as ex:
line_inputs = {"image": 0}
load_multimedia_data(inputs, line_inputs)
assert (
"Failed to load image for input 'image': "
"(InvalidImageInput) Unsupported image input type") in ex.value.message
def test_resolve_multimedia_data_recursively(self):
image_dict = {"data:image/jpg;path": "logo.jpg"}
value = {
"image": image_dict,
"images": [image_dict, image_dict],
"object": {"image": image_dict, "other_data": "other_data"},
}
input_dir = TEST_IMAGE_PATH
updated_value = resolve_multimedia_data_recursively(input_dir, value)
updated_image_dict = {"data:image/jpg;path": str(DATA_ROOT / "logo.jpg")}
assert updated_value == {
"image": updated_image_dict,
"images": [updated_image_dict, updated_image_dict],
"object": {"image": updated_image_dict, "other_data": "other_data"},
}
def test_process_recursively(self):
image = _create_image_from_file(TEST_IMAGE_PATH)
value = {"image": image, "images": [image, image], "object": {"image": image, "other_data": "other_data"}}
process_funcs = {Image: lambda x: str(x)}
updated_value = _process_recursively(value, process_funcs)
image_str = str(image)
assert updated_value == {
"image": image_str,
"images": [image_str, image_str],
"object": {"image": image_str, "other_data": "other_data"},
}
assert value != updated_value
def test_process_recursively_inplace(self):
image = _create_image_from_file(TEST_IMAGE_PATH)
value = {"image": image, "images": [image, image], "object": {"image": image, "other_data": "other_data"}}
process_funcs = {Image: lambda x: str(x)}
_process_recursively(value, process_funcs, inplace=True)
image_str = str(image)
assert value == {
"image": image_str,
"images": [image_str, image_str],
"object": {"image": image_str, "other_data": "other_data"},
}
def test_process_multimedia_dict_recursively(self):
def process_func(image_dict):
return "image_placeholder"
image_dict = {"data:image/jpg;path": "logo.jpg"}
value = {
"image": image_dict,
"images": [image_dict, image_dict],
"object": {"image": image_dict, "other_data": "other_data"},
}
updated_value = _process_multimedia_dict_recursively(value, process_func)
assert updated_value == {
"image": "image_placeholder",
"images": ["image_placeholder", "image_placeholder"],
"object": {"image": "image_placeholder", "other_data": "other_data"},
}
| promptflow/src/promptflow/tests/executor/unittests/_utils/test_multimedia_utils.py |
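`test_process_recursively` exercises a type-dispatched walk over nested dicts and lists, with an optional in-place mode. A minimal sketch of the pattern (exact-type dispatch only; promptflow's private helper covers more container types):

```python
def process_recursively(value, process_funcs, inplace=False):
    """Walk dicts/lists and apply a per-type function to matching leaves.

    Sketch of the recursion pattern test_process_recursively exercises;
    dispatch is on the exact type, so subclasses are not matched.
    """
    t = type(value)
    if t in process_funcs:
        return process_funcs[t](value)
    if isinstance(value, list):
        result = value if inplace else []
        for i, item in enumerate(value):
            new = process_recursively(item, process_funcs, inplace)
            if inplace:
                value[i] = new
            else:
                result.append(new)
        return result
    if isinstance(value, dict):
        result = value if inplace else {}
        for key, item in list(value.items()):
            result[key] = process_recursively(item, process_funcs, inplace)
        return result
    return value  # unmatched leaves pass through unchanged
```

With `process_funcs = {Image: str}` this reproduces the tests' behavior: the non-inplace call returns a new structure and leaves the original holding `Image` objects, while `inplace=True` mutates the containers directly.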
from enum import Enum
from typing import Any, Callable, NewType, Optional, Tuple, TypeVar, Union
import pytest
from promptflow._core.tools_manager import connections
from promptflow._sdk.entities import CustomStrongTypeConnection
from promptflow._sdk.entities._connection import AzureContentSafetyConnection
from promptflow.contracts.multimedia import Image
from promptflow.contracts.run_info import Status
from promptflow.contracts.tool import (
AssistantDefinition,
ConnectionType,
InputDefinition,
OutputDefinition,
Tool,
ToolType,
ValueType,
_deserialize_enum,
)
from promptflow.contracts.types import FilePath, PromptTemplate, Secret
class MyConnection(CustomStrongTypeConnection):
pass
my_connection = MyConnection(name="my_connection", secrets={"key": "value"})
def some_function():
pass
class TestStatus(Enum):
Running = 1
Preparing = 2
Completed = 3
@pytest.mark.unittest
@pytest.mark.parametrize(
"enum, value, expected",
[
(Status, "Running", Status.Running),
(Status, "running", Status.Running),
(Status, "FAILED", Status.Failed),
(Status, "UNKNOWN", "UNKNOWN"),
(TestStatus, "Running", "Running"),
],
)
def test_deserialize_enum(enum, value, expected):
assert _deserialize_enum(enum, value) == expected
@pytest.mark.unittest
class TestValueType:
@pytest.mark.parametrize(
"value, expected",
[
(1, ValueType.INT),
(1.0, ValueType.DOUBLE),
(True, ValueType.BOOL),
("string", ValueType.STRING),
([], ValueType.LIST),
({}, ValueType.OBJECT),
(Secret("secret"), ValueType.SECRET),
(PromptTemplate("prompt"), ValueType.PROMPT_TEMPLATE),
(FilePath("file_path"), ValueType.FILE_PATH),
(AssistantDefinition("model", "instructions", []), ValueType.ASSISTANT_DEFINITION),
],
)
def test_from_value(self, value, expected):
assert ValueType.from_value(value) == expected
@pytest.mark.parametrize(
"value, expected",
[
(int, ValueType.INT),
(float, ValueType.DOUBLE),
(bool, ValueType.BOOL),
(str, ValueType.STRING),
(list, ValueType.LIST),
(dict, ValueType.OBJECT),
(Secret, ValueType.SECRET),
(PromptTemplate, ValueType.PROMPT_TEMPLATE),
(FilePath, ValueType.FILE_PATH),
(Image, ValueType.IMAGE),
(AssistantDefinition, ValueType.ASSISTANT_DEFINITION),
],
)
def test_from_type(self, value, expected):
assert ValueType.from_type(value) == expected
@pytest.mark.parametrize(
"value, value_type, expected",
[
("1", ValueType.INT, 1),
("1.0", ValueType.DOUBLE, 1.0),
("true", ValueType.BOOL, True),
("false", ValueType.BOOL, False),
(True, ValueType.BOOL, True),
(123, ValueType.STRING, "123"),
('["a", "b", "c"]', ValueType.LIST, ["a", "b", "c"]),
('{"key": "value"}', ValueType.OBJECT, {"key": "value"}),
("[1, 2, 3]", ValueType.OBJECT, [1, 2, 3]),
("{", ValueType.OBJECT, "{"),
([1, 2, 3], ValueType.OBJECT, [1, 2, 3]),
],
)
def test_parse(self, value, value_type, expected):
assert value_type.parse(value) == expected
@pytest.mark.parametrize(
"value, value_type",
[
("1", ValueType.BOOL),
({}, ValueType.LIST),
],
)
def test_parse_error(self, value, value_type):
with pytest.raises(ValueError):
value_type.parse(value)
@pytest.mark.unittest
class TestConnectionType:
@pytest.mark.parametrize(
"type_name, expected",
[
("AzureContentSafetyConnection", connections.get("AzureContentSafetyConnection")),
("AzureOpenAIConnection", connections.get("AzureOpenAIConnection")),
("_Connection", connections.get("_Connection")),
("unknown", None),
(123, None),
],
)
def test_get_connection_class(self, type_name, expected):
assert ConnectionType.get_connection_class(type_name) == expected
@pytest.mark.parametrize(
"type_name, expected",
[
("AzureContentSafetyConnection", True),
("AzureOpenAIConnection", True),
("_Connection", True),
("unknown", False),
(123, False),
],
)
def test_is_connection_class_name(self, type_name, expected):
assert ConnectionType.is_connection_class_name(type_name) == expected
@pytest.mark.parametrize(
"value, expected",
[
(connections.get("AzureContentSafetyConnection"), True),
(AzureContentSafetyConnection("api_key", "endpoint"), True),
(Status, False),
        ("non_connection_instance", False),
],
)
def test_is_connection_value(self, value, expected):
assert ConnectionType.is_connection_value(value) == expected
@pytest.mark.parametrize(
"val, expected_res",
[
(my_connection, True),
(MyConnection, True),
(list, False),
# (list[str], False), # Python 3.9
# (list[int], False),
([1, 2, 3], False),
(float, False),
(int, False),
(5, False),
(str, False),
(some_function, False),
(Union[str, int], False),
# ((int | str), False), # Python 3.10
(tuple, False),
# (tuple[str, int], False), # Python 3.9
(Tuple[int, ...], False),
# (dict[str, Any], False), # Python 3.9
({"test1": [1, 2, 3], "test2": [4, 5, 6], "test3": [7, 8, 9]}, False),
(Any, False),
(None, False),
(Optional[str], False),
(TypeVar("T"), False),
(TypeVar, False),
(Callable, False),
(Callable[..., Any], False),
(NewType("MyType", int), False),
],
)
def test_is_custom_strong_type(self, val, expected_res):
assert ConnectionType.is_custom_strong_type(val) == expected_res
def test_serialize_conn(self):
assert ConnectionType.serialize_conn(AzureContentSafetyConnection) == "ABCMeta"
connection_instance = AzureContentSafetyConnection("api_key", "endpoint")
assert ConnectionType.serialize_conn(connection_instance) == "AzureContentSafetyConnection"
with pytest.raises(ValueError):
ConnectionType.serialize_conn("non_connection_instance")
@pytest.mark.unittest
class TestInputDefinition:
def test_serialize(self):
# test when len(type) == 1
input_def = InputDefinition(
[ValueType.STRING],
default="Default",
description="Description",
enum=["A", "B", "C"],
custom_type=["customtype"],
)
serialized = input_def.serialize()
assert serialized == {
"type": "string",
"default": "Default",
"description": "Description",
"enum": ["A", "B", "C"],
"custom_type": ["customtype"],
}
# test when len(type) > 1
input_def = InputDefinition([ValueType.STRING, ValueType.INT])
serialized = input_def.serialize()
assert serialized == {"type": ["string", "int"]}
def test_deserialize(self):
serialized = {"type": "string", "default": "Default", "description": "Description", "enum": ["A", "B", "C"]}
deserialized = InputDefinition.deserialize(serialized)
assert deserialized.type == [ValueType.STRING]
assert deserialized.default == "Default"
assert deserialized.description == "Description"
assert deserialized.enum == ["A", "B", "C"]
serialized = {"type": ["string", "int"]}
deserialized = InputDefinition.deserialize(serialized)
assert deserialized.type == [ValueType.STRING, ValueType.INT]
assert deserialized.default == ""
assert deserialized.description == ""
assert deserialized.enum == []
@pytest.mark.unittest
class TestOutDefinition:
@pytest.mark.parametrize(
"value, expected",
[
(
OutputDefinition([ValueType.STRING], description="Description", is_property=True),
{"type": "string", "description": "Description", "is_property": True},
),
(OutputDefinition([ValueType.STRING, ValueType.INT]), {"type": ["string", "int"], "is_property": False}),
],
)
def test_serialize(self, value, expected):
assert value.serialize() == expected
@pytest.mark.parametrize(
"value, expected",
[
(
{"type": "string", "description": "Description", "is_property": True},
OutputDefinition([ValueType.STRING], description="Description", is_property=True),
),
({"type": ["string", "int"]}, OutputDefinition([ValueType.STRING, ValueType.INT])),
],
)
def test_deserialize(self, value, expected):
assert OutputDefinition.deserialize(value) == expected
@pytest.mark.unittest
class TestTool:
@pytest.mark.parametrize(
"tool_type, expected_keys",
[
(ToolType._ACTION, ["name", "description", "enable_kwargs"]),
(ToolType.LLM, ["name", "type", "inputs", "description", "enable_kwargs"]),
],
)
def test_serialize_tool(self, tool_type, expected_keys):
tool = Tool(name="test_tool", type=tool_type, inputs={}, outputs={}, description="description")
serialized_tool = tool.serialize()
assert set(serialized_tool.keys()) == set(expected_keys)
def test_deserialize_tool(self):
data = {
"name": "test_tool",
"type": "LLM",
"inputs": {"input1": {"type": "ValueType1"}},
}
tool = Tool.deserialize(data)
assert tool.name == data["name"]
assert tool.type == ToolType[data["type"]]
assert "input1" in tool.inputs
@pytest.mark.parametrize(
"tooltype, connection_type, expected",
[
(ToolType.LLM, None, True),
(ToolType._ACTION, ["AzureContentSafetyConnection"], True),
(ToolType._ACTION, None, False),
],
)
def test_require_connection(self, tooltype, connection_type, expected):
tool = Tool(name="Test Tool", type=tooltype, inputs={}, connection_type=connection_type)
assert tool._require_connection() == expected
| promptflow/src/promptflow/tests/executor/unittests/contracts/test_tool.py |
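`test_deserialize_enum` pins down three behaviors: matching is by member value, it is case-insensitive, and unmatched values pass through unchanged (which is why `TestStatus`, whose values are ints, never matches the string `"Running"`). A sketch consistent with those assertions, using a stand-in `Status` enum rather than promptflow's real one:

```python
from enum import Enum

def deserialize_enum(enum_cls, value):
    """Sketch of the case-insensitive, value-based lookup asserted by
    test_deserialize_enum; unknown values pass through unchanged."""
    for member in enum_cls:
        if str(member.value).lower() == str(value).lower():
            return member
    return value

class Status(Enum):  # stand-in for promptflow.contracts.run_info.Status
    Running = "Running"
    Failed = "Failed"

assert deserialize_enum(Status, "running") is Status.Running
assert deserialize_enum(Status, "UNKNOWN") == "UNKNOWN"
```

Falling back to the raw value instead of raising keeps deserialization tolerant of statuses added by newer writers.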
import pytest
from multiprocessing import Queue
from promptflow.executor._line_execution_process_pool import QueueRunStorage
from promptflow.contracts.run_info import FlowRunInfo
from promptflow.contracts.run_info import RunInfo as NodeRunInfo
@pytest.mark.unittest
class TestLineExecutionProcessPool:
def test_persist_node_run(self):
queue = Queue()
run_storage = QueueRunStorage(queue)
node_run_info = NodeRunInfo(
node="node1",
flow_run_id="flow_run_id",
run_id="run_id",
status="status",
inputs="inputs",
output="output",
metrics="metrics",
error="error",
parent_run_id="parent_run_id",
start_time="start_time",
end_time="end_time",
index="index",
api_calls="api_calls",
variant_id="variant_id",
cached_run_id="cached_run_id",
cached_flow_run_id="cached_flow_run_id",
logs="logs",
system_metrics="system_metrics",
result="result",
)
run_storage.persist_node_run(node_run_info)
assert queue.get() == node_run_info
def test_persist_flow_run(self):
queue = Queue()
run_storage = QueueRunStorage(queue)
flow_run_info = FlowRunInfo(
run_id="run_id",
status="status",
inputs="inputs",
output="output",
metrics="metrics",
request="request",
root_run_id="root_run_id",
source_run_id="source_run_id",
flow_id="flow_id",
error="error",
parent_run_id="parent_run_id",
start_time="start_time",
end_time="end_time",
index="index",
api_calls="api_calls",
variant_id="variant_id",
system_metrics="system_metrics",
result="result",
)
run_storage.persist_flow_run(flow_run_info)
assert queue.get() == flow_run_info
| promptflow/src/promptflow/tests/executor/unittests/storage/test_queue_run_storage.py |
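Both tests above reduce `QueueRunStorage` to the same contract: `persist_node_run` and `persist_flow_run` simply enqueue the run info for the parent process to drain. A minimal sketch (using a thread-safe `queue.Queue` to stay self-contained; the class under test is handed a `multiprocessing.Queue` by the line-execution pool):

```python
from queue import Queue

class MiniQueueRunStorage:
    """Sketch of the queue-backed storage contract tested above; not the
    real QueueRunStorage, which receives a multiprocessing.Queue."""

    def __init__(self, queue):
        self.queue = queue

    def persist_node_run(self, run_info):
        self.queue.put(run_info)  # worker side: enqueue instead of writing

    def persist_flow_run(self, run_info):
        self.queue.put(run_info)

storage = MiniQueueRunStorage(Queue())
storage.persist_node_run({"node": "node1", "status": "Completed"})
assert storage.queue.get() == {"node": "node1", "status": "Completed"}
```

Routing run infos through a queue keeps persistence in the parent process rather than in each worker, which is why the tests only need to check that whatever was persisted comes back out of the queue unchanged.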
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import copy
import json
import shutil
from logging import Logger
from pathlib import Path
from tempfile import TemporaryDirectory
from time import sleep
from typing import Callable
from unittest.mock import MagicMock, patch
import pandas as pd
import pydash
import pytest
from promptflow._sdk._constants import DownloadedRun, RunStatus
from promptflow._sdk._errors import InvalidRunError, InvalidRunStatusError, RunNotFoundError
from promptflow._sdk._load_functions import load_run
from promptflow._sdk.entities import Run
from promptflow._utils.flow_utils import get_flow_lineage_id
from promptflow._utils.yaml_utils import load_yaml
from promptflow.azure import PFClient
from promptflow.azure._constants._flow import ENVIRONMENT, PYTHON_REQUIREMENTS_TXT
from promptflow.azure._entities._flow import Flow
from promptflow.exceptions import UserErrorException
from .._azure_utils import DEFAULT_TEST_TIMEOUT, PYTEST_TIMEOUT_METHOD
from ..recording_utilities import is_live
PROMPTFLOW_ROOT = Path(__file__) / "../../../.."
TEST_ROOT = Path(__file__).parent.parent.parent
MODEL_ROOT = TEST_ROOT / "test_configs/e2e_samples"
CONNECTION_FILE = (PROMPTFLOW_ROOT / "connections.json").resolve().absolute().as_posix()
FLOWS_DIR = "./tests/test_configs/flows"
EAGER_FLOWS_DIR = "./tests/test_configs/eager_flows"
RUNS_DIR = "./tests/test_configs/runs"
DATAS_DIR = "./tests/test_configs/datas"
@pytest.mark.timeout(timeout=DEFAULT_TEST_TIMEOUT, method=PYTEST_TIMEOUT_METHOD)
@pytest.mark.e2etest
@pytest.mark.usefixtures(
"mock_set_headers_with_user_aml_token",
"single_worker_thread_pool",
"vcr_recording",
)
class TestFlowRun:
def test_run_bulk(self, pf, runtime: str, randstr: Callable[[str], str]):
name = randstr("name")
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"{DATAS_DIR}/webClassification1.jsonl",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=name,
)
assert isinstance(run, Run)
assert run.name == name
def test_run_bulk_from_yaml(self, pf, runtime: str, randstr: Callable[[str], str]):
run_id = randstr("run_id")
run = load_run(
source=f"{RUNS_DIR}/sample_bulk_run_cloud.yaml",
params_override=[{"name": run_id, "runtime": runtime}],
)
run = pf.runs.create_or_update(run=run)
assert isinstance(run, Run)
def test_basic_evaluation(self, pf, runtime: str, randstr: Callable[[str], str]):
data_path = f"{DATAS_DIR}/webClassification3.jsonl"
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=data_path,
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("batch_run_name"),
)
assert isinstance(run, Run)
run = pf.runs.stream(run=run.name)
assert run.status == RunStatus.COMPLETED
eval_run = pf.run(
flow=f"{FLOWS_DIR}/eval-classification-accuracy",
data=data_path,
run=run,
column_mapping={"groundtruth": "${data.answer}", "prediction": "${run.outputs.category}"},
runtime=runtime,
name=randstr("eval_run_name"),
)
assert isinstance(eval_run, Run)
eval_run = pf.runs.stream(run=eval_run.name)
assert eval_run.status == RunStatus.COMPLETED
def test_basic_evaluation_without_data(self, pf, runtime: str, randstr: Callable[[str], str]):
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"{DATAS_DIR}/webClassification3.jsonl",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("batch_run_name"),
)
assert isinstance(run, Run)
run = pf.runs.stream(run=run.name)
assert run.status == RunStatus.COMPLETED
eval_run = pf.run(
flow=f"{FLOWS_DIR}/eval-classification-accuracy",
run=run,
column_mapping={
                # the evaluation references run.inputs
"groundtruth": "${run.inputs.url}",
"prediction": "${run.outputs.category}",
},
runtime=runtime,
name=randstr("eval_run_name"),
)
assert isinstance(eval_run, Run)
eval_run = pf.runs.stream(run=eval_run.name)
assert eval_run.status == RunStatus.COMPLETED
def test_run_bulk_with_remote_flow(
self, pf: PFClient, runtime: str, randstr: Callable[[str], str], created_flow: Flow
):
"""Test run bulk with remote workspace flow."""
name = randstr("name")
run = pf.run(
flow=f"azureml:{created_flow.name}",
data=f"{DATAS_DIR}/simple_hello_world.jsonl",
column_mapping={"name": "${data.name}"},
runtime=runtime,
name=name,
)
assert isinstance(run, Run)
assert run.name == name
def test_run_bulk_with_registry_flow(
self, pf: PFClient, runtime: str, randstr: Callable[[str], str], registry_name: str
):
"""Test run bulk with remote registry flow."""
name = randstr("name")
run = pf.run(
flow=f"azureml://registries/{registry_name}/models/simple_hello_world/versions/202311241",
data=f"{DATAS_DIR}/simple_hello_world.jsonl",
column_mapping={"name": "${data.name}"},
runtime=runtime,
name=name,
)
assert isinstance(run, Run)
assert run.name == name
# test invalid registry flow
with pytest.raises(UserErrorException, match="Invalid remote flow pattern, got"):
pf.run(
flow="azureml://registries/no-flow",
data=f"{DATAS_DIR}/simple_hello_world.jsonl",
column_mapping={"name": "${data.name}"},
runtime=runtime,
name=name,
)
def test_run_with_connection_overwrite(self, pf, runtime: str, randstr: Callable[[str], str]):
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"{DATAS_DIR}/webClassification1.jsonl",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
connections={"classify_with_llm": {"connection": "azure_open_ai", "model": "gpt-3.5-turbo"}},
runtime=runtime,
name=randstr("name"),
)
assert isinstance(run, Run)
def test_run_with_env_overwrite(self, pf, runtime: str, randstr: Callable[[str], str]):
run = load_run(
source=f"{RUNS_DIR}/run_with_env.yaml",
params_override=[{"runtime": runtime}],
)
run.name = randstr("name")
run = pf.runs.create_or_update(run=run)
assert isinstance(run, Run)
def test_run_display_name_with_macro(self, pf, runtime: str, randstr: Callable[[str], str]):
run = load_run(
source=f"{RUNS_DIR}/run_with_env.yaml",
params_override=[{"runtime": runtime}],
)
run.name = randstr("name")
run.display_name = "my_display_name_${variant_id}_${timestamp}"
run = pf.runs.create_or_update(run=run)
assert run.display_name.startswith("my_display_name_variant_0_")
assert "${timestamp}" not in run.display_name
assert isinstance(run, Run)
def test_default_run_display_name(self, pf, runtime: str, randstr: Callable[[str], str]):
run = load_run(
source=f"{RUNS_DIR}/run_with_env.yaml",
params_override=[{"runtime": runtime}],
)
run.name = randstr("name")
run = pf.runs.create_or_update(run=run)
assert run.display_name == run.name
assert isinstance(run, Run)
def test_run_with_remote_data(
self, pf, runtime: str, remote_web_classification_data, randstr: Callable[[str], str]
):
# run with arm id
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"azureml:{remote_web_classification_data.id}",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("name1"),
)
assert isinstance(run, Run)
# run with name version
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"azureml:{remote_web_classification_data.name}:{remote_web_classification_data.version}",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("name2"),
)
assert isinstance(run, Run)
# TODO: confirm whether this test is an end-to-end test
def test_run_bulk_not_exist(self, pf, runtime: str, randstr: Callable[[str], str]):
test_data = f"{DATAS_DIR}/webClassification1.jsonl"
with pytest.raises(UserErrorException) as e:
pf.run(
flow=f"{FLOWS_DIR}/web_classification",
# data with file:/// prefix is not supported, should raise not exist error
data=f"file:///{Path(test_data).resolve().absolute()}",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("name"),
)
assert "does not exist" in str(e.value)
def test_list_runs(self, pf):
runs = pf.runs.list(max_results=10)
for run in runs:
print(json.dumps(run._to_dict(), indent=4))
assert len(runs) == 10
def test_show_run(self, pf: PFClient, created_eval_run_without_llm: Run):
run = pf.runs.get(run=created_eval_run_without_llm.name)
run_dict = run._to_dict()
print(json.dumps(run_dict, indent=4))
# it's hard to assert with precise value, so just assert existence, type and length
expected_keys = [
"name",
"created_on",
"status",
"display_name",
"description",
"tags",
"properties",
"creation_context",
"start_time",
"end_time",
"duration",
"portal_url",
"data",
"output",
"run",
]
for expected_key in expected_keys:
assert expected_key in run_dict
if expected_key == "description":
assert run_dict[expected_key] is None
elif expected_key in {"tags", "properties", "creation_context"}:
assert isinstance(run_dict[expected_key], dict)
else:
assert isinstance(run_dict[expected_key], str)
assert len(run_dict[expected_key]) > 0
def test_show_run_details(self, pf: PFClient, created_batch_run_without_llm: Run):
# get first 2 results
details = pf.get_details(run=created_batch_run_without_llm.name, max_results=2)
assert details.shape[0] == 2
# get first 10 results while it only has 3
details = pf.get_details(run=created_batch_run_without_llm.name, max_results=10)
assert details.shape[0] == 3
# get all results
details = pf.get_details(run=created_batch_run_without_llm.name, all_results=True)
assert details.shape[0] == 3
# get all results even if max_results is set to 2
details = pf.get_details(
run=created_batch_run_without_llm.name,
max_results=2,
all_results=True,
)
assert details.shape[0] == 3
def test_show_metrics(self, pf: PFClient, created_eval_run_without_llm: Run):
metrics = pf.runs.get_metrics(run=created_eval_run_without_llm.name)
print(json.dumps(metrics, indent=4))
# as we use unmatched data, we can assert the accuracy is 0
assert metrics == {"accuracy": 0.0}
def test_stream_invalid_run_logs(self, pf, randstr: Callable[[str], str]):
# test get invalid run name
non_exist_run = randstr("non_exist_run")
with pytest.raises(RunNotFoundError, match=f"Run {non_exist_run!r} not found"):
pf.runs.stream(run=non_exist_run)
def test_stream_run_logs(self, pf: PFClient, created_batch_run_without_llm: Run):
run = pf.runs.stream(run=created_batch_run_without_llm.name)
assert run.status == RunStatus.COMPLETED
def test_stream_failed_run_logs(self, pf: PFClient, created_failed_run: Run, capfd: pytest.CaptureFixture):
# (default) raise_on_error=True
with pytest.raises(InvalidRunStatusError):
pf.stream(run=created_failed_run.name)
# raise_on_error=False
pf.stream(run=created_failed_run.name, raise_on_error=False)
out, _ = capfd.readouterr()
assert "The input for batch run is incorrect. Couldn't find these mapping relations: ${data.key}" in out
def test_failed_run_to_dict_exclude(self, pf: PFClient, created_failed_run: Run):
failed_run = pf.runs.get(run=created_failed_run.name)
# the Azure run object references a dict, use deepcopy to avoid unexpected modification
default = copy.deepcopy(failed_run._to_dict())
exclude = failed_run._to_dict(exclude_additional_info=True, exclude_debug_info=True)
assert "additionalInfo" in default["error"]["error"] and "additionalInfo" not in exclude["error"]["error"]
assert "debugInfo" in default["error"]["error"] and "debugInfo" not in exclude["error"]["error"]
@pytest.mark.skipif(
condition=not is_live(),
reason="cannot differ the two requests to run history in replay mode.",
)
def test_archive_and_restore_run(self, pf: PFClient, created_batch_run_without_llm: Run):
from promptflow._sdk._constants import RunHistoryKeys
run_meta_data = RunHistoryKeys.RunMetaData
hidden = RunHistoryKeys.HIDDEN
run_id = created_batch_run_without_llm.name
# test archive
pf.runs.archive(run=run_id)
run_data = pf.runs._get_run_from_run_history(run_id, original_form=True)[run_meta_data]
assert run_data[hidden] is True
# test restore
pf.runs.restore(run=run_id)
run_data = pf.runs._get_run_from_run_history(run_id, original_form=True)[run_meta_data]
assert run_data[hidden] is False
def test_update_run(self, pf: PFClient, created_batch_run_without_llm: Run, randstr: Callable[[str], str]):
run_id = created_batch_run_without_llm.name
test_mark = randstr("test_mark")
new_display_name = f"test_display_name_{test_mark}"
new_description = f"test_description_{test_mark}"
new_tags = {"test_tag": test_mark}
run = pf.runs.update(
run=run_id,
display_name=new_display_name,
description=new_description,
tags=new_tags,
)
# sleep to wait for update to take effect
sleep(3)
assert run.display_name == new_display_name
assert run.description == new_description
assert run.tags["test_tag"] == test_mark
# test that wrongly-typed parameters won't raise an error; they just log warnings and get ignored
run = pf.runs.update(
run=run_id,
tags={"test_tag": {"a": 1}},
)
# sleep to wait for update to take effect
sleep(3)
assert run.display_name == new_display_name
assert run.description == new_description
assert run.tags["test_tag"] == test_mark
def test_cancel_run(self, pf, runtime: str, randstr: Callable[[str], str]):
# create a run
run_name = randstr("name")
pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"{DATAS_DIR}/webClassification1.jsonl",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=run_name,
)
pf.runs.cancel(run=run_name)
sleep(3)
run = pf.runs.get(run=run_name)
# the run status might still be cancel requested, but it should be canceled eventually
assert run.status in [RunStatus.CANCELED, RunStatus.CANCEL_REQUESTED]
@pytest.mark.skipif(
condition=not is_live(), reason="request uri contains temp folder name, need some time to sanitize."
)
def test_run_with_additional_includes(self, pf, runtime: str, randstr: Callable[[str], str]):
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification_with_additional_include",
data=f"{DATAS_DIR}/webClassification1.jsonl",
inputs_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("name"),
)
run = pf.runs.stream(run=run.name)
assert run.status == RunStatus.COMPLETED
# Test additional includes don't exist
with pytest.raises(ValueError) as e:
pf.run(
flow=f"{FLOWS_DIR}/web_classification_with_invalid_additional_include",
data=f"{DATAS_DIR}/webClassification1.jsonl",
inputs_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("name_invalid"),
)
assert "Unable to find additional include ../invalid/file/path" in str(e.value)
@pytest.mark.skip(reason="Cannot find tools of the flow with symbolic.")
def test_run_with_symbolic(self, remote_client, pf, runtime, prepare_symbolic_flow):
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification_with_symbolic",
data=f"{DATAS_DIR}/webClassification1.jsonl",
inputs_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
)
remote_client.runs.stream(run=run.name)
def test_run_bulk_without_retry(self, remote_client):
from azure.core.exceptions import ServiceResponseError
from azure.core.pipeline.transport._requests_basic import RequestsTransport
from azure.core.rest._requests_basic import RestRequestsTransportResponse
from requests import Response
from promptflow.azure._restclient.flow.models import SubmitBulkRunRequest
from promptflow.azure._restclient.flow_service_caller import FlowRequestException, FlowServiceCaller
from promptflow.azure.operations import RunOperations
mock_run = MagicMock()
mock_run._runtime = "fake_runtime"
mock_run._to_rest_object.return_value = SubmitBulkRunRequest()
mock_run._use_remote_flow = False
with patch.object(RunOperations, "_resolve_data_to_asset_id"), patch.object(RunOperations, "_resolve_flow"):
with patch.object(RequestsTransport, "send") as mock_request, patch.object(
FlowServiceCaller, "_set_headers_with_user_aml_token"
):
mock_request.side_effect = ServiceResponseError(
"Connection aborted.",
error=ConnectionResetError(10054, "An existing connection was forcibly closed", None, 10054, None),
)
with pytest.raises(ServiceResponseError):
remote_client.runs.create_or_update(run=mock_run)
# won't retry connection error since POST without response code is not retryable according to
# retry policy
assert mock_request.call_count == 1
with patch.object(RunOperations, "_resolve_data_to_asset_id"), patch.object(RunOperations, "_resolve_flow"):
with patch.object(RequestsTransport, "send") as mock_request, patch.object(
FlowServiceCaller, "_set_headers_with_user_aml_token"
):
fake_response = Response()
# won't retry 500
fake_response.status_code = 500
fake_response._content = b'{"error": "error"}'
fake_response._content_consumed = True
mock_request.return_value = RestRequestsTransportResponse(
request=None,
internal_response=fake_response,
)
with pytest.raises(FlowRequestException):
remote_client.runs.create_or_update(run=mock_run)
assert mock_request.call_count == 1
with patch.object(RunOperations, "_resolve_data_to_asset_id"), patch.object(RunOperations, "_resolve_flow"):
with patch.object(RequestsTransport, "send") as mock_request, patch.object(
FlowServiceCaller, "_set_headers_with_user_aml_token"
):
fake_response = Response()
# will retry 503
fake_response.status_code = 503
fake_response._content = b'{"error": "error"}'
fake_response._content_consumed = True
mock_request.return_value = RestRequestsTransportResponse(
request=None,
internal_response=fake_response,
)
with pytest.raises(FlowRequestException):
remote_client.runs.create_or_update(run=mock_run)
assert mock_request.call_count == 4
def test_pf_run_with_env_var(self, pf, randstr: Callable[[str], str]):
from promptflow.azure.operations import RunOperations
def create_or_update(run, **kwargs):
# make run.flow a datastore path uri, so that it can be parsed by AzureMLDatastorePathUri
run.flow = "azureml://datastores/workspaceblobstore/paths/LocalUpload/not/important/path"
return run
with patch.object(RunOperations, "create_or_update") as mock_create_or_update:
mock_create_or_update.side_effect = create_or_update
env_var = {"API_BASE": "${azure_open_ai_connection.api_base}"}
run = pf.run(
flow=f"{FLOWS_DIR}/print_env_var",
data=f"{DATAS_DIR}/env_var_names.jsonl",
environment_variables=env_var,
name=randstr("name"),
)
assert run._to_rest_object().environment_variables == env_var
def test_automatic_runtime(self, pf, randstr: Callable[[str], str]):
from promptflow.azure._restclient.flow_service_caller import FlowServiceCaller
from promptflow.azure.operations import RunOperations
def submit(*args, **kwargs):
body = kwargs.get("body", None)
assert body.runtime_name == "automatic"
assert body.vm_size is None
assert body.max_idle_time_seconds is None
return body
with patch.object(FlowServiceCaller, "submit_bulk_run") as mock_submit, patch.object(RunOperations, "get"):
mock_submit.side_effect = submit
# no runtime provided, will use automatic runtime
pf.run(
flow=f"{FLOWS_DIR}/print_env_var",
data=f"{DATAS_DIR}/env_var_names.jsonl",
name=randstr("name1"),
)
with patch.object(FlowServiceCaller, "submit_bulk_run") as mock_submit, patch.object(RunOperations, "get"):
mock_submit.side_effect = submit
# automatic is a reserved runtime name, will use automatic runtime if specified.
pf.run(
flow=f"{FLOWS_DIR}/print_env_var",
data=f"{DATAS_DIR}/env_var_names.jsonl",
runtime="automatic",
name=randstr("name2"),
)
def test_automatic_runtime_with_resources(self, pf, randstr: Callable[[str], str]):
from promptflow.azure._restclient.flow.models import SessionSetupModeEnum
source = f"{RUNS_DIR}/sample_bulk_run_with_resources.yaml"
run_id = randstr("run_id")
run = load_run(
source=source,
params_override=[{"name": run_id}],
)
rest_run = run._to_rest_object()
assert rest_run.vm_size == "Standard_D2"
assert rest_run.max_idle_time_seconds == 3600
assert rest_run.session_setup_mode == SessionSetupModeEnum.SYSTEM_WAIT
run = pf.runs.create_or_update(run=run)
assert isinstance(run, Run)
def test_run_data_not_provided(self, pf, randstr: Callable[[str], str]):
with pytest.raises(UserErrorException) as e:
pf.run(
flow=f"{FLOWS_DIR}/web_classification",
name=randstr("name"),
)
assert "at least one of data or run must be provided" in str(e)
def test_run_without_dump(self, pf, runtime: str, randstr: Callable[[str], str]) -> None:
from promptflow._sdk._errors import RunNotFoundError
from promptflow._sdk._orm.run_info import RunInfo
run = pf.run(
flow=f"{FLOWS_DIR}/web_classification",
data=f"{DATAS_DIR}/webClassification1.jsonl",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}",
runtime=runtime,
name=randstr("name"),
)
# cloud run should not dump to database
with pytest.raises(RunNotFoundError):
RunInfo.get(run.name)
def test_input_mapping_with_dict(self, pf, runtime: str, randstr: Callable[[str], str]):
data_path = f"{DATAS_DIR}/webClassification3.jsonl"
run = pf.run(
flow=f"{FLOWS_DIR}/flow_with_dict_input",
data=data_path,
column_mapping=dict(key={"a": 1}, extra="${data.url}"),
runtime=runtime,
name=randstr("name"),
)
assert '"{\\"a\\": 1}"' in run.properties["azureml.promptflow.inputs_mapping"]
run = pf.runs.stream(run=run)
assert run.status == "Completed"
def test_get_invalid_run_cases(self, pf, randstr: Callable[[str], str]):
# test get invalid run type
with pytest.raises(InvalidRunError, match="expected 'str' or 'Run' object"):
pf.runs.get(run=object())
# test get invalid run name
non_exist_run = randstr("non_exist_run")
with pytest.raises(RunNotFoundError, match=f"Run {non_exist_run!r} not found"):
pf.runs.get(run=non_exist_run)
# TODO: need to confirm whether this is an end-to-end test
def test_exp_id(self):
with TemporaryDirectory() as tmp_dir:
shutil.copytree(f"{FLOWS_DIR}/flow_with_dict_input", f"{tmp_dir}/flow dir with space")
run = Run(
flow=Path(f"{tmp_dir}/flow dir with space"),
data=f"{DATAS_DIR}/webClassification3.jsonl",
)
rest_run = run._to_rest_object()
assert rest_run.run_experiment_name == "flow_dir_with_space"
shutil.copytree(f"{FLOWS_DIR}/flow_with_dict_input", f"{tmp_dir}/flow-dir-with-dash")
run = Run(
flow=Path(f"{tmp_dir}/flow-dir-with-dash"),
data=f"{DATAS_DIR}/webClassification3.jsonl",
)
rest_run = run._to_rest_object()
assert rest_run.run_experiment_name == "flow_dir_with_dash"
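The experiment-name assertions above imply a simple normalization rule: spaces and dashes in the flow folder name become underscores. A minimal sketch of that rule (the regex is an assumption for illustration, not the actual promptflow implementation):

```python
import re


def to_experiment_name(folder_name: str) -> str:
    # Replace every character that is not alphanumeric or underscore.
    return re.sub(r"[^0-9a-zA-Z_]", "_", folder_name)


# to_experiment_name("flow dir with space") -> "flow_dir_with_space"
# to_experiment_name("flow-dir-with-dash") -> "flow_dir_with_dash"
```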
def test_tools_json_ignored(self, pf, randstr: Callable[[str], str]):
from azure.ai.ml._artifacts._blob_storage_helper import BlobStorageClient
from promptflow.azure._restclient.flow_service_caller import FlowServiceCaller
from promptflow.azure.operations import RunOperations
files_to_upload = []
def fake_upload_file(storage_client, source: str, dest, *args, **kwargs):
files_to_upload.append(source)
storage_client.uploaded_file_count += 1
with patch("azure.ai.ml._utils._asset_utils.upload_file") as mock_upload_file, patch.object(
FlowServiceCaller, "submit_bulk_run"
), patch.object(BlobStorageClient, "_set_confirmation_metadata"), patch.object(RunOperations, "get"):
mock_upload_file.side_effect = fake_upload_file
data_path = f"{DATAS_DIR}/webClassification3.jsonl"
pf.run(
flow=f"{FLOWS_DIR}/flow_with_dict_input",
data=data_path,
column_mapping={"key": {"value": "1"}, "url": "${data.url}"},
runtime="fake_runtime",
name=randstr("name"),
)
# make sure .promptflow/flow.tools.json not uploaded
for f in files_to_upload:
if ".promptflow/flow.tools.json" in f:
raise Exception(f"flow.tools.json should not be uploaded, got {f}")
def test_flow_id_in_submission(self, pf, runtime: str, randstr: Callable[[str], str]):
from promptflow.azure._restclient.flow_service_caller import FlowServiceCaller
from promptflow.azure.operations import RunOperations
flow_path = f"{FLOWS_DIR}/print_env_var"
flow_lineage_id = get_flow_lineage_id(flow_path)
flow_session_id = pf._runs._get_session_id(flow_path)
def submit(*args, **kwargs):
body = kwargs.get("body", None)
assert flow_session_id == body.session_id
assert flow_lineage_id == body.flow_lineage_id
return body
# the flow session id is the same with or without session creation
with patch.object(FlowServiceCaller, "submit_bulk_run") as mock_submit, patch.object(
RunOperations, "get"
), patch.object(FlowServiceCaller, "create_flow_session"):
mock_submit.side_effect = submit
pf.run(
flow=flow_path,
data=f"{DATAS_DIR}/env_var_names.jsonl",
runtime=runtime,
name=randstr("name1"),
)
with patch.object(FlowServiceCaller, "submit_bulk_run") as mock_submit, patch.object(
RunOperations, "get"
), patch.object(FlowServiceCaller, "create_flow_session"):
mock_submit.side_effect = submit
# no runtime provided, will use automatic runtime
pf.run(
flow=flow_path,
data=f"{DATAS_DIR}/env_var_names.jsonl",
name=randstr("name2"),
)
def test_run_submission_exception(self, pf):
from azure.core.exceptions import HttpResponseError
from promptflow.azure._restclient.flow.operations import BulkRunsOperations
from promptflow.azure._restclient.flow_service_caller import FlowRequestException, FlowServiceCaller
def fake_submit(*args, **kwargs):
headers = kwargs.get("headers", None)
request_id_in_headers = headers["x-ms-client-request-id"]
# the request id in the headers should be the same as the request id in the service caller
assert request_id_in_headers == pf.runs._service_caller._request_id
raise HttpResponseError("customized error message.")
with patch.object(BulkRunsOperations, "submit_bulk_run") as mock_request, patch.object(
FlowServiceCaller, "_set_headers_with_user_aml_token"
):
mock_request.side_effect = fake_submit
with pytest.raises(FlowRequestException) as e:
original_request_id = pf.runs._service_caller._request_id
pf.runs._service_caller.submit_bulk_run(
subscription_id="fake_subscription_id",
resource_group_name="fake_resource_group",
workspace_name="fake_workspace_name",
)
# request id has been updated
assert original_request_id != pf.runs._service_caller._request_id
# original error message should be included in FlowRequestException
assert "customized error message" in str(e.value)
# request id should be included in FlowRequestException
assert f"request id: {pf.runs._service_caller._request_id}" in str(e.value)
def test_get_detail_against_partial_fail_run(self, pf, runtime: str, randstr: Callable[[str], str]) -> None:
run = pf.run(
flow=f"{FLOWS_DIR}/partial_fail",
data=f"{FLOWS_DIR}/partial_fail/data.jsonl",
runtime=runtime,
name=randstr("name"),
)
pf.runs.stream(run=run.name)
detail = pf.get_details(run=run.name)
assert len(detail) == 3
# TODO: this seems to be a unit test rather than an e2e test...
def test_vnext_workspace_base_url(self):
from promptflow.azure._restclient.service_caller_factory import _FlowServiceCallerFactory
mock_workspace = MagicMock()
mock_workspace.discovery_url = "https://promptflow.azure-api.net/discovery/workspaces/fake_workspace_id"
service_caller = _FlowServiceCallerFactory.get_instance(
workspace=mock_workspace, credential=MagicMock(), operation_scope=MagicMock()
)
assert service_caller.caller._client._base_url == "https://promptflow.azure-api.net/"
@pytest.mark.usefixtures("mock_isinstance_for_mock_datastore")
def test_download_run(self, pf: PFClient, created_batch_run_without_llm: Run):
expected_files = [
DownloadedRun.RUN_METADATA_FILE_NAME,
DownloadedRun.LOGS_FILE_NAME,
DownloadedRun.METRICS_FILE_NAME,
f"{DownloadedRun.SNAPSHOT_FOLDER}/flow.dag.yaml",
]
with TemporaryDirectory() as tmp_dir:
pf.runs.download(run=created_batch_run_without_llm.name, output=tmp_dir)
for file in expected_files:
assert Path(tmp_dir, created_batch_run_without_llm.name, file).exists()
def test_request_id_when_making_http_requests(self, pf, runtime: str, randstr: Callable[[str], str]):
from azure.core.exceptions import HttpResponseError
from promptflow.azure._restclient.flow.operations import BulkRunsOperations
from promptflow.azure._restclient.flow_service_caller import FlowRequestException
request_ids = set()
def fake_submit(*args, **kwargs):
headers = kwargs.get("headers", None)
request_id_in_headers = headers["x-ms-client-request-id"]
# the request id in the headers should be the same as the request id in the service caller
assert request_id_in_headers == pf.runs._service_caller._request_id
# the request id in the request is the same as the one in the collected logs
assert request_id_in_headers in request_ids
raise HttpResponseError("customized error message.")
def check_inner_call(*args, **kwargs):
if "extra" in kwargs:
request_id = pydash.get(kwargs, "extra.custom_dimensions.request_id")
request_ids.add(request_id)
with patch.object(BulkRunsOperations, "submit_bulk_run") as mock_request, patch.object(
Logger, "info"
) as mock_logger:
mock_logger.side_effect = check_inner_call
mock_request.side_effect = fake_submit
with pytest.raises(FlowRequestException) as e:
pf.run(
flow=f"{FLOWS_DIR}/print_env_var",
data=f"{DATAS_DIR}/env_var_names.jsonl",
runtime=runtime,
name=randstr("name1"),
)
# the request id in the service caller is the same as the one in the collected logs
assert pf.runs._service_caller._request_id in request_ids
# only 1 request id generated in logs
assert len(request_ids) == 1
# request id should be included in FlowRequestException
assert f"request id: {pf.runs._service_caller._request_id}" in str(e.value)
old_request_id = request_ids.pop()
with pytest.raises(FlowRequestException) as e:
pf.run(
flow=f"{FLOWS_DIR}/print_env_var",
data=f"{DATAS_DIR}/env_var_names.jsonl",
runtime=runtime,
name=randstr("name1"),
)
# the request id in the service caller is the same as the one in the collected logs
assert pf.runs._service_caller._request_id in request_ids
# the request id is not the same as before
assert old_request_id not in request_ids
# only 1 request id generated in logs
assert len(request_ids) == 1
# request id should be included in FlowRequestException
assert f"request id: {pf.runs._service_caller._request_id}" in str(e.value)
def test_get_details_against_partial_completed_run(
self, pf: PFClient, runtime: str, randstr: Callable[[str], str]
) -> None:
flow_mod2 = f"{FLOWS_DIR}/mod-n/two"
flow_mod3 = f"{FLOWS_DIR}/mod-n/three"
data_path = f"{DATAS_DIR}/numbers.jsonl"
# batch run against data
run1 = pf.run(
flow=flow_mod2,
data=data_path,
column_mapping={"number": "${data.value}"},
runtime=runtime,
name=randstr("run1"),
)
pf.runs.stream(run1)
details1 = pf.get_details(run1)
assert len(details1) == 20
assert len(details1[details1["outputs.output"].notnull()]) == 10
# assert to ensure inputs and outputs are aligned
for _, row in details1.iterrows():
if pd.notnull(row["outputs.output"]):
assert int(row["inputs.number"]) == int(row["outputs.output"])
# batch run against previous run
run2 = pf.run(
flow=flow_mod3,
run=run1,
column_mapping={"number": "${run.outputs.output}"},
runtime=runtime,
name=randstr("run2"),
)
pf.runs.stream(run2)
details2 = pf.get_details(run2)
assert len(details2) == 10
assert len(details2[details2["outputs.output"].notnull()]) == 4
# assert to ensure inputs and outputs are aligned
for _, row in details2.iterrows():
if pd.notnull(row["outputs.output"]):
assert int(row["inputs.number"]) == int(row["outputs.output"])
@pytest.mark.usefixtures("mock_isinstance_for_mock_datastore")
def test_auto_resolve_requirements(self, pf: PFClient, randstr: Callable[[str], str]):
# requirements.txt, if it exists, will be added to flow.dag.yaml when submitting the run.
with TemporaryDirectory() as temp:
temp = Path(temp)
shutil.copytree(f"{FLOWS_DIR}/flow_with_requirements_txt", temp / "flow_with_requirements_txt")
run = pf.run(
flow=temp / "flow_with_requirements_txt",
data=f"{DATAS_DIR}/env_var_names.jsonl",
name=randstr("name"),
)
pf.runs.stream(run)
pf.runs.download(run=run.name, output=temp)
flow_dag = load_yaml(Path(temp, run.name, "snapshot/flow.dag.yaml"))
assert "requirements.txt" in flow_dag[ENVIRONMENT][PYTHON_REQUIREMENTS_TXT]
local_flow_dag = load_yaml(f"{FLOWS_DIR}/flow_with_requirements_txt/flow.dag.yaml")
assert "environment" not in local_flow_dag
@pytest.mark.usefixtures("mock_isinstance_for_mock_datastore")
def test_requirements_in_additional_includes(self, pf: PFClient, randstr: Callable[[str], str]):
run = pf.run(
flow=f"{FLOWS_DIR}/flow_with_additional_include_req",
data=f"{DATAS_DIR}/env_var_names.jsonl",
name=randstr("name"),
)
run = pf.runs.stream(run)
assert run._error is None
with TemporaryDirectory() as temp:
pf.runs.download(run=run.name, output=temp)
assert Path(temp, run.name, "snapshot/requirements").exists()
@pytest.mark.skipif(
condition=is_live(),
reason="removed requirement.txt to avoid compliance check.",
)
def test_eager_flow_crud(self, pf: PFClient, randstr: Callable[[str], str], simple_eager_run: Run):
run = simple_eager_run
run = pf.runs.get(run)
assert run.status == RunStatus.COMPLETED
details = pf.runs.get_details(run)
assert details.shape[0] == 1
metrics = pf.runs.get_metrics(run)
assert metrics == {}
# TODO(2917923): cannot differentiate the two requests to run history in replay mode.
# run_meta_data = RunHistoryKeys.RunMetaData
# hidden = RunHistoryKeys.HIDDEN
# run_id = run.name
# # test archive
# pf.runs.archive(run=run_id)
# run_data = pf.runs._get_run_from_run_history(run_id, original_form=True)[run_meta_data]
# assert run_data[hidden] is True
#
# # test restore
# pf.runs.restore(run=run_id)
# run_data = pf.runs._get_run_from_run_history(run_id, original_form=True)[run_meta_data]
# assert run_data[hidden] is False
@pytest.mark.skipif(
condition=is_live(),
reason="removed requirement.txt to avoid compliance check.",
)
def test_eager_flow_cancel(self, pf: PFClient, randstr: Callable[[str], str]):
"""Test cancel eager flow."""
# create a run
run_name = randstr("name")
pf.run(
flow=f"{EAGER_FLOWS_DIR}/long_running",
data=f"{DATAS_DIR}/simple_eager_flow_data.jsonl",
name=run_name,
)
pf.runs.cancel(run=run_name)
sleep(3)
run = pf.runs.get(run=run_name)
# the run status might still be cancel requested, but it should be canceled eventually
assert run.status in [RunStatus.CANCELED, RunStatus.CANCEL_REQUESTED]
@pytest.mark.skipif(
condition=is_live(),
reason="removed requirement.txt to avoid compliance check.",
)
@pytest.mark.usefixtures("mock_isinstance_for_mock_datastore")
def test_eager_flow_download(self, pf: PFClient, simple_eager_run: Run):
run = simple_eager_run
expected_files = [
DownloadedRun.RUN_METADATA_FILE_NAME,
DownloadedRun.LOGS_FILE_NAME,
DownloadedRun.METRICS_FILE_NAME,
f"{DownloadedRun.SNAPSHOT_FOLDER}/flow.dag.yaml",
]
# test download
with TemporaryDirectory() as tmp_dir:
pf.runs.download(run=run.name, output=tmp_dir)
for file in expected_files:
assert Path(tmp_dir, run.name, file).exists()
| promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_run_operations.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_run_operations.py",
"repo_id": "promptflow",
"token_count": 19965
} | 60 |
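`test_run_bulk_without_retry` above exercises an azure-core-style retry policy: a 500 response is not retried (1 call), while a 503 is retried until the budget is exhausted (1 initial attempt + 3 retries = 4 calls). A minimal sketch of that policy; the retriable status set and retry budget are assumptions mirroring azure-core defaults, not promptflow code:

```python
def send_with_retry(send, retriable=frozenset({408, 429, 503}), max_retries=3):
    """Call send() until a non-retriable status or the retry budget is spent."""
    attempts = 0
    while True:
        attempts += 1
        status = send()
        if status not in retriable or attempts > max_retries:
            return status, attempts


# A persistent 503 consumes the whole budget: 4 attempts total.
# A 500 returns immediately after a single attempt.
```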
import pytest
from promptflow._sdk._errors import RunOperationParameterError
@pytest.mark.unittest
class TestPFClientAzure:
def test_wrong_client_parameters(self):
from promptflow.azure import PFClient
# test wrong client parameters
with pytest.raises(RunOperationParameterError, match="You have passed in the wrong parameter name"):
PFClient(
subscription_id="fake_subscription_id",
resource_group="fake_resource_group",
workspace_name="fake_workspace_name",
)
| promptflow/src/promptflow/tests/sdk_cli_azure_test/unittests/test_pf_client_azure.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/unittests/test_pf_client_azure.py",
"repo_id": "promptflow",
"token_count": 224
} | 61 |
import subprocess
import sys
import tempfile
from pathlib import Path
import mock
import pytest
from .test_cli import run_pf_command
FLOWS_DIR = "./tests/test_configs/flows"
RUNS_DIR = "./tests/test_configs/runs"
CONNECTIONS_DIR = "./tests/test_configs/connections"
DATAS_DIR = "./tests/test_configs/datas"
@pytest.mark.usefixtures("use_secrets_config_file", "setup_local_connection", "install_custom_tool_pkg")
@pytest.mark.cli_test
@pytest.mark.e2etest
class TestExecutable:
@pytest.mark.skipif(
sys.platform == "win32" or sys.platform == "darwin",
reason="Raise Exception: Process terminated with exit code 4294967295",
)
def test_flow_build_executable(self):
source = f"{FLOWS_DIR}/web_classification/flow.dag.yaml"
target = "promptflow._sdk.operations._flow_operations.FlowOperations._run_pyinstaller"
with mock.patch(target) as mocked:
mocked.return_value = None
with tempfile.TemporaryDirectory() as temp_dir:
run_pf_command(
"flow",
"build",
"--source",
source,
"--output",
temp_dir,
"--format",
"executable",
)
# Start the Python script as a subprocess
app_file = Path(temp_dir, "app.py").as_posix()
process = subprocess.Popen(["python", app_file], stderr=subprocess.PIPE)
try:
# Wait for a specified time (in seconds)
wait_time = 5
process.wait(timeout=wait_time)
if process.returncode == 0:
pass
else:
raise Exception(
f"Process terminated with exit code {process.returncode}, "
f"{process.stderr.read().decode('utf-8')}"
)
except (subprocess.TimeoutExpired, KeyboardInterrupt):
pass
finally:
# Kill the process
process.terminate()
process.wait() # Ensure the process is fully terminated
| promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_executable.py/0 | {
"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_executable.py",
"repo_id": "promptflow",
"token_count": 1189
} | 62 |
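The executable smoke test above uses a common pattern: start the built app as a subprocess, wait a few seconds, and treat either a clean exit or a timeout (the server is still running) as success. A standalone sketch of that pattern, using a trivial child script instead of the built app:

```python
import subprocess
import sys

# Spawn a short-lived child process (stand-in for the built app.py).
proc = subprocess.Popen([sys.executable, "-c", "print('ok')"], stderr=subprocess.PIPE)
try:
    proc.wait(timeout=5)          # completed within the window: check its exit code
    exit_code = proc.returncode
except subprocess.TimeoutExpired:
    exit_code = 0                 # still running after the wait window counts as healthy
finally:
    proc.terminate()
    proc.wait()                   # ensure the process is fully reaped
```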
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from pathlib import Path

import pytest

from promptflow._sdk._configuration import Configuration, InvalidConfigValue
from promptflow._sdk._constants import FLOW_DIRECTORY_MACRO_IN_CONFIG
from promptflow._sdk._utils import ClientUserAgentUtil

CONFIG_DATA_ROOT = Path(__file__).parent.parent.parent / "test_configs" / "configs"


@pytest.fixture
def config():
    return Configuration.get_instance()


@pytest.mark.unittest
class TestConfig:
    def test_set_config(self, config):
        config.set_config("a.b.c.test_key", "test_value")
        assert config.get_config("a.b.c.test_key") == "test_value"
        # global config may contain other keys
        assert config.config["a"] == {"b": {"c": {"test_key": "test_value"}}}

    def test_get_config(self, config):
        config.set_config("test_key", "test_value")
        assert config.get_config("test_key") == "test_value"

    def test_get_or_set_installation_id(self, config):
        user_id = config.get_or_set_installation_id()
        assert user_id is not None

    def test_config_instance(self, config):
        new_config = Configuration.get_instance()
        assert new_config is config

    def test_set_invalid_run_output_path(self, config: Configuration) -> None:
        expected_error_message = (
            "Cannot specify flow directory as run output path; "
            "if you want to specify run output path under flow directory, "
            "please use its child folder, e.g. '${flow_directory}/.runs'."
        )
        # directly set
        with pytest.raises(InvalidConfigValue) as e:
            config.set_config(key=Configuration.RUN_OUTPUT_PATH, value=FLOW_DIRECTORY_MACRO_IN_CONFIG)
        assert expected_error_message in str(e)
        # override
        with pytest.raises(InvalidConfigValue) as e:
            Configuration(overrides={Configuration.RUN_OUTPUT_PATH: FLOW_DIRECTORY_MACRO_IN_CONFIG})
        assert expected_error_message in str(e)

    def test_ua_set_load(self, config: Configuration) -> None:
        config.set_config(key=Configuration.USER_AGENT, value="test/1.0.0")
        user_agent = config.get_user_agent()
        assert user_agent == "PFCustomer_test/1.0.0"
        # load empty ua won't break
        config.set_config(key=Configuration.USER_AGENT, value="")
        user_agent = config.get_user_agent()
        assert user_agent == ""
        # empty ua won't add to context
        ClientUserAgentUtil.update_user_agent_from_config()
        user_agent = ClientUserAgentUtil.get_user_agent()
        # in test environment, user agent may contain promptflow-local-serving/0.0.1 test-user-agent
        assert "test/1.0.0" not in user_agent
 | promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_config.py/0 | {"file_path": "promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_config.py", "repo_id": "promptflow", "token_count": 1086} | 63 |
{"text":"data_0000"}
{"text":"data_0001"}
{"text":"data_0002"}
{"text":"data_0003"}
{"text":"data_0004"}
{"text":"data_0005"}
{"text":"data_0006"}
{"text":"data_0007"}
{"text":"data_0008"}
{"text":"data_0009"}
{"text":"data_0010"}
{"text":"data_0011"}
{"text":"data_0012"}
{"text":"data_0013"}
{"text":"data_0014"}
{"text":"data_0015"}
{"text":"data_0016"}
{"text":"data_0017"}
{"text":"data_0018"}
{"text":"data_0019"}
{"text":"data_0020"}
{"text":"data_0021"}
{"text":"data_0022"}
{"text":"data_0023"}
{"text":"data_0024"}
{"text":"data_0025"}
{"text":"data_0026"}
{"text":"data_0027"}
{"text":"data_0028"}
{"text":"data_0029"}
{"text":"data_0030"}
{"text":"data_0031"}
{"text":"data_0032"}
{"text":"data_0033"}
{"text":"data_0034"}
{"text":"data_0035"}
{"text":"data_0036"}
{"text":"data_0037"}
{"text":"data_0038"}
{"text":"data_0039"}
{"text":"data_0040"}
{"text":"data_0041"}
{"text":"data_0042"}
{"text":"data_0043"}
{"text":"data_0044"}
{"text":"data_0045"}
{"text":"data_0046"}
{"text":"data_0047"}
{"text":"data_0048"}
{"text":"data_0049"}
{"text":"data_0050"}
{"text":"data_0051"}
{"text":"data_0052"}
{"text":"data_0053"}
{"text":"data_0054"}
{"text":"data_0055"}
{"text":"data_0056"}
{"text":"data_0057"}
{"text":"data_0058"}
{"text":"data_0059"}
{"text":"data_0060"}
{"text":"data_0061"}
{"text":"data_0062"}
{"text":"data_0063"}
{"text":"data_0064"}
{"text":"data_0065"}
{"text":"data_0066"}
{"text":"data_0067"}
{"text":"data_0068"}
{"text":"data_0069"}
{"text":"data_0070"}
{"text":"data_0071"}
{"text":"data_0072"}
{"text":"data_0073"}
{"text":"data_0074"}
{"text":"data_0075"}
{"text":"data_0076"}
{"text":"data_0077"}
{"text":"data_0078"}
{"text":"data_0079"}
{"text":"data_0080"}
{"text":"data_0081"}
{"text":"data_0082"}
{"text":"data_0083"}
{"text":"data_0084"}
{"text":"data_0085"}
{"text":"data_0086"}
{"text":"data_0087"}
{"text":"data_0088"}
{"text":"data_0089"}
{"text":"data_0090"}
{"text":"data_0091"}
{"text":"data_0092"}
{"text":"data_0093"}
{"text":"data_0094"}
{"text":"data_0095"}
{"text":"data_0096"}
{"text":"data_0097"}
{"text":"data_0098"}
{"text":"data_0099"}
{"text":"data_0100"}
{"text":"data_0101"}
{"text":"data_0102"}
{"text":"data_0103"}
{"text":"data_0104"}
{"text":"data_0105"}
{"text":"data_0106"}
{"text":"data_0107"}
{"text":"data_0108"}
{"text":"data_0109"}
{"text":"data_0110"}
{"text":"data_0111"}
{"text":"data_0112"}
{"text":"data_0113"}
{"text":"data_0114"}
{"text":"data_0115"}
{"text":"data_0116"}
{"text":"data_0117"}
{"text":"data_0118"}
{"text":"data_0119"}
{"text":"data_0120"}
{"text":"data_0121"}
{"text":"data_0122"}
{"text":"data_0123"}
{"text":"data_0124"}
{"text":"data_0125"}
{"text":"data_0126"}
{"text":"data_0127"}
{"text":"data_0128"}
{"text":"data_0129"}
{"text":"data_0130"}
{"text":"data_0131"}
{"text":"data_0132"}
{"text":"data_0133"}
{"text":"data_0134"}
{"text":"data_0135"}
{"text":"data_0136"}
{"text":"data_0137"}
{"text":"data_0138"}
{"text":"data_0139"}
{"text":"data_0140"}
{"text":"data_0141"}
{"text":"data_0142"}
{"text":"data_0143"}
{"text":"data_0144"}
{"text":"data_0145"}
{"text":"data_0146"}
{"text":"data_0147"}
{"text":"data_0148"}
{"text":"data_0149"}
{"text":"data_0150"}
{"text":"data_0151"}
{"text":"data_0152"}
{"text":"data_0153"}
{"text":"data_0154"}
{"text":"data_0155"}
{"text":"data_0156"}
{"text":"data_0157"}
{"text":"data_0158"}
{"text":"data_0159"}
{"text":"data_0160"}
{"text":"data_0161"}
{"text":"data_0162"}
{"text":"data_0163"}
{"text":"data_0164"}
{"text":"data_0165"}
{"text":"data_0166"}
{"text":"data_0167"}
{"text":"data_0168"}
{"text":"data_0169"}
{"text":"data_0170"}
{"text":"data_0171"}
{"text":"data_0172"}
{"text":"data_0173"}
{"text":"data_0174"}
{"text":"data_0175"}
{"text":"data_0176"}
{"text":"data_0177"}
{"text":"data_0178"}
{"text":"data_0179"}
{"text":"data_0180"}
{"text":"data_0181"}
{"text":"data_0182"}
{"text":"data_0183"}
{"text":"data_0184"}
{"text":"data_0185"}
{"text":"data_0186"}
{"text":"data_0187"}
{"text":"data_0188"}
{"text":"data_0189"}
{"text":"data_0190"}
{"text":"data_0191"}
{"text":"data_0192"}
{"text":"data_0193"}
{"text":"data_0194"}
{"text":"data_0195"}
{"text":"data_0196"}
{"text":"data_0197"}
{"text":"data_0198"}
{"text":"data_0199"}
{"text":"data_0200"}
{"text":"data_0201"}
{"text":"data_0202"}
{"text":"data_0203"}
{"text":"data_0204"}
{"text":"data_0205"}
{"text":"data_0206"}
{"text":"data_0207"}
{"text":"data_0208"}
{"text":"data_0209"}
{"text":"data_0210"}
{"text":"data_0211"}
{"text":"data_0212"}
{"text":"data_0213"}
{"text":"data_0214"}
{"text":"data_0215"}
{"text":"data_0216"}
{"text":"data_0217"}
{"text":"data_0218"}
{"text":"data_0219"}
{"text":"data_0220"}
{"text":"data_0221"}
{"text":"data_0222"}
{"text":"data_0223"}
{"text":"data_0224"}
{"text":"data_0225"}
{"text":"data_0226"}
{"text":"data_0227"}
{"text":"data_0228"}
{"text":"data_0229"}
{"text":"data_0230"}
{"text":"data_0231"}
{"text":"data_0232"}
{"text":"data_0233"}
{"text":"data_0234"}
{"text":"data_0235"}
{"text":"data_0236"}
{"text":"data_0237"}
{"text":"data_0238"}
{"text":"data_0239"}
{"text":"data_0240"}
{"text":"data_0241"}
{"text":"data_0242"}
{"text":"data_0243"}
{"text":"data_0244"}
{"text":"data_0245"}
{"text":"data_0246"}
{"text":"data_0247"}
{"text":"data_0248"}
{"text":"data_0249"}
{"text":"data_0250"}
{"text":"data_0251"}
{"text":"data_0252"}
{"text":"data_0253"}
{"text":"data_0254"}
{"text":"data_0255"}
{"text":"data_0256"}
{"text":"data_0257"}
{"text":"data_0258"}
{"text":"data_0259"}
{"text":"data_0260"}
{"text":"data_0261"}
{"text":"data_0262"}
{"text":"data_0263"}
{"text":"data_0264"}
{"text":"data_0265"}
{"text":"data_0266"}
{"text":"data_0267"}
{"text":"data_0268"}
{"text":"data_0269"}
{"text":"data_0270"}
{"text":"data_0271"}
{"text":"data_0272"}
{"text":"data_0273"}
{"text":"data_0274"}
{"text":"data_0275"}
{"text":"data_0276"}
{"text":"data_0277"}
{"text":"data_0278"}
{"text":"data_0279"}
{"text":"data_0280"}
{"text":"data_0281"}
{"text":"data_0282"}
{"text":"data_0283"}
{"text":"data_0284"}
{"text":"data_0285"}
{"text":"data_0286"}
{"text":"data_0287"}
{"text":"data_0288"}
{"text":"data_0289"}
{"text":"data_0290"}
{"text":"data_0291"}
{"text":"data_0292"}
{"text":"data_0293"}
{"text":"data_0294"}
{"text":"data_0295"}
{"text":"data_0296"}
{"text":"data_0297"}
{"text":"data_0298"}
{"text":"data_0299"}
{"text":"data_0300"}
{"text":"data_0301"}
{"text":"data_0302"}
{"text":"data_0303"}
{"text":"data_0304"}
{"text":"data_0305"}
{"text":"data_0306"}
{"text":"data_0307"}
{"text":"data_0308"}
{"text":"data_0309"}
{"text":"data_0310"}
{"text":"data_0311"}
{"text":"data_0312"}
{"text":"data_0313"}
{"text":"data_0314"}
{"text":"data_0315"}
{"text":"data_0316"}
{"text":"data_0317"}
{"text":"data_0318"}
{"text":"data_0319"}
{"text":"data_0320"}
{"text":"data_0321"}
{"text":"data_0322"}
{"text":"data_0323"}
{"text":"data_0324"}
{"text":"data_0325"}
{"text":"data_0326"}
{"text":"data_0327"}
{"text":"data_0328"}
{"text":"data_0329"}
{"text":"data_0330"}
{"text":"data_0331"}
{"text":"data_0332"}
{"text":"data_0333"}
{"text":"data_0334"}
{"text":"data_0335"}
{"text":"data_0336"}
{"text":"data_0337"}
{"text":"data_0338"}
{"text":"data_0339"}
{"text":"data_0340"}
{"text":"data_0341"}
{"text":"data_0342"}
{"text":"data_0343"}
{"text":"data_0344"}
{"text":"data_0345"}
{"text":"data_0346"}
{"text":"data_0347"}
{"text":"data_0348"}
{"text":"data_0349"}
{"text":"data_0350"}
{"text":"data_0351"}
{"text":"data_0352"}
{"text":"data_0353"}
{"text":"data_0354"}
{"text":"data_0355"}
{"text":"data_0356"}
{"text":"data_0357"}
{"text":"data_0358"}
{"text":"data_0359"}
{"text":"data_0360"}
{"text":"data_0361"}
{"text":"data_0362"}
{"text":"data_0363"}
{"text":"data_0364"}
{"text":"data_0365"}
{"text":"data_0366"}
{"text":"data_0367"}
{"text":"data_0368"}
{"text":"data_0369"}
{"text":"data_0370"}
{"text":"data_0371"}
{"text":"data_0372"}
{"text":"data_0373"}
{"text":"data_0374"}
{"text":"data_0375"}
{"text":"data_0376"}
{"text":"data_0377"}
{"text":"data_0378"}
{"text":"data_0379"}
{"text":"data_0380"}
{"text":"data_0381"}
{"text":"data_0382"}
{"text":"data_0383"}
{"text":"data_0384"}
{"text":"data_0385"}
{"text":"data_0386"}
{"text":"data_0387"}
{"text":"data_0388"}
{"text":"data_0389"}
{"text":"data_0390"}
{"text":"data_0391"}
{"text":"data_0392"}
{"text":"data_0393"}
{"text":"data_0394"}
{"text":"data_0395"}
{"text":"data_0396"}
{"text":"data_0397"}
{"text":"data_0398"}
{"text":"data_0399"}
{"text":"data_0400"}
{"text":"data_0401"}
{"text":"data_0402"}
{"text":"data_0403"}
{"text":"data_0404"}
{"text":"data_0405"}
{"text":"data_0406"}
{"text":"data_0407"}
{"text":"data_0408"}
{"text":"data_0409"}
{"text":"data_0410"}
{"text":"data_0411"}
{"text":"data_0412"}
{"text":"data_0413"}
{"text":"data_0414"}
{"text":"data_0415"}
{"text":"data_0416"}
{"text":"data_0417"}
{"text":"data_0418"}
{"text":"data_0419"}
{"text":"data_0420"}
{"text":"data_0421"}
{"text":"data_0422"}
{"text":"data_0423"}
{"text":"data_0424"}
{"text":"data_0425"}
{"text":"data_0426"}
{"text":"data_0427"}
{"text":"data_0428"}
{"text":"data_0429"}
{"text":"data_0430"}
{"text":"data_0431"}
{"text":"data_0432"}
{"text":"data_0433"}
{"text":"data_0434"}
{"text":"data_0435"}
{"text":"data_0436"}
{"text":"data_0437"}
{"text":"data_0438"}
{"text":"data_0439"}
{"text":"data_0440"}
{"text":"data_0441"}
{"text":"data_0442"}
{"text":"data_0443"}
{"text":"data_0444"}
{"text":"data_0445"}
{"text":"data_0446"}
{"text":"data_0447"}
{"text":"data_0448"}
{"text":"data_0449"}
{"text":"data_0450"}
{"text":"data_0451"}
{"text":"data_0452"}
{"text":"data_0453"}
{"text":"data_0454"}
{"text":"data_0455"}
{"text":"data_0456"}
{"text":"data_0457"}
{"text":"data_0458"}
{"text":"data_0459"}
{"text":"data_0460"}
{"text":"data_0461"}
{"text":"data_0462"}
{"text":"data_0463"}
{"text":"data_0464"}
{"text":"data_0465"}
{"text":"data_0466"}
{"text":"data_0467"}
{"text":"data_0468"}
{"text":"data_0469"}
{"text":"data_0470"}
{"text":"data_0471"}
{"text":"data_0472"}
{"text":"data_0473"}
{"text":"data_0474"}
{"text":"data_0475"}
{"text":"data_0476"}
{"text":"data_0477"}
{"text":"data_0478"}
{"text":"data_0479"}
{"text":"data_0480"}
{"text":"data_0481"}
{"text":"data_0482"}
{"text":"data_0483"}
{"text":"data_0484"}
{"text":"data_0485"}
{"text":"data_0486"}
{"text":"data_0487"}
{"text":"data_0488"}
{"text":"data_0489"}
{"text":"data_0490"}
{"text":"data_0491"}
{"text":"data_0492"}
{"text":"data_0493"}
{"text":"data_0494"}
{"text":"data_0495"}
{"text":"data_0496"}
{"text":"data_0497"}
{"text":"data_0498"}
{"text":"data_0499"}
{"text":"data_0500"}
{"text":"data_0501"}
{"text":"data_0502"}
{"text":"data_0503"}
{"text":"data_0504"}
{"text":"data_0505"}
{"text":"data_0506"}
{"text":"data_0507"}
{"text":"data_0508"}
{"text":"data_0509"}
{"text":"data_0510"}
{"text":"data_0511"}
{"text":"data_0512"}
{"text":"data_0513"}
{"text":"data_0514"}
{"text":"data_0515"}
{"text":"data_0516"}
{"text":"data_0517"}
{"text":"data_0518"}
{"text":"data_0519"}
{"text":"data_0520"}
{"text":"data_0521"}
{"text":"data_0522"}
{"text":"data_0523"}
{"text":"data_0524"}
{"text":"data_0525"}
{"text":"data_0526"}
{"text":"data_0527"}
{"text":"data_0528"}
{"text":"data_0529"}
{"text":"data_0530"}
{"text":"data_0531"}
{"text":"data_0532"}
{"text":"data_0533"}
{"text":"data_0534"}
{"text":"data_0535"}
{"text":"data_0536"}
{"text":"data_0537"}
{"text":"data_0538"}
{"text":"data_0539"}
{"text":"data_0540"}
{"text":"data_0541"}
{"text":"data_0542"}
{"text":"data_0543"}
{"text":"data_0544"}
{"text":"data_0545"}
{"text":"data_0546"}
{"text":"data_0547"}
{"text":"data_0548"}
{"text":"data_0549"}
{"text":"data_0550"}
{"text":"data_0551"}
{"text":"data_0552"}
{"text":"data_0553"}
{"text":"data_0554"}
{"text":"data_0555"}
{"text":"data_0556"}
{"text":"data_0557"}
{"text":"data_0558"}
{"text":"data_0559"}
{"text":"data_0560"}
{"text":"data_0561"}
{"text":"data_0562"}
{"text":"data_0563"}
{"text":"data_0564"}
{"text":"data_0565"}
{"text":"data_0566"}
{"text":"data_0567"}
{"text":"data_0568"}
{"text":"data_0569"}
{"text":"data_0570"}
{"text":"data_0571"}
{"text":"data_0572"}
{"text":"data_0573"}
{"text":"data_0574"}
{"text":"data_0575"}
{"text":"data_0576"}
{"text":"data_0577"}
{"text":"data_0578"}
{"text":"data_0579"}
{"text":"data_0580"}
{"text":"data_0581"}
{"text":"data_0582"}
{"text":"data_0583"}
{"text":"data_0584"}
{"text":"data_0585"}
{"text":"data_0586"}
{"text":"data_0587"}
{"text":"data_0588"}
{"text":"data_0589"}
{"text":"data_0590"}
{"text":"data_0591"}
{"text":"data_0592"}
{"text":"data_0593"}
{"text":"data_0594"}
{"text":"data_0595"}
{"text":"data_0596"}
{"text":"data_0597"}
{"text":"data_0598"}
{"text":"data_0599"}
{"text":"data_0600"}
{"text":"data_0601"}
{"text":"data_0602"}
{"text":"data_0603"}
{"text":"data_0604"}
{"text":"data_0605"}
{"text":"data_0606"}
{"text":"data_0607"}
{"text":"data_0608"}
{"text":"data_0609"}
{"text":"data_0610"}
{"text":"data_0611"}
{"text":"data_0612"}
{"text":"data_0613"}
{"text":"data_0614"}
{"text":"data_0615"}
{"text":"data_0616"}
{"text":"data_0617"}
{"text":"data_0618"}
{"text":"data_0619"}
{"text":"data_0620"}
{"text":"data_0621"}
{"text":"data_0622"}
{"text":"data_0623"}
{"text":"data_0624"}
{"text":"data_0625"}
{"text":"data_0626"}
{"text":"data_0627"}
{"text":"data_0628"}
{"text":"data_0629"}
{"text":"data_0630"}
{"text":"data_0631"}
{"text":"data_0632"}
{"text":"data_0633"}
{"text":"data_0634"}
{"text":"data_0635"}
{"text":"data_0636"}
{"text":"data_0637"}
{"text":"data_0638"}
{"text":"data_0639"}
{"text":"data_0640"}
{"text":"data_0641"}
{"text":"data_0642"}
{"text":"data_0643"}
{"text":"data_0644"}
{"text":"data_0645"}
{"text":"data_0646"}
{"text":"data_0647"}
{"text":"data_0648"}
{"text":"data_0649"}
{"text":"data_0650"}
{"text":"data_0651"}
{"text":"data_0652"}
{"text":"data_0653"}
{"text":"data_0654"}
{"text":"data_0655"}
{"text":"data_0656"}
{"text":"data_0657"}
{"text":"data_0658"}
{"text":"data_0659"}
{"text":"data_0660"}
{"text":"data_0661"}
{"text":"data_0662"}
{"text":"data_0663"}
{"text":"data_0664"}
{"text":"data_0665"}
{"text":"data_0666"}
{"text":"data_0667"}
{"text":"data_0668"}
{"text":"data_0669"}
{"text":"data_0670"}
{"text":"data_0671"}
{"text":"data_0672"}
{"text":"data_0673"}
{"text":"data_0674"}
{"text":"data_0675"}
{"text":"data_0676"}
{"text":"data_0677"}
{"text":"data_0678"}
{"text":"data_0679"}
{"text":"data_0680"}
{"text":"data_0681"}
{"text":"data_0682"}
{"text":"data_0683"}
{"text":"data_0684"}
{"text":"data_0685"}
{"text":"data_0686"}
{"text":"data_0687"}
{"text":"data_0688"}
{"text":"data_0689"}
{"text":"data_0690"}
{"text":"data_0691"}
{"text":"data_0692"}
{"text":"data_0693"}
{"text":"data_0694"}
{"text":"data_0695"}
{"text":"data_0696"}
{"text":"data_0697"}
{"text":"data_0698"}
{"text":"data_0699"}
{"text":"data_0700"}
{"text":"data_0701"}
{"text":"data_0702"}
{"text":"data_0703"}
{"text":"data_0704"}
{"text":"data_0705"}
{"text":"data_0706"}
{"text":"data_0707"}
{"text":"data_0708"}
{"text":"data_0709"}
{"text":"data_0710"}
{"text":"data_0711"}
{"text":"data_0712"}
{"text":"data_0713"}
{"text":"data_0714"}
{"text":"data_0715"}
{"text":"data_0716"}
{"text":"data_0717"}
{"text":"data_0718"}
{"text":"data_0719"}
{"text":"data_0720"}
{"text":"data_0721"}
{"text":"data_0722"}
{"text":"data_0723"}
{"text":"data_0724"}
{"text":"data_0725"}
{"text":"data_0726"}
{"text":"data_0727"}
{"text":"data_0728"}
{"text":"data_0729"}
{"text":"data_0730"}
{"text":"data_0731"}
{"text":"data_0732"}
{"text":"data_0733"}
{"text":"data_0734"}
{"text":"data_0735"}
{"text":"data_0736"}
{"text":"data_0737"}
{"text":"data_0738"}
{"text":"data_0739"}
{"text":"data_0740"}
{"text":"data_0741"}
{"text":"data_0742"}
{"text":"data_0743"}
{"text":"data_0744"}
{"text":"data_0745"}
{"text":"data_0746"}
{"text":"data_0747"}
{"text":"data_0748"}
{"text":"data_0749"}
{"text":"data_0750"}
{"text":"data_0751"}
{"text":"data_0752"}
{"text":"data_0753"}
{"text":"data_0754"}
{"text":"data_0755"}
{"text":"data_0756"}
{"text":"data_0757"}
{"text":"data_0758"}
{"text":"data_0759"}
{"text":"data_0760"}
{"text":"data_0761"}
{"text":"data_0762"}
{"text":"data_0763"}
{"text":"data_0764"}
{"text":"data_0765"}
{"text":"data_0766"}
{"text":"data_0767"}
{"text":"data_0768"}
{"text":"data_0769"}
{"text":"data_0770"}
{"text":"data_0771"}
{"text":"data_0772"}
{"text":"data_0773"}
{"text":"data_0774"}
{"text":"data_0775"}
{"text":"data_0776"}
{"text":"data_0777"}
{"text":"data_0778"}
{"text":"data_0779"}
{"text":"data_0780"}
{"text":"data_0781"}
{"text":"data_0782"}
{"text":"data_0783"}
{"text":"data_0784"}
{"text":"data_0785"}
{"text":"data_0786"}
{"text":"data_0787"}
{"text":"data_0788"}
{"text":"data_0789"}
{"text":"data_0790"}
{"text":"data_0791"}
{"text":"data_0792"}
{"text":"data_0793"}
{"text":"data_0794"}
{"text":"data_0795"}
{"text":"data_0796"}
{"text":"data_0797"}
{"text":"data_0798"}
{"text":"data_0799"}
{"text":"data_0800"}
{"text":"data_0801"}
{"text":"data_0802"}
{"text":"data_0803"}
{"text":"data_0804"}
{"text":"data_0805"}
{"text":"data_0806"}
{"text":"data_0807"}
{"text":"data_0808"}
{"text":"data_0809"}
{"text":"data_0810"}
{"text":"data_0811"}
{"text":"data_0812"}
{"text":"data_0813"}
{"text":"data_0814"}
{"text":"data_0815"}
{"text":"data_0816"}
{"text":"data_0817"}
{"text":"data_0818"}
{"text":"data_0819"}
{"text":"data_0820"}
{"text":"data_0821"}
{"text":"data_0822"}
{"text":"data_0823"}
{"text":"data_0824"}
{"text":"data_0825"}
{"text":"data_0826"}
{"text":"data_0827"}
{"text":"data_0828"}
{"text":"data_0829"}
{"text":"data_0830"}
{"text":"data_0831"}
{"text":"data_0832"}
{"text":"data_0833"}
{"text":"data_0834"}
{"text":"data_0835"}
{"text":"data_0836"}
{"text":"data_0837"}
{"text":"data_0838"}
{"text":"data_0839"}
{"text":"data_0840"}
{"text":"data_0841"}
{"text":"data_0842"}
{"text":"data_0843"}
{"text":"data_0844"}
{"text":"data_0845"}
{"text":"data_0846"}
{"text":"data_0847"}
{"text":"data_0848"}
{"text":"data_0849"}
{"text":"data_0850"}
{"text":"data_0851"}
{"text":"data_0852"}
{"text":"data_0853"}
{"text":"data_0854"}
{"text":"data_0855"}
{"text":"data_0856"}
{"text":"data_0857"}
{"text":"data_0858"}
{"text":"data_0859"}
{"text":"data_0860"}
{"text":"data_0861"}
{"text":"data_0862"}
{"text":"data_0863"}
{"text":"data_0864"}
{"text":"data_0865"}
{"text":"data_0866"}
{"text":"data_0867"}
{"text":"data_0868"}
{"text":"data_0869"}
{"text":"data_0870"}
{"text":"data_0871"}
{"text":"data_0872"}
{"text":"data_0873"}
{"text":"data_0874"}
{"text":"data_0875"}
{"text":"data_0876"}
{"text":"data_0877"}
{"text":"data_0878"}
{"text":"data_0879"}
{"text":"data_0880"}
{"text":"data_0881"}
{"text":"data_0882"}
{"text":"data_0883"}
{"text":"data_0884"}
{"text":"data_0885"}
{"text":"data_0886"}
{"text":"data_0887"}
{"text":"data_0888"}
{"text":"data_0889"}
{"text":"data_0890"}
{"text":"data_0891"}
{"text":"data_0892"}
{"text":"data_0893"}
{"text":"data_0894"}
{"text":"data_0895"}
{"text":"data_0896"}
{"text":"data_0897"}
{"text":"data_0898"}
{"text":"data_0899"}
{"text":"data_0900"}
{"text":"data_0901"}
{"text":"data_0902"}
{"text":"data_0903"}
{"text":"data_0904"}
{"text":"data_0905"}
{"text":"data_0906"}
{"text":"data_0907"}
{"text":"data_0908"}
{"text":"data_0909"}
{"text":"data_0910"}
{"text":"data_0911"}
{"text":"data_0912"}
{"text":"data_0913"}
{"text":"data_0914"}
{"text":"data_0915"}
{"text":"data_0916"}
{"text":"data_0917"}
{"text":"data_0918"}
{"text":"data_0919"}
{"text":"data_0920"}
{"text":"data_0921"}
{"text":"data_0922"}
{"text":"data_0923"}
{"text":"data_0924"}
{"text":"data_0925"}
{"text":"data_0926"}
{"text":"data_0927"}
{"text":"data_0928"}
{"text":"data_0929"}
{"text":"data_0930"}
{"text":"data_0931"}
{"text":"data_0932"}
{"text":"data_0933"}
{"text":"data_0934"}
{"text":"data_0935"}
{"text":"data_0936"}
{"text":"data_0937"}
{"text":"data_0938"}
{"text":"data_0939"}
{"text":"data_0940"}
{"text":"data_0941"}
{"text":"data_0942"}
{"text":"data_0943"}
{"text":"data_0944"}
{"text":"data_0945"}
{"text":"data_0946"}
{"text":"data_0947"}
{"text":"data_0948"}
{"text":"data_0949"}
{"text":"data_0950"}
{"text":"data_0951"}
{"text":"data_0952"}
{"text":"data_0953"}
{"text":"data_0954"}
{"text":"data_0955"}
{"text":"data_0956"}
{"text":"data_0957"}
{"text":"data_0958"}
{"text":"data_0959"}
{"text":"data_0960"}
{"text":"data_0961"}
{"text":"data_0962"}
{"text":"data_0963"}
{"text":"data_0964"}
{"text":"data_0965"}
{"text":"data_0966"}
{"text":"data_0967"}
{"text":"data_0968"}
{"text":"data_0969"}
{"text":"data_0970"}
{"text":"data_0971"}
{"text":"data_0972"}
{"text":"data_0973"}
{"text":"data_0974"}
{"text":"data_0975"}
{"text":"data_0976"}
{"text":"data_0977"}
{"text":"data_0978"}
{"text":"data_0979"}
{"text":"data_0980"}
{"text":"data_0981"}
{"text":"data_0982"}
{"text":"data_0983"}
{"text":"data_0984"}
{"text":"data_0985"}
{"text":"data_0986"}
{"text":"data_0987"}
{"text":"data_0988"}
{"text":"data_0989"}
{"text":"data_0990"}
{"text":"data_0991"}
{"text":"data_0992"}
{"text":"data_0993"}
{"text":"data_0994"}
{"text":"data_0995"}
{"text":"data_0996"}
{"text":"data_0997"}
{"text":"data_0998"}
{"text":"data_0999"}
{"text":"data_1000"}
{"text":"data_1001"}
{"text":"data_1002"}
{"text":"data_1003"}
{"text":"data_1004"}
{"text":"data_1005"}
{"text":"data_1006"}
{"text":"data_1007"}
{"text":"data_1008"}
{"text":"data_1009"}
{"text":"data_1010"}
{"text":"data_1011"}
{"text":"data_1012"}
{"text":"data_1013"}
{"text":"data_1014"}
{"text":"data_1015"}
{"text":"data_1016"}
{"text":"data_1017"}
{"text":"data_1018"}
{"text":"data_1019"}
{"text":"data_1020"}
{"text":"data_1021"}
{"text":"data_1022"}
{"text":"data_1023"}
{"text":"data_1024"}
{"text":"data_1025"}
{"text":"data_1026"}
{"text":"data_1027"}
{"text":"data_1028"}
{"text":"data_1029"}
{"text":"data_1030"}
{"text":"data_1031"}
{"text":"data_1032"}
{"text":"data_1033"}
{"text":"data_1034"}
{"text":"data_1035"}
{"text":"data_1036"}
{"text":"data_1037"}
{"text":"data_1038"}
{"text":"data_1039"}
{"text":"data_1040"}
{"text":"data_1041"}
{"text":"data_1042"}
{"text":"data_1043"}
{"text":"data_1044"}
{"text":"data_1045"}
{"text":"data_1046"}
{"text":"data_1047"}
{"text":"data_1048"}
{"text":"data_1049"}
{"text":"data_1050"}
{"text":"data_1051"}
{"text":"data_1052"}
{"text":"data_1053"}
{"text":"data_1054"}
{"text":"data_1055"}
{"text":"data_1056"}
{"text":"data_1057"}
{"text":"data_1058"}
{"text":"data_1059"}
{"text":"data_1060"}
{"text":"data_1061"}
{"text":"data_1062"}
{"text":"data_1063"}
{"text":"data_1064"}
{"text":"data_1065"}
{"text":"data_1066"}
{"text":"data_1067"}
{"text":"data_1068"}
{"text":"data_1069"}
{"text":"data_1070"}
{"text":"data_1071"}
{"text":"data_1072"}
{"text":"data_1073"}
{"text":"data_1074"}
{"text":"data_1075"}
{"text":"data_1076"}
{"text":"data_1077"}
{"text":"data_1078"}
{"text":"data_1079"}
{"text":"data_1080"}
{"text":"data_1081"}
{"text":"data_1082"}
{"text":"data_1083"}
{"text":"data_1084"}
{"text":"data_1085"}
{"text":"data_1086"}
{"text":"data_1087"}
{"text":"data_1088"}
{"text":"data_1089"}
{"text":"data_1090"}
{"text":"data_1091"}
{"text":"data_1092"}
{"text":"data_1093"}
{"text":"data_1094"}
{"text":"data_1095"}
{"text":"data_1096"}
{"text":"data_1097"}
{"text":"data_1098"}
{"text":"data_1099"}
{"text":"data_1100"}
{"text":"data_1101"}
{"text":"data_1102"}
{"text":"data_1103"}
{"text":"data_1104"}
{"text":"data_1105"}
{"text":"data_1106"}
{"text":"data_1107"}
{"text":"data_1108"}
{"text":"data_1109"}
{"text":"data_1110"}
{"text":"data_1111"}
{"text":"data_1112"}
{"text":"data_1113"}
{"text":"data_1114"}
{"text":"data_1115"}
{"text":"data_1116"}
{"text":"data_1117"}
{"text":"data_1118"}
{"text":"data_1119"}
{"text":"data_1120"}
{"text":"data_1121"}
{"text":"data_1122"}
{"text":"data_1123"}
{"text":"data_1124"}
{"text":"data_1125"}
{"text":"data_1126"}
{"text":"data_1127"}
{"text":"data_1128"}
{"text":"data_1129"}
{"text":"data_1130"}
{"text":"data_1131"}
{"text":"data_1132"}
{"text":"data_1133"}
{"text":"data_1134"}
{"text":"data_1135"}
{"text":"data_1136"}
{"text":"data_1137"}
{"text":"data_1138"}
{"text":"data_1139"}
{"text":"data_1140"}
{"text":"data_1141"}
{"text":"data_1142"}
{"text":"data_1143"}
{"text":"data_1144"}
{"text":"data_1145"}
{"text":"data_1146"}
{"text":"data_1147"}
{"text":"data_1148"}
{"text":"data_1149"}
{"text":"data_1150"}
{"text":"data_1151"}
{"text":"data_1152"}
{"text":"data_1153"}
{"text":"data_1154"}
{"text":"data_1155"}
{"text":"data_1156"}
{"text":"data_1157"}
{"text":"data_1158"}
{"text":"data_1159"}
{"text":"data_1160"}
{"text":"data_1161"}
{"text":"data_1162"}
{"text":"data_1163"}
{"text":"data_1164"}
{"text":"data_1165"}
{"text":"data_1166"}
{"text":"data_1167"}
{"text":"data_1168"}
{"text":"data_1169"}
{"text":"data_1170"}
{"text":"data_1171"}
{"text":"data_1172"}
{"text":"data_1173"}
{"text":"data_1174"}
{"text":"data_1175"}
{"text":"data_1176"}
{"text":"data_1177"}
{"text":"data_1178"}
{"text":"data_1179"}
{"text":"data_1180"}
{"text":"data_1181"}
{"text":"data_1182"}
{"text":"data_1183"}
{"text":"data_1184"}
{"text":"data_1185"}
{"text":"data_1186"}
{"text":"data_1187"}
{"text":"data_1188"}
{"text":"data_1189"}
{"text":"data_1190"}
{"text":"data_1191"}
{"text":"data_1192"}
{"text":"data_1193"}
{"text":"data_1194"}
{"text":"data_1195"}
{"text":"data_1196"}
{"text":"data_1197"}
{"text":"data_1198"}
{"text":"data_1199"}
{"text":"data_1200"}
{"text":"data_1201"}
{"text":"data_1202"}
{"text":"data_1203"}
{"text":"data_1204"}
{"text":"data_1205"}
{"text":"data_1206"}
{"text":"data_1207"}
{"text":"data_1208"}
{"text":"data_1209"}
{"text":"data_1210"}
{"text":"data_1211"}
{"text":"data_1212"}
{"text":"data_1213"}
{"text":"data_1214"}
{"text":"data_1215"}
{"text":"data_1216"}
{"text":"data_1217"}
{"text":"data_1218"}
{"text":"data_1219"}
{"text":"data_1220"}
{"text":"data_1221"}
{"text":"data_1222"}
{"text":"data_1223"}
{"text":"data_1224"}
{"text":"data_1225"}
{"text":"data_1226"}
{"text":"data_1227"}
{"text":"data_1228"}
{"text":"data_1229"}
{"text":"data_1230"}
{"text":"data_1231"}
{"text":"data_1232"}
{"text":"data_1233"}
{"text":"data_1234"}
{"text":"data_1235"}
{"text":"data_1236"}
{"text":"data_1237"}
{"text":"data_1238"}
{"text":"data_1239"}
{"text":"data_1240"}
{"text":"data_1241"}
{"text":"data_1242"}
{"text":"data_1243"}
{"text":"data_1244"}
{"text":"data_1245"}
{"text":"data_1246"}
{"text":"data_1247"}
{"text":"data_1248"}
{"text":"data_1249"}
{"text":"data_1250"}
{"text":"data_1251"}
{"text":"data_1252"}
{"text":"data_1253"}
{"text":"data_1254"}
{"text":"data_1255"}
{"text":"data_1256"}
{"text":"data_1257"}
{"text":"data_1258"}
{"text":"data_1259"}
{"text":"data_1260"}
{"text":"data_1261"}
{"text":"data_1262"}
{"text":"data_1263"}
{"text":"data_1264"}
{"text":"data_1265"}
{"text":"data_1266"}
{"text":"data_1267"}
{"text":"data_1268"}
{"text":"data_1269"}
{"text":"data_1270"}
{"text":"data_1271"}
{"text":"data_1272"}
{"text":"data_1273"}
{"text":"data_1274"}
{"text":"data_1275"}
{"text":"data_1276"}
{"text":"data_1277"}
{"text":"data_1278"}
{"text":"data_1279"}
{"text":"data_1280"}
{"text":"data_1281"}
{"text":"data_1282"}
{"text":"data_1283"}
{"text":"data_1284"}
{"text":"data_1285"}
{"text":"data_1286"}
{"text":"data_1287"}
{"text":"data_1288"}
{"text":"data_1289"}
{"text":"data_1290"}
{"text":"data_1291"}
{"text":"data_1292"}
{"text":"data_1293"}
{"text":"data_1294"}
{"text":"data_1295"}
{"text":"data_1296"}
{"text":"data_1297"}
{"text":"data_1298"}
{"text":"data_1299"}
{"text":"data_1300"}
{"text":"data_1301"}
{"text":"data_1302"}
{"text":"data_1303"}
{"text":"data_1304"}
{"text":"data_1305"}
{"text":"data_1306"}
{"text":"data_1307"}
{"text":"data_1308"}
{"text":"data_1309"}
{"text":"data_1310"}
{"text":"data_1311"}
{"text":"data_1312"}
{"text":"data_1313"}
{"text":"data_1314"}
{"text":"data_1315"}
{"text":"data_1316"}
{"text":"data_1317"}
{"text":"data_1318"}
{"text":"data_1319"}
{"text":"data_1320"}
{"text":"data_1321"}
{"text":"data_1322"}
{"text":"data_1323"}
{"text":"data_1324"}
{"text":"data_1325"}
{"text":"data_1326"}
{"text":"data_1327"}
{"text":"data_1328"}
{"text":"data_1329"}
{"text":"data_1330"}
{"text":"data_1331"}
{"text":"data_1332"}
{"text":"data_1333"}
{"text":"data_1334"}
{"text":"data_1335"}
{"text":"data_1336"}
{"text":"data_1337"}
{"text":"data_1338"}
{"text":"data_1339"}
{"text":"data_1340"}
{"text":"data_1341"}
{"text":"data_1342"}
{"text":"data_1343"}
{"text":"data_1344"}
{"text":"data_1345"}
{"text":"data_1346"}
{"text":"data_1347"}
{"text":"data_1348"}
{"text":"data_1349"}
{"text":"data_1350"}
{"text":"data_1351"}
{"text":"data_1352"}
{"text":"data_1353"}
{"text":"data_1354"}
{"text":"data_1355"}
{"text":"data_1356"}
{"text":"data_1357"}
{"text":"data_1358"}
{"text":"data_1359"}
{"text":"data_1360"}
{"text":"data_1361"}
{"text":"data_1362"}
{"text":"data_1363"}
{"text":"data_1364"}
{"text":"data_1365"}
{"text":"data_1366"}
{"text":"data_1367"}
{"text":"data_1368"}
{"text":"data_1369"}
{"text":"data_1370"}
{"text":"data_1371"}
{"text":"data_1372"}
{"text":"data_1373"}
{"text":"data_1374"}
{"text":"data_1375"}
{"text":"data_1376"}
{"text":"data_1377"}
{"text":"data_1378"}
{"text":"data_1379"}
{"text":"data_1380"}
{"text":"data_1381"}
{"text":"data_1382"}
{"text":"data_1383"}
{"text":"data_1384"}
{"text":"data_1385"}
{"text":"data_1386"}
{"text":"data_1387"}
{"text":"data_1388"}
{"text":"data_1389"}
{"text":"data_1390"}
{"text":"data_1391"}
{"text":"data_1392"}
{"text":"data_1393"}
{"text":"data_1394"}
{"text":"data_1395"}
{"text":"data_1396"}
{"text":"data_1397"}
{"text":"data_1398"}
{"text":"data_1399"}
{"text":"data_1400"}
{"text":"data_1401"}
{"text":"data_1402"}
{"text":"data_1403"}
{"text":"data_1404"}
{"text":"data_1405"}
{"text":"data_1406"}
{"text":"data_1407"}
{"text":"data_1408"}
{"text":"data_1409"}
{"text":"data_1410"}
{"text":"data_1411"}
{"text":"data_1412"}
{"text":"data_1413"}
{"text":"data_1414"}
{"text":"data_1415"}
{"text":"data_1416"}
{"text":"data_1417"}
{"text":"data_1418"}
{"text":"data_1419"}
{"text":"data_1420"}
{"text":"data_1421"}
{"text":"data_1422"}
{"text":"data_1423"}
{"text":"data_1424"}
{"text":"data_1425"}
{"text":"data_1426"}
{"text":"data_1427"}
{"text":"data_1428"}
{"text":"data_1429"}
{"text":"data_1430"}
{"text":"data_1431"}
{"text":"data_1432"}
{"text":"data_1433"}
{"text":"data_1434"}
{"text":"data_1435"}
{"text":"data_1436"}
{"text":"data_1437"}
{"text":"data_1438"}
{"text":"data_1439"}
{"text":"data_1440"}
{"text":"data_1441"}
{"text":"data_1442"}
{"text":"data_1443"}
{"text":"data_1444"}
{"text":"data_1445"}
{"text":"data_1446"}
{"text":"data_1447"}
{"text":"data_1448"}
{"text":"data_1449"}
{"text":"data_1450"}
{"text":"data_1451"}
{"text":"data_1452"}
{"text":"data_1453"}
{"text":"data_1454"}
{"text":"data_1455"}
{"text":"data_1456"}
{"text":"data_1457"}
{"text":"data_1458"}
{"text":"data_1459"}
{"text":"data_1460"}
{"text":"data_1461"}
{"text":"data_1462"}
{"text":"data_1463"}
{"text":"data_1464"}
{"text":"data_1465"}
{"text":"data_1466"}
{"text":"data_1467"}
{"text":"data_1468"}
{"text":"data_1469"}
{"text":"data_1470"}
{"text":"data_1471"}
{"text":"data_1472"}
{"text":"data_1473"}
{"text":"data_1474"}
{"text":"data_1475"}
{"text":"data_1476"}
{"text":"data_1477"}
{"text":"data_1478"}
{"text":"data_1479"}
{"text":"data_1480"}
{"text":"data_1481"}
{"text":"data_1482"}
{"text":"data_1483"}
{"text":"data_1484"}
{"text":"data_1485"}
{"text":"data_1486"}
{"text":"data_1487"}
{"text":"data_1488"}
{"text":"data_1489"}
{"text":"data_1490"}
{"text":"data_1491"}
{"text":"data_1492"}
{"text":"data_1493"}
{"text":"data_1494"}
{"text":"data_1495"}
{"text":"data_1496"}
{"text":"data_1497"}
{"text":"data_1498"}
{"text":"data_1499"}
{"text":"data_1500"}
{"text":"data_1501"}
{"text":"data_1502"}
{"text":"data_1503"}
{"text":"data_1504"}
{"text":"data_1505"}
{"text":"data_1506"}
{"text":"data_1507"}
{"text":"data_1508"}
{"text":"data_1509"}
{"text":"data_1510"}
{"text":"data_1511"}
{"text":"data_1512"}
{"text":"data_1513"}
{"text":"data_1514"}
{"text":"data_1515"}
{"text":"data_1516"}
{"text":"data_1517"}
{"text":"data_1518"}
{"text":"data_1519"}
{"text":"data_1520"}
{"text":"data_1521"}
{"text":"data_1522"}
{"text":"data_1523"}
{"text":"data_1524"}
{"text":"data_1525"}
{"text":"data_1526"}
{"text":"data_1527"}
{"text":"data_1528"}
{"text":"data_1529"}
{"text":"data_1530"}
{"text":"data_1531"}
{"text":"data_1532"}
{"text":"data_1533"}
{"text":"data_1534"}
{"text":"data_1535"}
{"text":"data_1536"}
{"text":"data_1537"}
{"text":"data_1538"}
{"text":"data_1539"}
{"text":"data_1540"}
{"text":"data_1541"}
{"text":"data_1542"}
{"text":"data_1543"}
{"text":"data_1544"}
{"text":"data_1545"}
{"text":"data_1546"}
{"text":"data_1547"}
{"text":"data_1548"}
{"text":"data_1549"}
{"text":"data_1550"}
{"text":"data_1551"}
{"text":"data_1552"}
{"text":"data_1553"}
{"text":"data_1554"}
{"text":"data_1555"}
{"text":"data_1556"}
{"text":"data_1557"}
{"text":"data_1558"}
{"text":"data_1559"}
{"text":"data_1560"}
{"text":"data_1561"}
{"text":"data_1562"}
{"text":"data_1563"}
{"text":"data_1564"}
{"text":"data_1565"}
{"text":"data_1566"}
{"text":"data_1567"}
{"text":"data_1568"}
{"text":"data_1569"}
{"text":"data_1570"}
{"text":"data_1571"}
{"text":"data_1572"}
{"text":"data_1573"}
{"text":"data_1574"}
{"text":"data_1575"}
{"text":"data_1576"}
{"text":"data_1577"}
{"text":"data_1578"}
{"text":"data_1579"}
{"text":"data_1580"}
{"text":"data_1581"}
{"text":"data_1582"}
{"text":"data_1583"}
{"text":"data_1584"}
{"text":"data_1585"}
{"text":"data_1586"}
{"text":"data_1587"}
{"text":"data_1588"}
{"text":"data_1589"}
{"text":"data_1590"}
{"text":"data_1591"}
{"text":"data_1592"}
{"text":"data_1593"}
{"text":"data_1594"}
{"text":"data_1595"}
{"text":"data_1596"}
{"text":"data_1597"}
{"text":"data_1598"}
{"text":"data_1599"}
{"text":"data_1600"}
{"text":"data_1601"}
{"text":"data_1602"}
{"text":"data_1603"}
{"text":"data_1604"}
{"text":"data_1605"}
{"text":"data_1606"}
{"text":"data_1607"}
{"text":"data_1608"}
{"text":"data_1609"}
{"text":"data_1610"}
{"text":"data_1611"}
{"text":"data_1612"}
{"text":"data_1613"}
{"text":"data_1614"}
{"text":"data_1615"}
{"text":"data_1616"}
{"text":"data_1617"}
{"text":"data_1618"}
{"text":"data_1619"}
{"text":"data_1620"}
{"text":"data_1621"}
{"text":"data_1622"}
{"text":"data_1623"}
{"text":"data_1624"}
{"text":"data_1625"}
{"text":"data_1626"}
{"text":"data_1627"}
{"text":"data_1628"}
{"text":"data_1629"}
{"text":"data_1630"}
{"text":"data_1631"}
{"text":"data_1632"}
{"text":"data_1633"}
{"text":"data_1634"}
{"text":"data_1635"}
{"text":"data_1636"}
{"text":"data_1637"}
{"text":"data_1638"}
{"text":"data_1639"}
{"text":"data_1640"}
{"text":"data_1641"}
{"text":"data_1642"}
{"text":"data_1643"}
{"text":"data_1644"}
{"text":"data_1645"}
{"text":"data_1646"}
{"text":"data_1647"}
{"text":"data_1648"}
{"text":"data_1649"}
{"text":"data_1650"}
{"text":"data_1651"}
{"text":"data_1652"}
{"text":"data_1653"}
{"text":"data_1654"}
{"text":"data_1655"}
{"text":"data_1656"}
{"text":"data_1657"}
{"text":"data_1658"}
{"text":"data_1659"}
{"text":"data_1660"}
{"text":"data_1661"}
{"text":"data_1662"}
{"text":"data_1663"}
{"text":"data_1664"}
{"text":"data_1665"}
{"text":"data_1666"}
{"text":"data_1667"}
{"text":"data_1668"}
{"text":"data_1669"}
{"text":"data_1670"}
{"text":"data_1671"}
{"text":"data_1672"}
{"text":"data_1673"}
{"text":"data_1674"}
{"text":"data_1675"}
{"text":"data_1676"}
{"text":"data_1677"}
{"text":"data_1678"}
{"text":"data_1679"}
{"text":"data_1680"}
{"text":"data_1681"}
{"text":"data_1682"}
{"text":"data_1683"}
{"text":"data_1684"}
{"text":"data_1685"}
{"text":"data_1686"}
{"text":"data_1687"}
{"text":"data_1688"}
{"text":"data_1689"}
{"text":"data_1690"}
{"text":"data_1691"}
{"text":"data_1692"}
{"text":"data_1693"}
{"text":"data_1694"}
{"text":"data_1695"}
{"text":"data_1696"}
{"text":"data_1697"}
{"text":"data_1698"}
{"text":"data_1699"}
{"text":"data_1700"}
{"text":"data_1701"}
{"text":"data_1702"}
{"text":"data_1703"}
{"text":"data_1704"}
{"text":"data_1705"}
{"text":"data_1706"}
{"text":"data_1707"}
{"text":"data_1708"}
{"text":"data_1709"}
{"text":"data_1710"}
{"text":"data_1711"}
{"text":"data_1712"}
{"text":"data_1713"}
{"text":"data_1714"}
{"text":"data_1715"}
{"text":"data_1716"}
{"text":"data_1717"}
{"text":"data_1718"}
{"text":"data_1719"}
{"text":"data_1720"}
{"text":"data_1721"}
{"text":"data_1722"}
{"text":"data_1723"}
{"text":"data_1724"}
{"text":"data_1725"}
{"text":"data_1726"}
{"text":"data_1727"}
{"text":"data_1728"}
{"text":"data_1729"}
{"text":"data_1730"}
{"text":"data_1731"}
{"text":"data_1732"}
{"text":"data_1733"}
{"text":"data_1734"}
{"text":"data_1735"}
{"text":"data_1736"}
{"text":"data_1737"}
{"text":"data_1738"}
{"text":"data_1739"}
{"text":"data_1740"}
{"text":"data_1741"}
{"text":"data_1742"}
{"text":"data_1743"}
{"text":"data_1744"}
{"text":"data_1745"}
{"text":"data_1746"}
{"text":"data_1747"}
{"text":"data_1748"}
{"text":"data_1749"}
{"text":"data_1750"}
{"text":"data_1751"}
{"text":"data_1752"}
{"text":"data_1753"}
{"text":"data_1754"}
{"text":"data_1755"}
{"text":"data_1756"}
{"text":"data_1757"}
{"text":"data_1758"}
{"text":"data_1759"}
{"text":"data_1760"}
{"text":"data_1761"}
{"text":"data_1762"}
{"text":"data_1763"}
{"text":"data_1764"}
{"text":"data_1765"}
{"text":"data_1766"}
{"text":"data_1767"}
{"text":"data_1768"}
{"text":"data_1769"}
{"text":"data_1770"}
{"text":"data_1771"}
{"text":"data_1772"}
{"text":"data_1773"}
{"text":"data_1774"}
{"text":"data_1775"}
{"text":"data_1776"}
{"text":"data_1777"}
{"text":"data_1778"}
{"text":"data_1779"}
{"text":"data_1780"}
{"text":"data_1781"}
{"text":"data_1782"}
{"text":"data_1783"}
{"text":"data_1784"}
{"text":"data_1785"}
{"text":"data_1786"}
{"text":"data_1787"}
{"text":"data_1788"}
{"text":"data_1789"}
{"text":"data_1790"}
{"text":"data_1791"}
{"text":"data_1792"}
{"text":"data_1793"}
{"text":"data_1794"}
{"text":"data_1795"}
{"text":"data_1796"}
{"text":"data_1797"}
{"text":"data_1798"}
{"text":"data_1799"}
{"text":"data_1800"}
{"text":"data_1801"}
{"text":"data_1802"}
{"text":"data_1803"}
{"text":"data_1804"}
{"text":"data_1805"}
{"text":"data_1806"}
{"text":"data_1807"}
{"text":"data_1808"}
{"text":"data_1809"}
{"text":"data_1810"}
{"text":"data_1811"}
{"text":"data_1812"}
{"text":"data_1813"}
{"text":"data_1814"}
{"text":"data_1815"}
{"text":"data_1816"}
{"text":"data_1817"}
{"text":"data_1818"}
{"text":"data_1819"}
{"text":"data_1820"}
{"text":"data_1821"}
{"text":"data_1822"}
{"text":"data_1823"}
{"text":"data_1824"}
{"text":"data_1825"}
{"text":"data_1826"}
{"text":"data_1827"}
{"text":"data_1828"}
{"text":"data_1829"}
{"text":"data_1830"}
{"text":"data_1831"}
{"text":"data_1832"}
{"text":"data_1833"}
{"text":"data_1834"}
{"text":"data_1835"}
{"text":"data_1836"}
{"text":"data_1837"}
{"text":"data_1838"}
{"text":"data_1839"}
{"text":"data_1840"}
{"text":"data_1841"}
{"text":"data_1842"}
{"text":"data_1843"}
{"text":"data_1844"}
{"text":"data_1845"}
{"text":"data_1846"}
{"text":"data_1847"}
{"text":"data_1848"}
{"text":"data_1849"}
{"text":"data_1850"}
{"text":"data_1851"}
{"text":"data_1852"}
{"text":"data_1853"}
{"text":"data_1854"}
{"text":"data_1855"}
{"text":"data_1856"}
{"text":"data_1857"}
{"text":"data_1858"}
{"text":"data_1859"}
{"text":"data_1860"}
{"text":"data_1861"}
{"text":"data_1862"}
{"text":"data_1863"}
{"text":"data_1864"}
{"text":"data_1865"}
{"text":"data_1866"}
{"text":"data_1867"}
{"text":"data_1868"}
{"text":"data_1869"}
{"text":"data_1870"}
{"text":"data_1871"}
{"text":"data_1872"}
{"text":"data_1873"}
{"text":"data_1874"}
{"text":"data_1875"}
{"text":"data_1876"}
{"text":"data_1877"}
{"text":"data_1878"}
{"text":"data_1879"}
{"text":"data_1880"}
{"text":"data_1881"}
{"text":"data_1882"}
{"text":"data_1883"}
{"text":"data_1884"}
{"text":"data_1885"}
{"text":"data_1886"}
{"text":"data_1887"}
{"text":"data_1888"}
{"text":"data_1889"}
{"text":"data_1890"}
{"text":"data_1891"}
{"text":"data_1892"}
{"text":"data_1893"}
{"text":"data_1894"}
{"text":"data_1895"}
{"text":"data_1896"}
{"text":"data_1897"}
{"text":"data_1898"}
{"text":"data_1899"}
{"text":"data_1900"}
{"text":"data_1901"}
{"text":"data_1902"}
{"text":"data_1903"}
{"text":"data_1904"}
{"text":"data_1905"}
{"text":"data_1906"}
{"text":"data_1907"}
{"text":"data_1908"}
{"text":"data_1909"}
{"text":"data_1910"}
{"text":"data_1911"}
{"text":"data_1912"}
{"text":"data_1913"}
{"text":"data_1914"}
{"text":"data_1915"}
{"text":"data_1916"}
{"text":"data_1917"}
{"text":"data_1918"}
{"text":"data_1919"}
{"text":"data_1920"}
{"text":"data_1921"}
{"text":"data_1922"}
{"text":"data_1923"}
{"text":"data_1924"}
{"text":"data_1925"}
{"text":"data_1926"}
{"text":"data_1927"}
{"text":"data_1928"}
{"text":"data_1929"}
{"text":"data_1930"}
{"text":"data_1931"}
{"text":"data_1932"}
{"text":"data_1933"}
{"text":"data_1934"}
{"text":"data_1935"}
{"text":"data_1936"}
{"text":"data_1937"}
{"text":"data_1938"}
{"text":"data_1939"}
{"text":"data_1940"}
{"text":"data_1941"}
{"text":"data_1942"}
{"text":"data_1943"}
{"text":"data_1944"}
{"text":"data_1945"}
{"text":"data_1946"}
{"text":"data_1947"}
{"text":"data_1948"}
{"text":"data_1949"}
{"text":"data_1950"}
{"text":"data_1951"}
{"text":"data_1952"}
{"text":"data_1953"}
{"text":"data_1954"}
{"text":"data_1955"}
{"text":"data_1956"}
{"text":"data_1957"}
{"text":"data_1958"}
{"text":"data_1959"}
{"text":"data_1960"}
{"text":"data_1961"}
{"text":"data_1962"}
{"text":"data_1963"}
{"text":"data_1964"}
{"text":"data_1965"}
{"text":"data_1966"}
{"text":"data_1967"}
{"text":"data_1968"}
{"text":"data_1969"}
{"text":"data_1970"}
{"text":"data_1971"}
{"text":"data_1972"}
{"text":"data_1973"}
{"text":"data_1974"}
{"text":"data_1975"}
{"text":"data_1976"}
{"text":"data_1977"}
{"text":"data_1978"}
{"text":"data_1979"}
{"text":"data_1980"}
{"text":"data_1981"}
{"text":"data_1982"}
{"text":"data_1983"}
{"text":"data_1984"}
{"text":"data_1985"}
{"text":"data_1986"}
{"text":"data_1987"}
{"text":"data_1988"}
{"text":"data_1989"}
{"text":"data_1990"}
{"text":"data_1991"}
{"text":"data_1992"}
{"text":"data_1993"}
{"text":"data_1994"}
{"text":"data_1995"}
{"text":"data_1996"}
{"text":"data_1997"}
{"text":"data_1998"}
{"text":"data_1999"}
{"text":"data_2000"}
{"text":"data_2001"}
{"text":"data_2002"}
{"text":"data_2003"}
{"text":"data_2004"}
{"text":"data_2005"}
{"text":"data_2006"}
{"text":"data_2007"}
{"text":"data_2008"}
{"text":"data_2009"}
{"text":"data_2010"}
{"text":"data_2011"}
{"text":"data_2012"}
{"text":"data_2013"}
{"text":"data_2014"}
{"text":"data_2015"}
{"text":"data_2016"}
{"text":"data_2017"}
{"text":"data_2018"}
{"text":"data_2019"}
{"text":"data_2020"}
{"text":"data_2021"}
{"text":"data_2022"}
{"text":"data_2023"}
{"text":"data_2024"}
{"text":"data_2025"}
{"text":"data_2026"}
{"text":"data_2027"}
{"text":"data_2028"}
{"text":"data_2029"}
{"text":"data_2030"}
{"text":"data_2031"}
{"text":"data_2032"}
{"text":"data_2033"}
{"text":"data_2034"}
{"text":"data_2035"}
{"text":"data_2036"}
{"text":"data_2037"}
{"text":"data_2038"}
{"text":"data_2039"}
{"text":"data_2040"}
{"text":"data_2041"}
{"text":"data_2042"}
{"text":"data_2043"}
{"text":"data_2044"}
{"text":"data_2045"}
{"text":"data_2046"}
{"text":"data_2047"}
{"text":"data_2048"}
{"text":"data_2049"}
{"text":"data_2050"}
{"text":"data_2051"}
{"text":"data_2052"}
{"text":"data_2053"}
{"text":"data_2054"}
{"text":"data_2055"}
{"text":"data_2056"}
{"text":"data_2057"}
{"text":"data_2058"}
{"text":"data_2059"}
{"text":"data_2060"}
{"text":"data_2061"}
{"text":"data_2062"}
{"text":"data_2063"}
{"text":"data_2064"}
{"text":"data_2065"}
{"text":"data_2066"}
{"text":"data_2067"}
{"text":"data_2068"}
{"text":"data_2069"}
{"text":"data_2070"}
{"text":"data_2071"}
{"text":"data_2072"}
{"text":"data_2073"}
{"text":"data_2074"}
{"text":"data_2075"}
{"text":"data_2076"}
{"text":"data_2077"}
{"text":"data_2078"}
{"text":"data_2079"}
{"text":"data_2080"}
{"text":"data_2081"}
{"text":"data_2082"}
{"text":"data_2083"}
{"text":"data_2084"}
{"text":"data_2085"}
{"text":"data_2086"}
{"text":"data_2087"}
{"text":"data_2088"}
{"text":"data_2089"}
{"text":"data_2090"}
{"text":"data_2091"}
{"text":"data_2092"}
{"text":"data_2093"}
{"text":"data_2094"}
{"text":"data_2095"}
{"text":"data_2096"}
{"text":"data_2097"}
{"text":"data_2098"}
{"text":"data_2099"}
{"text":"data_2100"}
{"text":"data_2101"}
{"text":"data_2102"}
{"text":"data_2103"}
{"text":"data_2104"}
{"text":"data_2105"}
{"text":"data_2106"}
{"text":"data_2107"}
{"text":"data_2108"}
{"text":"data_2109"}
{"text":"data_2110"}
{"text":"data_2111"}
{"text":"data_2112"}
{"text":"data_2113"}
{"text":"data_2114"}
{"text":"data_2115"}
{"text":"data_2116"}
{"text":"data_2117"}
{"text":"data_2118"}
{"text":"data_2119"}
{"text":"data_2120"}
{"text":"data_2121"}
{"text":"data_2122"}
{"text":"data_2123"}
{"text":"data_2124"}
{"text":"data_2125"}
{"text":"data_2126"}
{"text":"data_2127"}
{"text":"data_2128"}
{"text":"data_2129"}
{"text":"data_2130"}
{"text":"data_2131"}
{"text":"data_2132"}
{"text":"data_2133"}
{"text":"data_2134"}
{"text":"data_2135"}
{"text":"data_2136"}
{"text":"data_2137"}
{"text":"data_2138"}
{"text":"data_2139"}
{"text":"data_2140"}
{"text":"data_2141"}
{"text":"data_2142"}
{"text":"data_2143"}
{"text":"data_2144"}
{"text":"data_2145"}
{"text":"data_2146"}
{"text":"data_2147"}
{"text":"data_2148"}
{"text":"data_2149"}
{"text":"data_2150"}
{"text":"data_2151"}
{"text":"data_2152"}
{"text":"data_2153"}
{"text":"data_2154"}
{"text":"data_2155"}
{"text":"data_2156"}
{"text":"data_2157"}
{"text":"data_2158"}
{"text":"data_2159"}
{"text":"data_2160"}
{"text":"data_2161"}
{"text":"data_2162"}
{"text":"data_2163"}
{"text":"data_2164"}
{"text":"data_2165"}
{"text":"data_2166"}
{"text":"data_2167"}
{"text":"data_2168"}
{"text":"data_2169"}
{"text":"data_2170"}
{"text":"data_2171"}
{"text":"data_2172"}
{"text":"data_2173"}
{"text":"data_2174"}
{"text":"data_2175"}
{"text":"data_2176"}
{"text":"data_2177"}
{"text":"data_2178"}
{"text":"data_2179"}
{"text":"data_2180"}
{"text":"data_2181"}
{"text":"data_2182"}
{"text":"data_2183"}
{"text":"data_2184"}
{"text":"data_2185"}
{"text":"data_2186"}
{"text":"data_2187"}
{"text":"data_2188"}
{"text":"data_2189"}
{"text":"data_2190"}
{"text":"data_2191"}
{"text":"data_2192"}
{"text":"data_2193"}
{"text":"data_2194"}
{"text":"data_2195"}
{"text":"data_2196"}
{"text":"data_2197"}
{"text":"data_2198"}
{"text":"data_2199"}
{"text":"data_2200"}
{"text":"data_2201"}
{"text":"data_2202"}
{"text":"data_2203"}
{"text":"data_2204"}
{"text":"data_2205"}
{"text":"data_2206"}
{"text":"data_2207"}
{"text":"data_2208"}
{"text":"data_2209"}
{"text":"data_2210"}
{"text":"data_2211"}
{"text":"data_2212"}
{"text":"data_2213"}
{"text":"data_2214"}
{"text":"data_2215"}
{"text":"data_2216"}
{"text":"data_2217"}
{"text":"data_2218"}
{"text":"data_2219"}
{"text":"data_2220"}
{"text":"data_2221"}
{"text":"data_2222"}
{"text":"data_2223"}
{"text":"data_2224"}
{"text":"data_2225"}
{"text":"data_2226"}
{"text":"data_2227"}
{"text":"data_2228"}
{"text":"data_2229"}
{"text":"data_2230"}
{"text":"data_2231"}
{"text":"data_2232"}
{"text":"data_2233"}
{"text":"data_2234"}
{"text":"data_2235"}
{"text":"data_2236"}
{"text":"data_2237"}
{"text":"data_2238"}
{"text":"data_2239"}
{"text":"data_2240"}
{"text":"data_2241"}
{"text":"data_2242"}
{"text":"data_2243"}
{"text":"data_2244"}
{"text":"data_2245"}
{"text":"data_2246"}
{"text":"data_2247"}
{"text":"data_2248"}
{"text":"data_2249"}
{"text":"data_2250"}
{"text":"data_2251"}
{"text":"data_2252"}
{"text":"data_2253"}
{"text":"data_2254"}
{"text":"data_2255"}
{"text":"data_2256"}
{"text":"data_2257"}
{"text":"data_2258"}
{"text":"data_2259"}
{"text":"data_2260"}
{"text":"data_2261"}
{"text":"data_2262"}
{"text":"data_2263"}
{"text":"data_2264"}
{"text":"data_2265"}
{"text":"data_2266"}
{"text":"data_2267"}
{"text":"data_2268"}
{"text":"data_2269"}
{"text":"data_2270"}
{"text":"data_2271"}
{"text":"data_2272"}
{"text":"data_2273"}
{"text":"data_2274"}
{"text":"data_2275"}
{"text":"data_2276"}
{"text":"data_2277"}
{"text":"data_2278"}
{"text":"data_2279"}
{"text":"data_2280"}
{"text":"data_2281"}
{"text":"data_2282"}
{"text":"data_2283"}
{"text":"data_2284"}
{"text":"data_2285"}
{"text":"data_2286"}
{"text":"data_2287"}
{"text":"data_2288"}
{"text":"data_2289"}
{"text":"data_2290"}
{"text":"data_2291"}
{"text":"data_2292"}
{"text":"data_2293"}
{"text":"data_2294"}
{"text":"data_2295"}
{"text":"data_2296"}
{"text":"data_2297"}
{"text":"data_2298"}
{"text":"data_2299"}
{"text":"data_2300"}
{"text":"data_2301"}
{"text":"data_2302"}
{"text":"data_2303"}
{"text":"data_2304"}
{"text":"data_2305"}
{"text":"data_2306"}
{"text":"data_2307"}
{"text":"data_2308"}
{"text":"data_2309"}
{"text":"data_2310"}
{"text":"data_2311"}
{"text":"data_2312"}
{"text":"data_2313"}
{"text":"data_2314"}
{"text":"data_2315"}
{"text":"data_2316"}
{"text":"data_2317"}
{"text":"data_2318"}
{"text":"data_2319"}
{"text":"data_2320"}
{"text":"data_2321"}
{"text":"data_2322"}
{"text":"data_2323"}
{"text":"data_2324"}
{"text":"data_2325"}
{"text":"data_2326"}
{"text":"data_2327"}
{"text":"data_2328"}
{"text":"data_2329"}
{"text":"data_2330"}
{"text":"data_2331"}
{"text":"data_2332"}
{"text":"data_2333"}
{"text":"data_2334"}
{"text":"data_2335"}
{"text":"data_2336"}
{"text":"data_2337"}
{"text":"data_2338"}
{"text":"data_2339"}
{"text":"data_2340"}
{"text":"data_2341"}
{"text":"data_2342"}
{"text":"data_2343"}
{"text":"data_2344"}
{"text":"data_2345"}
{"text":"data_2346"}
{"text":"data_2347"}
{"text":"data_2348"}
{"text":"data_2349"}
{"text":"data_2350"}
{"text":"data_2351"}
{"text":"data_2352"}
{"text":"data_2353"}
{"text":"data_2354"}
{"text":"data_2355"}
{"text":"data_2356"}
{"text":"data_2357"}
{"text":"data_2358"}
{"text":"data_2359"}
{"text":"data_2360"}
{"text":"data_2361"}
{"text":"data_2362"}
{"text":"data_2363"}
{"text":"data_2364"}
{"text":"data_2365"}
{"text":"data_2366"}
{"text":"data_2367"}
{"text":"data_2368"}
{"text":"data_2369"}
{"text":"data_2370"}
{"text":"data_2371"}
{"text":"data_2372"}
{"text":"data_2373"}
{"text":"data_2374"}
{"text":"data_2375"}
{"text":"data_2376"}
{"text":"data_2377"}
{"text":"data_2378"}
{"text":"data_2379"}
{"text":"data_2380"}
{"text":"data_2381"}
{"text":"data_2382"}
{"text":"data_2383"}
{"text":"data_2384"}
{"text":"data_2385"}
{"text":"data_2386"}
{"text":"data_2387"}
{"text":"data_2388"}
{"text":"data_2389"}
{"text":"data_2390"}
{"text":"data_2391"}
{"text":"data_2392"}
{"text":"data_2393"}
{"text":"data_2394"}
{"text":"data_2395"}
{"text":"data_2396"}
{"text":"data_2397"}
{"text":"data_2398"}
{"text":"data_2399"}
{"text":"data_2400"}
{"text":"data_2401"}
{"text":"data_2402"}
{"text":"data_2403"}
{"text":"data_2404"}
{"text":"data_2405"}
{"text":"data_2406"}
{"text":"data_2407"}
{"text":"data_2408"}
{"text":"data_2409"}
{"text":"data_2410"}
{"text":"data_2411"}
{"text":"data_2412"}
{"text":"data_2413"}
{"text":"data_2414"}
{"text":"data_2415"}
{"text":"data_2416"}
{"text":"data_2417"}
{"text":"data_2418"}
{"text":"data_2419"}
{"text":"data_2420"}
{"text":"data_2421"}
{"text":"data_2422"}
{"text":"data_2423"}
{"text":"data_2424"}
{"text":"data_2425"}
{"text":"data_2426"}
{"text":"data_2427"}
{"text":"data_2428"}
{"text":"data_2429"}
{"text":"data_2430"}
{"text":"data_2431"}
{"text":"data_2432"}
{"text":"data_2433"}
{"text":"data_2434"}
{"text":"data_2435"}
{"text":"data_2436"}
{"text":"data_2437"}
{"text":"data_2438"}
{"text":"data_2439"}
{"text":"data_2440"}
{"text":"data_2441"}
{"text":"data_2442"}
{"text":"data_2443"}
{"text":"data_2444"}
{"text":"data_2445"}
{"text":"data_2446"}
{"text":"data_2447"}
{"text":"data_2448"}
{"text":"data_2449"}
{"text":"data_2450"}
{"text":"data_2451"}
{"text":"data_2452"}
{"text":"data_2453"}
{"text":"data_2454"}
{"text":"data_2455"}
{"text":"data_2456"}
{"text":"data_2457"}
{"text":"data_2458"}
{"text":"data_2459"}
{"text":"data_2460"}
{"text":"data_2461"}
{"text":"data_2462"}
{"text":"data_2463"}
{"text":"data_2464"}
{"text":"data_2465"}
{"text":"data_2466"}
{"text":"data_2467"}
{"text":"data_2468"}
{"text":"data_2469"}
{"text":"data_2470"}
{"text":"data_2471"}
{"text":"data_2472"}
{"text":"data_2473"}
{"text":"data_2474"}
{"text":"data_2475"}
{"text":"data_2476"}
{"text":"data_2477"}
{"text":"data_2478"}
{"text":"data_2479"}
{"text":"data_2480"}
{"text":"data_2481"}
{"text":"data_2482"}
{"text":"data_2483"}
{"text":"data_2484"}
{"text":"data_2485"}
{"text":"data_2486"}
{"text":"data_2487"}
{"text":"data_2488"}
{"text":"data_2489"}
{"text":"data_2490"}
{"text":"data_2491"}
{"text":"data_2492"}
{"text":"data_2493"}
{"text":"data_2494"}
{"text":"data_2495"}
{"text":"data_2496"}
{"text":"data_2497"}
{"text":"data_2498"}
{"text":"data_2499"}
{"text":"data_2500"}
{"text":"data_2501"}
{"text":"data_2502"}
{"text":"data_2503"}
{"text":"data_2504"}
{"text":"data_2505"}
{"text":"data_2506"}
{"text":"data_2507"}
{"text":"data_2508"}
{"text":"data_2509"}
{"text":"data_2510"}
{"text":"data_2511"}
{"text":"data_2512"}
{"text":"data_2513"}
{"text":"data_2514"}
{"text":"data_2515"}
{"text":"data_2516"}
{"text":"data_2517"}
{"text":"data_2518"}
{"text":"data_2519"}
{"text":"data_2520"}
{"text":"data_2521"}
{"text":"data_2522"}
{"text":"data_2523"}
{"text":"data_2524"}
{"text":"data_2525"}
{"text":"data_2526"}
{"text":"data_2527"}
{"text":"data_2528"}
{"text":"data_2529"}
{"text":"data_2530"}
{"text":"data_2531"}
{"text":"data_2532"}
{"text":"data_2533"}
{"text":"data_2534"}
{"text":"data_2535"}
{"text":"data_2536"}
{"text":"data_2537"}
{"text":"data_2538"}
{"text":"data_2539"}
{"text":"data_2540"}
{"text":"data_2541"}
{"text":"data_2542"}
{"text":"data_2543"}
{"text":"data_2544"}
{"text":"data_2545"}
{"text":"data_2546"}
{"text":"data_2547"}
{"text":"data_2548"}
{"text":"data_2549"}
{"text":"data_2550"}
{"text":"data_2551"}
{"text":"data_2552"}
{"text":"data_2553"}
{"text":"data_2554"}
{"text":"data_2555"}
{"text":"data_2556"}
{"text":"data_2557"}
{"text":"data_2558"}
{"text":"data_2559"}
{"text":"data_2560"}
{"text":"data_2561"}
{"text":"data_2562"}
{"text":"data_2563"}
{"text":"data_2564"}
{"text":"data_2565"}
{"text":"data_2566"}
{"text":"data_2567"}
{"text":"data_2568"}
{"text":"data_2569"}
{"text":"data_2570"}
{"text":"data_2571"}
{"text":"data_2572"}
{"text":"data_2573"}
{"text":"data_2574"}
{"text":"data_2575"}
{"text":"data_2576"}
{"text":"data_2577"}
{"text":"data_2578"}
{"text":"data_2579"}
{"text":"data_2580"}
{"text":"data_2581"}
{"text":"data_2582"}
{"text":"data_2583"}
{"text":"data_2584"}
{"text":"data_2585"}
{"text":"data_2586"}
{"text":"data_2587"}
{"text":"data_2588"}
{"text":"data_2589"}
{"text":"data_2590"}
{"text":"data_2591"}
{"text":"data_2592"}
{"text":"data_2593"}
{"text":"data_2594"}
{"text":"data_2595"}
{"text":"data_2596"}
{"text":"data_2597"}
{"text":"data_2598"}
{"text":"data_2599"}
{"text":"data_2600"}
{"text":"data_2601"}
{"text":"data_2602"}
{"text":"data_2603"}
{"text":"data_2604"}
{"text":"data_2605"}
{"text":"data_2606"}
{"text":"data_2607"}
{"text":"data_2608"}
{"text":"data_2609"}
{"text":"data_2610"}
{"text":"data_2611"}
{"text":"data_2612"}
{"text":"data_2613"}
{"text":"data_2614"}
{"text":"data_2615"}
{"text":"data_2616"}
{"text":"data_2617"}
{"text":"data_2618"}
{"text":"data_2619"}
{"text":"data_2620"}
{"text":"data_2621"}
{"text":"data_2622"}
{"text":"data_2623"}
{"text":"data_2624"}
{"text":"data_2625"}
{"text":"data_2626"}
{"text":"data_2627"}
{"text":"data_2628"}
{"text":"data_2629"}
{"text":"data_2630"}
{"text":"data_2631"}
{"text":"data_2632"}
{"text":"data_2633"}
{"text":"data_2634"}
{"text":"data_2635"}
{"text":"data_2636"}
{"text":"data_2637"}
{"text":"data_2638"}
{"text":"data_2639"}
{"text":"data_2640"}
{"text":"data_2641"}
{"text":"data_2642"}
{"text":"data_2643"}
{"text":"data_2644"}
{"text":"data_2645"}
{"text":"data_2646"}
{"text":"data_2647"}
{"text":"data_2648"}
{"text":"data_2649"}
{"text":"data_2650"}
{"text":"data_2651"}
{"text":"data_2652"}
{"text":"data_2653"}
{"text":"data_2654"}
{"text":"data_2655"}
{"text":"data_2656"}
{"text":"data_2657"}
{"text":"data_2658"}
{"text":"data_2659"}
{"text":"data_2660"}
{"text":"data_2661"}
{"text":"data_2662"}
{"text":"data_2663"}
{"text":"data_2664"}
{"text":"data_2665"}
{"text":"data_2666"}
{"text":"data_2667"}
{"text":"data_2668"}
{"text":"data_2669"}
{"text":"data_2670"}
{"text":"data_2671"}
{"text":"data_2672"}
{"text":"data_2673"}
{"text":"data_2674"}
{"text":"data_2675"}
{"text":"data_2676"}
{"text":"data_2677"}
{"text":"data_2678"}
{"text":"data_2679"}
{"text":"data_2680"}
{"text":"data_2681"}
{"text":"data_2682"}
{"text":"data_2683"}
{"text":"data_2684"}
{"text":"data_2685"}
{"text":"data_2686"}
{"text":"data_2687"}
{"text":"data_2688"}
{"text":"data_2689"}
{"text":"data_2690"}
{"text":"data_2691"}
{"text":"data_2692"}
{"text":"data_2693"}
{"text":"data_2694"}
{"text":"data_2695"}
{"text":"data_2696"}
{"text":"data_2697"}
{"text":"data_2698"}
{"text":"data_2699"}
{"text":"data_2700"}
{"text":"data_2701"}
{"text":"data_2702"}
{"text":"data_2703"}
{"text":"data_2704"}
{"text":"data_2705"}
{"text":"data_2706"}
{"text":"data_2707"}
{"text":"data_2708"}
{"text":"data_2709"}
{"text":"data_2710"}
{"text":"data_2711"}
{"text":"data_2712"}
{"text":"data_2713"}
{"text":"data_2714"}
{"text":"data_2715"}
{"text":"data_2716"}
{"text":"data_2717"}
{"text":"data_2718"}
{"text":"data_2719"}
{"text":"data_2720"}
{"text":"data_2721"}
{"text":"data_2722"}
{"text":"data_2723"}
{"text":"data_2724"}
{"text":"data_2725"}
{"text":"data_2726"}
{"text":"data_2727"}
{"text":"data_2728"}
{"text":"data_2729"}
{"text":"data_2730"}
{"text":"data_2731"}
{"text":"data_2732"}
{"text":"data_2733"}
{"text":"data_2734"}
{"text":"data_2735"}
{"text":"data_2736"}
{"text":"data_2737"}
{"text":"data_2738"}
{"text":"data_2739"}
{"text":"data_2740"}
{"text":"data_2741"}
{"text":"data_2742"}
{"text":"data_2743"}
{"text":"data_2744"}
{"text":"data_2745"}
{"text":"data_2746"}
{"text":"data_2747"}
{"text":"data_2748"}
{"text":"data_2749"}
{"text":"data_2750"}
{"text":"data_2751"}
{"text":"data_2752"}
{"text":"data_2753"}
{"text":"data_2754"}
{"text":"data_2755"}
{"text":"data_2756"}
{"text":"data_2757"}
{"text":"data_2758"}
{"text":"data_2759"}
{"text":"data_2760"}
{"text":"data_2761"}
{"text":"data_2762"}
{"text":"data_2763"}
{"text":"data_2764"}
{"text":"data_2765"}
{"text":"data_2766"}
{"text":"data_2767"}
{"text":"data_2768"}
{"text":"data_2769"}
{"text":"data_2770"}
{"text":"data_2771"}
{"text":"data_2772"}
{"text":"data_2773"}
{"text":"data_2774"}
{"text":"data_2775"}
{"text":"data_2776"}
{"text":"data_2777"}
{"text":"data_2778"}
{"text":"data_2779"}
{"text":"data_2780"}
{"text":"data_2781"}
{"text":"data_2782"}
{"text":"data_2783"}
{"text":"data_2784"}
{"text":"data_2785"}
{"text":"data_2786"}
{"text":"data_2787"}
{"text":"data_2788"}
{"text":"data_2789"}
{"text":"data_2790"}
{"text":"data_2791"}
{"text":"data_2792"}
{"text":"data_2793"}
{"text":"data_2794"}
{"text":"data_2795"}
{"text":"data_2796"}
{"text":"data_2797"}
{"text":"data_2798"}
{"text":"data_2799"}
{"text":"data_2800"}
{"text":"data_2801"}
{"text":"data_2802"}
{"text":"data_2803"}
{"text":"data_2804"}
{"text":"data_2805"}
{"text":"data_2806"}
{"text":"data_2807"}
{"text":"data_2808"}
{"text":"data_2809"}
{"text":"data_2810"}
{"text":"data_2811"}
{"text":"data_2812"}
{"text":"data_2813"}
{"text":"data_2814"}
{"text":"data_2815"}
{"text":"data_2816"}
{"text":"data_2817"}
{"text":"data_2818"}
{"text":"data_2819"}
{"text":"data_2820"}
{"text":"data_2821"}
{"text":"data_2822"}
{"text":"data_2823"}
{"text":"data_2824"}
{"text":"data_2825"}
{"text":"data_2826"}
{"text":"data_2827"}
{"text":"data_2828"}
{"text":"data_2829"}
{"text":"data_2830"}
{"text":"data_2831"}
{"text":"data_2832"}
{"text":"data_2833"}
{"text":"data_2834"}
{"text":"data_2835"}
{"text":"data_2836"}
{"text":"data_2837"}
{"text":"data_2838"}
{"text":"data_2839"}
{"text":"data_2840"}
{"text":"data_2841"}
{"text":"data_2842"}
{"text":"data_2843"}
{"text":"data_2844"}
{"text":"data_2845"}
{"text":"data_2846"}
{"text":"data_2847"}
{"text":"data_2848"}
{"text":"data_2849"}
{"text":"data_2850"}
{"text":"data_2851"}
{"text":"data_2852"}
{"text":"data_2853"}
{"text":"data_2854"}
{"text":"data_2855"}
{"text":"data_2856"}
{"text":"data_2857"}
{"text":"data_2858"}
{"text":"data_2859"}
{"text":"data_2860"}
{"text":"data_2861"}
{"text":"data_2862"}
{"text":"data_2863"}
{"text":"data_2864"}
{"text":"data_2865"}
{"text":"data_2866"}
{"text":"data_2867"}
{"text":"data_2868"}
{"text":"data_2869"}
{"text":"data_2870"}
{"text":"data_2871"}
{"text":"data_2872"}
{"text":"data_2873"}
{"text":"data_2874"}
{"text":"data_2875"}
{"text":"data_2876"}
{"text":"data_2877"}
{"text":"data_2878"}
{"text":"data_2879"}
{"text":"data_2880"}
{"text":"data_2881"}
{"text":"data_2882"}
{"text":"data_2883"}
{"text":"data_2884"}
{"text":"data_2885"}
{"text":"data_2886"}
{"text":"data_2887"}
{"text":"data_2888"}
{"text":"data_2889"}
{"text":"data_2890"}
{"text":"data_2891"}
{"text":"data_2892"}
{"text":"data_2893"}
{"text":"data_2894"}
{"text":"data_2895"}
{"text":"data_2896"}
{"text":"data_2897"}
{"text":"data_2898"}
{"text":"data_2899"}
{"text":"data_2900"}
{"text":"data_2901"}
{"text":"data_2902"}
{"text":"data_2903"}
{"text":"data_2904"}
{"text":"data_2905"}
{"text":"data_2906"}
{"text":"data_2907"}
{"text":"data_2908"}
{"text":"data_2909"}
{"text":"data_2910"}
{"text":"data_2911"}
{"text":"data_2912"}
{"text":"data_2913"}
{"text":"data_2914"}
{"text":"data_2915"}
{"text":"data_2916"}
{"text":"data_2917"}
{"text":"data_2918"}
{"text":"data_2919"}
{"text":"data_2920"}
{"text":"data_2921"}
{"text":"data_2922"}
{"text":"data_2923"}
{"text":"data_2924"}
{"text":"data_2925"}
{"text":"data_2926"}
{"text":"data_2927"}
{"text":"data_2928"}
{"text":"data_2929"}
{"text":"data_2930"}
{"text":"data_2931"}
{"text":"data_2932"}
{"text":"data_2933"}
{"text":"data_2934"}
{"text":"data_2935"}
{"text":"data_2936"}
{"text":"data_2937"}
{"text":"data_2938"}
{"text":"data_2939"}
{"text":"data_2940"}
{"text":"data_2941"}
{"text":"data_2942"}
{"text":"data_2943"}
{"text":"data_2944"}
{"text":"data_2945"}
{"text":"data_2946"}
{"text":"data_2947"}
{"text":"data_2948"}
{"text":"data_2949"}
{"text":"data_2950"}
{"text":"data_2951"}
{"text":"data_2952"}
{"text":"data_2953"}
{"text":"data_2954"}
{"text":"data_2955"}
{"text":"data_2956"}
{"text":"data_2957"}
{"text":"data_2958"}
{"text":"data_2959"}
{"text":"data_2960"}
{"text":"data_2961"}
{"text":"data_2962"}
{"text":"data_2963"}
{"text":"data_2964"}
{"text":"data_2965"}
{"text":"data_2966"}
{"text":"data_2967"}
{"text":"data_2968"}
{"text":"data_2969"}
{"text":"data_2970"}
{"text":"data_2971"}
{"text":"data_2972"}
{"text":"data_2973"}
{"text":"data_2974"}
{"text":"data_2975"}
{"text":"data_2976"}
{"text":"data_2977"}
{"text":"data_2978"}
{"text":"data_2979"}
{"text":"data_2980"}
{"text":"data_2981"}
{"text":"data_2982"}
{"text":"data_2983"}
{"text":"data_2984"}
{"text":"data_2985"}
{"text":"data_2986"}
{"text":"data_2987"}
{"text":"data_2988"}
{"text":"data_2989"}
{"text":"data_2990"}
{"text":"data_2991"}
{"text":"data_2992"}
{"text":"data_2993"}
{"text":"data_2994"}
{"text":"data_2995"}
{"text":"data_2996"}
{"text":"data_2997"}
{"text":"data_2998"}
{"text":"data_2999"}
{"text":"data_3000"}
{"text":"data_3001"}
{"text":"data_3002"}
{"text":"data_3003"}
{"text":"data_3004"}
{"text":"data_3005"}
{"text":"data_3006"}
{"text":"data_3007"}
{"text":"data_3008"}
{"text":"data_3009"}
{"text":"data_3010"}
{"text":"data_3011"}
{"text":"data_3012"}
{"text":"data_3013"}
{"text":"data_3014"}
{"text":"data_3015"}
{"text":"data_3016"}
{"text":"data_3017"}
{"text":"data_3018"}
{"text":"data_3019"}
{"text":"data_3020"}
{"text":"data_3021"}
{"text":"data_3022"}
{"text":"data_3023"}
{"text":"data_3024"}
{"text":"data_3025"}
{"text":"data_3026"}
{"text":"data_3027"}
{"text":"data_3028"}
{"text":"data_3029"}
{"text":"data_3030"}
{"text":"data_3031"}
{"text":"data_3032"}
{"text":"data_3033"}
{"text":"data_3034"}
{"text":"data_3035"}
{"text":"data_3036"}
{"text":"data_3037"}
{"text":"data_3038"}
{"text":"data_3039"}
{"text":"data_3040"}
{"text":"data_3041"}
{"text":"data_3042"}
{"text":"data_3043"}
{"text":"data_3044"}
{"text":"data_3045"}
{"text":"data_3046"}
{"text":"data_3047"}
{"text":"data_3048"}
{"text":"data_3049"}
{"text":"data_3050"}
{"text":"data_3051"}
{"text":"data_3052"}
{"text":"data_3053"}
{"text":"data_3054"}
{"text":"data_3055"}
{"text":"data_3056"}
{"text":"data_3057"}
{"text":"data_3058"}
{"text":"data_3059"}
{"text":"data_3060"}
{"text":"data_3061"}
{"text":"data_3062"}
{"text":"data_3063"}
{"text":"data_3064"}
{"text":"data_3065"}
{"text":"data_3066"}
{"text":"data_3067"}
{"text":"data_3068"}
{"text":"data_3069"}
{"text":"data_3070"}
{"text":"data_3071"}
{"text":"data_3072"}
{"text":"data_3073"}
{"text":"data_3074"}
{"text":"data_3075"}
{"text":"data_3076"}
{"text":"data_3077"}
{"text":"data_3078"}
{"text":"data_3079"}
{"text":"data_3080"}
{"text":"data_3081"}
{"text":"data_3082"}
{"text":"data_3083"}
{"text":"data_3084"}
{"text":"data_3085"}
{"text":"data_3086"}
{"text":"data_3087"}
{"text":"data_3088"}
{"text":"data_3089"}
{"text":"data_3090"}
{"text":"data_3091"}
{"text":"data_3092"}
{"text":"data_3093"}
{"text":"data_3094"}
{"text":"data_3095"}
{"text":"data_3096"}
{"text":"data_3097"}
{"text":"data_3098"}
{"text":"data_3099"}
{"text":"data_3100"}
{"text":"data_3101"}
{"text":"data_3102"}
{"text":"data_3103"}
{"text":"data_3104"}
{"text":"data_3105"}
{"text":"data_3106"}
{"text":"data_3107"}
{"text":"data_3108"}
{"text":"data_3109"}
{"text":"data_3110"}
{"text":"data_3111"}
{"text":"data_3112"}
{"text":"data_3113"}
{"text":"data_3114"}
{"text":"data_3115"}
{"text":"data_3116"}
{"text":"data_3117"}
{"text":"data_3118"}
{"text":"data_3119"}
{"text":"data_3120"}
{"text":"data_3121"}
{"text":"data_3122"}
{"text":"data_3123"}
{"text":"data_3124"}
{"text":"data_3125"}
{"text":"data_3126"}
{"text":"data_3127"}
{"text":"data_3128"}
{"text":"data_3129"}
{"text":"data_3130"}
{"text":"data_3131"}
{"text":"data_3132"}
{"text":"data_3133"}
{"text":"data_3134"}
{"text":"data_3135"}
{"text":"data_3136"}
{"text":"data_3137"}
{"text":"data_3138"}
{"text":"data_3139"}
{"text":"data_3140"}
{"text":"data_3141"}
{"text":"data_3142"}
{"text":"data_3143"}
{"text":"data_3144"}
{"text":"data_3145"}
{"text":"data_3146"}
{"text":"data_3147"}
{"text":"data_3148"}
{"text":"data_3149"}
{"text":"data_3150"}
{"text":"data_3151"}
{"text":"data_3152"}
{"text":"data_3153"}
{"text":"data_3154"}
{"text":"data_3155"}
{"text":"data_3156"}
{"text":"data_3157"}
{"text":"data_3158"}
{"text":"data_3159"}
{"text":"data_3160"}
{"text":"data_3161"}
{"text":"data_3162"}
{"text":"data_3163"}
{"text":"data_3164"}
{"text":"data_3165"}
{"text":"data_3166"}
{"text":"data_3167"}
{"text":"data_3168"}
{"text":"data_3169"}
{"text":"data_3170"}
{"text":"data_3171"}
{"text":"data_3172"}
{"text":"data_3173"}
{"text":"data_3174"}
{"text":"data_3175"}
{"text":"data_3176"}
{"text":"data_3177"}
{"text":"data_3178"}
{"text":"data_3179"}
{"text":"data_3180"}
{"text":"data_3181"}
{"text":"data_3182"}
{"text":"data_3183"}
{"text":"data_3184"}
{"text":"data_3185"}
{"text":"data_3186"}
{"text":"data_3187"}
{"text":"data_3188"}
{"text":"data_3189"}
{"text":"data_3190"}
{"text":"data_3191"}
{"text":"data_3192"}
{"text":"data_3193"}
{"text":"data_3194"}
{"text":"data_3195"}
{"text":"data_3196"}
{"text":"data_3197"}
{"text":"data_3198"}
{"text":"data_3199"}
{"text":"data_3200"}
{"text":"data_3201"}
{"text":"data_3202"}
{"text":"data_3203"}
{"text":"data_3204"}
{"text":"data_3205"}
{"text":"data_3206"}
{"text":"data_3207"}
{"text":"data_3208"}
{"text":"data_3209"}
{"text":"data_3210"}
{"text":"data_3211"}
{"text":"data_3212"}
{"text":"data_3213"}
{"text":"data_3214"}
{"text":"data_3215"}
{"text":"data_3216"}
{"text":"data_3217"}
{"text":"data_3218"}
{"text":"data_3219"}
{"text":"data_3220"}
{"text":"data_3221"}
{"text":"data_3222"}
{"text":"data_3223"}
{"text":"data_3224"}
{"text":"data_3225"}
{"text":"data_3226"}
{"text":"data_3227"}
{"text":"data_3228"}
{"text":"data_3229"}
{"text":"data_3230"}
{"text":"data_3231"}
{"text":"data_3232"}
{"text":"data_3233"}
{"text":"data_3234"}
{"text":"data_3235"}
{"text":"data_3236"}
{"text":"data_3237"}
{"text":"data_3238"}
{"text":"data_3239"}
{"text":"data_3240"}
{"text":"data_3241"}
{"text":"data_3242"}
{"text":"data_3243"}
{"text":"data_3244"}
{"text":"data_3245"}
{"text":"data_3246"}
{"text":"data_3247"}
{"text":"data_3248"}
{"text":"data_3249"}
{"text":"data_3250"}
{"text":"data_3251"}
{"text":"data_3252"}
{"text":"data_3253"}
{"text":"data_3254"}
{"text":"data_3255"}
{"text":"data_3256"}
{"text":"data_3257"}
{"text":"data_3258"}
{"text":"data_3259"}
{"text":"data_3260"}
{"text":"data_3261"}
{"text":"data_3262"}
{"text":"data_3263"}
{"text":"data_3264"}
{"text":"data_3265"}
{"text":"data_3266"}
{"text":"data_3267"}
{"text":"data_3268"}
{"text":"data_3269"}
{"text":"data_3270"}
{"text":"data_3271"}
{"text":"data_3272"}
{"text":"data_3273"}
{"text":"data_3274"}
{"text":"data_3275"}
{"text":"data_3276"}
{"text":"data_3277"}
{"text":"data_3278"}
{"text":"data_3279"}
{"text":"data_3280"}
{"text":"data_3281"}
{"text":"data_3282"}
{"text":"data_3283"}
{"text":"data_3284"}
{"text":"data_3285"}
{"text":"data_3286"}
{"text":"data_3287"}
{"text":"data_3288"}
{"text":"data_3289"}
{"text":"data_3290"}
{"text":"data_3291"}
{"text":"data_3292"}
{"text":"data_3293"}
{"text":"data_3294"}
{"text":"data_3295"}
{"text":"data_3296"}
{"text":"data_3297"}
{"text":"data_3298"}
{"text":"data_3299"}
{"text":"data_3300"}
{"text":"data_3301"}
{"text":"data_3302"}
{"text":"data_3303"}
{"text":"data_3304"}
{"text":"data_3305"}
{"text":"data_3306"}
{"text":"data_3307"}
{"text":"data_3308"}
{"text":"data_3309"}
{"text":"data_3310"}
{"text":"data_3311"}
{"text":"data_3312"}
{"text":"data_3313"}
{"text":"data_3314"}
{"text":"data_3315"}
{"text":"data_3316"}
{"text":"data_3317"}
{"text":"data_3318"}
{"text":"data_3319"}
{"text":"data_3320"}
{"text":"data_3321"}
{"text":"data_3322"}
{"text":"data_3323"}
{"text":"data_3324"}
{"text":"data_3325"}
{"text":"data_3326"}
{"text":"data_3327"}
{"text":"data_3328"}
{"text":"data_3329"}
{"text":"data_3330"}
{"text":"data_3331"}
{"text":"data_3332"}
{"text":"data_3333"}
{"text":"data_3334"}
{"text":"data_3335"}
{"text":"data_3336"}
{"text":"data_3337"}
{"text":"data_3338"}
{"text":"data_3339"}
{"text":"data_3340"}
{"text":"data_3341"}
{"text":"data_3342"}
{"text":"data_3343"}
{"text":"data_3344"}
{"text":"data_3345"}
{"text":"data_3346"}
{"text":"data_3347"}
{"text":"data_3348"}
{"text":"data_3349"}
{"text":"data_3350"}
{"text":"data_3351"}
{"text":"data_3352"}
{"text":"data_3353"}
{"text":"data_3354"}
{"text":"data_3355"}
{"text":"data_3356"}
{"text":"data_3357"}
{"text":"data_3358"}
{"text":"data_3359"}
{"text":"data_3360"}
{"text":"data_3361"}
{"text":"data_3362"}
{"text":"data_3363"}
{"text":"data_3364"}
{"text":"data_3365"}
{"text":"data_3366"}
{"text":"data_3367"}
{"text":"data_3368"}
{"text":"data_3369"}
{"text":"data_3370"}
{"text":"data_3371"}
{"text":"data_3372"}
{"text":"data_3373"}
{"text":"data_3374"}
{"text":"data_3375"}
{"text":"data_3376"}
{"text":"data_3377"}
{"text":"data_3378"}
{"text":"data_3379"}
{"text":"data_3380"}
{"text":"data_3381"}
{"text":"data_3382"}
{"text":"data_3383"}
{"text":"data_3384"}
{"text":"data_3385"}
{"text":"data_3386"}
{"text":"data_3387"}
{"text":"data_3388"}
{"text":"data_3389"}
{"text":"data_3390"}
{"text":"data_3391"}
{"text":"data_3392"}
{"text":"data_3393"}
{"text":"data_3394"}
{"text":"data_3395"}
{"text":"data_3396"}
{"text":"data_3397"}
{"text":"data_3398"}
{"text":"data_3399"}
{"text":"data_3400"}
{"text":"data_3401"}
{"text":"data_3402"}
{"text":"data_3403"}
{"text":"data_3404"}
{"text":"data_3405"}
{"text":"data_3406"}
{"text":"data_3407"}
{"text":"data_3408"}
{"text":"data_3409"}
{"text":"data_3410"}
{"text":"data_3411"}
{"text":"data_3412"}
{"text":"data_3413"}
{"text":"data_3414"}
{"text":"data_3415"}
{"text":"data_3416"}
{"text":"data_3417"}
{"text":"data_3418"}
{"text":"data_3419"}
{"text":"data_3420"}
{"text":"data_3421"}
{"text":"data_3422"}
{"text":"data_3423"}
{"text":"data_3424"}
{"text":"data_3425"}
{"text":"data_3426"}
{"text":"data_3427"}
{"text":"data_3428"}
{"text":"data_3429"}
{"text":"data_3430"}
{"text":"data_3431"}
{"text":"data_3432"}
{"text":"data_3433"}
{"text":"data_3434"}
{"text":"data_3435"}
{"text":"data_3436"}
{"text":"data_3437"}
{"text":"data_3438"}
{"text":"data_3439"}
{"text":"data_3440"}
{"text":"data_3441"}
{"text":"data_3442"}
{"text":"data_3443"}
{"text":"data_3444"}
{"text":"data_3445"}
{"text":"data_3446"}
{"text":"data_3447"}
{"text":"data_3448"}
{"text":"data_3449"}
{"text":"data_3450"}
{"text":"data_3451"}
{"text":"data_3452"}
{"text":"data_3453"}
{"text":"data_3454"}
{"text":"data_3455"}
{"text":"data_3456"}
{"text":"data_3457"}
{"text":"data_3458"}
{"text":"data_3459"}
{"text":"data_3460"}
{"text":"data_3461"}
{"text":"data_3462"}
{"text":"data_3463"}
{"text":"data_3464"}
{"text":"data_3465"}
{"text":"data_3466"}
{"text":"data_3467"}
{"text":"data_3468"}
{"text":"data_3469"}
{"text":"data_3470"}
{"text":"data_3471"}
{"text":"data_3472"}
{"text":"data_3473"}
{"text":"data_3474"}
{"text":"data_3475"}
{"text":"data_3476"}
{"text":"data_3477"}
{"text":"data_3478"}
{"text":"data_3479"}
{"text":"data_3480"}
{"text":"data_3481"}
{"text":"data_3482"}
{"text":"data_3483"}
{"text":"data_3484"}
{"text":"data_3485"}
{"text":"data_3486"}
{"text":"data_3487"}
{"text":"data_3488"}
{"text":"data_3489"}
{"text":"data_3490"}
{"text":"data_3491"}
{"text":"data_3492"}
{"text":"data_3493"}
{"text":"data_3494"}
{"text":"data_3495"}
{"text":"data_3496"}
{"text":"data_3497"}
{"text":"data_3498"}
{"text":"data_3499"}
{"text":"data_3500"}
{"text":"data_3501"}
{"text":"data_3502"}
{"text":"data_3503"}
{"text":"data_3504"}
{"text":"data_3505"}
{"text":"data_3506"}
{"text":"data_3507"}
{"text":"data_3508"}
{"text":"data_3509"}
{"text":"data_3510"}
{"text":"data_3511"}
{"text":"data_3512"}
{"text":"data_3513"}
{"text":"data_3514"}
{"text":"data_3515"}
{"text":"data_3516"}
{"text":"data_3517"}
{"text":"data_3518"}
{"text":"data_3519"}
{"text":"data_3520"}
{"text":"data_3521"}
{"text":"data_3522"}
{"text":"data_3523"}
{"text":"data_3524"}
{"text":"data_3525"}
{"text":"data_3526"}
{"text":"data_3527"}
{"text":"data_3528"}
{"text":"data_3529"}
{"text":"data_3530"}
{"text":"data_3531"}
{"text":"data_3532"}
{"text":"data_3533"}
{"text":"data_3534"}
{"text":"data_3535"}
{"text":"data_3536"}
{"text":"data_3537"}
{"text":"data_3538"}
{"text":"data_3539"}
{"text":"data_3540"}
{"text":"data_3541"}
{"text":"data_3542"}
{"text":"data_3543"}
{"text":"data_3544"}
{"text":"data_3545"}
{"text":"data_3546"}
{"text":"data_3547"}
{"text":"data_3548"}
{"text":"data_3549"}
{"text":"data_3550"}
{"text":"data_3551"}
{"text":"data_3552"}
{"text":"data_3553"}
{"text":"data_3554"}
{"text":"data_3555"}
{"text":"data_3556"}
{"text":"data_3557"}
{"text":"data_3558"}
{"text":"data_3559"}
{"text":"data_3560"}
{"text":"data_3561"}
{"text":"data_3562"}
{"text":"data_3563"}
{"text":"data_3564"}
{"text":"data_3565"}
{"text":"data_3566"}
{"text":"data_3567"}
{"text":"data_3568"}
{"text":"data_3569"}
{"text":"data_3570"}
{"text":"data_3571"}
{"text":"data_3572"}
{"text":"data_3573"}
{"text":"data_3574"}
{"text":"data_3575"}
{"text":"data_3576"}
{"text":"data_3577"}
{"text":"data_3578"}
{"text":"data_3579"}
{"text":"data_3580"}
{"text":"data_3581"}
{"text":"data_3582"}
{"text":"data_3583"}
{"text":"data_3584"}
{"text":"data_3585"}
{"text":"data_3586"}
{"text":"data_3587"}
{"text":"data_3588"}
{"text":"data_3589"}
{"text":"data_3590"}
{"text":"data_3591"}
{"text":"data_3592"}
{"text":"data_3593"}
{"text":"data_3594"}
{"text":"data_3595"}
{"text":"data_3596"}
{"text":"data_3597"}
{"text":"data_3598"}
{"text":"data_3599"}
{"text":"data_3600"}
{"text":"data_3601"}
{"text":"data_3602"}
{"text":"data_3603"}
{"text":"data_3604"}
{"text":"data_3605"}
{"text":"data_3606"}
{"text":"data_3607"}
{"text":"data_3608"}
{"text":"data_3609"}
{"text":"data_3610"}
{"text":"data_3611"}
{"text":"data_3612"}
{"text":"data_3613"}
{"text":"data_3614"}
{"text":"data_3615"}
{"text":"data_3616"}
{"text":"data_3617"}
{"text":"data_3618"}
{"text":"data_3619"}
{"text":"data_3620"}
{"text":"data_3621"}
{"text":"data_3622"}
{"text":"data_3623"}
{"text":"data_3624"}
{"text":"data_3625"}
{"text":"data_3626"}
{"text":"data_3627"}
{"text":"data_3628"}
{"text":"data_3629"}
{"text":"data_3630"}
{"text":"data_3631"}
{"text":"data_3632"}
{"text":"data_3633"}
{"text":"data_3634"}
{"text":"data_3635"}
{"text":"data_3636"}
{"text":"data_3637"}
{"text":"data_3638"}
{"text":"data_3639"}
{"text":"data_3640"}
{"text":"data_3641"}
{"text":"data_3642"}
{"text":"data_3643"}
{"text":"data_3644"}
{"text":"data_3645"}
{"text":"data_3646"}
{"text":"data_3647"}
{"text":"data_3648"}
{"text":"data_3649"}
{"text":"data_3650"}
{"text":"data_3651"}
{"text":"data_3652"}
{"text":"data_3653"}
{"text":"data_3654"}
{"text":"data_3655"}
{"text":"data_3656"}
{"text":"data_3657"}
{"text":"data_3658"}
{"text":"data_3659"}
{"text":"data_3660"}
{"text":"data_3661"}
{"text":"data_3662"}
{"text":"data_3663"}
{"text":"data_3664"}
{"text":"data_3665"}
{"text":"data_3666"}
{"text":"data_3667"}
{"text":"data_3668"}
{"text":"data_3669"}
{"text":"data_3670"}
{"text":"data_3671"}
{"text":"data_3672"}
{"text":"data_3673"}
{"text":"data_3674"}
{"text":"data_3675"}
{"text":"data_3676"}
{"text":"data_3677"}
{"text":"data_3678"}
{"text":"data_3679"}
{"text":"data_3680"}
{"text":"data_3681"}
{"text":"data_3682"}
{"text":"data_3683"}
{"text":"data_3684"}
{"text":"data_3685"}
{"text":"data_3686"}
{"text":"data_3687"}
{"text":"data_3688"}
{"text":"data_3689"}
{"text":"data_3690"}
{"text":"data_3691"}
{"text":"data_3692"}
{"text":"data_3693"}
{"text":"data_3694"}
{"text":"data_3695"}
{"text":"data_3696"}
{"text":"data_3697"}
{"text":"data_3698"}
{"text":"data_3699"}
{"text":"data_3700"}
{"text":"data_3701"}
{"text":"data_3702"}
{"text":"data_3703"}
{"text":"data_3704"}
{"text":"data_3705"}
{"text":"data_3706"}
{"text":"data_3707"}
{"text":"data_3708"}
{"text":"data_3709"}
{"text":"data_3710"}
{"text":"data_3711"}
{"text":"data_3712"}
{"text":"data_3713"}
{"text":"data_3714"}
{"text":"data_3715"}
{"text":"data_3716"}
{"text":"data_3717"}
{"text":"data_3718"}
{"text":"data_3719"}
{"text":"data_3720"}
{"text":"data_3721"}
{"text":"data_3722"}
{"text":"data_3723"}
{"text":"data_3724"}
{"text":"data_3725"}
{"text":"data_3726"}
{"text":"data_3727"}
{"text":"data_3728"}
{"text":"data_3729"}
{"text":"data_3730"}
{"text":"data_3731"}
{"text":"data_3732"}
{"text":"data_3733"}
{"text":"data_3734"}
{"text":"data_3735"}
{"text":"data_3736"}
{"text":"data_3737"}
{"text":"data_3738"}
{"text":"data_3739"}
{"text":"data_3740"}
{"text":"data_3741"}
{"text":"data_3742"}
{"text":"data_3743"}
{"text":"data_3744"}
{"text":"data_3745"}
{"text":"data_3746"}
{"text":"data_3747"}
{"text":"data_3748"}
{"text":"data_3749"}
{"text":"data_3750"}
{"text":"data_3751"}
{"text":"data_3752"}
{"text":"data_3753"}
{"text":"data_3754"}
{"text":"data_3755"}
{"text":"data_3756"}
{"text":"data_3757"}
{"text":"data_3758"}
{"text":"data_3759"}
{"text":"data_3760"}
{"text":"data_3761"}
{"text":"data_3762"}
{"text":"data_3763"}
{"text":"data_3764"}
{"text":"data_3765"}
{"text":"data_3766"}
{"text":"data_3767"}
{"text":"data_3768"}
{"text":"data_3769"}
{"text":"data_3770"}
{"text":"data_3771"}
{"text":"data_3772"}
{"text":"data_3773"}
{"text":"data_3774"}
{"text":"data_3775"}
{"text":"data_3776"}
{"text":"data_3777"}
{"text":"data_3778"}
{"text":"data_3779"}
{"text":"data_3780"}
{"text":"data_3781"}
{"text":"data_3782"}
{"text":"data_3783"}
{"text":"data_3784"}
{"text":"data_3785"}
{"text":"data_3786"}
{"text":"data_3787"}
{"text":"data_3788"}
{"text":"data_3789"}
{"text":"data_3790"}
{"text":"data_3791"}
{"text":"data_3792"}
{"text":"data_3793"}
{"text":"data_3794"}
{"text":"data_3795"}
{"text":"data_3796"}
{"text":"data_3797"}
{"text":"data_3798"}
{"text":"data_3799"}
{"text":"data_3800"}
{"text":"data_3801"}
{"text":"data_3802"}
{"text":"data_3803"}
{"text":"data_3804"}
{"text":"data_3805"}
{"text":"data_3806"}
{"text":"data_3807"}
{"text":"data_3808"}
{"text":"data_3809"}
{"text":"data_3810"}
{"text":"data_3811"}
{"text":"data_3812"}
{"text":"data_3813"}
{"text":"data_3814"}
{"text":"data_3815"}
{"text":"data_3816"}
{"text":"data_3817"}
{"text":"data_3818"}
{"text":"data_3819"}
{"text":"data_3820"}
{"text":"data_3821"}
{"text":"data_3822"}
{"text":"data_3823"}
{"text":"data_3824"}
{"text":"data_3825"}
{"text":"data_3826"}
{"text":"data_3827"}
{"text":"data_3828"}
{"text":"data_3829"}
{"text":"data_3830"}
{"text":"data_3831"}
{"text":"data_3832"}
{"text":"data_3833"}
{"text":"data_3834"}
{"text":"data_3835"}
{"text":"data_3836"}
{"text":"data_3837"}
{"text":"data_3838"}
{"text":"data_3839"}
{"text":"data_3840"}
{"text":"data_3841"}
{"text":"data_3842"}
{"text":"data_3843"}
{"text":"data_3844"}
{"text":"data_3845"}
{"text":"data_3846"}
{"text":"data_3847"}
{"text":"data_3848"}
{"text":"data_3849"}
{"text":"data_3850"}
{"text":"data_3851"}
{"text":"data_3852"}
{"text":"data_3853"}
{"text":"data_3854"}
{"text":"data_3855"}
{"text":"data_3856"}
{"text":"data_3857"}
{"text":"data_3858"}
{"text":"data_3859"}
{"text":"data_3860"}
{"text":"data_3861"}
{"text":"data_3862"}
{"text":"data_3863"}
{"text":"data_3864"}
{"text":"data_3865"}
{"text":"data_3866"}
{"text":"data_3867"}
{"text":"data_3868"}
{"text":"data_3869"}
{"text":"data_3870"}
{"text":"data_3871"}
{"text":"data_3872"}
{"text":"data_3873"}
{"text":"data_3874"}
{"text":"data_3875"}
{"text":"data_3876"}
{"text":"data_3877"}
{"text":"data_3878"}
{"text":"data_3879"}
{"text":"data_3880"}
{"text":"data_3881"}
{"text":"data_3882"}
{"text":"data_3883"}
{"text":"data_3884"}
{"text":"data_3885"}
{"text":"data_3886"}
{"text":"data_3887"}
{"text":"data_3888"}
{"text":"data_3889"}
{"text":"data_3890"}
{"text":"data_3891"}
{"text":"data_3892"}
{"text":"data_3893"}
{"text":"data_3894"}
{"text":"data_3895"}
{"text":"data_3896"}
{"text":"data_3897"}
{"text":"data_3898"}
{"text":"data_3899"}
{"text":"data_3900"}
{"text":"data_3901"}
{"text":"data_3902"}
{"text":"data_3903"}
{"text":"data_3904"}
{"text":"data_3905"}
{"text":"data_3906"}
{"text":"data_3907"}
{"text":"data_3908"}
{"text":"data_3909"}
{"text":"data_3910"}
{"text":"data_3911"}
{"text":"data_3912"}
{"text":"data_3913"}
{"text":"data_3914"}
{"text":"data_3915"}
{"text":"data_3916"}
{"text":"data_3917"}
{"text":"data_3918"}
{"text":"data_3919"}
{"text":"data_3920"}
{"text":"data_3921"}
{"text":"data_3922"}
{"text":"data_3923"}
{"text":"data_3924"}
{"text":"data_3925"}
{"text":"data_3926"}
{"text":"data_3927"}
{"text":"data_3928"}
{"text":"data_3929"}
{"text":"data_3930"}
{"text":"data_3931"}
{"text":"data_3932"}
{"text":"data_3933"}
{"text":"data_3934"}
{"text":"data_3935"}
{"text":"data_3936"}
{"text":"data_3937"}
{"text":"data_3938"}
{"text":"data_3939"}
{"text":"data_3940"}
{"text":"data_3941"}
{"text":"data_3942"}
{"text":"data_3943"}
{"text":"data_3944"}
{"text":"data_3945"}
{"text":"data_3946"}
{"text":"data_3947"}
{"text":"data_3948"}
{"text":"data_3949"}
{"text":"data_3950"}
{"text":"data_3951"}
{"text":"data_3952"}
{"text":"data_3953"}
{"text":"data_3954"}
{"text":"data_3955"}
{"text":"data_3956"}
{"text":"data_3957"}
{"text":"data_3958"}
{"text":"data_3959"}
{"text":"data_3960"}
{"text":"data_3961"}
{"text":"data_3962"}
{"text":"data_3963"}
{"text":"data_3964"}
{"text":"data_3965"}
{"text":"data_3966"}
{"text":"data_3967"}
{"text":"data_3968"}
{"text":"data_3969"}
{"text":"data_3970"}
{"text":"data_3971"}
{"text":"data_3972"}
{"text":"data_3973"}
{"text":"data_3974"}
{"text":"data_3975"}
{"text":"data_3976"}
{"text":"data_3977"}
{"text":"data_3978"}
{"text":"data_3979"}
{"text":"data_3980"}
{"text":"data_3981"}
{"text":"data_3982"}
{"text":"data_3983"}
{"text":"data_3984"}
{"text":"data_3985"}
{"text":"data_3986"}
{"text":"data_3987"}
{"text":"data_3988"}
{"text":"data_3989"}
{"text":"data_3990"}
{"text":"data_3991"}
{"text":"data_3992"}
{"text":"data_3993"}
{"text":"data_3994"}
{"text":"data_3995"}
{"text":"data_3996"}
{"text":"data_3997"}
{"text":"data_3998"}
{"text":"data_3999"}
{"text":"data_4000"}
{"text":"data_4001"}
{"text":"data_4002"}
{"text":"data_4003"}
{"text":"data_4004"}
{"text":"data_4005"}
{"text":"data_4006"}
{"text":"data_4007"}
{"text":"data_4008"}
{"text":"data_4009"}
{"text":"data_4010"}
{"text":"data_4011"}
{"text":"data_4012"}
{"text":"data_4013"}
{"text":"data_4014"}
{"text":"data_4015"}
{"text":"data_4016"}
{"text":"data_4017"}
{"text":"data_4018"}
{"text":"data_4019"}
{"text":"data_4020"}
{"text":"data_4021"}
{"text":"data_4022"}
{"text":"data_4023"}
{"text":"data_4024"}
{"text":"data_4025"}
{"text":"data_4026"}
{"text":"data_4027"}
{"text":"data_4028"}
{"text":"data_4029"}
{"text":"data_4030"}
{"text":"data_4031"}
{"text":"data_4032"}
{"text":"data_4033"}
{"text":"data_4034"}
{"text":"data_4035"}
{"text":"data_4036"}
{"text":"data_4037"}
{"text":"data_4038"}
{"text":"data_4039"}
{"text":"data_4040"}
{"text":"data_4041"}
{"text":"data_4042"}
{"text":"data_4043"}
{"text":"data_4044"}
{"text":"data_4045"}
{"text":"data_4046"}
{"text":"data_4047"}
{"text":"data_4048"}
{"text":"data_4049"}
{"text":"data_4050"}
{"text":"data_4051"}
{"text":"data_4052"}
{"text":"data_4053"}
{"text":"data_4054"}
{"text":"data_4055"}
{"text":"data_4056"}
{"text":"data_4057"}
{"text":"data_4058"}
{"text":"data_4059"}
{"text":"data_4060"}
{"text":"data_4061"}
{"text":"data_4062"}
{"text":"data_4063"}
{"text":"data_4064"}
{"text":"data_4065"}
{"text":"data_4066"}
{"text":"data_4067"}
{"text":"data_4068"}
{"text":"data_4069"}
{"text":"data_4070"}
{"text":"data_4071"}
{"text":"data_4072"}
{"text":"data_4073"}
{"text":"data_4074"}
{"text":"data_4075"}
{"text":"data_4076"}
{"text":"data_4077"}
{"text":"data_4078"}
{"text":"data_4079"}
{"text":"data_4080"}
{"text":"data_4081"}
{"text":"data_4082"}
{"text":"data_4083"}
{"text":"data_4084"}
{"text":"data_4085"}
{"text":"data_4086"}
{"text":"data_4087"}
{"text":"data_4088"}
{"text":"data_4089"}
{"text":"data_4090"}
{"text":"data_4091"}
{"text":"data_4092"}
{"text":"data_4093"}
{"text":"data_4094"}
{"text":"data_4095"}
{"text":"data_4096"}
{"text":"data_4097"}
{"text":"data_4098"}
{"text":"data_4099"}
{"text":"data_4100"}
{"text":"data_4101"}
{"text":"data_4102"}
{"text":"data_4103"}
{"text":"data_4104"}
{"text":"data_4105"}
{"text":"data_4106"}
{"text":"data_4107"}
{"text":"data_4108"}
{"text":"data_4109"}
{"text":"data_4110"}
{"text":"data_4111"}
{"text":"data_4112"}
{"text":"data_4113"}
{"text":"data_4114"}
{"text":"data_4115"}
{"text":"data_4116"}
{"text":"data_4117"}
{"text":"data_4118"}
{"text":"data_4119"}
{"text":"data_4120"}
{"text":"data_4121"}
{"text":"data_4122"}
{"text":"data_4123"}
{"text":"data_4124"}
{"text":"data_4125"}
{"text":"data_4126"}
{"text":"data_4127"}
{"text":"data_4128"}
{"text":"data_4129"}
{"text":"data_4130"}
{"text":"data_4131"}
{"text":"data_4132"}
{"text":"data_4133"}
{"text":"data_4134"}
{"text":"data_4135"}
{"text":"data_4136"}
{"text":"data_4137"}
{"text":"data_4138"}
{"text":"data_4139"}
{"text":"data_4140"}
{"text":"data_4141"}
{"text":"data_4142"}
{"text":"data_4143"}
{"text":"data_4144"}
{"text":"data_4145"}
{"text":"data_4146"}
{"text":"data_4147"}
{"text":"data_4148"}
{"text":"data_4149"}
{"text":"data_4150"}
{"text":"data_4151"}
{"text":"data_4152"}
{"text":"data_4153"}
{"text":"data_4154"}
{"text":"data_4155"}
{"text":"data_4156"}
{"text":"data_4157"}
{"text":"data_4158"}
{"text":"data_4159"}
{"text":"data_4160"}
{"text":"data_4161"}
{"text":"data_4162"}
{"text":"data_4163"}
{"text":"data_4164"}
{"text":"data_4165"}
{"text":"data_4166"}
{"text":"data_4167"}
{"text":"data_4168"}
{"text":"data_4169"}
{"text":"data_4170"}
{"text":"data_4171"}
{"text":"data_4172"}
{"text":"data_4173"}
{"text":"data_4174"}
{"text":"data_4175"}
{"text":"data_4176"}
{"text":"data_4177"}
{"text":"data_4178"}
{"text":"data_4179"}
{"text":"data_4180"}
{"text":"data_4181"}
{"text":"data_4182"}
{"text":"data_4183"}
{"text":"data_4184"}
{"text":"data_4185"}
{"text":"data_4186"}
{"text":"data_4187"}
{"text":"data_4188"}
{"text":"data_4189"}
{"text":"data_4190"}
{"text":"data_4191"}
{"text":"data_4192"}
{"text":"data_4193"}
{"text":"data_4194"}
{"text":"data_4195"}
{"text":"data_4196"}
{"text":"data_4197"}
{"text":"data_4198"}
{"text":"data_4199"}
{"text":"data_4200"}
{"text":"data_4201"}
{"text":"data_4202"}
{"text":"data_4203"}
{"text":"data_4204"}
{"text":"data_4205"}
{"text":"data_4206"}
{"text":"data_4207"}
{"text":"data_4208"}
{"text":"data_4209"}
{"text":"data_4210"}
{"text":"data_4211"}
{"text":"data_4212"}
{"text":"data_4213"}
{"text":"data_4214"}
{"text":"data_4215"}
{"text":"data_4216"}
{"text":"data_4217"}
{"text":"data_4218"}
{"text":"data_4219"}
{"text":"data_4220"}
{"text":"data_4221"}
{"text":"data_4222"}
{"text":"data_4223"}
{"text":"data_4224"}
{"text":"data_4225"}
{"text":"data_4226"}
{"text":"data_4227"}
{"text":"data_4228"}
{"text":"data_4229"}
{"text":"data_4230"}
{"text":"data_4231"}
{"text":"data_4232"}
{"text":"data_4233"}
{"text":"data_4234"}
{"text":"data_4235"}
{"text":"data_4236"}
{"text":"data_4237"}
{"text":"data_4238"}
{"text":"data_4239"}
{"text":"data_4240"}
{"text":"data_4241"}
{"text":"data_4242"}
{"text":"data_4243"}
{"text":"data_4244"}
{"text":"data_4245"}
{"text":"data_4246"}
{"text":"data_4247"}
{"text":"data_4248"}
{"text":"data_4249"}
{"text":"data_4250"}
{"text":"data_4251"}
{"text":"data_4252"}
{"text":"data_4253"}
{"text":"data_4254"}
{"text":"data_4255"}
{"text":"data_4256"}
{"text":"data_4257"}
{"text":"data_4258"}
{"text":"data_4259"}
{"text":"data_4260"}
{"text":"data_4261"}
{"text":"data_4262"}
{"text":"data_4263"}
{"text":"data_4264"}
{"text":"data_4265"}
{"text":"data_4266"}
{"text":"data_4267"}
{"text":"data_4268"}
{"text":"data_4269"}
{"text":"data_4270"}
{"text":"data_4271"}
{"text":"data_4272"}
{"text":"data_4273"}
{"text":"data_4274"}
{"text":"data_4275"}
{"text":"data_4276"}
{"text":"data_4277"}
{"text":"data_4278"}
{"text":"data_4279"}
{"text":"data_4280"}
{"text":"data_4281"}
{"text":"data_4282"}
{"text":"data_4283"}
{"text":"data_4284"}
{"text":"data_4285"}
{"text":"data_4286"}
{"text":"data_4287"}
{"text":"data_4288"}
{"text":"data_4289"}
{"text":"data_4290"}
{"text":"data_4291"}
{"text":"data_4292"}
{"text":"data_4293"}
{"text":"data_4294"}
{"text":"data_4295"}
{"text":"data_4296"}
{"text":"data_4297"}
{"text":"data_4298"}
{"text":"data_4299"}
{"text":"data_4300"}
{"text":"data_4301"}
{"text":"data_4302"}
{"text":"data_4303"}
{"text":"data_4304"}
{"text":"data_4305"}
{"text":"data_4306"}
{"text":"data_4307"}
{"text":"data_4308"}
{"text":"data_4309"}
{"text":"data_4310"}
{"text":"data_4311"}
{"text":"data_4312"}
{"text":"data_4313"}
{"text":"data_4314"}
{"text":"data_4315"}
{"text":"data_4316"}
{"text":"data_4317"}
{"text":"data_4318"}
{"text":"data_4319"}
{"text":"data_4320"}
{"text":"data_4321"}
{"text":"data_4322"}
{"text":"data_4323"}
{"text":"data_4324"}
{"text":"data_4325"}
{"text":"data_4326"}
{"text":"data_4327"}
{"text":"data_4328"}
{"text":"data_4329"}
{"text":"data_4330"}
{"text":"data_4331"}
{"text":"data_4332"}
{"text":"data_4333"}
{"text":"data_4334"}
{"text":"data_4335"}
{"text":"data_4336"}
{"text":"data_4337"}
{"text":"data_4338"}
{"text":"data_4339"}
{"text":"data_4340"}
{"text":"data_4341"}
{"text":"data_4342"}
{"text":"data_4343"}
{"text":"data_4344"}
{"text":"data_4345"}
{"text":"data_4346"}
{"text":"data_4347"}
{"text":"data_4348"}
{"text":"data_4349"}
{"text":"data_4350"}
{"text":"data_4351"}
{"text":"data_4352"}
{"text":"data_4353"}
{"text":"data_4354"}
{"text":"data_4355"}
{"text":"data_4356"}
{"text":"data_4357"}
{"text":"data_4358"}
{"text":"data_4359"}
{"text":"data_4360"}
{"text":"data_4361"}
{"text":"data_4362"}
{"text":"data_4363"}
{"text":"data_4364"}
{"text":"data_4365"}
{"text":"data_4366"}
{"text":"data_4367"}
{"text":"data_4368"}
{"text":"data_4369"}
{"text":"data_4370"}
{"text":"data_4371"}
{"text":"data_4372"}
{"text":"data_4373"}
{"text":"data_4374"}
{"text":"data_4375"}
{"text":"data_4376"}
{"text":"data_4377"}
{"text":"data_4378"}
{"text":"data_4379"}
{"text":"data_4380"}
{"text":"data_4381"}
{"text":"data_4382"}
{"text":"data_4383"}
{"text":"data_4384"}
{"text":"data_4385"}
{"text":"data_4386"}
{"text":"data_4387"}
{"text":"data_4388"}
{"text":"data_4389"}
{"text":"data_4390"}
{"text":"data_4391"}
{"text":"data_4392"}
{"text":"data_4393"}
{"text":"data_4394"}
{"text":"data_4395"}
{"text":"data_4396"}
{"text":"data_4397"}
{"text":"data_4398"}
{"text":"data_4399"}
{"text":"data_4400"}
{"text":"data_4401"}
{"text":"data_4402"}
{"text":"data_4403"}
{"text":"data_4404"}
{"text":"data_4405"}
{"text":"data_4406"}
{"text":"data_4407"}
{"text":"data_4408"}
{"text":"data_4409"}
{"text":"data_4410"}
{"text":"data_4411"}
{"text":"data_4412"}
{"text":"data_4413"}
{"text":"data_4414"}
{"text":"data_4415"}
{"text":"data_4416"}
{"text":"data_4417"}
{"text":"data_4418"}
{"text":"data_4419"}
{"text":"data_4420"}
{"text":"data_4421"}
{"text":"data_4422"}
{"text":"data_4423"}
{"text":"data_4424"}
{"text":"data_4425"}
{"text":"data_4426"}
{"text":"data_4427"}
{"text":"data_4428"}
{"text":"data_4429"}
{"text":"data_4430"}
{"text":"data_4431"}
{"text":"data_4432"}
{"text":"data_4433"}
{"text":"data_4434"}
{"text":"data_4435"}
{"text":"data_4436"}
{"text":"data_4437"}
{"text":"data_4438"}
{"text":"data_4439"}
{"text":"data_4440"}
{"text":"data_4441"}
{"text":"data_4442"}
{"text":"data_4443"}
{"text":"data_4444"}
{"text":"data_4445"}
{"text":"data_4446"}
{"text":"data_4447"}
{"text":"data_4448"}
{"text":"data_4449"}
{"text":"data_4450"}
{"text":"data_4451"}
{"text":"data_4452"}
{"text":"data_4453"}
{"text":"data_4454"}
{"text":"data_4455"}
{"text":"data_4456"}
{"text":"data_4457"}
{"text":"data_4458"}
{"text":"data_4459"}
{"text":"data_4460"}
{"text":"data_4461"}
{"text":"data_4462"}
{"text":"data_4463"}
{"text":"data_4464"}
{"text":"data_4465"}
{"text":"data_4466"}
{"text":"data_4467"}
{"text":"data_4468"}
{"text":"data_4469"}
{"text":"data_4470"}
{"text":"data_4471"}
{"text":"data_4472"}
{"text":"data_4473"}
{"text":"data_4474"}
{"text":"data_4475"}
{"text":"data_4476"}
{"text":"data_4477"}
{"text":"data_4478"}
{"text":"data_4479"}
{"text":"data_4480"}
{"text":"data_4481"}
{"text":"data_4482"}
{"text":"data_4483"}
{"text":"data_4484"}
{"text":"data_4485"}
{"text":"data_4486"}
{"text":"data_4487"}
{"text":"data_4488"}
{"text":"data_4489"}
{"text":"data_4490"}
{"text":"data_4491"}
{"text":"data_4492"}
{"text":"data_4493"}
{"text":"data_4494"}
{"text":"data_4495"}
{"text":"data_4496"}
{"text":"data_4497"}
{"text":"data_4498"}
{"text":"data_4499"}
{"text":"data_4500"}
{"text":"data_4501"}
{"text":"data_4502"}
{"text":"data_4503"}
{"text":"data_4504"}
{"text":"data_4505"}
{"text":"data_4506"}
{"text":"data_4507"}
{"text":"data_4508"}
{"text":"data_4509"}
{"text":"data_4510"}
{"text":"data_4511"}
{"text":"data_4512"}
{"text":"data_4513"}
{"text":"data_4514"}
{"text":"data_4515"}
{"text":"data_4516"}
{"text":"data_4517"}
{"text":"data_4518"}
{"text":"data_4519"}
{"text":"data_4520"}
{"text":"data_4521"}
{"text":"data_4522"}
{"text":"data_4523"}
{"text":"data_4524"}
{"text":"data_4525"}
{"text":"data_4526"}
{"text":"data_4527"}
{"text":"data_4528"}
{"text":"data_4529"}
{"text":"data_4530"}
{"text":"data_4531"}
{"text":"data_4532"}
{"text":"data_4533"}
{"text":"data_4534"}
{"text":"data_4535"}
{"text":"data_4536"}
{"text":"data_4537"}
{"text":"data_4538"}
{"text":"data_4539"}
{"text":"data_4540"}
{"text":"data_4541"}
{"text":"data_4542"}
{"text":"data_4543"}
{"text":"data_4544"}
{"text":"data_4545"}
{"text":"data_4546"}
{"text":"data_4547"}
{"text":"data_4548"}
{"text":"data_4549"}
{"text":"data_4550"}
{"text":"data_4551"}
{"text":"data_4552"}
{"text":"data_4553"}
{"text":"data_4554"}
{"text":"data_4555"}
{"text":"data_4556"}
{"text":"data_4557"}
{"text":"data_4558"}
{"text":"data_4559"}
{"text":"data_4560"}
{"text":"data_4561"}
{"text":"data_4562"}
{"text":"data_4563"}
{"text":"data_4564"}
{"text":"data_4565"}
{"text":"data_4566"}
{"text":"data_4567"}
{"text":"data_4568"}
{"text":"data_4569"}
{"text":"data_4570"}
{"text":"data_4571"}
{"text":"data_4572"}
{"text":"data_4573"}
{"text":"data_4574"}
{"text":"data_4575"}
{"text":"data_4576"}
{"text":"data_4577"}
{"text":"data_4578"}
{"text":"data_4579"}
{"text":"data_4580"}
{"text":"data_4581"}
{"text":"data_4582"}
{"text":"data_4583"}
{"text":"data_4584"}
{"text":"data_4585"}
{"text":"data_4586"}
{"text":"data_4587"}
{"text":"data_4588"}
{"text":"data_4589"}
{"text":"data_4590"}
{"text":"data_4591"}
{"text":"data_4592"}
{"text":"data_4593"}
{"text":"data_4594"}
{"text":"data_4595"}
{"text":"data_4596"}
{"text":"data_4597"}
{"text":"data_4598"}
{"text":"data_4599"}
{"text":"data_4600"}
{"text":"data_4601"}
{"text":"data_4602"}
{"text":"data_4603"}
{"text":"data_4604"}
{"text":"data_4605"}
{"text":"data_4606"}
{"text":"data_4607"}
{"text":"data_4608"}
{"text":"data_4609"}
{"text":"data_4610"}
{"text":"data_4611"}
{"text":"data_4612"}
{"text":"data_4613"}
{"text":"data_4614"}
{"text":"data_4615"}
{"text":"data_4616"}
{"text":"data_4617"}
{"text":"data_4618"}
{"text":"data_4619"}
{"text":"data_4620"}
{"text":"data_4621"}
{"text":"data_4622"}
{"text":"data_4623"}
{"text":"data_4624"}
{"text":"data_4625"}
{"text":"data_4626"}
{"text":"data_4627"}
{"text":"data_4628"}
{"text":"data_4629"}
{"text":"data_4630"}
{"text":"data_4631"}
{"text":"data_4632"}
{"text":"data_4633"}
{"text":"data_4634"}
{"text":"data_4635"}
{"text":"data_4636"}
{"text":"data_4637"}
{"text":"data_4638"}
{"text":"data_4639"}
{"text":"data_4640"}
{"text":"data_4641"}
{"text":"data_4642"}
{"text":"data_4643"}
{"text":"data_4644"}
{"text":"data_4645"}
{"text":"data_4646"}
{"text":"data_4647"}
{"text":"data_4648"}
{"text":"data_4649"}
{"text":"data_4650"}
{"text":"data_4651"}
{"text":"data_4652"}
{"text":"data_4653"}
{"text":"data_4654"}
{"text":"data_4655"}
{"text":"data_4656"}
{"text":"data_4657"}
{"text":"data_4658"}
{"text":"data_4659"}
{"text":"data_4660"}
{"text":"data_4661"}
{"text":"data_4662"}
{"text":"data_4663"}
{"text":"data_4664"}
{"text":"data_4665"}
{"text":"data_4666"}
{"text":"data_4667"}
{"text":"data_4668"}
{"text":"data_4669"}
{"text":"data_4670"}
{"text":"data_4671"}
{"text":"data_4672"}
{"text":"data_4673"}
{"text":"data_4674"}
{"text":"data_4675"}
{"text":"data_4676"}
{"text":"data_4677"}
{"text":"data_4678"}
{"text":"data_4679"}
{"text":"data_4680"}
{"text":"data_4681"}
{"text":"data_4682"}
{"text":"data_4683"}
{"text":"data_4684"}
{"text":"data_4685"}
{"text":"data_4686"}
{"text":"data_4687"}
{"text":"data_4688"}
{"text":"data_4689"}
{"text":"data_4690"}
{"text":"data_4691"}
{"text":"data_4692"}
{"text":"data_4693"}
{"text":"data_4694"}
{"text":"data_4695"}
{"text":"data_4696"}
{"text":"data_4697"}
{"text":"data_4698"}
{"text":"data_4699"}
{"text":"data_4700"}
{"text":"data_4701"}
{"text":"data_4702"}
{"text":"data_4703"}
{"text":"data_4704"}
{"text":"data_4705"}
{"text":"data_4706"}
{"text":"data_4707"}
{"text":"data_4708"}
{"text":"data_4709"}
{"text":"data_4710"}
{"text":"data_4711"}
{"text":"data_4712"}
{"text":"data_4713"}
{"text":"data_4714"}
{"text":"data_4715"}
{"text":"data_4716"}
{"text":"data_4717"}
{"text":"data_4718"}
{"text":"data_4719"}
{"text":"data_4720"}
{"text":"data_4721"}
{"text":"data_4722"}
{"text":"data_4723"}
{"text":"data_4724"}
{"text":"data_4725"}
{"text":"data_4726"}
{"text":"data_4727"}
{"text":"data_4728"}
{"text":"data_4729"}
{"text":"data_4730"}
{"text":"data_4731"}
{"text":"data_4732"}
{"text":"data_4733"}
{"text":"data_4734"}
{"text":"data_4735"}
{"text":"data_4736"}
{"text":"data_4737"}
{"text":"data_4738"}
{"text":"data_4739"}
{"text":"data_4740"}
{"text":"data_4741"}
{"text":"data_4742"}
{"text":"data_4743"}
{"text":"data_4744"}
{"text":"data_4745"}
{"text":"data_4746"}
{"text":"data_4747"}
{"text":"data_4748"}
{"text":"data_4749"}
{"text":"data_4750"}
{"text":"data_4751"}
{"text":"data_4752"}
{"text":"data_4753"}
{"text":"data_4754"}
{"text":"data_4755"}
{"text":"data_4756"}
{"text":"data_4757"}
{"text":"data_4758"}
{"text":"data_4759"}
{"text":"data_4760"}
{"text":"data_4761"}
{"text":"data_4762"}
{"text":"data_4763"}
{"text":"data_4764"}
{"text":"data_4765"}
{"text":"data_4766"}
{"text":"data_4767"}
{"text":"data_4768"}
{"text":"data_4769"}
{"text":"data_4770"}
{"text":"data_4771"}
{"text":"data_4772"}
{"text":"data_4773"}
{"text":"data_4774"}
{"text":"data_4775"}
{"text":"data_4776"}
{"text":"data_4777"}
{"text":"data_4778"}
{"text":"data_4779"}
{"text":"data_4780"}
{"text":"data_4781"}
{"text":"data_4782"}
{"text":"data_4783"}
{"text":"data_4784"}
{"text":"data_4785"}
{"text":"data_4786"}
{"text":"data_4787"}
{"text":"data_4788"}
{"text":"data_4789"}
{"text":"data_4790"}
{"text":"data_4791"}
{"text":"data_4792"}
{"text":"data_4793"}
{"text":"data_4794"}
{"text":"data_4795"}
{"text":"data_4796"}
{"text":"data_4797"}
{"text":"data_4798"}
{"text":"data_4799"}
{"text":"data_4800"}
{"text":"data_4801"}
{"text":"data_4802"}
{"text":"data_4803"}
{"text":"data_4804"}
{"text":"data_4805"}
{"text":"data_4806"}
{"text":"data_4807"}
{"text":"data_4808"}
{"text":"data_4809"}
{"text":"data_4810"}
{"text":"data_4811"}
{"text":"data_4812"}
{"text":"data_4813"}
{"text":"data_4814"}
{"text":"data_4815"}
{"text":"data_4816"}
{"text":"data_4817"}
{"text":"data_4818"}
{"text":"data_4819"}
{"text":"data_4820"}
{"text":"data_4821"}
{"text":"data_4822"}
{"text":"data_4823"}
{"text":"data_4824"}
{"text":"data_4825"}
{"text":"data_4826"}
{"text":"data_4827"}
{"text":"data_4828"}
{"text":"data_4829"}
{"text":"data_4830"}
{"text":"data_4831"}
{"text":"data_4832"}
{"text":"data_4833"}
{"text":"data_4834"}
{"text":"data_4835"}
{"text":"data_4836"}
{"text":"data_4837"}
{"text":"data_4838"}
{"text":"data_4839"}
{"text":"data_4840"}
{"text":"data_4841"}
{"text":"data_4842"}
{"text":"data_4843"}
{"text":"data_4844"}
{"text":"data_4845"}
{"text":"data_4846"}
{"text":"data_4847"}
{"text":"data_4848"}
{"text":"data_4849"}
{"text":"data_4850"}
{"text":"data_4851"}
{"text":"data_4852"}
{"text":"data_4853"}
{"text":"data_4854"}
{"text":"data_4855"}
{"text":"data_4856"}
{"text":"data_4857"}
{"text":"data_4858"}
{"text":"data_4859"}
{"text":"data_4860"}
{"text":"data_4861"}
{"text":"data_4862"}
{"text":"data_4863"}
{"text":"data_4864"}
{"text":"data_4865"}
{"text":"data_4866"}
{"text":"data_4867"}
{"text":"data_4868"}
{"text":"data_4869"}
{"text":"data_4870"}
{"text":"data_4871"}
{"text":"data_4872"}
{"text":"data_4873"}
{"text":"data_4874"}
{"text":"data_4875"}
{"text":"data_4876"}
{"text":"data_4877"}
{"text":"data_4878"}
{"text":"data_4879"}
{"text":"data_4880"}
{"text":"data_4881"}
{"text":"data_4882"}
{"text":"data_4883"}
{"text":"data_4884"}
{"text":"data_4885"}
{"text":"data_4886"}
{"text":"data_4887"}
{"text":"data_4888"}
{"text":"data_4889"}
{"text":"data_4890"}
{"text":"data_4891"}
{"text":"data_4892"}
{"text":"data_4893"}
{"text":"data_4894"}
{"text":"data_4895"}
{"text":"data_4896"}
{"text":"data_4897"}
{"text":"data_4898"}
{"text":"data_4899"}
{"text":"data_4900"}
{"text":"data_4901"}
{"text":"data_4902"}
{"text":"data_4903"}
{"text":"data_4904"}
{"text":"data_4905"}
{"text":"data_4906"}
{"text":"data_4907"}
{"text":"data_4908"}
{"text":"data_4909"}
{"text":"data_4910"}
{"text":"data_4911"}
{"text":"data_4912"}
{"text":"data_4913"}
{"text":"data_4914"}
{"text":"data_4915"}
{"text":"data_4916"}
{"text":"data_4917"}
{"text":"data_4918"}
{"text":"data_4919"}
{"text":"data_4920"}
{"text":"data_4921"}
{"text":"data_4922"}
{"text":"data_4923"}
{"text":"data_4924"}
{"text":"data_4925"}
{"text":"data_4926"}
{"text":"data_4927"}
{"text":"data_4928"}
{"text":"data_4929"}
{"text":"data_4930"}
{"text":"data_4931"}
{"text":"data_4932"}
{"text":"data_4933"}
{"text":"data_4934"}
{"text":"data_4935"}
{"text":"data_4936"}
{"text":"data_4937"}
{"text":"data_4938"}
{"text":"data_4939"}
{"text":"data_4940"}
{"text":"data_4941"}
{"text":"data_4942"}
{"text":"data_4943"}
{"text":"data_4944"}
{"text":"data_4945"}
{"text":"data_4946"}
{"text":"data_4947"}
{"text":"data_4948"}
{"text":"data_4949"}
{"text":"data_4950"}
{"text":"data_4951"}
{"text":"data_4952"}
{"text":"data_4953"}
{"text":"data_4954"}
{"text":"data_4955"}
{"text":"data_4956"}
{"text":"data_4957"}
{"text":"data_4958"}
{"text":"data_4959"}
{"text":"data_4960"}
{"text":"data_4961"}
{"text":"data_4962"}
{"text":"data_4963"}
{"text":"data_4964"}
{"text":"data_4965"}
{"text":"data_4966"}
{"text":"data_4967"}
{"text":"data_4968"}
{"text":"data_4969"}
{"text":"data_4970"}
{"text":"data_4971"}
{"text":"data_4972"}
{"text":"data_4973"}
{"text":"data_4974"}
{"text":"data_4975"}
{"text":"data_4976"}
{"text":"data_4977"}
{"text":"data_4978"}
{"text":"data_4979"}
{"text":"data_4980"}
{"text":"data_4981"}
{"text":"data_4982"}
{"text":"data_4983"}
{"text":"data_4984"}
{"text":"data_4985"}
{"text":"data_4986"}
{"text":"data_4987"}
{"text":"data_4988"}
{"text":"data_4989"}
{"text":"data_4990"}
{"text":"data_4991"}
{"text":"data_4992"}
{"text":"data_4993"}
{"text":"data_4994"}
{"text":"data_4995"}
{"text":"data_4996"}
{"text":"data_4997"}
{"text":"data_4998"}
{"text":"data_4999"}
{"text":"data_5000"}
{"text":"data_5001"}
{"text":"data_5002"}
{"text":"data_5003"}
{"text":"data_5004"}
{"text":"data_5005"}
{"text":"data_5006"}
{"text":"data_5007"}
{"text":"data_5008"}
{"text":"data_5009"}
{"text":"data_5010"}
{"text":"data_5011"}
{"text":"data_5012"}
{"text":"data_5013"}
{"text":"data_5014"}
{"text":"data_5015"}
{"text":"data_5016"}
{"text":"data_5017"}
{"text":"data_5018"}
{"text":"data_5019"}
{"text":"data_5020"}
{"text":"data_5021"}
{"text":"data_5022"}
{"text":"data_5023"}
{"text":"data_5024"}
{"text":"data_5025"}
{"text":"data_5026"}
{"text":"data_5027"}
{"text":"data_5028"}
{"text":"data_5029"}
{"text":"data_5030"}
{"text":"data_5031"}
{"text":"data_5032"}
{"text":"data_5033"}
{"text":"data_5034"}
{"text":"data_5035"}
{"text":"data_5036"}
{"text":"data_5037"}
{"text":"data_5038"}
{"text":"data_5039"}
{"text":"data_5040"}
{"text":"data_5041"}
{"text":"data_5042"}
{"text":"data_5043"}
{"text":"data_5044"}
{"text":"data_5045"}
{"text":"data_5046"}
{"text":"data_5047"}
{"text":"data_5048"}
{"text":"data_5049"}
{"text":"data_5050"}
{"text":"data_5051"}
{"text":"data_5052"}
{"text":"data_5053"}
{"text":"data_5054"}
{"text":"data_5055"}
{"text":"data_5056"}
{"text":"data_5057"}
{"text":"data_5058"}
{"text":"data_5059"}
{"text":"data_5060"}
{"text":"data_5061"}
{"text":"data_5062"}
{"text":"data_5063"}
{"text":"data_5064"}
{"text":"data_5065"}
{"text":"data_5066"}
{"text":"data_5067"}
{"text":"data_5068"}
{"text":"data_5069"}
{"text":"data_5070"}
{"text":"data_5071"}
{"text":"data_5072"}
{"text":"data_5073"}
{"text":"data_5074"}
{"text":"data_5075"}
{"text":"data_5076"}
{"text":"data_5077"}
{"text":"data_5078"}
{"text":"data_5079"}
{"text":"data_5080"}
{"text":"data_5081"}
{"text":"data_5082"}
{"text":"data_5083"}
{"text":"data_5084"}
{"text":"data_5085"}
{"text":"data_5086"}
{"text":"data_5087"}
{"text":"data_5088"}
{"text":"data_5089"}
{"text":"data_5090"}
{"text":"data_5091"}
{"text":"data_5092"}
{"text":"data_5093"}
{"text":"data_5094"}
{"text":"data_5095"}
{"text":"data_5096"}
{"text":"data_5097"}
{"text":"data_5098"}
{"text":"data_5099"}
{"text":"data_5100"}
{"text":"data_5101"}
{"text":"data_5102"}
{"text":"data_5103"}
{"text":"data_5104"}
{"text":"data_5105"}
{"text":"data_5106"}
{"text":"data_5107"}
{"text":"data_5108"}
{"text":"data_5109"}
{"text":"data_5110"}
{"text":"data_5111"}
{"text":"data_5112"}
{"text":"data_5113"}
{"text":"data_5114"}
{"text":"data_5115"}
{"text":"data_5116"}
{"text":"data_5117"}
{"text":"data_5118"}
{"text":"data_5119"}
{"text":"data_5120"}
{"text":"data_5121"}
{"text":"data_5122"}
{"text":"data_5123"}
{"text":"data_5124"}
{"text":"data_5125"}
{"text":"data_5126"}
{"text":"data_5127"}
{"text":"data_5128"}
{"text":"data_5129"}
{"text":"data_5130"}
{"text":"data_5131"}
{"text":"data_5132"}
{"text":"data_5133"}
{"text":"data_5134"}
{"text":"data_5135"}
{"text":"data_5136"}
{"text":"data_5137"}
{"text":"data_5138"}
{"text":"data_5139"}
{"text":"data_5140"}
{"text":"data_5141"}
{"text":"data_5142"}
{"text":"data_5143"}
{"text":"data_5144"}
{"text":"data_5145"}
{"text":"data_5146"}
{"text":"data_5147"}
{"text":"data_5148"}
{"text":"data_5149"}
{"text":"data_5150"}
{"text":"data_5151"}
{"text":"data_5152"}
{"text":"data_5153"}
{"text":"data_5154"}
{"text":"data_5155"}
{"text":"data_5156"}
{"text":"data_5157"}
{"text":"data_5158"}
{"text":"data_5159"}
{"text":"data_5160"}
{"text":"data_5161"}
{"text":"data_5162"}
{"text":"data_5163"}
{"text":"data_5164"}
{"text":"data_5165"}
{"text":"data_5166"}
{"text":"data_5167"}
{"text":"data_5168"}
{"text":"data_5169"}
{"text":"data_5170"}
{"text":"data_5171"}
{"text":"data_5172"}
{"text":"data_5173"}
{"text":"data_5174"}
{"text":"data_5175"}
{"text":"data_5176"}
{"text":"data_5177"}
{"text":"data_5178"}
{"text":"data_5179"}
{"text":"data_5180"}
{"text":"data_5181"}
{"text":"data_5182"}
{"text":"data_5183"}
{"text":"data_5184"}
{"text":"data_5185"}
{"text":"data_5186"}
{"text":"data_5187"}
{"text":"data_5188"}
{"text":"data_5189"}
{"text":"data_5190"}
{"text":"data_5191"}
{"text":"data_5192"}
{"text":"data_5193"}
{"text":"data_5194"}
{"text":"data_5195"}
{"text":"data_5196"}
{"text":"data_5197"}
{"text":"data_5198"}
{"text":"data_5199"}
{"text":"data_5200"}
{"text":"data_5201"}
{"text":"data_5202"}
{"text":"data_5203"}
{"text":"data_5204"}
{"text":"data_5205"}
{"text":"data_5206"}
{"text":"data_5207"}
{"text":"data_5208"}
{"text":"data_5209"}
{"text":"data_5210"}
{"text":"data_5211"}
{"text":"data_5212"}
{"text":"data_5213"}
{"text":"data_5214"}
{"text":"data_5215"}
{"text":"data_5216"}
{"text":"data_5217"}
{"text":"data_5218"}
{"text":"data_5219"}
{"text":"data_5220"}
{"text":"data_5221"}
{"text":"data_5222"}
{"text":"data_5223"}
{"text":"data_5224"}
{"text":"data_5225"}
{"text":"data_5226"}
{"text":"data_5227"}
{"text":"data_5228"}
{"text":"data_5229"}
{"text":"data_5230"}
{"text":"data_5231"}
{"text":"data_5232"}
{"text":"data_5233"}
{"text":"data_5234"}
{"text":"data_5235"}
{"text":"data_5236"}
{"text":"data_5237"}
{"text":"data_5238"}
{"text":"data_5239"}
{"text":"data_5240"}
{"text":"data_5241"}
{"text":"data_5242"}
{"text":"data_5243"}
{"text":"data_5244"}
{"text":"data_5245"}
{"text":"data_5246"}
{"text":"data_5247"}
{"text":"data_5248"}
{"text":"data_5249"}
{"text":"data_5250"}
{"text":"data_5251"}
{"text":"data_5252"}
{"text":"data_5253"}
{"text":"data_5254"}
{"text":"data_5255"}
{"text":"data_5256"}
{"text":"data_5257"}
{"text":"data_5258"}
{"text":"data_5259"}
{"text":"data_5260"}
{"text":"data_5261"}
{"text":"data_5262"}
{"text":"data_5263"}
{"text":"data_5264"}
{"text":"data_5265"}
{"text":"data_5266"}
{"text":"data_5267"}
{"text":"data_5268"}
{"text":"data_5269"}
{"text":"data_5270"}
{"text":"data_5271"}
{"text":"data_5272"}
{"text":"data_5273"}
{"text":"data_5274"}
{"text":"data_5275"}
{"text":"data_5276"}
{"text":"data_5277"}
{"text":"data_5278"}
{"text":"data_5279"}
{"text":"data_5280"}
{"text":"data_5281"}
{"text":"data_5282"}
{"text":"data_5283"}
{"text":"data_5284"}
{"text":"data_5285"}
{"text":"data_5286"}
{"text":"data_5287"}
{"text":"data_5288"}
{"text":"data_5289"}
{"text":"data_5290"}
{"text":"data_5291"}
{"text":"data_5292"}
{"text":"data_5293"}
{"text":"data_5294"}
{"text":"data_5295"}
{"text":"data_5296"}
{"text":"data_5297"}
{"text":"data_5298"}
{"text":"data_5299"}
{"text":"data_5300"}
{"text":"data_5301"}
{"text":"data_5302"}
{"text":"data_5303"}
{"text":"data_5304"}
{"text":"data_5305"}
{"text":"data_5306"}
{"text":"data_5307"}
{"text":"data_5308"}
{"text":"data_5309"}
{"text":"data_5310"}
{"text":"data_5311"}
{"text":"data_5312"}
{"text":"data_5313"}
{"text":"data_5314"}
{"text":"data_5315"}
{"text":"data_5316"}
{"text":"data_5317"}
{"text":"data_5318"}
{"text":"data_5319"}
{"text":"data_5320"}
{"text":"data_5321"}
{"text":"data_5322"}
{"text":"data_5323"}
{"text":"data_5324"}
{"text":"data_5325"}
{"text":"data_5326"}
{"text":"data_5327"}
{"text":"data_5328"}
{"text":"data_5329"}
{"text":"data_5330"}
{"text":"data_5331"}
{"text":"data_5332"}
{"text":"data_5333"}
{"text":"data_5334"}
{"text":"data_5335"}
{"text":"data_5336"}
{"text":"data_5337"}
{"text":"data_5338"}
{"text":"data_5339"}
{"text":"data_5340"}
{"text":"data_5341"}
{"text":"data_5342"}
{"text":"data_5343"}
{"text":"data_5344"}
{"text":"data_5345"}
{"text":"data_5346"}
{"text":"data_5347"}
{"text":"data_5348"}
{"text":"data_5349"}
{"text":"data_5350"}
{"text":"data_5351"}
{"text":"data_5352"}
{"text":"data_5353"}
{"text":"data_5354"}
{"text":"data_5355"}
{"text":"data_5356"}
{"text":"data_5357"}
{"text":"data_5358"}
{"text":"data_5359"}
{"text":"data_5360"}
{"text":"data_5361"}
{"text":"data_5362"}
{"text":"data_5363"}
{"text":"data_5364"}
{"text":"data_5365"}
{"text":"data_5366"}
{"text":"data_5367"}
{"text":"data_5368"}
{"text":"data_5369"}
{"text":"data_5370"}
{"text":"data_5371"}
{"text":"data_5372"}
{"text":"data_5373"}
{"text":"data_5374"}
{"text":"data_5375"}
{"text":"data_5376"}
{"text":"data_5377"}
{"text":"data_5378"}
{"text":"data_5379"}
{"text":"data_5380"}
{"text":"data_5381"}
{"text":"data_5382"}
{"text":"data_5383"}
{"text":"data_5384"}
{"text":"data_5385"}
{"text":"data_5386"}
{"text":"data_5387"}
{"text":"data_5388"}
{"text":"data_5389"}
{"text":"data_5390"}
{"text":"data_5391"}
{"text":"data_5392"}
{"text":"data_5393"}
{"text":"data_5394"}
{"text":"data_5395"}
{"text":"data_5396"}
{"text":"data_5397"}
{"text":"data_5398"}
{"text":"data_5399"}
{"text":"data_5400"}
{"text":"data_5401"}
{"text":"data_5402"}
{"text":"data_5403"}
{"text":"data_5404"}
{"text":"data_5405"}
{"text":"data_5406"}
{"text":"data_5407"}
{"text":"data_5408"}
{"text":"data_5409"}
{"text":"data_5410"}
{"text":"data_5411"}
{"text":"data_5412"}
{"text":"data_5413"}
{"text":"data_5414"}
{"text":"data_5415"}
{"text":"data_5416"}
{"text":"data_5417"}
{"text":"data_5418"}
{"text":"data_5419"}
{"text":"data_5420"}
{"text":"data_5421"}
{"text":"data_5422"}
{"text":"data_5423"}
{"text":"data_5424"}
{"text":"data_5425"}
{"text":"data_5426"}
{"text":"data_5427"}
{"text":"data_5428"}
{"text":"data_5429"}
{"text":"data_5430"}
{"text":"data_5431"}
{"text":"data_5432"}
{"text":"data_5433"}
{"text":"data_5434"}
{"text":"data_5435"}
{"text":"data_5436"}
{"text":"data_5437"}
{"text":"data_5438"}
{"text":"data_5439"}
{"text":"data_5440"}
{"text":"data_5441"}
{"text":"data_5442"}
{"text":"data_5443"}
{"text":"data_5444"}
{"text":"data_5445"}
{"text":"data_5446"}
{"text":"data_5447"}
{"text":"data_5448"}
{"text":"data_5449"}
{"text":"data_5450"}
{"text":"data_5451"}
{"text":"data_5452"}
{"text":"data_5453"}
{"text":"data_5454"}
{"text":"data_5455"}
{"text":"data_5456"}
{"text":"data_5457"}
{"text":"data_5458"}
{"text":"data_5459"}
{"text":"data_5460"}
{"text":"data_5461"}
{"text":"data_5462"}
{"text":"data_5463"}
{"text":"data_5464"}
{"text":"data_5465"}
{"text":"data_5466"}
{"text":"data_5467"}
{"text":"data_5468"}
{"text":"data_5469"}
{"text":"data_5470"}
{"text":"data_5471"}
{"text":"data_5472"}
{"text":"data_5473"}
{"text":"data_5474"}
{"text":"data_5475"}
{"text":"data_5476"}
{"text":"data_5477"}
{"text":"data_5478"}
{"text":"data_5479"}
{"text":"data_5480"}
{"text":"data_5481"}
{"text":"data_5482"}
{"text":"data_5483"}
{"text":"data_5484"}
{"text":"data_5485"}
{"text":"data_5486"}
{"text":"data_5487"}
{"text":"data_5488"}
{"text":"data_5489"}
{"text":"data_5490"}
{"text":"data_5491"}
{"text":"data_5492"}
{"text":"data_5493"}
{"text":"data_5494"}
{"text":"data_5495"}
{"text":"data_5496"}
{"text":"data_5497"}
{"text":"data_5498"}
{"text":"data_5499"}
{"text":"data_5500"}
{"text":"data_5501"}
{"text":"data_5502"}
{"text":"data_5503"}
{"text":"data_5504"}
{"text":"data_5505"}
{"text":"data_5506"}
{"text":"data_5507"}
{"text":"data_5508"}
{"text":"data_5509"}
{"text":"data_5510"}
{"text":"data_5511"}
{"text":"data_5512"}
{"text":"data_5513"}
{"text":"data_5514"}
{"text":"data_5515"}
{"text":"data_5516"}
{"text":"data_5517"}
{"text":"data_5518"}
{"text":"data_5519"}
{"text":"data_5520"}
{"text":"data_5521"}
{"text":"data_5522"}
{"text":"data_5523"}
{"text":"data_5524"}
{"text":"data_5525"}
{"text":"data_5526"}
{"text":"data_5527"}
{"text":"data_5528"}
{"text":"data_5529"}
{"text":"data_5530"}
{"text":"data_5531"}
{"text":"data_5532"}
{"text":"data_5533"}
{"text":"data_5534"}
{"text":"data_5535"}
{"text":"data_5536"}
{"text":"data_5537"}
{"text":"data_5538"}
{"text":"data_5539"}
{"text":"data_5540"}
{"text":"data_5541"}
{"text":"data_5542"}
{"text":"data_5543"}
{"text":"data_5544"}
{"text":"data_5545"}
{"text":"data_5546"}
{"text":"data_5547"}
{"text":"data_5548"}
{"text":"data_5549"}
{"text":"data_5550"}
{"text":"data_5551"}
{"text":"data_5552"}
{"text":"data_5553"}
{"text":"data_5554"}
{"text":"data_5555"}
{"text":"data_5556"}
{"text":"data_5557"}
{"text":"data_5558"}
{"text":"data_5559"}
{"text":"data_5560"}
{"text":"data_5561"}
{"text":"data_5562"}
{"text":"data_5563"}
{"text":"data_5564"}
{"text":"data_5565"}
{"text":"data_5566"}
{"text":"data_5567"}
{"text":"data_5568"}
{"text":"data_5569"}
{"text":"data_5570"}
{"text":"data_5571"}
{"text":"data_5572"}
{"text":"data_5573"}
{"text":"data_5574"}
{"text":"data_5575"}
{"text":"data_5576"}
{"text":"data_5577"}
{"text":"data_5578"}
{"text":"data_5579"}
{"text":"data_5580"}
{"text":"data_5581"}
{"text":"data_5582"}
{"text":"data_5583"}
{"text":"data_5584"}
{"text":"data_5585"}
{"text":"data_5586"}
{"text":"data_5587"}
{"text":"data_5588"}
{"text":"data_5589"}
{"text":"data_5590"}
{"text":"data_5591"}
{"text":"data_5592"}
{"text":"data_5593"}
{"text":"data_5594"}
{"text":"data_5595"}
{"text":"data_5596"}
{"text":"data_5597"}
{"text":"data_5598"}
{"text":"data_5599"}
{"text":"data_5600"}
{"text":"data_5601"}
{"text":"data_5602"}
{"text":"data_5603"}
{"text":"data_5604"}
{"text":"data_5605"}
{"text":"data_5606"}
{"text":"data_5607"}
{"text":"data_5608"}
{"text":"data_5609"}
{"text":"data_5610"}
{"text":"data_5611"}
{"text":"data_5612"}
{"text":"data_5613"}
{"text":"data_5614"}
{"text":"data_5615"}
{"text":"data_5616"}
{"text":"data_5617"}
{"text":"data_5618"}
{"text":"data_5619"}
{"text":"data_5620"}
{"text":"data_5621"}
{"text":"data_5622"}
{"text":"data_5623"}
{"text":"data_5624"}
{"text":"data_5625"}
{"text":"data_5626"}
{"text":"data_5627"}
{"text":"data_5628"}
{"text":"data_5629"}
{"text":"data_5630"}
{"text":"data_5631"}
{"text":"data_5632"}
{"text":"data_5633"}
{"text":"data_5634"}
{"text":"data_5635"}
{"text":"data_5636"}
{"text":"data_5637"}
{"text":"data_5638"}
{"text":"data_5639"}
{"text":"data_5640"}
{"text":"data_5641"}
{"text":"data_5642"}
{"text":"data_5643"}
{"text":"data_5644"}
{"text":"data_5645"}
{"text":"data_5646"}
{"text":"data_5647"}
{"text":"data_5648"}
{"text":"data_5649"}
{"text":"data_5650"}
{"text":"data_5651"}
{"text":"data_5652"}
{"text":"data_5653"}
{"text":"data_5654"}
{"text":"data_5655"}
{"text":"data_5656"}
{"text":"data_5657"}
{"text":"data_5658"}
{"text":"data_5659"}
{"text":"data_5660"}
{"text":"data_5661"}
{"text":"data_5662"}
{"text":"data_5663"}
{"text":"data_5664"}
{"text":"data_5665"}
{"text":"data_5666"}
{"text":"data_5667"}
{"text":"data_5668"}
{"text":"data_5669"}
{"text":"data_5670"}
{"text":"data_5671"}
{"text":"data_5672"}
{"text":"data_5673"}
{"text":"data_5674"}
{"text":"data_5675"}
{"text":"data_5676"}
{"text":"data_5677"}
{"text":"data_5678"}
{"text":"data_5679"}
{"text":"data_5680"}
{"text":"data_5681"}
{"text":"data_5682"}
{"text":"data_5683"}
{"text":"data_5684"}
{"text":"data_5685"}
{"text":"data_5686"}
{"text":"data_5687"}
{"text":"data_5688"}
{"text":"data_5689"}
{"text":"data_5690"}
{"text":"data_5691"}
{"text":"data_5692"}
{"text":"data_5693"}
{"text":"data_5694"}
{"text":"data_5695"}
{"text":"data_5696"}
{"text":"data_5697"}
{"text":"data_5698"}
{"text":"data_5699"}
{"text":"data_5700"}
{"text":"data_5701"}
{"text":"data_5702"}
{"text":"data_5703"}
{"text":"data_5704"}
{"text":"data_5705"}
{"text":"data_5706"}
{"text":"data_5707"}
{"text":"data_5708"}
{"text":"data_5709"}
{"text":"data_5710"}
{"text":"data_5711"}
{"text":"data_5712"}
{"text":"data_5713"}
{"text":"data_5714"}
{"text":"data_5715"}
{"text":"data_5716"}
{"text":"data_5717"}
{"text":"data_5718"}
{"text":"data_5719"}
{"text":"data_5720"}
{"text":"data_5721"}
{"text":"data_5722"}
{"text":"data_5723"}
{"text":"data_5724"}
{"text":"data_5725"}
{"text":"data_5726"}
{"text":"data_5727"}
{"text":"data_5728"}
{"text":"data_5729"}
{"text":"data_5730"}
{"text":"data_5731"}
{"text":"data_5732"}
{"text":"data_5733"}
{"text":"data_5734"}
{"text":"data_5735"}
{"text":"data_5736"}
{"text":"data_5737"}
{"text":"data_5738"}
{"text":"data_5739"}
{"text":"data_5740"}
{"text":"data_5741"}
{"text":"data_5742"}
{"text":"data_5743"}
{"text":"data_5744"}
{"text":"data_5745"}
{"text":"data_5746"}
{"text":"data_5747"}
{"text":"data_5748"}
{"text":"data_5749"}
{"text":"data_5750"}
{"text":"data_5751"}
{"text":"data_5752"}
{"text":"data_5753"}
{"text":"data_5754"}
{"text":"data_5755"}
{"text":"data_5756"}
{"text":"data_5757"}
{"text":"data_5758"}
{"text":"data_5759"}
{"text":"data_5760"}
{"text":"data_5761"}
{"text":"data_5762"}
{"text":"data_5763"}
{"text":"data_5764"}
{"text":"data_5765"}
{"text":"data_5766"}
{"text":"data_5767"}
{"text":"data_5768"}
{"text":"data_5769"}
{"text":"data_5770"}
{"text":"data_5771"}
{"text":"data_5772"}
{"text":"data_5773"}
{"text":"data_5774"}
{"text":"data_5775"}
{"text":"data_5776"}
{"text":"data_5777"}
{"text":"data_5778"}
{"text":"data_5779"}
{"text":"data_5780"}
{"text":"data_5781"}
{"text":"data_5782"}
{"text":"data_5783"}
{"text":"data_5784"}
{"text":"data_5785"}
{"text":"data_5786"}
{"text":"data_5787"}
{"text":"data_5788"}
{"text":"data_5789"}
{"text":"data_5790"}
{"text":"data_5791"}
{"text":"data_5792"}
{"text":"data_5793"}
{"text":"data_5794"}
{"text":"data_5795"}
{"text":"data_5796"}
{"text":"data_5797"}
{"text":"data_5798"}
{"text":"data_5799"}
{"text":"data_5800"}
{"text":"data_5801"}
{"text":"data_5802"}
{"text":"data_5803"}
{"text":"data_5804"}
{"text":"data_5805"}
{"text":"data_5806"}
{"text":"data_5807"}
{"text":"data_5808"}
{"text":"data_5809"}
{"text":"data_5810"}
{"text":"data_5811"}
{"text":"data_5812"}
{"text":"data_5813"}
{"text":"data_5814"}
{"text":"data_5815"}
{"text":"data_5816"}
{"text":"data_5817"}
{"text":"data_5818"}
{"text":"data_5819"}
{"text":"data_5820"}
{"text":"data_5821"}
{"text":"data_5822"}
{"text":"data_5823"}
{"text":"data_5824"}
{"text":"data_5825"}
{"text":"data_5826"}
{"text":"data_5827"}
{"text":"data_5828"}
{"text":"data_5829"}
{"text":"data_5830"}
{"text":"data_5831"}
{"text":"data_5832"}
{"text":"data_5833"}
{"text":"data_5834"}
{"text":"data_5835"}
{"text":"data_5836"}
{"text":"data_5837"}
{"text":"data_5838"}
{"text":"data_5839"}
{"text":"data_5840"}
{"text":"data_5841"}
{"text":"data_5842"}
{"text":"data_5843"}
{"text":"data_5844"}
{"text":"data_5845"}
{"text":"data_5846"}
{"text":"data_5847"}
{"text":"data_5848"}
{"text":"data_5849"}
{"text":"data_5850"}
{"text":"data_5851"}
{"text":"data_5852"}
{"text":"data_5853"}
{"text":"data_5854"}
{"text":"data_5855"}
{"text":"data_5856"}
{"text":"data_5857"}
{"text":"data_5858"}
{"text":"data_5859"}
{"text":"data_5860"}
{"text":"data_5861"}
{"text":"data_5862"}
{"text":"data_5863"}
{"text":"data_5864"}
{"text":"data_5865"}
{"text":"data_5866"}
{"text":"data_5867"}
{"text":"data_5868"}
{"text":"data_5869"}
{"text":"data_5870"}
{"text":"data_5871"}
{"text":"data_5872"}
{"text":"data_5873"}
{"text":"data_5874"}
{"text":"data_5875"}
{"text":"data_5876"}
{"text":"data_5877"}
{"text":"data_5878"}
{"text":"data_5879"}
{"text":"data_5880"}
{"text":"data_5881"}
{"text":"data_5882"}
{"text":"data_5883"}
{"text":"data_5884"}
{"text":"data_5885"}
{"text":"data_5886"}
{"text":"data_5887"}
{"text":"data_5888"}
{"text":"data_5889"}
{"text":"data_5890"}
{"text":"data_5891"}
{"text":"data_5892"}
{"text":"data_5893"}
{"text":"data_5894"}
{"text":"data_5895"}
{"text":"data_5896"}
{"text":"data_5897"}
{"text":"data_5898"}
{"text":"data_5899"}
{"text":"data_5900"}
{"text":"data_5901"}
{"text":"data_5902"}
{"text":"data_5903"}
{"text":"data_5904"}
{"text":"data_5905"}
{"text":"data_5906"}
{"text":"data_5907"}
{"text":"data_5908"}
{"text":"data_5909"}
{"text":"data_5910"}
{"text":"data_5911"}
{"text":"data_5912"}
{"text":"data_5913"}
{"text":"data_5914"}
{"text":"data_5915"}
{"text":"data_5916"}
{"text":"data_5917"}
{"text":"data_5918"}
{"text":"data_5919"}
{"text":"data_5920"}
{"text":"data_5921"}
{"text":"data_5922"}
{"text":"data_5923"}
{"text":"data_5924"}
{"text":"data_5925"}
{"text":"data_5926"}
{"text":"data_5927"}
{"text":"data_5928"}
{"text":"data_5929"}
{"text":"data_5930"}
{"text":"data_5931"}
{"text":"data_5932"}
{"text":"data_5933"}
{"text":"data_5934"}
{"text":"data_5935"}
{"text":"data_5936"}
{"text":"data_5937"}
{"text":"data_5938"}
{"text":"data_5939"}
{"text":"data_5940"}
{"text":"data_5941"}
{"text":"data_5942"}
{"text":"data_5943"}
{"text":"data_5944"}
{"text":"data_5945"}
{"text":"data_5946"}
{"text":"data_5947"}
{"text":"data_5948"}
{"text":"data_5949"}
{"text":"data_5950"}
{"text":"data_5951"}
{"text":"data_5952"}
{"text":"data_5953"}
{"text":"data_5954"}
{"text":"data_5955"}
{"text":"data_5956"}
{"text":"data_5957"}
{"text":"data_5958"}
{"text":"data_5959"}
{"text":"data_5960"}
{"text":"data_5961"}
{"text":"data_5962"}
{"text":"data_5963"}
{"text":"data_5964"}
{"text":"data_5965"}
{"text":"data_5966"}
{"text":"data_5967"}
{"text":"data_5968"}
{"text":"data_5969"}
{"text":"data_5970"}
{"text":"data_5971"}
{"text":"data_5972"}
{"text":"data_5973"}
{"text":"data_5974"}
{"text":"data_5975"}
{"text":"data_5976"}
{"text":"data_5977"}
{"text":"data_5978"}
{"text":"data_5979"}
{"text":"data_5980"}
{"text":"data_5981"}
{"text":"data_5982"}
{"text":"data_5983"}
{"text":"data_5984"}
{"text":"data_5985"}
{"text":"data_5986"}
{"text":"data_5987"}
{"text":"data_5988"}
{"text":"data_5989"}
{"text":"data_5990"}
{"text":"data_5991"}
{"text":"data_5992"}
{"text":"data_5993"}
{"text":"data_5994"}
{"text":"data_5995"}
{"text":"data_5996"}
{"text":"data_5997"}
{"text":"data_5998"}
{"text":"data_5999"}
{"text":"data_6000"}
{"text":"data_6001"}
{"text":"data_6002"}
{"text":"data_6003"}
{"text":"data_6004"}
{"text":"data_6005"}
{"text":"data_6006"}
{"text":"data_6007"}
{"text":"data_6008"}
{"text":"data_6009"}
{"text":"data_6010"}
{"text":"data_6011"}
{"text":"data_6012"}
{"text":"data_6013"}
{"text":"data_6014"}
{"text":"data_6015"}
{"text":"data_6016"}
{"text":"data_6017"}
{"text":"data_6018"}
{"text":"data_6019"}
{"text":"data_6020"}
{"text":"data_6021"}
{"text":"data_6022"}
{"text":"data_6023"}
{"text":"data_6024"}
{"text":"data_6025"}
{"text":"data_6026"}
{"text":"data_6027"}
{"text":"data_6028"}
{"text":"data_6029"}
{"text":"data_6030"}
{"text":"data_6031"}
{"text":"data_6032"}
{"text":"data_6033"}
{"text":"data_6034"}
{"text":"data_6035"}
{"text":"data_6036"}
{"text":"data_6037"}
{"text":"data_6038"}
{"text":"data_6039"}
{"text":"data_6040"}
{"text":"data_6041"}
{"text":"data_6042"}
{"text":"data_6043"}
{"text":"data_6044"}
{"text":"data_6045"}
{"text":"data_6046"}
{"text":"data_6047"}
{"text":"data_6048"}
{"text":"data_6049"}
{"text":"data_6050"}
{"text":"data_6051"}
{"text":"data_6052"}
{"text":"data_6053"}
{"text":"data_6054"}
{"text":"data_6055"}
{"text":"data_6056"}
{"text":"data_6057"}
{"text":"data_6058"}
{"text":"data_6059"}
{"text":"data_6060"}
{"text":"data_6061"}
{"text":"data_6062"}
{"text":"data_6063"}
{"text":"data_6064"}
{"text":"data_6065"}
{"text":"data_6066"}
{"text":"data_6067"}
{"text":"data_6068"}
{"text":"data_6069"}
{"text":"data_6070"}
{"text":"data_6071"}
{"text":"data_6072"}
{"text":"data_6073"}
{"text":"data_6074"}
{"text":"data_6075"}
{"text":"data_6076"}
{"text":"data_6077"}
{"text":"data_6078"}
{"text":"data_6079"}
{"text":"data_6080"}
{"text":"data_6081"}
{"text":"data_6082"}
{"text":"data_6083"}
{"text":"data_6084"}
{"text":"data_6085"}
{"text":"data_6086"}
{"text":"data_6087"}
{"text":"data_6088"}
{"text":"data_6089"}
{"text":"data_6090"}
{"text":"data_6091"}
{"text":"data_6092"}
{"text":"data_6093"}
{"text":"data_6094"}
{"text":"data_6095"}
{"text":"data_6096"}
{"text":"data_6097"}
{"text":"data_6098"}
{"text":"data_6099"}
{"text":"data_6100"}
{"text":"data_6101"}
{"text":"data_6102"}
{"text":"data_6103"}
{"text":"data_6104"}
{"text":"data_6105"}
{"text":"data_6106"}
{"text":"data_6107"}
{"text":"data_6108"}
{"text":"data_6109"}
{"text":"data_6110"}
{"text":"data_6111"}
{"text":"data_6112"}
{"text":"data_6113"}
{"text":"data_6114"}
{"text":"data_6115"}
{"text":"data_6116"}
{"text":"data_6117"}
{"text":"data_6118"}
{"text":"data_6119"}
{"text":"data_6120"}
{"text":"data_6121"}
{"text":"data_6122"}
{"text":"data_6123"}
{"text":"data_6124"}
{"text":"data_6125"}
{"text":"data_6126"}
{"text":"data_6127"}
{"text":"data_6128"}
{"text":"data_6129"}
{"text":"data_6130"}
{"text":"data_6131"}
{"text":"data_6132"}
{"text":"data_6133"}
{"text":"data_6134"}
{"text":"data_6135"}
{"text":"data_6136"}
{"text":"data_6137"}
{"text":"data_6138"}
{"text":"data_6139"}
{"text":"data_6140"}
{"text":"data_6141"}
{"text":"data_6142"}
{"text":"data_6143"}
{"text":"data_6144"}
{"text":"data_6145"}
{"text":"data_6146"}
{"text":"data_6147"}
{"text":"data_6148"}
{"text":"data_6149"}
{"text":"data_6150"}
{"text":"data_6151"}
{"text":"data_6152"}
{"text":"data_6153"}
{"text":"data_6154"}
{"text":"data_6155"}
{"text":"data_6156"}
{"text":"data_6157"}
{"text":"data_6158"}
{"text":"data_6159"}
{"text":"data_6160"}
{"text":"data_6161"}
{"text":"data_6162"}
{"text":"data_6163"}
{"text":"data_6164"}
{"text":"data_6165"}
{"text":"data_6166"}
{"text":"data_6167"}
{"text":"data_6168"}
{"text":"data_6169"}
{"text":"data_6170"}
{"text":"data_6171"}
{"text":"data_6172"}
{"text":"data_6173"}
{"text":"data_6174"}
{"text":"data_6175"}
{"text":"data_6176"}
{"text":"data_6177"}
{"text":"data_6178"}
{"text":"data_6179"}
{"text":"data_6180"}
{"text":"data_6181"}
{"text":"data_6182"}
{"text":"data_6183"}
{"text":"data_6184"}
{"text":"data_6185"}
{"text":"data_6186"}
{"text":"data_6187"}
{"text":"data_6188"}
{"text":"data_6189"}
{"text":"data_6190"}
{"text":"data_6191"}
{"text":"data_6192"}
{"text":"data_6193"}
{"text":"data_6194"}
{"text":"data_6195"}
{"text":"data_6196"}
{"text":"data_6197"}
{"text":"data_6198"}
{"text":"data_6199"}
{"text":"data_6200"}
{"text":"data_6201"}
{"text":"data_6202"}
{"text":"data_6203"}
{"text":"data_6204"}
{"text":"data_6205"}
{"text":"data_6206"}
{"text":"data_6207"}
{"text":"data_6208"}
{"text":"data_6209"}
{"text":"data_6210"}
{"text":"data_6211"}
{"text":"data_6212"}
{"text":"data_6213"}
{"text":"data_6214"}
{"text":"data_6215"}
{"text":"data_6216"}
{"text":"data_6217"}
{"text":"data_6218"}
{"text":"data_6219"}
{"text":"data_6220"}
{"text":"data_6221"}
{"text":"data_6222"}
{"text":"data_6223"}
{"text":"data_6224"}
{"text":"data_6225"}
{"text":"data_6226"}
{"text":"data_6227"}
{"text":"data_6228"}
{"text":"data_6229"}
{"text":"data_6230"}
{"text":"data_6231"}
{"text":"data_6232"}
{"text":"data_6233"}
{"text":"data_6234"}
{"text":"data_6235"}
{"text":"data_6236"}
{"text":"data_6237"}
{"text":"data_6238"}
{"text":"data_6239"}
{"text":"data_6240"}
{"text":"data_6241"}
{"text":"data_6242"}
{"text":"data_6243"}
{"text":"data_6244"}
{"text":"data_6245"}
{"text":"data_6246"}
{"text":"data_6247"}
{"text":"data_6248"}
{"text":"data_6249"}
{"text":"data_6250"}
{"text":"data_6251"}
{"text":"data_6252"}
{"text":"data_6253"}
{"text":"data_6254"}
{"text":"data_6255"}
{"text":"data_6256"}
{"text":"data_6257"}
{"text":"data_6258"}
{"text":"data_6259"}
{"text":"data_6260"}
{"text":"data_6261"}
{"text":"data_6262"}
{"text":"data_6263"}
{"text":"data_6264"}
{"text":"data_6265"}
{"text":"data_6266"}
{"text":"data_6267"}
{"text":"data_6268"}
{"text":"data_6269"}
{"text":"data_6270"}
{"text":"data_6271"}
{"text":"data_6272"}
{"text":"data_6273"}
{"text":"data_6274"}
{"text":"data_6275"}
{"text":"data_6276"}
{"text":"data_6277"}
{"text":"data_6278"}
{"text":"data_6279"}
{"text":"data_6280"}
{"text":"data_6281"}
{"text":"data_6282"}
{"text":"data_6283"}
{"text":"data_6284"}
{"text":"data_6285"}
{"text":"data_6286"}
{"text":"data_6287"}
{"text":"data_6288"}
{"text":"data_6289"}
{"text":"data_6290"}
{"text":"data_6291"}
{"text":"data_6292"}
{"text":"data_6293"}
{"text":"data_6294"}
{"text":"data_6295"}
{"text":"data_6296"}
{"text":"data_6297"}
{"text":"data_6298"}
{"text":"data_6299"}
{"text":"data_6300"}
{"text":"data_6301"}
{"text":"data_6302"}
{"text":"data_6303"}
{"text":"data_6304"}
{"text":"data_6305"}
{"text":"data_6306"}
{"text":"data_6307"}
{"text":"data_6308"}
{"text":"data_6309"}
{"text":"data_6310"}
{"text":"data_6311"}
{"text":"data_6312"}
{"text":"data_6313"}
{"text":"data_6314"}
{"text":"data_6315"}
{"text":"data_6316"}
{"text":"data_6317"}
{"text":"data_6318"}
{"text":"data_6319"}
{"text":"data_6320"}
{"text":"data_6321"}
{"text":"data_6322"}
{"text":"data_6323"}
{"text":"data_6324"}
{"text":"data_6325"}
{"text":"data_6326"}
{"text":"data_6327"}
{"text":"data_6328"}
{"text":"data_6329"}
{"text":"data_6330"}
{"text":"data_6331"}
{"text":"data_6332"}
{"text":"data_6333"}
{"text":"data_6334"}
{"text":"data_6335"}
{"text":"data_6336"}
{"text":"data_6337"}
{"text":"data_6338"}
{"text":"data_6339"}
{"text":"data_6340"}
{"text":"data_6341"}
{"text":"data_6342"}
{"text":"data_6343"}
{"text":"data_6344"}
{"text":"data_6345"}
{"text":"data_6346"}
{"text":"data_6347"}
{"text":"data_6348"}
{"text":"data_6349"}
{"text":"data_6350"}
{"text":"data_6351"}
{"text":"data_6352"}
{"text":"data_6353"}
{"text":"data_6354"}
{"text":"data_6355"}
{"text":"data_6356"}
{"text":"data_6357"}
{"text":"data_6358"}
{"text":"data_6359"}
{"text":"data_6360"}
{"text":"data_6361"}
{"text":"data_6362"}
{"text":"data_6363"}
{"text":"data_6364"}
{"text":"data_6365"}
{"text":"data_6366"}
{"text":"data_6367"}
{"text":"data_6368"}
{"text":"data_6369"}
{"text":"data_6370"}
{"text":"data_6371"}
{"text":"data_6372"}
{"text":"data_6373"}
{"text":"data_6374"}
{"text":"data_6375"}
{"text":"data_6376"}
{"text":"data_6377"}
{"text":"data_6378"}
{"text":"data_6379"}
{"text":"data_6380"}
{"text":"data_6381"}
{"text":"data_6382"}
{"text":"data_6383"}
{"text":"data_6384"}
{"text":"data_6385"}
{"text":"data_6386"}
{"text":"data_6387"}
{"text":"data_6388"}
{"text":"data_6389"}
{"text":"data_6390"}
{"text":"data_6391"}
{"text":"data_6392"}
{"text":"data_6393"}
{"text":"data_6394"}
{"text":"data_6395"}
{"text":"data_6396"}
{"text":"data_6397"}
{"text":"data_6398"}
{"text":"data_6399"}
{"text":"data_6400"}
{"text":"data_6401"}
{"text":"data_6402"}
{"text":"data_6403"}
{"text":"data_6404"}
{"text":"data_6405"}
{"text":"data_6406"}
{"text":"data_6407"}
{"text":"data_6408"}
{"text":"data_6409"}
{"text":"data_6410"}
{"text":"data_6411"}
{"text":"data_6412"}
{"text":"data_6413"}
{"text":"data_6414"}
{"text":"data_6415"}
{"text":"data_6416"}
{"text":"data_6417"}
{"text":"data_6418"}
{"text":"data_6419"}
{"text":"data_6420"}
{"text":"data_6421"}
{"text":"data_6422"}
{"text":"data_6423"}
{"text":"data_6424"}
{"text":"data_6425"}
{"text":"data_6426"}
{"text":"data_6427"}
{"text":"data_6428"}
{"text":"data_6429"}
{"text":"data_6430"}
{"text":"data_6431"}
{"text":"data_6432"}
{"text":"data_6433"}
{"text":"data_6434"}
{"text":"data_6435"}
{"text":"data_6436"}
{"text":"data_6437"}
{"text":"data_6438"}
{"text":"data_6439"}
{"text":"data_6440"}
{"text":"data_6441"}
{"text":"data_6442"}
{"text":"data_6443"}
{"text":"data_6444"}
{"text":"data_6445"}
{"text":"data_6446"}
{"text":"data_6447"}
{"text":"data_6448"}
{"text":"data_6449"}
{"text":"data_6450"}
{"text":"data_6451"}
{"text":"data_6452"}
{"text":"data_6453"}
{"text":"data_6454"}
{"text":"data_6455"}
{"text":"data_6456"}
{"text":"data_6457"}
{"text":"data_6458"}
{"text":"data_6459"}
{"text":"data_6460"}
{"text":"data_6461"}
{"text":"data_6462"}
{"text":"data_6463"}
{"text":"data_6464"}
{"text":"data_6465"}
{"text":"data_6466"}
{"text":"data_6467"}
{"text":"data_6468"}
{"text":"data_6469"}
{"text":"data_6470"}
{"text":"data_6471"}
{"text":"data_6472"}
{"text":"data_6473"}
{"text":"data_6474"}
{"text":"data_6475"}
{"text":"data_6476"}
{"text":"data_6477"}
{"text":"data_6478"}
{"text":"data_6479"}
{"text":"data_6480"}
{"text":"data_6481"}
{"text":"data_6482"}
{"text":"data_6483"}
{"text":"data_6484"}
{"text":"data_6485"}
{"text":"data_6486"}
{"text":"data_6487"}
{"text":"data_6488"}
{"text":"data_6489"}
{"text":"data_6490"}
{"text":"data_6491"}
{"text":"data_6492"}
{"text":"data_6493"}
{"text":"data_6494"}
{"text":"data_6495"}
{"text":"data_6496"}
{"text":"data_6497"}
{"text":"data_6498"}
{"text":"data_6499"}
{"text":"data_6500"}
{"text":"data_6501"}
{"text":"data_6502"}
{"text":"data_6503"}
{"text":"data_6504"}
{"text":"data_6505"}
{"text":"data_6506"}
{"text":"data_6507"}
{"text":"data_6508"}
{"text":"data_6509"}
{"text":"data_6510"}
{"text":"data_6511"}
{"text":"data_6512"}
{"text":"data_6513"}
{"text":"data_6514"}
{"text":"data_6515"}
{"text":"data_6516"}
{"text":"data_6517"}
{"text":"data_6518"}
{"text":"data_6519"}
{"text":"data_6520"}
{"text":"data_6521"}
{"text":"data_6522"}
{"text":"data_6523"}
{"text":"data_6524"}
{"text":"data_6525"}
{"text":"data_6526"}
{"text":"data_6527"}
{"text":"data_6528"}
{"text":"data_6529"}
{"text":"data_6530"}
{"text":"data_6531"}
{"text":"data_6532"}
{"text":"data_6533"}
{"text":"data_6534"}
{"text":"data_6535"}
{"text":"data_6536"}
{"text":"data_6537"}
{"text":"data_6538"}
{"text":"data_6539"}
{"text":"data_6540"}
{"text":"data_6541"}
{"text":"data_6542"}
{"text":"data_6543"}
{"text":"data_6544"}
{"text":"data_6545"}
{"text":"data_6546"}
{"text":"data_6547"}
{"text":"data_6548"}
{"text":"data_6549"}
{"text":"data_6550"}
{"text":"data_6551"}
{"text":"data_6552"}
{"text":"data_6553"}
{"text":"data_6554"}
{"text":"data_6555"}
{"text":"data_6556"}
{"text":"data_6557"}
{"text":"data_6558"}
{"text":"data_6559"}
{"text":"data_6560"}
{"text":"data_6561"}
{"text":"data_6562"}
{"text":"data_6563"}
{"text":"data_6564"}
{"text":"data_6565"}
{"text":"data_6566"}
{"text":"data_6567"}
{"text":"data_6568"}
{"text":"data_6569"}
{"text":"data_6570"}
{"text":"data_6571"}
{"text":"data_6572"}
{"text":"data_6573"}
{"text":"data_6574"}
{"text":"data_6575"}
{"text":"data_6576"}
{"text":"data_6577"}
{"text":"data_6578"}
{"text":"data_6579"}
{"text":"data_6580"}
{"text":"data_6581"}
{"text":"data_6582"}
{"text":"data_6583"}
{"text":"data_6584"}
{"text":"data_6585"}
{"text":"data_6586"}
{"text":"data_6587"}
{"text":"data_6588"}
{"text":"data_6589"}
{"text":"data_6590"}
{"text":"data_6591"}
{"text":"data_6592"}
{"text":"data_6593"}
{"text":"data_6594"}
{"text":"data_6595"}
{"text":"data_6596"}
{"text":"data_6597"}
{"text":"data_6598"}
{"text":"data_6599"}
{"text":"data_6600"}
{"text":"data_6601"}
{"text":"data_6602"}
{"text":"data_6603"}
{"text":"data_6604"}
{"text":"data_6605"}
{"text":"data_6606"}
{"text":"data_6607"}
{"text":"data_6608"}
{"text":"data_6609"}
{"text":"data_6610"}
{"text":"data_6611"}
{"text":"data_6612"}
{"text":"data_6613"}
{"text":"data_6614"}
{"text":"data_6615"}
{"text":"data_6616"}
{"text":"data_6617"}
{"text":"data_6618"}
{"text":"data_6619"}
{"text":"data_6620"}
{"text":"data_6621"}
{"text":"data_6622"}
{"text":"data_6623"}
{"text":"data_6624"}
{"text":"data_6625"}
{"text":"data_6626"}
{"text":"data_6627"}
{"text":"data_6628"}
{"text":"data_6629"}
{"text":"data_6630"}
{"text":"data_6631"}
{"text":"data_6632"}
{"text":"data_6633"}
{"text":"data_6634"}
{"text":"data_6635"}
{"text":"data_6636"}
{"text":"data_6637"}
{"text":"data_6638"}
{"text":"data_6639"}
{"text":"data_6640"}
{"text":"data_6641"}
{"text":"data_6642"}
{"text":"data_6643"}
{"text":"data_6644"}
{"text":"data_6645"}
{"text":"data_6646"}
{"text":"data_6647"}
{"text":"data_6648"}
{"text":"data_6649"}
{"text":"data_6650"}
{"text":"data_6651"}
{"text":"data_6652"}
{"text":"data_6653"}
{"text":"data_6654"}
{"text":"data_6655"}
{"text":"data_6656"}
{"text":"data_6657"}
{"text":"data_6658"}
{"text":"data_6659"}
{"text":"data_6660"}
{"text":"data_6661"}
{"text":"data_6662"}
{"text":"data_6663"}
{"text":"data_6664"}
{"text":"data_6665"}
{"text":"data_6666"}
{"text":"data_6667"}
{"text":"data_6668"}
{"text":"data_6669"}
{"text":"data_6670"}
{"text":"data_6671"}
{"text":"data_6672"}
{"text":"data_6673"}
{"text":"data_6674"}
{"text":"data_6675"}
{"text":"data_6676"}
{"text":"data_6677"}
{"text":"data_6678"}
{"text":"data_6679"}
{"text":"data_6680"}
{"text":"data_6681"}
{"text":"data_6682"}
{"text":"data_6683"}
{"text":"data_6684"}
{"text":"data_6685"}
{"text":"data_6686"}
{"text":"data_6687"}
{"text":"data_6688"}
{"text":"data_6689"}
{"text":"data_6690"}
{"text":"data_6691"}
{"text":"data_6692"}
{"text":"data_6693"}
{"text":"data_6694"}
{"text":"data_6695"}
{"text":"data_6696"}
{"text":"data_6697"}
{"text":"data_6698"}
{"text":"data_6699"}
{"text":"data_6700"}
{"text":"data_6701"}
{"text":"data_6702"}
{"text":"data_6703"}
{"text":"data_6704"}
{"text":"data_6705"}
{"text":"data_6706"}
{"text":"data_6707"}
{"text":"data_6708"}
{"text":"data_6709"}
{"text":"data_6710"}
{"text":"data_6711"}
{"text":"data_6712"}
{"text":"data_6713"}
{"text":"data_6714"}
{"text":"data_6715"}
{"text":"data_6716"}
{"text":"data_6717"}
{"text":"data_6718"}
{"text":"data_6719"}
{"text":"data_6720"}
{"text":"data_6721"}
{"text":"data_6722"}
{"text":"data_6723"}
{"text":"data_6724"}
{"text":"data_6725"}
{"text":"data_6726"}
{"text":"data_6727"}
{"text":"data_6728"}
{"text":"data_6729"}
{"text":"data_6730"}
{"text":"data_6731"}
{"text":"data_6732"}
{"text":"data_6733"}
{"text":"data_6734"}
{"text":"data_6735"}
{"text":"data_6736"}
{"text":"data_6737"}
{"text":"data_6738"}
{"text":"data_6739"}
{"text":"data_6740"}
{"text":"data_6741"}
{"text":"data_6742"}
{"text":"data_6743"}
{"text":"data_6744"}
{"text":"data_6745"}
{"text":"data_6746"}
{"text":"data_6747"}
{"text":"data_6748"}
{"text":"data_6749"}
{"text":"data_6750"}
{"text":"data_6751"}
{"text":"data_6752"}
{"text":"data_6753"}
{"text":"data_6754"}
{"text":"data_6755"}
{"text":"data_6756"}
{"text":"data_6757"}
{"text":"data_6758"}
{"text":"data_6759"}
{"text":"data_6760"}
{"text":"data_6761"}
{"text":"data_6762"}
{"text":"data_6763"}
{"text":"data_6764"}
{"text":"data_6765"}
{"text":"data_6766"}
{"text":"data_6767"}
{"text":"data_6768"}
{"text":"data_6769"}
{"text":"data_6770"}
{"text":"data_6771"}
{"text":"data_6772"}
{"text":"data_6773"}
{"text":"data_6774"}
{"text":"data_6775"}
{"text":"data_6776"}
{"text":"data_6777"}
{"text":"data_6778"}
{"text":"data_6779"}
{"text":"data_6780"}
{"text":"data_6781"}
{"text":"data_6782"}
{"text":"data_6783"}
{"text":"data_6784"}
{"text":"data_6785"}
{"text":"data_6786"}
{"text":"data_6787"}
{"text":"data_6788"}
{"text":"data_6789"}
{"text":"data_6790"}
{"text":"data_6791"}
{"text":"data_6792"}
{"text":"data_6793"}
{"text":"data_6794"}
{"text":"data_6795"}
{"text":"data_6796"}
{"text":"data_6797"}
{"text":"data_6798"}
{"text":"data_6799"}
{"text":"data_6800"}
{"text":"data_6801"}
{"text":"data_6802"}
{"text":"data_6803"}
{"text":"data_6804"}
{"text":"data_6805"}
{"text":"data_6806"}
{"text":"data_6807"}
{"text":"data_6808"}
{"text":"data_6809"}
{"text":"data_6810"}
{"text":"data_6811"}
{"text":"data_6812"}
{"text":"data_6813"}
{"text":"data_6814"}
{"text":"data_6815"}
{"text":"data_6816"}
{"text":"data_6817"}
{"text":"data_6818"}
{"text":"data_6819"}
{"text":"data_6820"}
{"text":"data_6821"}
{"text":"data_6822"}
{"text":"data_6823"}
{"text":"data_6824"}
{"text":"data_6825"}
{"text":"data_6826"}
{"text":"data_6827"}
{"text":"data_6828"}
{"text":"data_6829"}
{"text":"data_6830"}
{"text":"data_6831"}
{"text":"data_6832"}
{"text":"data_6833"}
{"text":"data_6834"}
{"text":"data_6835"}
{"text":"data_6836"}
{"text":"data_6837"}
{"text":"data_6838"}
{"text":"data_6839"}
{"text":"data_6840"}
{"text":"data_6841"}
{"text":"data_6842"}
{"text":"data_6843"}
{"text":"data_6844"}
{"text":"data_6845"}
{"text":"data_6846"}
{"text":"data_6847"}
{"text":"data_6848"}
{"text":"data_6849"}
{"text":"data_6850"}
{"text":"data_6851"}
{"text":"data_6852"}
{"text":"data_6853"}
{"text":"data_6854"}
{"text":"data_6855"}
{"text":"data_6856"}
{"text":"data_6857"}
{"text":"data_6858"}
{"text":"data_6859"}
{"text":"data_6860"}
{"text":"data_6861"}
{"text":"data_6862"}
{"text":"data_6863"}
{"text":"data_6864"}
{"text":"data_6865"}
{"text":"data_6866"}
{"text":"data_6867"}
{"text":"data_6868"}
{"text":"data_6869"}
{"text":"data_6870"}
{"text":"data_6871"}
{"text":"data_6872"}
{"text":"data_6873"}
{"text":"data_6874"}
{"text":"data_6875"}
{"text":"data_6876"}
{"text":"data_6877"}
{"text":"data_6878"}
{"text":"data_6879"}
{"text":"data_6880"}
{"text":"data_6881"}
{"text":"data_6882"}
{"text":"data_6883"}
{"text":"data_6884"}
{"text":"data_6885"}
{"text":"data_6886"}
{"text":"data_6887"}
{"text":"data_6888"}
{"text":"data_6889"}
{"text":"data_6890"}
{"text":"data_6891"}
{"text":"data_6892"}
{"text":"data_6893"}
{"text":"data_6894"}
{"text":"data_6895"}
{"text":"data_6896"}
{"text":"data_6897"}
{"text":"data_6898"}
{"text":"data_6899"}
{"text":"data_6900"}
{"text":"data_6901"}
{"text":"data_6902"}
{"text":"data_6903"}
{"text":"data_6904"}
{"text":"data_6905"}
{"text":"data_6906"}
{"text":"data_6907"}
{"text":"data_6908"}
{"text":"data_6909"}
{"text":"data_6910"}
{"text":"data_6911"}
{"text":"data_6912"}
{"text":"data_6913"}
{"text":"data_6914"}
{"text":"data_6915"}
{"text":"data_6916"}
{"text":"data_6917"}
{"text":"data_6918"}
{"text":"data_6919"}
{"text":"data_6920"}
{"text":"data_6921"}
{"text":"data_6922"}
{"text":"data_6923"}
{"text":"data_6924"}
{"text":"data_6925"}
{"text":"data_6926"}
{"text":"data_6927"}
{"text":"data_6928"}
{"text":"data_6929"}
{"text":"data_6930"}
{"text":"data_6931"}
{"text":"data_6932"}
{"text":"data_6933"}
{"text":"data_6934"}
{"text":"data_6935"}
{"text":"data_6936"}
{"text":"data_6937"}
{"text":"data_6938"}
{"text":"data_6939"}
{"text":"data_6940"}
{"text":"data_6941"}
{"text":"data_6942"}
{"text":"data_6943"}
{"text":"data_6944"}
{"text":"data_6945"}
{"text":"data_6946"}
{"text":"data_6947"}
{"text":"data_6948"}
{"text":"data_6949"}
{"text":"data_6950"}
{"text":"data_6951"}
{"text":"data_6952"}
{"text":"data_6953"}
{"text":"data_6954"}
{"text":"data_6955"}
{"text":"data_6956"}
{"text":"data_6957"}
{"text":"data_6958"}
{"text":"data_6959"}
{"text":"data_6960"}
{"text":"data_6961"}
{"text":"data_6962"}
{"text":"data_6963"}
{"text":"data_6964"}
{"text":"data_6965"}
{"text":"data_6966"}
{"text":"data_6967"}
{"text":"data_6968"}
{"text":"data_6969"}
{"text":"data_6970"}
{"text":"data_6971"}
{"text":"data_6972"}
{"text":"data_6973"}
{"text":"data_6974"}
{"text":"data_6975"}
{"text":"data_6976"}
{"text":"data_6977"}
{"text":"data_6978"}
{"text":"data_6979"}
{"text":"data_6980"}
{"text":"data_6981"}
{"text":"data_6982"}
{"text":"data_6983"}
{"text":"data_6984"}
{"text":"data_6985"}
{"text":"data_6986"}
{"text":"data_6987"}
{"text":"data_6988"}
{"text":"data_6989"}
{"text":"data_6990"}
{"text":"data_6991"}
{"text":"data_6992"}
{"text":"data_6993"}
{"text":"data_6994"}
{"text":"data_6995"}
{"text":"data_6996"}
{"text":"data_6997"}
{"text":"data_6998"}
{"text":"data_6999"}
{"text":"data_7000"}
{"text":"data_7001"}
{"text":"data_7002"}
{"text":"data_7003"}
{"text":"data_7004"}
{"text":"data_7005"}
{"text":"data_7006"}
{"text":"data_7007"}
{"text":"data_7008"}
{"text":"data_7009"}
{"text":"data_7010"}
{"text":"data_7011"}
{"text":"data_7012"}
{"text":"data_7013"}
{"text":"data_7014"}
{"text":"data_7015"}
{"text":"data_7016"}
{"text":"data_7017"}
{"text":"data_7018"}
{"text":"data_7019"}
{"text":"data_7020"}
{"text":"data_7021"}
{"text":"data_7022"}
{"text":"data_7023"}
{"text":"data_7024"}
{"text":"data_7025"}
{"text":"data_7026"}
{"text":"data_7027"}
{"text":"data_7028"}
{"text":"data_7029"}
{"text":"data_7030"}
{"text":"data_7031"}
{"text":"data_7032"}
{"text":"data_7033"}
{"text":"data_7034"}
{"text":"data_7035"}
{"text":"data_7036"}
{"text":"data_7037"}
{"text":"data_7038"}
{"text":"data_7039"}
{"text":"data_7040"}
{"text":"data_7041"}
{"text":"data_7042"}
{"text":"data_7043"}
{"text":"data_7044"}
{"text":"data_7045"}
{"text":"data_7046"}
{"text":"data_7047"}
{"text":"data_7048"}
{"text":"data_7049"}
{"text":"data_7050"}
{"text":"data_7051"}
{"text":"data_7052"}
{"text":"data_7053"}
{"text":"data_7054"}
{"text":"data_7055"}
{"text":"data_7056"}
{"text":"data_7057"}
{"text":"data_7058"}
{"text":"data_7059"}
{"text":"data_7060"}
{"text":"data_7061"}
{"text":"data_7062"}
{"text":"data_7063"}
{"text":"data_7064"}
{"text":"data_7065"}
{"text":"data_7066"}
{"text":"data_7067"}
{"text":"data_7068"}
{"text":"data_7069"}
{"text":"data_7070"}
{"text":"data_7071"}
{"text":"data_7072"}
{"text":"data_7073"}
{"text":"data_7074"}
{"text":"data_7075"}
{"text":"data_7076"}
{"text":"data_7077"}
{"text":"data_7078"}
{"text":"data_7079"}
{"text":"data_7080"}
{"text":"data_7081"}
{"text":"data_7082"}
{"text":"data_7083"}
{"text":"data_7084"}
{"text":"data_7085"}
{"text":"data_7086"}
{"text":"data_7087"}
{"text":"data_7088"}
{"text":"data_7089"}
{"text":"data_7090"}
{"text":"data_7091"}
{"text":"data_7092"}
{"text":"data_7093"}
{"text":"data_7094"}
{"text":"data_7095"}
{"text":"data_7096"}
{"text":"data_7097"}
{"text":"data_7098"}
{"text":"data_7099"}
{"text":"data_7100"}
{"text":"data_7101"}
{"text":"data_7102"}
{"text":"data_7103"}
{"text":"data_7104"}
{"text":"data_7105"}
{"text":"data_7106"}
{"text":"data_7107"}
{"text":"data_7108"}
{"text":"data_7109"}
{"text":"data_7110"}
{"text":"data_7111"}
{"text":"data_7112"}
{"text":"data_7113"}
{"text":"data_7114"}
{"text":"data_7115"}
{"text":"data_7116"}
{"text":"data_7117"}
{"text":"data_7118"}
{"text":"data_7119"}
{"text":"data_7120"}
{"text":"data_7121"}
{"text":"data_7122"}
{"text":"data_7123"}
{"text":"data_7124"}
{"text":"data_7125"}
{"text":"data_7126"}
{"text":"data_7127"}
{"text":"data_7128"}
{"text":"data_7129"}
{"text":"data_7130"}
{"text":"data_7131"}
{"text":"data_7132"}
{"text":"data_7133"}
{"text":"data_7134"}
{"text":"data_7135"}
{"text":"data_7136"}
{"text":"data_7137"}
{"text":"data_7138"}
{"text":"data_7139"}
{"text":"data_7140"}
{"text":"data_7141"}
{"text":"data_7142"}
{"text":"data_7143"}
{"text":"data_7144"}
{"text":"data_7145"}
{"text":"data_7146"}
{"text":"data_7147"}
{"text":"data_7148"}
{"text":"data_7149"}
{"text":"data_7150"}
{"text":"data_7151"}
{"text":"data_7152"}
{"text":"data_7153"}
{"text":"data_7154"}
{"text":"data_7155"}
{"text":"data_7156"}
{"text":"data_7157"}
{"text":"data_7158"}
{"text":"data_7159"}
{"text":"data_7160"}
{"text":"data_7161"}
{"text":"data_7162"}
{"text":"data_7163"}
{"text":"data_7164"}
{"text":"data_7165"}
{"text":"data_7166"}
{"text":"data_7167"}
{"text":"data_7168"}
{"text":"data_7169"}
{"text":"data_7170"}
{"text":"data_7171"}
{"text":"data_7172"}
{"text":"data_7173"}
{"text":"data_7174"}
{"text":"data_7175"}
{"text":"data_7176"}
{"text":"data_7177"}
{"text":"data_7178"}
{"text":"data_7179"}
{"text":"data_7180"}
{"text":"data_7181"}
{"text":"data_7182"}
{"text":"data_7183"}
{"text":"data_7184"}
{"text":"data_7185"}
{"text":"data_7186"}
{"text":"data_7187"}
{"text":"data_7188"}
{"text":"data_7189"}
{"text":"data_7190"}
{"text":"data_7191"}
{"text":"data_7192"}
{"text":"data_7193"}
{"text":"data_7194"}
{"text":"data_7195"}
{"text":"data_7196"}
{"text":"data_7197"}
{"text":"data_7198"}
{"text":"data_7199"}
{"text":"data_7200"}
{"text":"data_7201"}
{"text":"data_7202"}
{"text":"data_7203"}
{"text":"data_7204"}
{"text":"data_7205"}
{"text":"data_7206"}
{"text":"data_7207"}
{"text":"data_7208"}
{"text":"data_7209"}
{"text":"data_7210"}
{"text":"data_7211"}
{"text":"data_7212"}
{"text":"data_7213"}
{"text":"data_7214"}
{"text":"data_7215"}
{"text":"data_7216"}
{"text":"data_7217"}
{"text":"data_7218"}
{"text":"data_7219"}
{"text":"data_7220"}
{"text":"data_7221"}
{"text":"data_7222"}
{"text":"data_7223"}
{"text":"data_7224"}
{"text":"data_7225"}
{"text":"data_7226"}
{"text":"data_7227"}
{"text":"data_7228"}
{"text":"data_7229"}
{"text":"data_7230"}
{"text":"data_7231"}
{"text":"data_7232"}
{"text":"data_7233"}
{"text":"data_7234"}
{"text":"data_7235"}
{"text":"data_7236"}
{"text":"data_7237"}
{"text":"data_7238"}
{"text":"data_7239"}
{"text":"data_7240"}
{"text":"data_7241"}
{"text":"data_7242"}
{"text":"data_7243"}
{"text":"data_7244"}
{"text":"data_7245"}
{"text":"data_7246"}
{"text":"data_7247"}
{"text":"data_7248"}
{"text":"data_7249"}
{"text":"data_7250"}
{"text":"data_7251"}
{"text":"data_7252"}
{"text":"data_7253"}
{"text":"data_7254"}
{"text":"data_7255"}
{"text":"data_7256"}
{"text":"data_7257"}
{"text":"data_7258"}
{"text":"data_7259"}
{"text":"data_7260"}
{"text":"data_7261"}
{"text":"data_7262"}
{"text":"data_7263"}
{"text":"data_7264"}
{"text":"data_7265"}
{"text":"data_7266"}
{"text":"data_7267"}
{"text":"data_7268"}
{"text":"data_7269"}
{"text":"data_7270"}
{"text":"data_7271"}
{"text":"data_7272"}
{"text":"data_7273"}
{"text":"data_7274"}
{"text":"data_7275"}
{"text":"data_7276"}
{"text":"data_7277"}
{"text":"data_7278"}
{"text":"data_7279"}
{"text":"data_7280"}
{"text":"data_7281"}
{"text":"data_7282"}
{"text":"data_7283"}
{"text":"data_7284"}
{"text":"data_7285"}
{"text":"data_7286"}
{"text":"data_7287"}
{"text":"data_7288"}
{"text":"data_7289"}
{"text":"data_7290"}
{"text":"data_7291"}
{"text":"data_7292"}
{"text":"data_7293"}
{"text":"data_7294"}
{"text":"data_7295"}
{"text":"data_7296"}
{"text":"data_7297"}
{"text":"data_7298"}
{"text":"data_7299"}
{"text":"data_7300"}
{"text":"data_7301"}
{"text":"data_7302"}
{"text":"data_7303"}
{"text":"data_7304"}
{"text":"data_7305"}
{"text":"data_7306"}
{"text":"data_7307"}
{"text":"data_7308"}
{"text":"data_7309"}
{"text":"data_7310"}
{"text":"data_7311"}
{"text":"data_7312"}
{"text":"data_7313"}
{"text":"data_7314"}
{"text":"data_7315"}
{"text":"data_7316"}
{"text":"data_7317"}
{"text":"data_7318"}
{"text":"data_7319"}
{"text":"data_7320"}
{"text":"data_7321"}
{"text":"data_7322"}
{"text":"data_7323"}
{"text":"data_7324"}
{"text":"data_7325"}
{"text":"data_7326"}
{"text":"data_7327"}
{"text":"data_7328"}
{"text":"data_7329"}
{"text":"data_7330"}
{"text":"data_7331"}
{"text":"data_7332"}
{"text":"data_7333"}
{"text":"data_7334"}
{"text":"data_7335"}
{"text":"data_7336"}
{"text":"data_7337"}
{"text":"data_7338"}
{"text":"data_7339"}
{"text":"data_7340"}
{"text":"data_7341"}
{"text":"data_7342"}
{"text":"data_7343"}
{"text":"data_7344"}
{"text":"data_7345"}
{"text":"data_7346"}
{"text":"data_7347"}
{"text":"data_7348"}
{"text":"data_7349"}
{"text":"data_7350"}
{"text":"data_7351"}
{"text":"data_7352"}
{"text":"data_7353"}
{"text":"data_7354"}
{"text":"data_7355"}
{"text":"data_7356"}
{"text":"data_7357"}
{"text":"data_7358"}
{"text":"data_7359"}
{"text":"data_7360"}
{"text":"data_7361"}
{"text":"data_7362"}
{"text":"data_7363"}
{"text":"data_7364"}
{"text":"data_7365"}
{"text":"data_7366"}
{"text":"data_7367"}
{"text":"data_7368"}
{"text":"data_7369"}
{"text":"data_7370"}
{"text":"data_7371"}
{"text":"data_7372"}
{"text":"data_7373"}
{"text":"data_7374"}
{"text":"data_7375"}
{"text":"data_7376"}
{"text":"data_7377"}
{"text":"data_7378"}
{"text":"data_7379"}
{"text":"data_7380"}
{"text":"data_7381"}
{"text":"data_7382"}
{"text":"data_7383"}
{"text":"data_7384"}
{"text":"data_7385"}
{"text":"data_7386"}
{"text":"data_7387"}
{"text":"data_7388"}
{"text":"data_7389"}
{"text":"data_7390"}
{"text":"data_7391"}
{"text":"data_7392"}
{"text":"data_7393"}
{"text":"data_7394"}
{"text":"data_7395"}
{"text":"data_7396"}
{"text":"data_7397"}
{"text":"data_7398"}
{"text":"data_7399"}
{"text":"data_7400"}
{"text":"data_7401"}
{"text":"data_7402"}
{"text":"data_7403"}
{"text":"data_7404"}
{"text":"data_7405"}
{"text":"data_7406"}
{"text":"data_7407"}
{"text":"data_7408"}
{"text":"data_7409"}
{"text":"data_7410"}
{"text":"data_7411"}
{"text":"data_7412"}
{"text":"data_7413"}
{"text":"data_7414"}
{"text":"data_7415"}
{"text":"data_7416"}
{"text":"data_7417"}
{"text":"data_7418"}
{"text":"data_7419"}
{"text":"data_7420"}
{"text":"data_7421"}
{"text":"data_7422"}
{"text":"data_7423"}
{"text":"data_7424"}
{"text":"data_7425"}
{"text":"data_7426"}
{"text":"data_7427"}
{"text":"data_7428"}
{"text":"data_7429"}
{"text":"data_7430"}
{"text":"data_7431"}
{"text":"data_7432"}
{"text":"data_7433"}
{"text":"data_7434"}
{"text":"data_7435"}
{"text":"data_7436"}
{"text":"data_7437"}
{"text":"data_7438"}
{"text":"data_7439"}
{"text":"data_7440"}
{"text":"data_7441"}
{"text":"data_7442"}
{"text":"data_7443"}
{"text":"data_7444"}
{"text":"data_7445"}
{"text":"data_7446"}
{"text":"data_7447"}
{"text":"data_7448"}
{"text":"data_7449"}
{"text":"data_7450"}
{"text":"data_7451"}
{"text":"data_7452"}
{"text":"data_7453"}
{"text":"data_7454"}
{"text":"data_7455"}
{"text":"data_7456"}
{"text":"data_7457"}
{"text":"data_7458"}
{"text":"data_7459"}
{"text":"data_7460"}
{"text":"data_7461"}
{"text":"data_7462"}
{"text":"data_7463"}
{"text":"data_7464"}
{"text":"data_7465"}
{"text":"data_7466"}
{"text":"data_7467"}
{"text":"data_7468"}
{"text":"data_7469"}
{"text":"data_7470"}
{"text":"data_7471"}
{"text":"data_7472"}
{"text":"data_7473"}
{"text":"data_7474"}
{"text":"data_7475"}
{"text":"data_7476"}
{"text":"data_7477"}
{"text":"data_7478"}
{"text":"data_7479"}
{"text":"data_7480"}
{"text":"data_7481"}
{"text":"data_7482"}
{"text":"data_7483"}
{"text":"data_7484"}
{"text":"data_7485"}
{"text":"data_7486"}
{"text":"data_7487"}
{"text":"data_7488"}
{"text":"data_7489"}
{"text":"data_7490"}
{"text":"data_7491"}
{"text":"data_7492"}
{"text":"data_7493"}
{"text":"data_7494"}
{"text":"data_7495"}
{"text":"data_7496"}
{"text":"data_7497"}
{"text":"data_7498"}
{"text":"data_7499"}
{"text":"data_7500"}
{"text":"data_7501"}
{"text":"data_7502"}
{"text":"data_7503"}
{"text":"data_7504"}
{"text":"data_7505"}
{"text":"data_7506"}
{"text":"data_7507"}
{"text":"data_7508"}
{"text":"data_7509"}
{"text":"data_7510"}
{"text":"data_7511"}
{"text":"data_7512"}
{"text":"data_7513"}
{"text":"data_7514"}
{"text":"data_7515"}
{"text":"data_7516"}
{"text":"data_7517"}
{"text":"data_7518"}
{"text":"data_7519"}
{"text":"data_7520"}
{"text":"data_7521"}
{"text":"data_7522"}
{"text":"data_7523"}
{"text":"data_7524"}
{"text":"data_7525"}
{"text":"data_7526"}
{"text":"data_7527"}
{"text":"data_7528"}
{"text":"data_7529"}
{"text":"data_7530"}
{"text":"data_7531"}
{"text":"data_7532"}
{"text":"data_7533"}
{"text":"data_7534"}
{"text":"data_7535"}
{"text":"data_7536"}
{"text":"data_7537"}
{"text":"data_7538"}
{"text":"data_7539"}
{"text":"data_7540"}
{"text":"data_7541"}
{"text":"data_7542"}
{"text":"data_7543"}
{"text":"data_7544"}
{"text":"data_7545"}
{"text":"data_7546"}
{"text":"data_7547"}
{"text":"data_7548"}
{"text":"data_7549"}
{"text":"data_7550"}
{"text":"data_7551"}
{"text":"data_7552"}
{"text":"data_7553"}
{"text":"data_7554"}
{"text":"data_7555"}
{"text":"data_7556"}
{"text":"data_7557"}
{"text":"data_7558"}
{"text":"data_7559"}
{"text":"data_7560"}
{"text":"data_7561"}
{"text":"data_7562"}
{"text":"data_7563"}
{"text":"data_7564"}
{"text":"data_7565"}
{"text":"data_7566"}
{"text":"data_7567"}
{"text":"data_7568"}
{"text":"data_7569"}
{"text":"data_7570"}
{"text":"data_7571"}
{"text":"data_7572"}
{"text":"data_7573"}
{"text":"data_7574"}
{"text":"data_7575"}
{"text":"data_7576"}
{"text":"data_7577"}
{"text":"data_7578"}
{"text":"data_7579"}
{"text":"data_7580"}
{"text":"data_7581"}
{"text":"data_7582"}
{"text":"data_7583"}
{"text":"data_7584"}
{"text":"data_7585"}
{"text":"data_7586"}
{"text":"data_7587"}
{"text":"data_7588"}
{"text":"data_7589"}
{"text":"data_7590"}
{"text":"data_7591"}
{"text":"data_7592"}
{"text":"data_7593"}
{"text":"data_7594"}
{"text":"data_7595"}
{"text":"data_7596"}
{"text":"data_7597"}
{"text":"data_7598"}
{"text":"data_7599"}
{"text":"data_7600"}
{"text":"data_7601"}
{"text":"data_7602"}
{"text":"data_7603"}
{"text":"data_7604"}
{"text":"data_7605"}
{"text":"data_7606"}
{"text":"data_7607"}
{"text":"data_7608"}
{"text":"data_7609"}
{"text":"data_7610"}
{"text":"data_7611"}
{"text":"data_7612"}
{"text":"data_7613"}
{"text":"data_7614"}
{"text":"data_7615"}
{"text":"data_7616"}
{"text":"data_7617"}
{"text":"data_7618"}
{"text":"data_7619"}
{"text":"data_7620"}
{"text":"data_7621"}
{"text":"data_7622"}
{"text":"data_7623"}
{"text":"data_7624"}
{"text":"data_7625"}
{"text":"data_7626"}
{"text":"data_7627"}
{"text":"data_7628"}
{"text":"data_7629"}
{"text":"data_7630"}
{"text":"data_7631"}
{"text":"data_7632"}
{"text":"data_7633"}
{"text":"data_7634"}
{"text":"data_7635"}
{"text":"data_7636"}
{"text":"data_7637"}
{"text":"data_7638"}
{"text":"data_7639"}
{"text":"data_7640"}
{"text":"data_7641"}
{"text":"data_7642"}
{"text":"data_7643"}
{"text":"data_7644"}
{"text":"data_7645"}
{"text":"data_7646"}
{"text":"data_7647"}
{"text":"data_7648"}
{"text":"data_7649"}
{"text":"data_7650"}
{"text":"data_7651"}
{"text":"data_7652"}
{"text":"data_7653"}
{"text":"data_7654"}
{"text":"data_7655"}
{"text":"data_7656"}
{"text":"data_7657"}
{"text":"data_7658"}
{"text":"data_7659"}
{"text":"data_7660"}
{"text":"data_7661"}
{"text":"data_7662"}
{"text":"data_7663"}
{"text":"data_7664"}
{"text":"data_7665"}
{"text":"data_7666"}
{"text":"data_7667"}
{"text":"data_7668"}
{"text":"data_7669"}
{"text":"data_7670"}
{"text":"data_7671"}
{"text":"data_7672"}
{"text":"data_7673"}
{"text":"data_7674"}
{"text":"data_7675"}
{"text":"data_7676"}
{"text":"data_7677"}
{"text":"data_7678"}
{"text":"data_7679"}
{"text":"data_7680"}
{"text":"data_7681"}
{"text":"data_7682"}
{"text":"data_7683"}
{"text":"data_7684"}
{"text":"data_7685"}
{"text":"data_7686"}
{"text":"data_7687"}
{"text":"data_7688"}
{"text":"data_7689"}
{"text":"data_7690"}
{"text":"data_7691"}
{"text":"data_7692"}
{"text":"data_7693"}
{"text":"data_7694"}
{"text":"data_7695"}
{"text":"data_7696"}
{"text":"data_7697"}
{"text":"data_7698"}
{"text":"data_7699"}
{"text":"data_7700"}
{"text":"data_7701"}
{"text":"data_7702"}
{"text":"data_7703"}
{"text":"data_7704"}
{"text":"data_7705"}
{"text":"data_7706"}
{"text":"data_7707"}
{"text":"data_7708"}
{"text":"data_7709"}
{"text":"data_7710"}
{"text":"data_7711"}
{"text":"data_7712"}
{"text":"data_7713"}
{"text":"data_7714"}
{"text":"data_7715"}
{"text":"data_7716"}
{"text":"data_7717"}
{"text":"data_7718"}
{"text":"data_7719"}
{"text":"data_7720"}
{"text":"data_7721"}
{"text":"data_7722"}
{"text":"data_7723"}
{"text":"data_7724"}
{"text":"data_7725"}
{"text":"data_7726"}
{"text":"data_7727"}
{"text":"data_7728"}
{"text":"data_7729"}
{"text":"data_7730"}
{"text":"data_7731"}
{"text":"data_7732"}
{"text":"data_7733"}
{"text":"data_7734"}
{"text":"data_7735"}
{"text":"data_7736"}
{"text":"data_7737"}
{"text":"data_7738"}
{"text":"data_7739"}
{"text":"data_7740"}
{"text":"data_7741"}
{"text":"data_7742"}
{"text":"data_7743"}
{"text":"data_7744"}
{"text":"data_7745"}
{"text":"data_7746"}
{"text":"data_7747"}
{"text":"data_7748"}
{"text":"data_7749"}
{"text":"data_7750"}
{"text":"data_7751"}
{"text":"data_7752"}
{"text":"data_7753"}
{"text":"data_7754"}
{"text":"data_7755"}
{"text":"data_7756"}
{"text":"data_7757"}
{"text":"data_7758"}
{"text":"data_7759"}
{"text":"data_7760"}
{"text":"data_7761"}
{"text":"data_7762"}
{"text":"data_7763"}
{"text":"data_7764"}
{"text":"data_7765"}
{"text":"data_7766"}
{"text":"data_7767"}
{"text":"data_7768"}
{"text":"data_7769"}
{"text":"data_7770"}
{"text":"data_7771"}
{"text":"data_7772"}
{"text":"data_7773"}
{"text":"data_7774"}
{"text":"data_7775"}
{"text":"data_7776"}
{"text":"data_7777"}
{"text":"data_7778"}
{"text":"data_7779"}
{"text":"data_7780"}
{"text":"data_7781"}
{"text":"data_7782"}
{"text":"data_7783"}
{"text":"data_7784"}
{"text":"data_7785"}
{"text":"data_7786"}
{"text":"data_7787"}
{"text":"data_7788"}
{"text":"data_7789"}
{"text":"data_7790"}
{"text":"data_7791"}
{"text":"data_7792"}
{"text":"data_7793"}
{"text":"data_7794"}
{"text":"data_7795"}
{"text":"data_7796"}
{"text":"data_7797"}
{"text":"data_7798"}
{"text":"data_7799"}
{"text":"data_7800"}
{"text":"data_7801"}
{"text":"data_7802"}
{"text":"data_7803"}
{"text":"data_7804"}
{"text":"data_7805"}
{"text":"data_7806"}
{"text":"data_7807"}
{"text":"data_7808"}
{"text":"data_7809"}
{"text":"data_7810"}
{"text":"data_7811"}
{"text":"data_7812"}
{"text":"data_7813"}
{"text":"data_7814"}
{"text":"data_7815"}
{"text":"data_7816"}
{"text":"data_7817"}
{"text":"data_7818"}
{"text":"data_7819"}
{"text":"data_7820"}
{"text":"data_7821"}
{"text":"data_7822"}
{"text":"data_7823"}
{"text":"data_7824"}
{"text":"data_7825"}
{"text":"data_7826"}
{"text":"data_7827"}
{"text":"data_7828"}
{"text":"data_7829"}
{"text":"data_7830"}
{"text":"data_7831"}
{"text":"data_7832"}
{"text":"data_7833"}
{"text":"data_7834"}
{"text":"data_7835"}
{"text":"data_7836"}
{"text":"data_7837"}
{"text":"data_7838"}
{"text":"data_7839"}
{"text":"data_7840"}
{"text":"data_7841"}
{"text":"data_7842"}
{"text":"data_7843"}
{"text":"data_7844"}
{"text":"data_7845"}
{"text":"data_7846"}
{"text":"data_7847"}
{"text":"data_7848"}
{"text":"data_7849"}
{"text":"data_7850"}
{"text":"data_7851"}
{"text":"data_7852"}
{"text":"data_7853"}
{"text":"data_7854"}
{"text":"data_7855"}
{"text":"data_7856"}
{"text":"data_7857"}
{"text":"data_7858"}
{"text":"data_7859"}
{"text":"data_7860"}
{"text":"data_7861"}
{"text":"data_7862"}
{"text":"data_7863"}
{"text":"data_7864"}
{"text":"data_7865"}
{"text":"data_7866"}
{"text":"data_7867"}
{"text":"data_7868"}
{"text":"data_7869"}
{"text":"data_7870"}
{"text":"data_7871"}
{"text":"data_7872"}
{"text":"data_7873"}
{"text":"data_7874"}
{"text":"data_7875"}
{"text":"data_7876"}
{"text":"data_7877"}
{"text":"data_7878"}
{"text":"data_7879"}
{"text":"data_7880"}
{"text":"data_7881"}
{"text":"data_7882"}
{"text":"data_7883"}
{"text":"data_7884"}
{"text":"data_7885"}
{"text":"data_7886"}
{"text":"data_7887"}
{"text":"data_7888"}
{"text":"data_7889"}
{"text":"data_7890"}
{"text":"data_7891"}
{"text":"data_7892"}
{"text":"data_7893"}
{"text":"data_7894"}
{"text":"data_7895"}
{"text":"data_7896"}
{"text":"data_7897"}
{"text":"data_7898"}
{"text":"data_7899"}
{"text":"data_7900"}
{"text":"data_7901"}
{"text":"data_7902"}
{"text":"data_7903"}
{"text":"data_7904"}
{"text":"data_7905"}
{"text":"data_7906"}
{"text":"data_7907"}
{"text":"data_7908"}
{"text":"data_7909"}
{"text":"data_7910"}
{"text":"data_7911"}
{"text":"data_7912"}
{"text":"data_7913"}
{"text":"data_7914"}
{"text":"data_7915"}
{"text":"data_7916"}
{"text":"data_7917"}
{"text":"data_7918"}
{"text":"data_7919"}
{"text":"data_7920"}
{"text":"data_7921"}
{"text":"data_7922"}
{"text":"data_7923"}
{"text":"data_7924"}
{"text":"data_7925"}
{"text":"data_7926"}
{"text":"data_7927"}
{"text":"data_7928"}
{"text":"data_7929"}
{"text":"data_7930"}
{"text":"data_7931"}
{"text":"data_7932"}
{"text":"data_7933"}
{"text":"data_7934"}
{"text":"data_7935"}
{"text":"data_7936"}
{"text":"data_7937"}
{"text":"data_7938"}
{"text":"data_7939"}
{"text":"data_7940"}
{"text":"data_7941"}
{"text":"data_7942"}
{"text":"data_7943"}
{"text":"data_7944"}
{"text":"data_7945"}
{"text":"data_7946"}
{"text":"data_7947"}
{"text":"data_7948"}
{"text":"data_7949"}
{"text":"data_7950"}
{"text":"data_7951"}
{"text":"data_7952"}
{"text":"data_7953"}
{"text":"data_7954"}
{"text":"data_7955"}
{"text":"data_7956"}
{"text":"data_7957"}
{"text":"data_7958"}
{"text":"data_7959"}
{"text":"data_7960"}
{"text":"data_7961"}
{"text":"data_7962"}
{"text":"data_7963"}
{"text":"data_7964"}
{"text":"data_7965"}
{"text":"data_7966"}
{"text":"data_7967"}
{"text":"data_7968"}
{"text":"data_7969"}
{"text":"data_7970"}
{"text":"data_7971"}
{"text":"data_7972"}
{"text":"data_7973"}
{"text":"data_7974"}
{"text":"data_7975"}
{"text":"data_7976"}
{"text":"data_7977"}
{"text":"data_7978"}
{"text":"data_7979"}
{"text":"data_7980"}
{"text":"data_7981"}
{"text":"data_7982"}
{"text":"data_7983"}
{"text":"data_7984"}
{"text":"data_7985"}
{"text":"data_7986"}
{"text":"data_7987"}
{"text":"data_7988"}
{"text":"data_7989"}
{"text":"data_7990"}
{"text":"data_7991"}
{"text":"data_7992"}
{"text":"data_7993"}
{"text":"data_7994"}
{"text":"data_7995"}
{"text":"data_7996"}
{"text":"data_7997"}
{"text":"data_7998"}
{"text":"data_7999"}
{"text":"data_8000"}
{"text":"data_8001"}
{"text":"data_8002"}
{"text":"data_8003"}
{"text":"data_8004"}
{"text":"data_8005"}
{"text":"data_8006"}
{"text":"data_8007"}
{"text":"data_8008"}
{"text":"data_8009"}
{"text":"data_8010"}
{"text":"data_8011"}
{"text":"data_8012"}
{"text":"data_8013"}
{"text":"data_8014"}
{"text":"data_8015"}
{"text":"data_8016"}
{"text":"data_8017"}
{"text":"data_8018"}
{"text":"data_8019"}
{"text":"data_8020"}
{"text":"data_8021"}
{"text":"data_8022"}
{"text":"data_8023"}
{"text":"data_8024"}
{"text":"data_8025"}
{"text":"data_8026"}
{"text":"data_8027"}
{"text":"data_8028"}
{"text":"data_8029"}
{"text":"data_8030"}
{"text":"data_8031"}
{"text":"data_8032"}
{"text":"data_8033"}
{"text":"data_8034"}
{"text":"data_8035"}
{"text":"data_8036"}
{"text":"data_8037"}
{"text":"data_8038"}
{"text":"data_8039"}
{"text":"data_8040"}
{"text":"data_8041"}
{"text":"data_8042"}
{"text":"data_8043"}
{"text":"data_8044"}
{"text":"data_8045"}
{"text":"data_8046"}
{"text":"data_8047"}
{"text":"data_8048"}
{"text":"data_8049"}
{"text":"data_8050"}
{"text":"data_8051"}
{"text":"data_8052"}
{"text":"data_8053"}
{"text":"data_8054"}
{"text":"data_8055"}
{"text":"data_8056"}
{"text":"data_8057"}
{"text":"data_8058"}
{"text":"data_8059"}
{"text":"data_8060"}
{"text":"data_8061"}
{"text":"data_8062"}
{"text":"data_8063"}
{"text":"data_8064"}
{"text":"data_8065"}
{"text":"data_8066"}
{"text":"data_8067"}
{"text":"data_8068"}
{"text":"data_8069"}
{"text":"data_8070"}
{"text":"data_8071"}
{"text":"data_8072"}
{"text":"data_8073"}
{"text":"data_8074"}
{"text":"data_8075"}
{"text":"data_8076"}
{"text":"data_8077"}
{"text":"data_8078"}
{"text":"data_8079"}
{"text":"data_8080"}
{"text":"data_8081"}
{"text":"data_8082"}
{"text":"data_8083"}
{"text":"data_8084"}
{"text":"data_8085"}
{"text":"data_8086"}
{"text":"data_8087"}
{"text":"data_8088"}
{"text":"data_8089"}
{"text":"data_8090"}
{"text":"data_8091"}
{"text":"data_8092"}
{"text":"data_8093"}
{"text":"data_8094"}
{"text":"data_8095"}
{"text":"data_8096"}
{"text":"data_8097"}
{"text":"data_8098"}
{"text":"data_8099"}
{"text":"data_8100"}
{"text":"data_8101"}
{"text":"data_8102"}
{"text":"data_8103"}
{"text":"data_8104"}
{"text":"data_8105"}
{"text":"data_8106"}
{"text":"data_8107"}
{"text":"data_8108"}
{"text":"data_8109"}
{"text":"data_8110"}
{"text":"data_8111"}
{"text":"data_8112"}
{"text":"data_8113"}
{"text":"data_8114"}
{"text":"data_8115"}
{"text":"data_8116"}
{"text":"data_8117"}
{"text":"data_8118"}
{"text":"data_8119"}
{"text":"data_8120"}
{"text":"data_8121"}
{"text":"data_8122"}
{"text":"data_8123"}
{"text":"data_8124"}
{"text":"data_8125"}
{"text":"data_8126"}
{"text":"data_8127"}
{"text":"data_8128"}
{"text":"data_8129"}
{"text":"data_8130"}
{"text":"data_8131"}
{"text":"data_8132"}
{"text":"data_8133"}
{"text":"data_8134"}
{"text":"data_8135"}
{"text":"data_8136"}
{"text":"data_8137"}
{"text":"data_8138"}
{"text":"data_8139"}
{"text":"data_8140"}
{"text":"data_8141"}
{"text":"data_8142"}
{"text":"data_8143"}
{"text":"data_8144"}
{"text":"data_8145"}
{"text":"data_8146"}
{"text":"data_8147"}
{"text":"data_8148"}
{"text":"data_8149"}
{"text":"data_8150"}
{"text":"data_8151"}
{"text":"data_8152"}
{"text":"data_8153"}
{"text":"data_8154"}
{"text":"data_8155"}
{"text":"data_8156"}
{"text":"data_8157"}
{"text":"data_8158"}
{"text":"data_8159"}
{"text":"data_8160"}
{"text":"data_8161"}
{"text":"data_8162"}
{"text":"data_8163"}
{"text":"data_8164"}
{"text":"data_8165"}
{"text":"data_8166"}
{"text":"data_8167"}
{"text":"data_8168"}
{"text":"data_8169"}
{"text":"data_8170"}
{"text":"data_8171"}
{"text":"data_8172"}
{"text":"data_8173"}
{"text":"data_8174"}
{"text":"data_8175"}
{"text":"data_8176"}
{"text":"data_8177"}
{"text":"data_8178"}
{"text":"data_8179"}
{"text":"data_8180"}
{"text":"data_8181"}
{"text":"data_8182"}
{"text":"data_8183"}
{"text":"data_8184"}
{"text":"data_8185"}
{"text":"data_8186"}
{"text":"data_8187"}
{"text":"data_8188"}
{"text":"data_8189"}
{"text":"data_8190"}
{"text":"data_8191"}
{"text":"data_8192"}
{"text":"data_8193"}
{"text":"data_8194"}
{"text":"data_8195"}
{"text":"data_8196"}
{"text":"data_8197"}
{"text":"data_8198"}
{"text":"data_8199"}
{"text":"data_8200"}
{"text":"data_8201"}
{"text":"data_8202"}
{"text":"data_8203"}
{"text":"data_8204"}
{"text":"data_8205"}
{"text":"data_8206"}
{"text":"data_8207"}
{"text":"data_8208"}
{"text":"data_8209"}
{"text":"data_8210"}
{"text":"data_8211"}
{"text":"data_8212"}
{"text":"data_8213"}
{"text":"data_8214"}
{"text":"data_8215"}
{"text":"data_8216"}
{"text":"data_8217"}
{"text":"data_8218"}
{"text":"data_8219"}
{"text":"data_8220"}
{"text":"data_8221"}
{"text":"data_8222"}
{"text":"data_8223"}
{"text":"data_8224"}
{"text":"data_8225"}
{"text":"data_8226"}
{"text":"data_8227"}
{"text":"data_8228"}
{"text":"data_8229"}
{"text":"data_8230"}
{"text":"data_8231"}
{"text":"data_8232"}
{"text":"data_8233"}
{"text":"data_8234"}
{"text":"data_8235"}
{"text":"data_8236"}
{"text":"data_8237"}
{"text":"data_8238"}
{"text":"data_8239"}
{"text":"data_8240"}
{"text":"data_8241"}
{"text":"data_8242"}
{"text":"data_8243"}
{"text":"data_8244"}
{"text":"data_8245"}
{"text":"data_8246"}
{"text":"data_8247"}
{"text":"data_8248"}
{"text":"data_8249"}
{"text":"data_8250"}
{"text":"data_8251"}
{"text":"data_8252"}
{"text":"data_8253"}
{"text":"data_8254"}
{"text":"data_8255"}
{"text":"data_8256"}
{"text":"data_8257"}
{"text":"data_8258"}
{"text":"data_8259"}
{"text":"data_8260"}
{"text":"data_8261"}
{"text":"data_8262"}
{"text":"data_8263"}
{"text":"data_8264"}
{"text":"data_8265"}
{"text":"data_8266"}
{"text":"data_8267"}
{"text":"data_8268"}
{"text":"data_8269"}
{"text":"data_8270"}
{"text":"data_8271"}
{"text":"data_8272"}
{"text":"data_8273"}
{"text":"data_8274"}
{"text":"data_8275"}
{"text":"data_8276"}
{"text":"data_8277"}
{"text":"data_8278"}
{"text":"data_8279"}
{"text":"data_8280"}
{"text":"data_8281"}
{"text":"data_8282"}
{"text":"data_8283"}
{"text":"data_8284"}
{"text":"data_8285"}
{"text":"data_8286"}
{"text":"data_8287"}
{"text":"data_8288"}
{"text":"data_8289"}
{"text":"data_8290"}
{"text":"data_8291"}
{"text":"data_8292"}
{"text":"data_8293"}
{"text":"data_8294"}
{"text":"data_8295"}
{"text":"data_8296"}
{"text":"data_8297"}
{"text":"data_8298"}
{"text":"data_8299"}
{"text":"data_8300"}
{"text":"data_8301"}
{"text":"data_8302"}
{"text":"data_8303"}
{"text":"data_8304"}
{"text":"data_8305"}
{"text":"data_8306"}
{"text":"data_8307"}
{"text":"data_8308"}
{"text":"data_8309"}
{"text":"data_8310"}
{"text":"data_8311"}
{"text":"data_8312"}
{"text":"data_8313"}
{"text":"data_8314"}
{"text":"data_8315"}
{"text":"data_8316"}
{"text":"data_8317"}
{"text":"data_8318"}
{"text":"data_8319"}
{"text":"data_8320"}
{"text":"data_8321"}
{"text":"data_8322"}
{"text":"data_8323"}
{"text":"data_8324"}
{"text":"data_8325"}
{"text":"data_8326"}
{"text":"data_8327"}
{"text":"data_8328"}
{"text":"data_8329"}
{"text":"data_8330"}
{"text":"data_8331"}
{"text":"data_8332"}
{"text":"data_8333"}
{"text":"data_8334"}
{"text":"data_8335"}
{"text":"data_8336"}
{"text":"data_8337"}
{"text":"data_8338"}
{"text":"data_8339"}
{"text":"data_8340"}
{"text":"data_8341"}
{"text":"data_8342"}
{"text":"data_8343"}
{"text":"data_8344"}
{"text":"data_8345"}
{"text":"data_8346"}
{"text":"data_8347"}
{"text":"data_8348"}
{"text":"data_8349"}
{"text":"data_8350"}
{"text":"data_8351"}
{"text":"data_8352"}
{"text":"data_8353"}
{"text":"data_8354"}
{"text":"data_8355"}
{"text":"data_8356"}
{"text":"data_8357"}
{"text":"data_8358"}
{"text":"data_8359"}
{"text":"data_8360"}
{"text":"data_8361"}
{"text":"data_8362"}
{"text":"data_8363"}
{"text":"data_8364"}
{"text":"data_8365"}
{"text":"data_8366"}
{"text":"data_8367"}
{"text":"data_8368"}
{"text":"data_8369"}
{"text":"data_8370"}
{"text":"data_8371"}
{"text":"data_8372"}
{"text":"data_8373"}
{"text":"data_8374"}
{"text":"data_8375"}
{"text":"data_8376"}
{"text":"data_8377"}
{"text":"data_8378"}
{"text":"data_8379"}
{"text":"data_8380"}
{"text":"data_8381"}
{"text":"data_8382"}
{"text":"data_8383"}
{"text":"data_8384"}
{"text":"data_8385"}
{"text":"data_8386"}
{"text":"data_8387"}
{"text":"data_8388"}
{"text":"data_8389"}
{"text":"data_8390"}
{"text":"data_8391"}
{"text":"data_8392"}
{"text":"data_8393"}
{"text":"data_8394"}
{"text":"data_8395"}
{"text":"data_8396"}
{"text":"data_8397"}
{"text":"data_8398"}
{"text":"data_8399"}
{"text":"data_8400"}
{"text":"data_8401"}
{"text":"data_8402"}
{"text":"data_8403"}
{"text":"data_8404"}
{"text":"data_8405"}
{"text":"data_8406"}
{"text":"data_8407"}
{"text":"data_8408"}
{"text":"data_8409"}
{"text":"data_8410"}
{"text":"data_8411"}
{"text":"data_8412"}
{"text":"data_8413"}
{"text":"data_8414"}
{"text":"data_8415"}
{"text":"data_8416"}
{"text":"data_8417"}
{"text":"data_8418"}
{"text":"data_8419"}
{"text":"data_8420"}
{"text":"data_8421"}
{"text":"data_8422"}
{"text":"data_8423"}
{"text":"data_8424"}
{"text":"data_8425"}
{"text":"data_8426"}
{"text":"data_8427"}
{"text":"data_8428"}
{"text":"data_8429"}
{"text":"data_8430"}
{"text":"data_8431"}
{"text":"data_8432"}
{"text":"data_8433"}
{"text":"data_8434"}
{"text":"data_8435"}
{"text":"data_8436"}
{"text":"data_8437"}
{"text":"data_8438"}
{"text":"data_8439"}
{"text":"data_8440"}
{"text":"data_8441"}
{"text":"data_8442"}
{"text":"data_8443"}
{"text":"data_8444"}
{"text":"data_8445"}
{"text":"data_8446"}
{"text":"data_8447"}
{"text":"data_8448"}
{"text":"data_8449"}
{"text":"data_8450"}
{"text":"data_8451"}
{"text":"data_8452"}
{"text":"data_8453"}
{"text":"data_8454"}
{"text":"data_8455"}
{"text":"data_8456"}
{"text":"data_8457"}
{"text":"data_8458"}
{"text":"data_8459"}
{"text":"data_8460"}
{"text":"data_8461"}
{"text":"data_8462"}
{"text":"data_8463"}
{"text":"data_8464"}
{"text":"data_8465"}
{"text":"data_8466"}
{"text":"data_8467"}
{"text":"data_8468"}
{"text":"data_8469"}
{"text":"data_8470"}
{"text":"data_8471"}
{"text":"data_8472"}
{"text":"data_8473"}
{"text":"data_8474"}
{"text":"data_8475"}
{"text":"data_8476"}
{"text":"data_8477"}
{"text":"data_8478"}
{"text":"data_8479"}
{"text":"data_8480"}
{"text":"data_8481"}
{"text":"data_8482"}
{"text":"data_8483"}
{"text":"data_8484"}
{"text":"data_8485"}
{"text":"data_8486"}
{"text":"data_8487"}
{"text":"data_8488"}
{"text":"data_8489"}
{"text":"data_8490"}
{"text":"data_8491"}
{"text":"data_8492"}
{"text":"data_8493"}
{"text":"data_8494"}
{"text":"data_8495"}
{"text":"data_8496"}
{"text":"data_8497"}
{"text":"data_8498"}
{"text":"data_8499"}
{"text":"data_8500"}
{"text":"data_8501"}
{"text":"data_8502"}
{"text":"data_8503"}
{"text":"data_8504"}
{"text":"data_8505"}
{"text":"data_8506"}
{"text":"data_8507"}
{"text":"data_8508"}
{"text":"data_8509"}
{"text":"data_8510"}
{"text":"data_8511"}
{"text":"data_8512"}
{"text":"data_8513"}
{"text":"data_8514"}
{"text":"data_8515"}
{"text":"data_8516"}
{"text":"data_8517"}
{"text":"data_8518"}
{"text":"data_8519"}
{"text":"data_8520"}
{"text":"data_8521"}
{"text":"data_8522"}
{"text":"data_8523"}
{"text":"data_8524"}
{"text":"data_8525"}
{"text":"data_8526"}
{"text":"data_8527"}
{"text":"data_8528"}
{"text":"data_8529"}
{"text":"data_8530"}
{"text":"data_8531"}
{"text":"data_8532"}
{"text":"data_8533"}
{"text":"data_8534"}
{"text":"data_8535"}
{"text":"data_8536"}
{"text":"data_8537"}
{"text":"data_8538"}
{"text":"data_8539"}
{"text":"data_8540"}
{"text":"data_8541"}
{"text":"data_8542"}
{"text":"data_8543"}
{"text":"data_8544"}
{"text":"data_8545"}
{"text":"data_8546"}
{"text":"data_8547"}
{"text":"data_8548"}
{"text":"data_8549"}
{"text":"data_8550"}
{"text":"data_8551"}
{"text":"data_8552"}
{"text":"data_8553"}
{"text":"data_8554"}
{"text":"data_8555"}
{"text":"data_8556"}
{"text":"data_8557"}
{"text":"data_8558"}
{"text":"data_8559"}
{"text":"data_8560"}
{"text":"data_8561"}
{"text":"data_8562"}
{"text":"data_8563"}
{"text":"data_8564"}
{"text":"data_8565"}
{"text":"data_8566"}
{"text":"data_8567"}
{"text":"data_8568"}
{"text":"data_8569"}
{"text":"data_8570"}
{"text":"data_8571"}
{"text":"data_8572"}
{"text":"data_8573"}
{"text":"data_8574"}
{"text":"data_8575"}
{"text":"data_8576"}
{"text":"data_8577"}
{"text":"data_8578"}
{"text":"data_8579"}
{"text":"data_8580"}
{"text":"data_8581"}
{"text":"data_8582"}
{"text":"data_8583"}
{"text":"data_8584"}
{"text":"data_8585"}
{"text":"data_8586"}
{"text":"data_8587"}
{"text":"data_8588"}
{"text":"data_8589"}
{"text":"data_8590"}
{"text":"data_8591"}
{"text":"data_8592"}
{"text":"data_8593"}
{"text":"data_8594"}
{"text":"data_8595"}
{"text":"data_8596"}
{"text":"data_8597"}
{"text":"data_8598"}
{"text":"data_8599"}
{"text":"data_8600"}
{"text":"data_8601"}
{"text":"data_8602"}
{"text":"data_8603"}
{"text":"data_8604"}
{"text":"data_8605"}
{"text":"data_8606"}
{"text":"data_8607"}
{"text":"data_8608"}
{"text":"data_8609"}
{"text":"data_8610"}
{"text":"data_8611"}
{"text":"data_8612"}
{"text":"data_8613"}
{"text":"data_8614"}
{"text":"data_8615"}
{"text":"data_8616"}
{"text":"data_8617"}
{"text":"data_8618"}
{"text":"data_8619"}
{"text":"data_8620"}
{"text":"data_8621"}
{"text":"data_8622"}
{"text":"data_8623"}
{"text":"data_8624"}
{"text":"data_8625"}
{"text":"data_8626"}
{"text":"data_8627"}
{"text":"data_8628"}
{"text":"data_8629"}
{"text":"data_8630"}
{"text":"data_8631"}
{"text":"data_8632"}
{"text":"data_8633"}
{"text":"data_8634"}
{"text":"data_8635"}
{"text":"data_8636"}
{"text":"data_8637"}
{"text":"data_8638"}
{"text":"data_8639"}
{"text":"data_8640"}
{"text":"data_8641"}
{"text":"data_8642"}
{"text":"data_8643"}
{"text":"data_8644"}
{"text":"data_8645"}
{"text":"data_8646"}
{"text":"data_8647"}
{"text":"data_8648"}
{"text":"data_8649"}
{"text":"data_8650"}
{"text":"data_8651"}
{"text":"data_8652"}
{"text":"data_8653"}
{"text":"data_8654"}
{"text":"data_8655"}
{"text":"data_8656"}
{"text":"data_8657"}
{"text":"data_8658"}
{"text":"data_8659"}
{"text":"data_8660"}
{"text":"data_8661"}
{"text":"data_8662"}
{"text":"data_8663"}
{"text":"data_8664"}
{"text":"data_8665"}
{"text":"data_8666"}
{"text":"data_8667"}
{"text":"data_8668"}
{"text":"data_8669"}
{"text":"data_8670"}
{"text":"data_8671"}
{"text":"data_8672"}
{"text":"data_8673"}
{"text":"data_8674"}
{"text":"data_8675"}
{"text":"data_8676"}
{"text":"data_8677"}
{"text":"data_8678"}
{"text":"data_8679"}
{"text":"data_8680"}
{"text":"data_8681"}
{"text":"data_8682"}
{"text":"data_8683"}
{"text":"data_8684"}
{"text":"data_8685"}
{"text":"data_8686"}
{"text":"data_8687"}
{"text":"data_8688"}
{"text":"data_8689"}
{"text":"data_8690"}
{"text":"data_8691"}
{"text":"data_8692"}
{"text":"data_8693"}
{"text":"data_8694"}
{"text":"data_8695"}
{"text":"data_8696"}
{"text":"data_8697"}
{"text":"data_8698"}
{"text":"data_8699"}
{"text":"data_8700"}
{"text":"data_8701"}
{"text":"data_8702"}
{"text":"data_8703"}
{"text":"data_8704"}
{"text":"data_8705"}
{"text":"data_8706"}
{"text":"data_8707"}
{"text":"data_8708"}
{"text":"data_8709"}
{"text":"data_8710"}
{"text":"data_8711"}
{"text":"data_8712"}
{"text":"data_8713"}
{"text":"data_8714"}
{"text":"data_8715"}
{"text":"data_8716"}
{"text":"data_8717"}
{"text":"data_8718"}
{"text":"data_8719"}
{"text":"data_8720"}
{"text":"data_8721"}
{"text":"data_8722"}
{"text":"data_8723"}
{"text":"data_8724"}
{"text":"data_8725"}
{"text":"data_8726"}
{"text":"data_8727"}
{"text":"data_8728"}
{"text":"data_8729"}
{"text":"data_8730"}
{"text":"data_8731"}
{"text":"data_8732"}
{"text":"data_8733"}
{"text":"data_8734"}
{"text":"data_8735"}
{"text":"data_8736"}
{"text":"data_8737"}
{"text":"data_8738"}
{"text":"data_8739"}
{"text":"data_8740"}
{"text":"data_8741"}
{"text":"data_8742"}
{"text":"data_8743"}
{"text":"data_8744"}
{"text":"data_8745"}
{"text":"data_8746"}
{"text":"data_8747"}
{"text":"data_8748"}
{"text":"data_8749"}
{"text":"data_8750"}
{"text":"data_8751"}
{"text":"data_8752"}
{"text":"data_8753"}
{"text":"data_8754"}
{"text":"data_8755"}
{"text":"data_8756"}
{"text":"data_8757"}
{"text":"data_8758"}
{"text":"data_8759"}
{"text":"data_8760"}
{"text":"data_8761"}
{"text":"data_8762"}
{"text":"data_8763"}
{"text":"data_8764"}
{"text":"data_8765"}
{"text":"data_8766"}
{"text":"data_8767"}
{"text":"data_8768"}
{"text":"data_8769"}
{"text":"data_8770"}
{"text":"data_8771"}
{"text":"data_8772"}
{"text":"data_8773"}
{"text":"data_8774"}
{"text":"data_8775"}
{"text":"data_8776"}
{"text":"data_8777"}
{"text":"data_8778"}
{"text":"data_8779"}
{"text":"data_8780"}
{"text":"data_8781"}
{"text":"data_8782"}
{"text":"data_8783"}
{"text":"data_8784"}
{"text":"data_8785"}
{"text":"data_8786"}
{"text":"data_8787"}
{"text":"data_8788"}
{"text":"data_8789"}
{"text":"data_8790"}
{"text":"data_8791"}
{"text":"data_8792"}
{"text":"data_8793"}
{"text":"data_8794"}
{"text":"data_8795"}
{"text":"data_8796"}
{"text":"data_8797"}
{"text":"data_8798"}
{"text":"data_8799"}
{"text":"data_8800"}
{"text":"data_8801"}
{"text":"data_8802"}
{"text":"data_8803"}
{"text":"data_8804"}
{"text":"data_8805"}
{"text":"data_8806"}
{"text":"data_8807"}
{"text":"data_8808"}
{"text":"data_8809"}
{"text":"data_8810"}
{"text":"data_8811"}
{"text":"data_8812"}
{"text":"data_8813"}
{"text":"data_8814"}
{"text":"data_8815"}
{"text":"data_8816"}
{"text":"data_8817"}
{"text":"data_8818"}
{"text":"data_8819"}
{"text":"data_8820"}
{"text":"data_8821"}
{"text":"data_8822"}
{"text":"data_8823"}
{"text":"data_8824"}
{"text":"data_8825"}
{"text":"data_8826"}
{"text":"data_8827"}
{"text":"data_8828"}
{"text":"data_8829"}
{"text":"data_8830"}
{"text":"data_8831"}
{"text":"data_8832"}
{"text":"data_8833"}
{"text":"data_8834"}
{"text":"data_8835"}
{"text":"data_8836"}
{"text":"data_8837"}
{"text":"data_8838"}
{"text":"data_8839"}
{"text":"data_8840"}
{"text":"data_8841"}
{"text":"data_8842"}
{"text":"data_8843"}
{"text":"data_8844"}
{"text":"data_8845"}
{"text":"data_8846"}
{"text":"data_8847"}
{"text":"data_8848"}
{"text":"data_8849"}
{"text":"data_8850"}
{"text":"data_8851"}
{"text":"data_8852"}
{"text":"data_8853"}
{"text":"data_8854"}
{"text":"data_8855"}
{"text":"data_8856"}
{"text":"data_8857"}
{"text":"data_8858"}
{"text":"data_8859"}
{"text":"data_8860"}
{"text":"data_8861"}
{"text":"data_8862"}
{"text":"data_8863"}
{"text":"data_8864"}
{"text":"data_8865"}
{"text":"data_8866"}
{"text":"data_8867"}
{"text":"data_8868"}
{"text":"data_8869"}
{"text":"data_8870"}
{"text":"data_8871"}
{"text":"data_8872"}
{"text":"data_8873"}
{"text":"data_8874"}
{"text":"data_8875"}
{"text":"data_8876"}
{"text":"data_8877"}
{"text":"data_8878"}
{"text":"data_8879"}
{"text":"data_8880"}
{"text":"data_8881"}
{"text":"data_8882"}
{"text":"data_8883"}
{"text":"data_8884"}
{"text":"data_8885"}
{"text":"data_8886"}
{"text":"data_8887"}
{"text":"data_8888"}
{"text":"data_8889"}
{"text":"data_8890"}
{"text":"data_8891"}
{"text":"data_8892"}
{"text":"data_8893"}
{"text":"data_8894"}
{"text":"data_8895"}
{"text":"data_8896"}
{"text":"data_8897"}
{"text":"data_8898"}
{"text":"data_8899"}
{"text":"data_8900"}
{"text":"data_8901"}
{"text":"data_8902"}
{"text":"data_8903"}
{"text":"data_8904"}
{"text":"data_8905"}
{"text":"data_8906"}
{"text":"data_8907"}
{"text":"data_8908"}
{"text":"data_8909"}
{"text":"data_8910"}
{"text":"data_8911"}
{"text":"data_8912"}
{"text":"data_8913"}
{"text":"data_8914"}
{"text":"data_8915"}
{"text":"data_8916"}
{"text":"data_8917"}
{"text":"data_8918"}
{"text":"data_8919"}
{"text":"data_8920"}
{"text":"data_8921"}
{"text":"data_8922"}
{"text":"data_8923"}
{"text":"data_8924"}
{"text":"data_8925"}
{"text":"data_8926"}
{"text":"data_8927"}
{"text":"data_8928"}
{"text":"data_8929"}
{"text":"data_8930"}
{"text":"data_8931"}
{"text":"data_8932"}
{"text":"data_8933"}
{"text":"data_8934"}
{"text":"data_8935"}
{"text":"data_8936"}
{"text":"data_8937"}
{"text":"data_8938"}
{"text":"data_8939"}
{"text":"data_8940"}
{"text":"data_8941"}
{"text":"data_8942"}
{"text":"data_8943"}
{"text":"data_8944"}
{"text":"data_8945"}
{"text":"data_8946"}
{"text":"data_8947"}
{"text":"data_8948"}
{"text":"data_8949"}
{"text":"data_8950"}
{"text":"data_8951"}
{"text":"data_8952"}
{"text":"data_8953"}
{"text":"data_8954"}
{"text":"data_8955"}
{"text":"data_8956"}
{"text":"data_8957"}
{"text":"data_8958"}
{"text":"data_8959"}
{"text":"data_8960"}
{"text":"data_8961"}
{"text":"data_8962"}
{"text":"data_8963"}
{"text":"data_8964"}
{"text":"data_8965"}
{"text":"data_8966"}
{"text":"data_8967"}
{"text":"data_8968"}
{"text":"data_8969"}
{"text":"data_8970"}
{"text":"data_8971"}
{"text":"data_8972"}
{"text":"data_8973"}
{"text":"data_8974"}
{"text":"data_8975"}
{"text":"data_8976"}
{"text":"data_8977"}
{"text":"data_8978"}
{"text":"data_8979"}
{"text":"data_8980"}
{"text":"data_8981"}
{"text":"data_8982"}
{"text":"data_8983"}
{"text":"data_8984"}
{"text":"data_8985"}
{"text":"data_8986"}
{"text":"data_8987"}
{"text":"data_8988"}
{"text":"data_8989"}
{"text":"data_8990"}
{"text":"data_8991"}
{"text":"data_8992"}
{"text":"data_8993"}
{"text":"data_8994"}
{"text":"data_8995"}
{"text":"data_8996"}
{"text":"data_8997"}
{"text":"data_8998"}
{"text":"data_8999"}
{"text":"data_9000"}
{"text":"data_9001"}
{"text":"data_9002"}
{"text":"data_9003"}
{"text":"data_9004"}
{"text":"data_9005"}
{"text":"data_9006"}
{"text":"data_9007"}
{"text":"data_9008"}
{"text":"data_9009"}
{"text":"data_9010"}
{"text":"data_9011"}
{"text":"data_9012"}
{"text":"data_9013"}
{"text":"data_9014"}
{"text":"data_9015"}
{"text":"data_9016"}
{"text":"data_9017"}
{"text":"data_9018"}
{"text":"data_9019"}
{"text":"data_9020"}
{"text":"data_9021"}
{"text":"data_9022"}
{"text":"data_9023"}
{"text":"data_9024"}
{"text":"data_9025"}
{"text":"data_9026"}
{"text":"data_9027"}
{"text":"data_9028"}
{"text":"data_9029"}
{"text":"data_9030"}
{"text":"data_9031"}
{"text":"data_9032"}
{"text":"data_9033"}
{"text":"data_9034"}
{"text":"data_9035"}
{"text":"data_9036"}
{"text":"data_9037"}
{"text":"data_9038"}
{"text":"data_9039"}
{"text":"data_9040"}
{"text":"data_9041"}
{"text":"data_9042"}
{"text":"data_9043"}
{"text":"data_9044"}
{"text":"data_9045"}
{"text":"data_9046"}
{"text":"data_9047"}
{"text":"data_9048"}
{"text":"data_9049"}
{"text":"data_9050"}
{"text":"data_9051"}
{"text":"data_9052"}
{"text":"data_9053"}
{"text":"data_9054"}
{"text":"data_9055"}
{"text":"data_9056"}
{"text":"data_9057"}
{"text":"data_9058"}
{"text":"data_9059"}
{"text":"data_9060"}
{"text":"data_9061"}
{"text":"data_9062"}
{"text":"data_9063"}
{"text":"data_9064"}
{"text":"data_9065"}
{"text":"data_9066"}
{"text":"data_9067"}
{"text":"data_9068"}
{"text":"data_9069"}
{"text":"data_9070"}
{"text":"data_9071"}
{"text":"data_9072"}
{"text":"data_9073"}
{"text":"data_9074"}
{"text":"data_9075"}
{"text":"data_9076"}
{"text":"data_9077"}
{"text":"data_9078"}
{"text":"data_9079"}
{"text":"data_9080"}
{"text":"data_9081"}
{"text":"data_9082"}
{"text":"data_9083"}
{"text":"data_9084"}
{"text":"data_9085"}
{"text":"data_9086"}
{"text":"data_9087"}
{"text":"data_9088"}
{"text":"data_9089"}
{"text":"data_9090"}
{"text":"data_9091"}
{"text":"data_9092"}
{"text":"data_9093"}
{"text":"data_9094"}
{"text":"data_9095"}
{"text":"data_9096"}
{"text":"data_9097"}
{"text":"data_9098"}
{"text":"data_9099"}
{"text":"data_9100"}
{"text":"data_9101"}
{"text":"data_9102"}
{"text":"data_9103"}
{"text":"data_9104"}
{"text":"data_9105"}
{"text":"data_9106"}
{"text":"data_9107"}
{"text":"data_9108"}
{"text":"data_9109"}
{"text":"data_9110"}
{"text":"data_9111"}
{"text":"data_9112"}
{"text":"data_9113"}
{"text":"data_9114"}
{"text":"data_9115"}
{"text":"data_9116"}
{"text":"data_9117"}
{"text":"data_9118"}
{"text":"data_9119"}
{"text":"data_9120"}
{"text":"data_9121"}
{"text":"data_9122"}
{"text":"data_9123"}
{"text":"data_9124"}
{"text":"data_9125"}
{"text":"data_9126"}
{"text":"data_9127"}
{"text":"data_9128"}
{"text":"data_9129"}
{"text":"data_9130"}
{"text":"data_9131"}
{"text":"data_9132"}
{"text":"data_9133"}
{"text":"data_9134"}
{"text":"data_9135"}
{"text":"data_9136"}
{"text":"data_9137"}
{"text":"data_9138"}
{"text":"data_9139"}
{"text":"data_9140"}
{"text":"data_9141"}
{"text":"data_9142"}
{"text":"data_9143"}
{"text":"data_9144"}
{"text":"data_9145"}
{"text":"data_9146"}
{"text":"data_9147"}
{"text":"data_9148"}
{"text":"data_9149"}
{"text":"data_9150"}
{"text":"data_9151"}
{"text":"data_9152"}
{"text":"data_9153"}
{"text":"data_9154"}
{"text":"data_9155"}
{"text":"data_9156"}
{"text":"data_9157"}
{"text":"data_9158"}
{"text":"data_9159"}
{"text":"data_9160"}
{"text":"data_9161"}
{"text":"data_9162"}
{"text":"data_9163"}
{"text":"data_9164"}
{"text":"data_9165"}
{"text":"data_9166"}
{"text":"data_9167"}
{"text":"data_9168"}
{"text":"data_9169"}
{"text":"data_9170"}
{"text":"data_9171"}
{"text":"data_9172"}
{"text":"data_9173"}
{"text":"data_9174"}
{"text":"data_9175"}
{"text":"data_9176"}
{"text":"data_9177"}
{"text":"data_9178"}
{"text":"data_9179"}
{"text":"data_9180"}
{"text":"data_9181"}
{"text":"data_9182"}
{"text":"data_9183"}
{"text":"data_9184"}
{"text":"data_9185"}
{"text":"data_9186"}
{"text":"data_9187"}
{"text":"data_9188"}
{"text":"data_9189"}
{"text":"data_9190"}
{"text":"data_9191"}
{"text":"data_9192"}
{"text":"data_9193"}
{"text":"data_9194"}
{"text":"data_9195"}
{"text":"data_9196"}
{"text":"data_9197"}
{"text":"data_9198"}
{"text":"data_9199"}
{"text":"data_9200"}
{"text":"data_9201"}
{"text":"data_9202"}
{"text":"data_9203"}
{"text":"data_9204"}
{"text":"data_9205"}
{"text":"data_9206"}
{"text":"data_9207"}
{"text":"data_9208"}
{"text":"data_9209"}
{"text":"data_9210"}
{"text":"data_9211"}
{"text":"data_9212"}
{"text":"data_9213"}
{"text":"data_9214"}
{"text":"data_9215"}
{"text":"data_9216"}
{"text":"data_9217"}
{"text":"data_9218"}
{"text":"data_9219"}
{"text":"data_9220"}
{"text":"data_9221"}
{"text":"data_9222"}
{"text":"data_9223"}
{"text":"data_9224"}
{"text":"data_9225"}
{"text":"data_9226"}
{"text":"data_9227"}
{"text":"data_9228"}
{"text":"data_9229"}
{"text":"data_9230"}
{"text":"data_9231"}
{"text":"data_9232"}
{"text":"data_9233"}
{"text":"data_9234"}
{"text":"data_9235"}
{"text":"data_9236"}
{"text":"data_9237"}
{"text":"data_9238"}
{"text":"data_9239"}
{"text":"data_9240"}
{"text":"data_9241"}
{"text":"data_9242"}
{"text":"data_9243"}
{"text":"data_9244"}
{"text":"data_9245"}
{"text":"data_9246"}
{"text":"data_9247"}
{"text":"data_9248"}
{"text":"data_9249"}
{"text":"data_9250"}
{"text":"data_9251"}
{"text":"data_9252"}
{"text":"data_9253"}
{"text":"data_9254"}
{"text":"data_9255"}
{"text":"data_9256"}
{"text":"data_9257"}
{"text":"data_9258"}
{"text":"data_9259"}
{"text":"data_9260"}
{"text":"data_9261"}
{"text":"data_9262"}
{"text":"data_9263"}
{"text":"data_9264"}
{"text":"data_9265"}
{"text":"data_9266"}
{"text":"data_9267"}
{"text":"data_9268"}
{"text":"data_9269"}
{"text":"data_9270"}
{"text":"data_9271"}
{"text":"data_9272"}
{"text":"data_9273"}
{"text":"data_9274"}
{"text":"data_9275"}
{"text":"data_9276"}
{"text":"data_9277"}
{"text":"data_9278"}
{"text":"data_9279"}
{"text":"data_9280"}
{"text":"data_9281"}
{"text":"data_9282"}
{"text":"data_9283"}
{"text":"data_9284"}
{"text":"data_9285"}
{"text":"data_9286"}
{"text":"data_9287"}
{"text":"data_9288"}
{"text":"data_9289"}
{"text":"data_9290"}
{"text":"data_9291"}
{"text":"data_9292"}
{"text":"data_9293"}
{"text":"data_9294"}
{"text":"data_9295"}
{"text":"data_9296"}
{"text":"data_9297"}
{"text":"data_9298"}
{"text":"data_9299"}
{"text":"data_9300"}
{"text":"data_9301"}
{"text":"data_9302"}
{"text":"data_9303"}
{"text":"data_9304"}
{"text":"data_9305"}
{"text":"data_9306"}
{"text":"data_9307"}
{"text":"data_9308"}
{"text":"data_9309"}
{"text":"data_9310"}
{"text":"data_9311"}
{"text":"data_9312"}
{"text":"data_9313"}
{"text":"data_9314"}
{"text":"data_9315"}
{"text":"data_9316"}
{"text":"data_9317"}
{"text":"data_9318"}
{"text":"data_9319"}
{"text":"data_9320"}
{"text":"data_9321"}
{"text":"data_9322"}
{"text":"data_9323"}
{"text":"data_9324"}
{"text":"data_9325"}
{"text":"data_9326"}
{"text":"data_9327"}
{"text":"data_9328"}
{"text":"data_9329"}
{"text":"data_9330"}
{"text":"data_9331"}
{"text":"data_9332"}
{"text":"data_9333"}
{"text":"data_9334"}
{"text":"data_9335"}
{"text":"data_9336"}
{"text":"data_9337"}
{"text":"data_9338"}
{"text":"data_9339"}
{"text":"data_9340"}
{"text":"data_9341"}
{"text":"data_9342"}
{"text":"data_9343"}
{"text":"data_9344"}
{"text":"data_9345"}
{"text":"data_9346"}
{"text":"data_9347"}
{"text":"data_9348"}
{"text":"data_9349"}
{"text":"data_9350"}
{"text":"data_9351"}
{"text":"data_9352"}
{"text":"data_9353"}
{"text":"data_9354"}
{"text":"data_9355"}
{"text":"data_9356"}
{"text":"data_9357"}
{"text":"data_9358"}
{"text":"data_9359"}
{"text":"data_9360"}
{"text":"data_9361"}
{"text":"data_9362"}
{"text":"data_9363"}
{"text":"data_9364"}
{"text":"data_9365"}
{"text":"data_9366"}
{"text":"data_9367"}
{"text":"data_9368"}
{"text":"data_9369"}
{"text":"data_9370"}
{"text":"data_9371"}
{"text":"data_9372"}
{"text":"data_9373"}
{"text":"data_9374"}
{"text":"data_9375"}
{"text":"data_9376"}
{"text":"data_9377"}
{"text":"data_9378"}
{"text":"data_9379"}
{"text":"data_9380"}
{"text":"data_9381"}
{"text":"data_9382"}
{"text":"data_9383"}
{"text":"data_9384"}
{"text":"data_9385"}
{"text":"data_9386"}
{"text":"data_9387"}
{"text":"data_9388"}
{"text":"data_9389"}
{"text":"data_9390"}
{"text":"data_9391"}
{"text":"data_9392"}
{"text":"data_9393"}
{"text":"data_9394"}
{"text":"data_9395"}
{"text":"data_9396"}
{"text":"data_9397"}
{"text":"data_9398"}
{"text":"data_9399"}
{"text":"data_9400"}
{"text":"data_9401"}
{"text":"data_9402"}
{"text":"data_9403"}
{"text":"data_9404"}
{"text":"data_9405"}
{"text":"data_9406"}
{"text":"data_9407"}
{"text":"data_9408"}
{"text":"data_9409"}
{"text":"data_9410"}
{"text":"data_9411"}
{"text":"data_9412"}
{"text":"data_9413"}
{"text":"data_9414"}
{"text":"data_9415"}
{"text":"data_9416"}
{"text":"data_9417"}
{"text":"data_9418"}
{"text":"data_9419"}
{"text":"data_9420"}
{"text":"data_9421"}
{"text":"data_9422"}
{"text":"data_9423"}
{"text":"data_9424"}
{"text":"data_9425"}
{"text":"data_9426"}
{"text":"data_9427"}
{"text":"data_9428"}
{"text":"data_9429"}
{"text":"data_9430"}
{"text":"data_9431"}
{"text":"data_9432"}
{"text":"data_9433"}
{"text":"data_9434"}
{"text":"data_9435"}
{"text":"data_9436"}
{"text":"data_9437"}
{"text":"data_9438"}
{"text":"data_9439"}
{"text":"data_9440"}
{"text":"data_9441"}
{"text":"data_9442"}
{"text":"data_9443"}
{"text":"data_9444"}
{"text":"data_9445"}
{"text":"data_9446"}
{"text":"data_9447"}
{"text":"data_9448"}
{"text":"data_9449"}
{"text":"data_9450"}
{"text":"data_9451"}
{"text":"data_9452"}
{"text":"data_9453"}
{"text":"data_9454"}
{"text":"data_9455"}
{"text":"data_9456"}
{"text":"data_9457"}
{"text":"data_9458"}
{"text":"data_9459"}
{"text":"data_9460"}
{"text":"data_9461"}
{"text":"data_9462"}
{"text":"data_9463"}
{"text":"data_9464"}
{"text":"data_9465"}
{"text":"data_9466"}
{"text":"data_9467"}
{"text":"data_9468"}
{"text":"data_9469"}
{"text":"data_9470"}
{"text":"data_9471"}
{"text":"data_9472"}
{"text":"data_9473"}
{"text":"data_9474"}
{"text":"data_9475"}
{"text":"data_9476"}
{"text":"data_9477"}
{"text":"data_9478"}
{"text":"data_9479"}
{"text":"data_9480"}
{"text":"data_9481"}
{"text":"data_9482"}
{"text":"data_9483"}
{"text":"data_9484"}
{"text":"data_9485"}
{"text":"data_9486"}
{"text":"data_9487"}
{"text":"data_9488"}
{"text":"data_9489"}
{"text":"data_9490"}
{"text":"data_9491"}
{"text":"data_9492"}
{"text":"data_9493"}
{"text":"data_9494"}
{"text":"data_9495"}
{"text":"data_9496"}
{"text":"data_9497"}
{"text":"data_9498"}
{"text":"data_9499"}
{"text":"data_9500"}
{"text":"data_9501"}
{"text":"data_9502"}
{"text":"data_9503"}
{"text":"data_9504"}
{"text":"data_9505"}
{"text":"data_9506"}
{"text":"data_9507"}
{"text":"data_9508"}
{"text":"data_9509"}
{"text":"data_9510"}
{"text":"data_9511"}
{"text":"data_9512"}
{"text":"data_9513"}
{"text":"data_9514"}
{"text":"data_9515"}
{"text":"data_9516"}
{"text":"data_9517"}
{"text":"data_9518"}
{"text":"data_9519"}
{"text":"data_9520"}
{"text":"data_9521"}
{"text":"data_9522"}
{"text":"data_9523"}
{"text":"data_9524"}
{"text":"data_9525"}
{"text":"data_9526"}
{"text":"data_9527"}
{"text":"data_9528"}
{"text":"data_9529"}
{"text":"data_9530"}
{"text":"data_9531"}
{"text":"data_9532"}
{"text":"data_9533"}
{"text":"data_9534"}
{"text":"data_9535"}
{"text":"data_9536"}
{"text":"data_9537"}
{"text":"data_9538"}
{"text":"data_9539"}
{"text":"data_9540"}
{"text":"data_9541"}
{"text":"data_9542"}
{"text":"data_9543"}
{"text":"data_9544"}
{"text":"data_9545"}
{"text":"data_9546"}
{"text":"data_9547"}
{"text":"data_9548"}
{"text":"data_9549"}
{"text":"data_9550"}
{"text":"data_9551"}
{"text":"data_9552"}
{"text":"data_9553"}
{"text":"data_9554"}
{"text":"data_9555"}
{"text":"data_9556"}
{"text":"data_9557"}
{"text":"data_9558"}
{"text":"data_9559"}
{"text":"data_9560"}
{"text":"data_9561"}
{"text":"data_9562"}
{"text":"data_9563"}
{"text":"data_9564"}
{"text":"data_9565"}
{"text":"data_9566"}
{"text":"data_9567"}
{"text":"data_9568"}
{"text":"data_9569"}
{"text":"data_9570"}
{"text":"data_9571"}
{"text":"data_9572"}
{"text":"data_9573"}
{"text":"data_9574"}
{"text":"data_9575"}
{"text":"data_9576"}
{"text":"data_9577"}
{"text":"data_9578"}
{"text":"data_9579"}
{"text":"data_9580"}
{"text":"data_9581"}
{"text":"data_9582"}
{"text":"data_9583"}
{"text":"data_9584"}
{"text":"data_9585"}
{"text":"data_9586"}
{"text":"data_9587"}
{"text":"data_9588"}
{"text":"data_9589"}
{"text":"data_9590"}
{"text":"data_9591"}
{"text":"data_9592"}
{"text":"data_9593"}
{"text":"data_9594"}
{"text":"data_9595"}
{"text":"data_9596"}
{"text":"data_9597"}
{"text":"data_9598"}
{"text":"data_9599"}
{"text":"data_9600"}
{"text":"data_9601"}
{"text":"data_9602"}
{"text":"data_9603"}
{"text":"data_9604"}
{"text":"data_9605"}
{"text":"data_9606"}
{"text":"data_9607"}
{"text":"data_9608"}
{"text":"data_9609"}
{"text":"data_9610"}
{"text":"data_9611"}
{"text":"data_9612"}
{"text":"data_9613"}
{"text":"data_9614"}
{"text":"data_9615"}
{"text":"data_9616"}
{"text":"data_9617"}
{"text":"data_9618"}
{"text":"data_9619"}
{"text":"data_9620"}
{"text":"data_9621"}
{"text":"data_9622"}
{"text":"data_9623"}
{"text":"data_9624"}
{"text":"data_9625"}
{"text":"data_9626"}
{"text":"data_9627"}
{"text":"data_9628"}
{"text":"data_9629"}
{"text":"data_9630"}
{"text":"data_9631"}
{"text":"data_9632"}
{"text":"data_9633"}
{"text":"data_9634"}
{"text":"data_9635"}
{"text":"data_9636"}
{"text":"data_9637"}
{"text":"data_9638"}
{"text":"data_9639"}
{"text":"data_9640"}
{"text":"data_9641"}
{"text":"data_9642"}
{"text":"data_9643"}
{"text":"data_9644"}
{"text":"data_9645"}
{"text":"data_9646"}
{"text":"data_9647"}
{"text":"data_9648"}
{"text":"data_9649"}
{"text":"data_9650"}
{"text":"data_9651"}
{"text":"data_9652"}
{"text":"data_9653"}
{"text":"data_9654"}
{"text":"data_9655"}
{"text":"data_9656"}
{"text":"data_9657"}
{"text":"data_9658"}
{"text":"data_9659"}
{"text":"data_9660"}
{"text":"data_9661"}
{"text":"data_9662"}
{"text":"data_9663"}
{"text":"data_9664"}
{"text":"data_9665"}
{"text":"data_9666"}
{"text":"data_9667"}
{"text":"data_9668"}
{"text":"data_9669"}
{"text":"data_9670"}
{"text":"data_9671"}
{"text":"data_9672"}
{"text":"data_9673"}
{"text":"data_9674"}
{"text":"data_9675"}
{"text":"data_9676"}
{"text":"data_9677"}
{"text":"data_9678"}
{"text":"data_9679"}
{"text":"data_9680"}
{"text":"data_9681"}
{"text":"data_9682"}
{"text":"data_9683"}
{"text":"data_9684"}
{"text":"data_9685"}
{"text":"data_9686"}
{"text":"data_9687"}
{"text":"data_9688"}
{"text":"data_9689"}
{"text":"data_9690"}
{"text":"data_9691"}
{"text":"data_9692"}
{"text":"data_9693"}
{"text":"data_9694"}
{"text":"data_9695"}
{"text":"data_9696"}
{"text":"data_9697"}
{"text":"data_9698"}
{"text":"data_9699"}
{"text":"data_9700"}
{"text":"data_9701"}
{"text":"data_9702"}
{"text":"data_9703"}
{"text":"data_9704"}
{"text":"data_9705"}
{"text":"data_9706"}
{"text":"data_9707"}
{"text":"data_9708"}
{"text":"data_9709"}
{"text":"data_9710"}
{"text":"data_9711"}
{"text":"data_9712"}
{"text":"data_9713"}
{"text":"data_9714"}
{"text":"data_9715"}
{"text":"data_9716"}
{"text":"data_9717"}
{"text":"data_9718"}
{"text":"data_9719"}
{"text":"data_9720"}
{"text":"data_9721"}
{"text":"data_9722"}
{"text":"data_9723"}
{"text":"data_9724"}
{"text":"data_9725"}
{"text":"data_9726"}
{"text":"data_9727"}
{"text":"data_9728"}
{"text":"data_9729"}
{"text":"data_9730"}
{"text":"data_9731"}
{"text":"data_9732"}
{"text":"data_9733"}
{"text":"data_9734"}
{"text":"data_9735"}
{"text":"data_9736"}
{"text":"data_9737"}
{"text":"data_9738"}
{"text":"data_9739"}
{"text":"data_9740"}
{"text":"data_9741"}
{"text":"data_9742"}
{"text":"data_9743"}
{"text":"data_9744"}
{"text":"data_9745"}
{"text":"data_9746"}
{"text":"data_9747"}
{"text":"data_9748"}
{"text":"data_9749"}
{"text":"data_9750"}
{"text":"data_9751"}
{"text":"data_9752"}
{"text":"data_9753"}
{"text":"data_9754"}
{"text":"data_9755"}
{"text":"data_9756"}
{"text":"data_9757"}
{"text":"data_9758"}
{"text":"data_9759"}
{"text":"data_9760"}
{"text":"data_9761"}
{"text":"data_9762"}
{"text":"data_9763"}
{"text":"data_9764"}
{"text":"data_9765"}
{"text":"data_9766"}
{"text":"data_9767"}
{"text":"data_9768"}
{"text":"data_9769"}
{"text":"data_9770"}
{"text":"data_9771"}
{"text":"data_9772"}
{"text":"data_9773"}
{"text":"data_9774"}
{"text":"data_9775"}
{"text":"data_9776"}
{"text":"data_9777"}
{"text":"data_9778"}
{"text":"data_9779"}
{"text":"data_9780"}
{"text":"data_9781"}
{"text":"data_9782"}
{"text":"data_9783"}
{"text":"data_9784"}
{"text":"data_9785"}
{"text":"data_9786"}
{"text":"data_9787"}
{"text":"data_9788"}
{"text":"data_9789"}
{"text":"data_9790"}
{"text":"data_9791"}
{"text":"data_9792"}
{"text":"data_9793"}
{"text":"data_9794"}
{"text":"data_9795"}
{"text":"data_9796"}
{"text":"data_9797"}
{"text":"data_9798"}
{"text":"data_9799"}
{"text":"data_9800"}
{"text":"data_9801"}
{"text":"data_9802"}
{"text":"data_9803"}
{"text":"data_9804"}
{"text":"data_9805"}
{"text":"data_9806"}
{"text":"data_9807"}
{"text":"data_9808"}
{"text":"data_9809"}
{"text":"data_9810"}
{"text":"data_9811"}
{"text":"data_9812"}
{"text":"data_9813"}
{"text":"data_9814"}
{"text":"data_9815"}
{"text":"data_9816"}
{"text":"data_9817"}
{"text":"data_9818"}
{"text":"data_9819"}
{"text":"data_9820"}
{"text":"data_9821"}
{"text":"data_9822"}
{"text":"data_9823"}
{"text":"data_9824"}
{"text":"data_9825"}
{"text":"data_9826"}
{"text":"data_9827"}
{"text":"data_9828"}
{"text":"data_9829"}
{"text":"data_9830"}
{"text":"data_9831"}
{"text":"data_9832"}
{"text":"data_9833"}
{"text":"data_9834"}
{"text":"data_9835"}
{"text":"data_9836"}
{"text":"data_9837"}
{"text":"data_9838"}
{"text":"data_9839"}
{"text":"data_9840"}
{"text":"data_9841"}
{"text":"data_9842"}
{"text":"data_9843"}
{"text":"data_9844"}
{"text":"data_9845"}
{"text":"data_9846"}
{"text":"data_9847"}
{"text":"data_9848"}
{"text":"data_9849"}
{"text":"data_9850"}
{"text":"data_9851"}
{"text":"data_9852"}
{"text":"data_9853"}
{"text":"data_9854"}
{"text":"data_9855"}
{"text":"data_9856"}
{"text":"data_9857"}
{"text":"data_9858"}
{"text":"data_9859"}
{"text":"data_9860"}
{"text":"data_9861"}
{"text":"data_9862"}
{"text":"data_9863"}
{"text":"data_9864"}
{"text":"data_9865"}
{"text":"data_9866"}
{"text":"data_9867"}
{"text":"data_9868"}
{"text":"data_9869"}
{"text":"data_9870"}
{"text":"data_9871"}
{"text":"data_9872"}
{"text":"data_9873"}
{"text":"data_9874"}
{"text":"data_9875"}
{"text":"data_9876"}
{"text":"data_9877"}
{"text":"data_9878"}
{"text":"data_9879"}
{"text":"data_9880"}
{"text":"data_9881"}
{"text":"data_9882"}
{"text":"data_9883"}
{"text":"data_9884"}
{"text":"data_9885"}
{"text":"data_9886"}
{"text":"data_9887"}
{"text":"data_9888"}
{"text":"data_9889"}
{"text":"data_9890"}
{"text":"data_9891"}
{"text":"data_9892"}
{"text":"data_9893"}
{"text":"data_9894"}
{"text":"data_9895"}
{"text":"data_9896"}
{"text":"data_9897"}
{"text":"data_9898"}
{"text":"data_9899"}
{"text":"data_9900"}
{"text":"data_9901"}
{"text":"data_9902"}
{"text":"data_9903"}
{"text":"data_9904"}
{"text":"data_9905"}
{"text":"data_9906"}
{"text":"data_9907"}
{"text":"data_9908"}
{"text":"data_9909"}
{"text":"data_9910"}
{"text":"data_9911"}
{"text":"data_9912"}
{"text":"data_9913"}
{"text":"data_9914"}
{"text":"data_9915"}
{"text":"data_9916"}
{"text":"data_9917"}
{"text":"data_9918"}
{"text":"data_9919"}
{"text":"data_9920"}
{"text":"data_9921"}
{"text":"data_9922"}
{"text":"data_9923"}
{"text":"data_9924"}
{"text":"data_9925"}
{"text":"data_9926"}
{"text":"data_9927"}
{"text":"data_9928"}
{"text":"data_9929"}
{"text":"data_9930"}
{"text":"data_9931"}
{"text":"data_9932"}
{"text":"data_9933"}
{"text":"data_9934"}
{"text":"data_9935"}
{"text":"data_9936"}
{"text":"data_9937"}
{"text":"data_9938"}
{"text":"data_9939"}
{"text":"data_9940"}
{"text":"data_9941"}
{"text":"data_9942"}
{"text":"data_9943"}
{"text":"data_9944"}
{"text":"data_9945"}
{"text":"data_9946"}
{"text":"data_9947"}
{"text":"data_9948"}
{"text":"data_9949"}
{"text":"data_9950"}
{"text":"data_9951"}
{"text":"data_9952"}
{"text":"data_9953"}
{"text":"data_9954"}
{"text":"data_9955"}
{"text":"data_9956"}
{"text":"data_9957"}
{"text":"data_9958"}
{"text":"data_9959"}
{"text":"data_9960"}
{"text":"data_9961"}
{"text":"data_9962"}
{"text":"data_9963"}
{"text":"data_9964"}
{"text":"data_9965"}
{"text":"data_9966"}
{"text":"data_9967"}
{"text":"data_9968"}
{"text":"data_9969"}
{"text":"data_9970"}
{"text":"data_9971"}
{"text":"data_9972"}
{"text":"data_9973"}
{"text":"data_9974"}
{"text":"data_9975"}
{"text":"data_9976"}
{"text":"data_9977"}
{"text":"data_9978"}
{"text":"data_9979"}
{"text":"data_9980"}
{"text":"data_9981"}
{"text":"data_9982"}
{"text":"data_9983"}
{"text":"data_9984"}
{"text":"data_9985"}
{"text":"data_9986"}
{"text":"data_9987"}
{"text":"data_9988"}
{"text":"data_9989"}
{"text":"data_9990"}
{"text":"data_9991"}
{"text":"data_9992"}
{"text":"data_9993"}
{"text":"data_9994"}
{"text":"data_9995"}
{"text":"data_9996"}
{"text":"data_9997"}
{"text":"data_9998"}
{"text":"data_9999"}
| promptflow/src/promptflow/tests/test_configs/datas/load_data_cases/10k.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/datas/load_data_cases/10k.jsonl",
"repo_id": "promptflow",
"token_count": 89936
} | 64 |
{"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g", "answer": "Channel", "evidence": "Url", "variant_id": "fake_variant1"}
| promptflow/src/promptflow/tests/test_configs/datas/webClassification1.jsonl/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/datas/webClassification1.jsonl",
"repo_id": "promptflow",
"token_count": 53
} | 65 |
entry: my_func
path: ./not_exist.py | promptflow/src/promptflow/tests/test_configs/eager_flows/invalid_illegal_path/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/eager_flows/invalid_illegal_path/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 14
} | 66 |
import argparse
import os
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument("--input-path", type=str, required=True)
parser.add_argument("--output-path", type=str, required=True)
parser.add_argument("--count", type=int, required=True)
args = parser.parse_args()
env_var = os.environ.get("CONNECTION_KEY")
assert env_var is not None, "Environment variable CONNECTION_KEY not set!"
assert env_var != "${azure_open_ai_connection.api_key}", "Environment variable CONNECTION_KEY not resolved!"
with open(args.input_path, "r", encoding="utf-8") as f:
input_lines = f.readlines()
assert args.count == len(input_lines), \
f"Data number {args.count} different from input lines {len(input_lines)} in file!"
output_path = Path(args.output_path)
assert output_path.exists(), f"Output path {args.output_path!r} not exists!"
with open(output_path / "data.jsonl", "w", encoding="utf-8") as f:
f.writelines(input_lines)
| promptflow/src/promptflow/tests/test_configs/experiments/basic-script-template/generate_data/generate_data.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/experiments/basic-script-template/generate_data/generate_data.py",
"repo_id": "promptflow",
"token_count": 318
} | 67 |
id: template_eval_flow
name: Template Evaluation Flow
inputs:
groundtruth:
type: string
is_chat_input: false
prediction:
type: string
is_chat_input: false
outputs:
results:
type: string
reference: ${line_process.output}
nodes:
- name: line_process
type: python
source:
type: code
path: line_process.py
inputs:
groundtruth: ${inputs.groundtruth}
prediction: ${inputs.prediction}
use_variants: false
- name: aggregate
type: python
source:
type: code
path: aggregate.py
inputs:
processed_results: ${line_process.output}
aggregation: true
use_variants: false
node_variants: {}
environment:
python_requirements_txt: requirements.txt
| promptflow/src/promptflow/tests/test_configs/flows/aggregation_node_failed/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/aggregation_node_failed/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 252
} | 68 |
inputs:
chat_history:
type: list
is_chat_history: true
default:
- inputs:
question: hi
outputs:
answer: hi
- inputs:
question: who are you
outputs:
answer: who are you
question:
type: string
is_chat_input: true
default: What is ChatGPT?
outputs:
answer:
type: string
reference: ${chat_node.output}
is_chat_output: true
nodes:
- inputs:
deployment_name: gpt-35-turbo
max_tokens: "256"
temperature: "0.7"
chat_history: ${inputs.chat_history}
question: ${inputs.question}
name: chat_node
type: llm
source:
type: code
path: chat.jinja2
api: chat
provider: AzureOpenAI
connection: azure_open_ai_connection | promptflow/src/promptflow/tests/test_configs/flows/chat_flow_with_default_history/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/chat_flow_with_default_history/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 334
} | 69 |
import threading
from time import sleep
from promptflow import tool
@tool
def wait(**kwargs) -> int:
if kwargs["throw_exception"]:
raise Exception("test exception")
for i in range(10):
print(f"Thread {threading.get_ident()} write test log number {i}")
sleep(2)
return 0
| promptflow/src/promptflow/tests/test_configs/flows/concurrent_execution_flow/wait_short.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/concurrent_execution_flow/wait_short.py",
"repo_id": "promptflow",
"token_count": 114
} | 70 |
from promptflow import tool
@tool
def double(input: int) -> int:
    return 2 * input
| promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_aggregate_bypassed/double.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_aggregate_bypassed/double.py",
"repo_id": "promptflow",
"token_count": 27
} | 71 |
from promptflow import tool
@tool
def test_print_input(input_str: str, input_bool: bool, input_list: list, input_dict: dict):
assert not input_bool
assert input_list == []
assert input_dict == {}
print(input_str)
return input_str | promptflow/src/promptflow/tests/test_configs/flows/default_input/test_print_input.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/default_input/test_print_input.py",
"repo_id": "promptflow",
"token_count": 88
} | 72 |
inputs:
key:
type: object
outputs:
output:
type: string
reference: ${print_val.output.value}
nodes:
- name: print_val
use_variants: true
type: python
source:
type: code
path: print_val.py
node_variants:
print_val:
default_variant_id: variant1
variants:
variant1:
node:
type: python
source:
type: code
path: print_val.py
inputs:
key: ${inputs.key}
conn: mock_custom_connection | promptflow/src/promptflow/tests/test_configs/flows/flow_with_dict_input_with_variant/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_dict_input_with_variant/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 250
} | 73 |
import os
from promptflow import tool
@tool
def get_env_var(key: str):
from langchain import __version__
print(__version__)
print(os.environ.get(key))
# get from env var
return {"value": os.environ.get(key)}
| promptflow/src/promptflow/tests/test_configs/flows/flow_with_requirements_txt/print_env.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_requirements_txt/print_env.py",
"repo_id": "promptflow",
"token_count": 88
} | 74 |
inputs:
key:
type: object
outputs:
output:
type: string
reference: ${print_val.output.value}
nodes:
- name: print_val
type: python
source:
type: code
path: print_val.py
inputs:
key: ${inputs.key}
| promptflow/src/promptflow/tests/test_configs/flows/flow_with_user_output/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_user_output/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 99
} | 75 |
$schema: https://azuremlschemas.azureedge.net/latest/flow.schema.json
name: web_classificiation_flow_3
display_name: Web Classification
type: standard
description: Create flows that use large language models to classify URLs into multiple categories.
path: ./flow.dag.yaml
| promptflow/src/promptflow/tests/test_configs/flows/meta_files/flow.meta.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/meta_files/flow.meta.yaml",
"repo_id": "promptflow",
"token_count": 79
} | 76 |
[{"idx": 1}, {"idx": 4}, {"idx": 10}] | promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/samples.json/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/samples.json",
"repo_id": "promptflow",
"token_count": 20
} | 77 |
inputs:
idx:
type: int
mod:
type: int
mod_2:
type: int
outputs:
output:
type: int
reference: ${my_python_tool_with_failed_line_2.output}
nodes:
- name: my_python_tool_with_failed_line_1
type: python
source:
type: code
path: my_python_tool_with_failed_line.py
inputs:
idx: ${inputs.idx}
mod: ${inputs.mod}
- name: my_python_tool_with_failed_line_2
type: python
source:
type: code
path: my_python_tool_with_failed_line.py
inputs:
idx: ${my_python_tool_with_failed_line_1.output}
mod: ${inputs.mod_2} | promptflow/src/promptflow/tests/test_configs/flows/python_tool_partial_failure/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/python_tool_partial_failure/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 257
} | 78 |
inputs:
image_1:
type: image
image_2:
type: image
outputs:
output:
type: image
reference: ${python_node.output}
nodes:
- name: python_node
type: python
source:
type: code
path: pick_an_image.py
inputs:
image_1: ${inputs.image_1}
image_2: ${inputs.image_2}
| promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image_without_default/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image_without_default/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 134
} | 79 |
from promptflow import tool
from dummy_utils.util_tool import passthrough
@tool
def main(x: str):
return passthrough(x)
| promptflow/src/promptflow/tests/test_configs/flows/script_with_import/dummy_utils/main.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/script_with_import/dummy_utils/main.py",
"repo_id": "promptflow",
"token_count": 42
} | 80 |
name: node_wrong_order
inputs:
text:
type: string
skip:
type: bool
outputs:
result:
type: string
reference: ${third_node}
nodes:
- name: third_node
type: python
source:
type: code
path: test.py
inputs:
text: ${second_node}
- name: first_node
type: python
source:
type: code
path: test.py
inputs:
text: ${inputs.text}
- name: second_node
type: python
source:
type: code
path: test.py
inputs:
text: ${first_node}
activate:
when: ${inputs.skip}
is: true
| promptflow/src/promptflow/tests/test_configs/flows/unordered_nodes_with_activate/flow.dag.yaml/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/unordered_nodes_with_activate/flow.dag.yaml",
"repo_id": "promptflow",
"token_count": 229
} | 81 |
import json
import time
from promptflow import tool
# use this to test the timeout
time.sleep(2)
@tool
def convert_to_dict(input_str: str):
try:
return json.loads(input_str)
except Exception as e:
print("input is not valid, error: {}".format(e))
return {"category": "None", "evidence": "None"}
| promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/convert_to_dict.py/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/convert_to_dict.py",
"repo_id": "promptflow",
"token_count": 123
} | 82 |
<jupyter_start><jupyter_text>Set flow path<jupyter_code>from promptflow import PFClient
# client can help manage your runs and connections.
pf = PFClient()
flow = "../flows/simple_hello_world" # path to the flow directory<jupyter_output><empty_output><jupyter_text>Quick test<jupyter_code># Test flow
flow_inputs = {
"name": "dummy",
}
flow_result = pf.test(flow=flow, inputs=flow_inputs)
# Test single node in the flow
node_name = "hello_world"
node_inputs = {
"name": "dummy"
}
flow_result = pf.test(flow=flow, inputs=node_inputs, node=node_name)
print(f"Node result: {flow_result}")<jupyter_output><empty_output><jupyter_text>Batch run with a data file (with multiple lines of test data)<jupyter_code>data = "../datas/simple_hello_world.jsonl" # path to the data file
# create run with default variant
base_run = pf.run(flow=flow, data=data, stream=True)<jupyter_output><empty_output> | promptflow/src/promptflow/tests/test_configs/notebooks/dummy.ipynb/0 | {
"file_path": "promptflow/src/promptflow/tests/test_configs/notebooks/dummy.ipynb",
"repo_id": "promptflow",
"token_count": 331
} | 83 |
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic",
"tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3630'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.033'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1372'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.057'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.077'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.124'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:12:02 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '49'
content-md5:
- quXiEreYvPinSj0HsaNa/g==
content-type:
- application/octet-stream
last-modified:
- Wed, 08 Nov 2023 04:26:09 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 08 Nov 2023 04:26:09 GMT
x-ms-meta-name:
- c4092674-5e53-4c17-b78d-75353ae0edb6
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- 579021dc-8ac8-4c73-8110-4642bd00c69b
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:12:08 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/env_var_names.jsonl
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.072'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.120'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:12:11 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/flow_with_requirements_txt/flow.dag.yaml
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '304'
content-md5:
- FfuwUh4d2qJABbXwj4ACMA==
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 09:33:34 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 17 Jan 2024 09:33:33 GMT
x-ms-meta-name:
- 170072d0-60d9-4d21-8c3d-5ac5f8434bb9
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:12:16 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/flow_with_requirements_txt/flow.dag.yaml
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: '{"flowDefinitionDataStoreName": "workspaceblobstore", "flowDefinitionBlobPath":
"LocalUpload/000000000000000000000000000000000000/flow_with_requirements_txt/flow.dag.yaml",
"runId": "name", "runDisplayName": "name", "runExperimentName": "", "batchDataInput":
{"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/000000000000000000000000000000000000/env_var_names.jsonl"},
"inputsMapping": {}, "connections": {}, "environmentVariables": {}, "runtimeName":
"fake-runtime-name", "sessionId": "000000000000000000000000000000000000000000000000",
"sessionSetupMode": "SystemWait", "flowLineageId": "0000000000000000000000000000000000000000000000000000000000000000",
"runDisplayNameGenerationType": "UserProvidedMacro"}'
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '804'
Content-Type:
- application/json
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: POST
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/submit
response:
body:
string: '"name"'
headers:
connection:
- keep-alive
content-length:
- '38'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
x-content-type-options:
- nosniff
x-request-time:
- '18.617'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "print_env", "type": "python", "source":
{"type": "code", "path": "print_env.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "print_env.py", "reduce": false}], "tools": [{"name": "Azure OpenAI
GPT-4 Turbo with Vision", "type": "custom_llm", "inputs": {"connection": {"type":
["AzureOpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "frequency_penalty":
{"type": ["double"], "default": 0, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "max_tokens": {"type": ["int"], "default":
512, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Azure OpenAI GPT-4 Turbo
with Vision to leverage AOAI vision ability.", "module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Content Safety (Text Analyze)",
"type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"1.0.3", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": true, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "1.0.3", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Model LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "default": "", "dynamic_list": {"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [{"name": "endpoint", "optional": true, "reference": "${inputs.endpoint}",
"type": ["string"]}]}, "allow_manual_entry": true, "is_multi_select": false,
"input_type": "default"}, "endpoint_name": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"}, "allow_manual_entry":
true, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an open model from the Azure Model catalog, deployed to
an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_model_llm", "class_name": "OpenModelLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": 512, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Serp API", "type": "python", "inputs":
{"connection": {"type": ["SerpConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "engine": {"type": ["string"], "default":
"google", "enum": ["google", "bing"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "location": {"type": ["string"], "default":
"", "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"num": {"type": ["int"], "default": "10", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "safe": {"type":
["string"], "default": "off", "enum": ["active", "off"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Use Serp API to obtain search results from a specific search engine.", "module":
"promptflow.tools.serpapi", "class_name": "SerpAPI", "function": "search",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Index Lookup",
"type": "python", "inputs": {"acs_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_embedding_field": {"type": ["string"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Collection(Edm.Single)", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_index_connection": {"type": ["CognitiveSearchConnection"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "acs_index_name":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_indices",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"acs_metadata_field": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Azure AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "aoai_embedding_connection": {"type":
["AzureOpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["Azure OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "embedding_deployment": {"type": ["string"], "enabled_by":
"embedding_type", "enabled_by_value": ["Azure OpenAI"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_aoai_embedding_deployments",
"func_kwargs": [{"name": "aoai_connection", "optional": false, "reference":
"${inputs.aoai_embedding_connection}", "type": ["AzurOpenAIConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"embedding_model": {"type": ["string"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI", "Hugging Face"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_embedding_models",
"func_kwargs": [{"name": "embedding_type", "optional": false, "reference":
"${inputs.embedding_type}", "type": ["string"]}]}, "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "embedding_type":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search", "FAISS", "Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_embedding_types",
"func_kwargs": [{"name": "index_type", "optional": false, "reference": "${inputs.index_type}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "faiss_index_path": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["FAISS"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "index_type": {"type":
["string"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_index_types"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_asset_id": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Registered Index"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_registered_mlindices"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_content": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "generated_by": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.forward_mapping",
"func_kwargs": [{"name": "index_type", "reference": "${inputs.index_type}",
"type": ["string"]}, {"name": "mlindex_asset_id", "optional": true, "reference":
"${inputs.mlindex_asset_id}", "type": ["string"]}, {"name": "mlindex_path",
"optional": true, "reference": "${inputs.mlindex_path}", "type": ["string"]},
{"name": "acs_index_connection", "optional": true, "reference": "${inputs.acs_index_connection}",
"type": ["CognitiveSearchConnection"]}, {"name": "acs_index_name", "optional":
true, "reference": "${inputs.acs_index_name}", "type": ["string"]}, {"name":
"acs_content_field", "optional": true, "reference": "${inputs.acs_content_field}",
"type": ["string"]}, {"name": "acs_embedding_field", "optional": true, "reference":
"${inputs.acs_embedding_field}", "type": ["string"]}, {"name": "acs_metadata_field",
"optional": true, "reference": "${inputs.acs_metadata_field}", "type": ["string"]},
{"name": "semantic_configuration", "optional": true, "reference": "${inputs.semantic_configuration}",
"type": ["string"]}, {"name": "faiss_index_path", "optional": true, "reference":
"${inputs.faiss_index_path}", "type": ["string"]}, {"name": "pinecone_index_connection",
"optional": true, "reference": "${inputs.pinecone_index_connection}", "type":
["string"]}, {"name": "pinecone_index_name", "optional": true, "reference":
"${inputs.pinecone_index_name}", "type": ["string"]}, {"name": "pinecone_content_field",
"optional": true, "reference": "${inputs.pinecone_content_field}", "type":
["string"]}, {"name": "pinecone_metadata_field", "optional": true, "reference":
"${inputs.pinecone_metadata_field}", "type": ["string"]}, {"name": "embedding_type",
"optional": true, "reference": "${inputs.embedding_type}", "type": ["string"]},
{"name": "aoai_embedding_connection", "optional": true, "reference": "${inputs.aoai_embedding_connection}",
"type": ["AzureOpenAIConnection"]}, {"name": "oai_embedding_connection", "optional":
true, "reference": "${inputs.oai_embedding_connection}", "type": ["string"]},
{"name": "embedding_model", "optional": true, "reference": "${inputs.embedding_model}",
"type": ["string"]}, {"name": "embedding_deployment", "optional": true, "reference":
"${inputs.embedding_deployment}", "type": ["string"]}], "reverse_func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.reverse_mapping"}, "input_type":
"default"}, "mlindex_path": {"type": ["string"], "enabled_by": "index_type",
"enabled_by_value": ["MLIndex file from path"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "oai_embedding_connection":
{"type": ["OpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "pinecone_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Pinecone"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_index_connection":
{"type": ["PineconeConnection"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_connections"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"pinecone_index_name": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_indices",
"func_kwargs": [{"name": "pinecone_connection_name", "optional": false, "reference":
"${inputs.pinecone_index_connection}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_metadata_field":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Pinecone"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"queries": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query_type": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_query_types",
"func_kwargs": [{"name": "mlindex_content", "optional": false, "reference":
"${inputs.mlindex_content}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "semantic_configuration":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_semantic_configurations",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "top_k": {"type": ["int"], "default": 3, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search an AzureML Vector Index for relevant results using one or more text
queries.", "module": "promptflow_vectordb.tool.common_index_lookup", "function":
"search", "is_builtin": true, "package": "promptflow-vectordb", "package_version":
"0.0.1", "enable_kwargs": false, "tool_state": "preview"}, {"name": "Faiss
Index Lookup", "type": "python", "inputs": {"path": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "vector": {"type": ["list"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search vector based query from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "print_env.py", "type": "python",
"inputs": {"key": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "print_env.py", "function": "get_env_var",
"is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs":
{"key": {"type": "string", "is_chat_input": false}}, "outputs": {"output":
{"type": "string", "reference": "${print_env.output.value}", "evaluation_only":
false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name",
"flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri":
"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic",
"inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath":
"promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath":
"flow.dag.yaml", "flowSnapshotId": "031ac559-2433-4b4b-87e5-1aa6a1170995",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '26109'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '1.629'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "print_env", "type": "python", "source":
{"type": "code", "path": "print_env.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "print_env.py", "reduce": false}], "tools": [{"name": "Azure OpenAI
GPT-4 Turbo with Vision", "type": "custom_llm", "inputs": {"connection": {"type":
["AzureOpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "frequency_penalty":
{"type": ["double"], "default": 0, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "max_tokens": {"type": ["int"], "default":
512, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Azure OpenAI GPT-4 Turbo
with Vision to leverage AOAI vision ability.", "module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Content Safety (Text Analyze)",
"type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"1.0.3", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": true, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "1.0.3", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Model LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "default": "", "dynamic_list": {"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [{"name": "endpoint", "optional": true, "reference": "${inputs.endpoint}",
"type": ["string"]}]}, "allow_manual_entry": true, "is_multi_select": false,
"input_type": "default"}, "endpoint_name": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"}, "allow_manual_entry":
true, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an open model from the Azure Model catalog, deployed to
an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_model_llm", "class_name": "OpenModelLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": 512, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Serp API", "type": "python", "inputs":
{"connection": {"type": ["SerpConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "engine": {"type": ["string"], "default":
"google", "enum": ["google", "bing"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "location": {"type": ["string"], "default":
"", "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"num": {"type": ["int"], "default": "10", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "safe": {"type":
["string"], "default": "off", "enum": ["active", "off"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Use Serp API to obtain search results from a specific search engine.", "module":
"promptflow.tools.serpapi", "class_name": "SerpAPI", "function": "search",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Index Lookup",
"type": "python", "inputs": {"acs_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_embedding_field": {"type": ["string"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Collection(Edm.Single)", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_index_connection": {"type": ["CognitiveSearchConnection"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "acs_index_name":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_indices",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"acs_metadata_field": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Azure AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "aoai_embedding_connection": {"type":
["AzureOpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["Azure OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "embedding_deployment": {"type": ["string"], "enabled_by":
"embedding_type", "enabled_by_value": ["Azure OpenAI"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_aoai_embedding_deployments",
"func_kwargs": [{"name": "aoai_connection", "optional": false, "reference":
"${inputs.aoai_embedding_connection}", "type": ["AzurOpenAIConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"embedding_model": {"type": ["string"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI", "Hugging Face"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_embedding_models",
"func_kwargs": [{"name": "embedding_type", "optional": false, "reference":
"${inputs.embedding_type}", "type": ["string"]}]}, "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "embedding_type":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search", "FAISS", "Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_embedding_types",
"func_kwargs": [{"name": "index_type", "optional": false, "reference": "${inputs.index_type}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "faiss_index_path": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["FAISS"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "index_type": {"type":
["string"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_index_types"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_asset_id": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Registered Index"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_registered_mlindices"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_content": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "generated_by": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.forward_mapping",
"func_kwargs": [{"name": "index_type", "reference": "${inputs.index_type}",
"type": ["string"]}, {"name": "mlindex_asset_id", "optional": true, "reference":
"${inputs.mlindex_asset_id}", "type": ["string"]}, {"name": "mlindex_path",
"optional": true, "reference": "${inputs.mlindex_path}", "type": ["string"]},
{"name": "acs_index_connection", "optional": true, "reference": "${inputs.acs_index_connection}",
"type": ["CognitiveSearchConnection"]}, {"name": "acs_index_name", "optional":
true, "reference": "${inputs.acs_index_name}", "type": ["string"]}, {"name":
"acs_content_field", "optional": true, "reference": "${inputs.acs_content_field}",
"type": ["string"]}, {"name": "acs_embedding_field", "optional": true, "reference":
"${inputs.acs_embedding_field}", "type": ["string"]}, {"name": "acs_metadata_field",
"optional": true, "reference": "${inputs.acs_metadata_field}", "type": ["string"]},
{"name": "semantic_configuration", "optional": true, "reference": "${inputs.semantic_configuration}",
"type": ["string"]}, {"name": "faiss_index_path", "optional": true, "reference":
"${inputs.faiss_index_path}", "type": ["string"]}, {"name": "pinecone_index_connection",
"optional": true, "reference": "${inputs.pinecone_index_connection}", "type":
["string"]}, {"name": "pinecone_index_name", "optional": true, "reference":
"${inputs.pinecone_index_name}", "type": ["string"]}, {"name": "pinecone_content_field",
"optional": true, "reference": "${inputs.pinecone_content_field}", "type":
["string"]}, {"name": "pinecone_metadata_field", "optional": true, "reference":
"${inputs.pinecone_metadata_field}", "type": ["string"]}, {"name": "embedding_type",
"optional": true, "reference": "${inputs.embedding_type}", "type": ["string"]},
{"name": "aoai_embedding_connection", "optional": true, "reference": "${inputs.aoai_embedding_connection}",
"type": ["AzureOpenAIConnection"]}, {"name": "oai_embedding_connection", "optional":
true, "reference": "${inputs.oai_embedding_connection}", "type": ["string"]},
{"name": "embedding_model", "optional": true, "reference": "${inputs.embedding_model}",
"type": ["string"]}, {"name": "embedding_deployment", "optional": true, "reference":
"${inputs.embedding_deployment}", "type": ["string"]}], "reverse_func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.reverse_mapping"}, "input_type":
"default"}, "mlindex_path": {"type": ["string"], "enabled_by": "index_type",
"enabled_by_value": ["MLIndex file from path"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "oai_embedding_connection":
{"type": ["OpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "pinecone_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Pinecone"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_index_connection":
{"type": ["PineconeConnection"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_connections"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"pinecone_index_name": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_indices",
"func_kwargs": [{"name": "pinecone_connection_name", "optional": false, "reference":
"${inputs.pinecone_index_connection}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_metadata_field":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Pinecone"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"queries": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query_type": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_query_types",
"func_kwargs": [{"name": "mlindex_content", "optional": false, "reference":
"${inputs.mlindex_content}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "semantic_configuration":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_semantic_configurations",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "top_k": {"type": ["int"], "default": 3, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search an AzureML Vector Index for relevant results using one or more text
queries.", "module": "promptflow_vectordb.tool.common_index_lookup", "function":
"search", "is_builtin": true, "package": "promptflow-vectordb", "package_version":
"0.0.1", "enable_kwargs": false, "tool_state": "preview"}, {"name": "Faiss
Index Lookup", "type": "python", "inputs": {"path": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "vector": {"type": ["list"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search vector based query from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "print_env.py", "type": "python",
"inputs": {"key": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "print_env.py", "function": "get_env_var",
"is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs":
{"key": {"type": "string", "is_chat_input": false}}, "outputs": {"output":
{"type": "string", "reference": "${print_env.output.value}", "evaluation_only":
false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name",
"flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri":
"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic",
"inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath":
"promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath":
"flow.dag.yaml", "flowSnapshotId": "031ac559-2433-4b4b-87e5-1aa6a1170995",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '26109'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.326'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "print_env", "type": "python", "source":
{"type": "code", "path": "print_env.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "print_env.py", "reduce": false}], "tools": [{"name": "Azure OpenAI
GPT-4 Turbo with Vision", "type": "custom_llm", "inputs": {"connection": {"type":
["AzureOpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "frequency_penalty":
{"type": ["double"], "default": 0, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "max_tokens": {"type": ["int"], "default":
512, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Azure OpenAI GPT-4 Turbo
with Vision to leverage AOAI vision ability.", "module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Content Safety (Text Analyze)",
"type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"1.0.3", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": true, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "1.0.3", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Model LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "default": "", "dynamic_list": {"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [{"name": "endpoint", "optional": true, "reference": "${inputs.endpoint}",
"type": ["string"]}]}, "allow_manual_entry": true, "is_multi_select": false,
"input_type": "default"}, "endpoint_name": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"}, "allow_manual_entry":
true, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an open model from the Azure Model catalog, deployed to
an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_model_llm", "class_name": "OpenModelLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": 512, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Serp API", "type": "python", "inputs":
{"connection": {"type": ["SerpConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "engine": {"type": ["string"], "default":
"google", "enum": ["google", "bing"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "location": {"type": ["string"], "default":
"", "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"num": {"type": ["int"], "default": "10", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "safe": {"type":
["string"], "default": "off", "enum": ["active", "off"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Use Serp API to obtain search results from a specific search engine.", "module":
"promptflow.tools.serpapi", "class_name": "SerpAPI", "function": "search",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Index Lookup",
"type": "python", "inputs": {"acs_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_embedding_field": {"type": ["string"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Collection(Edm.Single)", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_index_connection": {"type": ["CognitiveSearchConnection"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "acs_index_name":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_indices",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"acs_metadata_field": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Azure AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "aoai_embedding_connection": {"type":
["AzureOpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["Azure OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "embedding_deployment": {"type": ["string"], "enabled_by":
"embedding_type", "enabled_by_value": ["Azure OpenAI"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_aoai_embedding_deployments",
"func_kwargs": [{"name": "aoai_connection", "optional": false, "reference":
"${inputs.aoai_embedding_connection}", "type": ["AzurOpenAIConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"embedding_model": {"type": ["string"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI", "Hugging Face"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_embedding_models",
"func_kwargs": [{"name": "embedding_type", "optional": false, "reference":
"${inputs.embedding_type}", "type": ["string"]}]}, "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "embedding_type":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search", "FAISS", "Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_embedding_types",
"func_kwargs": [{"name": "index_type", "optional": false, "reference": "${inputs.index_type}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "faiss_index_path": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["FAISS"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "index_type": {"type":
["string"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_index_types"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_asset_id": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Registered Index"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_registered_mlindices"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_content": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "generated_by": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.forward_mapping",
"func_kwargs": [{"name": "index_type", "reference": "${inputs.index_type}",
"type": ["string"]}, {"name": "mlindex_asset_id", "optional": true, "reference":
"${inputs.mlindex_asset_id}", "type": ["string"]}, {"name": "mlindex_path",
"optional": true, "reference": "${inputs.mlindex_path}", "type": ["string"]},
{"name": "acs_index_connection", "optional": true, "reference": "${inputs.acs_index_connection}",
"type": ["CognitiveSearchConnection"]}, {"name": "acs_index_name", "optional":
true, "reference": "${inputs.acs_index_name}", "type": ["string"]}, {"name":
"acs_content_field", "optional": true, "reference": "${inputs.acs_content_field}",
"type": ["string"]}, {"name": "acs_embedding_field", "optional": true, "reference":
"${inputs.acs_embedding_field}", "type": ["string"]}, {"name": "acs_metadata_field",
"optional": true, "reference": "${inputs.acs_metadata_field}", "type": ["string"]},
{"name": "semantic_configuration", "optional": true, "reference": "${inputs.semantic_configuration}",
"type": ["string"]}, {"name": "faiss_index_path", "optional": true, "reference":
"${inputs.faiss_index_path}", "type": ["string"]}, {"name": "pinecone_index_connection",
"optional": true, "reference": "${inputs.pinecone_index_connection}", "type":
["string"]}, {"name": "pinecone_index_name", "optional": true, "reference":
"${inputs.pinecone_index_name}", "type": ["string"]}, {"name": "pinecone_content_field",
"optional": true, "reference": "${inputs.pinecone_content_field}", "type":
["string"]}, {"name": "pinecone_metadata_field", "optional": true, "reference":
"${inputs.pinecone_metadata_field}", "type": ["string"]}, {"name": "embedding_type",
"optional": true, "reference": "${inputs.embedding_type}", "type": ["string"]},
{"name": "aoai_embedding_connection", "optional": true, "reference": "${inputs.aoai_embedding_connection}",
"type": ["AzureOpenAIConnection"]}, {"name": "oai_embedding_connection", "optional":
true, "reference": "${inputs.oai_embedding_connection}", "type": ["string"]},
{"name": "embedding_model", "optional": true, "reference": "${inputs.embedding_model}",
"type": ["string"]}, {"name": "embedding_deployment", "optional": true, "reference":
"${inputs.embedding_deployment}", "type": ["string"]}], "reverse_func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.reverse_mapping"}, "input_type":
"default"}, "mlindex_path": {"type": ["string"], "enabled_by": "index_type",
"enabled_by_value": ["MLIndex file from path"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "oai_embedding_connection":
{"type": ["OpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "pinecone_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Pinecone"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_index_connection":
{"type": ["PineconeConnection"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_connections"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"pinecone_index_name": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_indices",
"func_kwargs": [{"name": "pinecone_connection_name", "optional": false, "reference":
"${inputs.pinecone_index_connection}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_metadata_field":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Pinecone"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"queries": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query_type": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_query_types",
"func_kwargs": [{"name": "mlindex_content", "optional": false, "reference":
"${inputs.mlindex_content}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "semantic_configuration":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_semantic_configurations",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "top_k": {"type": ["int"], "default": 3, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search an AzureML Vector Index for relevant results using one or more text
queries.", "module": "promptflow_vectordb.tool.common_index_lookup", "function":
"search", "is_builtin": true, "package": "promptflow-vectordb", "package_version":
"0.0.1", "enable_kwargs": false, "tool_state": "preview"}, {"name": "Faiss
Index Lookup", "type": "python", "inputs": {"path": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "vector": {"type": ["list"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search vector based query from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "print_env.py", "type": "python",
"inputs": {"key": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "print_env.py", "function": "get_env_var",
"is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs":
{"key": {"type": "string", "is_chat_input": false}}, "outputs": {"output":
{"type": "string", "reference": "${print_env.output.value}", "evaluation_only":
false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name",
"flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri":
"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic",
"inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath":
"promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath":
"flow.dag.yaml", "flowSnapshotId": "031ac559-2433-4b4b-87e5-1aa6a1170995",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '26109'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.292'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "print_env", "type": "python", "source":
{"type": "code", "path": "print_env.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "print_env.py", "reduce": false}], "tools": [{"name": "Azure OpenAI
GPT-4 Turbo with Vision", "type": "custom_llm", "inputs": {"connection": {"type":
["AzureOpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "frequency_penalty":
{"type": ["double"], "default": 0, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "max_tokens": {"type": ["int"], "default":
512, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Azure OpenAI GPT-4 Turbo
with Vision to leverage AOAI vision ability.", "module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Content Safety (Text Analyze)",
"type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"1.0.3", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": true, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "1.0.3", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Model LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "default": "", "dynamic_list": {"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [{"name": "endpoint", "optional": true, "reference": "${inputs.endpoint}",
"type": ["string"]}]}, "allow_manual_entry": true, "is_multi_select": false,
"input_type": "default"}, "endpoint_name": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"}, "allow_manual_entry":
true, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an open model from the Azure Model catalog, deployed to
an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_model_llm", "class_name": "OpenModelLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": 512, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Serp API", "type": "python", "inputs":
{"connection": {"type": ["SerpConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "engine": {"type": ["string"], "default":
"google", "enum": ["google", "bing"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "location": {"type": ["string"], "default":
"", "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"num": {"type": ["int"], "default": "10", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "safe": {"type":
["string"], "default": "off", "enum": ["active", "off"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Use Serp API to obtain search results from a specific search engine.", "module":
"promptflow.tools.serpapi", "class_name": "SerpAPI", "function": "search",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Index Lookup",
"type": "python", "inputs": {"acs_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_embedding_field": {"type": ["string"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Collection(Edm.Single)", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_index_connection": {"type": ["CognitiveSearchConnection"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "acs_index_name":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_indices",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"acs_metadata_field": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Azure AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "aoai_embedding_connection": {"type":
["AzureOpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["Azure OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "embedding_deployment": {"type": ["string"], "enabled_by":
"embedding_type", "enabled_by_value": ["Azure OpenAI"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_aoai_embedding_deployments",
"func_kwargs": [{"name": "aoai_connection", "optional": false, "reference":
"${inputs.aoai_embedding_connection}", "type": ["AzurOpenAIConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"embedding_model": {"type": ["string"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI", "Hugging Face"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_embedding_models",
"func_kwargs": [{"name": "embedding_type", "optional": false, "reference":
"${inputs.embedding_type}", "type": ["string"]}]}, "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "embedding_type":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search", "FAISS", "Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_embedding_types",
"func_kwargs": [{"name": "index_type", "optional": false, "reference": "${inputs.index_type}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "faiss_index_path": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["FAISS"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "index_type": {"type":
["string"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_index_types"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_asset_id": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Registered Index"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_registered_mlindices"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_content": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "generated_by": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.forward_mapping",
"func_kwargs": [{"name": "index_type", "reference": "${inputs.index_type}",
"type": ["string"]}, {"name": "mlindex_asset_id", "optional": true, "reference":
"${inputs.mlindex_asset_id}", "type": ["string"]}, {"name": "mlindex_path",
"optional": true, "reference": "${inputs.mlindex_path}", "type": ["string"]},
{"name": "acs_index_connection", "optional": true, "reference": "${inputs.acs_index_connection}",
"type": ["CognitiveSearchConnection"]}, {"name": "acs_index_name", "optional":
true, "reference": "${inputs.acs_index_name}", "type": ["string"]}, {"name":
"acs_content_field", "optional": true, "reference": "${inputs.acs_content_field}",
"type": ["string"]}, {"name": "acs_embedding_field", "optional": true, "reference":
"${inputs.acs_embedding_field}", "type": ["string"]}, {"name": "acs_metadata_field",
"optional": true, "reference": "${inputs.acs_metadata_field}", "type": ["string"]},
{"name": "semantic_configuration", "optional": true, "reference": "${inputs.semantic_configuration}",
"type": ["string"]}, {"name": "faiss_index_path", "optional": true, "reference":
"${inputs.faiss_index_path}", "type": ["string"]}, {"name": "pinecone_index_connection",
"optional": true, "reference": "${inputs.pinecone_index_connection}", "type":
["string"]}, {"name": "pinecone_index_name", "optional": true, "reference":
"${inputs.pinecone_index_name}", "type": ["string"]}, {"name": "pinecone_content_field",
"optional": true, "reference": "${inputs.pinecone_content_field}", "type":
["string"]}, {"name": "pinecone_metadata_field", "optional": true, "reference":
"${inputs.pinecone_metadata_field}", "type": ["string"]}, {"name": "embedding_type",
"optional": true, "reference": "${inputs.embedding_type}", "type": ["string"]},
{"name": "aoai_embedding_connection", "optional": true, "reference": "${inputs.aoai_embedding_connection}",
"type": ["AzureOpenAIConnection"]}, {"name": "oai_embedding_connection", "optional":
true, "reference": "${inputs.oai_embedding_connection}", "type": ["string"]},
{"name": "embedding_model", "optional": true, "reference": "${inputs.embedding_model}",
"type": ["string"]}, {"name": "embedding_deployment", "optional": true, "reference":
"${inputs.embedding_deployment}", "type": ["string"]}], "reverse_func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.reverse_mapping"}, "input_type":
"default"}, "mlindex_path": {"type": ["string"], "enabled_by": "index_type",
"enabled_by_value": ["MLIndex file from path"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "oai_embedding_connection":
{"type": ["OpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "pinecone_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Pinecone"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_index_connection":
{"type": ["PineconeConnection"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_connections"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"pinecone_index_name": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_indices",
"func_kwargs": [{"name": "pinecone_connection_name", "optional": false, "reference":
"${inputs.pinecone_index_connection}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_metadata_field":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Pinecone"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"queries": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query_type": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_query_types",
"func_kwargs": [{"name": "mlindex_content", "optional": false, "reference":
"${inputs.mlindex_content}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "semantic_configuration":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_semantic_configurations",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "top_k": {"type": ["int"], "default": 3, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search an AzureML Vector Index for relevant results using one or more text
queries.", "module": "promptflow_vectordb.tool.common_index_lookup", "function":
"search", "is_builtin": true, "package": "promptflow-vectordb", "package_version":
"0.0.1", "enable_kwargs": false, "tool_state": "preview"}, {"name": "Faiss
Index Lookup", "type": "python", "inputs": {"path": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "vector": {"type": ["list"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search vector based query from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "print_env.py", "type": "python",
"inputs": {"key": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "print_env.py", "function": "get_env_var",
"is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs":
{"key": {"type": "string", "is_chat_input": false}}, "outputs": {"output":
{"type": "string", "reference": "${print_env.output.value}", "evaluation_only":
false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name",
"flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri":
"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic",
"inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath":
"promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath":
"flow.dag.yaml", "flowSnapshotId": "031ac559-2433-4b4b-87e5-1aa6a1170995",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '26109'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.372'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "print_env", "type": "python", "source":
{"type": "code", "path": "print_env.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "print_env.py", "reduce": false}], "tools": [{"name": "Azure OpenAI
GPT-4 Turbo with Vision", "type": "custom_llm", "inputs": {"connection": {"type":
["AzureOpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "frequency_penalty":
{"type": ["double"], "default": 0, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "max_tokens": {"type": ["int"], "default":
512, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Azure OpenAI GPT-4 Turbo
with Vision to leverage AOAI vision ability.", "module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Content Safety (Text Analyze)",
"type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"1.0.3", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": true, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "1.0.3", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Model LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "default": "", "dynamic_list": {"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [{"name": "endpoint", "optional": true, "reference": "${inputs.endpoint}",
"type": ["string"]}]}, "allow_manual_entry": true, "is_multi_select": false,
"input_type": "default"}, "endpoint_name": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"}, "allow_manual_entry":
true, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an open model from the Azure Model catalog, deployed to
an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_model_llm", "class_name": "OpenModelLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": 512, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Serp API", "type": "python", "inputs":
{"connection": {"type": ["SerpConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "engine": {"type": ["string"], "default":
"google", "enum": ["google", "bing"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "location": {"type": ["string"], "default":
"", "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"num": {"type": ["int"], "default": "10", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "safe": {"type":
["string"], "default": "off", "enum": ["active", "off"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Use Serp API to obtain search results from a specific search engine.", "module":
"promptflow.tools.serpapi", "class_name": "SerpAPI", "function": "search",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Index Lookup",
"type": "python", "inputs": {"acs_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_embedding_field": {"type": ["string"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Collection(Edm.Single)", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_index_connection": {"type": ["CognitiveSearchConnection"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "acs_index_name":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_indices",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"acs_metadata_field": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Azure AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "aoai_embedding_connection": {"type":
["AzureOpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["Azure OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "embedding_deployment": {"type": ["string"], "enabled_by":
"embedding_type", "enabled_by_value": ["Azure OpenAI"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_aoai_embedding_deployments",
"func_kwargs": [{"name": "aoai_connection", "optional": false, "reference":
"${inputs.aoai_embedding_connection}", "type": ["AzurOpenAIConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"embedding_model": {"type": ["string"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI", "Hugging Face"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_embedding_models",
"func_kwargs": [{"name": "embedding_type", "optional": false, "reference":
"${inputs.embedding_type}", "type": ["string"]}]}, "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "embedding_type":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search", "FAISS", "Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_embedding_types",
"func_kwargs": [{"name": "index_type", "optional": false, "reference": "${inputs.index_type}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "faiss_index_path": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["FAISS"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "index_type": {"type":
["string"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_index_types"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_asset_id": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Registered Index"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_registered_mlindices"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_content": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "generated_by": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.forward_mapping",
"func_kwargs": [{"name": "index_type", "reference": "${inputs.index_type}",
"type": ["string"]}, {"name": "mlindex_asset_id", "optional": true, "reference":
"${inputs.mlindex_asset_id}", "type": ["string"]}, {"name": "mlindex_path",
"optional": true, "reference": "${inputs.mlindex_path}", "type": ["string"]},
{"name": "acs_index_connection", "optional": true, "reference": "${inputs.acs_index_connection}",
"type": ["CognitiveSearchConnection"]}, {"name": "acs_index_name", "optional":
true, "reference": "${inputs.acs_index_name}", "type": ["string"]}, {"name":
"acs_content_field", "optional": true, "reference": "${inputs.acs_content_field}",
"type": ["string"]}, {"name": "acs_embedding_field", "optional": true, "reference":
"${inputs.acs_embedding_field}", "type": ["string"]}, {"name": "acs_metadata_field",
"optional": true, "reference": "${inputs.acs_metadata_field}", "type": ["string"]},
{"name": "semantic_configuration", "optional": true, "reference": "${inputs.semantic_configuration}",
"type": ["string"]}, {"name": "faiss_index_path", "optional": true, "reference":
"${inputs.faiss_index_path}", "type": ["string"]}, {"name": "pinecone_index_connection",
"optional": true, "reference": "${inputs.pinecone_index_connection}", "type":
["string"]}, {"name": "pinecone_index_name", "optional": true, "reference":
"${inputs.pinecone_index_name}", "type": ["string"]}, {"name": "pinecone_content_field",
"optional": true, "reference": "${inputs.pinecone_content_field}", "type":
["string"]}, {"name": "pinecone_metadata_field", "optional": true, "reference":
"${inputs.pinecone_metadata_field}", "type": ["string"]}, {"name": "embedding_type",
"optional": true, "reference": "${inputs.embedding_type}", "type": ["string"]},
{"name": "aoai_embedding_connection", "optional": true, "reference": "${inputs.aoai_embedding_connection}",
"type": ["AzureOpenAIConnection"]}, {"name": "oai_embedding_connection", "optional":
true, "reference": "${inputs.oai_embedding_connection}", "type": ["string"]},
{"name": "embedding_model", "optional": true, "reference": "${inputs.embedding_model}",
"type": ["string"]}, {"name": "embedding_deployment", "optional": true, "reference":
"${inputs.embedding_deployment}", "type": ["string"]}], "reverse_func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.reverse_mapping"}, "input_type":
"default"}, "mlindex_path": {"type": ["string"], "enabled_by": "index_type",
"enabled_by_value": ["MLIndex file from path"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "oai_embedding_connection":
{"type": ["OpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "pinecone_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Pinecone"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_index_connection":
{"type": ["PineconeConnection"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_connections"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"pinecone_index_name": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_indices",
"func_kwargs": [{"name": "pinecone_connection_name", "optional": false, "reference":
"${inputs.pinecone_index_connection}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_metadata_field":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Pinecone"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"queries": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query_type": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_query_types",
"func_kwargs": [{"name": "mlindex_content", "optional": false, "reference":
"${inputs.mlindex_content}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "semantic_configuration":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_semantic_configurations",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "top_k": {"type": ["int"], "default": 3, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search an AzureML Vector Index for relevant results using one or more text
queries.", "module": "promptflow_vectordb.tool.common_index_lookup", "function":
"search", "is_builtin": true, "package": "promptflow-vectordb", "package_version":
"0.0.1", "enable_kwargs": false, "tool_state": "preview"}, {"name": "Faiss
Index Lookup", "type": "python", "inputs": {"path": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "vector": {"type": ["list"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search vector based query from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "print_env.py", "type": "python",
"inputs": {"key": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "print_env.py", "function": "get_env_var",
"is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs":
{"key": {"type": "string", "is_chat_input": false}}, "outputs": {"output":
{"type": "string", "reference": "${print_env.output.value}", "evaluation_only":
false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name",
"flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri":
"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic",
"inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath":
"promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath":
"flow.dag.yaml", "flowSnapshotId": "031ac559-2433-4b4b-87e5-1aa6a1170995",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '26109'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.431'
status:
code: 200
message: OK
- request:
body: '{}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
User-Agent:
- python-requests/2.31.0
method: POST
uri: https://eastus.api.azureml.ms/metric/v2.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/runs/name/lastvalues
response:
body:
string: '{"value": [{"dataContainerId": "dcid.name", "name": "__pf__.nodes.print_env.completed",
"columns": {"__pf__.nodes.print_env.completed": "Double"}, "properties": {"uxMetricType":
"azureml.v1.scalar", "dataLocation": null}, "namespace": null, "standardSchemaId":
null, "value": [{"metricId": "ba353a41-14f2-400c-925f-408afca84892", "createdUtc":
"2024-01-17T10:13:01.659+00:00", "step": 0, "data": {"__pf__.nodes.print_env.completed":
1.0}}]}, {"dataContainerId": "dcid.name", "name": "__pf__.lines.completed",
"columns": {"__pf__.lines.completed": "Double"}, "properties": {"uxMetricType":
"azureml.v1.scalar", "dataLocation": null}, "namespace": null, "standardSchemaId":
null, "value": [{"metricId": "1d8dcff4-eeb6-46ce-a9c9-2c8a44f3f6f3", "createdUtc":
"2024-01-17T10:13:02.06+00:00", "step": 0, "data": {"__pf__.lines.completed":
1.0}}]}, {"dataContainerId": "dcid.name", "name": "__pf__.lines.failed", "columns":
{"__pf__.lines.failed": "Double"}, "properties": {"uxMetricType": "azureml.v1.scalar",
"dataLocation": null}, "namespace": null, "standardSchemaId": null, "value":
[{"metricId": "cb5521af-802c-4ade-bc26-d5f50a99dc38", "createdUtc": "2024-01-17T10:13:02.485+00:00",
"step": 0, "data": {"__pf__.lines.failed": 0.0}}]}]}'
headers:
connection:
- keep-alive
content-length:
- '1884'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.083'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "print_env", "type": "python", "source":
{"type": "code", "path": "print_env.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "print_env.py", "reduce": false}], "tools": [{"name": "Azure OpenAI
GPT-4 Turbo with Vision", "type": "custom_llm", "inputs": {"connection": {"type":
["AzureOpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "frequency_penalty":
{"type": ["double"], "default": 0, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "max_tokens": {"type": ["int"], "default":
512, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Azure OpenAI GPT-4 Turbo
with Vision to leverage AOAI vision ability.", "module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Content Safety (Text Analyze)",
"type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"1.0.3", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": true, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "1.0.3", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Model LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "default": "", "dynamic_list": {"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [{"name": "endpoint", "optional": true, "reference": "${inputs.endpoint}",
"type": ["string"]}]}, "allow_manual_entry": true, "is_multi_select": false,
"input_type": "default"}, "endpoint_name": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"}, "allow_manual_entry":
true, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an open model from the Azure Model catalog, deployed to
an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_model_llm", "class_name": "OpenModelLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": 512, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "icon": {"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC",
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg=="},
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting
images and responding to questions about the image.\nRemember to provide accurate
answers based on the information present in the image.\n\n# user:\nCan you
tell me what the image depicts?\n\n", "enable_kwargs":
false, "tool_state": "preview"}, {"name": "Serp API", "type": "python", "inputs":
{"connection": {"type": ["SerpConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "engine": {"type": ["string"], "default":
"google", "enum": ["google", "bing"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "location": {"type": ["string"], "default":
"", "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"num": {"type": ["int"], "default": "10", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "safe": {"type":
["string"], "default": "off", "enum": ["active", "off"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Use Serp API to obtain search results from a specific search engine.", "module":
"promptflow.tools.serpapi", "class_name": "SerpAPI", "function": "search",
"is_builtin": true, "package": "promptflow-tools", "package_version": "1.0.3",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Index Lookup",
"type": "python", "inputs": {"acs_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_embedding_field": {"type": ["string"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Collection(Edm.Single)", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "acs_index_connection": {"type": ["CognitiveSearchConnection"],
"enabled_by": "index_type", "enabled_by_value": ["Azure AI Search"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "acs_index_name":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_indices",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"acs_metadata_field": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Azure AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_fields",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}, {"default": "Edm.String", "name": "field_data_type",
"optional": false, "type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "uionly_hidden"}, "aoai_embedding_connection": {"type":
["AzureOpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["Azure OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "embedding_deployment": {"type": ["string"], "enabled_by":
"embedding_type", "enabled_by_value": ["Azure OpenAI"], "dynamic_list": {"func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.list_aoai_embedding_deployments",
"func_kwargs": [{"name": "aoai_connection", "optional": false, "reference":
"${inputs.aoai_embedding_connection}", "type": ["AzureOpenAIConnection"]}]},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"embedding_model": {"type": ["string"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI", "Hugging Face"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_embedding_models",
"func_kwargs": [{"name": "embedding_type", "optional": false, "reference":
"${inputs.embedding_type}", "type": ["string"]}]}, "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "embedding_type":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search", "FAISS", "Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_embedding_types",
"func_kwargs": [{"name": "index_type", "optional": false, "reference": "${inputs.index_type}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "faiss_index_path": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["FAISS"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "index_type": {"type":
["string"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_index_types"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_asset_id": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Registered Index"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_registered_mlindices"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"mlindex_content": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "generated_by": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.forward_mapping",
"func_kwargs": [{"name": "index_type", "reference": "${inputs.index_type}",
"type": ["string"]}, {"name": "mlindex_asset_id", "optional": true, "reference":
"${inputs.mlindex_asset_id}", "type": ["string"]}, {"name": "mlindex_path",
"optional": true, "reference": "${inputs.mlindex_path}", "type": ["string"]},
{"name": "acs_index_connection", "optional": true, "reference": "${inputs.acs_index_connection}",
"type": ["CognitiveSearchConnection"]}, {"name": "acs_index_name", "optional":
true, "reference": "${inputs.acs_index_name}", "type": ["string"]}, {"name":
"acs_content_field", "optional": true, "reference": "${inputs.acs_content_field}",
"type": ["string"]}, {"name": "acs_embedding_field", "optional": true, "reference":
"${inputs.acs_embedding_field}", "type": ["string"]}, {"name": "acs_metadata_field",
"optional": true, "reference": "${inputs.acs_metadata_field}", "type": ["string"]},
{"name": "semantic_configuration", "optional": true, "reference": "${inputs.semantic_configuration}",
"type": ["string"]}, {"name": "faiss_index_path", "optional": true, "reference":
"${inputs.faiss_index_path}", "type": ["string"]}, {"name": "pinecone_index_connection",
"optional": true, "reference": "${inputs.pinecone_index_connection}", "type":
["string"]}, {"name": "pinecone_index_name", "optional": true, "reference":
"${inputs.pinecone_index_name}", "type": ["string"]}, {"name": "pinecone_content_field",
"optional": true, "reference": "${inputs.pinecone_content_field}", "type":
["string"]}, {"name": "pinecone_metadata_field", "optional": true, "reference":
"${inputs.pinecone_metadata_field}", "type": ["string"]}, {"name": "embedding_type",
"optional": true, "reference": "${inputs.embedding_type}", "type": ["string"]},
{"name": "aoai_embedding_connection", "optional": true, "reference": "${inputs.aoai_embedding_connection}",
"type": ["AzureOpenAIConnection"]}, {"name": "oai_embedding_connection", "optional":
true, "reference": "${inputs.oai_embedding_connection}", "type": ["string"]},
{"name": "embedding_model", "optional": true, "reference": "${inputs.embedding_model}",
"type": ["string"]}, {"name": "embedding_deployment", "optional": true, "reference":
"${inputs.embedding_deployment}", "type": ["string"]}], "reverse_func_path":
"promptflow_vectordb.tool.common_index_lookup_utils.reverse_mapping"}, "input_type":
"default"}, "mlindex_path": {"type": ["string"], "enabled_by": "index_type",
"enabled_by_value": ["MLIndex file from path"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "oai_embedding_connection":
{"type": ["OpenAIConnection"], "enabled_by": "embedding_type", "enabled_by_value":
["OpenAI"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"uionly_hidden"}, "pinecone_content_field": {"type": ["string"], "enabled_by":
"index_type", "enabled_by_value": ["Pinecone"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_index_connection":
{"type": ["PineconeConnection"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_connections"},
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"pinecone_index_name": {"type": ["string"], "enabled_by": "index_type", "enabled_by_value":
["Pinecone"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_pinecone_indices",
"func_kwargs": [{"name": "pinecone_connection_name", "optional": false, "reference":
"${inputs.pinecone_index_connection}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "uionly_hidden"}, "pinecone_metadata_field":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Pinecone"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "uionly_hidden"},
"queries": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query_type": {"type": ["string"], "dynamic_list":
{"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_available_query_types",
"func_kwargs": [{"name": "mlindex_content", "optional": false, "reference":
"${inputs.mlindex_content}", "type": ["string"]}]}, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "semantic_configuration":
{"type": ["string"], "enabled_by": "index_type", "enabled_by_value": ["Azure
AI Search"], "dynamic_list": {"func_path": "promptflow_vectordb.tool.common_index_lookup_utils.list_acs_index_semantic_configurations",
"func_kwargs": [{"name": "acs_connection", "optional": false, "reference":
"${inputs.acs_index_connection}", "type": ["CognitiveSearchConnection"]},
{"name": "acs_index_name", "optional": false, "reference": "${inputs.acs_index_name}",
"type": ["string"]}]}, "allow_manual_entry": false, "is_multi_select": false,
"input_type": "uionly_hidden"}, "top_k": {"type": ["int"], "default": 3, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search an AzureML Vector Index for relevant results using one or more text
queries.", "module": "promptflow_vectordb.tool.common_index_lookup", "function":
"search", "is_builtin": true, "package": "promptflow-vectordb", "package_version":
"0.0.1", "enable_kwargs": false, "tool_state": "preview"}, {"name": "Faiss
Index Lookup", "type": "python", "inputs": {"path": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "vector": {"type": ["list"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "description":
"Search vector based query from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "print_env.py", "type": "python",
"inputs": {"key": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "print_env.py", "function": "get_env_var",
"is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs":
{"key": {"type": "string", "is_chat_input": false}}, "outputs": {"output":
{"type": "string", "reference": "${print_env.output.value}", "evaluation_only":
false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name",
"flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri":
"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic",
"inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath":
"promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath":
"flow.dag.yaml", "flowSnapshotId": "031ac559-2433-4b4b-87e5-1aa6a1170995",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '26109'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '1.264'
status:
code: 200
message: OK
- request:
body: '{"snapshotOrAssetId": "031ac559-2433-4b4b-87e5-1aa6a1170995"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '61'
content-type:
- application/json
host:
- eastus.api.azureml.ms
user-agent:
- python-httpx/0.26.0
method: POST
uri: https://eastus.api.azureml.ms/content/v2.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/snapshots/sas
response:
content: '{"name": "", "hash": null, "type": "Directory", "timestamp": "0001-01-01T00:00:00+00:00",
"sasUrl": null, "absoluteUrl": null, "sizeBytes": 0, "sizeSet": false, "children":
{"flow.dag.yaml": {"name": "flow.dag.yaml", "hash": "15FBB0521E1DDAA24005B5F08F800230",
"type": "File", "timestamp": "0001-01-01T00:00:00+00:00", "sasUrl": "https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/runs/name/flow.dag.yaml?sv=2019-07-07&sr=b&sig=BeYO6oo0UqDWrAXZGM2%2FIwFvDwUdIIyxnz58IkiKJUg%3D&st=2024-01-17T10%3A04%3A51Z&se=2024-01-17T18%3A14%3A51Z&sp=r&rscd=filename%3Dflow.dag.yaml",
"absoluteUrl": "https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/runs/name/flow.dag.yaml",
"sizeBytes": 304, "sizeSet": true, "children": {}}, "print_env.py": {"name":
"print_env.py", "hash": "C7B4B1B83D64548A97CDF74DDD9A532D", "type": "File",
"timestamp": "0001-01-01T00:00:00+00:00", "sasUrl": "https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/runs/name/print_env.py?sv=2019-07-07&sr=b&sig=peNM47UGTwNkNWRpuXZuuoszho952f%2BwoUEHOWufjQQ%3D&st=2024-01-17T10%3A04%3A51Z&se=2024-01-17T18%3A14%3A51Z&sp=r&rscd=filename%3Dprint_env.py",
"absoluteUrl": "https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/runs/name/print_env.py",
"sizeBytes": 245, "sizeSet": true, "children": {}}, "requirements.txt": {"name":
"requirements.txt", "hash": "DF7228B3D730379A4384E45275C91AF7", "type": "File",
"timestamp": "0001-01-01T00:00:00+00:00", "sasUrl": "https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/runs/name/requirements.txt?sv=2019-07-07&sr=b&sig=0ZN0aeWFILKkX4e308DgTEi7ud0qRAYOLTFnp%2Fogy7c%3D&st=2024-01-17T10%3A04%3A51Z&se=2024-01-17T18%3A14%3A51Z&sp=r&rscd=filename%3Drequirements.txt",
"absoluteUrl": "https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/runs/name/requirements.txt",
"sizeBytes": 11, "sizeSet": true, "children": {}}}}'
headers:
connection:
- keep-alive
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.040'
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"value": "azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '171'
content-type:
- application/json
host:
- eastus.api.azureml.ms
user-agent:
- python-httpx/0.26.0
method: POST
uri: https://eastus.api.azureml.ms/data/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/dataversion/getByAssetId
response:
content: '{"dataVersion": {"assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1",
"dataContainerName": "azureml_name_output_data_debug_info", "dataType": "UriFolder",
"dataUri": "azureml://subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/workspaces/00000/datastores/workspaceblobstore/paths/promptflow/PromptFlowArtifacts/name/",
"versionId": "1", "mutableProps": {"dataExpiryTime": null, "description": null,
"tags": null, "isArchived": false, "stage": "Logged", "autoDeleteSetting": null},
"referencedDataUris": null, "properties": null, "initialAssetId": "azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1",
"isRegistered": false, "runId": "name", "originAssetId": null}, "entityMetadata":
{"etag": "\"04009d91-0000-0100-0000-65a7a8300000\"", "createdTime": "2024-01-17T10:13:04.0261855+00:00",
"modifiedTime": "2024-01-17T10:13:04.060129+00:00", "createdBy": {"userObjectId":
"00000000-0000-0000-0000-000000000000", "userPuId": "100320005227D154", "userIdp":
null, "userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
"userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "Han Wang",
"upn": "[email protected]"}, "modifiedBy": null}, "legacyDatasetId": "40952a92-935d-431b-8f84-6c4f91c6aa94",
"isV2": true, "legacyDatasetType": null, "legacyDataflowType": null, "legacyDataflow":
null, "legacySavedDatasetId": null, "putAssetLROResponseDto": null}'
headers:
connection:
- keep-alive
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.050'
http_version: HTTP/1.1
status_code: 200
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:51 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/runs/name/flow.dag.yaml
response:
body:
string: "inputs:\r\n key:\r\n type: string\r\noutputs:\r\n output:\r\n
\ type: string\r\n reference: ${print_env.output.value}\r\nnodes:\r\n-
name: print_env\r\n type: python\r\n source:\r\n type: code\r\n path:
print_env.py\r\n inputs:\r\n key: ${inputs.key}\r\nenvironment:\r\n python_requirements_txt:
requirements.txt\r\n"
headers:
accept-ranges:
- bytes
content-length:
- '304'
content-range:
- bytes 0-303/304
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:12:48 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-content-md5:
- FfuwUh4d2qJABbXwj4ACMA==
x-ms-blob-type:
- BlockBlob
x-ms-copy-completion-time:
- Wed, 17 Jan 2024 10:12:48 GMT
x-ms-copy-id:
- 5dd2f5cc-2133-47cf-813a-5917d60bc14c
x-ms-copy-progress:
- 304/304
x-ms-copy-source:
- https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/LocalUpload/0200d5934081bd47ddf030b9398d0cd0/flow_with_requirements_txt/flow.dag.yaml
x-ms-copy-status:
- success
x-ms-creation-time:
- Wed, 17 Jan 2024 10:12:48 GMT
x-ms-meta-name:
- 170072d0-60d9-4d21-8c3d-5ac5f8434bb9
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:55 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/runs/name/print_env.py
response:
body:
string: "import os\r\n\r\nfrom promptflow import tool\r\n\r\n\r\n@tool\r\ndef
get_env_var(key: str):\r\n from langchain import __version__\r\n\r\n print(__version__)\r\n
\ print(os.environ.get(key))\r\n # get from env var\r\n return {\"value\":
os.environ.get(key)}\r\n"
headers:
accept-ranges:
- bytes
content-length:
- '245'
content-range:
- bytes 0-244/245
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:12:48 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-content-md5:
- x7SxuD1kVIqXzfdN3ZpTLQ==
x-ms-blob-type:
- BlockBlob
x-ms-copy-completion-time:
- Wed, 17 Jan 2024 10:12:48 GMT
x-ms-copy-id:
- db97b5ff-a0c6-4b5b-b8da-616a9e805643
x-ms-copy-progress:
- 245/245
x-ms-copy-source:
- https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/LocalUpload/0200d5934081bd47ddf030b9398d0cd0/flow_with_requirements_txt/print_env.py
x-ms-copy-status:
- success
x-ms-creation-time:
- Wed, 17 Jan 2024 10:12:48 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:55 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/runs/name/requirements.txt
response:
body:
string: "langchain\r\n"
headers:
accept-ranges:
- bytes
content-length:
- '11'
content-range:
- bytes 0-10/11
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:12:48 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-content-md5:
- 33Ios9cwN5pDhORSdcka9w==
x-ms-blob-type:
- BlockBlob
x-ms-copy-completion-time:
- Wed, 17 Jan 2024 10:12:48 GMT
x-ms-copy-id:
- f32cb6cc-df1b-4580-8cec-19ebb20ae861
x-ms-copy-progress:
- 11/11
x-ms-copy-source:
- https://promptfloweast4063704120.blob.core.windows.net/azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5/LocalUpload/0200d5934081bd47ddf030b9398d0cd0/flow_with_requirements_txt/requirements.txt
x-ms-copy-status:
- success
x-ms-creation-time:
- Wed, 17 Jan 2024 10:12:48 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:55 GMT
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name?comp=list&prefix=promptflow%2FPromptFlowArtifacts%2Fname%2F&restype=container
response:
body:
string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><EnumerationResults
ServiceEndpoint=\"https://promptfloweast4063704120.blob.core.windows.net/\"
ContainerName=\"azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5\"><Prefix>promptflow/PromptFlowArtifacts/name/</Prefix><Blobs><Blob><Name>promptflow/PromptFlowArtifacts/name/flow_artifacts/000000000_000000024.jsonl</Name><Properties><Creation-Time>Wed,
17 Jan 2024 10:13:00 GMT</Creation-Time><Last-Modified>Wed, 17 Jan 2024 10:13:00
GMT</Last-Modified><Etag>0x8DC1744E2879ADA</Etag><Content-Length>1432</Content-Length><Content-Type>application/octet-stream</Content-Type><Content-Encoding
/><Content-Language /><Content-CRC64 /><Content-MD5 /><Cache-Control /><Content-Disposition
/><BlobType>AppendBlob</BlobType><LeaseStatus>unlocked</LeaseStatus><LeaseState>available</LeaseState><ServerEncrypted>true</ServerEncrypted></Properties><OrMetadata
/></Blob><Blob><Name>promptflow/PromptFlowArtifacts/name/flow_outputs/output.jsonl</Name><Properties><Creation-Time>Wed,
17 Jan 2024 10:13:04 GMT</Creation-Time><Last-Modified>Wed, 17 Jan 2024 10:13:04
GMT</Last-Modified><Etag>0x8DC1744E4ADDB62</Etag><Content-Length>35</Content-Length><Content-Type>application/octet-stream</Content-Type><Content-Encoding
/><Content-Language /><Content-CRC64 /><Content-MD5>/e0Zn1phO4FyeGCAse5gGw==</Content-MD5><Cache-Control
/><Content-Disposition /><BlobType>BlockBlob</BlobType><AccessTier>Hot</AccessTier><AccessTierInferred>true</AccessTierInferred><LeaseStatus>unlocked</LeaseStatus><LeaseState>available</LeaseState><ServerEncrypted>true</ServerEncrypted></Properties><OrMetadata
/></Blob><Blob><Name>promptflow/PromptFlowArtifacts/name/instance_results.jsonl</Name><Properties><Creation-Time>Wed,
17 Jan 2024 10:13:00 GMT</Creation-Time><Last-Modified>Wed, 17 Jan 2024 10:13:00
GMT</Last-Modified><Etag>0x8DC1744E28C505B</Etag><Content-Length>109</Content-Length><Content-Type>application/octet-stream</Content-Type><Content-Encoding
/><Content-Language /><Content-CRC64 /><Content-MD5 /><Cache-Control /><Content-Disposition
/><BlobType>AppendBlob</BlobType><LeaseStatus>unlocked</LeaseStatus><LeaseState>available</LeaseState><ServerEncrypted>true</ServerEncrypted></Properties><OrMetadata
/></Blob><Blob><Name>promptflow/PromptFlowArtifacts/name/meta.json</Name><Properties><Creation-Time>Wed,
17 Jan 2024 10:12:58 GMT</Creation-Time><Last-Modified>Wed, 17 Jan 2024 10:12:58
GMT</Last-Modified><Etag>0x8DC1744E1568CC8</Etag><Content-Length>18</Content-Length><Content-Type>application/octet-stream</Content-Type><Content-Encoding
/><Content-Language /><Content-CRC64 /><Content-MD5>/u1NXUpgXMFDmZEw835qnw==</Content-MD5><Cache-Control
/><Content-Disposition /><BlobType>BlockBlob</BlobType><AccessTier>Hot</AccessTier><AccessTierInferred>true</AccessTierInferred><LeaseStatus>unlocked</LeaseStatus><LeaseState>available</LeaseState><ServerEncrypted>true</ServerEncrypted></Properties><OrMetadata
/></Blob><Blob><Name>promptflow/PromptFlowArtifacts/name/node_artifacts/print_env/000000000.jsonl</Name><Properties><Creation-Time>Wed,
17 Jan 2024 10:13:00 GMT</Creation-Time><Last-Modified>Wed, 17 Jan 2024 10:13:00
GMT</Last-Modified><Etag>0x8DC1744E281FBC3</Etag><Content-Length>1142</Content-Length><Content-Type>application/octet-stream</Content-Type><Content-Encoding
/><Content-Language /><Content-CRC64 /><Content-MD5>FmF+m6Ku8yBJVaBT2aPWTg==</Content-MD5><Cache-Control
/><Content-Disposition /><BlobType>BlockBlob</BlobType><AccessTier>Hot</AccessTier><AccessTierInferred>true</AccessTierInferred><LeaseStatus>unlocked</LeaseStatus><LeaseState>available</LeaseState><ServerEncrypted>true</ServerEncrypted></Properties><OrMetadata
/></Blob></Blobs><NextMarker /></EnumerationResults>"
headers:
content-type:
- application/xml
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:57 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/promptflow/PromptFlowArtifacts/name/flow_artifacts/000000000_000000024.jsonl
response:
body:
string: '{"line_number": 0, "run_info": {"run_id": "name_0", "status": "Completed",
"error": null, "inputs": {"key": "API_BASE", "line_number": 0}, "output":
{"output": null}, "metrics": null, "request": null, "parent_run_id": "name",
"root_run_id": "name", "source_run_id": null, "flow_id": "default_flow_id",
"start_time": "2024-01-17T10:13:00.482142Z", "end_time": "2024-01-17T10:13:00.494986Z",
"index": 0, "api_calls": [{"name": "flow", "node_name": "flow", "type": "Flow",
"start_time": 1705486380.482142, "end_time": 1705486380.494986, "children":
[{"name": "get_env_var", "type": "Tool", "inputs": {"key": "API_BASE"}, "output":
{"value": null}, "start_time": 1705486380.485055, "end_time": 1705486380.493225,
"error": null, "children": [], "node_name": "print_env", "parent_id": "",
"id": "de387a3d-b4ff-4bdf-991b-ea8c99b4c4c9", "system_metrics": {}}], "system_metrics":
{"duration": 0.012844, "prompt_tokens": 0, "completion_tokens": 0, "total_tokens":
0}}], "variant_id": "", "name": "", "description": "", "tags": null, "system_metrics":
{"duration": 0.012844, "prompt_tokens": 0, "completion_tokens": 0, "total_tokens":
0}, "result": {"output": null}, "upload_metrics": false}, "start_time": "2024-01-17T10:13:00.482142",
"end_time": "2024-01-17T10:13:00.494986", "name": "", "description": "", "status":
"Completed", "tags": null}
'
headers:
accept-ranges:
- bytes
content-length:
- '1432'
content-range:
- bytes 0-1431/1432
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:13:00 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-committed-block-count:
- '1'
x-ms-blob-type:
- AppendBlob
x-ms-creation-time:
- Wed, 17 Jan 2024 10:13:00 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:57 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/promptflow/PromptFlowArtifacts/name/instance_results.jsonl
response:
body:
string: '{"line_number": 0, "status": "Completed", "inputs.key": "API_BASE",
"inputs.line_number": 0, "output": null}
'
headers:
accept-ranges:
- bytes
content-length:
- '109'
content-range:
- bytes 0-108/109
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:13:00 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-committed-block-count:
- '1'
x-ms-blob-type:
- AppendBlob
x-ms-creation-time:
- Wed, 17 Jan 2024 10:13:00 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:57 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/promptflow/PromptFlowArtifacts/name/flow_outputs/output.jsonl
response:
body:
string: '{"line_number": 0, "output": null}
'
headers:
accept-ranges:
- bytes
content-length:
- '35'
content-range:
- bytes 0-34/35
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:13:04 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-content-md5:
- /e0Zn1phO4FyeGCAse5gGw==
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 17 Jan 2024 10:13:04 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:57 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/promptflow/PromptFlowArtifacts/name/meta.json
response:
body:
string: '{"batch_size": 25}'
headers:
accept-ranges:
- bytes
content-length:
- '18'
content-range:
- bytes 0-17/18
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:12:58 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-content-md5:
- /u1NXUpgXMFDmZEw835qnw==
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 17 Jan 2024 10:12:58 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: null
headers:
Accept:
- application/xml
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0)
x-ms-date:
- Wed, 17 Jan 2024 10:14:57 GMT
x-ms-range:
- bytes=0-33554431
x-ms-version:
- '2023-11-03'
method: GET
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/promptflow/PromptFlowArtifacts/name/node_artifacts/print_env/000000000.jsonl
response:
body:
string: '{"node_name": "print_env", "line_number": 0, "run_info": {"node": "print_env",
"flow_run_id": "name", "run_id": "name_print_env_0", "status": "Completed",
"inputs": {"key": "API_BASE"}, "output": {"value": null}, "metrics": null,
"error": null, "parent_run_id": "name_0", "start_time": "2024-01-17T10:13:00.484187Z",
"end_time": "2024-01-17T10:13:00.493799Z", "index": 0, "api_calls": [{"name":
"get_env_var", "type": "Tool", "inputs": {"key": "API_BASE"}, "output": {"value":
null}, "start_time": 1705486380.485055, "end_time": 1705486380.493225, "error":
null, "children": [], "node_name": "print_env", "parent_id": "", "id": "de387a3d-b4ff-4bdf-991b-ea8c99b4c4c9",
"system_metrics": {}}], "variant_id": "", "cached_run_id": null, "cached_flow_run_id":
null, "logs": {"stdout": "[2024-01-17T10:13:00+0000] 0.0.348\n[2024-01-17T10:13:00+0000]
None\n", "stderr": ""}, "system_metrics": {"duration": 0.009612}, "result":
{"value": null}}, "start_time": "2024-01-17T10:13:00.484187", "end_time":
"2024-01-17T10:13:00.493799", "status": "Completed"}'
headers:
accept-ranges:
- bytes
content-length:
- '1142'
content-range:
- bytes 0-1141/1142
content-type:
- application/octet-stream
last-modified:
- Wed, 17 Jan 2024 10:13:00 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-content-md5:
- FmF+m6Ku8yBJVaBT2aPWTg==
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Wed, 17 Jan 2024 10:13:00 GMT
x-ms-version:
- '2023-11-03'
status:
code: 206
message: Partial Content
- request:
body: '{"runId": "name", "selectRunMetadata": true, "selectRunDefinition": true,
"selectJobSpecification": true}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '137'
content-type:
- application/json
host:
- eastus.api.azureml.ms
user-agent:
- python-httpx/0.26.0
method: POST
uri: https://eastus.api.azureml.ms/history/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/rundata
response:
content: '{"runMetadata": {"runNumber": 1705486358, "rootRunId": "name", "createdUtc":
"2024-01-17T10:12:38.2013066+00:00", "createdBy": {"userObjectId": "00000000-0000-0000-0000-000000000000",
"userPuId": "100320005227D154", "userIdp": null, "userAltSecId": null, "userIss":
"https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userTenantId":
"00000000-0000-0000-0000-000000000000", "userName": "Han Wang", "upn": null},
"userId": "00000000-0000-0000-0000-000000000000", "token": null, "tokenExpiryTimeUtc":
null, "error": null, "warnings": null, "revision": 6, "statusRevision": 3, "runUuid":
"f0b6dc5e-041b-4c4e-bde5-b834f338b411", "parentRunUuid": null, "rootRunUuid":
"f0b6dc5e-041b-4c4e-bde5-b834f338b411", "lastStartTimeUtc": null, "currentComputeTime":
null, "computeDuration": "00:00:04.6486936", "effectiveStartTimeUtc": null,
"lastModifiedBy": {"userObjectId": "00000000-0000-0000-0000-000000000000", "userPuId":
"100320005227D154", "userIdp": null, "userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
"userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "Han Wang",
"upn": "[email protected]"}, "lastModifiedUtc": "2024-01-17T10:13:02.9198188+00:00",
"duration": "00:00:04.6486936", "cancelationReason": null, "currentAttemptId":
1, "runId": "name", "parentRunId": null, "experimentId": "9fd7ec9d-c9b7-439c-9f21-d2f7d05037ae",
"status": "Completed", "startTimeUtc": "2024-01-17T10:13:00.1287876+00:00",
"endTimeUtc": "2024-01-17T10:13:04.7774812+00:00", "scheduleId": null, "displayName":
"name", "name": null, "dataContainerId": "dcid.name", "description": null, "hidden":
false, "runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator":
null, "traits": [], "attribution": "PromptFlow", "computeType": null}, "properties":
{"azureml.promptflow.runtime_name": "automatic", "azureml.promptflow.runtime_version":
"20240111.v2", "azureml.promptflow.definition_file_name": "flow.dag.yaml", "azureml.promptflow.session_id":
"b73aa389eb901648446f8f78c027d8049823b2f47afdc98b", "azureml.promptflow.flow_lineage_id":
"0dc4f921d423a11eec59c6a5f580361e7f257e56b8f9dc1cc8edb1ad56f6d50d", "azureml.promptflow.flow_definition_datastore_name":
"workspaceblobstore", "azureml.promptflow.flow_definition_blob_path": "LocalUpload/0200d5934081bd47ddf030b9398d0cd0/flow_with_requirements_txt/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.snapshot_id":
"031ac559-2433-4b4b-87e5-1aa6a1170995", "azureml.promptflow.total_tokens": "0",
"_azureml.evaluate_artifacts": "[{\"path\": \"instance_results.jsonl\", \"type\":
\"table\"}]"}, "parameters": {}, "actionUris": {}, "scriptName": null, "target":
null, "uniqueChildRunComputeTargets": [], "tags": {}, "settings": {}, "services":
{}, "inputDatasets": [], "outputDatasets": [], "runDefinition": null, "jobSpecification":
null, "primaryMetricName": null, "createdFrom": null, "cancelUri": null, "completeUri":
null, "diagnosticsUri": null, "computeRequest": null, "compute": null, "retainForLifetimeOfWorkspace":
false, "queueingInfo": null, "inputs": null, "outputs": {"debug_info": {"assetId":
"azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1",
"type": "UriFolder"}, "flow_outputs": {"assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_flow_outputs/versions/1",
"type": "UriFolder"}}}, "runDefinition": null, "jobSpecification": null, "systemSettings":
null}'
headers:
connection:
- keep-alive
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.039'
http_version: HTTP/1.1
status_code: 200
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Type:
- application/json
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.11.5 (Windows-10-10.0.22621-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name/logContent
response:
body:
string: '"2024-01-17 10:12:53 +0000 49 promptflow-runtime INFO [name]
Receiving v2 bulk run request 03a647b1-37c9-4b18-af9a-c7c09fb4fef1: {\"flow_id\":
\"name\", \"flow_run_id\": \"name\", \"flow_source\": {\"flow_source_type\":
1, \"flow_source_info\": {\"snapshot_id\": \"031ac559-2433-4b4b-87e5-1aa6a1170995\"},
\"flow_dag_file\": \"flow.dag.yaml\"}, \"log_path\": \"https://promptfloweast4063704120.blob.core.windows.net/azureml/ExperimentRun/dcid.name/logs/azureml/executionlogs.txt?sv=2019-07-07&sr=b&sig=**data_scrubbed**&skoid=55b92eba-d7c7-4afd-ab76-7bb1cd345283&sktid=00000000-0000-0000-0000-000000000000&skt=2024-01-17T09%3A25%3A48Z&ske=2024-01-18T17%3A35%3A48Z&sks=b&skv=2019-07-07&st=2024-01-17T10%3A02%3A51Z&se=2024-01-17T18%3A12%3A51Z&sp=rcw\",
\"app_insights_instrumentation_key\": \"InstrumentationKey=**data_scrubbed**;IngestionEndpoint=https://eastus-6.in.applicationinsights.azure.com/;LiveEndpoint=https://eastus.livediagnostics.monitor.azure.com/\",
\"data_inputs\": {\"data\": \"azureml://datastores/workspaceblobstore/paths/LocalUpload/c32a61842e439cecc022ebcff5dc0da4/env_var_names.jsonl\"},
\"azure_storage_setting\": {\"azure_storage_mode\": 1, \"storage_account_name\":
\"promptfloweast4063704120\", \"blob_container_name\": \"azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5\",
\"flow_artifacts_root_path\": \"promptflow/PromptFlowArtifacts/name\", \"blob_container_sas_token\":
\"?sv=2019-07-07&sr=c&sig=**data_scrubbed**&skoid=55b92eba-d7c7-4afd-ab76-7bb1cd345283&sktid=00000000-0000-0000-0000-000000000000&skt=2024-01-17T10%3A12%3A52Z&ske=2024-01-24T10%3A12%3A52Z&sks=b&skv=2019-07-07&se=2024-01-24T10%3A12%3A52Z&sp=racwl\",
\"output_datastore_name\": \"workspaceblobstore\"}}\n2024-01-17 10:12:53 +0000 49
promptflow-runtime INFO Runtime version: 20240111.v2. PromptFlow version:
1.4.0rc2\n2024-01-17 10:12:53 +0000 49 promptflow-runtime INFO Updating
name to Status.Preparing...\n2024-01-17 10:12:53 +0000 49 promptflow-runtime
INFO Downloading snapshot to /mnt/host/service/app/43691/requests/name\n2024-01-17
10:12:53 +0000 49 promptflow-runtime INFO Get snapshot sas url for
031ac559-2433-4b4b-87e5-1aa6a1170995.\n2024-01-17 10:12:53 +0000 49 promptflow-runtime
INFO Snapshot 031ac559-2433-4b4b-87e5-1aa6a1170995 contains 3 files.\n2024-01-17
10:12:53 +0000 49 promptflow-runtime INFO Download snapshot 031ac559-2433-4b4b-87e5-1aa6a1170995
completed.\n2024-01-17 10:12:53 +0000 49 promptflow-runtime INFO Successfully
download snapshot to /mnt/host/service/app/43691/requests/name\n2024-01-17
10:12:53 +0000 49 promptflow-runtime INFO About to execute a python
flow.\n2024-01-17 10:12:53 +0000 49 promptflow-runtime INFO Use spawn
method to start child process.\n2024-01-17 10:12:53 +0000 49 promptflow-runtime
INFO Starting to check process 401 status for run name\n2024-01-17 10:12:53
+0000 49 promptflow-runtime INFO Start checking run status for run
name\n2024-01-17 10:12:57 +0000 401 promptflow-runtime INFO [49--401]
Start processing flowV2......\n2024-01-17 10:12:57 +0000 401 promptflow-runtime
INFO Runtime version: 20240111.v2. PromptFlow version: 1.4.0rc2\n2024-01-17
10:12:57 +0000 401 promptflow-runtime INFO Setting mlflow tracking
uri...\n2024-01-17 10:12:57 +0000 401 promptflow-runtime INFO Validating
''AzureML Data Scientist'' user authentication...\n2024-01-17 10:12:58 +0000 401
promptflow-runtime INFO Successfully validated ''AzureML Data Scientist''
user authentication.\n2024-01-17 10:12:58 +0000 401 promptflow-runtime
INFO Using AzureMLRunStorageV2\n2024-01-17 10:12:58 +0000 401 promptflow-runtime
INFO Setting mlflow tracking uri to ''azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus''\n2024-01-17
10:12:58 +0000 401 promptflow-runtime INFO Initialized blob service
client for AzureMLRunTracker.\n2024-01-17 10:12:58 +0000 401 promptflow-runtime
INFO Setting mlflow tracking uri to ''azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus''\n2024-01-17
10:12:59 +0000 401 promptflow-runtime INFO Resolve data from url finished
in 0.7507558229999631 seconds\n2024-01-17 10:12:59 +0000 401 promptflow-runtime
INFO Starting the aml run ''name''...\n2024-01-17 10:13:00 +0000 401
execution WARNING Starting run without column mapping may lead to
unexpected results. Please consult the following documentation for more information:
https://aka.ms/pf/column-mapping\n2024-01-17 10:13:00 +0000 401 execution.bulk INFO Set
process count to 1 by taking the minimum value among the factors of {''default_worker_count'':
16, ''row_count'': 1}.\n2024-01-17 10:13:00 +0000 442 execution.bulk INFO Process
442 started.\n2024-01-17 10:13:00 +0000 401 execution.bulk INFO Process
name: ForkProcess-6:2, Process id: 442, Line number: 0 start execution.\n2024-01-17
10:13:00 +0000 401 execution.bulk INFO Process name: ForkProcess-6:2,
Process id: 442, Line number: 0 completed.\n2024-01-17 10:13:00 +0000 401
execution.bulk INFO Finished 1 / 1 lines.\n2024-01-17 10:13:00 +0000 401
execution.bulk INFO Average execution time for completed lines: 0.2
seconds. Estimated time for incomplete lines: 0.0 seconds.\n2024-01-17 10:13:01
+0000 401 promptflow-runtime INFO Post processing batch result...\n2024-01-17
10:13:02 +0000 401 execution.bulk INFO Upload status summary metrics
for run name finished in 1.2066231439998774 seconds\n2024-01-17 10:13:02 +0000 401
promptflow-runtime INFO Successfully write run properties {\"azureml.promptflow.total_tokens\":
0, \"_azureml.evaluate_artifacts\": \"[{\\\"path\\\": \\\"instance_results.jsonl\\\",
\\\"type\\\": \\\"table\\\"}]\"} with run id ''name''\n2024-01-17 10:13:02
+0000 401 execution.bulk INFO Upload RH properties for run name
finished in 0.0651191319998361 seconds\n2024-01-17 10:13:02 +0000 401
promptflow-runtime INFO Creating unregistered output Asset for Run name...\n2024-01-17
10:13:04 +0000 401 promptflow-runtime INFO Created debug_info Asset:
azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1\n2024-01-17
10:13:04 +0000 401 promptflow-runtime INFO Creating unregistered output
Asset for Run name...\n2024-01-17 10:13:04 +0000 401 promptflow-runtime
INFO Created flow_outputs output Asset: azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_flow_outputs/versions/1\n2024-01-17
10:13:04 +0000 401 promptflow-runtime INFO Creating Artifact for Run
name...\n2024-01-17 10:13:04 +0000 401 promptflow-runtime INFO Created
instance_results.jsonl Artifact.\n2024-01-17 10:13:04 +0000 401 promptflow-runtime
INFO Patching name...\n2024-01-17 10:13:04 +0000 401 promptflow-runtime
INFO Ending the aml run ''name'' with status ''Completed''...\n2024-01-17
10:13:59 +0000 49 promptflow-runtime INFO Process 401 finished\n2024-01-17
10:13:59 +0000 49 promptflow-runtime INFO [49] Child process finished!\n2024-01-17
10:13:59 +0000 49 promptflow-runtime INFO [name] End processing bulk
run\n2024-01-17 10:13:59 +0000 49 promptflow-runtime INFO Cleanup
working dir /mnt/host/service/app/43691/requests/name for bulk run\n"'
headers:
connection:
- keep-alive
content-length:
- '8471'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '1.763'
status:
code: 200
message: OK
version: 1
interactions:
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000",
"name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location":
"eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic",
"tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}'
headers:
cache-control:
- no-cache
content-length:
- '3630'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.028'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false
response:
body:
string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}]}'
headers:
cache-control:
- no-cache
content-length:
- '1372'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.086'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.083'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.117'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 08:48:28 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/webClassification3.jsonl
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '379'
content-md5:
- lI/pz9jzTQ7Td3RHPL7y7w==
content-type:
- application/octet-stream
last-modified:
- Mon, 06 Nov 2023 08:30:18 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Mon, 06 Nov 2023 08:30:18 GMT
x-ms-meta-name:
- 94331215-cf7f-452a-9f1a-1d276bc9b0e4
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- 3f163752-edb0-4afc-a6f5-b0a670bd7c24
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 08:48:29 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/webClassification3.jsonl
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore
response:
body:
string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore",
"name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores",
"properties": {"description": null, "tags": null, "properties": null, "isDefault":
true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty":
null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup":
"00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name",
"containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol":
"https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"},
"systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy":
"779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt":
"2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a",
"lastModifiedByType": "Application"}}'
headers:
cache-control:
- no-cache
content-length:
- '1227'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding,Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.085'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '0'
User-Agent:
- promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets
response:
body:
string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}'
headers:
cache-control:
- no-cache
content-length:
- '134'
content-type:
- application/json; charset=utf-8
expires:
- '-1'
pragma:
- no-cache
strict-transport-security:
- max-age=31536000; includeSubDomains
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.082'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 08:48:33 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/flow_with_dict_input/flow.dag.yaml
response:
body:
string: ''
headers:
accept-ranges:
- bytes
content-length:
- '443'
content-md5:
- vjjZga0sy5AWysBOvepjpg==
content-type:
- application/octet-stream
last-modified:
- Tue, 26 Dec 2023 07:03:14 GMT
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
vary:
- Origin
x-ms-blob-type:
- BlockBlob
x-ms-creation-time:
- Tue, 26 Dec 2023 07:03:14 GMT
x-ms-meta-name:
- b91cfd40-7fbe-4efc-8fbc-f59364b95246
x-ms-meta-upload_status:
- completed
x-ms-meta-version:
- '1'
x-ms-version:
- '2023-11-03'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/xml
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0)
x-ms-date:
- Fri, 12 Jan 2024 08:48:34 GMT
x-ms-version:
- '2023-11-03'
method: HEAD
uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/flow_with_dict_input/flow.dag.yaml
response:
body:
string: ''
headers:
server:
- Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
transfer-encoding:
- chunked
vary:
- Origin
x-ms-error-code:
- BlobNotFound
x-ms-version:
- '2023-11-03'
status:
code: 404
message: The specified blob does not exist.
- request:
body: '{"flowDefinitionDataStoreName": "workspaceblobstore", "flowDefinitionBlobPath":
"LocalUpload/000000000000000000000000000000000000/flow_with_dict_input/flow.dag.yaml",
"runId": "name", "runDisplayName": "name", "runExperimentName": "", "batchDataInput":
{"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/000000000000000000000000000000000000/webClassification3.jsonl"},
"inputsMapping": {"key": "{\"a\": 1}", "extra": "${data.url}"}, "connections":
{}, "environmentVariables": {}, "runtimeName": "fake-runtime-name", "sessionId":
"000000000000000000000000000000000000000000000000", "sessionSetupMode": "SystemWait",
"flowLineageId": "0000000000000000000000000000000000000000000000000000000000000000",
"runDisplayNameGenerationType": "UserProvidedMacro"}'
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '846'
Content-Type:
- application/json
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: POST
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/submit
response:
body:
string: '"name"'
headers:
connection:
- keep-alive
content-length:
- '38'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
x-content-type-options:
- nosniff
x-request-time:
- '5.646'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "get_dict_val", "type": "python",
"source": {"type": "code", "path": "get_dict_val.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "get_dict_val.py", "reduce": false}, {"name": "print_val", "type":
"python", "source": {"type": "code", "path": "print_val.py"}, "inputs": {"val":
"${get_dict_val.output.value}", "origin_val": "${get_dict_val.output.origin_value}"},
"tool": "print_val.py", "reduce": false}], "tools": [{"name": "Content Safety
(Text Analyze)", "type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"0.0.216", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "0.0.216", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Source LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CustomConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "endpoint_name":
{"type": ["string"], "default": "-- please enter an endpoint name --", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an Open Source model from the Azure Model catalog, deployed
to an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_source_llm", "class_name": "OpenSourceLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "0.0.216",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "is_builtin": true, "package": "promptflow-tools",
"package_version": "0.0.216", "default_prompt": "# system:\nAs an AI assistant,
your task involves interpreting images and responding to questions about the
image.\nRemember to provide accurate answers based on the information present
in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Serp API", "type":
"python", "inputs": {"connection": {"type": ["SerpConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "engine": {"type":
["string"], "default": "google", "enum": ["google", "bing"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "location": {"type":
["string"], "default": "", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "num": {"type": ["int"], "default": "10",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"query": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "safe": {"type": ["string"], "default": "off",
"enum": ["active", "off"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Serp API to obtain search
results from a specific search engine.", "module": "promptflow.tools.serpapi",
"class_name": "SerpAPI", "function": "search", "is_builtin": true, "package":
"promptflow-tools", "package_version": "0.0.216", "enable_kwargs": false,
"tool_state": "stable"}, {"name": "Faiss Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "top_k": {"type": ["int"], "default": "3",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"vector": {"type": ["list"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Search vector based query
from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "get_dict_val.py", "type": "python",
"inputs": {"key": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "get_dict_val.py", "function":
"get_dict_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}, {"name": "print_val.py", "type": "python", "inputs": {"origin_val":
{"type": ["object"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "val": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "source": "print_val.py",
"function": "print_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}], "inputs": {"key": {"type": "object", "is_chat_input": false}},
"outputs": {"output": {"type": "string", "reference": "${get_dict_val.output.value}",
"evaluation_only": false, "is_chat_output": false}}}, "flowRunResourceId":
"azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId":
"name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "test-runtime-ci",
"inputsMapping": {"key": "{\"a\": 1}", "extra": "${data.url}"}, "outputDatastoreName":
"workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts",
"flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "c78d8e8c-ea02-4d03-945f-70223998a9bc",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '13507'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.502'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "get_dict_val", "type": "python",
"source": {"type": "code", "path": "get_dict_val.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "get_dict_val.py", "reduce": false}, {"name": "print_val", "type":
"python", "source": {"type": "code", "path": "print_val.py"}, "inputs": {"val":
"${get_dict_val.output.value}", "origin_val": "${get_dict_val.output.origin_value}"},
"tool": "print_val.py", "reduce": false}], "tools": [{"name": "Content Safety
(Text Analyze)", "type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"0.0.216", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "0.0.216", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Source LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CustomConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "endpoint_name":
{"type": ["string"], "default": "-- please enter an endpoint name --", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an Open Source model from the Azure Model catalog, deployed
to an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_source_llm", "class_name": "OpenSourceLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "0.0.216",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "is_builtin": true, "package": "promptflow-tools",
"package_version": "0.0.216", "default_prompt": "# system:\nAs an AI assistant,
your task involves interpreting images and responding to questions about the
image.\nRemember to provide accurate answers based on the information present
in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Serp API", "type":
"python", "inputs": {"connection": {"type": ["SerpConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "engine": {"type":
["string"], "default": "google", "enum": ["google", "bing"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "location": {"type":
["string"], "default": "", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "num": {"type": ["int"], "default": "10",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"query": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "safe": {"type": ["string"], "default": "off",
"enum": ["active", "off"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Serp API to obtain search
results from a specific search engine.", "module": "promptflow.tools.serpapi",
"class_name": "SerpAPI", "function": "search", "is_builtin": true, "package":
"promptflow-tools", "package_version": "0.0.216", "enable_kwargs": false,
"tool_state": "stable"}, {"name": "Faiss Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "top_k": {"type": ["int"], "default": "3",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"vector": {"type": ["list"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Search vector based query
from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "get_dict_val.py", "type": "python",
"inputs": {"key": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "get_dict_val.py", "function":
"get_dict_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}, {"name": "print_val.py", "type": "python", "inputs": {"origin_val":
{"type": ["object"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "val": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "source": "print_val.py",
"function": "print_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}], "inputs": {"key": {"type": "object", "is_chat_input": false}},
"outputs": {"output": {"type": "string", "reference": "${get_dict_val.output.value}",
"evaluation_only": false, "is_chat_output": false}}}, "flowRunResourceId":
"azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId":
"name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "test-runtime-ci",
"inputsMapping": {"key": "{\"a\": 1}", "extra": "${data.url}"}, "outputDatastoreName":
"workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts",
"flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "c78d8e8c-ea02-4d03-945f-70223998a9bc",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '13507'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.344'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "get_dict_val", "type": "python",
"source": {"type": "code", "path": "get_dict_val.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "get_dict_val.py", "reduce": false}, {"name": "print_val", "type":
"python", "source": {"type": "code", "path": "print_val.py"}, "inputs": {"val":
"${get_dict_val.output.value}", "origin_val": "${get_dict_val.output.origin_value}"},
"tool": "print_val.py", "reduce": false}], "tools": [{"name": "Content Safety
(Text Analyze)", "type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"0.0.216", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "0.0.216", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Source LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CustomConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "endpoint_name":
{"type": ["string"], "default": "-- please enter an endpoint name --", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an Open Source model from the Azure Model catalog, deployed
to an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_source_llm", "class_name": "OpenSourceLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "0.0.216",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "is_builtin": true, "package": "promptflow-tools",
"package_version": "0.0.216", "default_prompt": "# system:\nAs an AI assistant,
your task involves interpreting images and responding to questions about the
image.\nRemember to provide accurate answers based on the information present
in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Serp API", "type":
"python", "inputs": {"connection": {"type": ["SerpConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "engine": {"type":
["string"], "default": "google", "enum": ["google", "bing"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "location": {"type":
["string"], "default": "", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "num": {"type": ["int"], "default": "10",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"query": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "safe": {"type": ["string"], "default": "off",
"enum": ["active", "off"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Serp API to obtain search
results from a specific search engine.", "module": "promptflow.tools.serpapi",
"class_name": "SerpAPI", "function": "search", "is_builtin": true, "package":
"promptflow-tools", "package_version": "0.0.216", "enable_kwargs": false,
"tool_state": "stable"}, {"name": "Faiss Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "top_k": {"type": ["int"], "default": "3",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"vector": {"type": ["list"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Search vector based query
from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "get_dict_val.py", "type": "python",
"inputs": {"key": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "get_dict_val.py", "function":
"get_dict_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}, {"name": "print_val.py", "type": "python", "inputs": {"origin_val":
{"type": ["object"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "val": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "source": "print_val.py",
"function": "print_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}], "inputs": {"key": {"type": "object", "is_chat_input": false}},
"outputs": {"output": {"type": "string", "reference": "${get_dict_val.output.value}",
"evaluation_only": false, "is_chat_output": false}}}, "flowRunResourceId":
"azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId":
"name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "test-runtime-ci",
"inputsMapping": {"key": "{\"a\": 1}", "extra": "${data.url}"}, "outputDatastoreName":
"workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts",
"flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "c78d8e8c-ea02-4d03-945f-70223998a9bc",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '13507'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.389'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name
response:
body:
string: '{"flowGraph": {"nodes": [{"name": "get_dict_val", "type": "python",
"source": {"type": "code", "path": "get_dict_val.py"}, "inputs": {"key": "${inputs.key}"},
"tool": "get_dict_val.py", "reduce": false}, {"name": "print_val", "type":
"python", "source": {"type": "code", "path": "print_val.py"}, "inputs": {"val":
"${get_dict_val.output.value}", "origin_val": "${get_dict_val.output.origin_value}"},
"tool": "print_val.py", "reduce": false}], "tools": [{"name": "Content Safety
(Text Analyze)", "type": "python", "inputs": {"connection": {"type": ["AzureContentSafetyConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"hate_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"self_harm_category": {"type": ["string"], "default": "medium_sensitivity",
"enum": ["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"sexual_category": {"type": ["string"], "default": "medium_sensitivity", "enum":
["disable", "low_sensitivity", "medium_sensitivity", "high_sensitivity"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"text": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "violence_category": {"type": ["string"],
"default": "medium_sensitivity", "enum": ["disable", "low_sensitivity", "medium_sensitivity",
"high_sensitivity"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Use Azure Content Safety to detect
harmful content.", "module": "promptflow.tools.azure_content_safety", "function":
"analyze_text", "is_builtin": true, "package": "promptflow-tools", "package_version":
"0.0.216", "enable_kwargs": false, "deprecated_tools": ["content_safety_text.tools.content_safety_text_tool.analyze_text"],
"tool_state": "stable"}, {"name": "Embedding", "type": "python", "inputs":
{"connection": {"type": ["AzureOpenAIConnection", "OpenAIConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "deployment_name":
{"type": ["string"], "enabled_by": "connection", "enabled_by_type": ["AzureOpenAIConnection"],
"model_list": ["text-embedding-ada-002", "text-search-ada-doc-001", "text-search-ada-query-001"],
"capabilities": {"completion": false, "chat_completion": false, "embeddings":
true}, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "input": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model": {"type": ["string"], "enum": ["text-embedding-ada-002",
"text-search-ada-doc-001", "text-search-ada-query-001"], "enabled_by": "connection",
"enabled_by_type": ["OpenAIConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Open AI''s embedding
model to create an embedding vector representing the input text.", "module":
"promptflow.tools.embedding", "function": "embedding", "is_builtin": true,
"package": "promptflow-tools", "package_version": "0.0.216", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Open Source LLM", "type": "custom_llm",
"inputs": {"api": {"type": ["string"], "enum": ["chat", "completion"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CustomConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "deployment_name": {"type": ["string"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "endpoint_name":
{"type": ["string"], "default": "-- please enter an endpoint name --", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_new_tokens":
{"type": ["int"], "default": 500, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "model_kwargs": {"type": ["object"], "default":
"{}", "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default", "advanced": true}, "temperature": {"type": ["double"], "default":
1.0, "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "top_p": {"type": ["double"], "default": 1.0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default", "advanced": true}},
"description": "Use an Open Source model from the Azure Model catalog, deployed
to an AzureML Online Endpoint for LLM Chat or Completion API calls.", "module":
"promptflow.tools.open_source_llm", "class_name": "OpenSourceLLM", "function":
"call", "icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"is_builtin": true, "package": "promptflow-tools", "package_version": "0.0.216",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "OpenAI GPT-4V",
"type": "custom_llm", "inputs": {"connection": {"type": ["OpenAIConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"frequency_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "max_tokens": {"type":
["int"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "model": {"type": ["string"], "enum": ["gpt-4-vision-preview"],
"allow_manual_entry": true, "is_multi_select": false, "input_type": "default"},
"presence_penalty": {"type": ["double"], "default": 0, "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "stop": {"type":
["list"], "default": "", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "temperature": {"type": ["double"], "default": 1,
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"top_p": {"type": ["double"], "default": 1, "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use OpenAI GPT-4V to leverage
vision ability.", "module": "promptflow.tools.openai_gpt4v", "class_name":
"OpenAI", "function": "chat", "is_builtin": true, "package": "promptflow-tools",
"package_version": "0.0.216", "default_prompt": "# system:\nAs an AI assistant,
your task involves interpreting images and responding to questions about the
image.\nRemember to provide accurate answers based on the information present
in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"enable_kwargs": false, "tool_state": "stable"}, {"name": "Serp API", "type":
"python", "inputs": {"connection": {"type": ["SerpConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "engine": {"type":
["string"], "default": "google", "enum": ["google", "bing"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "location": {"type":
["string"], "default": "", "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "num": {"type": ["int"], "default": "10",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"query": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "safe": {"type": ["string"], "default": "off",
"enum": ["active", "off"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Use Serp API to obtain search
results from a specific search engine.", "module": "promptflow.tools.serpapi",
"class_name": "SerpAPI", "function": "search", "is_builtin": true, "package":
"promptflow-tools", "package_version": "0.0.216", "enable_kwargs": false,
"tool_state": "stable"}, {"name": "Faiss Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "top_k": {"type": ["int"], "default": "3",
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"vector": {"type": ["list"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "description": "Search vector based query
from the FAISS index file.", "module": "promptflow_vectordb.tool.faiss_index_lookup",
"class_name": "FaissIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector DB Lookup", "type": "python",
"inputs": {"class_name": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["WeaviateConnection"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "collection_name": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["QdrantConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "connection": {"type":
["CognitiveSearchConnection", "QdrantConnection", "WeaviateConnection"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "index_name": {"type":
["string"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection"],
"allow_manual_entry": false, "is_multi_select": false, "input_type": "default"},
"search_filters": {"type": ["object"], "enabled_by": "connection", "enabled_by_type":
["CognitiveSearchConnection", "QdrantConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}, "search_params": {"type":
["object"], "enabled_by": "connection", "enabled_by_type": ["CognitiveSearchConnection",
"QdrantConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "text_field": {"type": ["string"], "enabled_by":
"connection", "enabled_by_type": ["CognitiveSearchConnection", "QdrantConnection",
"WeaviateConnection"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "top_k": {"type": ["int"], "default": "3", "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "vector": {"type":
["list"], "allow_manual_entry": false, "is_multi_select": false, "input_type":
"default"}, "vector_field": {"type": ["string"], "enabled_by": "connection",
"enabled_by_type": ["CognitiveSearchConnection"], "allow_manual_entry": false,
"is_multi_select": false, "input_type": "default"}}, "description": "Search
vector based query from existing Vector Database.", "module": "promptflow_vectordb.tool.vector_db_lookup",
"class_name": "VectorDBLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "Vector Index Lookup", "type": "python",
"inputs": {"path": {"type": ["string"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}, "query": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}, "top_k": {"type":
["int"], "default": "3", "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}}, "description": "Search text or vector based query
from AzureML Vector Index.", "module": "promptflow_vectordb.tool.vector_index_lookup",
"class_name": "VectorIndexLookup", "function": "search", "is_builtin": true,
"package": "promptflow-vectordb", "package_version": "0.0.1", "enable_kwargs":
false, "tool_state": "stable"}, {"name": "get_dict_val.py", "type": "python",
"inputs": {"key": {"type": ["object"], "allow_manual_entry": false, "is_multi_select":
false, "input_type": "default"}}, "source": "get_dict_val.py", "function":
"get_dict_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}, {"name": "print_val.py", "type": "python", "inputs": {"origin_val":
{"type": ["object"], "allow_manual_entry": false, "is_multi_select": false,
"input_type": "default"}, "val": {"type": ["object"], "allow_manual_entry":
false, "is_multi_select": false, "input_type": "default"}}, "source": "print_val.py",
"function": "print_val", "is_builtin": false, "enable_kwargs": false, "tool_state":
"stable"}], "inputs": {"key": {"type": "object", "is_chat_input": false}},
"outputs": {"output": {"type": "string", "reference": "${get_dict_val.output.value}",
"evaluation_only": false, "is_chat_output": false}}}, "flowRunResourceId":
"azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId":
"name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl"},
"flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "test-runtime-ci",
"inputsMapping": {"key": "{\"a\": 1}", "extra": "${data.url}"}, "outputDatastoreName":
"workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts",
"flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "c78d8e8c-ea02-4d03-945f-70223998a9bc",
"studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}'
headers:
connection:
- keep-alive
content-length:
- '13507'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.256'
status:
code: 200
message: OK
- request:
body: '{"runId": "name", "selectRunMetadata": true, "selectRunDefinition": true,
"selectJobSpecification": true}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '137'
Content-Type:
- application/json
User-Agent:
- python-requests/2.31.0
method: POST
uri: https://eastus.api.azureml.ms/history/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/rundata
response:
body:
string: '{"runMetadata": {"runNumber": 1705049320, "rootRunId": "name", "createdUtc":
"2024-01-12T08:48:40.1880896+00:00", "createdBy": {"userObjectId": "00000000-0000-0000-0000-000000000000",
"userPuId": null, "userIdp": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
"userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
"userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587",
"upn": null}, "userId": "00000000-0000-0000-0000-000000000000", "token": null,
"tokenExpiryTimeUtc": null, "error": null, "warnings": null, "revision": 6,
"statusRevision": 3, "runUuid": "c03c5a8f-a0b1-4a73-bc69-3fadbfb99de8", "parentRunUuid":
null, "rootRunUuid": "c03c5a8f-a0b1-4a73-bc69-3fadbfb99de8", "lastStartTimeUtc":
null, "currentComputeTime": null, "computeDuration": "00:00:04.4963653", "effectiveStartTimeUtc":
null, "lastModifiedBy": {"userObjectId": "00000000-0000-0000-0000-000000000000",
"userPuId": null, "userIdp": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
"userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
"userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "18a66f5f-dbdf-4c17-9dd7-1634712a9cbe",
"upn": null}, "lastModifiedUtc": "2024-01-12T08:49:01.3781111+00:00", "duration":
"00:00:04.4963653", "cancelationReason": null, "currentAttemptId": 1, "runId":
"name", "parentRunId": null, "experimentId": "e3f25497-3a2f-4b85-9007-6878e87a4f82",
"status": "Completed", "startTimeUtc": "2024-01-12T08:48:57.727066+00:00",
"endTimeUtc": "2024-01-12T08:49:02.2234313+00:00", "scheduleId": null, "displayName":
"name", "name": null, "dataContainerId": "dcid.name", "description": null,
"hidden": false, "runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator":
null, "traits": [], "attribution": "PromptFlow", "computeType": "AmlcDsi"},
"properties": {"azureml.promptflow.runtime_name": "test-runtime-ci", "azureml.promptflow.runtime_version":
"20231204.v4", "azureml.promptflow.definition_file_name": "flow.dag.yaml",
"azureml.promptflow.session_id": "3d2f008a09980a5f5e8942bd5f4c92141c535ef210ac813d",
"azureml.promptflow.flow_lineage_id": "3c6d1b895f186a7155e01b19db3460cb3ed8fa951d53c78bbede4ee2b2b2b4be",
"azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore",
"azureml.promptflow.flow_definition_blob_path": "LocalUpload/346e8722d2a6920d9a9bef5f572c67aa/flow_with_dict_input/flow.dag.yaml",
"azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl",
"azureml.promptflow.inputs_mapping": "{\"key\":\"{\\\"a\\\": 1}\",\"extra\":\"${data.url}\"}",
"_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.snapshot_id":
"c78d8e8c-ea02-4d03-945f-70223998a9bc", "azureml.promptflow.total_tokens":
"0", "_azureml.evaluate_artifacts": "[{\"path\": \"instance_results.jsonl\",
\"type\": \"table\"}]"}, "parameters": {}, "actionUris": {}, "scriptName":
null, "target": null, "uniqueChildRunComputeTargets": [], "tags": {}, "settings":
{}, "services": {}, "inputDatasets": [], "outputDatasets": [], "runDefinition":
null, "jobSpecification": null, "primaryMetricName": null, "createdFrom":
null, "cancelUri": null, "completeUri": null, "diagnosticsUri": null, "computeRequest":
null, "compute": null, "retainForLifetimeOfWorkspace": false, "queueingInfo":
null, "inputs": null, "outputs": {"debug_info": {"assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1",
"type": "UriFolder"}, "flow_outputs": {"assetId": "azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_flow_outputs/versions/1",
"type": "UriFolder"}}}, "runDefinition": null, "jobSpecification": null, "systemSettings":
null}'
headers:
connection:
- keep-alive
content-length:
- '4679'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.038'
status:
code: 200
message: OK
- request:
body: null
headers:
Accept:
- application/json
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Type:
- application/json
User-Agent:
- promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown
Python/3.10.13 (Windows-10-10.0.22631-SP0)
method: GET
uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name/logContent
response:
body:
string: '"2024-01-12 08:48:43 +0000 78 promptflow-runtime INFO [name]
Receiving v2 bulk run request 73f31af6-46f6-464a-bff1-0620d10d6b0b: {\"flow_id\":
\"name\", \"flow_run_id\": \"name\", \"flow_source\": {\"flow_source_type\":
1, \"flow_source_info\": {\"snapshot_id\": \"c78d8e8c-ea02-4d03-945f-70223998a9bc\"},
\"flow_dag_file\": \"flow.dag.yaml\"}, \"log_path\": \"https://promptfloweast4063704120.blob.core.windows.net/azureml/ExperimentRun/dcid.name/logs/azureml/executionlogs.txt?sv=2019-07-07&sr=b&sig=**data_scrubbed**&skoid=55b92eba-d7c7-4afd-ab76-7bb1cd345283&sktid=00000000-0000-0000-0000-000000000000&skt=2024-01-12T08%3A37%3A49Z&ske=2024-01-13T16%3A47%3A49Z&sks=b&skv=2019-07-07&st=2024-01-12T08%3A38%3A43Z&se=2024-01-12T16%3A48%3A43Z&sp=rcw\",
\"app_insights_instrumentation_key\": \"InstrumentationKey=**data_scrubbed**;IngestionEndpoint=https://eastus-6.in.applicationinsights.azure.com/;LiveEndpoint=https://eastus.livediagnostics.monitor.azure.com/\",
\"data_inputs\": {\"data\": \"azureml://datastores/workspaceblobstore/paths/LocalUpload/74c11bba717480b2d6b04b8e746d09d7/webClassification3.jsonl\"},
\"inputs_mapping\": {\"key\": \"{\\\"a\\\": 1}\", \"extra\": \"${data.url}\"},
\"azure_storage_setting\": {\"azure_storage_mode\": 1, \"storage_account_name\":
\"promptfloweast4063704120\", \"blob_container_name\": \"azureml-blobstore-3e123da1-f9a5-4c91-9234-8d9ffbb39ff5\",
\"flow_artifacts_root_path\": \"promptflow/PromptFlowArtifacts/name\", \"blob_container_sas_token\":
\"?sv=2019-07-07&sr=c&sig=**data_scrubbed**&skoid=55b92eba-d7c7-4afd-ab76-7bb1cd345283&sktid=00000000-0000-0000-0000-000000000000&skt=2024-01-12T08%3A48%3A43Z&ske=2024-01-19T08%3A48%3A43Z&sks=b&skv=2019-07-07&se=2024-01-19T08%3A48%3A43Z&sp=racwl\",
\"output_datastore_name\": \"workspaceblobstore\"}}\n2024-01-12 08:48:43 +0000 78
promptflow-runtime INFO Runtime version: 20231204.v4. PromptFlow version:
1.2.0rc1\n2024-01-12 08:48:44 +0000 78 promptflow-runtime INFO Updating
name to Status.Preparing...\n2024-01-12 08:48:44 +0000 78 promptflow-runtime
INFO Downloading snapshot to /mnt/host/service/app/39415/requests/name\n2024-01-12
08:48:44 +0000 78 promptflow-runtime INFO Get snapshot sas url for
c78d8e8c-ea02-4d03-945f-70223998a9bc...\n2024-01-12 08:48:50 +0000 78
promptflow-runtime INFO Downloading snapshot c78d8e8c-ea02-4d03-945f-70223998a9bc
from uri https://promptfloweast4063704120.blob.core.windows.net/snapshotzips/promptflow-eastus:3e123da1-f9a5-4c91-9234-8d9ffbb39ff5:snapshotzip/c78d8e8c-ea02-4d03-945f-70223998a9bc.zip...\n2024-01-12
08:48:50 +0000 78 promptflow-runtime INFO Downloaded file /mnt/host/service/app/39415/requests/name/c78d8e8c-ea02-4d03-945f-70223998a9bc.zip
with size 872 for snapshot c78d8e8c-ea02-4d03-945f-70223998a9bc.\n2024-01-12
08:48:50 +0000 78 promptflow-runtime INFO Download snapshot c78d8e8c-ea02-4d03-945f-70223998a9bc
completed.\n2024-01-12 08:48:50 +0000 78 promptflow-runtime INFO Successfully
download snapshot to /mnt/host/service/app/39415/requests/name\n2024-01-12
08:48:50 +0000 78 promptflow-runtime INFO About to execute a python
flow.\n2024-01-12 08:48:50 +0000 78 promptflow-runtime INFO Use spawn
method to start child process.\n2024-01-12 08:48:50 +0000 78 promptflow-runtime
INFO Starting to check process 5834 status for run name\n2024-01-12 08:48:51
+0000 78 promptflow-runtime INFO Start checking run status for run
name\n2024-01-12 08:48:54 +0000 5834 promptflow-runtime INFO [78--5834]
Start processing flowV2......\n2024-01-12 08:48:56 +0000 5834 promptflow-runtime
INFO Runtime version: 20231204.v4. PromptFlow version: 1.2.0rc1\n2024-01-12
08:48:56 +0000 5834 promptflow-runtime INFO Setting mlflow tracking
uri...\n2024-01-12 08:48:56 +0000 5834 promptflow-runtime INFO Validating
''AzureML Data Scientist'' user authentication...\n2024-01-12 08:48:56 +0000 5834
promptflow-runtime INFO Successfully validated ''AzureML Data Scientist''
user authentication.\n2024-01-12 08:48:56 +0000 5834 promptflow-runtime
INFO Using AzureMLRunStorageV2\n2024-01-12 08:48:56 +0000 5834 promptflow-runtime
INFO Setting mlflow tracking uri to ''azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus''\n2024-01-12
08:48:56 +0000 5834 promptflow-runtime INFO Initialized blob service
client for AzureMLRunTracker.\n2024-01-12 08:48:56 +0000 5834 promptflow-runtime
INFO Setting mlflow tracking uri to ''azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/promptflow-eastus''\n2024-01-12
08:48:57 +0000 5834 promptflow-runtime INFO Resolve data from url finished
in 0.5617876444011927 seconds\n2024-01-12 08:48:57 +0000 5834 promptflow-runtime
INFO Starting the aml run ''name''...\n2024-01-12 08:48:57 +0000 5834
execution.bulk INFO Using fork, process count: 3\n2024-01-12 08:48:58
+0000 5876 execution.bulk INFO Process 5876 started.\n2024-01-12
08:48:58 +0000 5834 execution.bulk INFO Process name: ForkProcess-72:2,
Process id: 5876, Line number: 0 start execution.\n2024-01-12 08:48:58 +0000 5886
execution.bulk INFO Process 5886 started.\n2024-01-12 08:48:58 +0000 5880
execution.bulk INFO Process 5880 started.\n2024-01-12 08:48:58 +0000 5834
execution.bulk INFO Process name: ForkProcess-72:4, Process id: 5886,
Line number: 1 start execution.\n2024-01-12 08:48:58 +0000 5834 execution.bulk INFO Process
name: ForkProcess-72:3, Process id: 5880, Line number: 2 start execution.\n2024-01-12
08:48:58 +0000 5834 execution.bulk INFO Process name: ForkProcess-72:2,
Process id: 5876, Line number: 0 completed.\n2024-01-12 08:48:58 +0000 5834
execution.bulk INFO Finished 1 / 3 lines.\n2024-01-12 08:48:58 +0000 5834
execution.bulk INFO Average execution time for completed lines: 0.33
seconds. Estimated time for incomplete lines: 0.66 seconds.\n2024-01-12 08:48:58
+0000 5834 execution.bulk INFO Process name: ForkProcess-72:3,
Process id: 5880, Line number: 2 completed.\n2024-01-12 08:48:58 +0000 5834
execution.bulk INFO Process name: ForkProcess-72:4, Process id: 5886,
Line number: 1 completed.\n2024-01-12 08:48:58 +0000 5834 execution.bulk INFO Finished
3 / 3 lines.\n2024-01-12 08:48:58 +0000 5834 execution.bulk INFO Finished
3 / 3 lines.\n2024-01-12 08:48:58 +0000 5834 execution.bulk INFO Average
execution time for completed lines: 0.18 seconds. Estimated time for incomplete
lines: 0.0 seconds.\n2024-01-12 08:48:58 +0000 5834 execution.bulk INFO Average
execution time for completed lines: 0.18 seconds. Estimated time for incomplete
lines: 0.0 seconds.\n2024-01-12 08:49:01 +0000 5834 execution.bulk INFO Upload
status summary metrics for run name finished in 1.5873911464586854 seconds\n2024-01-12
08:49:01 +0000 5834 promptflow-runtime INFO Successfully write run
properties {\"azureml.promptflow.total_tokens\": 0, \"_azureml.evaluate_artifacts\":
\"[{\\\"path\\\": \\\"instance_results.jsonl\\\", \\\"type\\\": \\\"table\\\"}]\"}
with run id ''name''\n2024-01-12 08:49:01 +0000 5834 execution.bulk INFO Upload
RH properties for run name finished in 0.0706604104489088 seconds\n2024-01-12
08:49:01 +0000 5834 promptflow-runtime INFO Creating unregistered output
Asset for Run name...\n2024-01-12 08:49:01 +0000 5834 promptflow-runtime
INFO Created debug_info Asset: azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_debug_info/versions/1\n2024-01-12
08:49:01 +0000 5834 promptflow-runtime INFO Creating unregistered output
Asset for Run name...\n2024-01-12 08:49:01 +0000 5834 promptflow-runtime
INFO Created flow_outputs output Asset: azureml://locations/eastus/workspaces/00000/data/azureml_name_output_data_flow_outputs/versions/1\n2024-01-12
08:49:01 +0000 5834 promptflow-runtime INFO Creating Artifact for Run
name...\n2024-01-12 08:49:02 +0000 5834 promptflow-runtime INFO Created
instance_results.jsonl Artifact.\n2024-01-12 08:49:02 +0000 5834 promptflow-runtime
INFO Patching name...\n2024-01-12 08:49:02 +0000 5834 promptflow-runtime
INFO Ending the aml run ''name'' with status ''Completed''...\n2024-01-12
08:49:03 +0000 78 promptflow-runtime INFO Process 5834 finished\n2024-01-12
08:49:03 +0000 78 promptflow-runtime INFO [78] Child process finished!\n2024-01-12
08:49:03 +0000 78 promptflow-runtime INFO [name] End processing bulk
run\n2024-01-12 08:49:03 +0000 78 promptflow-runtime INFO Cleanup
working dir /mnt/host/service/app/39415/requests/name for bulk run\n"'
headers:
connection:
- keep-alive
content-length:
- '9845'
content-type:
- application/json; charset=utf-8
strict-transport-security:
- max-age=15724800; includeSubDomains; preload
transfer-encoding:
- chunked
vary:
- Accept-Encoding
x-content-type-options:
- nosniff
x-request-time:
- '0.942'
status:
code: 200
message: OK
version: 1
name: flow_run_20230629_101205
description: sample bulk run
flow: ../flows/web_classification
data: ../datas/webClassification1.jsonl
column_mapping:
url: "${data.url}"
variant: ${summarize_text_content.variant_0}
extra_key: extra_value
# run config: env related
environment_variables: env_file
from promptflow import ToolProvider, tool
from promptflow.connections import AzureOpenAIConnection


@tool(name="python_tool")
def my_python_tool(input1: str) -> str:
    return 'hello ' + input1


@tool
def my_python_tool_without_name(input1: str) -> str:
    return 'hello ' + input1


class PythonTool(ToolProvider):
    def __init__(self, connection: AzureOpenAIConnection):
        super().__init__()
        self.connection = connection

    @tool
    def python_tool(self, input1: str) -> str:
        return 'hello ' + input1
system:
You are a helpful assistant.
user:
{{question}}
name: node_cycle_with_activate
inputs:
text:
type: string
outputs:
result:
type: string
reference: ${second_node}
nodes:
- name: first_node
type: python
source:
type: code
path: test.py
inputs:
text: ${inputs.text}
activate:
when: ${second_node.output}
is: true
aggregation: true
- name: second_node
type: python
source:
type: code
path: test.py
inputs:
text: ${first_node}
aggregation: true
# Devcontainer for promptflow
To facilitate promptflow project development and help you work on LLM projects with promptflow more effectively,
we've preconfigured the environment needed to develop promptflow projects and use flows via the dev container feature.
You can start developing promptflow projects and leveraging flows right away by simply opening the repo in a dev container through VS Code or Codespaces.
## Use Github Codespaces
Open the promptflow repo in Codespaces; it will automatically build the dev container environment and open promptflow inside it. You can just click: [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/promptflow?quickstart=1)
## Use local devcontainer
Open the promptflow repo in VS Code, install the VS Code extension Dev Containers, and then reopen promptflow in the dev container.

**For more about dev containers, please refer to: [dev containers](https://code.visualstudio.com/docs/devcontainers/containers)**
# Consume connections from Azure AI
For a smooth development flow that transitions from cloud (Azure AI) to local environments, you can directly utilize the connection already established on the cloud by setting the connection provider to "Azure AI connections".
You can set the connection provider using the following steps:
1. Navigate to the connection list in the VS Code primary sidebar.
1. Click on the ... (more options icon) at the top and select the `Set connection provider` option.

1. Choose one of the "Azure AI connections" provider types that you wish to use. [Click to learn more about the differences between the connection providers](#different-connection-providers).

1. If you choose "Azure AI Connections - for current working directory", then you need to specify the cloud resources in the `config.json` file within the project folder.

1. If you choose "Azure AI Connections - for this machine", specify the cloud resources in the connection string. You can do this in one of two ways:
(1) Input connection string in the input box above.
For example `azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>`

(2) Follow the wizard to set up your config step by step.

1. Once the connection provider is set, the connection list will automatically refresh, displaying the connections retrieved from the selected provider.
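For reference, the `config.json` consumed by the "for current working directory" option uses the standard Azure ML workspace config format. A minimal sketch with placeholder values (substitute your own resource identifiers):

```json
{
  "subscription_id": "<your-subscription>",
  "resource_group": "<your-resourcegroup>",
  "workspace_name": "<your-workspace>"
}
```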
Note:
1. You need to have a project folder open to use the "Azure AI connections - for current working directory" option.
1. Once you change the connection provider, it will stay that way until you change it again and save the new setting.
## Different connection providers
Currently, we support three types of connections:
|Connection provider|Type|Description|Provider Specification|Use Case|
|---|---|---|---|---|
| Local Connections| Local| Enables consuming connections created locally and stored in a local SQLite database. |NA| Ideal when connections need to be stored and managed locally.|
|Azure AI connection - For current working directory| Cloud provider| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project.| Specify the resource ID in a `config.json` file placed in the project folder. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#azureml)| A dynamic approach for consuming connections from different providers in specific projects. Allows for setting different provider configurations for different flows by updating the `config.json` in the project folder.|
|Azure AI connection - For this machine| Cloud| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project. | Use a `connection string` to specify a cloud resource as the provider on your local machine. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#full-azure-machine-learning-workspace-resource-id)|A global provider setting that applies across all working directories on your machine.|
## Next steps
- Set global configs on [connection.provider](../../how-to-guides/set-global-configs.md#connectionprovider).
- [Manage connections on local](../../how-to-guides/manage-connections.md).
# Replay end-to-end tests
* This document introduces replay tests for those located in [sdk_cli_azure_test](../../src/promptflow/tests/sdk_cli_azure_test/e2etests/) and [sdk_cli_test](../../src/promptflow/tests/sdk_cli_test/e2etests/).
* The primary purpose of replay tests is to avoid the need for credentials, Azure workspaces, and OpenAI tokens, and to test prompt flow behavior directly.
* Although there are different techniques behind recording/replaying, there are some common steps to run the tests in replay mode.
* The key switch for replay tests is the environment variable `PROMPT_FLOW_TEST_MODE`.
## How to run tests in replay mode
After cloning the full repo and setting up the proper test environment following [dev_setup.md](./dev_setup.md), follow these steps in the root directory of the repo:
1. If you have changed/affected tests in __sdk_cli_test__: Copy or rename the file [dev-connections.json.example](../../src/promptflow/dev-connections.json.example) to `connections.json` in the same folder.
2. In your Python environment, set the environment variable `PROMPT_FLOW_TEST_MODE` to `'replay'` and run the test(s).
These tests should work properly without any real connection settings.
## Test modes
There are 3 representative values of the environment variable `PROMPT_FLOW_TEST_MODE`:
- `live`: Tests run against the real backend, which is the way traditional end-to-end tests do.
- `record`: Tests run against the real backend, and network traffic is sanitized (sensitive and unnecessary requests/responses are filtered) and recorded to local files (recordings).
- `replay`: There is no real network traffic between SDK/CLI and the backend, tests run against local recordings.
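For illustration, here's how a test suite might read and validate this variable — a hypothetical helper, not actual promptflow code (the real fixtures may differ):

```python
import os

VALID_MODES = ("live", "record", "replay")


def resolve_test_mode() -> str:
    # Hypothetical helper: read PROMPT_FLOW_TEST_MODE, defaulting to "live",
    # and reject values outside the three supported modes.
    mode = os.environ.get("PROMPT_FLOW_TEST_MODE", "live").lower()
    if mode not in VALID_MODES:
        raise ValueError(f"Unsupported PROMPT_FLOW_TEST_MODE: {mode!r}")
    return mode
```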
## Update test recordings
To record a test, first clone the full repo and set up the proper test environment following [dev_setup.md](./dev_setup.md), then:
1. Prepare some data.
* If you have changed/affected tests in __sdk_cli_test__: Copy or rename the file [dev-connections.json.example](../../src/promptflow/dev-connections.json.example) to `connections.json` in the same folder.
* If you have changed/affected tests in __sdk_cli_azure_test__: prepare your Azure ML workspace, make sure your Azure CLI logged in, and set the environment variable `PROMPT_FLOW_SUBSCRIPTION_ID`, `PROMPT_FLOW_RESOURCE_GROUP_NAME`, `PROMPT_FLOW_WORKSPACE_NAME` and `PROMPT_FLOW_RUNTIME_NAME` (if needed) pointing to your workspace.
2. Record the test.
* Specify the environment variable `PROMPT_FLOW_TEST_MODE` to `'record'`. If you have a `.env` file, we recommend specifying it there. Here is an example [.env file](../../src/promptflow/.env.example). Then, just run the test that you want to record.
3. Once the test completes:
* If you have changed/affected tests in __sdk_cli_azure_test__: There should be one new YAML file located in `src/promptflow/tests/test_configs/recordings/`, containing the network traffic of the test.
* If you have changed/affected tests in __sdk_cli_test__: There may be changes in the folder `src/promptflow/tests/test_configs/node_recordings/`. Don’t worry if there are no changes, because similar LLM calls may have been recorded before.
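Putting the environment pieces together, a `.env` file for recording __sdk_cli_azure_test__ cases might look like this (all values are placeholders for your own Azure resources):

```
PROMPT_FLOW_TEST_MODE=record
PROMPT_FLOW_SUBSCRIPTION_ID=<your-subscription-id>
PROMPT_FLOW_RESOURCE_GROUP_NAME=<your-resource-group>
PROMPT_FLOW_WORKSPACE_NAME=<your-workspace>
# only needed when a specific runtime is required
PROMPT_FLOW_RUNTIME_NAME=<your-runtime>
```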
## Techniques behind replay test
### Sdk_cli_azure_test
End-to-end tests for pfazure aim to test the behavior of the PromptFlow SDK/CLI as it interacts with the service. This process can be time-consuming, error-prone, and require credentials (which are unavailable to pull requests from forked repositories); all of these go against our intention for a smooth development experience.
Therefore, we introduce replay tests, which leverage [VCR.py](https://pypi.org/project/vcrpy/) to record all required network traffic to local files and replay it during tests. In this way, we avoid the need for credentials while speeding up and stabilizing the test process.
### Sdk_cli_test
sdk_cli_test often doesn’t use a real backend; it directly invokes LLM calls from localhost. Thus the key goal of replay tests here is to avoid the need for OpenAI tokens. If you have OpenAI / Azure OpenAI tokens yourself, you can try recording the tests. Record Storage will not record your own LLM connection, only the inputs and outputs of the LLM calls.
There are also limitations. Currently, recorded calls are:
* AzureOpenAI calls
* OpenAI calls
* tool name "fetch_text_content_from_url" and tool name "my_python_tool"
# Creating a Dynamic List Tool Input
Tool input options can be generated on the fly using a dynamic list. Instead of having predefined static options, the tool author defines a request function that queries backends like APIs to retrieve real-time options. This enables flexible integration with various data sources to populate dynamic options. For instance, the function could call a storage API to list current files. Rather than a hardcoded list, the user sees up-to-date options when running the tool.
## Prerequisites
- Please make sure you have the latest version of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) installed (v1.3.1+).
- Please install promptflow package and ensure that its version is 1.0.0 or later.
```sh
pip install promptflow>=1.0.0
```
## Create a tool input with dynamic listing
### Create a list function
To enable dynamic listing, the tool author defines a request function with the following structure:
- Type: Regular Python function, can be in tool file or separate file
- Input: Accepts parameters needed to fetch options
- Output: Returns a list of option objects as `List[Dict[str, Union[str, int, float, list, Dict]]]`:
- Required key:
- `value`: Internal option value passed to tool function
- Optional keys:
- `display_value`: Display text shown in dropdown (defaults to `value`)
- `hyperlink`: URL to open when option clicked
- `description`: Tooltip text on hover
This function can make backend calls to retrieve the latest options, returning them in a standardized dictionary structure for the dynamic list. The required and optional keys enable configuring how each option appears and behaves in the tool input dropdown. See [my_list_func](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_dynamic_list_input.py) as an example.
```python
from typing import Dict, List, Union


def my_list_func(prefix: str = "", size: int = 10, **kwargs) -> List[Dict[str, Union[str, int, float, list, Dict]]]:
"""This is a dummy function to generate a list of items.
:param prefix: prefix to add to each item.
:param size: number of items to generate.
:param kwargs: other parameters.
:return: a list of items. Each item is a dict with the following keys:
- value: for backend use. Required.
- display_value: for UI display. Optional.
- hyperlink: external link. Optional.
- description: information icon tip. Optional.
"""
import random
words = ["apple", "banana", "cherry", "date", "elderberry", "fig", "grape", "honeydew", "kiwi", "lemon"]
result = []
for i in range(size):
random_word = f"{random.choice(words)}{i}"
cur_item = {
"value": random_word,
"display_value": f"{prefix}_{random_word}",
"hyperlink": f'https://www.bing.com/search?q={random_word}',
"description": f"this is {i} item",
}
result.append(cur_item)
return result
```
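Because each returned item is a plain dictionary, you can sanity-check the option shape without prompt flow at all. The helper below is illustrative only (it is not part of the promptflow API); it verifies that every option carries the required `value` key:

```python
from typing import Dict, List, Union

Option = Dict[str, Union[str, int, float, list, Dict]]


def validate_options(options: List[Option]) -> List[Option]:
    # Every option must carry the required "value" key; display_value,
    # hyperlink, and description are optional.
    for i, opt in enumerate(options):
        if "value" not in opt:
            raise ValueError(f"Option {i} is missing the required 'value' key: {opt}")
    return options


options = validate_options([
    {"value": "apple0", "display_value": "demo_apple0"},
    {"value": "banana1"},
])
```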
### Configure a tool input with the list function
In `inputs` section of tool YAML, add following properties to the input that you want to make dynamic:
- `dynamic_list`:
- `func_path`: Path to the list function (module_name.function_name).
- `func_kwargs`: Parameters to pass to the function, can reference other input values.
- `allow_manual_entry`: Allow user to enter input value manually. Defaults to false.
- `is_multi_select`: Allow user to select multiple values. Defaults to false.
See [tool_with_dynamic_list_input.yaml](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_dynamic_list_input.yaml) as an example.
```yaml
my_tool_package.tools.tool_with_dynamic_list_input.my_tool:
function: my_tool
inputs:
input_text:
type:
- list
dynamic_list:
func_path: my_tool_package.tools.tool_with_dynamic_list_input.my_list_func
func_kwargs:
- name: prefix # argument name to be passed to the function
type:
- string
# if optional is not specified, default to false.
          # this is for UX pre-validation. If optional is false but no input is provided, the UX can raise an error in advance.
optional: true
reference: ${inputs.input_prefix} # dynamic reference to another input parameter
- name: size # another argument name to be passed to the function
type:
- int
optional: true
default: 10
      # enum and dynamic list may need the settings below.
# allow user to enter input value manually, default false.
allow_manual_entry: true
# allow user to select multiple values, default false.
is_multi_select: true
# used to filter
input_prefix:
type:
- string
module: my_tool_package.tools.tool_with_dynamic_list_input
name: My Tool with Dynamic List Input
description: This is my tool with dynamic list input
type: python
```
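To make the YAML contract concrete, the sketch below shows how a host application might resolve `func_kwargs` against the current input values and invoke the list function. This is an illustrative approximation, not promptflow's actual implementation:

```python
import importlib
from typing import Any, Dict, List


def call_dynamic_list(func_path: str,
                      func_kwargs: List[Dict[str, Any]],
                      current_inputs: Dict[str, Any]) -> List[Dict[str, Any]]:
    """Import the function at a 'module.function' path, resolve its kwargs, and call it."""
    module_name, func_name = func_path.rsplit(".", 1)
    func = getattr(importlib.import_module(module_name), func_name)
    kwargs: Dict[str, Any] = {}
    for spec in func_kwargs:
        reference = spec.get("reference")  # e.g. "${inputs.input_prefix}"
        if reference and reference.startswith("${inputs.") and reference.endswith("}"):
            input_name = reference[len("${inputs."):-1]
            if input_name in current_inputs:
                kwargs[spec["name"]] = current_inputs[input_name]
        elif "default" in spec:
            kwargs[spec["name"]] = spec["default"]
    return func(**kwargs)
```

With the YAML above, such a helper would pass the current value of `input_prefix` as `prefix` and fall back to the declared default for `size`, mirroring how the dropdown is populated.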
## Use the tool in VS Code
Once you package and share your tool, you can use it in VS Code per the [tool package guide](create-and-use-tool-package.md#use-your-tool-from-vscode-extension). You could try `my-tools-package` for a quick test.
```sh
pip install "my-tools-package>=0.0.8"
```


> Note: If your dynamic list function calls Azure APIs, you need to log in to Azure and set a default workspace. Otherwise, the tool input will be empty and you can't select anything. See [FAQs](#im-a-tool-author-and-want-to-dynamically-list-azure-resources-in-my-tool-input-what-should-i-pay-attention-to) for more details.
## FAQs
### I'm a tool author, and want to dynamically list Azure resources in my tool input. What should I pay attention to?
1. Include the Azure workspace triple `subscription_id`, `resource_group_name`, and `workspace_name` in the list function signature. The system automatically appends the workspace triple to the function's input parameters when they appear in the signature. See [list_endpoint_names](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_dynamic_list_input.py) as an example.
```python
from typing import Dict, List


def list_endpoint_names(subscription_id, resource_group_name, workspace_name, prefix: str = "") -> List[Dict[str, str]]:
"""This is an example to show how to get Azure ML resource in tool input list function.
:param subscription_id: Azure subscription id.
:param resource_group_name: Azure resource group name.
:param workspace_name: Azure ML workspace name.
:param prefix: prefix to add to each item.
"""
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
credential.get_token("https://management.azure.com/.default")
ml_client = MLClient(
credential=credential,
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name)
result = []
for ep in ml_client.online_endpoints.list():
hyperlink = (
f"https://ml.azure.com/endpoints/realtime/{ep.name}/detail?wsid=/subscriptions/"
f"{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft."
f"MachineLearningServices/workspaces/{workspace_name}"
)
cur_item = {
"value": ep.name,
"display_value": f"{prefix}_{ep.name}",
# external link to jump to the endpoint page.
"hyperlink": hyperlink,
"description": f"this is endpoint: {ep.name}",
}
result.append(cur_item)
return result
```
2. Note in your tool documentation that if users want to use the tool locally, they should log in to Azure and set the workspace triple as defaults. Otherwise, the tool input will be empty and they can't select anything.
```sh
az login
az account set --subscription <subscription_id>
az configure --defaults group=<resource_group_name> workspace=<workspace_name>
```
Install the Azure dependencies.
```sh
pip install azure-ai-ml
```
```sh
pip install "my-tools-package[azure]>=0.0.8"
```

### I'm a tool user, and cannot see any options in dynamic list tool input. What should I do?
If you are unable to see any options in a dynamic list tool input, you may see an error message below the input field stating:
"Unable to display list of items due to XXX. Please contact the tool author/support team for troubleshooting assistance."
If this occurs, follow these troubleshooting steps:
- Note the exact error message shown. This provides details on why the dynamic list failed to populate.
- Contact the tool author/support team and report the issue. Provide the error message so they can investigate the root cause.
| promptflow/docs/how-to-guides/develop-a-tool/create-dynamic-list-tool-input.md/0 | {
"file_path": "promptflow/docs/how-to-guides/develop-a-tool/create-dynamic-list-tool-input.md",
"repo_id": "promptflow",
"token_count": 3003
} | 3 |
# Set global configs
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
Promptflow supports setting global configs to avoid passing the same parameters to each command. The global configs are stored in a YAML file, which is located at `~/.promptflow/pf.yaml` by default.
The config file is shared between the promptflow extension and the SDK/CLI. The promptflow extension controls each config through its UI, so the following sections show how to set global configs using the promptflow CLI.
## Set config
```shell
pf config set <config_name>=<config_value>
```
For example:
```shell
pf config set connection.provider="azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
```
## Show config
The following command gets all configs and shows them in JSON format:
```shell
pf config show
```
After running the above `config set` command, the `show` command returns the following result:
```json
{
"connection": {
"provider": "azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
}
}
```
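A dotted key such as `connection.provider` maps to a nested mapping in `pf.yaml`, which is why `pf config show` prints nested JSON. The snippet below is a minimal sketch of that dotted-key-to-nested-dict mapping (illustrative only, not promptflow's implementation):

```python
import json
from typing import Any, Dict


def set_config(config: Dict[str, Any], dotted_key: str, value: str) -> None:
    """Store 'a.b.c=value' style settings as nested dictionaries."""
    *parents, leaf = dotted_key.split(".")
    node = config
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value


config: Dict[str, Any] = {}
set_config(config, "connection.provider", "local")
print(json.dumps(config, indent=2))
```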
## Supported configs
### connection.provider
The connection provider, which defaults to `local`. There are three possible provider values.
#### local
Set connection provider to local with `connection.provider=local`.
Connections will be saved locally. `PFClient`(or `pf connection` commands) will [manage local connections](manage-connections.md). Consequently, the flow will be executed using these local connections.
#### Full Azure Machine Learning workspace resource ID
Set connection provider to a specific workspace with:
```
connection.provider=azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>
```
When getting or listing connections, `PFClient` (or `pf connection` commands) will return workspace connections, and flows will be executed using these workspace connections.
_Secrets for workspace connections will not be shown by those commands, which means you may see an empty dict `{}` for custom connections._
:::{note}
The `create`, `update`, and `delete` commands are not supported for workspace connections; please manage them in the workspace portal, the Azure ML CLI, or the Azure ML SDK.
:::
#### azureml
In addition to the full resource ID, you can designate the connection provider as "azureml" with `connection.provider=azureml`. In this case,
promptflow will attempt to retrieve the workspace configuration by searching for `.azureml/config.json`, starting from the current directory and moving progressively through its parent folders. This makes it possible to set the workspace configuration for different flows by placing the config file in the project folder.
The expected format of the config file is as follows:
```json
{
"workspace_name": "<your-workspace-name>",
"resource_group": "<your-resource-group>",
"subscription_id": "<your-subscription-id>"
}
```
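The upward search for `.azureml/config.json` can be sketched as follows. This is an illustrative approximation of the lookup behavior described above, not promptflow's actual code:

```python
import json
from pathlib import Path
from typing import Optional


def find_azureml_config(start: Path) -> Optional[dict]:
    """Walk from `start` up to the filesystem root, returning the first .azureml/config.json found."""
    for folder in [start, *start.parents]:
        candidate = folder / ".azureml" / "config.json"
        if candidate.is_file():
            return json.loads(candidate.read_text())
    return None
```

Placing a config file in a project folder therefore scopes the workspace configuration to every flow beneath that folder.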
> 💡 Tips
> In addition to the CLI approach, we also support setting this connection provider through the VS Code extension UI. [Click here to learn more](../cloud/azureai/consume-connections-from-azure-ai.md).
"file_path": "promptflow/docs/how-to-guides/set-global-configs.md",
"repo_id": "promptflow",
"token_count": 881
} | 4 |
# LLM
## Introduction
The prompt flow LLM tool enables you to leverage widely used large language models such as [OpenAI](https://platform.openai.com/) or [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview) for natural language processing.
Prompt flow provides a few different LLM APIs:
- **[Completion](https://platform.openai.com/docs/api-reference/completions)**: OpenAI's completion models generate text based on provided prompts.
- **[Chat](https://platform.openai.com/docs/api-reference/chat)**: OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
> [!NOTE]
> The `embedding` option has been removed from the LLM tool API setting. To use the embedding API, use the [Embedding tool](https://github.com/microsoft/promptflow/blob/main/docs/reference/tools-reference/embedding_tool.md).
## Prerequisite
Create OpenAI resources:
- **OpenAI**
    Sign up for an account on the [OpenAI website](https://openai.com/)
    Log in and [find your personal API key](https://platform.openai.com/account/api-keys)
- **Azure OpenAI (AOAI)**
    Create Azure OpenAI resources following these [instructions](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
## Connections
Set up connections to the provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
### Text Completion
| Name | Type | Description | Required |
|------------------------|-------------|-----------------------------------------------------------------------------------------|----------|
| prompt | string | text prompt that the language model will complete | Yes |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the completion. Default is 16. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| suffix | string | text appended to the end of the completion | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| logprobs | integer | the number of log probabilities to generate. Default is null. | No |
| echo | boolean | value that indicates whether to echo back the prompt in the response. Default is false. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
| best\_of | integer | the number of best completions to generate. Default is 1. | No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
### Chat
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| prompt                 | string      | text prompt that the language model will respond to                                            | Yes      |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is inf. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0.| No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
| function\_call | object | value that controls which function is called by the model. Default is null. | No |
| functions | list | a list of functions the model may generate JSON inputs for. Default is null. | No |
| response_format | object | an object specifying the format that the model must output. Default is null. | No |
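Conceptually, the chat parameters above are assembled into a single request payload, with unset optional fields omitted. The helper below is a hedged sketch of that assembly (parameter names follow the table; it is not promptflow's actual implementation):

```python
from typing import Any, Dict, List, Optional


def build_chat_payload(messages: List[Dict[str, str]], model: str,
                       temperature: float = 1.0, top_p: float = 1.0,
                       max_tokens: Optional[int] = None,
                       stop: Optional[List[str]] = None) -> Dict[str, Any]:
    """Assemble a chat request body, omitting optional fields left unset."""
    payload: Dict[str, Any] = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
    }
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if stop is not None:
        payload["stop"] = stop
    return payload


payload = build_chat_payload(
    [{"role": "user", "content": "What is prompt flow?"}],
    model="gpt-35-turbo",
    temperature=0.2,
)
```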
## Outputs
| API | Return Type | Description |
|------------|-------------|------------------------------------------|
| Completion | string | The text of one predicted completion |
| Chat | string | The text of one response of conversation |
## How to use the LLM tool
1. Set up and select the connections to OpenAI resources.
2. Configure the LLM model API and its parameters.
3. Prepare the prompt following this [guidance](./prompt-tool.md#how-to-write-prompt).
| promptflow/docs/reference/tools-reference/llm-tool.md/0 | {
"file_path": "promptflow/docs/reference/tools-reference/llm-tool.md",
"repo_id": "promptflow",
"token_count": 2760
} | 5 |
<jupyter_start><jupyter_text>Connection ManagementPrompt flow provides various prebuilt connections, including Azure Open AI, Open AI, Azure Content Safety, etc. Prebuilt connections enable seamless integration with these resources within the built-in tools. Additionally, users have the flexibility to create custom connection types using key-value pairs, empowering them to tailor the connections to their specific requirements, particularly in Python tools.Find more details about connection types [here](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-connections?view=azureml-api-2). Create different types of connectionsWe will use an Azure Open AI connection and a custom connection as examples to show how to create connections with the promptflow SDK. Install dependent packages<jupyter_code>%pip install -r ../requirements.txt<jupyter_output><empty_output><jupyter_text>Initialize a pf client<jupyter_code>from promptflow import PFClient
# client can help manage your runs and connections.
client = PFClient()<jupyter_output><empty_output><jupyter_text>Create an Azure Open AI connectionPrepare your Azure Open AI resource follow this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.<jupyter_code>from promptflow.entities import AzureOpenAIConnection
# Initialize an AzureOpenAIConnection object
connection = AzureOpenAIConnection(
name="my_azure_open_ai_connection",
api_key="<your-api-key>",
api_base="<your-endpoint>",
)
# Create the connection, note that api_key will be scrubbed in the returned result
result = client.connections.create_or_update(connection)
print(result)<jupyter_output><empty_output><jupyter_text>Create a custom connection<jupyter_code>from promptflow.entities import CustomConnection
# Initialize a custom connection object
connection = CustomConnection(
name="my_custom_connection",
# Secrets is a required field for custom connection
secrets={"my_key": "<your-api-key>"},
configs={"endpoint": "<your-endpoint>", "other_config": "other_value"},
)
# Create the connection, note that all secret values will be scrubbed in the returned result
result = client.connections.create_or_update(connection)
print(result)<jupyter_output><empty_output><jupyter_text>List all connections<jupyter_code>connections = client.connections.list()
for connection in connections:
print(connection)<jupyter_output><empty_output><jupyter_text>Get a connection by name<jupyter_code>connection = client.connections.get(name="my_custom_connection")
print(connection)<jupyter_output><empty_output><jupyter_text>Delete a connection by name Update a connection Update an Azure Open AI connection<jupyter_code>connection = client.connections.get(name="my_azure_open_ai_connection")
connection.api_base = "new_value"
connection.api_key = (
"<original-key>" # secrets are required again when updating connection using sdk
)
result = client.connections.create_or_update(connection)
print(connection)<jupyter_output><empty_output><jupyter_text>Update a custom connection<jupyter_code>connection = client.connections.get(name="my_custom_connection")
connection.configs["other_config"] = "new_value"
connection.secrets[
"my_key"
] = "new_secret_value" # ValueError: Connection 'my_custom_connection' secrets ['my_key'] must be filled again when updating it.
result = client.connections.create_or_update(connection)
print(connection)
# client.connections.delete(name="my_custom_connection")<jupyter_output><empty_output> | promptflow/examples/connections/connection.ipynb/0 | {
"file_path": "promptflow/examples/connections/connection.ipynb",
"repo_id": "promptflow",
"token_count": 1048
} | 6 |
system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer only and any accompanying reasoning or explanation separately in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
user:
A club has 10 members, 5 boys and 5 girls. Two of the members are chosen at random. What is the probability that they are both girls?
assistant:
{"Chain of thought": "There are $\\binom{10}{2} = 45$ ways to choose two members of the group, and there are $\\binom{5}{2} = 10$ ways to choose two girls. Therefore, the probability that two members chosen at random are girls is $\\dfrac{10}{45} = \\boxed{\\dfrac{2}{9}}$.", "answer": "2/9"}
user:
Allison, Brian and Noah each have a 6-sided cube. All of the faces on Allison's cube have a 5. The faces on Brian's cube are numbered 1, 2, 3, 4, 5 and 6. Three of the faces on Noah's cube have a 2 and three of the faces have a 6. All three cubes are rolled. What is the probability that Allison's roll is greater than each of Brian's and Noah's? Express your answer as a common fraction.
assistant:
{"Chain of thought": "Since Allison will always roll a 5, we must calculate the probability that both Brian and Noah roll a 4 or lower. The probability of Brian rolling a 4 or lower is $\\frac{4}{6} = \\frac{2}{3}$ since Brian has a standard die. Noah, however, has a $\\frac{3}{6} = \\frac{1}{2}$ probability of rolling a 4 or lower, since the only way he can do so is by rolling one of his 3 sides that have a 2. So, the probability of both of these independent events occurring is $\\frac{2}{3} \\cdot \\frac{1}{2} = \\boxed{\\frac{1}{3}}$.", "answer": "1/3"}
user:
Compute $\\dbinom{50}{2}$.
assistant:
{"Chain of thought": "$\\dbinom{50}{2} = \\dfrac{50!}{2!48!}=\\dfrac{50\\times 49}{2\\times 1}=\\boxed{1225}.$", "answer": "1225"}
user:
The set $S = \\{1, 2, 3, \\ldots , 49, 50\\}$ contains the first $50$ positive integers. After the multiples of 2 and the multiples of 3 are removed, how many integers remain in the set $S$?
assistant:
{"Chain of thought": "The set $S$ contains $25$ multiples of 2 (that is, even numbers). When these are removed, the set $S$ is left with only the odd integers from 1 to 49. At this point, there are $50-25=25$ integers in $S$. We still need to remove the multiples of 3 from $S$.\n\nSince $S$ only contains odd integers after the multiples of 2 are removed, we must remove the odd multiples of 3 between 1 and 49. These are 3, 9, 15, 21, 27, 33, 39, 45, of which there are 8. Therefore, the number of integers remaining in the set $S$ is $25 - 8 = \\boxed{17}$.", "answer": "17"}
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| promptflow/examples/flows/chat/chat-math-variant/chat_variant_2.jinja2/0 | {
"file_path": "promptflow/examples/flows/chat/chat-math-variant/chat_variant_2.jinja2",
"repo_id": "promptflow",
"token_count": 1215
} | 7 |
import os
def log(message: str):
    """Print the message only when the VERBOSE environment variable is set to "true"."""
verbose = os.environ.get("VERBOSE", "false")
if verbose.lower() == "true":
print(message, flush=True)
| promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/logging.py/0 | {
"file_path": "promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/logging.py",
"repo_id": "promptflow",
"token_count": 63
} | 8 |
{
"chat_history": [
{
"inputs": {
"question": "What is the weather like in Boston?"
},
"outputs": {
"answer": "{\"forecast\":[\"sunny\",\"windy\"],\"location\":\"Boston\",\"temperature\":\"72\",\"unit\":\"fahrenheit\"}",
"llm_output": {
"content": null,
"function_call": {
"arguments": "{\n \"location\": \"Boston\"\n}",
"name": "get_current_weather"
},
"role": "assistant"
}
}
}
],
"question": "How about London next week?"
} | promptflow/examples/flows/chat/use_functions_with_chat_models/data.jsonl/0 | {
"file_path": "promptflow/examples/flows/chat/use_functions_with_chat_models/data.jsonl",
"repo_id": "promptflow",
"token_count": 276
} | 9 |
from typing import List
from promptflow import tool, log_metric
@tool
def accuracy_aggregate(processed_results: List[int]):
    """Aggregate per-line results (1 = correct, 0 = incorrect, -1 = exception) and log accuracy metrics."""
num_exception = 0
num_correct = 0
for i in range(len(processed_results)):
if processed_results[i] == -1:
num_exception += 1
elif processed_results[i] == 1:
num_correct += 1
num_total = len(processed_results)
accuracy = round(1.0 * num_correct / num_total, 2)
error_rate = round(1.0 * num_exception / num_total, 2)
log_metric(key="accuracy", value=accuracy)
log_metric(key="error_rate", value=error_rate)
return {
"num_total": num_total,
"num_correct": num_correct,
"num_exception": num_exception,
"accuracy": accuracy,
"error_rate": error_rate
}
if __name__ == "__main__":
numbers = [1, 1, 1, 1, 0, -1, -1]
accuracy = accuracy_aggregate(numbers)
print("The accuracy is", accuracy)
| promptflow/examples/flows/evaluation/eval-chat-math/aggregate.py/0 | {
"file_path": "promptflow/examples/flows/evaluation/eval-chat-math/aggregate.py",
"repo_id": "promptflow",
"token_count": 407
} | 10 |