text
stringlengths
8
1.72M
id
stringlengths
22
143
metadata
dict
__index_level_0__
int64
0
104
# Basic flow with custom connection A basic standard flow that using custom python tool calls Azure OpenAI with connection info stored in custom connection. Tools used in this flow: - `prompt` tool - custom `python` Tool Connections used in this flow: - None ## Prerequisites Install promptflow sdk and other dependencies: ```bash pip install -r requirements.txt ``` ## Setup connection Prepare your Azure Open AI resource follow this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one. Create connection if you haven't done that. ```bash # Override keys with --set to avoid yaml file changes pf connection create -f custom.yml --set secrets.api_key=<your_api_key> configs.api_base=<your_api_base> ``` Ensure you have created `basic_custom_connection` connection. ```bash pf connection show -n basic_custom_connection ``` ## Run flow ### Run with single line input ```bash # test with default input value in flow.dag.yaml pf flow test --flow . # test with flow inputs pf flow test --flow . --inputs text="Hello World!" # test node with inputs pf flow test --flow . --node llm --inputs prompt="Write a simple Hello World! program that displays the greeting message when executed." ``` ### Run with multiple lines data - create run ```bash pf run create --flow . --data ./data.jsonl --column-mapping text='${data.text}' --stream ``` You can also skip providing `column-mapping` if provided data has same column name as the flow. Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI. - list and show run meta ```bash # list created run pf run list -r 3 # get a sample run name name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_with_connection")) | .name'| head -n 1 | tr -d '"') # show specific run detail pf run show --name $name # show output pf run show-details --name $name # visualize run in browser pf run visualize --name $name ``` ### Run with connection override Ensure you have created `open_ai_connection` connection before. ```bash pf connection show -n open_ai_connection ``` Create connection if you haven't done that. ```bash # Override keys with --set to avoid yaml file changes pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> ``` Run flow with newly created connection. ```bash pf run create --flow . --data ./data.jsonl --connections llm.connection=open_ai_connection --column-mapping text='${data.text}' --stream ``` ### Run in cloud with connection override Ensure you have created `open_ai_connection` connection in cloud. Reference [this notebook](../../../tutorials/get-started/quickstart-azure.ipynb) on how to create connections in cloud with UI. Run flow with connection `open_ai_connection`. ```bash # set default workspace az account set -s <your_subscription_id> az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name> pfazure run create --flow . --data ./data.jsonl --connections llm.connection=open_ai_connection --column-mapping text='${data.text}' --stream ```
promptflow/examples/flows/standard/basic-with-connection/README.md/0
{ "file_path": "promptflow/examples/flows/standard/basic-with-connection/README.md", "repo_id": "promptflow", "token_count": 983 }
15
from promptflow import tool import random @tool def content_safety_check(text: str) -> str: # You can use a content safety node to replace this tool. return random.choice([True, False])
promptflow/examples/flows/standard/conditional-flow-for-if-else/content_safety_check.py/0
{ "file_path": "promptflow/examples/flows/standard/conditional-flow-for-if-else/content_safety_check.py", "repo_id": "promptflow", "token_count": 58 }
16
from promptflow import tool @tool def order_search(query: str) -> str: print(f"Your query is {query}.\nSearching for order...") return "Your order is being mailed, please wait patiently."
promptflow/examples/flows/standard/conditional-flow-for-switch/order_search.py/0
{ "file_path": "promptflow/examples/flows/standard/conditional-flow-for-switch/order_search.py", "repo_id": "promptflow", "token_count": 61 }
17
# Describe image flow A flow that take image input, flip it horizontally and uses OpenAI GPT-4V tool to describe it. Tools used in this flow: - `OpenAI GPT-4V` tool - custom `python` Tool Connections used in this flow: - OpenAI Connection ## Prerequisites Install promptflow sdk and other dependencies, create connection for OpenAI GPT-4V tool to use: ```bash pip install -r requirements.txt pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> name=aoai_gpt4v_connection api_version=2023-07-01-preview ``` ## Run flow - Prepare OpenAI connection Go to "Prompt flow" "Connections" tab. Click on "Create" button, and create an "OpenAI" connection. If you do not have an OpenAI account, please refer to [OpenAI](https://platform.openai.com/) for more details. - Test flow/node ```bash # test with default input value in flow.dag.yaml pf flow test --flow . # test with flow inputs pf flow test --flow . --inputs question="How many colors can you see?" input_image="https://developer.microsoft.com/_devcom/images/logo-ms-social.png" ``` - Create run with multiple lines data ```bash # using environment from .env file (loaded in user code: hello.py) pf run create --flow . --data ./data.jsonl --column-mapping question='${data.question}' --stream ``` You can also skip providing `column-mapping` if provided data has same column name as the flow. Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI. - List and show run meta ```bash # list created run pf run list # get a sample run name name=$(pf run list -r 10 | jq '.[] | select(.name | contains("describe_image_variant_0")) | .name'| head -n 1 | tr -d '"') # show specific run detail pf run show --name $name # show output pf run show-details --name $name # visualize run in browser pf run visualize --name $name ```
promptflow/examples/flows/standard/describe-image/README.md/0
{ "file_path": "promptflow/examples/flows/standard/describe-image/README.md", "repo_id": "promptflow", "token_count": 608 }
18
import sys from promptflow.tools.common import render_jinja_template from divider import Divider class PromptLimitException(Exception): def __init__(self, message="", **kwargs): super().__init__(message, **kwargs) self._message = str(message) self._kwargs = kwargs self._inner_exception = kwargs.get("error") self.exc_type, self.exc_value, self.exc_traceback = sys.exc_info() self.exc_type = self.exc_type.__name__ if self.exc_type else type(self._inner_exception) self.exc_msg = "{}, {}: {}".format(message, self.exc_type, self.exc_value) @property def message(self): if self._message: return self._message return self.__class__.__name__ def docstring_prompt(last_code: str = '', code: str = '', module: str = '') -> str: functions, _ = Divider.get_functions_and_pos(code) # Add the first few lines to the function, such as decorator, to make the docstring generated better by llm. first_three_lines = '\n'.join(last_code.split('\n')[-3:]) with open('doc_format.jinja2') as file: return render_jinja_template(prompt=file.read(), module=module.strip('\n'), code=(first_three_lines + code).strip('\n'), functions=functions)
promptflow/examples/flows/standard/gen-docstring/prompt.py/0
{ "file_path": "promptflow/examples/flows/standard/gen-docstring/prompt.py", "repo_id": "promptflow", "token_count": 518 }
19
from typing import List from promptflow import tool @tool def cleansing(entities_str: str) -> List[str]: # Split, remove leading and trailing spaces/tabs/dots parts = entities_str.split(",") cleaned_parts = [part.strip(" \t.\"") for part in parts] entities = [part for part in cleaned_parts if len(part) > 0] return entities
promptflow/examples/flows/standard/named-entity-recognition/cleansing.py/0
{ "file_path": "promptflow/examples/flows/standard/named-entity-recognition/cleansing.py", "repo_id": "promptflow", "token_count": 114 }
20
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json flow: ../../evaluation/eval-classification-accuracy data: data.jsonl run: web_classification_variant_1_20230724_173442_973403 # replace with your run name column_mapping: groundtruth: ${data.answer} prediction: ${run.outputs.category}
promptflow/examples/flows/standard/web-classification/run_evaluation.yml/0
{ "file_path": "promptflow/examples/flows/standard/web-classification/run_evaluation.yml", "repo_id": "promptflow", "token_count": 111 }
21
from promptflow import tool from typing import List, Union, Dict def my_list_func(prefix: str = "", size: int = 10, **kwargs) -> List[Dict[str, Union[str, int, float, list, Dict]]]: """This is a dummy function to generate a list of items. :param prefix: prefix to add to each item. :param size: number of items to generate. :param kwargs: other parameters. :return: a list of items. Each item is a dict with the following keys: - value: for backend use. Required. - display_value: for UI display. Optional. - hyperlink: external link. Optional. - description: information icon tip. Optional. """ import random words = ["apple", "banana", "cherry", "date", "elderberry", "fig", "grape", "honeydew", "kiwi", "lemon"] result = [] for i in range(size): random_word = f"{random.choice(words)}{i}" cur_item = { "value": random_word, "display_value": f"{prefix}_{random_word}", "hyperlink": f'https://www.bing.com/search?q={random_word}', "description": f"this is {i} item", } result.append(cur_item) return result def list_endpoint_names(subscription_id, resource_group_name, workspace_name, prefix: str = "") -> List[Dict[str, str]]: """This is an example to show how to get Azure ML resource in tool input list function. :param subscription_id: Azure subscription id. :param resource_group_name: Azure resource group name. :param workspace_name: Azure ML workspace name. :param prefix: prefix to add to each item. """ from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential credential = DefaultAzureCredential() credential.get_token("https://management.azure.com/.default") ml_client = MLClient( credential=credential, subscription_id=subscription_id, resource_group_name=resource_group_name, workspace_name=workspace_name) result = [] for ep in ml_client.online_endpoints.list(): hyperlink = ( f"https://ml.azure.com/endpoints/realtime/{ep.name}/detail?wsid=/subscriptions/" f"{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft." f"MachineLearningServices/workspaces/{workspace_name}" ) cur_item = { "value": ep.name, "display_value": f"{prefix}_{ep.name}", # external link to jump to the endpoint page. "hyperlink": hyperlink, "description": f"this is endpoint: {ep.name}", } result.append(cur_item) return result @tool def my_tool(input_prefix: str, input_text: list, endpoint_name: str) -> str: return f"Hello {input_prefix} {','.join(input_text)} {endpoint_name}"
promptflow/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_dynamic_list_input.py/0
{ "file_path": "promptflow/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_dynamic_list_input.py", "repo_id": "promptflow", "token_count": 1111 }
22
from my_tool_package.tools.tool_with_cascading_inputs import my_tool def test_my_tool(): result = my_tool(user_type="student", student_id="student_id") assert result == '123'
promptflow/examples/tools/tool-package-quickstart/tests/test_tool_with_cascading_inputs.py/0
{ "file_path": "promptflow/examples/tools/tool-package-quickstart/tests/test_tool_with_cascading_inputs.py", "repo_id": "promptflow", "token_count": 67 }
23
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json name: normal_custom_connection type: custom configs: api_base: test secrets: # must-have api_key: <to-be-replaced>
promptflow/examples/tools/use-cases/custom-strong-type-connection-script-tool-showcase/custom.yml/0
{ "file_path": "promptflow/examples/tools/use-cases/custom-strong-type-connection-script-tool-showcase/custom.yml", "repo_id": "promptflow", "token_count": 79 }
24
--- resources: examples/connections/azure_openai.yml, examples/flows/chat/chat-with-pdf --- # Tutorial: Chat with PDF ## Overview Retrieval Augmented Generation (or RAG) has become a prevalent pattern to build intelligent application with Large Language Models (or LLMs) since it can infuse external knowledge into the model, which is not trained with those up-to-date or proprietary information. The screenshot below shows how new Bing in Edge sidebar can answer questions based on the page content on the left - in this case, a PDF file. ![edge-chat-pdf](../../flows/chat/chat-with-pdf/assets/edge-chat-pdf.png) Note that new Bing will also search web for more information to generate the answer, let's ignore that part for now. In this tutorial we will try to mimic the functionality of retrieval of relevant information from the PDF to generate an answer with GPT. **We will guide you through the following steps:** Creating a console chatbot "chat_with_pdf" that takes a URL to a PDF file as an argument and answers questions based on the PDF's content. Constructing a prompt flow for the chatbot, primarily reusing the code from the first step. Creating a dataset with multiple questions to swiftly test the flow. Evaluating the quality of the answers generated by the chat_with_pdf flow. Incorporating these tests and evaluations into your development cycle, including unit tests and CI/CD. Deploying the flow to Azure App Service and Streamlit to handle real user traffic. ## Prerequisite To go through this tutorial you should: 1. Install dependencies ```bash cd ../../flows/chat/chat-with-pdf/ pip install -r requirements.txt ``` 2. Install and configure [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) follow [Quick Start Guide](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html). (_This extension is optional but highly recommended for flow development and debugging._) 3. Deploy an OpenAI or Azure OpenAI chat model (e.g. gpt4 or gpt-35-turbo-16k), and an Embedding model (text-embedding-ada-002). Follow the [how-to](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal) for an Azure OpenAI example. ## Console chatbot chat_with_pdf A typical RAG process consists of two steps: - **Retrieval**: Retrieve contextual information from external systems (database, search engine, files, etc.) - **Generation**: Construct the prompt with the retrieved context and get response from LLMs. The retrieval step, being more of a search problem, can be quite complex. A widely used, simple yet effective approach is vector search, which requires an index building process. Suppose you have one or more documents containing the contextual information, the index building process would look something like this: 1. **Chunk**: Break down the documents into multiple chunks of text. 2. **Embedding**: Each text chunk is then processed by an embedding model to convert it into an array of floating-point numbers, also known as embedding or vector. 3. **Indexing**: These vectors are then stored in an index or a database that supports vector search. This allows for the retrieval of the top K relevant or similar vectors from the index or database. Once the index is built, the **Retrieval** step simply involves converting the question into an embedding/vector and performing a vector search on the index to obtain the most relevant context for the question. OK now back to the chatbot we want to build, a simplified design could be: <img src="../../flows/chat/chat-with-pdf/assets/chat_with_pdf_simple.png" width="300" alt="chat with pdf simple design"/> A more robust or practical application might consider using an external vector database to store the vectors. For this simple example we're using a [FAISS](https://github.com/facebookresearch/faiss) index, which can be saved as a file. However, a more robust or practical application should consider using an external vector database with advanced management capabilities to store the vectors. With this sample's FAISS index, to prevent repetitive downloading and index building for same PDF file, we will add a check that if the PDF file already exists then we won't download, same for index building. This design is quite effective for question and answering, but it may fall short when it comes to multi-turn conversations with the chatbot. Consider a scenario like this: > $User: what is BERT? > > $Bot: BERT stands for Bidirectional Encoder Representations from Transformers. > > $User: is it better than GPT? > > $Bot: ... You would typically expect the chatbot to be intelligent enough to decipher that the "it" in your second question refers to BERT, and your actual question is "is BERT better than GPT". However, if you present the question "is it better than GPT" to the embedding model and then to the vector index/database, they won't recognize that "it" represents BERT. Consequently, you won't receive the most relevant context from the index. To address this issue, we will enlist the assistance of a Large Language Model (LLM), such as GPT, to "rewrite" the question based on the previous question. The updated design is as follows: <img src="../../flows/chat/chat-with-pdf/assets/chat_with_pdf_with_rewrite.png" width="400" alt="chat with pdf better design"/> A "rewrite_question" step is performed before feeding the question to "find_context" step. ### Configurations Despite being a minimalistic LLM application, there are several aspects we may want to adjust or experiment with in the future. We'll store these in environment variables for ease of access and modification. In the subsequent sections, we'll guide you on how to experiment with these configurations to enhance your chat application's quality. Create a .env file in the second chat_with_pdf directory (same directory with the main.py) and populate it with the following content. We can use the load_dotenv() function (from the python-dotenv package) to import these into our environment variables later on. We'll delve into what these variables represent when discussing how each step of the process is implemented. Rename the .env.example file in chat_with_pdf directory and modify per your need. > If you're using Open AI, your .env should look like: ```ini OPENAI_API_KEY=<open_ai_key> EMBEDDING_MODEL_DEPLOYMENT_NAME=<text-embedding-ada-002> CHAT_MODEL_DEPLOYMENT_NAME=<gpt-4> PROMPT_TOKEN_LIMIT=3000 MAX_COMPLETION_TOKENS=1024 CHUNK_SIZE=256 CHUNK_OVERLAP=64 VERBOSE=False ``` Note: if you have an org id, it can be set via OPENAI_ORG_ID=<your_org_id> > If you're using Azure Open AI, you .env should look like: ```ini OPENAI_API_TYPE=azure OPENAI_API_BASE=<AOAI_endpoint> OPENAI_API_KEY=<AOAI_key> OPENAI_API_VERSION=2023-05-15 EMBEDDING_MODEL_DEPLOYMENT_NAME=<text-embedding-ada-002> CHAT_MODEL_DEPLOYMENT_NAME=<gpt-4> PROMPT_TOKEN_LIMIT=3000 MAX_COMPLETION_TOKENS=1024 CHUNK_SIZE=256 CHUNK_OVERLAP=64 VERBOSE=False ``` Note: CHAT_MODEL_DEPLOYMENT_NAME should point to a chat model like gpt-3.5-turbo or gpt-4, OPENAI_API_KEY should use the deployment key, and EMBEDDING_MODEL_DEPLOYMENT_NAME should point to a text embedding model like text-embedding-ada-002. ### Take a look at the chatbot in action! You should be able to run the console app by: ```shell python chat_with_pdf/main.py https://arxiv.org/pdf/1810.04805.pdf ``` > Note: https://arxiv.org/pdf/1810.04805.pdf is the paper about one of the most famous earlier LLMs: BERT. It looks like below if everything goes fine: ![chatbot console](../../flows/chat/chat-with-pdf/assets/chatbot_console.gif) Now, let's delve into the actual code that implements the chatbot. ### Implementation of each steps #### Download pdf: [download.py](../../flows/chat/chat-with-pdf/chat_with_pdf/download.py) The downloaded PDF file will be stored into a temp folder. #### Build index: [build_index.py](../../flows/chat/chat-with-pdf/chat_with_pdf/build_index.py) Several libraries are used in this step to build index: 1. PyPDF2 for extraction of text from the PDF file. 2. OpenAI python library for generating embeddings. 3. The FAISS library is utilized to build a vector index and save it to a file. It's important to note that an additional dictionary is used to maintain the mapping from the vector index to the actual text snippet. This is because when we later attempt to query for the most relevant context, we need to locate the text snippets, not just the embedding or vector. The environment variables used in this step: - OPENAI_API_* and EMBEDDING_MODEL_DEPLOYMENT_NAME: to access the Azure OpenAI embedding model - CHUNK_SIZE and CHUNK_OVERLAP: controls how to split the PDF file into chunks for embedding #### Rewrite question: [rewrite_question.py](../../flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question.py) This step is to use ChatGPT/GPT4 to rewrite the question to be better fit for finding relevant context from the vector index. The prompt file [rewrite_question.md](../../flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question_prompt.md) should give you a better idea how it works. #### Find context: [find_context.py](../../flows/chat/chat-with-pdf/chat_with_pdf/find_context.py) In this step we load the FAISS index and the dict that were built in the "build index" step. We then turn the question into a vector using the same embedding function in the build index step. There is a small trick in this step to make sure the context will not exceed the token limit of model input prompt ([aoai model max request tokens](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models), OpenAI has similar limit). The output of this step is the final prompt that QnA step will send to the chat model. The PROMPT_TOKEN_LIMIT environment variable decides how big the context is. #### QnA: [qna.py](../../flows/chat/chat-with-pdf/chat_with_pdf/qna.py) Use OpenAI's ChatGPT or GPT4 model and ChatCompletion API to get an answer with the previous conversation history and context from PDF. #### The main loop: [main.py](../../flows/chat/chat-with-pdf/chat_with_pdf/main.py) This is the main entry of the chatbot, which includes a loop that reads questions from user input and subsequently calls the steps mentioned above to provide an answer. To simplify this example, we store the downloaded file and the constructed index as local files. Although there is a mechanism in place to utilize cached files/indices, loading the index still takes a certain amount of time and contributes to a latency that users may notice. Moreover, if the chatbot is hosted on a server, it requires requests for the same PDF file to hit the same server node in order to effectively use the cache. In a real-world scenario, it's likely preferable to store the index in a centralized service or database. There're many such database available, such as [Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/vector-search-overview), [Pinecone](https://www.pinecone.io/), [Qdrant](https://qdrant.tech/), ... ## Prompt flow: when you start considering the quality of your LLM app Having a functioning chatbot is a great start, but it's only the beginning of the journey. Much like any application based on machine learning, the development of a high-quality LLM app usually involves a substantial amount of tuning. This could include experimenting with different prompts such as rewriting questions or QnAs, adjusting various parameters like chunk size, overlap size, or context limit, or even redesigning the workflow (for instance, deciding whether to include the rewrite_question step in our example). Appropriate tooling is essential for facilitating this experimentation and fine-tuning process with LLM apps. This is where the concept of prompt flow comes into play. It enables you to test your LLM apps by: - Running a few examples and manually verifying the results. - Running larger scale tests with a formal approach (using metrics) to assess your app's quality. You may have already learned how to create a prompt flow from scratch. Building a prompt flow from existing code is also straightforward. You can construct a chat flow either by composing the YAML file or using the visual editor of [Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) and create a few wrappers for existing code. Check out below: - [flow.dag.yaml](../../flows/chat/chat-with-pdf/flow.dag.yaml) - [setup_env.py](../../flows/chat/chat-with-pdf/setup_env.py) - [download_tool.py](../../flows/chat/chat-with-pdf/download_tool.py) - [build_index_tool.py](../../flows/chat/chat-with-pdf/build_index_tool.py) - [rewrite_question_tool.py](../../flows/chat/chat-with-pdf/rewrite_question_tool.py) - [find_context_tool.py](../../flows/chat/chat-with-pdf/find_context_tool.py) - [qna_tool.py](../../flows/chat/chat-with-pdf/qna_tool.py) E.g. build_index_tool wrapper: ```python from promptflow import tool from chat_with_pdf.build_index import create_faiss_index @tool def build_index_tool(pdf_path: str) -> str: return create_faiss_index(pdf_path) ``` The setup_env node requires some explanation: you might recall that we use environment variables to manage different configurations, including OpenAI API key in the console chatbot, in prompt flow we use [Connection](https://microsoft.github.io/promptflow/concepts/concept-connections.html) to manage access to external services like OpenAI and support passing configuration object into flow so that you can do experimentation easier. The setup_env node is to write the properties from connection and configuration object into environment variables. This allows the core code of the chatbot remain unchanged. We're using Azure OpenAI in this example, below is the shell command to do so: **CLI** ```bash # create connection needed by flow if pf connection list | grep open_ai_connection; then echo "open_ai_connection already exists" else pf connection create --file ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base> fi ``` If you plan to use OpenAI instead you can use below instead: ```shell # create connection needed by flow if pf connection list | grep open_ai_connection; then echo "open_ai_connection already exists" else pf connection create --file ../../../connections/openai.yml --name open_ai_connection --set api_key=<your_api_key> fi ``` The flow looks like: <img src="../../flows/chat/chat-with-pdf/assets/multi-node-flow-chat-with-pdf.png" width="500" alt="chat with pdf flow, multi-node"/> ## Prompt flow evaluations Now the prompt flow for chat_with_pdf is created, you might have already run/debug flow through the Visual Studio Code extension. It's time to do some testing and evaluation, which starts with: 1. Create a test dataset which contains a few question and pdf_url pairs. 2. Use existing [evaluation flows](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation) or develop new evaluation flows to generate metrics. A small dataset can be found here: [bert-paper-qna.jsonl](../../flows/chat/chat-with-pdf/data/bert-paper-qna.jsonl) which contains around 10 questions for the BERT paper. Evaluations are executed through 'batch runs'. Conceptually, they are a batch run of an evaluation flow which uses the previous run as input. Here is an example of how to create a batch run for the chat_with_pdf flow using the test dataset and manually reviewing the output. This can be done through the Visual Studio Code extension, or CLI or Python SDK. **batch_run.yaml** ```yaml name: chat_with_pdf_default_20230820_162219_559000 flow: . data: ./data/bert-paper-qna.jsonl #run: <Uncomment to select a run input> column_mapping: chat_history: ${data.chat_history} pdf_url: ${data.pdf_url} question: ${data.question} config: EMBEDDING_MODEL_DEPLOYMENT_NAME: text-embedding-ada-002 CHAT_MODEL_DEPLOYMENT_NAME: gpt-35-turbo PROMPT_TOKEN_LIMIT: 3000 MAX_COMPLETION_TOKENS: 1024 VERBOSE: true CHUNK_SIZE: 256 CHUNK_OVERLAP: 64 ``` **CLI** ```bash run_name="chat_with_pdf_"$(openssl rand -hex 12) pf run create --file batch_run.yaml --stream --name $run_name ``` The output will include something like below: ```json { "name": "chat_with_pdf_default_20230820_162219_559000", "created_on": "2023-08-20T16:23:39.608101", "status": "Completed", "display_name": "chat_with_pdf_default_20230820_162219_559000", "description": null, "tags": null, "properties": { "flow_path": "/Users/<user>/Work/azure-promptflow/scratchpad/chat_with_pdf", "output_path": "/Users/<user>/.promptflow/.runs/chat_with_pdf_default_20230820_162219_559000" }, "flow_name": "chat_with_pdf", "data": "/Users/<user>/Work/azure-promptflow/scratchpad/chat_with_pdf/data/bert-paper-qna.jsonl", "output": "/Users/<user>/.promptflow/.runs/chat_with_pdf_default_20230820_162219_559000/ flow_outputs/output.jsonl" } ``` Reference [here](https://aka.ms/pf/column-mapping) for default behavior when `column-mapping` not provided in CLI. And we developed two evaluation flows one for "[groundedness](../../flows/evaluation/eval-groundedness/)" and one for "[perceived intelligence](../../flows/evaluation/eval-perceived-intelligence/)". These two flows are using GPT models (ChatGPT or GPT4) to "grade" the answers. Reading the prompts will give you better idea what are these two metrics: - [groundedness prompt](../../flows/evaluation/eval-groundedness/gpt_groundedness.md) - [perceived intelligence prompt](../../flows/evaluation/eval-perceived-intelligence/gpt_perceived_intelligence.md) The following example creates an evaluation flow. **eval_run.yaml:** ```yaml flow: ../../evaluation/eval-groundedness run: chat_with_pdf_default_20230820_162219_559000 column_mapping: question: ${run.inputs.question} answer: ${run.outputs.answer} context: ${run.outputs.context} ``` > NOTE: the run property in eval_run.yaml is the run name of batch_run.yaml **CLI:** ```bash eval_run_name="eval_groundedness_"$(openssl rand -hex 12) pf run create --file eval_run.yaml --run $run_name --name $eval_run_name ``` > Note: this assumes that you have followed previous steps to create OpenAI/Azure OpenAI connection with name "open_ai_connection". After the run completes you can use below commands to get detail of the runs: ```bash pf run show-details --name $eval_run_name pf run show-metrics --name $eval_run_name pf run visualize --name $eval_run_name ``` ## Experimentation!! We have now explored how to conduct tests and evaluations for prompt flow. Additionally, we have defined two metrics to gauge the performance of our chat_with_pdf flow. By trying out various settings and configurations, running evaluations, and then comparing the metrics, we can determine the optimal configuration for production deployment. There are several aspects we can experiment with, including but not limited to: * Varying prompts for the rewrite_question and/or QnA steps. * Adjusting the chunk size or chunk overlap during index building. * Modifying the context limit. These elements can be managed through the "config" object in the flow inputs. If you wish to experiment with the first point (varying prompts), you can add properties to the config object to control this behavior - simply by directing it to different prompt files. Take a look at how we experiment with #3 in below test: [test_eval in tests/chat_with_pdf_test.py](../../flows/chat/chat-with-pdf/tests/azure_chat_with_pdf_test.py). This test will create 6 runs in total: 1. chat_with_pdf_2k_context 2. chat_with_pdf_3k_context 3. eval_groundedness_chat_with_pdf_2k_context 4. eval_perceived_intelligence_chat_with_pdf_2k_context 5. eval_groundedness_chat_with_pdf_3k_context 6. eval_perceived_intelligence_chat_with_pdf_3k_context As you can probably tell through the names: run #3 and #4 generate metrics for run #1, run #5 and #6 generate metrics for run #2. You can compare these metrics to decide which performs better - 2K context or 3K context. NOTE: [azure_chat_with_pdf_test](../../flows/chat/chat-with-pdf/tests/azure_chat_with_pdf_test.py) does the same tests but using Azure AI as backend, so you can see all the runs in a nice web portal with all the logs and metrics comparison etc. Further reading: - Learn [how to experiment with the chat-with-pdf flow](../../flows/chat/chat-with-pdf/chat-with-pdf.ipynb) - Learn [how to experiment with the chat-with-pdf flow on Azure](../../flows/chat/chat-with-pdf/chat-with-pdf-azure.ipynb) so that you can collaborate with your team. ## Integrate prompt flow into your CI/CD workflow It's also straightforward to integrate these into your CI/CD workflow using either CLI or SDK. In this example we have various unit tests to run tests/evaluations for chat_with_pdf flow. Check the [test](../../flows/chat/chat-with-pdf/tests/) folder. ```bash # run all the tests python -m unittest discover -s tests -p '*_test.py' ``` ## Deployment The flow can be deployed across multiple platforms, such as a local development service, within a Docker container, onto a Kubernetes cluster, etc. The following sections will guide you through the process of deploying the flow to a Docker container, for more details about the other choices, please refer to [flow deploy docs](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/index.html). ### Build a flow as docker format app Use the command below to build a flow as docker format app: ```bash pf flow build --source . --output dist --format docker ``` ### Deploy with Docker #### Build Docker image Like other Dockerfile, you need to build the image first. You can tag the image with any name you want. In this example, we use `promptflow-serve`. Run the command below to build image: ```shell docker build dist -t chat-with-pdf-serve ``` #### Run Docker image Run the docker image will start a service to serve the flow inside the container. ##### Connections If the service involves connections, all related connections will be exported as yaml files and recreated in containers. Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables: ```yaml $schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json type: open_ai name: open_ai_connection module: promptflow.connections api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference ``` You'll need to set up the environment variables in the container to make the connections work. #### Run with `docker run` You can run the docker image directly set via below commands: ```shell # The started service will listen on port 8080.You can map the port to any port on the host machine as you want. docker run -p 8080:8080 -e OPEN_AI_CONNECTION_API_KEY=<secret-value> chat-with-pdf-serve ``` #### Test the endpoint After start the service, you can open the test page at `http://localhost:8080/` and test it: ![test-page](../../flows/chat/chat-with-pdf/assets/chat_with_pdf_test_page.png) or use curl to test it from cli: ```shell curl http://localhost:8080/score --data '{"question":"what is BERT?", "chat_history": [], "pdf_url": "https://arxiv.org/pdf/1810.04805.pdf", "config": {"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002", "CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo", "PROMPT_TOKEN_LIMIT": 3000, "MAX_COMPLETION_TOKENS": 256, "VERBOSE": true, "CHUNK_SIZE": 1024, "CHUNK_OVERLAP": 64}}' -X POST -H "Content-Type: application/json" ``` ![test-endpoint](../../flows/chat/chat-with-pdf/assets/chat_with_pdf_test_endpoint.png)
promptflow/examples/tutorials/e2e-development/chat-with-pdf.md/0
{ "file_path": "promptflow/examples/tutorials/e2e-development/chat-with-pdf.md", "repo_id": "promptflow", "token_count": 6861 }
25
import base64 import json import os import re import streamlit as st from pathlib import Path from streamlit_quill import st_quill # noqa: F401 from bs4 import BeautifulSoup, NavigableString, Tag from promptflow._sdk._utils import print_yellow_warning from promptflow._sdk._serving.flow_invoker import FlowInvoker from promptflow._utils.multimedia_utils import is_multimedia_dict, MIME_PATTERN invoker = None def start(): def clear_chat() -> None: st.session_state.messages = [] def show_image(image, key=None): if not image.startswith("data:image"): st.image(key + ',' + image) else: st.image(image) def json_dumps(value): try: return json.dumps(value) except Exception: return value def is_list_contains_rich_text(rich_text): result = False for item in rich_text: if isinstance(item, list): result |= is_list_contains_rich_text(item) elif isinstance(item, dict): result |= is_dict_contains_rich_text(item) else: if isinstance(item, str) and item.startswith("data:image"): result = True return result def is_dict_contains_rich_text(rich_text): result = False for rich_text_key, rich_text_value in rich_text.items(): if isinstance(rich_text_value, list): result |= is_list_contains_rich_text(rich_text_value) elif isinstance(rich_text_value, dict): result |= is_dict_contains_rich_text(rich_text_value) elif re.match(MIME_PATTERN, rich_text_key) or ( isinstance(rich_text_value, str) and rich_text_value.startswith("data:image")): result = True return result def render_message(role, message_items): def item_render_message(value, key=None): if key and re.match(MIME_PATTERN, key): show_image(value, key) elif isinstance(value, str) and value.startswith("data:image"): show_image(value) else: if key is None: st.markdown(f"`{json_dumps(value)},`") else: st.markdown(f"`{key}: {json_dumps(value)},`") def list_iter_render_message(message_items): if is_list_contains_rich_text(message_items): st.markdown("`[ `") for item in message_items: if isinstance(item, list): list_iter_render_message(item) if isinstance(item, dict): dict_iter_render_message(item) else: item_render_message(item) st.markdown("`], `") else: st.markdown(f"`{json_dumps(message_items)},`") def dict_iter_render_message(message_items): if is_multimedia_dict(message_items): key = list(message_items.keys())[0] value = message_items[key] show_image(value, key) elif is_dict_contains_rich_text(message_items): st.markdown("`{ `") for key, value in message_items.items(): if re.match(MIME_PATTERN, key): show_image(value, key) else: if isinstance(value, list): st.markdown(f"`{key}: `") list_iter_render_message(value) elif isinstance(value, dict): st.markdown(f"`{key}: `") dict_iter_render_message(value) else: item_render_message(value, key) st.markdown("`}, `") else: st.markdown(f"`{json_dumps(message_items)},`") with st.chat_message(role): dict_iter_render_message(message_items) def show_conversation() -> None: if "messages" not in st.session_state: st.session_state.messages = [] st.session_state.history = [] if st.session_state.messages: for role, message_items in st.session_state.messages: render_message(role, message_items) def get_chat_history_from_session(): if "history" in st.session_state: return st.session_state.history return [] def submit(**kwargs) -> None: st.session_state.messages.append(("user", kwargs)) session_state_history = dict() session_state_history.update({"inputs": kwargs}) with container: render_message("user", kwargs) # Force append chat history to kwargs response = run_flow(kwargs) st.session_state.messages.append(("assistant", response)) session_state_history.update({"outputs": response}) st.session_state.history.append(session_state_history) with container: render_message("assistant", response) def run_flow(data: dict) -> dict: global invoker if not invoker: flow = Path(__file__).parent / "flow" dump_path = flow.parent if flow.is_dir(): os.chdir(flow) else: os.chdir(flow.parent) invoker = FlowInvoker(flow, connection_provider="local", dump_to=dump_path) result, result_output = invoker.invoke(data) print_yellow_warning(f"Result: {result_output}") return result def extract_content(node): if isinstance(node, NavigableString): text = node.strip() if text: return [text] elif isinstance(node, Tag): if node.name == 'img': prefix, base64_str = node['src'].split(',', 1) return [{prefix: base64_str}] else: result = [] for child in node.contents: result.extend(extract_content(child)) return result return [] def parse_html_content(html_content): soup = BeautifulSoup(html_content, 'html.parser') result = [] for p in soup.find_all('p'): result.extend(extract_content(p)) return result def parse_image_content(image_content, image_type): if image_content is not None: file_contents = image_content.read() image_content = base64.b64encode(file_contents).decode('utf-8') prefix = f"data:{image_type};base64" return {prefix: image_content} st.title("web-classification APP") st.chat_message("assistant").write("Hello, please input following flow inputs.") container = st.container() with container: show_conversation() with st.form(key='input_form', clear_on_submit=True): settings_path = os.path.join(os.path.dirname(__file__), "settings.json") if os.path.exists(settings_path): with open(settings_path, "r") as file: json_data = json.load(file) environment_variables = list(json_data.keys()) for environment_variable in environment_variables: secret_input = st.text_input( label=environment_variable, type="password", placeholder=f"Please input {environment_variable} here. If you input before, you can leave it " f"blank.") if secret_input != "": os.environ[environment_variable] = secret_input url = st.text_input(label='url', placeholder='https://play.google.com/store/apps/details?id=com.twitter.android') cols = st.columns(7) submit_bt = cols[0].form_submit_button(label='Submit') clear_bt = cols[1].form_submit_button(label='Clear') if submit_bt: submit(url=url) if clear_bt: clear_chat() if __name__ == "__main__": start()
promptflow/examples/tutorials/flow-deploy/distribute-flow-as-executable-app/main.py/0
{ "file_path": "promptflow/examples/tutorials/flow-deploy/distribute-flow-as-executable-app/main.py", "repo_id": "promptflow", "token_count": 4079 }
26
import argparse from pathlib import Path from platform import system from utils import print_blue, run_command def setup_promptflow(extra_deps: list, command_args: dict) -> None: print_blue("- Setting up the promptflow SDK ") print_blue("- Installing promptflow Python SDK from local directory") package_location = f"{Path('./src/promptflow/').absolute()}" if extra_deps: print_blue(f"- Installing with extra dependencies: {extra_deps}") extra_deps = ",".join(extra_deps) package_location = f"{package_location}[{extra_deps}]" cmds = ["pip", "install", "-e", package_location] print_blue(f"Running {cmds}") run_command(commands=cmds, **command_args) run_command( commands=["pip", "install", "-r", str(Path("./src/promptflow/dev_requirements.txt").absolute())], **command_args, ) if __name__ == "__main__": epilog = """ Sample Usages: python scripts/building/dev_setup.py python scripts/building/dev_setup.py --promptflow-extra-deps azure """ parser = argparse.ArgumentParser( description="Welcome to promptflow dev setup!", epilog=epilog, ) parser.add_argument( "--promptflow-extra-deps", required=False, nargs="+", type=str, help="extra dependencies for promptflow" ) parser.add_argument("-v", "--verbose", action="store_true", required=False, help="turn on verbose output") args = parser.parse_args() command_args = {"shell": system() == "Windows", "stream_stdout": args.verbose} setup_promptflow(extra_deps=args.promptflow_extra_deps, command_args=command_args) run_command(commands=["pre-commit", "install"], **command_args)
promptflow/scripts/building/dev_setup.py/0
{ "file_path": "promptflow/scripts/building/dev_setup.py", "repo_id": "promptflow", "token_count": 646 }
27
<# .DESCRIPTION Script to build doc site .EXAMPLE PS> ./doc_generation.ps1 -SkipInstall # skip pip install PS> ./doc_generation.ps1 -BuildLinkCheck -WarningAsError:$true -SkipInstall #> [CmdletBinding()] param( [switch]$SkipInstall, [switch]$WarningAsError = $false, [switch]$BuildLinkCheck = $false, [switch]$WithReferenceDoc = $false ) [string] $ScriptPath = $PSCommandPath | Split-Path -Parent [string] $RepoRootPath = $ScriptPath | Split-Path -Parent | Split-Path -Parent [string] $DocPath = [System.IO.Path]::Combine($RepoRootPath, "docs") [string] $TempDocPath = New-TemporaryFile | % { Remove-Item $_; New-Item -ItemType Directory -Path $_ } [string] $PkgSrcPath = [System.IO.Path]::Combine($RepoRootPath, "src\promptflow\promptflow") [string] $OutPath = [System.IO.Path]::Combine($ScriptPath, "_build") [string] $SphinxApiDoc = [System.IO.Path]::Combine($DocPath, "sphinx_apidoc.log") [string] $SphinxBuildDoc = [System.IO.Path]::Combine($DocPath, "sphinx_build.log") [string] $WarningErrorPattern = "WARNING:|ERROR:|CRITICAL:" $apidocWarningsAndErrors = $null $buildWarningsAndErrors = $null if (-not $SkipInstall){ # Prepare doc generation packages pip install pydata-sphinx-theme==0.11.0 pip install sphinx==5.1 pip install sphinx-copybutton==0.5.0 pip install sphinx_design==0.3.0 pip install sphinx-sitemap==2.2.0 pip install sphinx-togglebutton==0.3.2 pip install sphinxext-rediraffe==0.2.7 pip install sphinxcontrib-mermaid==0.8.1 pip install ipython-genutils==0.2.0 pip install myst-nb==0.17.1 pip install numpydoc==1.5.0 pip install myst-parser==0.18.1 pip install matplotlib==3.4.3 pip install jinja2==3.0.1 Write-Host "===============Finished install requirements===============" } function ProcessFiles { # Exclude files not mean to be in doc site $exclude_files = "README.md", "dev" foreach ($f in $exclude_files) { $full_path = [System.IO.Path]::Combine($TempDocPath, $f) Remove-Item -Path $full_path -Recurse } } Write-Host "===============PreProcess Files===============" Write-Host "Copy doc to: $TempDocPath" ROBOCOPY $DocPath $TempDocPath /S /NFL /NDL /XD "*.git" [System.IO.Path]::Combine($DocPath, "_scripts\_build") ProcessFiles if($WithReferenceDoc){ $RefDocRelativePath = "reference\python-library-reference" $RefDocPath = [System.IO.Path]::Combine($TempDocPath, $RefDocRelativePath) if(!(Test-Path $RefDocPath)){ throw "Reference doc path not found. Please make sure '$RefDocRelativePath' is under '$DocPath'" } Remove-Item $RefDocPath -Recurse -Force Write-Host "===============Build Promptflow Reference Doc===============" sphinx-apidoc --module-first --no-headings --no-toc --implicit-namespaces "$PkgSrcPath" -o "$RefDocPath" | Tee-Object -FilePath $SphinxApiDoc $apidocWarningsAndErrors = Select-String -Path $SphinxApiDoc -Pattern $WarningErrorPattern Write-Host "=============== Overwrite promptflow.connections.rst ===============" # We are doing this overwrite because the connection entities are also defined in the promptflow.entities module # and it will raise duplicate object description error if we don't do so when we run sphinx-build later. $ConnectionRst = [System.IO.Path]::Combine($RepoRootPath, "scripts\docs\promptflow.connections.rst") $AutoGenConnectionRst = [System.IO.Path]::Combine($RefDocPath, "promptflow.connections.rst") Copy-Item -Path $ConnectionRst -Destination $AutoGenConnectionRst -Force } Write-Host "===============Build Documentation with internal=${Internal}===============" $BuildParams = [System.Collections.ArrayList]::new() if($WarningAsError){ $BuildParams.Add("-W") $BuildParams.Add("--keep-going") } if($BuildLinkCheck){ $BuildParams.Add("-blinkcheck") } sphinx-build $TempDocPath $OutPath -c $ScriptPath $BuildParams | Tee-Object -FilePath $SphinxBuildDoc $buildWarningsAndErrors = Select-String -Path $SphinxBuildDoc -Pattern $WarningErrorPattern Write-Host "Clean path: $TempDocPath" Remove-Item $TempDocPath -Recurse -Confirm:$False -Force if ($apidocWarningsAndErrors) { Write-Host "=============== API doc warnings and errors ===============" foreach ($line in $apidocWarningsAndErrors) { Write-Host $line -ForegroundColor Red } } if ($buildWarningsAndErrors) { Write-Host "=============== Build warnings and errors ===============" foreach ($line in $buildWarningsAndErrors) { Write-Host $line -ForegroundColor Red } }
promptflow/scripts/docs/doc_generation.ps1/0
{ "file_path": "promptflow/scripts/docs/doc_generation.ps1", "repo_id": "promptflow", "token_count": 1687 }
28
@echo off setlocal set MAIN_EXE=%~dp0.\pfcli.exe "%MAIN_EXE%" pfs %*
promptflow/scripts/installer/windows/scripts/pfs.bat/0
{ "file_path": "promptflow/scripts/installer/windows/scripts/pfs.bat", "repo_id": "promptflow", "token_count": 39 }
29
from pathlib import Path from .readme_step import ReadmeStepsManage, ReadmeSteps from ghactions_driver.telemetry_obj import Telemetry def write_readme_workflow(readme_path, output_telemetry=Telemetry()): relative_path = Path(readme_path).relative_to( Path(ReadmeStepsManage.git_base_dir()) ) workflow_path = relative_path.parent.as_posix() relative_name_path = Path(readme_path).relative_to( Path(ReadmeStepsManage.git_base_dir()) / "examples" ) workflow_name = ( relative_name_path.as_posix() .replace(".md", "") .replace("/README", "") .replace("/", "_") .replace("-", "_") ) workflow_name = "samples_" + workflow_name ReadmeSteps.setup_target( working_dir=workflow_path, template="basic_workflow_replace_config_json.yml.jinja2" if "e2e_development_chat_with_pdf" in workflow_name else "basic_workflow_replace.yml.jinja2", target=f"{workflow_name}.yml", readme_name=relative_path.as_posix(), ) ReadmeSteps.install_dependencies() ReadmeSteps.install_dev_dependencies() if ( workflow_name.endswith("flows_chat_chat_with_image") or workflow_name.endswith("flows_standard_describe_image") ): ReadmeSteps.create_env_gpt4() ReadmeSteps.env_create_aoai("aoai_gpt4v_connection") else: ReadmeSteps.create_env() if workflow_name.endswith("pdf"): ReadmeSteps.env_create_aoai("chat_with_pdf_custom_connection") ReadmeSteps.create_run_yaml() if ( workflow_name.endswith("flows_standard_basic_with_builtin_llm") or workflow_name.endswith("flows_standard_flow_with_symlinks") or workflow_name.endswith("flows_standard_flow_with_additional_includes") or workflow_name.endswith("flows_standard_basic_with_connection") ): ReadmeSteps.yml_create_aoai("examples/connections/azure_openai.yml") ReadmeSteps.azure_login() if ( workflow_name.endswith("flows_chat_chat_with_image") or workflow_name.endswith("flows_standard_describe_image") ): ReadmeSteps.extract_steps_and_run_gpt_four() else: ReadmeSteps.extract_steps_and_run() ReadmeStepsManage.write_workflow( workflow_name, "samples_readme_ci", output_telemetry ) ReadmeSteps.cleanup()
promptflow/scripts/readme/ghactions_driver/readme_workflow_generate.py/0
{ "file_path": "promptflow/scripts/readme/ghactions_driver/readme_workflow_generate.py", "repo_id": "promptflow", "token_count": 1043 }
30
class SecretNameAlreadyExistsException(Exception): pass class SecretNameInvalidException(Exception): pass class SecretNoSetPermissionException(Exception): pass
promptflow/scripts/tool/exceptions/secret_exceptions.py/0
{ "file_path": "promptflow/scripts/tool/exceptions/secret_exceptions.py", "repo_id": "promptflow", "token_count": 48 }
31
storage: storage_account: promptflowgall5817910653 deployment: subscription_id: 96aede12-2f73-41cb-b983-6d11a904839b resource_group: promptflow workspace_name: promptflow-gallery endpoint_name: tool-test638236049123389546 deployment_name: blue mt_service_endpoint: https://eastus2euap.api.azureml.ms
promptflow/scripts/tool/utils/configs/promptflow-gallery-tool-test.yaml/0
{ "file_path": "promptflow/scripts/tool/utils/configs/promptflow-gallery-tool-test.yaml", "repo_id": "promptflow", "token_count": 117 }
32
from enum import Enum from typing import Dict, List, Union import json import requests from promptflow import tool, ToolProvider from promptflow.connections import AzureContentSafetyConnection from promptflow.tools.exception import AzureContentSafetyInputValueError, AzureContentSafetySystemError class TextCategorySensitivity(str, Enum): DISABLE = "disable" LOW_SENSITIVITY = "low_sensitivity" MEDIUM_SENSITIVITY = "medium_sensitivity" HIGH_SENSITIVITY = "high_sensitivity" class AzureContentSafety(ToolProvider): """ Doc reference : https://review.learn.microsoft.com/en-us/azure/cognitive-services/content-safety/quickstart?branch=pr-en-us-233724&pivots=programming-language-rest """ def __init__(self, connection: AzureContentSafetyConnection): self.connection = connection super(AzureContentSafety, self).__init__() @tool def analyze_text( self, text: str, hate_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, sexual_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, self_harm_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, violence_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, ): content_safety = ContentSafety(self.connection.endpoint, self.connection.api_key, self.connection.api_version) media_type = MediaType.Text blocklists = [] detection_result = content_safety.detect(media_type, text, blocklists) # Set the reject thresholds for each category reject_thresholds = { Category.Hate: switch_category_threshold(hate_category), Category.SelfHarm: switch_category_threshold(self_harm_category), Category.Sexual: switch_category_threshold(sexual_category), Category.Violence: switch_category_threshold(violence_category), } # Make a decision based on the detection result and reject thresholds if self.connection.api_version == "2023-10-01": decision_result = content_safety.make_decision_1001(detection_result, reject_thresholds) else: decision_result = content_safety.make_decision(detection_result, reject_thresholds) return convert_decision_to_json(decision_result) @tool def analyze_text( connection: AzureContentSafetyConnection, text: str, hate_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, sexual_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, self_harm_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, violence_category: TextCategorySensitivity = TextCategorySensitivity.MEDIUM_SENSITIVITY, ): return AzureContentSafety(connection).analyze_text( text=text, hate_category=hate_category, sexual_category=sexual_category, self_harm_category=self_harm_category, violence_category=violence_category, ) def switch_category_threshold(sensitivity: TextCategorySensitivity) -> int: switcher = { TextCategorySensitivity.DISABLE: -1, TextCategorySensitivity.LOW_SENSITIVITY: 6, TextCategorySensitivity.MEDIUM_SENSITIVITY: 4, TextCategorySensitivity.HIGH_SENSITIVITY: 2, } return switcher.get(sensitivity, f"Non-supported sensitivity: {sensitivity}") class MediaType(Enum): Text = 1 Image = 2 class Category(Enum): Hate = 1 SelfHarm = 2 Sexual = 3 Violence = 4 class Action(Enum): Accept = "Accept" Reject = "Reject" class Decision(object): def __init__(self, suggested_action: Action, action_by_category: Dict[Category, Action]) -> None: """ Represents the decision made by the content moderation system. Args: - suggested_action (Action): The suggested action to take. - action_by_category (dict[Category, Action]): The action to take for each category. """ self.suggested_action = suggested_action self.action_by_category = action_by_category def convert_decision_to_json(decision: Decision): result_json = {} result_json["suggested_action"] = decision.suggested_action.value category_json = {} for key, value in decision.action_by_category.items(): category_json[key.name] = value.value result_json["action_by_category"] = category_json return result_json class ContentSafety(object): def __init__(self, endpoint: str, subscription_key: str, api_version: str) -> None: """ Creates a new ContentSafety instance. Args: - endpoint (str): The endpoint URL for the Content Safety API. - subscription_key (str): The subscription key for the Content Safety API. - api_version (str): The version of the Content Safety API to use. """ self.endpoint = endpoint self.subscription_key = subscription_key self.api_version = api_version def build_url(self, media_type: MediaType) -> str: """ Builds the URL for the Content Safety API based on the media type. Args: - media_type (MediaType): The type of media to analyze. Returns: - str: The URL for the Content Safety API. """ if media_type == MediaType.Text: return f"{self.endpoint}/contentsafety/text:analyze?api-version={self.api_version}" elif media_type == MediaType.Image: return f"{self.endpoint}/contentsafety/image:analyze?api-version={self.api_version}" else: error_message = f"Invalid Media Type {media_type}" raise AzureContentSafetyInputValueError(message=error_message) def build_headers(self) -> Dict[str, str]: """ Builds the headers for the Content Safety API request. Returns: - dict[str, str]: The headers for the Content Safety API request. """ return { "Ocp-Apim-Subscription-Key": self.subscription_key, "Content-Type": "application/json", "ms-azure-ai-sender": "prompt_flow" } def build_request_body( self, media_type: MediaType, content: str, blocklists: List[str], ) -> dict: """ Builds the request body for the Content Safety API request. Args: - media_type (MediaType): The type of media to analyze. - content (str): The content to analyze. - blocklists (list[str]): The blocklists to use for text analysis. Returns: - dict: The request body for the Content Safety API request. """ if media_type == MediaType.Text: return { "text": content, "blocklistNames": blocklists, } elif media_type == MediaType.Image: return {"image": {"content": content}} else: error_message = f"Invalid Media Type {media_type}" raise AzureContentSafetyInputValueError(message=error_message) def detect( self, media_type: MediaType, content: str, blocklists: List[str] = [], ) -> dict: url = self.build_url(media_type) headers = self.build_headers() request_body = self.build_request_body(media_type, content, blocklists) payload = json.dumps(request_body) response = requests.post(url, headers=headers, data=payload) print("status code: " + response.status_code.__str__()) print("response txt: " + response.text) res_content = response.json() if response.status_code != 200: error_message = f"Error in detecting content: {res_content['error']['message']}" raise AzureContentSafetySystemError(message=error_message) return res_content def get_detect_result_by_category(self, category: Category, detect_result: dict) -> Union[int, None]: if category == Category.Hate: return detect_result.get("hateResult", None) elif category == Category.SelfHarm: return detect_result.get("selfHarmResult", None) elif category == Category.Sexual: return detect_result.get("sexualResult", None) elif category == Category.Violence: return detect_result.get("violenceResult", None) else: error_message = f"Invalid Category {category}" raise AzureContentSafetyInputValueError(message=error_message) def get_detect_result_by_category_1001(self, category: Category, detect_result: dict) -> Union[int, None]: category_res = detect_result.get("categoriesAnalysis", None) for res in category_res: if category.name == res.get("category", None): return res error_message = f"Invalid Category {category}" raise AzureContentSafetyInputValueError(message=error_message) def make_decision( self, detection_result: dict, reject_thresholds: Dict[Category, int], ) -> Decision: action_result = {} final_action = Action.Accept for category, threshold in reject_thresholds.items(): if threshold not in (-1, 0, 2, 4, 6): error_message = "RejectThreshold can only be in (-1, 0, 2, 4, 6)" raise AzureContentSafetyInputValueError(message=error_message) cate_detect_res = self.get_detect_result_by_category(category, detection_result) if cate_detect_res is None or "severity" not in cate_detect_res: error_message = f"Can not find detection result for {category}" raise AzureContentSafetySystemError(message=error_message) severity = cate_detect_res["severity"] action = Action.Reject if threshold != -1 and severity >= threshold else Action.Accept action_result[category] = action if action.value > final_action.value: final_action = action if ( "blocklistsMatchResults" in detection_result and detection_result["blocklistsMatchResults"] and len(detection_result["blocklistsMatchResults"]) > 0 ): final_action = Action.Reject print(f"Action result: {action_result}") return Decision(final_action, action_result) def make_decision_1001( self, detection_result: dict, reject_thresholds: Dict[Category, int], ) -> Decision: action_result = {} final_action = Action.Accept for category, threshold in reject_thresholds.items(): if threshold not in (-1, 0, 2, 4, 6): error_message = "RejectThreshold can only be in (-1, 0, 2, 4, 6)" raise AzureContentSafetyInputValueError(message=error_message) cate_detect_res = self.get_detect_result_by_category_1001( category, detection_result ) if cate_detect_res is None or "severity" not in cate_detect_res: error_message = f"Can not find detection result for {category}" raise AzureContentSafetySystemError(message=error_message) severity = cate_detect_res["severity"] action = ( Action.Reject if threshold != -1 and severity >= threshold else Action.Accept ) action_result[category] = action if action.value > final_action.value: final_action = action if ( "blocklistsMatch" in detection_result and detection_result["blocklistsMatch"] and len(detection_result["blocklistsMatch"]) > 0 ): final_action = Action.Reject print(f"Action result: {action_result}") return Decision(final_action, action_result)
promptflow/src/promptflow-tools/promptflow/tools/azure_content_safety.py/0
{ "file_path": "promptflow/src/promptflow-tools/promptflow/tools/azure_content_safety.py", "repo_id": "promptflow", "token_count": 4937 }
33
import pytest from promptflow.tools.embedding import embedding from promptflow.tools.exception import InvalidConnectionType @pytest.mark.usefixtures("use_secrets_config_file") class TestEmbedding: def test_embedding_conn_aoai(self, azure_open_ai_connection): result = embedding( connection=azure_open_ai_connection, input="The food was delicious and the waiter", deployment_name="text-embedding-ada-002") assert len(result) == 1536 @pytest.mark.skip_if_no_api_key("open_ai_connection") def test_embedding_conn_oai(self, open_ai_connection): result = embedding( connection=open_ai_connection, input="The food was delicious and the waiter", model="text-embedding-ada-002") assert len(result) == 1536 def test_embedding_invalid_connection_type(self, serp_connection): error_codes = "UserError/ToolValidationError/InvalidConnectionType" with pytest.raises(InvalidConnectionType) as exc_info: embedding(connection=serp_connection, input="hello", deployment_name="text-embedding-ada-002") assert exc_info.value.error_codes == error_codes.split("/")
promptflow/src/promptflow-tools/tests/test_embedding.py/0
{ "file_path": "promptflow/src/promptflow-tools/tests/test_embedding.py", "repo_id": "promptflow", "token_count": 465 }
34
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- __path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore from promptflow._core.metric_logger import log_metric # flake8: noqa from promptflow._core.tool import ToolProvider, tool from promptflow._core.tracer import trace # control plane sdk functions from promptflow._sdk._load_functions import load_flow, load_run from ._sdk._pf_client import PFClient from ._version import VERSION # backward compatibility log_flow_metric = log_metric __version__ = VERSION __all__ = ["PFClient", "load_flow", "load_run", "log_metric", "ToolProvider", "tool", "trace"]
promptflow/src/promptflow/promptflow/__init__.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/__init__.py", "repo_id": "promptflow", "token_count": 214 }
35
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import argparse import json from typing import Dict, List from promptflow._cli._params import ( add_param_archived_only, add_param_flow_name, add_param_flow_type, add_param_include_archived, add_param_include_others, add_param_max_results, add_param_output_format, add_param_set, base_params, ) from promptflow._cli._pf_azure._utils import _get_azure_pf_client from promptflow._cli._utils import ( _output_result_list_with_format, _set_workspace_argument_for_subparsers, activate_action, exception_handler, ) from promptflow._sdk._constants import get_list_view_type def add_parser_flow(subparsers): """Add flow parser to the pf subparsers.""" flow_parser = subparsers.add_parser( "flow", description="Manage flows for prompt flow.", help="Manage prompt flows.", ) flow_subparsers = flow_parser.add_subparsers() add_parser_flow_create(flow_subparsers) add_parser_flow_show(flow_subparsers) add_parser_flow_list(flow_subparsers) flow_parser.set_defaults(action="flow") def add_parser_flow_create(subparsers): """Add flow create parser to the pf flow subparsers.""" epilog = """ Use "--set" to set flow properties like: display_name: Flow display name that will be created in remote. Default to be flow folder name + timestamp if not specified. type: Flow type. Default to be "standard" if not specified. Available types are: "standard", "evaluation", "chat". description: Flow description. e.g. "--set description=<description>." tags: Flow tags. e.g. "--set tags.key1=value1 tags.key2=value2." Note: In "--set" parameter, if the key name consists of multiple words, use snake-case instead of kebab-case. e.g. "--set display_name=<flow-display-name>" Examples: # Create a flow to azure portal with local flow folder. pfazure flow create --flow <flow-folder-path> --set display_name=<flow-display-name> type=<flow-type> # Create a flow with more properties pfazure flow create --flow <flow-folder-path> --set display_name=<flow-display-name> type=<flow-type> description=<flow-description> tags.key1=value1 tags.key2=value2 """ # noqa: E501 add_param_source = lambda parser: parser.add_argument( # noqa: E731 "--flow", type=str, help="Source folder of the flow." ) add_params = [ _set_workspace_argument_for_subparsers, add_param_source, add_param_set, ] + base_params activate_action( name="create", description="A CLI tool to create a flow to Azure.", epilog=epilog, add_params=add_params, subparsers=subparsers, help_message="Create a flow to Azure with local flow folder.", action_param_name="sub_action", ) def add_parser_flow_list(subparsers): """Add flow list parser to the pf flow subparsers.""" epilog = """ Examples: # List flows: pfazure flow list # List most recent 10 runs status: pfazure flow list --max-results 10 # List active and archived flows: pfazure flow list --include-archived # List archived flow only: pfazure flow list --archived-only # List all flows as table: pfazure flow list --output table # List flows with specific type: pfazure flow list --type standard # List flows that are owned by all users: pfazure flow list --include-others """ add_params = [ add_param_max_results, add_param_include_others, add_param_flow_type, add_param_archived_only, add_param_include_archived, add_param_output_format, _set_workspace_argument_for_subparsers, ] + base_params activate_action( name="list", description="List flows for promptflow.", epilog=epilog, add_params=add_params, subparsers=subparsers, help_message="pfazure flow list", action_param_name="sub_action", ) def add_parser_flow_show(subparsers): """Add flow get parser to the pf flow subparsers.""" epilog = """ Examples: # Get flow: pfazure flow show --name <flow-name> """ add_params = [add_param_flow_name, _set_workspace_argument_for_subparsers] + base_params activate_action( name="show", description="Show a flow from Azure.", epilog=epilog, add_params=add_params, subparsers=subparsers, help_message="pfazure flow show", action_param_name="sub_action", ) def add_parser_flow_download(subparsers): """Add flow download parser to the pf flow subparsers.""" add_param_source = lambda parser: parser.add_argument( # noqa: E731 "--source", type=str, help="The flow folder path on file share to download." ) add_param_destination = lambda parser: parser.add_argument( # noqa: E731 "--destination", "-d", type=str, help="The destination folder path to download." ) add_params = [ _set_workspace_argument_for_subparsers, add_param_source, add_param_destination, ] + base_params activate_action( name="download", description="Download a flow from file share to local.", epilog=None, add_params=add_params, subparsers=subparsers, help_message="pf flow download", action_param_name="sub_action", ) def dispatch_flow_commands(args: argparse.Namespace): if args.sub_action == "create": create_flow(args) elif args.sub_action == "show": show_flow(args) elif args.sub_action == "list": list_flows(args) def _get_flow_operation(subscription_id, resource_group, workspace_name): pf_client = _get_azure_pf_client(subscription_id, resource_group, workspace_name) return pf_client._flows @exception_handler("Create flow") def create_flow(args: argparse.Namespace): """Create a flow for promptflow.""" pf = _get_azure_pf_client(args.subscription, args.resource_group, args.workspace_name, debug=args.debug) params = _parse_flow_metadata_args(args.params_override) pf.flows.create_or_update(flow=args.flow, **params) @exception_handler("Show flow") def show_flow(args: argparse.Namespace): """Get a flow for promptflow.""" pf = _get_azure_pf_client(args.subscription, args.resource_group, args.workspace_name, debug=args.debug) flow = pf.flows.get(args.name) print(json.dumps(flow._to_dict(), indent=4)) def list_flows(args: argparse.Namespace): """List flows for promptflow.""" pf = _get_azure_pf_client(args.subscription, args.resource_group, args.workspace_name, debug=args.debug) flows = pf.flows.list( max_results=args.max_results, include_others=args.include_others, flow_type=args.type, list_view_type=get_list_view_type(args.archived_only, args.include_archived), ) flow_list = [flow._to_dict() for flow in flows] _output_result_list_with_format(flow_list, args.output) def _parse_flow_metadata_args(params: List[Dict[str, str]]) -> Dict: result, tags = {}, {} if not params: return result for param in params: for k, v in param.items(): if k.startswith("tags."): tag_key = k.replace("tags.", "") tags[tag_key] = v continue result[k] = v if tags: result["tags"] = tags return result
promptflow/src/promptflow/promptflow/_cli/_pf_azure/_flow.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_cli/_pf_azure/_flow.py", "repo_id": "promptflow", "token_count": 2954 }
36
import os from promptflow import tool from promptflow.connections import CustomConnection {{ function_import }} @tool def {{ tool_function }}( {% for arg in tool_arg_list %} {{ arg.name }}, {% endfor %} connection: CustomConnection) -> str: # set environment variables for key, value in dict(connection).items(): os.environ[key] = value # call the entry function return {{ entry_function }}( {% for arg in tool_arg_list %} {{ arg.name }}={{ arg.name }}, {% endfor %} )
promptflow/src/promptflow/promptflow/_cli/data/entry_flow/tool.py.jinja2/0
{ "file_path": "promptflow/src/promptflow/promptflow/_cli/data/entry_flow/tool.py.jinja2", "repo_id": "promptflow", "token_count": 192 }
37
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- from promptflow._cli._pf.entry import main # this is a compatibility layer for the old CLI which is used for vscode extension if __name__ == "__main__": main()
promptflow/src/promptflow/promptflow/_cli/pf.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_cli/pf.py", "repo_id": "promptflow", "token_count": 74 }
38
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- """ This file can generate a meta file for the given prompt template or a python file. """ import importlib.util import inspect import json import re import types from dataclasses import asdict from pathlib import Path from traceback import TracebackException from jinja2 import TemplateSyntaxError from jinja2.environment import COMMENT_END_STRING, COMMENT_START_STRING from promptflow._core._errors import MetaFileNotFound, MetaFileReadError, NotSupported from promptflow._core.tool import ToolProvider from promptflow._utils.exception_utils import ADDITIONAL_INFO_USER_CODE_STACKTRACE, get_tb_next, last_frame_info from promptflow._utils.tool_utils import function_to_interface, get_inputs_for_prompt_template from promptflow.contracts.tool import Tool, ToolType from promptflow.exceptions import ErrorTarget, UserErrorException PF_MAIN_MODULE_NAME = "__pf_main__" def asdict_without_none(obj): return asdict(obj, dict_factory=lambda x: {k: v for (k, v) in x if v}) def generate_prompt_tool(name, content, prompt_only=False, source=None): """Generate meta for a single jinja template file.""" # Get all the variable name from a jinja template tool_type = ToolType.PROMPT if prompt_only else ToolType.LLM try: inputs = get_inputs_for_prompt_template(content) except TemplateSyntaxError as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise JinjaParsingError( message_format=( "Generate tool meta failed for {tool_type} tool. Jinja parsing failed at line {line_number}: " "{error_type_and_message}" ), tool_type=tool_type.value, line_number=e.lineno, error_type_and_message=error_type_and_message, ) from e except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise JinjaParsingError( message_format=( "Generate tool meta failed for {tool_type} tool. Jinja parsing failed: {error_type_and_message}" ), tool_type=tool_type.value, error_type_and_message=error_type_and_message, ) from e pattern = f"{COMMENT_START_STRING}(((?!{COMMENT_END_STRING}).)*){COMMENT_END_STRING}" match_result = re.match(pattern, content) description = match_result.groups()[0].strip() if match_result else None # Construct the Tool structure tool = Tool( name=name, description=description, type=tool_type, inputs=inputs, outputs={}, ) if source is None: tool.code = content else: tool.source = source return tool def generate_prompt_meta_dict(name, content, prompt_only=False, source=None): return asdict_without_none(generate_prompt_tool(name, content, prompt_only, source)) def is_tool(f): if not isinstance(f, types.FunctionType): return False if not hasattr(f, "__tool"): return False return True def collect_tool_functions_in_module(m): tools = [] for _, obj in inspect.getmembers(m): if is_tool(obj): # Note that the tool should be in defined in exec but not imported in exec, # so it should also have the same module with the current function. if getattr(obj, "__module__", "") != m.__name__: continue tools.append(obj) return tools def collect_tool_methods_in_module(m): tools = [] for _, obj in inspect.getmembers(m): if isinstance(obj, type) and issubclass(obj, ToolProvider) and obj.__module__ == m.__name__: for _, method in inspect.getmembers(obj): if is_tool(method): tools.append(method) return tools def collect_tool_methods_with_init_inputs_in_module(m): tools = [] for _, obj in inspect.getmembers(m): if isinstance(obj, type) and issubclass(obj, ToolProvider) and obj.__module__ == m.__name__: for _, method in inspect.getmembers(obj): if is_tool(method): tools.append((method, obj.get_initialize_inputs())) return tools def _parse_tool_from_function(f, initialize_inputs=None, gen_custom_type_conn=False, skip_prompt_template=False): try: tool_type = getattr(f, "__type", None) or ToolType.PYTHON except Exception as e: raise e tool_name = getattr(f, "__name", None) description = getattr(f, "__description", None) if hasattr(f, "__tool") and isinstance(f.__tool, Tool): return f.__tool if hasattr(f, "__original_function"): f = f.__original_function try: inputs, _, _, enable_kwargs = function_to_interface( f, initialize_inputs=initialize_inputs, gen_custom_type_conn=gen_custom_type_conn, skip_prompt_template=skip_prompt_template, ) except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise BadFunctionInterface( message_format="Parse interface for tool '{tool_name}' failed: {error_type_and_message}", tool_name=f.__name__, error_type_and_message=error_type_and_message, ) from e class_name = None if "." in f.__qualname__: class_name = f.__qualname__.replace(f".{f.__name__}", "") # Construct the Tool structure return Tool( name=tool_name or f.__qualname__, description=description or inspect.getdoc(f), inputs=inputs, type=tool_type, class_name=class_name, function=f.__name__, module=f.__module__, enable_kwargs=enable_kwargs, ) def generate_python_tools_in_module(module): tool_functions = collect_tool_functions_in_module(module) tool_methods = collect_tool_methods_in_module(module) return [_parse_tool_from_function(f) for f in tool_functions + tool_methods] def generate_python_tools_in_module_as_dict(module): tools = generate_python_tools_in_module(module) return {f"{t.module}.{t.name}": asdict_without_none(t) for t in tools} def load_python_module_from_file(src_file: Path): # Here we hard code the module name as __pf_main__ since it is invoked as a main script in pf. src_file = Path(src_file).resolve() # Make sure the path is absolute to align with python import behavior. spec = importlib.util.spec_from_file_location("__pf_main__", location=src_file) if spec is None or spec.loader is None: raise PythonLoaderNotFound( message_format="Failed to load python file '{src_file}'. Please make sure it is a valid .py file.", src_file=src_file, ) m = importlib.util.module_from_spec(spec) try: spec.loader.exec_module(m) except Exception as e: # TODO: add stacktrace to additional info error_type_and_message = f"({e.__class__.__name__}) {e}" raise PythonLoadError( message_format="Failed to load python module from file '{src_file}': {error_type_and_message}", src_file=src_file, error_type_and_message=error_type_and_message, ) from e return m def load_python_module(content, source=None): # Source represents code first experience. if source is not None and Path(source).exists(): return load_python_module_from_file(Path(source)) try: m = types.ModuleType(PF_MAIN_MODULE_NAME) exec(content, m.__dict__) return m except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise PythonParsingError( message_format="Failed to load python module. Python parsing failed: {error_type_and_message}", error_type_and_message=error_type_and_message, ) from e def collect_tool_function_in_module(m): tool_functions = collect_tool_functions_in_module(m) tool_methods = collect_tool_methods_with_init_inputs_in_module(m) num_tools = len(tool_functions) + len(tool_methods) if num_tools == 0: raise NoToolDefined( message_format=( "No tool found in the python script. " "Please make sure you have one and only one tool definition in your script." ) ) elif num_tools > 1: tool_names = ", ".join(t.__name__ for t in tool_functions + tool_methods) raise MultipleToolsDefined( message_format=( "Expected 1 but collected {tool_count} tools: {tool_names}. " "Please make sure you have one and only one tool definition in your script." ), tool_count=num_tools, tool_names=tool_names, ) if tool_functions: return tool_functions[0], None else: return tool_methods[0] def generate_python_tool(name, content, source=None): m = load_python_module(content, source) f, initialize_inputs = collect_tool_function_in_module(m) tool = _parse_tool_from_function(f, initialize_inputs=initialize_inputs) tool.module = None if name is not None: tool.name = name if source is None: tool.code = content else: tool.source = source return tool def generate_python_meta_dict(name, content, source=None): return asdict_without_none(generate_python_tool(name, content, source)) # Only used in non-code first experience. def generate_python_meta(name, content, source=None): return json.dumps(generate_python_meta_dict(name, content, source), indent=2) def generate_prompt_meta(name, content, prompt_only=False, source=None): return json.dumps(generate_prompt_meta_dict(name, content, prompt_only, source), indent=2) def generate_tool_meta_dict_by_file(path: str, tool_type: ToolType): """Generate meta for a single tool file, which can be a python file or a jinja template file, note that if a python file is passed, correct working directory must be set and should be added to sys.path. """ tool_type = ToolType(tool_type) file = Path(path) if not file.is_file(): raise MetaFileNotFound( message_format="Generate tool meta failed for {tool_type} tool. Meta file '{file_path}' can not be found.", tool_type=tool_type.value, file_path=path, # Use a relative path here to make the error message more readable. ) try: content = file.read_text(encoding="utf-8") except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise MetaFileReadError( message_format=( "Generate tool meta failed for {tool_type} tool. " "Read meta file '{file_path}' failed: {error_type_and_message}" ), tool_type=tool_type.value, file_path=path, error_type_and_message=error_type_and_message, ) from e name = file.stem if tool_type == ToolType.PYTHON: return generate_python_meta_dict(name, content, path) elif tool_type == ToolType.LLM: return generate_prompt_meta_dict(name, content, source=path) elif tool_type == ToolType.PROMPT: return generate_prompt_meta_dict(name, content, prompt_only=True, source=path) else: raise NotSupported( message_format=( "Generate tool meta failed. " "The type '{tool_type}' is currently unsupported. " "Please choose from available types: {supported_tool_types} and try again." ), tool_type=tool_type.value, supported_tool_types=",".join([ToolType.PYTHON, ToolType.LLM, ToolType.PROMPT]), ) class ToolValidationError(UserErrorException): """Base exception raised when failed to validate tool.""" def __init__(self, **kwargs): super().__init__(target=ErrorTarget.TOOL, **kwargs) class JinjaParsingError(ToolValidationError): pass class ReservedVariableCannotBeUsed(JinjaParsingError): pass class PythonParsingError(ToolValidationError): pass class PythonLoaderNotFound(ToolValidationError): pass class NoToolDefined(PythonParsingError): pass class MultipleToolsDefined(PythonParsingError): pass class BadFunctionInterface(PythonParsingError): pass class PythonLoadError(PythonParsingError): @property def python_load_traceback(self): """Return the traceback inside user's source code scope. The traceback inside the promptflow's internal code will be taken off. """ exc = self.inner_exception if exc and exc.__traceback__ is not None: tb = exc.__traceback__ # The first three frames are always the code in tool.py who invokes the tool. # We do not want to dump it to user code's traceback. tb = get_tb_next(tb, next_cnt=3) if tb is not None: te = TracebackException(type(exc), exc, tb) formatted_tb = "".join(te.format()) return formatted_tb return None @property def additional_info(self): """Set the python load exception details as additional info.""" if not self.inner_exception: return None info = { "type": self.inner_exception.__class__.__name__, "message": str(self.inner_exception), "traceback": self.python_load_traceback, } info.update(last_frame_info(self.inner_exception)) return { ADDITIONAL_INFO_USER_CODE_STACKTRACE: info, }
promptflow/src/promptflow/promptflow/_core/tool_meta_generator.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_core/tool_meta_generator.py", "repo_id": "promptflow", "token_count": 5685 }
39
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import os from os import PathLike from pathlib import Path from typing import Any, Dict, List, Union from .._utils.logger_utils import get_cli_sdk_logger from ._configuration import Configuration from ._constants import MAX_SHOW_DETAILS_RESULTS from ._load_functions import load_flow from ._user_agent import USER_AGENT from ._utils import ClientUserAgentUtil, get_connection_operation, setup_user_agent_to_operation_context from .entities import Run from .entities._eager_flow import EagerFlow from .operations import RunOperations from .operations._connection_operations import ConnectionOperations from .operations._experiment_operations import ExperimentOperations from .operations._flow_operations import FlowOperations from .operations._tool_operations import ToolOperations logger = get_cli_sdk_logger() def _create_run(run: Run, **kwargs): client = PFClient() return client.runs.create_or_update(run=run, **kwargs) class PFClient: """A client class to interact with prompt flow entities.""" def __init__(self, **kwargs): logger.debug("PFClient init with kwargs: %s", kwargs) self._runs = RunOperations() self._connection_provider = kwargs.pop("connection_provider", None) self._config = kwargs.get("config", None) or {} # The credential is used as an option to override # DefaultAzureCredential when using workspace connection provider self._credential = kwargs.get("credential", None) # Lazy init to avoid azure credential requires too early self._connections = None self._flows = FlowOperations(client=self) self._tools = ToolOperations() # add user agent from kwargs if any if isinstance(kwargs.get("user_agent"), str): ClientUserAgentUtil.append_user_agent(kwargs["user_agent"]) self._experiments = ExperimentOperations(self) setup_user_agent_to_operation_context(USER_AGENT) def run( self, flow: Union[str, PathLike], *, data: Union[str, PathLike] = None, run: Union[str, Run] = None, column_mapping: dict = None, variant: str = None, connections: dict = None, environment_variables: dict = None, name: str = None, display_name: str = None, tags: Dict[str, str] = None, **kwargs, ) -> Run: """Run flow against provided data or run. .. note:: At least one of the ``data`` or ``run`` parameters must be provided. .. admonition:: Column_mapping Column mapping is a mapping from flow input name to specified values. If specified, the flow will be executed with provided value for specified inputs. The value can be: - from data: - ``data.col1`` - from run: - ``run.inputs.col1``: if need reference run's inputs - ``run.output.col1``: if need reference run's outputs - Example: - ``{"ground_truth": "${data.answer}", "prediction": "${run.outputs.answer}"}`` :param flow: Path to the flow directory to run evaluation. :type flow: Union[str, PathLike] :param data: Pointer to the test data (of variant bulk runs) for eval runs. :type data: Union[str, PathLike] :param run: Flow run ID or flow run. This parameter helps keep lineage between the current run and variant runs. Batch outputs can be referenced as ``${run.outputs.col_name}`` in inputs_mapping. :type run: Union[str, ~promptflow.entities.Run] :param column_mapping: Define a data flow logic to map input data. :type column_mapping: Dict[str, str] :param variant: Node & variant name in the format of ``${node_name.variant_name}``. The default variant will be used if not specified. :type variant: str :param connections: Overwrite node level connections with provided values. Example: ``{"node1": {"connection": "new_connection", "deployment_name": "gpt-35-turbo"}}`` :type connections: Dict[str, Dict[str, str]] :param environment_variables: Environment variables to set by specifying a property path and value. Example: ``{"key1": "${my_connection.api_key}", "key2"="value2"}`` The value reference to connection keys will be resolved to the actual value, and all environment variables specified will be set into os.environ. :type environment_variables: Dict[str, str] :param name: Name of the run. :type name: str :param display_name: Display name of the run. :type display_name: str :param tags: Tags of the run. :type tags: Dict[str, str] :return: Flow run info. :rtype: ~promptflow.entities.Run """ if not os.path.exists(flow): raise FileNotFoundError(f"flow path {flow} does not exist") if data and not os.path.exists(data): raise FileNotFoundError(f"data path {data} does not exist") if not run and not data: raise ValueError("at least one of data or run must be provided") # TODO(2901096): Support pf run with python file, maybe create a temp flow.dag.yaml in this case # load flow object for validation and early failure flow_obj = load_flow(source=flow) # validate param conflicts if isinstance(flow_obj, EagerFlow): if variant or connections: logger.warning("variant and connections are not supported for eager flow, will be ignored") variant, connections = None, None run = Run( name=name, display_name=display_name, tags=tags, data=data, column_mapping=column_mapping, run=run, variant=variant, flow=Path(flow), connections=connections, environment_variables=environment_variables, config=Configuration(overrides=self._config), ) return self.runs.create_or_update(run=run, **kwargs) def stream(self, run: Union[str, Run], raise_on_error: bool = True) -> Run: """Stream run logs to the console. :param run: Run object or name of the run. :type run: Union[str, ~promptflow.sdk.entities.Run] :param raise_on_error: Raises an exception if a run fails or canceled. :type raise_on_error: bool :return: flow run info. :rtype: ~promptflow.sdk.entities.Run """ return self.runs.stream(run, raise_on_error) def get_details( self, run: Union[str, Run], max_results: int = MAX_SHOW_DETAILS_RESULTS, all_results: bool = False ) -> "DataFrame": """Get the details from the run including inputs and outputs. .. note:: If `all_results` is set to True, `max_results` will be overwritten to sys.maxsize. :param run: The run name or run object :type run: Union[str, ~promptflow.sdk.entities.Run] :param max_results: The max number of runs to return, defaults to 100 :type max_results: int :param all_results: Whether to return all results, defaults to False :type all_results: bool :raises RunOperationParameterError: If `max_results` is not a positive integer. :return: The details data frame. :rtype: pandas.DataFrame """ return self.runs.get_details(name=run, max_results=max_results, all_results=all_results) def get_metrics(self, run: Union[str, Run]) -> Dict[str, Any]: """Get run metrics. :param run: Run object or name of the run. :type run: Union[str, ~promptflow.sdk.entities.Run] :return: Run metrics. :rtype: Dict[str, Any] """ return self.runs.get_metrics(run) def visualize(self, runs: Union[List[str], List[Run]]) -> None: """Visualize run(s). :param run: Run object or name of the run. :type run: Union[str, ~promptflow.sdk.entities.Run] """ self.runs.visualize(runs) @property def runs(self) -> RunOperations: """Run operations that can manage runs.""" return self._runs @property def tools(self) -> ToolOperations: """Tool operations that can manage tools.""" return self._tools def _ensure_connection_provider(self) -> str: if not self._connection_provider: # Get a copy with config override instead of the config instance self._connection_provider = Configuration(overrides=self._config).get_connection_provider() logger.debug("PFClient connection provider: %s", self._connection_provider) return self._connection_provider @property def connections(self) -> ConnectionOperations: """Connection operations that can manage connections.""" if not self._connections: self._ensure_connection_provider() self._connections = get_connection_operation(self._connection_provider, self._credential) return self._connections @property def flows(self) -> FlowOperations: """Operations on the flow that can manage flows.""" return self._flows def test( self, flow: Union[str, PathLike], *, inputs: dict = None, variant: str = None, node: str = None, environment_variables: dict = None, ) -> dict: """Test flow or node. :param flow: path to flow directory to test :type flow: Union[str, PathLike] :param inputs: Input data for the flow test :type inputs: dict :param variant: Node & variant name in format of ${node_name.variant_name}, will use default variant if not specified. :type variant: str :param node: If specified it will only test this node, else it will test the flow. :type node: str :param environment_variables: Environment variables to set by specifying a property path and value. Example: {"key1": "${my_connection.api_key}", "key2"="value2"} The value reference to connection keys will be resolved to the actual value, and all environment variables specified will be set into os.environ. :type environment_variables: dict :return: The result of flow or node :rtype: dict """ return self.flows.test( flow=flow, inputs=inputs, variant=variant, environment_variables=environment_variables, node=node )
promptflow/src/promptflow/promptflow/_sdk/_pf_client.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_sdk/_pf_client.py", "repo_id": "promptflow", "token_count": 4260 }
40
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- from promptflow._sdk._serving.utils import get_cost_up_to_now from promptflow._sdk._serving.monitor.metrics import ResponseType class StreamingMonitor: """StreamingMonitor is used to collect metrics & data for streaming response.""" def __init__(self, logger, flow_id: str, start_time: float, inputs: dict, outputs: dict, req_id: str, streaming_field_name: str, metric_recorder, data_collector, ) -> None: self.logger = logger self.flow_id = flow_id self.start_time = start_time self.inputs = inputs self.outputs = outputs self.streaming_field_name = streaming_field_name self.req_id = req_id self.metric_recorder = metric_recorder self.data_collector = data_collector self.response_message = [] def on_stream_start(self): """stream start call back function, record flow latency when first byte received.""" self.logger.info("start streaming response...") if self.metric_recorder: duration = get_cost_up_to_now(self.start_time) self.metric_recorder.record_flow_latency(self.flow_id, 200, True, ResponseType.FirstByte.value, duration) def on_stream_end(self, streaming_resp_duration: float): """stream end call back function, record flow latency and streaming response data when last byte received.""" if self.metric_recorder: duration = get_cost_up_to_now(self.start_time) self.metric_recorder.record_flow_latency(self.flow_id, 200, True, ResponseType.LastByte.value, duration) self.metric_recorder.record_flow_streaming_response_duration(self.flow_id, streaming_resp_duration) if self.data_collector: response_content = "".join(self.response_message) if self.streaming_field_name in self.outputs: self.outputs[self.streaming_field_name] = response_content self.data_collector.collect_flow_data(self.inputs, self.outputs, self.req_id) self.logger.info("finish streaming response.") def on_stream_event(self, message: str): """stream event call back function, record streaming response data chunk.""" self.response_message.append(message)
promptflow/src/promptflow/promptflow/_sdk/_serving/monitor/streaming_monitor.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/monitor/streaming_monitor.py", "repo_id": "promptflow", "token_count": 1046 }
41
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import collections import hashlib import json import multiprocessing import os import platform import re import shutil import stat import sys import tempfile import zipfile from contextlib import contextmanager from enum import Enum from functools import partial from os import PathLike from pathlib import Path from typing import Any, Dict, List, Optional, Set, Tuple, Union from urllib.parse import urlparse import keyring import pydash from cryptography.fernet import Fernet from filelock import FileLock from jinja2 import Template from keyring.errors import NoKeyringError from marshmallow import ValidationError import promptflow from promptflow._constants import EXTENSION_UA, PF_NO_INTERACTIVE_LOGIN, PF_USER_AGENT, USER_AGENT from promptflow._core.tool_meta_generator import generate_tool_meta_dict_by_file from promptflow._core.tools_manager import gen_dynamic_list, retrieve_tool_func_result from promptflow._sdk._constants import ( DAG_FILE_NAME, DEFAULT_ENCODING, FLOW_TOOLS_JSON, FLOW_TOOLS_JSON_GEN_TIMEOUT, HOME_PROMPT_FLOW_DIR, KEYRING_ENCRYPTION_KEY_NAME, KEYRING_ENCRYPTION_LOCK_PATH, KEYRING_SYSTEM, NODE, NODE_VARIANTS, NODES, PROMPT_FLOW_DIR_NAME, REFRESH_CONNECTIONS_DIR_LOCK_PATH, REGISTRY_URI_PREFIX, REMOTE_URI_PREFIX, USE_VARIANTS, VARIANTS, CommonYamlFields, ConnectionProvider, ) from promptflow._sdk._errors import ( DecryptConnectionError, GenerateFlowToolsJsonError, StoreConnectionEncryptionKeyError, UnsecureConnectionError, ) from promptflow._sdk._vendor import IgnoreFile, get_ignore_file, get_upload_files_from_folder from promptflow._utils.context_utils import _change_working_dir, inject_sys_path from promptflow._utils.dataclass_serializer import serialize from promptflow._utils.logger_utils import get_cli_sdk_logger from promptflow._utils.yaml_utils import dump_yaml, load_yaml, load_yaml_string from promptflow.contracts.tool import ToolType from promptflow.exceptions import ErrorTarget, UserErrorException logger = get_cli_sdk_logger() def snake_to_camel(name): return re.sub(r"(?:^|_)([a-z])", lambda x: x.group(1).upper(), name) def find_type_in_override(params_override: Optional[list] = None) -> Optional[str]: params_override = params_override or [] for override in params_override: if CommonYamlFields.TYPE in override: return override[CommonYamlFields.TYPE] return None # region Encryption CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING = None ENCRYPTION_KEY_IN_KEY_RING = None @contextmanager def use_customized_encryption_key(encryption_key: str): global CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING = encryption_key yield CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING = None def set_encryption_key(encryption_key: Union[str, bytes]): if isinstance(encryption_key, bytes): encryption_key = encryption_key.decode("utf-8") keyring.set_password("promptflow", "encryption_key", encryption_key) _encryption_key_lock = FileLock(KEYRING_ENCRYPTION_LOCK_PATH) def get_encryption_key(generate_if_not_found: bool = False) -> str: global CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING global ENCRYPTION_KEY_IN_KEY_RING if CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING is not None: return CUSTOMIZED_ENCRYPTION_KEY_IN_KEY_RING if ENCRYPTION_KEY_IN_KEY_RING is not None: return ENCRYPTION_KEY_IN_KEY_RING def _get_from_keyring(): try: # Cache encryption key as mac will pop window to ask for permission when calling get_password return keyring.get_password(KEYRING_SYSTEM, KEYRING_ENCRYPTION_KEY_NAME) except NoKeyringError as e: raise StoreConnectionEncryptionKeyError( "System keyring backend service not found in your operating system. " "See https://pypi.org/project/keyring/ to install requirement for different operating system, " "or 'pip install keyrings.alt' to use the third-party backend. Reach more detail about this error at " "https://microsoft.github.io/promptflow/how-to-guides/faq.html#connection-creation-failed-with-storeconnectionencryptionkeyerror" # noqa: E501 ) from e ENCRYPTION_KEY_IN_KEY_RING = _get_from_keyring() if ENCRYPTION_KEY_IN_KEY_RING is not None or not generate_if_not_found: return ENCRYPTION_KEY_IN_KEY_RING _encryption_key_lock.acquire() # Note: we access the keyring twice, as global var can't share across processes. ENCRYPTION_KEY_IN_KEY_RING = _get_from_keyring() if ENCRYPTION_KEY_IN_KEY_RING is not None: return ENCRYPTION_KEY_IN_KEY_RING try: ENCRYPTION_KEY_IN_KEY_RING = Fernet.generate_key().decode("utf-8") keyring.set_password(KEYRING_SYSTEM, KEYRING_ENCRYPTION_KEY_NAME, ENCRYPTION_KEY_IN_KEY_RING) finally: _encryption_key_lock.release() return ENCRYPTION_KEY_IN_KEY_RING def encrypt_secret_value(secret_value): encryption_key = get_encryption_key(generate_if_not_found=True) fernet_client = Fernet(encryption_key) token = fernet_client.encrypt(secret_value.encode("utf-8")) return token.decode("utf-8") def decrypt_secret_value(connection_name, encrypted_secret_value): encryption_key = get_encryption_key() if encryption_key is None: raise Exception("Encryption key not found in keyring.") fernet_client = Fernet(encryption_key) try: return fernet_client.decrypt(encrypted_secret_value.encode("utf-8")).decode("utf-8") except Exception as e: if len(encrypted_secret_value) < 57: # This is to workaround old custom secrets that are not encrypted with Fernet. # Fernet token: https://github.com/fernet/spec/blob/master/Spec.md # Format: Version ‖ Timestamp ‖ IV ‖ Ciphertext ‖ HMAC # Version: 8 bits, Timestamp: 64 bits, IV: 128 bits, HMAC: 256 bits, # Ciphertext variable length, multiple of 128 bits # So the minimum length of a Fernet token is 57 bytes raise UnsecureConnectionError( f"Please delete and re-create connection {connection_name} " f"due to a security issue in the old sdk version." ) raise DecryptConnectionError( f"Decrypt connection {connection_name} secret failed: {str(e)}. " f"If you have ever changed your encryption key manually, " f"please revert it back to the original one, or delete all connections and re-create them." ) # endregion def decorate_validation_error(schema: Any, pretty_error: str, additional_message: str = "") -> str: return f"Validation for {schema.__name__} failed:\n\n {pretty_error} \n\n {additional_message}" def load_from_dict(schema: Any, data: Dict, context: Dict, additional_message: str = "", **kwargs): try: return schema(context=context).load(data, **kwargs) except ValidationError as e: pretty_error = json.dumps(e.normalized_messages(), indent=2) raise ValidationError(decorate_validation_error(schema, pretty_error, additional_message)) def strip_quotation(value): """ To avoid escaping chars in command args, args will be surrounded in quotas. Need to remove the pair of quotation first. """ if value.startswith('"') and value.endswith('"'): return value[1:-1] elif value.startswith("'") and value.endswith("'"): return value[1:-1] else: return value def parse_variant(variant: str) -> Tuple[str, str]: variant_regex = r"\${([^.]+).([^}]+)}" match = re.match(variant_regex, strip_quotation(variant)) if match: return match.group(1), match.group(2) else: error = ValueError( f"Invalid variant format: {variant}, variant should be in format of ${{TUNING_NODE.VARIANT}}" ) raise UserErrorException( target=ErrorTarget.CONTROL_PLANE_SDK, message=str(error), error=error, ) def _match_reference(env_val: str): env_val = env_val.strip() m = re.match(r"^\$\{([^.]+)\.([^.]+)}$", env_val) if not m: return None, None name, key = m.groups() return name, key # !!! Attention!!!: Please make sure you have contact with PRS team before changing the interface. def get_used_connection_names_from_environment_variables(): """The function will get all potential related connection names from current environment variables. for example, if part of env var is { "ENV_VAR_1": "${my_connection.key}", "ENV_VAR_2": "${my_connection.key2}", "ENV_VAR_3": "${my_connection2.key}", } The function will return {"my_connection", "my_connection2"}. """ return get_used_connection_names_from_dict(os.environ) def get_used_connection_names_from_dict(connection_dict: dict): connection_names = set() for key, val in connection_dict.items(): connection_name, _ = _match_reference(val) if connection_name: connection_names.add(connection_name) return connection_names # !!! Attention!!!: Please make sure you have contact with PRS team before changing the interface. def update_environment_variables_with_connections(built_connections): """The function will result env var value ${my_connection.key} to the real connection keys.""" return update_dict_value_with_connections(built_connections, os.environ) def _match_env_reference(val: str): try: val = val.strip() m = re.match(r"^\$\{env:(.+)}$", val) if not m: return None name = m.groups()[0] return name except Exception: # for exceptions when val is not a string, return return None def override_connection_config_with_environment_variable(connections: Dict[str, dict]): """ The function will use relevant environment variable to override connection configurations. For instance, if there is a custom connection named 'custom_connection' with a configuration key called 'chat_deployment_name,' the function will attempt to retrieve 'chat_deployment_name' from the environment variable 'CUSTOM_CONNECTION_CHAT_DEPLOYMENT_NAME' by default. If the environment variable is not set, it will use the original value as a fallback. """ for connection_name, connection in connections.items(): values = connection.get("value", {}) for key, val in values.items(): connection_name = connection_name.replace(" ", "_") env_name = f"{connection_name}_{key}".upper() if env_name not in os.environ: continue values[key] = os.environ[env_name] logger.info(f"Connection {connection_name}'s {key} is overridden with environment variable {env_name}") return connections def resolve_connections_environment_variable_reference(connections: Dict[str, dict]): """The function will resolve connection secrets env var reference like api_key: ${env:KEY}""" for connection in connections.values(): values = connection.get("value", {}) for key, val in values.items(): if not _match_env_reference(val): continue env_name = _match_env_reference(val) if env_name not in os.environ: raise UserErrorException(f"Environment variable {env_name} is not found.") values[key] = os.environ[env_name] return connections def update_dict_value_with_connections(built_connections, connection_dict: dict): for key, val in connection_dict.items(): connection_name, connection_key = _match_reference(val) if connection_name is None: continue if connection_name not in built_connections: continue if connection_key not in built_connections[connection_name]["value"]: continue connection_dict[key] = built_connections[connection_name]["value"][connection_key] def in_jupyter_notebook() -> bool: """ Checks if user is using a Jupyter Notebook. This is necessary because logging is not allowed in non-Jupyter contexts. Adapted from https://stackoverflow.com/a/22424821 """ try: # cspell:ignore ipython from IPython import get_ipython if "IPKernelApp" not in get_ipython().config: return False except ImportError: return False except AttributeError: return False return True def render_jinja_template(template_path, *, trim_blocks=True, keep_trailing_newline=True, **kwargs): with open(template_path, "r", encoding=DEFAULT_ENCODING) as f: template = Template(f.read(), trim_blocks=trim_blocks, keep_trailing_newline=keep_trailing_newline) return template.render(**kwargs) def print_yellow_warning(message): from colorama import Fore, init init(autoreset=True) print(Fore.YELLOW + message) def print_red_error(message): from colorama import Fore, init init(autoreset=True) print(Fore.RED + message) def safe_parse_object_list(obj_list, parser, message_generator): results = [] for obj in obj_list: try: results.append(parser(obj)) except Exception as e: extended_message = f"{message_generator(obj)} Error: {type(e).__name__}, {str(e)}" print_yellow_warning(extended_message) return results def _normalize_identifier_name(name): normalized_name = name.lower() normalized_name = re.sub(r"[\W_]", " ", normalized_name) # No non-word characters normalized_name = re.sub(" +", " ", normalized_name).strip() # No double spaces, leading or trailing spaces if re.match(r"\d", normalized_name): normalized_name = "n" + normalized_name # No leading digits return normalized_name def _sanitize_python_variable_name(name: str): return _normalize_identifier_name(name).replace(" ", "_") def _get_additional_includes(yaml_path): flow_dag = load_yaml(yaml_path) return flow_dag.get("additional_includes", []) def _is_folder_to_compress(path: Path) -> bool: """Check if the additional include needs to compress corresponding folder as a zip. For example, given additional include /mnt/c/hello.zip 1) if a file named /mnt/c/hello.zip already exists, return False (simply copy) 2) if a folder named /mnt/c/hello exists, return True (compress as a zip and copy) :param path: Given path in additional include. :type path: Path :return: If the path need to be compressed as a zip file. :rtype: bool """ if path.suffix != ".zip": return False # if zip file exists, simply copy as other additional includes if path.exists(): return False # remove .zip suffix and check whether the folder exists stem_path = path.parent / path.stem return stem_path.is_dir() def _resolve_folder_to_compress(base_path: Path, include: str, dst_path: Path) -> None: """resolve the zip additional include, need to compress corresponding folder.""" zip_additional_include = (base_path / include).resolve() folder_to_zip = zip_additional_include.parent / zip_additional_include.stem zip_file = dst_path / zip_additional_include.name with zipfile.ZipFile(zip_file, "w") as zf: zf.write(folder_to_zip, os.path.relpath(folder_to_zip, folder_to_zip.parent)) # write root in zip for root, _, files in os.walk(folder_to_zip, followlinks=True): for file in files: file_path = os.path.join(folder_to_zip, file) zf.write(file_path, os.path.relpath(file_path, folder_to_zip.parent)) @contextmanager def _merge_local_code_and_additional_includes(code_path: Path): # TODO: unify variable names: flow_dir_path, flow_dag_path, flow_path def additional_includes_copy(src, relative_path, target_dir): if src.is_file(): dst = Path(target_dir) / relative_path dst.parent.mkdir(parents=True, exist_ok=True) if dst.exists(): logger.warning( "Found duplicate file in additional includes, " f"additional include file {src} will overwrite {relative_path}" ) shutil.copy2(src, dst) else: for name in src.glob("*"): additional_includes_copy(name, Path(relative_path) / name.name, target_dir) if code_path.is_dir(): yaml_path = (Path(code_path) / DAG_FILE_NAME).resolve() code_path = code_path.resolve() else: yaml_path = code_path.resolve() code_path = code_path.parent.resolve() with tempfile.TemporaryDirectory() as temp_dir: shutil.copytree(code_path.resolve().as_posix(), temp_dir, dirs_exist_ok=True) for item in _get_additional_includes(yaml_path): src_path = Path(item) if not src_path.is_absolute(): src_path = (code_path / item).resolve() if _is_folder_to_compress(src_path): _resolve_folder_to_compress(code_path, item, Path(temp_dir)) # early continue as the folder is compressed as a zip file continue if not src_path.exists(): error = ValueError(f"Unable to find additional include {item}") raise UserErrorException( target=ErrorTarget.CONTROL_PLANE_SDK, message=str(error), error=error, ) additional_includes_copy(src_path, relative_path=src_path.name, target_dir=temp_dir) yield temp_dir def incremental_print(log: str, printed: int, fileout) -> int: count = 0 for line in log.splitlines(): if count >= printed: fileout.write(line + "\n") printed += 1 count += 1 return printed def get_promptflow_sdk_version() -> str: try: return promptflow.__version__ except AttributeError: # if promptflow is installed from source, it does not have __version__ attribute return "0.0.1" def print_pf_version(): print("promptflow\t\t\t {}".format(get_promptflow_sdk_version())) print() print("Executable '{}'".format(os.path.abspath(sys.executable))) print("Python ({}) {}".format(platform.system(), sys.version)) class PromptflowIgnoreFile(IgnoreFile): # TODO add more files to this list. IGNORE_FILE = [".runs", "__pycache__"] def __init__(self, prompt_flow_path: Union[Path, str]): super(PromptflowIgnoreFile, self).__init__(prompt_flow_path) self._path = Path(prompt_flow_path) self._ignore_tools_json = False @property def base_path(self) -> Path: return self._path def _get_ignore_list(self): """Get ignore list from ignore file contents.""" if not self.exists(): return [] base_ignore = get_ignore_file(self.base_path) result = self.IGNORE_FILE + base_ignore._get_ignore_list() if self._ignore_tools_json: result.append(f"{PROMPT_FLOW_DIR_NAME}/{FLOW_TOOLS_JSON}") return result def _generate_meta_from_files( tools: List[Tuple[str, str]], flow_directory: Path, tools_dict: dict, exception_dict: dict ) -> None: with _change_working_dir(flow_directory), inject_sys_path(flow_directory): for source, tool_type in tools: try: tools_dict[source] = generate_tool_meta_dict_by_file(source, ToolType(tool_type)) except Exception as e: exception_dict[source] = str(e) def _generate_tool_meta( flow_directory: Path, tools: List[Tuple[str, str]], raise_error: bool, timeout: int, *, include_errors_in_output: bool = False, load_in_subprocess: bool = True, ) -> Dict[str, dict]: """Generate tool meta from files. :param flow_directory: flow directory :param tools: tool list :param raise_error: whether raise error when generate meta failed :param timeout: timeout for generate meta :param include_errors_in_output: whether include errors in output :param load_in_subprocess: whether load tool meta with subprocess to prevent system path disturb. Default is True. If set to False, will load tool meta in sync mode and timeout need to be handled outside current process. :return: tool meta dict """ if load_in_subprocess: # use multiprocess generate to avoid system path disturb manager = multiprocessing.Manager() tools_dict = manager.dict() exception_dict = manager.dict() p = multiprocessing.Process( target=_generate_meta_from_files, args=(tools, flow_directory, tools_dict, exception_dict) ) p.start() p.join(timeout=timeout) if p.is_alive(): logger.warning(f"Generate meta timeout after {timeout} seconds, terminate the process.") p.terminate() p.join() else: tools_dict, exception_dict = {}, {} # There is no built-in method to forcefully stop a running thread/coroutine in Python # because abruptly stopping a thread can cause issues like resource leaks, # deadlocks, or inconsistent states. # Caller needs to handle the timeout outside current process. logger.warning( "Generate meta in current process and timeout won't take effect. " "Please handle timeout manually outside current process." ) _generate_meta_from_files(tools, flow_directory, tools_dict, exception_dict) res = {source: tool for source, tool in tools_dict.items()} for source in res: # remove name in tool meta res[source].pop("name") # convert string Enum to string if isinstance(res[source]["type"], Enum): res[source]["type"] = res[source]["type"].value # not all tools have inputs, so check first if "inputs" in res[source]: for tool_input in res[source]["inputs"]: tool_input_type = res[source]["inputs"][tool_input]["type"] for i in range(len(tool_input_type)): if isinstance(tool_input_type[i], Enum): tool_input_type[i] = tool_input_type[i].value # collect errors and print warnings errors = { source: exception for source, exception in exception_dict.items() } # for not processed tools, regard as timeout error for source, _ in tools: if source not in res and source not in errors: errors[source] = f"Generate meta timeout for source {source!r}." for source in errors: if include_errors_in_output: res[source] = errors[source] else: logger.warning(f"Generate meta for source {source!r} failed: {errors[source]}.") if raise_error and len(errors) > 0: error_message = "Generate meta failed, detail error(s):\n" + json.dumps(errors, indent=4) raise GenerateFlowToolsJsonError(error_message) return res def _retrieve_tool_func_result(func_call_scenario: str, function_config: Dict): """Retrieve tool func result according to func_call_scenario. :param func_call_scenario: function call scenario :param function_config: function config in tool meta. Should contain'func_path' and 'func_kwargs'. :return: func call result according to func_call_scenario. """ func_path = function_config.get("func_path", "") func_kwargs = function_config.get("func_kwargs", {}) # May call azure control plane api in the custom function to list Azure resources. # which may need Azure workspace triple. # TODO: move this method to a common place. from promptflow._cli._utils import get_workspace_triad_from_local workspace_triad = get_workspace_triad_from_local() if workspace_triad.subscription_id and workspace_triad.resource_group_name and workspace_triad.workspace_name: result = retrieve_tool_func_result(func_call_scenario, func_path, func_kwargs, workspace_triad._asdict()) # if no workspace triple available, just skip. else: result = retrieve_tool_func_result(func_call_scenario, func_path, func_kwargs) result_with_log = {"result": result, "logs": {}} return result_with_log def _gen_dynamic_list(function_config: Dict) -> List: """Generate dynamic list for a tool input. :param function_config: function config in tool meta. Should contain'func_path' and 'func_kwargs'. :return: a list of tool input dynamic enums. """ func_path = function_config.get("func_path", "") func_kwargs = function_config.get("func_kwargs", {}) # May call azure control plane api in the custom function to list Azure resources. # which may need Azure workspace triple. # TODO: move this method to a common place. from promptflow._cli._utils import get_workspace_triad_from_local workspace_triad = get_workspace_triad_from_local() if workspace_triad.subscription_id and workspace_triad.resource_group_name and workspace_triad.workspace_name: return gen_dynamic_list(func_path, func_kwargs, workspace_triad._asdict()) # if no workspace triple available, just skip. else: return gen_dynamic_list(func_path, func_kwargs) def _generate_package_tools(keys: Optional[List[str]] = None) -> dict: from promptflow._core.tools_manager import collect_package_tools return collect_package_tools(keys=keys) def _update_involved_tools_and_packages( _node, _node_path, *, tools: List, used_packages: Set, source_path_mapping: Dict[str, List[str]], ): source, tool_type = pydash.get(_node, "source.path", None), _node.get("type", None) used_packages.add(pydash.get(_node, "source.tool", None)) if source is None or tool_type is None: return # for custom LLM tool, its source points to the used prompt template so handle it as prompt tool if tool_type == ToolType.CUSTOM_LLM: tool_type = ToolType.PROMPT if pydash.get(_node, "source.type") not in ["code", "package_with_prompt"]: return pair = (source, tool_type.lower()) if pair not in tools: tools.append(pair) source_path_mapping[source].append(f"{_node_path}.source.path") def _get_involved_code_and_package( data: dict, ) -> Tuple[List[Tuple[str, str]], Set[str], Dict[str, List[str]]]: tools = [] # List[Tuple[source_file, tool_type]] used_packages = set() source_path_mapping = collections.defaultdict(list) for node_i, node in enumerate(data[NODES]): _update_involved_tools_and_packages( node, f"{NODES}.{node_i}", tools=tools, used_packages=used_packages, source_path_mapping=source_path_mapping, ) # understand DAG to parse variants # TODO: should we allow source to appear both in node and node variants? if node.get(USE_VARIANTS) is True: node_variants = data[NODE_VARIANTS][node["name"]] for variant_id in node_variants[VARIANTS]: node_with_variant = node_variants[VARIANTS][variant_id][NODE] _update_involved_tools_and_packages( node_with_variant, f"{NODE_VARIANTS}.{node['name']}.{VARIANTS}.{variant_id}.{NODE}", tools=tools, used_packages=used_packages, source_path_mapping=source_path_mapping, ) if None in used_packages: used_packages.remove(None) return tools, used_packages, source_path_mapping def generate_flow_tools_json( flow_directory: Union[str, Path], dump: bool = True, raise_error: bool = True, timeout: int = FLOW_TOOLS_JSON_GEN_TIMEOUT, *, include_errors_in_output: bool = False, target_source: str = None, used_packages_only: bool = False, source_path_mapping: Dict[str, List[str]] = None, ) -> dict: """Generate flow.tools.json for a flow directory. :param flow_directory: path to flow directory. :param dump: whether to dump to .promptflow/flow.tools.json, default value is True. :param raise_error: whether to raise the error, default value is True. :param timeout: timeout for generation, default value is 60 seconds. :param include_errors_in_output: whether to include error messages in output, default value is False. :param target_source: the source name to filter result, default value is None. Note that we will update system path in coroutine if target_source is provided given it's expected to be from a specific cli call. :param used_packages_only: whether to only include used packages, default value is False. :param source_path_mapping: if specified, record yaml paths for each source. """ flow_directory = Path(flow_directory).resolve() # parse flow DAG data = load_yaml(flow_directory / DAG_FILE_NAME) tools, used_packages, _source_path_mapping = _get_involved_code_and_package(data) # update passed in source_path_mapping if specified if source_path_mapping is not None: source_path_mapping.update(_source_path_mapping) # filter tools by target_source if specified if target_source is not None: tools = list(filter(lambda x: x[0] == target_source, tools)) # generate content # TODO: remove type in tools (input) and code (output) flow_tools = { "code": _generate_tool_meta( flow_directory, tools, raise_error=raise_error, timeout=timeout, include_errors_in_output=include_errors_in_output, # we don't need to protect system path according to the target usage when target_source is specified load_in_subprocess=target_source is None, ), # specified source may only appear in code tools "package": {} if target_source is not None else _generate_package_tools(keys=list(used_packages) if used_packages_only else None), } if dump: # dump as flow.tools.json promptflow_folder = flow_directory / PROMPT_FLOW_DIR_NAME promptflow_folder.mkdir(exist_ok=True) with open(promptflow_folder / FLOW_TOOLS_JSON, mode="w", encoding=DEFAULT_ENCODING) as f: json.dump(flow_tools, f, indent=4) return flow_tools class ClientUserAgentUtil: """SDK/CLI side user agent utilities.""" @classmethod def _get_context(cls): from promptflow._core.operation_context import OperationContext return OperationContext.get_instance() @classmethod def get_user_agent(cls): from promptflow._core.operation_context import OperationContext context = cls._get_context() # directly get from context since client side won't need promptflow/xxx. return context.get(OperationContext.USER_AGENT_KEY, "").strip() @classmethod def append_user_agent(cls, user_agent: Optional[str]): if not user_agent: return context = cls._get_context() context.append_user_agent(user_agent) @classmethod def update_user_agent_from_env_var(cls): # this is for backward compatibility: we should use PF_USER_AGENT in newer versions. for env_name in [USER_AGENT, PF_USER_AGENT]: if env_name in os.environ: cls.append_user_agent(os.environ[env_name]) @classmethod def update_user_agent_from_config(cls): """Update user agent from config. 1p customer will set it. We'll add PFCustomer_ as prefix.""" from promptflow._sdk._configuration import Configuration config = Configuration.get_instance() user_agent = config.get_user_agent() if user_agent: cls.append_user_agent(user_agent) def setup_user_agent_to_operation_context(user_agent): """Setup user agent to OperationContext. For calls from extension, ua will be like: prompt-flow-extension/ promptflow-cli/ promptflow-sdk/ For calls from CLI, ua will be like: promptflow-cli/ promptflow-sdk/ For calls from SDK, ua will be like: promptflow-sdk/ For 1p customer call which set user agent in config, ua will be like: PFCustomer_XXX/ """ # add user added UA after SDK/CLI ClientUserAgentUtil.append_user_agent(user_agent) ClientUserAgentUtil.update_user_agent_from_env_var() ClientUserAgentUtil.update_user_agent_from_config() return ClientUserAgentUtil.get_user_agent() def call_from_extension() -> bool: """Return true if current request is from extension.""" ClientUserAgentUtil.update_user_agent_from_env_var() user_agent = ClientUserAgentUtil.get_user_agent() return EXTENSION_UA in user_agent def generate_random_string(length: int = 6) -> str: import random import string return "".join(random.choice(string.ascii_lowercase) for _ in range(length)) def copy_tree_respect_template_and_ignore_file(source: Path, target: Path, render_context: dict = None): def is_template(path: str): return path.endswith(".jinja2") for source_path, target_path in get_upload_files_from_folder( path=source, ignore_file=PromptflowIgnoreFile(prompt_flow_path=source), ): (target / target_path).parent.mkdir(parents=True, exist_ok=True) if render_context is None or not is_template(source_path): shutil.copy(source_path, target / target_path) else: (target / target_path[: -len(".jinja2")]).write_bytes( # always use unix line ending render_jinja_template(source_path, **render_context) .encode("utf-8") .replace(b"\r\n", b"\n"), ) def get_local_connections_from_executable( executable, client, connections_to_ignore: List[str] = None, connections_to_add: List[str] = None ): """Get local connections from executable. executable: The executable flow object. client: Local client to get connections. connections_to_ignore: The connection names to ignore when getting connections. connections_to_add: The connection names to add when getting connections. """ connection_names = executable.get_connection_names() if connections_to_add: connection_names.update(connections_to_add) connections_to_ignore = connections_to_ignore or [] result = {} for n in connection_names: if n not in connections_to_ignore: conn = client.connections.get(name=n, with_secrets=True) result[n] = conn._to_execution_connection_dict() return result def _generate_connections_dir(): # Get Python executable path python_path = sys.executable # Hash the Python executable path hash_object = hashlib.sha1(python_path.encode()) hex_dig = hash_object.hexdigest() # Generate the connections system path using the hash connections_dir = (HOME_PROMPT_FLOW_DIR / "envs" / hex_dig / "connections").resolve() return connections_dir _refresh_connection_dir_lock = FileLock(REFRESH_CONNECTIONS_DIR_LOCK_PATH) # This function is used by extension to generate the connection files every time collect tools. def refresh_connections_dir(connection_spec_files, connection_template_yamls): connections_dir = _generate_connections_dir() # Use lock to prevent concurrent access with _refresh_connection_dir_lock: if os.path.isdir(connections_dir): shutil.rmtree(connections_dir) os.makedirs(connections_dir) if connection_spec_files and connection_template_yamls: for connection_name, content in connection_spec_files.items(): file_name = connection_name + ".spec.json" with open(connections_dir / file_name, "w", encoding=DEFAULT_ENCODING) as f: json.dump(content, f, indent=2) # use YAML to dump template file in order to keep the comments for connection_name, content in connection_template_yamls.items(): yaml_data = load_yaml_string(content) file_name = connection_name + ".template.yaml" with open(connections_dir / file_name, "w", encoding=DEFAULT_ENCODING) as f: dump_yaml(yaml_data, f) def dump_flow_result(flow_folder, prefix, flow_result=None, node_result=None, custom_path=None): """Dump flow result for extension. :param flow_folder: The flow folder. :param prefix: The file prefix. :param flow_result: The flow result returned by exec_line. :param node_result: The node result when test node returned by load_and_exec_node. :param custom_path: The custom path to dump flow result. """ if flow_result: flow_serialize_result = { "flow_runs": [serialize(flow_result.run_info)], "node_runs": [serialize(run) for run in flow_result.node_run_infos.values()], } else: flow_serialize_result = { "flow_runs": [], "node_runs": [serialize(node_result)], } dump_folder = Path(flow_folder) / PROMPT_FLOW_DIR_NAME if custom_path is None else Path(custom_path) dump_folder.mkdir(parents=True, exist_ok=True) with open(dump_folder / f"{prefix}.detail.json", "w", encoding=DEFAULT_ENCODING) as f: json.dump(flow_serialize_result, f, indent=2, ensure_ascii=False) if node_result: metrics = flow_serialize_result["node_runs"][0]["metrics"] output = flow_serialize_result["node_runs"][0]["output"] else: metrics = flow_serialize_result["flow_runs"][0]["metrics"] output = flow_serialize_result["flow_runs"][0]["output"] if metrics: with open(dump_folder / f"{prefix}.metrics.json", "w", encoding=DEFAULT_ENCODING) as f: json.dump(metrics, f, indent=2, ensure_ascii=False) if output: with open(dump_folder / f"{prefix}.output.json", "w", encoding=DEFAULT_ENCODING) as f: json.dump(output, f, indent=2, ensure_ascii=False) def read_write_by_user(): return stat.S_IRUSR | stat.S_IWUSR def remove_empty_element_from_dict(obj: dict) -> dict: """Remove empty element from dict, e.g. {"a": 1, "b": {}} -> {"a": 1}""" new_dict = {} for key, value in obj.items(): if isinstance(value, dict): value = remove_empty_element_from_dict(value) if value is not None: new_dict[key] = value return new_dict def is_github_codespaces(): # Ref: # https://docs.github.com/en/codespaces/developing-in-a-codespace/default-environment-variables-for-your-codespace return os.environ.get("CODESPACES", None) == "true" def interactive_credential_disabled(): return os.environ.get(PF_NO_INTERACTIVE_LOGIN, "false").lower() == "true" def is_from_cli(): from promptflow._cli._user_agent import USER_AGENT as CLI_UA return CLI_UA in ClientUserAgentUtil.get_user_agent() def is_url(value: Union[PathLike, str]) -> bool: try: result = urlparse(str(value)) return all([result.scheme, result.netloc]) except ValueError: return False def is_remote_uri(obj) -> bool: # return True if it's supported remote uri if isinstance(obj, str): if obj.startswith(REMOTE_URI_PREFIX): # azureml: started, azureml:name:version, azureml://xxx return True elif is_url(obj): return True return False def parse_remote_flow_pattern(flow: object) -> str: # Check if the input matches the correct pattern flow_name = None error_message = ( f"Invalid remote flow pattern, got {flow!r} while expecting " f"a remote workspace flow like '{REMOTE_URI_PREFIX}<flow-name>', or a remote registry flow like " f"'{REMOTE_URI_PREFIX}//registries/<registry-name>/models/<flow-name>/versions/<version>'" ) if not isinstance(flow, str) or not flow.startswith(REMOTE_URI_PREFIX): raise UserErrorException(error_message) # check for registry flow pattern if flow.startswith(REGISTRY_URI_PREFIX): pattern = r"azureml://registries/.*?/models/(?P<name>.*?)/versions/(?P<version>.*?)$" match = re.match(pattern, flow) if not match or len(match.groups()) != 2: raise UserErrorException(error_message) flow_name, _ = match.groups() # check for workspace flow pattern elif flow.startswith(REMOTE_URI_PREFIX): pattern = r"azureml:(?P<name>.*?)$" match = re.match(pattern, flow) if not match or len(match.groups()) != 1: raise UserErrorException(error_message) flow_name = match.groups()[0] return flow_name def get_connection_operation(connection_provider: str, credential=None, user_agent: str = None): """ Get connection operation based on connection provider. This function will be called by PFClient, so please do not refer to PFClient in this function. :param connection_provider: Connection provider, e.g. local, azureml, azureml://subscriptions..., etc. :type connection_provider: str :param credential: Credential when remote provider, default to chained credential DefaultAzureCredential. :type credential: object :param user_agent: User Agent :type user_agent: str """ if connection_provider == ConnectionProvider.LOCAL.value: from promptflow._sdk.operations._connection_operations import ConnectionOperations logger.debug("PFClient using local connection operations.") connection_operation = ConnectionOperations() elif connection_provider.startswith(ConnectionProvider.AZUREML.value): from promptflow._sdk.operations._local_azure_connection_operations import LocalAzureConnectionOperations logger.debug(f"PFClient using local azure connection operations with credential {credential}.") if user_agent is None: connection_operation = LocalAzureConnectionOperations(connection_provider, credential=credential) else: connection_operation = LocalAzureConnectionOperations(connection_provider, user_agent=user_agent) else: error = ValueError(f"Unsupported connection provider: {connection_provider}") raise UserErrorException( target=ErrorTarget.CONTROL_PLANE_SDK, message=str(error), error=error, ) return connection_operation # extract open read/write as partial to centralize the encoding read_open = partial(open, mode="r", encoding=DEFAULT_ENCODING) write_open = partial(open, mode="w", encoding=DEFAULT_ENCODING) # extract some file operations inside this file def json_load(file) -> str: with read_open(file) as f: return json.load(f) def json_dump(obj, file) -> None: with write_open(file) as f: json.dump(obj, f, ensure_ascii=False) def pd_read_json(file) -> "DataFrame": import pandas as pd with read_open(file) as f: return pd.read_json(f, orient="records", lines=True)
promptflow/src/promptflow/promptflow/_sdk/_utils.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_sdk/_utils.py", "repo_id": "promptflow", "token_count": 16979 }
42
Exported entry file & its dependencies are located in the same folder. The structure is as below: - flow: the folder contains all the flow files - connections: the folder contains yaml files to create all related connections - app.py: the entry file is included as the entry point for the bundled application. - app.spec: the spec file tells PyInstaller how to process your script. - main.py: it will start streamlit service and be called by the entry file. - settings.json: a json file to store the settings of the executable application. - build: a folder contains various log and working files. - dist: a folder contains the executable application. - README.md: the readme file to describe how to use the exported files and scripts. Please refer to [official doc](https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/index.html) for more details about how to use the exported files and scripts.
promptflow/src/promptflow/promptflow/_sdk/data/executable/README.md/0
{ "file_path": "promptflow/src/promptflow/promptflow/_sdk/data/executable/README.md", "repo_id": "promptflow", "token_count": 223 }
43
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- from .core import MutableValidationResult, ValidationResult, ValidationResultBuilder from .schema import SchemaValidatableMixin __all__ = [ "SchemaValidatableMixin", "MutableValidationResult", "ValidationResult", "ValidationResultBuilder", ]
promptflow/src/promptflow/promptflow/_sdk/entities/_validation/__init__.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_sdk/entities/_validation/__init__.py", "repo_id": "promptflow", "token_count": 106 }
44
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- from marshmallow import fields, post_load, pre_load from promptflow._sdk._constants import ExperimentNodeType from promptflow._sdk.schemas._base import PatchedSchemaMeta, YamlFileSchema from promptflow._sdk.schemas._fields import ( LocalPathField, NestedField, PrimitiveValueField, StringTransformedEnum, UnionField, ) from promptflow._sdk.schemas._run import RunSchema class CommandNodeSchema(YamlFileSchema): # TODO: Not finalized now. Need to revisit. name = fields.Str(required=True) display_name = fields.Str() type = StringTransformedEnum(allowed_values=ExperimentNodeType.COMMAND, required=True) code = LocalPathField(default=".") command = fields.Str(required=True) inputs = fields.Dict(keys=fields.Str) outputs = fields.Dict(keys=fields.Str, values=LocalPathField(allow_none=True)) environment_variables = fields.Dict(keys=fields.Str, values=fields.Str) # runtime field, only available for cloud run runtime = fields.Str() # TODO: Revisit the required fields class FlowNodeSchema(RunSchema): class Meta: exclude = ["flow", "column_mapping", "data", "run"] name = fields.Str(required=True) type = StringTransformedEnum(allowed_values=ExperimentNodeType.FLOW, required=True) inputs = fields.Dict(keys=fields.Str) path = UnionField([LocalPathField(required=True), fields.Str(required=True)]) @pre_load def warning_unknown_fields(self, data, **kwargs): # Override to avoid warning here return data class ExperimentDataSchema(metaclass=PatchedSchemaMeta): name = fields.Str(required=True) path = LocalPathField(required=True) class ExperimentInputSchema(metaclass=PatchedSchemaMeta): name = fields.Str(required=True) type = fields.Str(required=True) default = PrimitiveValueField() class ExperimentTemplateSchema(YamlFileSchema): name = fields.Str() description = fields.Str() data = fields.List(NestedField(ExperimentDataSchema)) # Optional inputs = fields.List(NestedField(ExperimentInputSchema)) # Optional nodes = fields.List( UnionField( [ NestedField(CommandNodeSchema), NestedField(FlowNodeSchema), ] ), required=True, ) @post_load def resolve_nodes(self, data, **kwargs): from promptflow._sdk.entities._experiment import CommandNode, FlowNode nodes = data.get("nodes", []) resolved_nodes = [] for node in nodes: if not isinstance(node, dict): continue node_type = node.get("type", None) if node_type == ExperimentNodeType.FLOW: resolved_nodes.append(FlowNode._load_from_dict(data=node, context=self.context, additional_message="")) elif node_type == ExperimentNodeType.COMMAND: resolved_nodes.append( CommandNode._load_from_dict(data=node, context=self.context, additional_message="") ) else: raise ValueError(f"Unknown node type {node_type} for node {node}.") data["nodes"] = resolved_nodes return data @post_load def resolve_data_and_inputs(self, data, **kwargs): from promptflow._sdk.entities._experiment import ExperimentData, ExperimentInput def resolve_resource(key, cls): items = data.get(key, []) resolved_result = [] for item in items: if not isinstance(item, dict): continue resolved_result.append( cls._load_from_dict( data=item, context=self.context, additional_message=f"Failed to load {cls.__name__}", ) ) return resolved_result data["data"] = resolve_resource("data", ExperimentData) data["inputs"] = resolve_resource("inputs", ExperimentInput) return data class ExperimentSchema(ExperimentTemplateSchema): node_runs = fields.Dict(keys=fields.Str(), values=fields.Str()) # TODO: Revisit this status = fields.Str(dump_only=True) properties = fields.Dict(keys=fields.Str(), values=fields.Str(allow_none=True)) created_on = fields.Str(dump_only=True) last_start_time = fields.Str(dump_only=True) last_end_time = fields.Str(dump_only=True)
promptflow/src/promptflow/promptflow/_sdk/schemas/_experiment.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_sdk/schemas/_experiment.py", "repo_id": "promptflow", "token_count": 1863 }
45
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- # This file is for open source, # so it should not contain any dependency on azure or azureml-related packages. import json import logging import os import sys from contextvars import ContextVar from dataclasses import dataclass from functools import partial from typing import List, Optional from promptflow._constants import PF_LOGGING_LEVEL from promptflow._utils.credential_scrubber import CredentialScrubber from promptflow._utils.exception_utils import ExceptionPresenter from promptflow.contracts.run_mode import RunMode # The maximum length of logger name is 18 ("promptflow-runtime"). # The maximum digit length of process id is 5. Fix the field width to 7. # So fix the length of these fields in the formatter. # May need to change if logger name/process id length changes. LOG_FORMAT = "%(asctime)s %(process)7d %(name)-18s %(levelname)-8s %(message)s" DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S %z" class CredentialScrubberFormatter(logging.Formatter): """Formatter that scrubs credentials in logs.""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._default_scrubber = CredentialScrubber() self._context_var = ContextVar("credential_scrubber", default=None) @property def credential_scrubber(self): credential_scrubber = self._context_var.get() if credential_scrubber: return credential_scrubber return self._default_scrubber def set_credential_list(self, credential_list: List[str]): """Set credential list, which will be scrubbed in logs.""" credential_scrubber = CredentialScrubber() for c in credential_list: credential_scrubber.add_str(c) self._context_var.set(credential_scrubber) def clear(self): """Clear context variable.""" self._context_var.set(None) def format(self, record): """Override logging.Formatter's format method and remove credentials from log.""" s: str = super().format(record) s = self._handle_traceback(s, record) s = self._handle_customer_content(s, record) return self.credential_scrubber.scrub(s) def _handle_customer_content(self, s: str, record: logging.LogRecord) -> str: """Handle customer content in log message. Derived class can override this method to handle customer content in log. """ # If log record does not have "customer_content" field, return input logging string directly. if not hasattr(record, "customer_content"): return s customer_content = record.customer_content if isinstance(customer_content, Exception): # If customer_content is an exception, convert it to string. customer_str = self._convert_exception_to_str(customer_content) elif isinstance(customer_content, str): customer_str = customer_content else: customer_str = str(customer_content) return s.replace("{customer_content}", customer_str) def _handle_traceback(self, s: str, record: logging.LogRecord) -> str: """Interface method for handling traceback in log message. Derived class can override this method to handle traceback in log. """ return s def _convert_exception_to_str(self, ex: Exception) -> str: """Convert exception a user-friendly string.""" try: return json.dumps(ExceptionPresenter.create(ex).to_dict(include_debug_info=True), indent=2) except: # noqa: E722 return str(ex) class FileHandler: """Write compliant log to a file.""" def __init__(self, file_path: str, formatter: Optional[logging.Formatter] = None): self._stream_handler = self._get_stream_handler(file_path) if formatter is None: # Default formatter to scrub credentials in log message, exception and stack trace. self._formatter = CredentialScrubberFormatter(fmt=LOG_FORMAT, datefmt=DATETIME_FORMAT) else: self._formatter = formatter self._stream_handler.setFormatter(self._formatter) def set_credential_list(self, credential_list: List[str]): """Set credential list, which will be scrubbed in logs.""" self._formatter.set_credential_list(credential_list) def emit(self, record: logging.LogRecord): """Write logs.""" self._stream_handler.emit(record) def close(self): """Close stream handler.""" self._stream_handler.close() self._formatter.clear() def _get_stream_handler(self, file_path) -> logging.StreamHandler: """This method can be overridden by derived class to save log file in cloud.""" return logging.FileHandler(file_path, encoding="UTF-8") class FileHandlerConcurrentWrapper(logging.Handler): """Wrap context-local FileHandler instance for thread safety. A logger instance can write different log to different files in different contexts. """ def __init__(self): super().__init__() self._context_var = ContextVar("handler", default=None) @property def handler(self) -> FileHandler: return self._context_var.get() @handler.setter def handler(self, handler: FileHandler): self._context_var.set(handler) def emit(self, record: logging.LogRecord): """Override logging.Handler's emit method. Get inner file handler in current context and write log. """ stream_handler: FileHandler = self._context_var.get() if stream_handler is None: return stream_handler.emit(record) def clear(self): """Close file handler and clear context variable.""" handler: FileHandler = self._context_var.get() if handler: try: handler.close() except: # NOQA: E722 # Do nothing if handler close failed. pass self._context_var.set(None) valid_logging_level = {"CRITICAL", "FATAL", "ERROR", "WARN", "WARNING", "INFO", "DEBUG", "NOTSET"} def get_pf_logging_level(default=logging.INFO): logging_level = os.environ.get(PF_LOGGING_LEVEL, None) if logging_level not in valid_logging_level: # Fall back to info if user input is invalid. logging_level = default return logging_level def get_logger(name: str) -> logging.Logger: """Get logger used during execution.""" logger = logging.Logger(name) logger.setLevel(get_pf_logging_level()) logger.addHandler(FileHandlerConcurrentWrapper()) stdout_handler = logging.StreamHandler(sys.stdout) stdout_handler.setFormatter(CredentialScrubberFormatter(fmt=LOG_FORMAT, datefmt=DATETIME_FORMAT)) logger.addHandler(stdout_handler) return logger # Logs by flow_logger will only be shown in flow mode. # These logs should contain all detailed logs from executor and runtime. flow_logger = get_logger("execution.flow") # Logs by bulk_logger will only be shown in bulktest and eval modes. # These logs should contain overall progress logs and error logs. bulk_logger = get_logger("execution.bulk") # Logs by logger will be shown in all the modes above, # such as error logs. logger = get_logger("execution") logger_contexts = [] @dataclass class LogContext: """A context manager to setup logger context for input_logger, logger, flow_logger and bulk_logger.""" file_path: str # Log file path. run_mode: Optional[RunMode] = RunMode.Test credential_list: Optional[List[str]] = None # These credentials will be scrubbed in logs. input_logger: logging.Logger = None # If set, then context will also be set for input_logger. def get_initializer(self): return partial( LogContext, file_path=self.file_path, run_mode=self.run_mode, credential_list=self.credential_list ) @staticmethod def get_current() -> Optional["LogContext"]: global logger_contexts if logger_contexts: return logger_contexts[-1] return None @staticmethod def set_current(context: "LogContext"): global logger_contexts if isinstance(context, LogContext): logger_contexts.append(context) @staticmethod def clear_current(): global logger_contexts if logger_contexts: logger_contexts.pop() def __enter__(self): self._set_log_path() self._set_credential_list() LogContext.set_current(self) def __exit__(self, *args): """Clear context-local variables.""" all_logger_list = [logger, flow_logger, bulk_logger] if self.input_logger: all_logger_list.append(self.input_logger) for logger_ in all_logger_list: for handler in logger_.handlers: if isinstance(handler, FileHandlerConcurrentWrapper): handler.clear() elif isinstance(handler.formatter, CredentialScrubberFormatter): handler.formatter.clear() LogContext.clear_current() def _set_log_path(self): if not self.file_path: return logger_list = self._get_loggers_to_set_path() for logger_ in logger_list: for log_handler in logger_.handlers: if isinstance(log_handler, FileHandlerConcurrentWrapper): handler = FileHandler(self.file_path) log_handler.handler = handler def _set_credential_list(self): # Set credential list to all loggers. all_logger_list = self._get_execute_loggers_list() if self.input_logger: all_logger_list.append(self.input_logger) credential_list = self.credential_list or [] for logger_ in all_logger_list: for handler in logger_.handlers: if isinstance(handler, FileHandlerConcurrentWrapper) and handler.handler: handler.handler.set_credential_list(credential_list) elif isinstance(handler.formatter, CredentialScrubberFormatter): handler.formatter.set_credential_list(credential_list) def _get_loggers_to_set_path(self) -> List[logging.Logger]: logger_list = [logger] if self.input_logger: logger_list.append(self.input_logger) # For Batch run mode, set log path for bulk_logger, # otherwise for flow_logger. if self.run_mode == RunMode.Batch: logger_list.append(bulk_logger) else: logger_list.append(flow_logger) return logger_list @classmethod def _get_execute_loggers_list(cls) -> List[logging.Logger]: # return all loggers for executor return [logger, flow_logger, bulk_logger] def update_log_path(log_path: str, input_logger: logging.Logger = None): logger_list = [logger, bulk_logger, flow_logger] if input_logger: logger_list.append(input_logger) for logger_ in logger_list: update_single_log_path(log_path, logger_) def update_single_log_path(log_path: str, logger_: logging.Logger): for wrapper in logger_.handlers: if isinstance(wrapper, FileHandlerConcurrentWrapper): handler: FileHandler = wrapper.handler if handler: wrapper.handler = type(handler)(log_path, handler._formatter) def scrub_credentials(s: str): """Scrub credentials in string s. For example, for input string: "print accountkey=accountKey", the output will be: "print accountkey=**data_scrubbed**" """ for h in logger.handlers: if isinstance(h, FileHandlerConcurrentWrapper): if h.handler and h.handler._formatter: credential_scrubber = h.handler._formatter.credential_scrubber if credential_scrubber: return credential_scrubber.scrub(s) return CredentialScrubber().scrub(s) class LoggerFactory: @staticmethod def get_logger(name: str, verbosity: int = logging.INFO, target_stdout: bool = False): logger = logging.getLogger(name) logger.propagate = False # Set default logger level to debug, we are using handler level to control log by default logger.setLevel(logging.DEBUG) # Use env var at first, then use verbosity verbosity = get_pf_logging_level(default=None) or verbosity if not LoggerFactory._find_handler(logger, logging.StreamHandler): LoggerFactory._add_handler(logger, verbosity, target_stdout) # TODO: Find a more elegant way to set the logging level for azure.core.pipeline.policies._universal azure_logger = logging.getLogger("azure.core.pipeline.policies._universal") azure_logger.setLevel(logging.DEBUG) LoggerFactory._add_handler(azure_logger, logging.DEBUG, target_stdout) return logger @staticmethod def _find_handler(logger: logging.Logger, handler_type: type) -> Optional[logging.Handler]: for log_handler in logger.handlers: if isinstance(log_handler, handler_type): return log_handler return None @staticmethod def _add_handler(logger: logging.Logger, verbosity: int, target_stdout: bool = False) -> None: # set target_stdout=True can log data into sys.stdout instead of default sys.stderr, in this way # logger info and python print result can be synchronized handler = logging.StreamHandler(stream=sys.stdout) if target_stdout else logging.StreamHandler() formatter = logging.Formatter("[%(asctime)s][%(name)s][%(levelname)s] - %(message)s") handler.setFormatter(formatter) handler.setLevel(verbosity) logger.addHandler(handler) def get_cli_sdk_logger(): """Get logger used by CLI SDK.""" # cli sdk logger default logging level is WARNING # here the logger name "promptflow" is from promptflow._sdk._constants.LOGGER_NAME, # to avoid circular import error, use plain string here instead of importing from _constants # because this function is also called in _prepare_home_dir which is in _constants return LoggerFactory.get_logger("promptflow", verbosity=logging.WARNING)
promptflow/src/promptflow/promptflow/_utils/logger_utils.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/_utils/logger_utils.py", "repo_id": "promptflow", "token_count": 5614 }
46
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import copy import os.path from contextlib import contextmanager from os import PathLike from pathlib import Path from typing import Dict, List, Optional, Union import pydash from promptflow._sdk._constants import DAG_FILE_NAME, SERVICE_FLOW_TYPE_2_CLIENT_FLOW_TYPE, AzureFlowSource, FlowType from promptflow.azure._ml import AdditionalIncludesMixin, Code from ..._sdk._utils import PromptflowIgnoreFile, load_yaml, remove_empty_element_from_dict from ..._utils.flow_utils import dump_flow_dag, load_flow_dag from ..._utils.logger_utils import LoggerFactory from .._constants._flow import ADDITIONAL_INCLUDES, DEFAULT_STORAGE, ENVIRONMENT, PYTHON_REQUIREMENTS_TXT from .._restclient.flow.models import FlowDto # pylint: disable=redefined-builtin, unused-argument, f-string-without-interpolation logger = LoggerFactory.get_logger(__name__) class Flow(AdditionalIncludesMixin): DEFAULT_REQUIREMENTS_FILE_NAME = "requirements.txt" def __init__( self, path: Union[str, PathLike], name: Optional[str] = None, type: Optional[str] = None, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None, **kwargs, ): self._flow_source = kwargs.pop("flow_source", AzureFlowSource.LOCAL) self.path = path self.name = name self.type = type or FlowType.STANDARD self.display_name = kwargs.get("display_name", None) or name self.description = description self.tags = tags self.owner = kwargs.get("owner", None) self.is_archived = kwargs.get("is_archived", None) self.created_date = kwargs.get("created_date", None) self.flow_portal_url = kwargs.get("flow_portal_url", None) if self._flow_source == AzureFlowSource.LOCAL: absolute_path = self._validate_flow_from_source(path) # flow snapshot folder self.code = absolute_path.parent.as_posix() self._code_uploaded = False self.path = absolute_path.name self._flow_dict = self._load_flow_yaml(absolute_path) self.display_name = self.display_name or absolute_path.parent.name self.description = description or self._flow_dict.get("description", None) self.tags = tags or self._flow_dict.get("tags", None) elif self._flow_source == AzureFlowSource.PF_SERVICE: self.code = kwargs.get("flow_resource_id", None) elif self._flow_source == AzureFlowSource.INDEX: self.code = kwargs.get("entity_id", None) def _validate_flow_from_source(self, source: Union[str, PathLike]) -> Path: """Validate flow from source. :param source: The source of the flow. :type source: Union[str, PathLike] """ absolute_path = Path(source).resolve().absolute() if absolute_path.is_dir(): absolute_path = absolute_path / DAG_FILE_NAME if not absolute_path.exists(): raise ValueError(f"Flow file {absolute_path.as_posix()} does not exist.") return absolute_path def _load_flow_yaml(self, path: Union[str, Path]) -> Dict: """Load flow yaml file. :param path: The path of the flow yaml file. :type path: str """ return load_yaml(path) @classmethod def _resolve_requirements(cls, flow_path: Union[str, Path], flow_dag: dict): """If requirements.txt exists, add it to the flow snapshot. Return True if flow_dag is updated.""" flow_dir = Path(flow_path) if not (flow_dir / cls.DEFAULT_REQUIREMENTS_FILE_NAME).exists(): return False if pydash.get(flow_dag, f"{ENVIRONMENT}.{PYTHON_REQUIREMENTS_TXT}"): return False logger.debug( f"requirements.txt is found in the flow folder: {flow_path.resolve().as_posix()}, " "adding it to flow.dag.yaml." ) pydash.set_(flow_dag, f"{ENVIRONMENT}.{PYTHON_REQUIREMENTS_TXT}", cls.DEFAULT_REQUIREMENTS_FILE_NAME) return True @classmethod def _remove_additional_includes(cls, flow_dag: dict): """Remove additional includes from flow dag. Return True if removed.""" if ADDITIONAL_INCLUDES not in flow_dag: return False logger.debug("Additional includes are found in the flow dag, removing them from flow.dag.yaml after resolved.") flow_dag.pop(ADDITIONAL_INCLUDES, None) return True # region AdditionalIncludesMixin @contextmanager def _try_build_local_code(self) -> Optional[Code]: """Try to create a Code object pointing to local code and yield it. If there is no local code to upload, yield None. Otherwise, yield a Code object pointing to the code. """ with super()._try_build_local_code() as code: dag_updated = False if isinstance(code, Code): flow_dir = Path(code.path) _, flow_dag = load_flow_dag(flow_path=flow_dir) original_flow_dag = copy.deepcopy(flow_dag) if self._get_all_additional_includes_configs(): # Remove additional include in the flow yaml. dag_updated = self._remove_additional_includes(flow_dag) # promptflow snapshot has specific ignore logic, like it should ignore `.run` by default code._ignore_file = PromptflowIgnoreFile(flow_dir) # promptflow snapshot will always be uploaded to default storage code.datastore = DEFAULT_STORAGE dag_updated = self._resolve_requirements(flow_dir, flow_dag) or dag_updated if dag_updated: dump_flow_dag(flow_dag, flow_dir) try: yield code finally: if dag_updated: dump_flow_dag(original_flow_dag, flow_dir) def _get_base_path_for_code(self) -> Path: """Get base path for additional includes.""" # note that self.code is an absolute path, so it is safe to use it as base path return Path(self.code) def _get_all_additional_includes_configs(self) -> List: """Get all additional include configs. For flow, its additional include need to be read from dag with a helper function. """ from promptflow._sdk._utils import _get_additional_includes return _get_additional_includes(os.path.join(self.code, self.path)) # endregion @classmethod def _from_pf_service(cls, rest_object: FlowDto): return cls( flow_source=AzureFlowSource.PF_SERVICE, path=rest_object.flow_definition_file_path, name=rest_object.flow_id, type=SERVICE_FLOW_TYPE_2_CLIENT_FLOW_TYPE[str(rest_object.flow_type).lower()], description=rest_object.description, tags=rest_object.tags, display_name=rest_object.flow_name, flow_resource_id=rest_object.flow_resource_id, owner=rest_object.owner.as_dict(), is_archived=rest_object.is_archived, created_date=rest_object.created_date, flow_portal_url=rest_object.studio_portal_endpoint, ) @classmethod def _from_index_service(cls, rest_object: Dict): properties = rest_object["properties"] annotations = rest_object["annotations"] flow_type = properties.get("flowType", None).lower() # rag type flow is shown as standard flow in UX, not sure why this type exists in service code if flow_type == "rag": flow_type = FlowType.STANDARD elif flow_type: flow_type = SERVICE_FLOW_TYPE_2_CLIENT_FLOW_TYPE[flow_type] return cls( flow_source=AzureFlowSource.INDEX, path=properties.get("flowDefinitionFilePath", None), name=properties.get("flowId", None), display_name=annotations.get("flowName", None), type=flow_type, description=annotations.get("description", None), tags=annotations.get("tags", None), entity_id=rest_object["entityId"], owner=annotations.get("owner", None), is_archived=annotations.get("isArchived", None), created_date=annotations.get("createdDate", None), ) def _to_dict(self): result = { "name": self.name, "type": self.type, "description": self.description, "tags": self.tags, "path": self.path, "code": str(self.code), "display_name": self.display_name, "owner": self.owner, "is_archived": self.is_archived, "created_date": str(self.created_date), "flow_portal_url": self.flow_portal_url, } return remove_empty_element_from_dict(result)
promptflow/src/promptflow/promptflow/azure/_entities/_flow.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/azure/_entities/_flow.py", "repo_id": "promptflow", "token_count": 3918 }
47
# coding=utf-8 # -------------------------------------------------------------------------- # Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.8.0, generator: @autorest/[email protected]) # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- from ._bulk_runs_operations import BulkRunsOperations from ._connection_operations import ConnectionOperations from ._connections_operations import ConnectionsOperations from ._flow_runs_admin_operations import FlowRunsAdminOperations from ._flow_runtimes_operations import FlowRuntimesOperations from ._flow_runtimes_workspace_independent_operations import FlowRuntimesWorkspaceIndependentOperations from ._flows_operations import FlowsOperations from ._flow_sessions_operations import FlowSessionsOperations from ._flows_provider_operations import FlowsProviderOperations from ._tools_operations import ToolsOperations __all__ = [ 'BulkRunsOperations', 'ConnectionOperations', 'ConnectionsOperations', 'FlowRunsAdminOperations', 'FlowRuntimesOperations', 'FlowRuntimesWorkspaceIndependentOperations', 'FlowsOperations', 'FlowSessionsOperations', 'FlowsProviderOperations', 'ToolsOperations', ]
promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/__init__.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/aio/operations/__init__.py", "repo_id": "promptflow", "token_count": 348 }
48
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import asyncio import signal import threading import uuid from datetime import datetime from pathlib import Path from typing import Any, Dict, List, Mapping, Optional from promptflow._constants import LINE_NUMBER_KEY, LINE_TIMEOUT_SEC, FlowLanguage from promptflow._core._errors import UnexpectedError from promptflow._core.operation_context import OperationContext from promptflow._utils.async_utils import async_run_allowing_running_loop from promptflow._utils.context_utils import _change_working_dir from promptflow._utils.execution_utils import ( apply_default_value_for_input, collect_lines, get_aggregation_inputs_properties, handle_line_failures, ) from promptflow._utils.logger_utils import bulk_logger from promptflow._utils.utils import ( dump_list_to_jsonl, get_int_env_var, log_progress, resolve_dir_to_absolute, transpose, ) from promptflow._utils.yaml_utils import load_yaml from promptflow.batch._base_executor_proxy import AbstractExecutorProxy from promptflow.batch._batch_inputs_processor import BatchInputsProcessor from promptflow.batch._csharp_executor_proxy import CSharpExecutorProxy from promptflow.batch._python_executor_proxy import PythonExecutorProxy from promptflow.batch._result import BatchResult from promptflow.contracts.flow import Flow from promptflow.contracts.run_info import Status from promptflow.exceptions import ErrorTarget, PromptflowException from promptflow.executor._errors import InvalidFlowFileError from promptflow.executor._line_execution_process_pool import signal_handler from promptflow.executor._result import AggregationResult, LineResult from promptflow.executor.flow_validator import FlowValidator from promptflow.storage._run_storage import AbstractRunStorage OUTPUT_FILE_NAME = "output.jsonl" # TODO: will remain consistent with PF_WORKER_COUNT in the future DEFAULT_CONCURRENCY = 10 class BatchEngine: """This class is used to execute flows in batch mode""" executor_proxy_classes: Mapping[str, AbstractExecutorProxy] = { FlowLanguage.Python: PythonExecutorProxy, FlowLanguage.CSharp: CSharpExecutorProxy, } @classmethod def register_executor(cls, type: str, executor_proxy_cls: AbstractExecutorProxy): """Register a executor proxy class for a specific program language. This method allows users to register a executor proxy class for a particular programming language. The executor proxy class will be used when creating an instance of the BatchEngine for flows written in the specified language. :param type: The flow program language of the executor proxy, :type type: str :param executor_proxy_cls: The executor proxy class to be registered. :type executor_proxy_cls: ~promptflow.batch.AbstractExecutorProxy """ cls.executor_proxy_classes[type] = executor_proxy_cls def __init__( self, flow_file: Path, working_dir: Optional[Path] = None, *, connections: Optional[dict] = None, entry: Optional[str] = None, storage: Optional[AbstractRunStorage] = None, batch_timeout_sec: Optional[int] = None, **kwargs, ): """Create a new batch engine instance :param flow_file: The flow file path :type flow_file: Path :param working_dir: The flow working directory path :type working_dir: Optional[Path] :param connections: The connections used in the flow :type connections: Optional[dict] :param storage: The storage to store execution results :type storage: Optional[~promptflow.storage._run_storage.AbstractRunStorage] :param batch_timeout: The timeout of batch run in seconds :type batch_timeout: Optional[int] :param kwargs: The keyword arguments related to creating the executor proxy class :type kwargs: Any """ self._flow_file = flow_file self._working_dir = Flow._resolve_working_dir(flow_file, working_dir) if self._is_eager_flow_yaml(): if Path(flow_file).suffix.lower() in [".yaml", ".yml"]: entry, path = self._parse_eager_flow_yaml() self._flow_file = Path(path) self._is_dag_yaml_flow = False self._program_language = FlowLanguage.Python elif Path(flow_file).suffix.lower() in [".yaml", ".yml"]: self._flow = Flow.from_yaml(flow_file, working_dir=self._working_dir) FlowValidator.ensure_flow_valid_in_batch_mode(self._flow) self._is_dag_yaml_flow = True self._program_language = self._flow.program_language else: raise InvalidFlowFileError(message_format="Unsupported flow file type: {flow_file}.", flow_file=flow_file) self._connections = connections self._entry = entry self._storage = storage self._kwargs = kwargs self._batch_timeout_sec = batch_timeout_sec or get_int_env_var("PF_BATCH_TIMEOUT_SEC") self._line_timeout_sec = get_int_env_var("PF_LINE_TIMEOUT_SEC", LINE_TIMEOUT_SEC) # set it to True when the batch run is canceled self._is_canceled = False def run( self, input_dirs: Dict[str, str], inputs_mapping: Dict[str, str], output_dir: Path, run_id: Optional[str] = None, max_lines_count: Optional[int] = None, raise_on_line_failure: Optional[bool] = False, ) -> BatchResult: """Run flow in batch mode :param input_dirs: The directories path of input files :type input_dirs: Dict[str, str] :param inputs_mapping: The mapping of input names to their corresponding values. :type inputs_mapping: Dict[str, str] :param output_dir: output dir :type output_dir: The directory path of output files :param run_id: The run id of this run :type run_id: Optional[str] :param max_lines_count: The max count of inputs. If it is None, all inputs will be used. :type max_lines_count: Optional[int] :param raise_on_line_failure: Whether to raise exception when a line fails. :type raise_on_line_failure: Optional[bool] :return: The result of this batch run :rtype: ~promptflow.batch._result.BatchResult """ try: self._start_time = datetime.utcnow() with _change_working_dir(self._working_dir): # create executor proxy instance according to the flow program language executor_proxy_cls = self.executor_proxy_classes[self._program_language] self._executor_proxy: AbstractExecutorProxy = async_run_allowing_running_loop( executor_proxy_cls.create, self._flow_file, self._working_dir, connections=self._connections, entry=self._entry, storage=self._storage, **self._kwargs, ) try: # register signal handler for python flow in the main thread # TODO: For all executor proxies that are executed locally, it might be necessary to # register a signal for Ctrl+C in order to customize some actions beyond just killing # the process, such as terminating the executor service. if isinstance(self._executor_proxy, PythonExecutorProxy): if threading.current_thread() is threading.main_thread(): signal.signal(signal.SIGINT, signal_handler) else: bulk_logger.info( "Current thread is not main thread, skip signal handler registration in BatchEngine." ) # set batch input source from input mapping OperationContext.get_instance().set_batch_input_source_from_inputs_mapping(inputs_mapping) # if using eager flow, the self._flow is none, so we need to get inputs definition from executor inputs = ( self._flow.inputs if self._is_dag_yaml_flow else self._executor_proxy.get_inputs_definition() ) # resolve input data from input dirs and apply inputs mapping batch_input_processor = BatchInputsProcessor(self._working_dir, inputs, max_lines_count) batch_inputs = batch_input_processor.process_batch_inputs(input_dirs, inputs_mapping) # resolve output dir output_dir = resolve_dir_to_absolute(self._working_dir, output_dir) # run flow in batch mode return async_run_allowing_running_loop( self._exec_in_task, batch_inputs, run_id, output_dir, raise_on_line_failure ) finally: async_run_allowing_running_loop(self._executor_proxy.destroy) except Exception as e: bulk_logger.error(f"Error occurred while executing batch run. Exception: {str(e)}") if isinstance(e, PromptflowException): raise e else: # for unexpected error, we need to wrap it to SystemErrorException to allow us to see the stack trace. unexpected_error = UnexpectedError( target=ErrorTarget.BATCH, message_format=( "Unexpected error occurred while executing the batch run. Error: {error_type_and_message}." ), error_type_and_message=f"({e.__class__.__name__}) {e}", ) raise unexpected_error from e def cancel(self): """Cancel the batch run""" self._is_canceled = True async def _exec_in_task( self, batch_inputs: List[Dict[str, Any]], run_id: str = None, output_dir: Path = None, raise_on_line_failure: bool = False, ) -> BatchResult: # if the batch run is canceled, asyncio.CancelledError will be raised and no results will be returned, # so we pass empty line results list and aggr results and update them in _exec so that when the batch # run is canceled we can get the current completed line results and aggr results. line_results: List[LineResult] = [] aggr_result = AggregationResult({}, {}, {}) task = asyncio.create_task( self._exec(line_results, aggr_result, batch_inputs, run_id, output_dir, raise_on_line_failure) ) while not task.done(): # check whether the task is completed or canceled every 1s await asyncio.sleep(1) if self._is_canceled: task.cancel() # use current completed line results and aggregation results to create a BatchResult return BatchResult.create( self._start_time, datetime.utcnow(), line_results, aggr_result, status=Status.Canceled ) return task.result() async def _exec( self, line_results: List[LineResult], aggr_result: AggregationResult, batch_inputs: List[Dict[str, Any]], run_id: str = None, output_dir: Path = None, raise_on_line_failure: bool = False, ) -> BatchResult: # ensure executor health before execution await self._executor_proxy.ensure_executor_health() # apply default value in early stage, so we can use it both in line and aggregation nodes execution. # if the flow is None, we don't need to apply default value for inputs. if self._is_dag_yaml_flow: batch_inputs = [ apply_default_value_for_input(self._flow.inputs, each_line_input) for each_line_input in batch_inputs ] run_id = run_id or str(uuid.uuid4()) # execute lines if isinstance(self._executor_proxy, PythonExecutorProxy): line_results.extend( self._executor_proxy._exec_batch( batch_inputs, output_dir, run_id, batch_timeout_sec=self._batch_timeout_sec, line_timeout_sec=self._line_timeout_sec, ) ) else: await self._exec_batch(line_results, batch_inputs, run_id) handle_line_failures([r.run_info for r in line_results], raise_on_line_failure) # persist outputs to output dir outputs = [ {LINE_NUMBER_KEY: r.run_info.index, **r.output} for r in line_results if r.run_info.status == Status.Completed ] outputs.sort(key=lambda x: x[LINE_NUMBER_KEY]) self._persist_outputs(outputs, output_dir) # execute aggregation nodes aggr_exec_result = await self._exec_aggregation(batch_inputs, line_results, run_id) # use the execution result to update aggr_result to make sure we can get the aggr_result in _exec_in_task self._update_aggr_result(aggr_result, aggr_exec_result) # summary some infos from line results and aggr results to batch result return BatchResult.create(self._start_time, datetime.utcnow(), line_results, aggr_result) async def _exec_batch( self, line_results: List[LineResult], batch_inputs: List[Mapping[str, Any]], run_id: Optional[str] = None, ) -> List[LineResult]: worker_count = get_int_env_var("PF_WORKER_COUNT", DEFAULT_CONCURRENCY) semaphore = asyncio.Semaphore(worker_count) pending = [ asyncio.create_task(self._exec_line_under_semaphore(semaphore, line_inputs, i, run_id)) for i, line_inputs in enumerate(batch_inputs) ] total_lines = len(batch_inputs) completed_line = 0 while completed_line < total_lines: done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED) completed_line_results = [task.result() for task in done] self._persist_run_info(completed_line_results) line_results.extend(completed_line_results) log_progress( self._start_time, bulk_logger, len(line_results), total_lines, last_log_count=completed_line, ) completed_line = len(line_results) async def _exec_line_under_semaphore( self, semaphore, inputs: Mapping[str, Any], index: Optional[int] = None, run_id: Optional[str] = None, ): async with semaphore: return await self._executor_proxy.exec_line_async(inputs, index, run_id) async def _exec_aggregation( self, batch_inputs: List[dict], line_results: List[LineResult], run_id: Optional[str] = None, ) -> AggregationResult: if not self._is_dag_yaml_flow: return AggregationResult({}, {}, {}) aggregation_nodes = {node.name for node in self._flow.nodes if node.aggregation} if not aggregation_nodes: return AggregationResult({}, {}, {}) bulk_logger.info("Executing aggregation nodes...") run_infos = [r.run_info for r in line_results] succeeded = [i for i, r in enumerate(run_infos) if r.status == Status.Completed] succeeded_batch_inputs = [batch_inputs[i] for i in succeeded] resolved_succeeded_batch_inputs = [ FlowValidator.ensure_flow_inputs_type(flow=self._flow, inputs=input) for input in succeeded_batch_inputs ] succeeded_inputs = transpose(resolved_succeeded_batch_inputs, keys=list(self._flow.inputs.keys())) aggregation_inputs = transpose( [result.aggregation_inputs for result in line_results], keys=get_aggregation_inputs_properties(self._flow), ) succeeded_aggregation_inputs = collect_lines(succeeded, aggregation_inputs) try: aggr_result = await self._executor_proxy.exec_aggregation_async( succeeded_inputs, succeeded_aggregation_inputs, run_id ) # if the flow language is python, we have already persisted node run infos during execution. # so we should persist node run infos in aggr_result for other languages. if not isinstance(self._executor_proxy, PythonExecutorProxy): for node_run in aggr_result.node_run_infos.values(): self._storage.persist_node_run(node_run) bulk_logger.info("Finish executing aggregation nodes.") return aggr_result except PromptflowException as e: # for PromptflowException, we already do classification, so throw directly. raise e except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise UnexpectedError( message_format=( "Unexpected error occurred while executing the aggregated nodes. " "Please fix or contact support for assistance. The error details: {error_type_and_message}." ), error_type_and_message=error_type_and_message, ) from e def _persist_run_info(self, line_results: List[LineResult]): """Persist node run infos and flow run info in line result to storage""" for line_result in line_results: for node_run in line_result.node_run_infos.values(): self._storage.persist_node_run(node_run) self._storage.persist_flow_run(line_result.run_info) def _persist_outputs(self, outputs: List[Mapping[str, Any]], output_dir: Path): """Persist outputs to json line file in output directory""" output_file = output_dir / OUTPUT_FILE_NAME dump_list_to_jsonl(output_file, outputs) def _update_aggr_result(self, aggr_result: AggregationResult, aggr_exec_result: AggregationResult): """Update aggregation result with the aggregation execution result""" aggr_result.metrics = aggr_exec_result.metrics aggr_result.node_run_infos = aggr_exec_result.node_run_infos aggr_result.output = aggr_exec_result.output def _is_eager_flow_yaml(self): if Path(self._flow_file).suffix.lower() == ".py": return True elif Path(self._flow_file).suffix.lower() in [".yaml", ".yml"]: flow_file = self._working_dir / self._flow_file if self._working_dir else self._flow_file with open(flow_file, "r", encoding="utf-8") as fin: flow_dag = load_yaml(fin) if "entry" in flow_dag: return True return False def _parse_eager_flow_yaml(self): flow_file = self._working_dir / self._flow_file if self._working_dir else self._flow_file with open(flow_file, "r", encoding="utf-8") as fin: flow_dag = load_yaml(fin) return flow_dag.get("entry", ""), flow_dag.get("path", "")
promptflow/src/promptflow/promptflow/batch/_batch_engine.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/batch/_batch_engine.py", "repo_id": "promptflow", "token_count": 8341 }
49
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- from dataclasses import dataclass class Secret(str): """This class is used to hint a parameter is a secret to load.""" def set_secret_name(self, name): """Set the secret_name attribute for the Secret instance. :param name: The name of the secret. :type name: str """ self.secret_name = name class PromptTemplate(str): """This class is used to hint a parameter is a prompt template.""" pass class FilePath(str): """This class is used to hint a parameter is a file path.""" pass @dataclass class AssistantDefinition: """This class is used to define an assistant definition.""" model: str instructions: str tools: list @staticmethod def deserialize(data: dict) -> "AssistantDefinition": return AssistantDefinition( model=data.get("model", ""), instructions=data.get("instructions", ""), tools=data.get("tools", []) ) def serialize(self): return { "model": self.model, "instructions": self.instructions, "tools": self.tools, } def init_tool_invoker(self): from promptflow.executor._assistant_tool_invoker import AssistantToolInvoker return AssistantToolInvoker.init(self.tools)
promptflow/src/promptflow/promptflow/contracts/types.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/contracts/types.py", "repo_id": "promptflow", "token_count": 527 }
50
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import copy import inspect import types from dataclasses import dataclass from functools import partial from pathlib import Path from typing import Callable, List, Optional from promptflow._core._errors import InvalidSource from promptflow._core.connection_manager import ConnectionManager from promptflow._core.tool import STREAMING_OPTION_PARAMETER_ATTR from promptflow._core.tools_manager import BuiltinsManager, ToolLoader, connection_type_to_api_mapping from promptflow._utils.multimedia_utils import create_image, load_multimedia_data_recursively from promptflow._utils.tool_utils import get_inputs_for_prompt_template, get_prompt_param_name_from_func from promptflow._utils.yaml_utils import load_yaml from promptflow.contracts.flow import InputAssignment, InputValueType, Node, ToolSourceType from promptflow.contracts.tool import ConnectionType, Tool, ToolType, ValueType from promptflow.contracts.types import AssistantDefinition, PromptTemplate from promptflow.exceptions import ErrorTarget, PromptflowException, UserErrorException from promptflow.executor._errors import ( ConnectionNotFound, EmptyLLMApiMapping, InvalidConnectionType, InvalidCustomLLMTool, NodeInputValidationError, ResolveToolError, ValueTypeUnresolved, ) @dataclass class ResolvedTool: node: Node definition: Tool callable: Callable init_args: dict class ToolResolver: def __init__( self, working_dir: Path, connections: Optional[dict] = None, package_tool_keys: Optional[List[str]] = None, ): try: # Import openai and aoai for llm tool from promptflow.tools import aoai, openai # noqa: F401 except ImportError: pass self._tool_loader = ToolLoader(working_dir, package_tool_keys=package_tool_keys) self._working_dir = working_dir self._connection_manager = ConnectionManager(connections) @classmethod def start_resolver( cls, working_dir: Path, connections: Optional[dict] = None, package_tool_keys: Optional[List[str]] = None ): resolver = cls(working_dir, connections, package_tool_keys) resolver._activate_in_context(force=True) return resolver def _convert_to_connection_value(self, k: str, v: InputAssignment, node: Node, conn_types: List[ValueType]): connection_value = self._connection_manager.get(v.value) if not connection_value: raise ConnectionNotFound(f"Connection {v.value} not found for node {node.name!r} input {k!r}.") # Check if type matched if not any(type(connection_value).__name__ == typ for typ in conn_types): msg = ( f"Input '{k}' for node '{node.name}' of type {type(connection_value).__name__!r}" f" is not supported, valid types {conn_types}." ) raise NodeInputValidationError(message=msg) return connection_value def _convert_to_custom_strong_type_connection_value( self, k: str, v: InputAssignment, node: Node, tool: Tool, conn_types: List[str], module: types.ModuleType ): if not conn_types: msg = f"Input '{k}' for node '{node.name}' has invalid types: {conn_types}." raise NodeInputValidationError(message=msg) connection_value = self._connection_manager.get(v.value) if not connection_value: raise ConnectionNotFound(f"Connection {v.value} not found for node {node.name!r} input {k!r}.") custom_defined_connection_class_name = conn_types[0] if node.source.type == ToolSourceType.Package: module = tool.module return connection_value._convert_to_custom_strong_type( module=module, to_class=custom_defined_connection_class_name ) def _convert_to_assistant_definition(self, assistant_definition_path: str, input_name: str, node_name: str): if assistant_definition_path is None or not (self._working_dir / assistant_definition_path).is_file(): raise InvalidSource( target=ErrorTarget.EXECUTOR, message_format="Input '{input_name}' for node '{node_name}' of value '{source_path}' " "is not a valid path.", input_name=input_name, source_path=assistant_definition_path, node_name=node_name, ) file = self._working_dir / assistant_definition_path with open(file, "r", encoding="utf-8") as file: assistant_definition = load_yaml(file) return AssistantDefinition.deserialize(assistant_definition) def _convert_node_literal_input_types(self, node: Node, tool: Tool, module: types.ModuleType = None): updated_inputs = { k: v for k, v in node.inputs.items() if (v.value is not None and v.value != "") or v.value_type != InputValueType.LITERAL } for k, v in updated_inputs.items(): if v.value_type != InputValueType.LITERAL: continue tool_input = tool.inputs.get(k) if tool_input is None: # For kwargs input, tool_input is None. continue value_type = tool_input.type[0] updated_inputs[k] = InputAssignment(value=v.value, value_type=InputValueType.LITERAL) if ConnectionType.is_connection_class_name(value_type): if tool_input.custom_type: updated_inputs[k].value = self._convert_to_custom_strong_type_connection_value( k, v, node, tool, tool_input.custom_type, module=module ) else: updated_inputs[k].value = self._convert_to_connection_value(k, v, node, tool_input.type) elif value_type == ValueType.IMAGE: try: updated_inputs[k].value = create_image(v.value) except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise NodeInputValidationError( message_format="Failed to load image for input '{key}': {error_type_and_message}", key=k, error_type_and_message=error_type_and_message, target=ErrorTarget.EXECUTOR, ) from e elif value_type == ValueType.ASSISTANT_DEFINITION: try: updated_inputs[k].value = self._convert_to_assistant_definition(v.value, k, node.name) except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise NodeInputValidationError( message_format="Failed to load assistant definition from input '{key}': " "{error_type_and_message}", key=k, error_type_and_message=error_type_and_message, target=ErrorTarget.EXECUTOR, ) from e elif isinstance(value_type, ValueType): try: updated_inputs[k].value = value_type.parse(v.value) except Exception as e: raise NodeInputValidationError( message_format="Input '{key}' for node '{node_name}' of value '{value}' is not " "type {value_type}.", key=k, node_name=node.name, value=v.value, value_type=value_type.value, target=ErrorTarget.EXECUTOR, ) from e try: updated_inputs[k].value = load_multimedia_data_recursively(updated_inputs[k].value) except Exception as e: error_type_and_message = f"({e.__class__.__name__}) {e}" raise NodeInputValidationError( message_format="Failed to load image for input '{key}': {error_type_and_message}", key=k, error_type_and_message=error_type_and_message, target=ErrorTarget.EXECUTOR, ) from e else: # The value type is in ValueType enum or is connection type. null connection has been handled before. raise ValueTypeUnresolved( f"Unresolved input type {value_type!r}, please check if it is supported in current version.", target=ErrorTarget.EXECUTOR, ) updated_node = copy.deepcopy(node) updated_node.inputs = updated_inputs return updated_node def resolve_tool_by_node(self, node: Node, convert_input_types=True) -> ResolvedTool: try: if node.source is None: raise UserErrorException(f"Node {node.name} does not have source defined.") if node.type is ToolType.PYTHON: if node.source.type == ToolSourceType.Package: return self._resolve_package_node(node, convert_input_types=convert_input_types) elif node.source.type == ToolSourceType.Code: return self._resolve_script_node(node, convert_input_types=convert_input_types) raise NotImplementedError(f"Tool source type {node.source.type} for python tool is not supported yet.") elif node.type is ToolType.PROMPT: return self._resolve_prompt_node(node) elif node.type is ToolType.LLM: return self._resolve_llm_node(node, convert_input_types=convert_input_types) elif node.type is ToolType.CUSTOM_LLM: if node.source.type == ToolSourceType.PackageWithPrompt: resolved_tool = self._resolve_package_node(node, convert_input_types=convert_input_types) return self._integrate_prompt_in_package_node(resolved_tool) raise NotImplementedError( f"Tool source type {node.source.type} for custom_llm tool is not supported yet." ) else: raise NotImplementedError(f"Tool type {node.type} is not supported yet.") except Exception as e: if isinstance(e, PromptflowException) and e.target != ErrorTarget.UNKNOWN: raise ResolveToolError(node_name=node.name, target=e.target, module=e.module) from e raise ResolveToolError(node_name=node.name) from e def _load_source_content(self, node: Node) -> str: source = node.source # If is_file returns True, the path points to a existing file, so we don't need to check if exists. if source is None or source.path is None or not (self._working_dir / source.path).is_file(): raise InvalidSource( target=ErrorTarget.EXECUTOR, message_format="Node source path '{source_path}' is invalid on node '{node_name}'.", source_path=source.path if source is not None else None, node_name=node.name, ) file = self._working_dir / source.path return file.read_text(encoding="utf-8") def _validate_duplicated_inputs(self, prompt_tpl_inputs: list, tool_params: list, msg: str): duplicated_inputs = set(prompt_tpl_inputs) & set(tool_params) if duplicated_inputs: raise NodeInputValidationError( message=msg.format(duplicated_inputs=duplicated_inputs), target=ErrorTarget.EXECUTOR, ) def _load_images_for_prompt_tpl(self, prompt_tpl_inputs_mapping: dict, node_inputs: dict): for input_name, input in prompt_tpl_inputs_mapping.items(): if ValueType.IMAGE in input.type and input_name in node_inputs: if node_inputs[input_name].value_type == InputValueType.LITERAL: node_inputs[input_name].value = create_image(node_inputs[input_name].value) return node_inputs def _resolve_prompt_node(self, node: Node) -> ResolvedTool: prompt_tpl = self._load_source_content(node) prompt_tpl_inputs_mapping = get_inputs_for_prompt_template(prompt_tpl) from promptflow.tools.template_rendering import render_template_jinja2 params = inspect.signature(render_template_jinja2).parameters param_names = [name for name, param in params.items() if param.kind != inspect.Parameter.VAR_KEYWORD] msg = ( f"Invalid inputs {{duplicated_inputs}} in prompt template of node {node.name}. " f"These inputs are duplicated with the reserved parameters of prompt tool." ) self._validate_duplicated_inputs(prompt_tpl_inputs_mapping.keys(), param_names, msg) node.inputs = self._load_images_for_prompt_tpl(prompt_tpl_inputs_mapping, node.inputs) callable = partial(render_template_jinja2, template=prompt_tpl) return ResolvedTool(node=node, definition=None, callable=callable, init_args={}) @staticmethod def _remove_init_args(node_inputs: dict, init_args: dict): for k in init_args: if k in node_inputs: del node_inputs[k] def _get_node_connection(self, node: Node): connection = self._connection_manager.get(node.connection) if connection is None: raise ConnectionNotFound( message=f"Connection {node.connection!r} not found, available connection keys " f"{self._connection_manager._connections.keys()}.", target=ErrorTarget.EXECUTOR, ) return connection def _resolve_llm_node(self, node: Node, convert_input_types=False) -> ResolvedTool: connection = self._get_node_connection(node) if not node.provider: if not connection_type_to_api_mapping: raise EmptyLLMApiMapping() # If provider is not specified, try to resolve it from connection type connection_type = type(connection).__name__ if connection_type not in connection_type_to_api_mapping: raise InvalidConnectionType( message_format="Connection type {conn_type} is not supported for LLM.", conn_type=connection_type, ) node.provider = connection_type_to_api_mapping[connection_type] tool: Tool = self._tool_loader.load_tool_for_llm_node(node) key, connection = self._resolve_llm_connection_to_inputs(node, tool) updated_node = copy.deepcopy(node) updated_node.inputs[key] = InputAssignment(value=connection, value_type=InputValueType.LITERAL) if convert_input_types: updated_node = self._convert_node_literal_input_types(updated_node, tool) prompt_tpl = self._load_source_content(node) prompt_tpl_inputs_mapping = get_inputs_for_prompt_template(prompt_tpl) msg = ( f"Invalid inputs {{duplicated_inputs}} in prompt template of node {node.name}. " f"These inputs are duplicated with the parameters of {node.provider}.{node.api}." ) self._validate_duplicated_inputs(prompt_tpl_inputs_mapping.keys(), tool.inputs.keys(), msg) updated_node.inputs = self._load_images_for_prompt_tpl(prompt_tpl_inputs_mapping, updated_node.inputs) api_func, init_args = BuiltinsManager._load_package_tool( tool.name, tool.module, tool.class_name, tool.function, updated_node.inputs ) self._remove_init_args(updated_node.inputs, init_args) prompt_tpl_param_name = get_prompt_param_name_from_func(api_func) api_func = partial(api_func, **{prompt_tpl_param_name: prompt_tpl}) if prompt_tpl_param_name else api_func return ResolvedTool(updated_node, tool, api_func, init_args) def _resolve_llm_connection_to_inputs(self, node: Node, tool: Tool) -> Node: connection = self._get_node_connection(node) for key, input in tool.inputs.items(): if ConnectionType.is_connection_class_name(input.type[0]): if type(connection).__name__ not in input.type: msg = ( f"Invalid connection '{node.connection}' type {type(connection).__name__!r} " f"for node '{node.name}', valid types {input.type}." ) raise InvalidConnectionType(message=msg) return key, connection raise InvalidConnectionType( message_format="Connection type can not be resolved for tool {tool_name}", tool_name=tool.name ) def _resolve_script_node(self, node: Node, convert_input_types=False) -> ResolvedTool: m, tool = self._tool_loader.load_tool_for_script_node(node) # We only want to load script tool module once. # Reloading the same module changes the ID of the class, which can cause issues with isinstance() checks. # This is important when working with connection class checks. For instance, in user tool script it writes: # isinstance(conn, MyCustomConnection) # Custom defined script tool and custom defined strong type connection are in the same module. # The first time to load the module is in above line when loading a tool. # We need the module again when converting the custom connection to strong type when converting input types. # To avoid reloading, pass the loaded module to _convert_node_literal_input_types as an arg. if convert_input_types: node = self._convert_node_literal_input_types(node, tool, m) callable, init_args = BuiltinsManager._load_tool_from_module( m, tool.name, tool.module, tool.class_name, tool.function, node.inputs ) self._remove_init_args(node.inputs, init_args) return ResolvedTool(node=node, definition=tool, callable=callable, init_args=init_args) def _resolve_package_node(self, node: Node, convert_input_types=False) -> ResolvedTool: tool: Tool = self._tool_loader.load_tool_for_package_node(node) updated_node = copy.deepcopy(node) if convert_input_types: updated_node = self._convert_node_literal_input_types(updated_node, tool) callable, init_args = BuiltinsManager._load_package_tool( tool.name, tool.module, tool.class_name, tool.function, updated_node.inputs ) self._remove_init_args(updated_node.inputs, init_args) return ResolvedTool(node=updated_node, definition=tool, callable=callable, init_args=init_args) def _integrate_prompt_in_package_node(self, resolved_tool: ResolvedTool): node = resolved_tool.node prompt_tpl = PromptTemplate(self._load_source_content(node)) prompt_tpl_inputs_mapping = get_inputs_for_prompt_template(prompt_tpl) msg = ( f"Invalid inputs {{duplicated_inputs}} in prompt template of node {node.name}. " f"These inputs are duplicated with the inputs of custom llm tool." ) self._validate_duplicated_inputs(prompt_tpl_inputs_mapping.keys(), resolved_tool.definition.inputs.keys(), msg) node.inputs = self._load_images_for_prompt_tpl(prompt_tpl_inputs_mapping, node.inputs) callable = resolved_tool.callable prompt_tpl_param_name = get_prompt_param_name_from_func(callable) if prompt_tpl_param_name is None: raise InvalidCustomLLMTool( f"Invalid Custom LLM tool {resolved_tool.definition.name}: " f"function {callable.__name__} is missing a prompt template argument.", target=ErrorTarget.EXECUTOR, ) resolved_tool.callable = partial(callable, **{prompt_tpl_param_name: prompt_tpl}) # Copy the attributes to make sure they are still available after partial. attributes_to_set = [STREAMING_OPTION_PARAMETER_ATTR] for attr in attributes_to_set: attr_val = getattr(callable, attr, None) if attr_val is not None: setattr(resolved_tool.callable, attr, attr_val) return resolved_tool
promptflow/src/promptflow/promptflow/executor/_tool_resolver.py/0
{ "file_path": "promptflow/src/promptflow/promptflow/executor/_tool_resolver.py", "repo_id": "promptflow", "token_count": 8974 }
51
import json import uuid from collections import namedtuple from importlib.metadata import version from pathlib import Path from tempfile import mkdtemp from unittest.mock import patch import pytest from promptflow._core.operation_context import OperationContext from promptflow.batch._batch_engine import OUTPUT_FILE_NAME, BatchEngine from promptflow.contracts.run_mode import RunMode from promptflow.executor import FlowExecutor from ..utils import get_flow_folder, get_flow_inputs_file, get_yaml_file, load_jsonl IS_LEGACY_OPENAI = version("openai").startswith("0.") Completion = namedtuple("Completion", ["choices"]) Choice = namedtuple("Choice", ["delta"]) Delta = namedtuple("Delta", ["content"]) def stream_response(kwargs): if IS_LEGACY_OPENAI: delta = Delta(content=json.dumps(kwargs.get("headers", {}))) yield Completion(choices=[{"delta": delta}]) else: delta = Delta(content=json.dumps(kwargs.get("extra_headers", {}))) yield Completion(choices=[Choice(delta=delta)]) def mock_stream_chat(*args, **kwargs): return stream_response(kwargs) @pytest.mark.skip(reason="Skip on Mac and Windows and Linux, patch does not work in the spawn process") @pytest.mark.usefixtures("dev_connections") @pytest.mark.e2etest class TestExecutorTelemetry: def test_executor_openai_telemetry(self, dev_connections): """This test validates telemetry info header is correctly injected to OpenAI API by mocking chat api method. The mock method will return a generator that yields a namedtuple with a json string of the headers passed to the method. """ if IS_LEGACY_OPENAI: api = "openai.ChatCompletion.create" else: api = "openai.resources.chat.Completions.create" with patch(api, new=mock_stream_chat): operation_context = OperationContext.get_instance() operation_context.clear() flow_folder = "openai_chat_api_flow" # Set user-defined properties `scenario` in context operation_context.scenario = "test" executor = FlowExecutor.create(get_yaml_file(flow_folder), dev_connections) # flow run case inputs = {"question": "What's your name?", "chat_history": [], "stream": True} flow_result = executor.exec_line(inputs) assert isinstance(flow_result.output, dict) headers = json.loads(flow_result.output.get("answer", "")) assert "promptflow/" in headers.get("x-ms-useragent") assert headers.get("ms-azure-ai-promptflow-scenario") == "test" assert headers.get("ms-azure-ai-promptflow-run-mode") == RunMode.Test.name # batch run case run_id = str(uuid.uuid4()) batch_engine = BatchEngine( get_yaml_file(flow_folder), get_flow_folder(flow_folder), connections=dev_connections ) input_dirs = {"data": get_flow_inputs_file(flow_folder)} inputs_mapping = {"question": "${data.question}", "chat_history": "${data.chat_history}"} output_dir = Path(mkdtemp()) batch_engine.run(input_dirs, inputs_mapping, output_dir, run_id=run_id) outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME) for line in outputs: headers = json.loads(line.get("answer", "")) assert "promptflow/" in headers.get("x-ms-useragent") assert headers.get("ms-azure-ai-promptflow-scenario") == "test" assert headers.get("ms-azure-ai-promptflow-run-mode") == RunMode.Batch.name # single_node case run_info = FlowExecutor.load_and_exec_node( get_yaml_file("openai_chat_api_flow"), "chat", flow_inputs=inputs, connections=dev_connections, raise_ex=True, ) assert run_info.output is not None headers = json.loads(run_info.output) assert "promptflow/" in headers.get("x-ms-useragent") assert headers.get("ms-azure-ai-promptflow-scenario") == "test" assert headers.get("ms-azure-ai-promptflow-run-mode") == RunMode.SingleNode.name
promptflow/src/promptflow/tests/executor/e2etests/test_telemetry.py/0
{ "file_path": "promptflow/src/promptflow/tests/executor/e2etests/test_telemetry.py", "repo_id": "promptflow", "token_count": 1802 }
52
inputs: {} outputs: {} nodes: - name: tool_with_init_error type: python source: type: package tool: tool_with_init_error inputs: name: test_name
promptflow/src/promptflow/tests/executor/package_tools/tool_with_init_error/flow.dag.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/executor/package_tools/tool_with_init_error/flow.dag.yaml", "repo_id": "promptflow", "token_count": 67 }
53
import pytest from promptflow._utils.credential_scrubber import CredentialScrubber def mock_connection_string(): connection_str_before_key = "DefaultEndpointsProtocol=https;AccountName=accountName;" connection_str_after_key = "EndpointSuffix=core.windows.net" return ( f"{connection_str_before_key}AccountKey=accountKey;{connection_str_after_key}", f"{connection_str_before_key}AccountKey={CredentialScrubber.PLACE_HOLDER};{connection_str_after_key}", ) def mock_sas_uri(): uri_without_signature = "https://bloburi/containerName/file.txt?sv=2021-10-04&se=2023-05-17&sr=b&sp=rw" return (f"{uri_without_signature}&sig=signature", f"{uri_without_signature}&sig={CredentialScrubber.PLACE_HOLDER}") @pytest.mark.unittest class TestCredentialScrubber: def test_scrub_sigature_in_sasuri(self): input_str, ground_truth = mock_sas_uri() assert CredentialScrubber().scrub(input_str) == ground_truth def test_scrub_key_in_connection_string(self): input_str, ground_truth = mock_connection_string() output = CredentialScrubber().scrub(input_str) assert output == ground_truth def test_add_regex(self): scrubber = CredentialScrubber() scrubber.add_regex(r"(?<=credential=)[^\s;&]+") assert scrubber.scrub("test&credential=credential") == f"test&credential={CredentialScrubber.PLACE_HOLDER}" def test_add_str(self): scrubber = CredentialScrubber() scrubber.add_str(None) assert len(scrubber.custom_str_set) == 0 scrubber.add_str("credential") assert len(scrubber.custom_str_set) == 1 assert scrubber.scrub("test&secret=credential") == f"test&secret={CredentialScrubber.PLACE_HOLDER}" def test_add_str_length_threshold(self): """If the secret is too short (length <= 2 chars), it will not be scrubbed.""" scrubber = CredentialScrubber() scrubber.add_str("yy") assert scrubber.scrub("test&secret=yy") == "test&secret=yy" def test_normal_str_not_affected(self): assert CredentialScrubber().scrub("no secret") == "no secret" def test_clear(self): scrubber = CredentialScrubber() scrubber.add_str("credential") scrubber.add_regex(r"(?<=credential=)[^\s;&]+") assert len(scrubber.custom_str_set) == 1 assert len(scrubber.custom_regex_set) == 1 scrubber.clear() assert len(scrubber.custom_str_set) == 0 assert len(scrubber.custom_regex_set) == 0
promptflow/src/promptflow/tests/executor/unittests/_utils/test_credential_scrubber.py/0
{ "file_path": "promptflow/src/promptflow/tests/executor/unittests/_utils/test_credential_scrubber.py", "repo_id": "promptflow", "token_count": 1079 }
54
import json import socket import subprocess from pathlib import Path from tempfile import mkdtemp from unittest.mock import MagicMock, patch import pytest from promptflow._core._errors import MetaFileNotFound, MetaFileReadError from promptflow._sdk._constants import FLOW_TOOLS_JSON, PROMPT_FLOW_DIR_NAME from promptflow.batch import CSharpExecutorProxy from promptflow.executor._result import AggregationResult from ...utils import get_flow_folder, get_yaml_file async def get_executor_proxy(): flow_file = get_yaml_file("csharp_flow") working_dir = get_flow_folder("csharp_flow") with patch.object(CSharpExecutorProxy, "ensure_executor_startup", return_value=None): return await CSharpExecutorProxy.create(flow_file, working_dir) @pytest.mark.unittest class TestCSharpExecutorProxy: @pytest.mark.asyncio async def test_create(self): with patch("subprocess.Popen") as mock_popen: mock_popen.return_value = MagicMock() executor_proxy = await get_executor_proxy() mock_popen.assert_called_once() assert executor_proxy is not None assert executor_proxy._process is not None assert executor_proxy._port is not None assert executor_proxy.api_endpoint == f"http://localhost:{executor_proxy._port}" @pytest.mark.asyncio async def test_destroy_with_already_terminated(self): mock_process = MagicMock() mock_process.poll.return_value = 0 executor_proxy = await get_executor_proxy() executor_proxy._process = mock_process await executor_proxy.destroy() mock_process.poll.assert_called_once() mock_process.terminate.assert_not_called() @pytest.mark.asyncio async def test_destroy_with_terminates_gracefully(self): mock_process = MagicMock() mock_process.poll.return_value = None executor_proxy = await get_executor_proxy() executor_proxy._process = mock_process await executor_proxy.destroy() mock_process.poll.assert_called_once() mock_process.terminate.assert_called_once() mock_process.wait.assert_called_once_with(timeout=5) mock_process.kill.assert_not_called() @pytest.mark.asyncio async def test_destroy_with_force_kill(self): mock_process = MagicMock() mock_process.poll.return_value = None mock_process.wait.side_effect = subprocess.TimeoutExpired(cmd="cmd", timeout=5) executor_proxy = await get_executor_proxy() executor_proxy._process = mock_process await executor_proxy.destroy() mock_process.poll.assert_called_once() mock_process.terminate.assert_called_once() mock_process.wait.assert_called_once_with(timeout=5) mock_process.kill.assert_called_once() @pytest.mark.asyncio async def test_exec_aggregation_async(self): executor_proxy = await get_executor_proxy() aggr_result = await executor_proxy.exec_aggregation_async("", "", "") assert isinstance(aggr_result, AggregationResult) @pytest.mark.asyncio @pytest.mark.parametrize( "exit_code, expected_result", [ (None, True), (0, False), (1, False), ], ) async def test_is_executor_active(self, exit_code, expected_result): executor_proxy = await get_executor_proxy() executor_proxy._process = MagicMock() executor_proxy._process.poll.return_value = exit_code assert executor_proxy._is_executor_active() == expected_result def test_get_tool_metadata_succeed(self): working_dir = Path(mkdtemp()) expected_tool_meta = {"name": "csharp_flow", "version": "0.1.0"} tool_meta_file = working_dir / PROMPT_FLOW_DIR_NAME / FLOW_TOOLS_JSON tool_meta_file.parent.mkdir(parents=True, exist_ok=True) with open(tool_meta_file, "w") as file: json.dump(expected_tool_meta, file, indent=4) tool_meta = CSharpExecutorProxy.get_tool_metadata("", working_dir) assert tool_meta == expected_tool_meta def test_get_tool_metadata_failed_with_file_not_found(self): working_dir = Path(mkdtemp()) with pytest.raises(MetaFileNotFound): CSharpExecutorProxy.get_tool_metadata("", working_dir) def test_get_tool_metadata_failed_with_content_not_json(self): working_dir = Path(mkdtemp()) tool_meta_file = working_dir / PROMPT_FLOW_DIR_NAME / FLOW_TOOLS_JSON tool_meta_file.parent.mkdir(parents=True, exist_ok=True) tool_meta_file.touch() with pytest.raises(MetaFileReadError): CSharpExecutorProxy.get_tool_metadata("", working_dir) def test_find_available_port(self): port = CSharpExecutorProxy.find_available_port() assert isinstance(port, str) assert int(port) > 0, "Port number should be greater than 0" try: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind(("localhost", int(port))) except OSError: pytest.fail("Port is not actually available")
promptflow/src/promptflow/tests/executor/unittests/batch/test_csharp_executor_proxy.py/0
{ "file_path": "promptflow/src/promptflow/tests/executor/unittests/batch/test_csharp_executor_proxy.py", "repo_id": "promptflow", "token_count": 2125 }
55
from concurrent.futures import Future from typing import Callable from unittest.mock import MagicMock import pytest from promptflow._core.flow_execution_context import FlowExecutionContext from promptflow.contracts.flow import Node from promptflow.executor._dag_manager import DAGManager from promptflow.executor._flow_nodes_scheduler import ( DEFAULT_CONCURRENCY_BULK, DEFAULT_CONCURRENCY_FLOW, FlowNodesScheduler, NoNodeExecutedError, ) @pytest.mark.unittest class TestFlowNodesScheduler: def setup_method(self): # Define mock objects and methods self.tools_manager = MagicMock() self.context = MagicMock(spec=FlowExecutionContext) self.context.invoke_tool.side_effect = lambda _, func, kwargs: func(**kwargs) self.scheduler = FlowNodesScheduler(self.tools_manager, {}, [], DEFAULT_CONCURRENCY_BULK, self.context) def test_maximun_concurrency(self): scheduler = FlowNodesScheduler(self.tools_manager, {}, [], 1000, self.context) assert scheduler._node_concurrency == DEFAULT_CONCURRENCY_FLOW def test_collect_outputs(self): future1 = Future() future1.set_result("output1") future2 = Future() future2.set_result("output2") node1 = MagicMock(spec=Node) node1.name = "node1" node2 = MagicMock(spec=Node) node2.name = "node2" self.scheduler._future_to_node = {future1: node1, future2: node2} completed_nodes_outputs = self.scheduler._collect_outputs([future1, future2]) assert completed_nodes_outputs == {"node1": future1.result(), "node2": future2.result()} def test_bypass_nodes(self): executor = MagicMock() dag_manager = MagicMock(spec=DAGManager) node1 = MagicMock(spec=Node) node1.name = "node1" # The return value will be a list with one item for the first time. # Will be a list without item for the second time. dag_manager.pop_bypassable_nodes.side_effect = ([node1], []) self.scheduler._dag_manager = dag_manager self.scheduler._execute_nodes(executor) self.scheduler._context.bypass_node.assert_called_once_with(node1) def test_submit_nodes(self): executor = MagicMock() dag_manager = MagicMock(spec=DAGManager) node1 = MagicMock(spec=Node) node1.name = "node1" dag_manager.pop_bypassable_nodes.return_value = [] # The return value will be a list with one item for the first time. # Will be a list without item for the second time. dag_manager.pop_ready_nodes.return_value = [node1] self.scheduler._dag_manager = dag_manager self.scheduler._execute_nodes(executor) self.scheduler._context.bypass_node.assert_not_called() assert node1 in self.scheduler._future_to_node.values() def test_future_cancelled_for_exception(self): dag_manager = MagicMock(spec=DAGManager) self.scheduler._dag_manager = dag_manager dag_manager.completed.return_value = False dag_manager.pop_bypassable_nodes.return_value = [] dag_manager.pop_ready_nodes.return_value = [] failed_future = Future() failed_future.set_exception(Exception("test")) from concurrent.futures._base import CANCELLED, FINISHED failed_future._state = FINISHED cancelled_future = Future() node1 = MagicMock(spec=Node) node1.name = "node1" node2 = MagicMock(spec=Node) node2.name = "node2" self.scheduler._future_to_node = {failed_future: node1, cancelled_future: node2} try: self.scheduler.execute() except Exception: pass # Assert another future is cancelled. assert CANCELLED in cancelled_future._state def test_success_result(self): dag_manager = MagicMock(spec=DAGManager) finished_future = Future() finished_future.set_result("output1") finished_node = MagicMock(spec=Node) finished_node.name = "node1" self.scheduler._dag_manager = dag_manager self.scheduler._future_to_node = {finished_future: finished_node} # No more nodes need to run. dag_manager.pop_bypassable_nodes.return_value = [] dag_manager.pop_ready_nodes.return_value = [] dag_manager.completed.side_effect = (False, True) bypassed_node_result = {"bypassed_node": "output2"} dag_manager.bypassed_nodes = bypassed_node_result completed_node_result = {"completed_node": "output1"} dag_manager.completed_nodes_outputs = completed_node_result result = self.scheduler.execute() dag_manager.complete_nodes.assert_called_once_with({"node1": "output1"}) assert result == (completed_node_result, bypassed_node_result) def test_no_nodes_to_run(self): dag_manager = MagicMock(spec=DAGManager) dag_manager.pop_bypassable_nodes.return_value = [] dag_manager.pop_ready_nodes.return_value = [] dag_manager.completed.return_value = False self.scheduler._dag_manager = dag_manager with pytest.raises(NoNodeExecutedError) as _: self.scheduler.execute() def test_execute_single_node(self): node_to_run = MagicMock(spec=Node) node_to_run.name = "node1" mock_callable = MagicMock(spec=Callable) mock_callable.return_value = "output1" self.scheduler._tools_manager.get_tool.return_value = mock_callable dag_manager = MagicMock(spec=DAGManager) dag_manager.get_node_valid_inputs.return_value = {"input": 1} result = self.scheduler._exec_single_node_in_thread((node_to_run, dag_manager)) mock_callable.assert_called_once_with(**{"input": 1}) assert result == "output1"
promptflow/src/promptflow/tests/executor/unittests/executor/test_flow_nodes_scheduler.py/0
{ "file_path": "promptflow/src/promptflow/tests/executor/unittests/executor/test_flow_nodes_scheduler.py", "repo_id": "promptflow", "token_count": 2450 }
56
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import pytest from .._azure_utils import DEFAULT_TEST_TIMEOUT, PYTEST_TIMEOUT_METHOD @pytest.fixture def connection_ops(pf): return pf._arm_connections @pytest.mark.timeout(timeout=DEFAULT_TEST_TIMEOUT, method=PYTEST_TIMEOUT_METHOD) @pytest.mark.e2etest @pytest.mark.usefixtures("vcr_recording") class TestArmConnectionOperations: def test_get_connection(self, connection_ops): # Note: Secrets will be returned by arm api result = connection_ops.get(name="azure_open_ai_connection") assert ( result._to_dict().items() > { "api_type": "azure", "module": "promptflow.connections", "name": "azure_open_ai_connection", }.items() ) result = connection_ops.get(name="custom_connection") assert ( result._to_dict().items() > { "name": "custom_connection", "module": "promptflow.connections", "configs": {}, }.items() )
promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_arm_connection_operations.py/0
{ "file_path": "promptflow/src/promptflow/tests/sdk_cli_azure_test/e2etests/test_arm_connection_operations.py", "repo_id": "promptflow", "token_count": 530 }
57
[run] source = */promptflow/_cli/* */promptflow/_sdk/* */promptflow/azure/* omit = */promptflow/azure/_restclient/* *__init__.py*
promptflow/src/promptflow/tests/sdk_cli_test/.coveragerc/0
{ "file_path": "promptflow/src/promptflow/tests/sdk_cli_test/.coveragerc", "repo_id": "promptflow", "token_count": 72 }
58
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import datetime import json import uuid import pytest from promptflow._sdk._constants import ListViewType, RunStatus, RunTypes from promptflow._sdk._errors import RunNotFoundError from promptflow._sdk._orm import RunInfo @pytest.fixture() def run_name() -> str: name = str(uuid.uuid4()) run_info = RunInfo( name=name, type=RunTypes.BATCH, created_on=datetime.datetime.now().isoformat(), status=RunStatus.NOT_STARTED, display_name=name, description="", tags=None, properties=json.dumps({}), ) run_info.dump() return name @pytest.mark.sdk_test @pytest.mark.e2etest class TestRunInfo: def test_get(self, run_name: str) -> None: run_info = RunInfo.get(run_name) assert run_info.name == run_name assert run_info.type == RunTypes.BATCH assert run_info.status == RunStatus.NOT_STARTED assert run_info.display_name == run_name assert run_info.description == "" assert run_info.tags is None assert run_info.properties == json.dumps({}) def test_get_not_exist(self) -> None: not_exist_name = str(uuid.uuid4()) with pytest.raises(RunNotFoundError) as excinfo: RunInfo.get(not_exist_name) assert f"Run name {not_exist_name!r} cannot be found." in str(excinfo.value) def test_list_order_by_created_time_desc(self) -> None: for _ in range(3): RunInfo( name=str(uuid.uuid4()), created_on=datetime.datetime.now().isoformat(), status=RunStatus.NOT_STARTED, description="", tags=None, properties=json.dumps({}), ).dump() runs = RunInfo.list(max_results=3, list_view_type=ListViewType.ALL) # in very edge case, the created_on can be same, so use ">=" here assert runs[0].created_on >= runs[1].created_on >= runs[2].created_on def test_archive(self, run_name: str) -> None: run_info = RunInfo.get(run_name) assert run_info.archived is False run_info.archive() # in-memory archived flag assert run_info.archived is True # db archived flag assert RunInfo.get(run_name).archived is True def test_restore(self, run_name: str) -> None: run_info = RunInfo.get(run_name) run_info.archive() run_info = RunInfo.get(run_name) assert run_info.archived is True run_info.restore() # in-memory archived flag assert run_info.archived is False # db archived flag assert RunInfo.get(run_name).archived is False def test_update(self, run_name: str) -> None: run_info = RunInfo.get(run_name) assert run_info.status == RunStatus.NOT_STARTED assert run_info.display_name == run_name assert run_info.description == "" assert run_info.tags is None updated_status = RunStatus.COMPLETED updated_display_name = f"updated_{run_name}" updated_description = "updated_description" updated_tags = [{"key1": "value1", "key2": "value2"}] run_info.update( status=updated_status, display_name=updated_display_name, description=updated_description, tags=updated_tags, ) # in-memory status, display_name, description and tags assert run_info.status == updated_status assert run_info.display_name == updated_display_name assert run_info.description == updated_description assert run_info.tags == json.dumps(updated_tags) # db status, display_name, description and tags run_info = RunInfo.get(run_name) assert run_info.status == updated_status assert run_info.display_name == updated_display_name assert run_info.description == updated_description assert run_info.tags == json.dumps(updated_tags) def test_null_type_and_display_name(self) -> None: # test run_info table schema change: # 1. type can be null(we will deprecate this concept in the future) # 2. display_name can be null as default value name = str(uuid.uuid4()) run_info = RunInfo( name=name, created_on=datetime.datetime.now().isoformat(), status=RunStatus.NOT_STARTED, description="", tags=None, properties=json.dumps({}), ) run_info.dump() run_info_from_db = RunInfo.get(name) assert run_info_from_db.type is None assert run_info_from_db.display_name is None
promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_orm.py/0
{ "file_path": "promptflow/src/promptflow/tests/sdk_cli_test/e2etests/test_orm.py", "repo_id": "promptflow", "token_count": 2077 }
59
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import uuid import pytest from sqlalchemy import TEXT, Column, create_engine, inspect, text from sqlalchemy.orm import declarative_base, sessionmaker from promptflow._sdk._constants import HOME_PROMPT_FLOW_DIR from promptflow._sdk._orm.session import create_or_update_table, support_transaction TABLENAME = "orm_entity" def random_string() -> str: return str(uuid.uuid4()) def dump(obj, engine) -> None: session_maker = sessionmaker(bind=engine) with session_maker() as session: session.add(obj) session.commit() class SchemaV1(declarative_base()): __tablename__ = TABLENAME column1 = Column(TEXT, primary_key=True) column2 = Column(TEXT) __pf_schema_version__ = "1" @staticmethod def generate(engine) -> None: entity = SchemaV1(column1=random_string(), column2=random_string()) dump(entity, engine) return class SchemaV2(declarative_base()): __tablename__ = TABLENAME column1 = Column(TEXT, primary_key=True) column2 = Column(TEXT) column3 = Column(TEXT) __pf_schema_version__ = "2" @staticmethod def generate(engine) -> None: entity = SchemaV2(column1=random_string(), column2=random_string(), column3=random_string()) dump(entity, engine) return class SchemaV3(declarative_base()): __tablename__ = TABLENAME column1 = Column(TEXT, primary_key=True) column2 = Column(TEXT) column3 = Column(TEXT) column4 = Column(TEXT) __pf_schema_version__ = "3" @staticmethod def generate(engine) -> None: entity = SchemaV3( column1=random_string(), column2=random_string(), column3=random_string(), column4=random_string() ) dump(entity, engine) return # exactly same schema as SchemaV3 class SchemaV4(declarative_base()): __tablename__ = TABLENAME column1 = Column(TEXT, primary_key=True) column2 = Column(TEXT) column3 = Column(TEXT) column4 = Column(TEXT) __pf_schema_version__ = "4" @staticmethod def generate(engine) -> None: entity = SchemaV4( column1=random_string(), column2=random_string(), column3=random_string(), column4=random_string() ) dump(entity, engine) return def mock_use(engine, orm_class, entity_num: int = 1) -> None: create_or_update_table(engine, orm_class, TABLENAME) for _ in range(entity_num): orm_class.generate(engine) def generate_engine(): db_path = (HOME_PROMPT_FLOW_DIR / ".test" / f"{uuid.uuid4()}.sqlite").resolve() if not db_path.parent.is_dir(): db_path.parent.mkdir(parents=True, exist_ok=True) return create_engine(f"sqlite:///{str(db_path)}", future=True) @pytest.mark.sdk_test @pytest.mark.unittest class TestSchemaManagement: def test_fixed_version(self) -> None: engine = generate_engine() mock_use(engine, SchemaV3) mock_use(engine, SchemaV3, entity_num=2) mock_use(engine, SchemaV3, entity_num=3) # 1 table assert inspect(engine).has_table(TABLENAME) # 6 rows entities = [entity for entity in sessionmaker(bind=engine)().query(SchemaV3).all()] assert len(entities) == 6 def test_version_upgrade(self) -> None: engine = generate_engine() mock_use(engine, SchemaV1) mock_use(engine, SchemaV2) mock_use(engine, SchemaV3) # 3 tables: 1 current and 2 legacy assert inspect(engine).has_table(TABLENAME) assert inspect(engine).has_table(f"{TABLENAME}_v1") assert inspect(engine).has_table(f"{TABLENAME}_v2") # 2 rows in current table entities = [entity for entity in sessionmaker(bind=engine)().query(SchemaV3).all()] assert len(entities) == 3 def test_version_downgrade(self, capfd) -> None: engine = generate_engine() mock_use(engine, SchemaV3) mock_use(engine, SchemaV2) mock_use(engine, SchemaV1) # 1 table assert inspect(engine).has_table(TABLENAME) # 2 rows entities = [entity for entity in sessionmaker(bind=engine)().query(SchemaV1).all()] assert len(entities) == 3 # with warning message out, _ = capfd.readouterr() assert "While we will do our best to ensure compatibility, " in out def test_version_mixing(self) -> None: engine = generate_engine() mock_use(engine, SchemaV2, entity_num=2) mock_use(engine, SchemaV3, entity_num=3) # 1 upgrade mock_use(engine, SchemaV2, entity_num=1) mock_use(engine, SchemaV1, entity_num=4) mock_use(engine, SchemaV3, entity_num=2) # 2 tables: 1 current and 1 legacy assert inspect(engine).has_table(TABLENAME) assert inspect(engine).has_table(f"{TABLENAME}_v2") # 12(all) rows in current table entities = [entity for entity in sessionmaker(bind=engine)().query(SchemaV3).all()] assert len(entities) == 12 def test_version_across_same_schema_version(self, capfd) -> None: engine = generate_engine() # when 3->4, no warning message mock_use(engine, SchemaV3) mock_use(engine, SchemaV4) out, _ = capfd.readouterr() assert "While we will do our best to ensure compatibility, " not in out # same schema, no warning message mock_use(engine, SchemaV4) out, _ = capfd.readouterr() assert "While we will do our best to ensure compatibility, " not in out # when 4->3, warning message on upgrade should be printed mock_use(engine, SchemaV3) out, _ = capfd.readouterr() assert "While we will do our best to ensure compatibility, " in out def test_db_without_schema_info(self) -> None: engine = generate_engine() # manually create a table to avoid creation of schema_info table with engine.begin() as connection: connection.execute(text(f"CREATE TABLE {TABLENAME} (column1 TEXT PRIMARY KEY);")) connection.execute( text(f"INSERT INTO {TABLENAME} (column1) VALUES (:column1);"), {"column1": random_string()}, ) mock_use(engine, SchemaV3) # 2 tables: 1 current and 1 legacy with name containing timestamp assert inspect(engine).has_table(TABLENAME) # 2 rows in current table entities = [entity for entity in sessionmaker(bind=engine)().query(SchemaV3).all()] assert len(entities) == 2 @pytest.mark.sdk_test @pytest.mark.unittest class TestTransaction: def test_commit(self) -> None: engine = generate_engine() engine = support_transaction(engine) tablename = "transaction_test" sql = f"CREATE TABLE {tablename} (id INTEGER PRIMARY KEY);" with engine.begin() as connection: connection.execute(text(sql)) connection.commit() assert inspect(engine).has_table(tablename) def test_rollback(self) -> None: engine = generate_engine() engine = support_transaction(engine) tablename = "transaction_test" sql = f"CREATE TABLE {tablename} (id INTEGER PRIMARY KEY);" with engine.begin() as connection: connection.execute(text(sql)) connection.rollback() assert not inspect(engine).has_table(tablename) def test_exception_during_transaction(self) -> None: engine = generate_engine() engine = support_transaction(engine) tablename = "transaction_test" sql = f"CREATE TABLE {tablename} (id INTEGER PRIMARY KEY);" try: with engine.begin() as connection: connection.execute(text(sql)) # raise exception, so that SQLAlchemy should help rollback raise Exception("test exception") except Exception: pass assert not inspect(engine).has_table(tablename)
promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_orm.py/0
{ "file_path": "promptflow/src/promptflow/tests/sdk_cli_test/unittests/test_orm.py", "repo_id": "promptflow", "token_count": 3383 }
60
{ "subscription_id": "sub1", "resource_group": "rg1", "workspace_name": "ws1" }
promptflow/src/promptflow/tests/test_configs/configs/mock_flow1/.azureml/config.json/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/configs/mock_flow1/.azureml/config.json", "repo_id": "promptflow", "token_count": 44 }
61
name: my_custom_strong_type_connection type: custom custom_type: MyFirstConnection module: my_tool_package.connections package: test-custom-tools package_version: 0.0.2 configs: api_base: "new_value" secrets: # must-have api_key: "******"
promptflow/src/promptflow/tests/test_configs/connections/update_custom_strong_type_connection.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/connections/update_custom_strong_type_connection.yaml", "repo_id": "promptflow", "token_count": 86 }
62
from dataclasses import dataclass @dataclass class Data: text: str models: list def my_flow(text: str = "default_text", models: list = ["default_model"]): return Data(text=text, models=models)
promptflow/src/promptflow/tests/test_configs/eager_flows/flow_with_dataclass_output/entry.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/eager_flows/flow_with_dataclass_output/entry.py", "repo_id": "promptflow", "token_count": 74 }
63
path: ./entry.py entry: my_flow
promptflow/src/promptflow/tests/test_configs/eager_flows/primitive_output/flow.dag.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/eager_flows/primitive_output/flow.dag.yaml", "repo_id": "promptflow", "token_count": 12 }
64
[ { "expected_node_count": 2, "expected_outputs":{ "text": "hello world" }, "expected_bypassed_nodes":[] } ]
promptflow/src/promptflow/tests/test_configs/flows/activate_with_no_inputs/expected_result.json/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/activate_with_no_inputs/expected_result.json", "repo_id": "promptflow", "token_count": 92 }
65
{ "text": "bypass" }
promptflow/src/promptflow/tests/test_configs/flows/all_nodes_bypassed/inputs.json/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/all_nodes_bypassed/inputs.json", "repo_id": "promptflow", "token_count": 14 }
66
from promptflow import tool import time @tool def passthrough_str_and_wait_sync(input1: str, wait_seconds=3) -> str: assert isinstance(input1, str), f"input1 should be a string, got {input1}" print(f"Wait for {wait_seconds} seconds in sync function") for i in range(wait_seconds): print(i) time.sleep(1) return input1
promptflow/src/promptflow/tests/test_configs/flows/async_tools_with_sync_tools/sync_passthrough.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/async_tools_with_sync_tools/sync_passthrough.py", "repo_id": "promptflow", "token_count": 134 }
67
import random import time from promptflow import tool @tool def get_current_city(): """Get current city.""" # Generating a random number between 0.2 and 1 for tracing purpose time.sleep(random.uniform(0.2, 1)) return random.choice(["Beijing", "Shanghai"])
promptflow/src/promptflow/tests/test_configs/flows/chat-with-assistant-no-file/get_current_city.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/chat-with-assistant-no-file/get_current_city.py", "repo_id": "promptflow", "token_count": 92 }
68
{"accuracy": 0.67}
promptflow/src/promptflow/tests/test_configs/flows/classification_accuracy_evaluation/expected_metrics.json/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/classification_accuracy_evaluation/expected_metrics.json", "repo_id": "promptflow", "token_count": 8 }
69
from promptflow import tool @tool def choose_investigation_method(method1="Skip job info extractor", method2="Skip incident info extractor"): method = {} if method1: method["first"] = method1 if method2: method["second"] = method2 return method
promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_activate/investigation_method.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/conditional_flow_with_activate/investigation_method.py", "repo_id": "promptflow", "token_count": 97 }
70
language: csharp inputs: question: type: string default: what is promptflow? outputs: answer: type: string reference: ${get_answer.output} nodes: - name: get_answer type: csharp source: type: package tool: (Basic)Basic.Flow.HelloWorld inputs: question: ${inputs.question}
promptflow/src/promptflow/tests/test_configs/flows/csharp_flow/flow.dag.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/csharp_flow/flow.dag.yaml", "repo_id": "promptflow", "token_count": 118 }
71
import os import pip def extract_intent(chat_prompt: str): from langchain.chat_models import AzureChatOpenAI from langchain.schema import HumanMessage if "AZURE_OPENAI_API_KEY" not in os.environ: # load environment variables from .env file try: from dotenv import load_dotenv except ImportError: # This can be removed if user using custom image. pip.main(["install", "python-dotenv"]) from dotenv import load_dotenv load_dotenv() chat = AzureChatOpenAI( deployment_name=os.environ["CHAT_DEPLOYMENT_NAME"], openai_api_key=os.environ["AZURE_OPENAI_API_KEY"], openai_api_base=os.environ["AZURE_OPENAI_API_BASE"], openai_api_type="azure", openai_api_version="2023-03-15-preview", temperature=0, ) reply_message = chat([HumanMessage(content=chat_prompt)]) return reply_message.content def generate_prompt(customer_info: str, history: list, user_prompt_template: str): from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.prompts.prompt import PromptTemplate chat_history_text = "\n".join( [message["role"] + ": " + message["content"] for message in history] ) prompt_template = PromptTemplate.from_template(user_prompt_template) chat_prompt_template = ChatPromptTemplate.from_messages( [ HumanMessagePromptTemplate(prompt=prompt_template) ] ) return chat_prompt_template.format_prompt(customer_info=customer_info, chat_history=chat_history_text).to_string() if __name__ == "__main__": import json with open("./data/denormalized-flat.jsonl", "r") as f: data = [json.loads(line) for line in f.readlines()] # only ten samples data = data[:10] # load template from file with open("user_intent_zero_shot.md", "r") as f: user_prompt_template = f.read() # each test for item in data: chat_prompt = generate_prompt(item["customer_info"], item["history"], user_prompt_template) reply = extract_intent(chat_prompt) print("=====================================") # print("Customer info: ", item["customer_info"]) # print("+++++++++++++++++++++++++++++++++++++") print("Chat history: ", item["history"]) print("+++++++++++++++++++++++++++++++++++++") print(reply) print("+++++++++++++++++++++++++++++++++++++") print(f"Ground Truth: {item['intent']}") print("=====================================")
promptflow/src/promptflow/tests/test_configs/flows/export/linux/flow/intent.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/export/linux/flow/intent.py", "repo_id": "promptflow", "token_count": 1014 }
72
import os from promptflow import tool @tool def get_env_var(key: str): print(os.environ.get(key)) # get from env var return {"value": os.environ.get(key)}
promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment_variables/print_env.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment_variables/print_env.py", "repo_id": "promptflow", "token_count": 67 }
73
{"text": "Hello 123 日本語"} {"text": "World 123 日本語"}
promptflow/src/promptflow/tests/test_configs/flows/flow_with_non_english_input/data.jsonl/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_non_english_input/data.jsonl", "repo_id": "promptflow", "token_count": 30 }
74
def foo(param: str) -> str: return f"{param} from func foo"
promptflow/src/promptflow/tests/test_configs/flows/flow_with_sys_inject/custom_lib/foo.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/flow_with_sys_inject/custom_lib/foo.py", "repo_id": "promptflow", "token_count": 25 }
75
from promptflow import tool @tool def echo(input: str) -> str: return input
promptflow/src/promptflow/tests/test_configs/flows/llm_tool/echo.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/llm_tool/echo.py", "repo_id": "promptflow", "token_count": 27 }
76
inputs: number: type: int outputs: output: type: int reference: ${mod_two.output.value} nodes: - name: mod_two type: python source: type: code path: mod_two.py inputs: number: ${inputs.number}
promptflow/src/promptflow/tests/test_configs/flows/mod-n/two/flow.dag.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/mod-n/two/flow.dag.yaml", "repo_id": "promptflow", "token_count": 99 }
77
{"prompt": "What is the capital of the United States of America?", "stream": true} {"prompt": "What is the capital of the United States of America?", "stream": false}
promptflow/src/promptflow/tests/test_configs/flows/openai_completion_api_flow/inputs.jsonl/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/openai_completion_api_flow/inputs.jsonl", "repo_id": "promptflow", "token_count": 44 }
78
inputs: text: type: string outputs: output_prompt: type: string reference: ${summarize_text_content_prompt.output} nodes: - name: summarize_text_content_prompt type: prompt source: type: code path: summarize_text_content_prompt.jinja2 inputs: text: ${inputs.text}
promptflow/src/promptflow/tests/test_configs/flows/prompt_tools/flow.dag.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/prompt_tools/flow.dag.yaml", "repo_id": "promptflow", "token_count": 118 }
79
from promptflow import ToolProvider, tool class ScriptToolWithInit(ToolProvider): def __init__(self, init_input: str): super().__init__() self.init_input = init_input @tool def call(self, input: str): return str.join(" ", [self.init_input, input])
promptflow/src/promptflow/tests/test_configs/flows/script_tool_with_init/script_tool_with_init.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/script_tool_with_init/script_tool_with_init.py", "repo_id": "promptflow", "token_count": 110 }
80
from promptflow import tool @tool def passthrough(input: str): return input
promptflow/src/promptflow/tests/test_configs/flows/simple_aggregation/passthrough.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/simple_aggregation/passthrough.py", "repo_id": "promptflow", "token_count": 25 }
81
from promptflow import tool @tool def hello_world(name: str) -> str: return f"Hello World {name}!"
promptflow/src/promptflow/tests/test_configs/flows/simple_hello_world/hello_world.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/flows/simple_hello_world/hello_world.py", "repo_id": "promptflow", "token_count": 36 }
82
interactions: - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000 response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000", "name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location": "eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic", "tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}' headers: cache-control: - no-cache content-length: - '3630' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.022' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false response: body: string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}]}' headers: cache-control: - no-cache content-length: - '1372' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.068' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}' headers: cache-control: - no-cache content-length: - '1227' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.108' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '0' User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) method: POST uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets response: body: string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}' headers: cache-control: - no-cache content-length: - '134' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.158' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) x-ms-date: - Thu, 25 Jan 2024 09:43:28 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/simple_eager_flow_data.jsonl response: body: string: '' headers: accept-ranges: - bytes content-length: - '25' content-md5: - zt1zN1V/HR5p7N0Sh5396w== content-type: - application/octet-stream last-modified: - Tue, 23 Jan 2024 06:27:00 GMT server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 vary: - Origin x-ms-blob-type: - BlockBlob x-ms-creation-time: - Tue, 23 Jan 2024 06:26:59 GMT x-ms-meta-name: - 1e376ce4-7c3b-4683-82ad-412f5cd23626 x-ms-meta-upload_status: - completed x-ms-meta-version: - 7e65351c-7e4b-4a4d-90f8-304eacdc36bc x-ms-version: - '2023-11-03' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) x-ms-date: - Thu, 25 Jan 2024 09:43:32 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/simple_eager_flow_data.jsonl response: body: string: '' headers: server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 transfer-encoding: - chunked vary: - Origin x-ms-error-code: - BlobNotFound x-ms-version: - '2023-11-03' status: code: 404 message: The specified blob does not exist. - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}' headers: cache-control: - no-cache content-length: - '1227' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.212' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '0' User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.0 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) method: POST uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets response: body: string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}' headers: cache-control: - no-cache content-length: - '134' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.119' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) x-ms-date: - Thu, 25 Jan 2024 09:43:36 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/long_running/entry.py response: body: string: '' headers: accept-ranges: - bytes content-length: - '367' content-md5: - 7vIDardlf1klaJmWPh9h4w== content-type: - application/octet-stream last-modified: - Thu, 25 Jan 2024 09:38:51 GMT server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 vary: - Origin x-ms-blob-type: - BlockBlob x-ms-creation-time: - Thu, 25 Jan 2024 09:38:50 GMT x-ms-meta-name: - 0e590221-3014-45b3-97ca-b3c87651b95b x-ms-meta-upload_status: - completed x-ms-meta-version: - '1' x-ms-version: - '2023-11-03' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.11.5 (Windows-10-10.0.22621-SP0) x-ms-date: - Thu, 25 Jan 2024 09:43:39 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/long_running/entry.py response: body: string: '' headers: server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 transfer-encoding: - chunked vary: - Origin x-ms-error-code: - BlobNotFound x-ms-version: - '2023-11-03' status: code: 404 message: The specified blob does not exist. - request: body: '{"flowDefinitionDataStoreName": "workspaceblobstore", "flowDefinitionBlobPath": "LocalUpload/000000000000000000000000000000000000/long_running/flow.dag.yaml", "runId": "name", "runDisplayName": "name", "runExperimentName": "", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/000000000000000000000000000000000000/simple_eager_flow_data.jsonl"}, "inputsMapping": {}, "connections": {}, "environmentVariables": {}, "runtimeName": "fake-runtime-name", "sessionId": "000000000000000000000000000000000000000000000000", "sessionSetupMode": "SystemWait", "flowLineageId": "0000000000000000000000000000000000000000000000000000000000000000", "runDisplayNameGenerationType": "UserProvidedMacro"}' headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '785' Content-Type: - application/json User-Agent: - promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.11.5 (Windows-10-10.0.22621-SP0) method: POST uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/submit response: body: string: '"name"' headers: connection: - keep-alive content-length: - '38' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload x-content-type-options: - nosniff x-request-time: - '7.114' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.11.5 (Windows-10-10.0.22621-SP0) method: GET uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name response: body: string: '{"flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/e62bc4d5a164939b21d42dd420469da7/simple_eager_flow_data.jsonl"}, "flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic", "inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "9a52991b-8c36-40ad-9ccc-132524648ad7", "studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}' headers: connection: - keep-alive content-length: - '1028' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.185' status: code: 200 message: OK - request: body: null headers: Accept: - text/plain, application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '0' User-Agent: - promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.11.5 (Windows-10-10.0.22621-SP0) method: POST uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name/cancel response: body: string: '' headers: connection: - keep-alive content-length: - '0' strict-transport-security: - max-age=15724800; includeSubDomains; preload x-content-type-options: - nosniff x-request-time: - '0.177' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.11.5 (Windows-10-10.0.22621-SP0) method: GET uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name response: body: string: '{"flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/e62bc4d5a164939b21d42dd420469da7/simple_eager_flow_data.jsonl"}, "flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "automatic", "inputsMapping": {}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "9a52991b-8c36-40ad-9ccc-132524648ad7", "studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}' headers: connection: - keep-alive content-length: - '1028' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.369' status: code: 200 message: OK - request: body: '{"runId": "name", "selectRunMetadata": true, "selectRunDefinition": true, "selectJobSpecification": true}' headers: Accept: - '*/*' Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '137' Content-Type: - application/json User-Agent: - python-requests/2.31.0 method: POST uri: https://eastus.api.azureml.ms/history/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/rundata response: body: string: '{"runMetadata": {"runNumber": 1706175834, "rootRunId": "name", "createdUtc": "2024-01-25T09:43:54.0903218+00:00", "createdBy": {"userObjectId": "00000000-0000-0000-0000-000000000000", "userPuId": "100320005227D154", "userIdp": null, "userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "Han Wang", "upn": null}, "userId": "00000000-0000-0000-0000-000000000000", "token": null, "tokenExpiryTimeUtc": null, "error": {"error": {"code": "SystemError", "severity": null, "message": "''dict'' object has no attribute ''error_summary''", "messageFormat": "", "messageParameters": {}, "referenceCode": null, "detailsUri": null, "target": null, "details": [], "innerError": {"code": "AttributeError", "innerError": null}, "debugInfo": null, "additionalInfo": null}, "correlation": null, "environment": null, "location": null, "time": "2024-01-25T09:44:20.597641+00:00", "componentName": "promptflow-runtime/20240116.v1 Designer/1.0 promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.11.5 (Windows-10-10.0.22621-SP0) promptflow/0.0.116642424"}, "warnings": null, "revision": 7, "statusRevision": 4, "runUuid": "75b5d063-7b17-4a7a-af1b-0135fd51f196", "parentRunUuid": null, "rootRunUuid": "75b5d063-7b17-4a7a-af1b-0135fd51f196", "lastStartTimeUtc": null, "currentComputeTime": null, "computeDuration": "00:00:12.5197205", "effectiveStartTimeUtc": null, "lastModifiedBy": {"userObjectId": "00000000-0000-0000-0000-000000000000", "userPuId": "100320005227D154", "userIdp": null, "userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "Han Wang", "upn": null}, "lastModifiedUtc": "2024-01-25T09:43:56.5440371+00:00", "duration": "00:00:12.5197205", "cancelationReason": null, "currentAttemptId": 1, "runId": "name", "parentRunId": null, "experimentId": "b0eb6a8a-ae84-4bfe-9741-689a08e793d8", "status": "Canceled", "startTimeUtc": "2024-01-25T09:44:06.9551144+00:00", "endTimeUtc": "2024-01-25T09:44:19.4748349+00:00", "scheduleId": null, "displayName": "name", "name": null, "dataContainerId": "dcid.name", "description": null, "hidden": false, "runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null, "traits": [], "attribution": "PromptFlow", "computeType": null}, "properties": {"azureml.promptflow.runtime_name": "automatic", "azureml.promptflow.runtime_version": "20240116.v1", "azureml.promptflow.definition_file_name": "flow.dag.yaml", "azureml.promptflow.flow_lineage_id": "5ea4d8ed1431bf7fbc26f2c74ba16e38e8927ed93ec75ff608f8e86712ef17c2", "azureml.promptflow.flow_definition_datastore_name": "workspaceblobstore", "azureml.promptflow.flow_definition_blob_path": "LocalUpload/57d6499b57cea9bbbeb8c3ea421d33b8/long_running/flow.dag.yaml", "azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/e62bc4d5a164939b21d42dd420469da7/simple_eager_flow_data.jsonl", "_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.session_id": "eaef6f595b07c179512ceebe471607e1d38375ad0fe3a637", "azureml.promptflow.snapshot_id": "9a52991b-8c36-40ad-9ccc-132524648ad7", "azureml.promptflow.run_mode": "Eager"}, "parameters": {}, "actionUris": {}, "scriptName": null, "target": null, "uniqueChildRunComputeTargets": [], "tags": {}, "settings": {}, "services": {}, "inputDatasets": [], "outputDatasets": [], "runDefinition": null, "jobSpecification": null, "primaryMetricName": null, "createdFrom": null, "cancelUri": null, "completeUri": null, "diagnosticsUri": null, "computeRequest": null, "compute": null, "retainForLifetimeOfWorkspace": false, "queueingInfo": null, "inputs": null, "outputs": null}, "runDefinition": null, "jobSpecification": null, "systemSettings": null}' headers: connection: - keep-alive content-length: - '4658' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.034' status: code: 200 message: OK version: 1
promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_eager_flow_cancel.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_eager_flow_cancel.yaml", "repo_id": "promptflow", "token_count": 11893 }
83
interactions: - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000 response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000", "name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location": "eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic", "tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}' headers: cache-control: - no-cache content-length: - '3630' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.037' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false response: body: string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}]}' headers: cache-control: - no-cache content-length: - '1372' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.085' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}' headers: cache-control: - no-cache content-length: - '1227' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding,Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.078' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '0' User-Agent: - promptflow-sdk/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: POST uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets response: body: string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}' headers: cache-control: - no-cache content-length: - '134' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.089' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) x-ms-date: - Fri, 12 Jan 2024 08:21:52 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/simple_hello_world.jsonl response: body: string: '' headers: accept-ranges: - bytes content-length: - '22' content-md5: - SaVvJK8fXJzgPgQkmSaCGA== content-type: - application/octet-stream last-modified: - Thu, 23 Nov 2023 12:11:21 GMT server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 vary: - Origin x-ms-blob-type: - BlockBlob x-ms-creation-time: - Thu, 23 Nov 2023 12:11:20 GMT x-ms-meta-name: - 74c8f1fa-9e14-4249-8fec-279efedeb400 x-ms-meta-upload_status: - completed x-ms-meta-version: - 2266d840-3ecd-4a91-9e63-8d57e7b0a62e x-ms-version: - '2023-11-03' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) x-ms-date: - Fri, 12 Jan 2024 08:21:53 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/simple_hello_world.jsonl response: body: string: '' headers: server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 transfer-encoding: - chunked vary: - Origin x-ms-error-code: - BlobNotFound x-ms-version: - '2023-11-03' status: code: 404 message: The specified blob does not exist. - request: body: '{"flowDefinitionResourceId": "azureml://registries/promptflow-preview/models/simple_hello_world/versions/202311241", "runId": "name", "runDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/000000000000000000000000000000000000/simple_hello_world.jsonl"}, "inputsMapping": {"name": "${data.name}"}, "connections": {}, "environmentVariables": {}, "runtimeName": "fake-runtime-name", "sessionId": "000000000000000000000000000000000000000000000000", "sessionSetupMode": "SystemWait", "flowLineageId": "0000000000000000000000000000000000000000000000000000000000000000", "runDisplayNameGenerationType": "UserProvidedMacro"}' headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '658' Content-Type: - application/json User-Agent: - promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.10.13 (Windows-10-10.0.22631-SP0) method: POST uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/submit response: body: string: '"name"' headers: connection: - keep-alive content-length: - '38' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload x-content-type-options: - nosniff x-request-time: - '6.218' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 azsdk-python-azuremachinelearningdesignerserviceclient/unknown Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://eastus.api.azureml.ms/flow/api/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/BulkRuns/name response: body: string: '{"flowGraph": {"nodes": [{"name": "hello_world", "type": "python", "source": {"type": "code", "path": "hello_world.py"}, "inputs": {"name": "${inputs.name}"}, "tool": "hello_world.py", "reduce": false}], "tools": [{"name": "hello_world.py", "type": "python", "inputs": {"name": {"type": ["string"], "allow_manual_entry": false, "is_multi_select": false, "input_type": "default"}}, "source": "hello_world.py", "function": "hello_world", "is_builtin": false, "enable_kwargs": false, "tool_state": "stable"}], "inputs": {"name": {"type": "string", "default": "hod", "is_chat_input": false}}, "outputs": {"result": {"type": "string", "reference": "${hello_world.output}", "evaluation_only": false, "is_chat_output": false}}}, "flowRunResourceId": "azureml://locations/eastus/workspaces/00000/flows/name/flowRuns/name", "flowRunId": "name", "flowRunDisplayName": "name", "batchDataInput": {"dataUri": "azureml://datastores/workspaceblobstore/paths/LocalUpload/79f088fae0e502653c43146c9682f425/simple_hello_world.jsonl"}, "flowRunType": "FlowRun", "flowType": "Default", "runtimeName": "test-runtime-ci", "inputsMapping": {"name": "${data.name}"}, "outputDatastoreName": "workspaceblobstore", "childRunBasePath": "promptflow/PromptFlowArtifacts/name/flow_artifacts", "flowDagFileRelativePath": "flow.dag.yaml", "flowSnapshotId": "2fc73c2b-511f-4059-8fc5-4fb6ce608f85", "studioPortalEndpoint": "https://ml.azure.com/runs/name?wsid=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000"}' headers: connection: - keep-alive content-length: - '1711' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.209' status: code: 200 message: OK - request: body: '{"runId": "name", "selectRunMetadata": true, "selectRunDefinition": true, "selectJobSpecification": true}' headers: Accept: - '*/*' Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '137' Content-Type: - application/json User-Agent: - python-requests/2.31.0 method: POST uri: https://eastus.api.azureml.ms/history/v1.0/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/rundata response: body: string: '{"runMetadata": {"runNumber": 1705047718, "rootRunId": "name", "createdUtc": "2024-01-12T08:21:58.5939458+00:00", "createdBy": {"userObjectId": "00000000-0000-0000-0000-000000000000", "userPuId": null, "userIdp": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587", "upn": null}, "userId": "00000000-0000-0000-0000-000000000000", "token": null, "tokenExpiryTimeUtc": null, "error": null, "warnings": null, "revision": 3, "statusRevision": 1, "runUuid": "4a452fe8-5f0e-48aa-a522-44b67e66a50e", "parentRunUuid": null, "rootRunUuid": "4a452fe8-5f0e-48aa-a522-44b67e66a50e", "lastStartTimeUtc": null, "currentComputeTime": "00:00:00", "computeDuration": null, "effectiveStartTimeUtc": null, "lastModifiedBy": {"userObjectId": "00000000-0000-0000-0000-000000000000", "userPuId": null, "userIdp": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userAltSecId": null, "userIss": "https://sts.windows.net/00000000-0000-0000-0000-000000000000/", "userTenantId": "00000000-0000-0000-0000-000000000000", "userName": "4cbd0e2e-aae4-4099-b4ba-94d3a4910587", "upn": null}, "lastModifiedUtc": "2024-01-12T08:22:02.0692705+00:00", "duration": null, "cancelationReason": null, "currentAttemptId": 1, "runId": "name", "parentRunId": null, "experimentId": "8ed8abac-3b31-48a5-9fa1-81b7fe487f46", "status": "Preparing", "startTimeUtc": null, "endTimeUtc": null, "scheduleId": null, "displayName": "name", "name": null, "dataContainerId": "dcid.name", "description": null, "hidden": false, "runType": "azureml.promptflow.FlowRun", "runTypeV2": {"orchestrator": null, "traits": [], "attribution": "PromptFlow", "computeType": "AmlcDsi"}, "properties": {"azureml.promptflow.runtime_name": "test-runtime-ci", "azureml.promptflow.runtime_version": "20231204.v4", "azureml.promptflow.definition_file_name": "flow.dag.yaml", "azureml.promptflow.session_id": "simple_hello_world", "azureml.promptflow.flow_lineage_id": "simple_hello_world", "azureml.promptflow.flow_definition_resource_id": "azureml://registries/promptflow-preview/models/simple_hello_world/versions/202311241", "azureml.promptflow.input_data": "azureml://datastores/workspaceblobstore/paths/LocalUpload/79f088fae0e502653c43146c9682f425/simple_hello_world.jsonl", "azureml.promptflow.inputs_mapping": "{\"name\":\"${data.name}\"}", "_azureml.evaluation_run": "promptflow.BatchRun", "azureml.promptflow.snapshot_id": "2fc73c2b-511f-4059-8fc5-4fb6ce608f85"}, "parameters": {}, "actionUris": {}, "scriptName": null, "target": null, "uniqueChildRunComputeTargets": [], "tags": {}, "settings": {}, "services": {}, "inputDatasets": [], "outputDatasets": [], "runDefinition": null, "jobSpecification": null, "primaryMetricName": null, "createdFrom": null, "cancelUri": null, "completeUri": null, "diagnosticsUri": null, "computeRequest": null, "compute": null, "retainForLifetimeOfWorkspace": false, "queueingInfo": null, "inputs": null, "outputs": null}, "runDefinition": null, "jobSpecification": null, "systemSettings": null}' headers: connection: - keep-alive content-length: - '3785' content-type: - application/json; charset=utf-8 strict-transport-security: - max-age=15724800; includeSubDomains; preload transfer-encoding: - chunked vary: - Accept-Encoding x-content-type-options: - nosniff x-request-time: - '0.042' status: code: 200 message: OK version: 1
promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_run_bulk_with_registry_flow.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_run_bulk_with_registry_flow.yaml", "repo_id": "promptflow", "token_count": 8027 }
84
interactions: - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000 response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000", "name": "00000", "type": "Microsoft.MachineLearningServices/workspaces", "location": "eastus", "tags": {}, "etag": null, "kind": "Default", "sku": {"name": "Basic", "tier": "Basic"}, "properties": {"discoveryUrl": "https://eastus.api.azureml.ms/discovery"}}' headers: cache-control: - no-cache content-length: - '3630' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains vary: - Accept-Encoding x-cache: - CONFIG_NOCACHE x-content-type-options: - nosniff x-request-time: - '0.017' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores?count=30&isDefault=true&orderByAsc=false response: body: string: '{"value": [{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}]}' headers: cache-control: - no-cache content-length: - '1372' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains vary: - Accept-Encoding x-cache: - CONFIG_NOCACHE x-content-type-options: - nosniff x-request-time: - '0.180' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}' headers: cache-control: - no-cache content-length: - '1227' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains vary: - Accept-Encoding x-cache: - CONFIG_NOCACHE x-content-type-options: - nosniff x-request-time: - '0.081' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '0' User-Agent: - promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: POST uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets response: body: string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}' headers: cache-control: - no-cache content-length: - '134' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains x-cache: - CONFIG_NOCACHE x-content-type-options: - nosniff x-request-time: - '0.120' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) x-ms-date: - Wed, 29 Nov 2023 09:03:57 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/webClassification3.jsonl response: body: string: '' headers: accept-ranges: - bytes content-length: - '379' content-md5: - lI/pz9jzTQ7Td3RHPL7y7w== content-type: - application/octet-stream last-modified: - Mon, 06 Nov 2023 08:30:18 GMT server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 vary: - Origin x-ms-blob-type: - BlockBlob x-ms-creation-time: - Mon, 06 Nov 2023 08:30:18 GMT x-ms-meta-name: - 94331215-cf7f-452a-9f1a-1d276bc9b0e4 x-ms-meta-upload_status: - completed x-ms-meta-version: - 3f163752-edb0-4afc-a6f5-b0a670bd7c24 x-ms-version: - '2023-11-03' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) x-ms-date: - Wed, 29 Nov 2023 09:03:58 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/webClassification3.jsonl response: body: string: '' headers: server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 transfer-encoding: - chunked vary: - Origin x-ms-error-code: - BlobNotFound x-ms-version: - '2023-11-03' status: code: 404 message: The specified blob does not exist. - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: GET uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore response: body: string: '{"id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore", "name": "workspaceblobstore", "type": "Microsoft.MachineLearningServices/workspaces/datastores", "properties": {"description": null, "tags": null, "properties": null, "isDefault": true, "credentials": {"credentialsType": "AccountKey"}, "intellectualProperty": null, "subscriptionId": "00000000-0000-0000-0000-000000000000", "resourceGroup": "00000", "datastoreType": "AzureBlob", "accountName": "fake_account_name", "containerName": "fake-container-name", "endpoint": "core.windows.net", "protocol": "https", "serviceDataAccessAuthIdentity": "WorkspaceSystemAssignedIdentity"}, "systemData": {"createdAt": "2023-04-08T02:53:06.5886442+00:00", "createdBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "createdByType": "Application", "lastModifiedAt": "2023-04-08T02:53:07.521127+00:00", "lastModifiedBy": "779301c0-18b2-4cdc-801b-a0a3368fee0a", "lastModifiedByType": "Application"}}' headers: cache-control: - no-cache content-length: - '1227' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains vary: - Accept-Encoding x-cache: - CONFIG_NOCACHE x-content-type-options: - nosniff x-request-time: - '0.123' status: code: 200 message: OK - request: body: null headers: Accept: - application/json Accept-Encoding: - gzip, deflate Connection: - keep-alive Content-Length: - '0' User-Agent: - promptflow-sdk/0.0.1 promptflow/0.0.1 azure-ai-ml/1.12.1 azsdk-python-mgmt-machinelearningservices/0.1.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) method: POST uri: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/00000/providers/Microsoft.MachineLearningServices/workspaces/00000/datastores/workspaceblobstore/listSecrets response: body: string: '{"secretsType": "AccountKey", "key": "dGhpcyBpcyBmYWtlIGtleQ=="}' headers: cache-control: - no-cache content-length: - '134' content-type: - application/json; charset=utf-8 expires: - '-1' pragma: - no-cache strict-transport-security: - max-age=31536000; includeSubDomains x-cache: - CONFIG_NOCACHE x-content-type-options: - nosniff x-request-time: - '0.124' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) x-ms-date: - Wed, 29 Nov 2023 09:04:02 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/LocalUpload/000000000000000000000000000000000000/flow_with_dict_input/flow.dag.yaml response: body: string: '' headers: accept-ranges: - bytes content-length: - '390' content-md5: - rvNrgMFl6rXC96Bo0HAgiQ== content-type: - application/octet-stream last-modified: - Wed, 29 Nov 2023 09:02:57 GMT server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 vary: - Origin x-ms-blob-type: - BlockBlob x-ms-creation-time: - Wed, 29 Nov 2023 09:02:56 GMT x-ms-meta-name: - fea20834-dda6-4ae9-a2bf-b8c08cd7e883 x-ms-meta-upload_status: - completed x-ms-meta-version: - '1' x-ms-version: - '2023-11-03' status: code: 200 message: OK - request: body: null headers: Accept: - application/xml Accept-Encoding: - gzip, deflate Connection: - keep-alive User-Agent: - azsdk-python-storage-blob/12.19.0 Python/3.10.13 (Windows-10-10.0.22631-SP0) x-ms-date: - Wed, 29 Nov 2023 09:04:03 GMT x-ms-version: - '2023-11-03' method: HEAD uri: https://fake_account_name.blob.core.windows.net/fake-container-name/az-ml-artifacts/000000000000000000000000000000000000/flow_with_dict_input/flow.dag.yaml response: body: string: '' headers: server: - Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0 transfer-encoding: - chunked vary: - Origin x-ms-error-code: - BlobNotFound x-ms-version: - '2023-11-03' status: code: 404 message: The specified blob does not exist. version: 1
promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_tools_json_ignored.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/recordings/test_run_operations_TestFlowRun_test_tools_json_ignored.yaml", "repo_id": "promptflow", "token_count": 6859 }
85
name: flow_run_20230629_101205 flow: ../flows/web_classification data: ../datas/webClassification1.jsonl column_mapping: url: "${data.url}" variant: ${summarize_text_content.variant_0} # run config: env related environment_variables: env_file
promptflow/src/promptflow/tests/test_configs/runs/sample_bulk_run_cloud.yaml/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/runs/sample_bulk_run_cloud.yaml", "repo_id": "promptflow", "token_count": 91 }
86
from promptflow import tool @tool def divide_num(num: int) -> int: return (int)(num / 2)
promptflow/src/promptflow/tests/test_configs/wrong_flows/node_circular_dependency/divide_num.py/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/wrong_flows/node_circular_dependency/divide_num.py", "repo_id": "promptflow", "token_count": 35 }
87
{% for val in values %} {{val}}
promptflow/src/promptflow/tests/test_configs/wrong_tools/no_end.jinja2/0
{ "file_path": "promptflow/src/promptflow/tests/test_configs/wrong_tools/no_end.jinja2", "repo_id": "promptflow", "token_count": 12 }
88
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK --> ## Security Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. ## Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) * Full paths of source file(s) related to the manifestation of the issue * The location of the affected source code (tag/branch/commit or direct URL) * Any special configuration required to reproduce the issue * Step-by-step instructions to reproduce the issue * Proof-of-concept or exploit code (if possible) * Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. ## Preferred Languages We prefer all communications to be in English. ## Policy Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). <!-- END MICROSOFT SECURITY.MD BLOCK -->
promptflow/SECURITY.md/0
{ "file_path": "promptflow/SECURITY.md", "repo_id": "promptflow", "token_count": 674 }
0
# Concepts In this section, you will learn the basic concepts of prompt flow. ```{toctree} :maxdepth: 1 concept-flows concept-tools concept-connections concept-variants design-principles ```
promptflow/docs/concepts/index.md/0
{ "file_path": "promptflow/docs/concepts/index.md", "repo_id": "promptflow", "token_count": 61 }
1
# Adding Category and Tags for Tool This document is dedicated to guiding you through the process of categorizing and tagging your tools for optimal organization and efficiency. Categories help you organize your tools into specific folders, making it much easier to find what you need. Tags, on the other hand, work like labels that offer more detailed descriptions. They enable you to quickly search and filter tools based on specific characteristics or functions. By using categories and tags, you'll not only tailor your tool library to your preferences but also save time by effortlessly finding the right tool for any task. | Attribute | Type | Required | Description | | --------- | ---- | -------- | ----------- | | category | str | No | Organizes tools into folders by common features. | | tags | dict | No | Offers detailed, searchable descriptions of tools through key-value pairs. | **Important Notes:** - Tools without an assigned category will be listed in the root folder. - Tools lacking tags will display an empty tags field. ## Prerequisites - Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.1.0 or later. ## How to add category and tags for a tool Run the command below in your tool project directory to automatically generate your tool YAML, use _-c_ or _--category_ to add category, and use _--tags_ to add tags for your tool: ``` python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> --category <tool_category> --tags <tool_tags> ``` Here, we use [an existing tool](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/my_tool_1.yaml) as an example. If you wish to create your own tool, please refer to the [create and use tool package](create-and-use-tool-package.md#create-custom-tool-package) guide. ``` cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.my_tool_1 -o my_tool_package\yamls\my_tool_1.yaml --category "test_tool" --tags "{'tag1':'value1','tag2':'value2'}" ``` In the auto-generated tool YAML file, the category and tags are shown as below: ```yaml my_tool_package.tools.my_tool_1.my_tool: function: my_tool inputs: connection: type: - CustomConnection input_text: type: - string module: my_tool_package.tools.my_tool_1 name: My First Tool description: This is my first tool type: python # Category and tags are shown as below. category: test_tool tags: tag1: value1 tag2: value2 ``` ## Tool with category and tags experience in VS Code extension Follow the [steps](create-and-use-tool-package.md#use-your-tool-from-vscode-extension) to use your tool via the VS Code extension. - Experience in the tool tree ![category_and_tags_in_tool_tree](../../media/how-to-guides/develop-a-tool/category_and_tags_in_tool_tree.png) - Experience in the tool list By clicking `More` in the visual editor, you can view your tools along with their category and tags: ![category_and_tags_in_tool_list](../../media/how-to-guides/develop-a-tool/category_and_tags_in_tool_list.png) Furthermore, you have the option to search or filter tools based on tags: ![filter_tools_by_tag](../../media/how-to-guides/develop-a-tool/filter_tools_by_tag.png)
promptflow/docs/how-to-guides/develop-a-tool/add-category-and-tags-for-tool.md/0
{ "file_path": "promptflow/docs/how-to-guides/develop-a-tool/add-category-and-tags-for-tool.md", "repo_id": "promptflow", "token_count": 1073 }
2
# Quick Start This guide will walk you through the fist step using of prompt flow code-first experience. **Prerequisite** - To make the most of this tutorial, you'll need: - Know how to program with Python :) - A basic understanding of Machine Learning can be beneficial, but it's not mandatory. **Learning Objectives** - Upon completing this tutorial, you should learn how to: - Setup your python environment to run prompt flow - Clone a sample flow & understand what's a flow - Understand how to edit the flow using visual editor or yaml - Test the flow using your favorite experience: CLI, SDK or VS Code Extension. ## Set up your dev environment 1. A python environment with version `python=3.9` or higher version like 3.10. It's recommended to use python environment manager [miniconda](https://docs.conda.io/en/latest/miniconda.html). After you have installed miniconda, run below commands to create a python environment: ```bash conda create --name pf python=3.9 conda activate pf ``` 2. Install `promptflow` and `promptflow-tools`. ```sh pip install promptflow promptflow-tools ``` 3. Check the installation. ```bash # should print promptflow version, e.g. "0.1.0b3" pf -v ``` ## Understand what's a flow A flow, represented as a YAML file, is a DAG of functions, which is connected via input/output dependencies, and executed based on the topology by prompt flow executor. See [Flows](../../concepts/concept-flows.md) for more details. ### Get the flow sample Clone the sample repo and check flows in folder [examples/flows](https://github.com/microsoft/promptflow/tree/main/examples/flows). ```bash git clone https://github.com/microsoft/promptflow.git ``` ### Understand flow directory The sample used in this tutorial is the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification) flow, which categorizes URLs into several predefined classes. Classification is a traditional machine learning task, and this sample illustrates how to perform classification using GPT and prompts. ```bash cd promptflow/examples/flows/standard/web-classification ``` A flow directory is a directory that contains all contents of a flow. Structure of flow folder: - **flow.dag.yaml**: The flow definition with inputs/outputs, nodes, tools and variants for authoring purpose. - **.promptflow/flow.tools.json**: It contains tools meta referenced in `flow.dag.yaml`. - **Source code files (.py, .jinja2)**: User managed, the code scripts referenced by tools. - **requirements.txt**: Python package dependencies for this flow. ![flow_dir](../media/how-to-guides/quick-start/flow_directory.png) In order to run this specific flow, you need to install its requirements first. ```sh pip install -r requirements.txt ``` ### Understand the flow yaml The entry file of a flow directory is [`flow.dag.yaml`](https://github.com/microsoft/promptflow/blob/main/examples/flows/standard/web-classification/flow.dag.yaml) which describes the `DAG(Directed Acyclic Graph)` of a flow. Below is a sample of flow DAG: ![flow_dag](../media/how-to-guides/quick-start/flow_dag.png) This graph is rendered by VS Code extension which will be introduced in the next section. ### Using VS Code Extension to visualize the flow _Note: Prompt flow VS Code Extension is highly recommended for flow development and debugging._ 1. Prerequisites for VS Code extension. - Install latest stable version of [VS Code](https://code.visualstudio.com/) - Install [VS Code Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) 2. Install [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) 3. Select python interpreter ![vscode](../media/how-to-guides/quick-start/vs_code_interpreter_0.png) ![vscode](../media/how-to-guides/quick-start/vs_code_interpreter_1.png) 2. Open dag in vscode. You can open the `flow.dag.yaml` as yaml file, or you can also open it in `visual editor`. ![vscode](../media/how-to-guides/quick-start/vs_code_dag_0.png) ## Develop and test your flow ### How to edit the flow To test your flow with varying input data, you have the option to modify the default input. If you are well-versed with the structure, you may also add or remove nodes to alter the flow's arrangement. ```yaml $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: url: type: string # change the default value of input url here default: https://play.google.com/store/apps/details?id=com.twitter.android ... ``` See more details of this topic in [Develop a flow](./develop-a-flow/index.md). ### Create necessary connections :::{note} If you are using `WSL` or other OS without default keyring storage backend, you may encounter `StoreConnectionEncryptionKeyError`, please refer to [FAQ](./faq.md#connection-creation-failed-with-storeconnectionencryptionkeyerror) for the solutions. ::: The [`connection`](../concepts/concept-connections.md) helps securely store and manage secret keys or other sensitive credentials required for interacting with LLM and other external tools for example Azure Content Safety. The sample flow [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification) uses connection `open_ai_connection` inside, e.g. `classify_with_llm` node needs to talk to `llm` using the connection. We need to set up the connection if we haven't added it before. Once created, the connection will be stored in local db and can be used in any flow. ::::{tab-set} :::{tab-item} CLI :sync: CLI Firstly we need a connection yaml file `connection.yaml`: If you are using Azure Open AI, prepare your resource follow with this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one. ```yaml $schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json name: open_ai_connection type: azure_open_ai api_key: <test_key> api_base: <test_base> api_type: azure api_version: <test_version> ``` If you are using OpenAI, sign up account via [OpenAI website](https://openai.com/), login and [find personal API key](https://platform.openai.com/account/api-keys), then use this yaml: ```yaml $schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json name: open_ai_connection type: open_ai api_key: "<user-input>" organization: "" # optional ``` Then we can use CLI command to create the connection. ```sh pf connection create -f connection.yaml ``` More command details can be found in [CLI reference](../reference/pf-command-reference.md). ::: :::{tab-item} SDK :sync: SDK In SDK, connections can be created and managed with `PFClient`. ```python from promptflow import PFClient from promptflow.entities import AzureOpenAIConnection # PFClient can help manage your runs and connections. pf = PFClient() try: conn_name = "open_ai_connection" conn = pf.connections.get(name=conn_name) print("using existing connection") except: connection = AzureOpenAIConnection( name=conn_name, api_key="<test_key>", api_base="<test_base>", api_type="azure", api_version="<test_version>", ) # use this if you have an existing OpenAI account # from promptflow.entities import OpenAIConnection # connection = OpenAIConnection( # name=conn_name, # api_key="<user-input>", # ) conn = pf.connections.create_or_update(connection) print("successfully created connection") print(conn) ``` ::: :::{tab-item} VS Code Extension :sync: VS Code Extension 1. Click the promptflow icon to enter promptflow control panel ![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_0.png) 2. Create your connection. ![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_1.png) ![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_2.png) ![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_3.png) ::: :::: Learn more on more actions like delete connection in: [Manage connections](./manage-connections.md). ### Test the flow :::{admonition} Note Testing flow will NOT create a batch run record, therefore it's unable to use commands like `pf run show-details` to get the run information. If you want to persist the run record, see [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md) ::: Assuming you are in working directory `promptflow/examples/flows/standard/` ::::{tab-set} :::{tab-item} CLI :sync: CLI Change the default input to the value you want to test. ![q_0](../media/how-to-guides/quick-start/flow-directory-and-dag-yaml.png) ```sh pf flow test --flow web-classification # "web-classification" is the directory name ``` ![flow-test-output-cli](../media/how-to-guides/quick-start/flow-test-output-cli.png) ::: :::{tab-item} SDK :sync: SDK The return value of `test` function is the flow/node outputs. ```python from promptflow import PFClient pf = PFClient() flow_path = "web-classification" # "web-classification" is the directory name # Test flow flow_inputs = {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g", "answer": "Channel", "evidence": "Url"} # The inputs of the flow. flow_result = pf.test(flow=flow_path, inputs=flow_inputs) print(f"Flow outputs: {flow_result}") # Test node in the flow node_name = "fetch_text_content_from_url" # The node name in the flow. node_inputs = {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g"} # The inputs of the node. node_result = pf.test(flow=flow_path, inputs=node_inputs, node=node_name) print(f"Node outputs: {node_result}") ``` ![Flow test outputs](../media/how-to-guides/quick-start/flow_test_output.png) ::: :::{tab-item} VS Code Extension :sync: VS Code Extension Use the code lens action on the top of the yaml editor to trigger flow test ![dag_yaml_flow_test](../media/how-to-guides/quick-start/test_flow_dag_yaml.gif) Click the run flow button on the top of the visual editor to trigger flow test. ![visual_editor_flow_test](../media/how-to-guides/quick-start/test_flow_dag_editor.gif) ::: :::: See more details of this topic in [Initialize and test a flow](./init-and-test-a-flow.md). ## Next steps Learn more on how to: - [Develop a flow](./develop-a-flow/index.md): details on how to develop a flow by writing a flow yaml from scratch. - [Initialize and test a flow](./init-and-test-a-flow.md): details on how develop a flow from scratch or existing code. - [Add conditional control to a flow](./add-conditional-control-to-a-flow.md): how to use activate config to add conditional control to a flow. - [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md): run and evaluate the flow using multi line data file. - [Deploy a flow](./deploy-a-flow/index.md): how to deploy the flow as a web app. - [Manage connections](./manage-connections.md): how to manage the endpoints/secrets information to access external services including LLMs. - [Prompt flow in Azure AI](../cloud/azureai/quick-start.md): run and evaluate flow in Azure AI where you can collaborate with team better. And you can also check our [examples](https://github.com/microsoft/promptflow/tree/main/examples), especially: - [Getting started with prompt flow](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb): the notebook covering the python sdk experience for sample introduced in this doc. - [Tutorial: Chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md): An end-to-end tutorial on how to build a high quality chat application with prompt flow, including flow development and evaluation with metrics.
promptflow/docs/how-to-guides/quick-start.md/0
{ "file_path": "promptflow/docs/how-to-guides/quick-start.md", "repo_id": "promptflow", "token_count": 3764 }
3
# Content Safety (Text) Azure Content Safety is a content moderation service developed by Microsoft that help users detect harmful content from different modalities and languages. This tool is a wrapper for the Azure Content Safety Text API, which allows you to detect text content and get moderation results. See the [Azure Content Safety](https://aka.ms/acs-doc) for more information. ## Requirements - For AzureML users, the tool is installed in default image, you can use the tool without extra installation. - For local users, `pip install promptflow-tools` > [!NOTE] > Content Safety (Text) tool is now incorporated into the latest `promptflow-tools` package. If you have previously installed the package `promptflow-contentsafety`, please uninstall it to avoid the duplication in your local tool list. ## Prerequisites - Create an [Azure Content Safety](https://aka.ms/acs-create) resource. - Add "Azure Content Safety" connection in prompt flow. Fill "API key" field with "Primary key" from "Keys and Endpoint" section of created resource. ## Inputs You can use the following parameters as inputs for this tool: | Name | Type | Description | Required | | ---- | ---- | ----------- | -------- | | text | string | The text that need to be moderated. | Yes | | hate_category | string | The moderation sensitivity for Hate category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for hate category. The other three options mean different degrees of strictness in filtering out hate content. The default option is *medium_sensitivity*. | Yes | | sexual_category | string | The moderation sensitivity for Sexual category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for sexual category. The other three options mean different degrees of strictness in filtering out sexual content. The default option is *medium_sensitivity*. | Yes | | self_harm_category | string | The moderation sensitivity for Self-harm category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for self-harm category. The other three options mean different degrees of strictness in filtering out self_harm content. The default option is *medium_sensitivity*. | Yes | | violence_category | string | The moderation sensitivity for Violence category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for violence category. The other three options mean different degrees of strictness in filtering out violence content. The default option is *medium_sensitivity*. | Yes | For more information, please refer to [Azure Content Safety](https://aka.ms/acs-doc) ## Outputs The following is an example JSON format response returned by the tool: <details> <summary>Output</summary> ```json { "action_by_category": { "Hate": "Accept", "SelfHarm": "Accept", "Sexual": "Accept", "Violence": "Accept" }, "suggested_action": "Accept" } ``` </details> The `action_by_category` field gives you a binary value for each category: *Accept* or *Reject*. This value shows if the text meets the sensitivity level that you set in the request parameters for that category. The `suggested_action` field gives you an overall recommendation based on the four categories. If any category has a *Reject* value, the `suggested_action` will be *Reject* as well.
promptflow/docs/reference/tools-reference/contentsafety_text_tool.md/0
{ "file_path": "promptflow/docs/reference/tools-reference/contentsafety_text_tool.md", "repo_id": "promptflow", "token_count": 917 }
4
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureContentSafetyConnection.schema.json name: azure_content_safety_connection type: azure_content_safety api_key: "<to-be-replaced>" endpoint: "endpoint" api_version: "2023-04-30-preview"
promptflow/examples/connections/azure_content_safety.yml/0
{ "file_path": "promptflow/examples/connections/azure_content_safety.yml", "repo_id": "promptflow", "token_count": 93 }
5
# Test your prompt variants for chat with math This is a prompt tuning case with 3 prompt variants for math question answering. By utilizing this flow, in conjunction with the `evaluation/eval-chat-math` flow, you can quickly grasp the advantages of prompt tuning and experimentation with prompt flow. Here we provide a [video](https://www.youtube.com/watch?v=gcIe6nk2gA4) and a [tutorial]((../../../tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md)) for you to get started. Tools used in this flow: - `llm` tool - custom `python` Tool ## Prerequisites Install promptflow sdk and other dependencies in this folder: ```bash pip install -r requirements.txt ``` ## Getting started ### 1 Create connection for LLM tool to use Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations. Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details. ```bash # Override keys with --set to avoid yaml file changes pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection ``` Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`. ```bash # show registered connection pf connection show --name open_ai_connection ``` ### 2 Start chatting ```bash # run chat flow with default question in flow.dag.yaml pf flow test --flow . # run chat flow with new question pf flow test --flow . --inputs question="2+5=?" # start a interactive chat session in CLI pf flow test --flow . --interactive # start a interactive chat session in CLI with verbose info pf flow test --flow . --interactive --verbose
promptflow/examples/flows/chat/chat-math-variant/README.md/0
{ "file_path": "promptflow/examples/flows/chat/chat-math-variant/README.md", "repo_id": "promptflow", "token_count": 623 }
6
inputs: chat_history: type: list default: - inputs: question: what is BERT? outputs: answer: BERT (Bidirectional Encoder Representations from Transformers) is a language representation model that pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. Unlike other language representation models, BERT can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks such as question answering and language inference, without substantial task-specific architecture modifications. BERT is effective for both fine-tuning and feature-based approaches. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement). pdf_url: type: string default: https://arxiv.org/pdf/1810.04805.pdf question: type: string is_chat_input: true default: what NLP tasks does it perform well? outputs: answer: type: string is_chat_output: true reference: ${chat_with_pdf_tool.output.answer} context: type: string reference: ${chat_with_pdf_tool.output.context} nodes: - name: setup_env type: python source: type: code path: setup_env.py inputs: conn: my_custom_connection - name: chat_with_pdf_tool type: python source: type: code path: chat_with_pdf_tool.py inputs: history: ${inputs.chat_history} pdf_url: ${inputs.pdf_url} question: ${inputs.question} ready: ${setup_env.output}
promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.single-node/0
{ "file_path": "promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.single-node", "repo_id": "promptflow", "token_count": 723 }
7
{"entities": ["software engineer","CEO"],"ground_truth": "\"CEO, Software Engineer, Finance Manager\""} {"entities": ["Software Engineer","CEO", "Finance Manager"],"ground_truth": "\"CEO, Software Engineer, Finance Manager\""}
promptflow/examples/flows/evaluation/eval-entity-match-rate/data.jsonl/0
{ "file_path": "promptflow/examples/flows/evaluation/eval-entity-match-rate/data.jsonl", "repo_id": "promptflow", "token_count": 57 }
8
from promptflow import tool def is_valid(input_item): return True if input_item and input_item.strip() else False @tool def validate_input(question: str, answer: str, documents: str, selected_metrics: dict) -> dict: input_data = {"question": is_valid(question), "answer": is_valid(answer), "documents": is_valid(documents)} expected_input_cols = set(input_data.keys()) dict_metric_required_fields = {"gpt_groundedness": set(["question", "answer", "documents"]), "gpt_relevance": set(["question", "answer", "documents"]), "gpt_retrieval_score": set(["question", "documents"])} actual_input_cols = set() for col in expected_input_cols: if input_data[col]: actual_input_cols.add(col) data_validation = selected_metrics for metric in selected_metrics: if selected_metrics[metric]: metric_required_fields = dict_metric_required_fields[metric] if metric_required_fields <= actual_input_cols: data_validation[metric] = True else: data_validation[metric] = False return data_validation
promptflow/examples/flows/evaluation/eval-qna-rag-metrics/validate_input.py/0
{ "file_path": "promptflow/examples/flows/evaluation/eval-qna-rag-metrics/validate_input.py", "repo_id": "promptflow", "token_count": 507 }
9
import os from openai.version import VERSION as OPENAI_VERSION from dotenv import load_dotenv from promptflow import tool # The inputs section will change based on the arguments of the tool function, after you save the code # Adding type to arguments and return value will help the system show the types properly # Please update the function name/signature per need def to_bool(value) -> bool: return str(value).lower() == "true" def get_client(): if OPENAI_VERSION.startswith("0."): raise Exception( "Please upgrade your OpenAI package to version >= 1.0.0 or using the command: pip install --upgrade openai." ) api_key = os.environ["AZURE_OPENAI_API_KEY"] conn = dict( api_key=os.environ["AZURE_OPENAI_API_KEY"], ) if api_key.startswith("sk-"): from openai import OpenAI as Client else: from openai import AzureOpenAI as Client conn.update( azure_endpoint=os.environ["AZURE_OPENAI_API_BASE"], api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2023-07-01-preview"), ) return Client(**conn) @tool def my_python_tool( prompt: str, # for AOAI, deployment name is customized by user, not model name. deployment_name: str, suffix: str = None, max_tokens: int = 120, temperature: float = 1.0, top_p: float = 1.0, n: int = 1, logprobs: int = None, echo: bool = False, stop: list = None, presence_penalty: float = 0, frequency_penalty: float = 0, best_of: int = 1, logit_bias: dict = {}, user: str = "", **kwargs, ) -> str: if "AZURE_OPENAI_API_KEY" not in os.environ: # load environment variables from .env file load_dotenv() if "AZURE_OPENAI_API_KEY" not in os.environ: raise Exception("Please specify environment variables: AZURE_OPENAI_API_KEY") # TODO: remove below type conversion after client can pass json rather than string. echo = to_bool(echo) response = get_client().completions.create( prompt=prompt, model=deployment_name, # empty string suffix should be treated as None. suffix=suffix if suffix else None, max_tokens=int(max_tokens), temperature=float(temperature), top_p=float(top_p), n=int(n), logprobs=int(logprobs) if logprobs else None, echo=echo, # fix bug "[] is not valid under any of the given schemas-'stop'" stop=stop if stop else None, presence_penalty=float(presence_penalty), frequency_penalty=float(frequency_penalty), best_of=int(best_of), # Logit bias must be a dict if we passed it to openai api. logit_bias=logit_bias if logit_bias else {}, user=user, ) # get first element because prompt is single. return response.choices[0].text
promptflow/examples/flows/standard/basic/hello.py/0
{ "file_path": "promptflow/examples/flows/standard/basic/hello.py", "repo_id": "promptflow", "token_count": 1164 }
10
{ "package": {}, "code": { "summarize_text_content.jinja2": { "type": "llm", "inputs": { "text": { "type": [ "string" ] } }, "description": "Summarize webpage content into a short paragraph." }, "summarize_text_content__variant_1.jinja2": { "type": "llm", "inputs": { "text": { "type": [ "string" ] } } }, "prepare_examples.py": { "type": "python", "function": "prepare_examples" } } }
promptflow/examples/flows/standard/flow-with-symlinks/.promptflow/flow.tools.json/0
{ "file_path": "promptflow/examples/flows/standard/flow-with-symlinks/.promptflow/flow.tools.json", "repo_id": "promptflow", "token_count": 321 }
11
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: source: type: string default: ./azure_open_ai.py outputs: code: type: string reference: ${combine_code.output} nodes: - name: load_code type: python source: type: code path: load_code_tool.py inputs: source: ${inputs.source} - name: divide_code type: python source: type: code path: divide_code_tool.py inputs: file_content: ${load_code.output} - name: generate_docstring type: python source: type: code path: generate_docstring_tool.py inputs: divided: ${divide_code.output} connection: open_ai_connection model: gpt-35-turbo - name: combine_code type: prompt source: type: code path: combine_code.jinja2 inputs: divided: ${generate_docstring.output} environment: python_requirements_txt: requirements.txt
promptflow/examples/flows/standard/gen-docstring/flow.dag.yaml/0
{ "file_path": "promptflow/examples/flows/standard/gen-docstring/flow.dag.yaml", "repo_id": "promptflow", "token_count": 352 }
12
from setuptools import find_packages, setup PACKAGE_NAME = "my-tools-package" setup( name=PACKAGE_NAME, version="0.0.12", description="This is my tools package", packages=find_packages(), entry_points={ "package_tools": ["my_tools = my_tool_package.tools.utils:list_package_tools"], }, include_package_data=True, # This line tells setuptools to include files from MANIFEST.in extras_require={ "azure": [ "azure-ai-ml>=1.11.0,<2.0.0" ] }, )
promptflow/examples/tools/tool-package-quickstart/setup.py/0
{ "file_path": "promptflow/examples/tools/tool-package-quickstart/setup.py", "repo_id": "promptflow", "token_count": 224 }
13
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json inputs: text: type: string default: Microsoft outputs: my_output: type: string reference: ${my_package_tool.output} nodes: - name: my_package_tool type: python source: type: package tool: my_tool_package.tools.tool_with_custom_strong_type_connection.my_tool inputs: connection: my_custom_connection input_text: ${inputs.text}
promptflow/examples/tools/use-cases/custom-strong-type-connection-package-tool-showcase/flow.dag.yaml/0
{ "file_path": "promptflow/examples/tools/use-cases/custom-strong-type-connection-package-tool-showcase/flow.dag.yaml", "repo_id": "promptflow", "token_count": 173 }
14
import json import logging from flask import Flask, jsonify, request from promptflow import load_flow from promptflow.connections import AzureOpenAIConnection from promptflow.entities import FlowContext from promptflow.exceptions import SystemErrorException, UserErrorException class SimpleScoreApp(Flask): pass app = SimpleScoreApp(__name__) logging.basicConfig(format="%(threadName)s:%(message)s") logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) # load flow as a function, the function object can be shared accross threads. f = load_flow("./echo_connection_flow/") @app.errorhandler(Exception) def handle_error(e): if isinstance(e, UserErrorException): return jsonify({"message": e.message, "additional_info": e.additional_info}), 400 elif isinstance(e, SystemErrorException): return jsonify({"message": e.message, "additional_info": e.additional_info}), 500 else: from promptflow._internal import ErrorResponse, ExceptionPresenter # handle other unexpected errors, can use internal class to format them # but interface may change in the future presenter = ExceptionPresenter.create(e) trace_back = presenter.formatted_traceback resp = ErrorResponse(presenter.to_dict(include_debug_info=False)) response_code = resp.response_code result = resp.to_simplified_dict() result.update({"trace_back": trace_back}) return jsonify(result), response_code @app.route("/health", methods=["GET"]) def health(): """Check if the runtime is alive.""" return {"status": "Healthy"} @app.route("/score", methods=["POST"]) def score(): """process a flow request in the runtime.""" raw_data = request.get_data() logger.info(f"Start loading request data '{raw_data}'.") data = json.loads(raw_data) # create a dummy connection object # the connection object will only exist in memory and won't store in local db. llm_connection = AzureOpenAIConnection( name="llm_connection", api_key="[determined by request]", api_base="[determined by request]" ) # configure flow contexts, create a new context object for each request to make sure they are thread safe. f.context = FlowContext( # override flow connections with connection object created above connections={"echo_connection": {"connection": llm_connection}}, # override the flow nodes' inputs or other flow configs, the overrides may come from the request # **Note**: after this change, node "echo_connection" will take input node_input from request overrides={"nodes.echo_connection.inputs.node_input": data["node_input"]} if "node_input" in data else {}, ) # data in request will be passed to flow as kwargs result_dict = f(**data) # Note: if specified streaming=True in the flow context, the result will be a generator # reference promptflow._sdk._serving.response_creator.ResponseCreator on how to handle it in app. return jsonify(result_dict) def create_app(**kwargs): return app if __name__ == "__main__": # test this with curl -X POST http://127.0.0.1:5000/score --header "Content-Type: application/json" --data '{"flow_input": "some_flow_input", "node_input": "some_node_input"}' # noqa: E501 create_app().run(debug=True)
promptflow/examples/tutorials/flow-deploy/create-service-with-flow/simple_score.py/0
{ "file_path": "promptflow/examples/tutorials/flow-deploy/create-service-with-flow/simple_score.py", "repo_id": "promptflow", "token_count": 1105 }
15
<jupyter_start><jupyter_text>Getting started with prompt flow**Prerequisite** - To make the most of this tutorial, you'll need:- A local clone of the prompt flow repository- A Python environment with Jupyter Notebook support (such as Jupyter Lab or the Python extension for Visual Studio Code)- Know how to program with Python :)_A basic understanding of Machine Learning can be beneficial, but it's not mandatory._**Learning Objectives** - Upon completing this tutorial, you should be able to:- Run your first prompt flow sample- Run your first evaluationThe sample used in this tutorial is the [web-classification](../../flows/standard/web-classification/README.md) flow, which categorizes URLs into several predefined classes. Classification is a traditional machine learning task, and this sample illustrates how to perform classification using GPT and prompts. 0. Install dependent packages<jupyter_code>%pip install -r ../../requirements.txt<jupyter_output><empty_output><jupyter_text>1. Create necessary connectionsConnection helps securely store and manage secret keys or other sensitive credentials required for interacting with LLM and other external tools for example Azure Content Safety.In this notebook, we will use flow `web-classification` which uses connection `open_ai_connection` inside, we need to set up the connection if we haven't added it before. After created, it's stored in local db and can be used in any flow.Prepare your Azure Open AI resource follow this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.<jupyter_code>import json from promptflow import PFClient from promptflow.connections import AzureOpenAIConnection, OpenAIConnection # client can help manage your runs and connections. pf = PFClient() try: conn_name = "open_ai_connection" conn = pf.connections.get(name=conn_name) print("using existing connection") except: # Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure Open AI resource. connection = AzureOpenAIConnection( name=conn_name, api_key="<test_key>", api_base="<test_base>", api_type="azure", api_version="<test_version>", ) # use this if you have an existing OpenAI account # connection = OpenAIConnection( # name=conn_name, # api_key="<user-input>", # ) conn = pf.connections.create_or_update(connection) print("successfully created connection") print(conn)<jupyter_output><empty_output><jupyter_text>2. Run web-classification flow`web-classification` is a flow demonstrating multi-class classification with LLM. Given an url, it will classify the url into one web category with just a few shots, simple summarization and classification prompts. Set flow path<jupyter_code>flow = "../../flows/standard/web-classification" # path to the flow directory<jupyter_output><empty_output><jupyter_text>Quick test<jupyter_code># Test flow flow_inputs = { "url": "https://play.google.com/store/apps/details?id=com.twitter.android", } flow_result = pf.test(flow=flow, inputs=flow_inputs) print(f"Flow result: {flow_result}") # Test single node in the flow node_name = "fetch_text_content_from_url" node_inputs = { "url": "https://play.google.com/store/apps/details?id=com.twitter.android" } flow_result = pf.test(flow=flow, inputs=node_inputs, node=node_name) print(f"Node result: {flow_result}")<jupyter_output><empty_output><jupyter_text>Flow as a functionWe have also implemented a syntex sugar where you can consume a flow like a python function, with ability to override connections, inputs and other runtime configs.Reference [here](./flow-as-function.ipynb) for more details.<jupyter_code>from promptflow import load_flow flow_func = load_flow(flow) flow_result = flow_func(**flow_inputs) print(f"Flow function result: {flow_result}")<jupyter_output><empty_output><jupyter_text>Batch run with a data file (with multiple lines of test data)<jupyter_code>data = "../../flows/standard/web-classification/data.jsonl" # path to the data file # create run with default variant base_run = pf.run(flow=flow, data=data, stream=True) details = pf.get_details(base_run) details.head(10)<jupyter_output><empty_output><jupyter_text>3. Evaluate your flowThen you can use an evaluation method to evaluate your flow. The evaluation methods are also flows which use Python or LLM etc., to calculate metrics like accuracy, relevance score.In this notebook, we use `classification-accuracy-eval` flow to evaluate. This is a flow illustrating how to evaluate the performance of a classification system. It involves comparing each prediction to the groundtruth and assigns a "Correct" or "Incorrect" grade, and aggregating the results to produce metrics such as accuracy, which reflects how good the system is at classifying the data. Run evaluation on the previous batch runThe **base_run** is the batch run we completed in step 2 above, for web-classification flow with "data.jsonl" as input.<jupyter_code>eval_flow = "../../flows/evaluation/eval-classification-accuracy" eval_run = pf.run( flow=eval_flow, data="../../flows/standard/web-classification/data.jsonl", # path to the data file run=base_run, # specify base_run as the run you want to evaluate column_mapping={ "groundtruth": "${data.answer}", "prediction": "${run.outputs.category}", }, # map the url field from the data to the url input of the flow stream=True, ) details = pf.get_details(eval_run) details.head(10) metrics = pf.get_metrics(eval_run) print(json.dumps(metrics, indent=4)) pf.visualize([base_run, eval_run])<jupyter_output><empty_output><jupyter_text>By now you've successfully run your first prompt flow and even did evaluation on it. That's great!You can check out the [web-classification](../../flows/standard/web-classification/) flow and the [classification-accuracy](../../flows/evaluation/eval-classification-accuracy/) flow for more details, and start building your own flow.Or you can move on for a more advanced topic: experiment with a variant. Another batch run with a variant[Variant](../../../docs/concepts/concept-variants.md) in prompt flow is to allow you do experimentation with LLMs. You can set a variant of Prompt/LLM node pointing to different prompt or use different LLM parameters like temperature.In this example, `web-classification`'s node `summarize_text_content` has two variants: `variant_0` and `variant_1`. The difference between them is the inputs parameters:variant_0: - inputs: - deployment_name: gpt-35-turbo - max_tokens: '128' - temperature: '0.2' - text: ${fetch_text_content_from_url.output}variant_1: - inputs: - deployment_name: gpt-35-turbo - max_tokens: '256' - temperature: '0.3' - text: ${fetch_text_content_from_url.output}You can check the whole flow definition at [flow.dag.yaml](../../flows/standard/web-classification/flow.dag.yaml)<jupyter_code># use the variant1 of the summarize_text_content node. variant_run = pf.run( flow=flow, data=data, variant="${summarize_text_content.variant_1}", # here we specify node "summarize_text_content" to use variant 1 version. stream=True, ) details = pf.get_details(variant_run) details.head(10)<jupyter_output><empty_output><jupyter_text>Run evaluation on the variant runSo that later we can compare metrics and see which works better.<jupyter_code>eval_flow = "../../flows/evaluation/eval-classification-accuracy" eval_run_variant = pf.run( flow=eval_flow, data="../../flows/standard/web-classification/data.jsonl", # path to the data file run=variant_run, # use run as the variant column_mapping={ "groundtruth": "${data.answer}", "prediction": "${run.outputs.category}", }, # map the url field from the data to the url input of the flow stream=True, ) details = pf.get_details(eval_run_variant) details.head(10) metrics = pf.get_metrics(eval_run_variant) print(json.dumps(metrics, indent=4)) pf.visualize([eval_run, eval_run_variant])<jupyter_output><empty_output>
promptflow/examples/tutorials/get-started/quickstart.ipynb/0
{ "file_path": "promptflow/examples/tutorials/get-started/quickstart.ipynb", "repo_id": "promptflow", "token_count": 2618 }
16
// Get the head element let head = document.getElementsByTagName("head")[0]; // Create the script element let script = document.createElement("script"); script.async = true; script.src = "https://www.googletagmanager.com/gtag/js?id=G-KZXK5PFBZY"; // Create another script element for the gtag code let script2 = document.createElement("script"); script2.innerHTML = ` window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date());gtag('config', 'G-KZXK5PFBZY'); `; // Insert the script elements after the head element head.insertAdjacentElement("afterbegin", script2); head.insertAdjacentElement("afterbegin", script); // This is used to zoom in images when clicked on window.onload = () => { if (document.getElementById("lightbox") === null){ // Append lightbox div to each page let div = document.createElement('div'); div.innerHTML = '<div id="lightbox"></div>'; document.body.appendChild(div); } // (A) GET LIGHTBOX & ALL .ZOOMD IMAGES let all = document.getElementsByClassName("bd-article")[0].getElementsByTagName("img"), lightbox = document.getElementById("lightbox"); // (B) CLICK TO SHOW IMAGE IN LIGHTBOX // * SIMPLY CLONE INTO LIGHTBOX & SHOW if (all.length>0) { for (let i of all) { i.onclick = () => { let clone = i.cloneNode(); clone.className = ""; lightbox.innerHTML = ""; lightbox.appendChild(clone); lightbox.className = "show"; }; }} // (C) CLICK TO CLOSE LIGHTBOX lightbox.onclick = () => { lightbox.className = ""; }; }; if (window.location.pathname === "/promptflow/" || window.location.pathname === "/promptflow/index.html") { // This is used to control homepage background let observer = new MutationObserver(function(mutations) { const dark = document.documentElement.dataset.theme == 'dark'; document.body.style.backgroundSize = "100%"; document.body.style.backgroundPositionY = "bottom"; document.body.style.backgroundRepeat = "no-repeat" }) observer.observe(document.documentElement, {attributes: true, attributeFilter: ['data-theme']}); }
promptflow/scripts/docs/_static/custom.js/0
{ "file_path": "promptflow/scripts/docs/_static/custom.js", "repo_id": "promptflow", "token_count": 771 }
17
#!/usr/bin/env bash set -xe {{ command }}
promptflow/scripts/readme/ghactions_driver/bash_script/bash_script.sh.jinja2/0
{ "file_path": "promptflow/scripts/readme/ghactions_driver/bash_script/bash_script.sh.jinja2", "repo_id": "promptflow", "token_count": 16 }
18
- name: {{ step_name }} working-directory: ${{ '{{' }} github.workspace }} run: pf connection create --file {{ yaml_name }} --set api_key=${{ '{{' }} secrets.AOAI_API_KEY_TEST }} api_base=${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }}
promptflow/scripts/readme/ghactions_driver/workflow_steps/step_yml_create_aoai.yml.jinja2/0
{ "file_path": "promptflow/scripts/readme/ghactions_driver/workflow_steps/step_yml_create_aoai.yml.jinja2", "repo_id": "promptflow", "token_count": 90 }
19
# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. # --------------------------------------------------------- import argparse import time from pathlib import Path import requests from azure.ai.ml import MLClient, load_environment from azure.identity import AzureCliCredential ENVIRONMENT_YAML = Path(__file__).parent / "runtime-env" / "env.yaml" EXAMPLE_RUNTIME_NAME = "example-runtime-ci" TEST_RUNTIME_NAME = "test-runtime-ci" class PFSRuntimeHelper: def __init__(self, ml_client: MLClient): subscription_id = ml_client._operation_scope.subscription_id resource_group_name = ml_client._operation_scope.resource_group_name workspace_name = ml_client._operation_scope.workspace_name location = ml_client.workspaces.get().location self._request_url_prefix = ( f"https://{location}.api.azureml.ms/flow/api/subscriptions/{subscription_id}" f"/resourceGroups/{resource_group_name}/providers/Microsoft.MachineLearningServices" f"/workspaces/{workspace_name}/FlowRuntimes" ) token = ml_client._credential.get_token("https://management.azure.com/.default").token self._headers = {"Authorization": f"Bearer {token}"} def update_runtime(self, name: str, env_asset_id: str) -> None: body = { "runtimeDescription": "Runtime hosted on compute instance, serves for examples checks.", "environment": env_asset_id, "instanceCount": "", } response = requests.put( f"{self._request_url_prefix}/{name}", headers=self._headers, json=body, ) response.raise_for_status() def parse_args() -> argparse.Namespace: parser = argparse.ArgumentParser() parser.add_argument("--path", help="Path to config.json", type=str) return parser.parse_args() def init_ml_client( subscription_id: str, resource_group_name: str, workspace_name: str, ) -> MLClient: return MLClient( credential=AzureCliCredential(), subscription_id=subscription_id, resource_group_name=resource_group_name, workspace_name=workspace_name, ) def create_environment(ml_client: MLClient) -> str: environment = load_environment(source=ENVIRONMENT_YAML) env = ml_client.environments.create_or_update(environment) # have observed delay between environment creation and asset id availability while True: try: ml_client.environments.get(name=env.name, version=env.version) break except Exception: time.sleep(10) # get workspace id from REST workspace object resource_group_name = ml_client._operation_scope.resource_group_name workspace_name = ml_client._operation_scope.workspace_name location = ml_client.workspaces.get().location workspace_id = ml_client._workspaces._operation.get( resource_group_name=resource_group_name, workspace_name=workspace_name ).workspace_id # concat environment asset id asset_id = ( f"azureml://locations/{location}/workspaces/{workspace_id}" f"/environments/{env.name}/versions/{env.version}" ) return asset_id def main(args: argparse.Namespace): subscription_id, resource_group_name, workspace_name = MLClient._get_workspace_info(args.path) ml_client = init_ml_client( subscription_id=subscription_id, resource_group_name=resource_group_name, workspace_name=workspace_name, ) pfs_runtime_helper = PFSRuntimeHelper(ml_client=ml_client) print("creating environment...") env_asset_id = create_environment(ml_client=ml_client) print("created environment, asset id:", env_asset_id) print("updating runtime for test...") pfs_runtime_helper.update_runtime(name=TEST_RUNTIME_NAME, env_asset_id=env_asset_id) print("updating runtime for example...") pfs_runtime_helper.update_runtime(name=EXAMPLE_RUNTIME_NAME, env_asset_id=env_asset_id) print("runtime updated!") if __name__ == "__main__": main(args=parse_args())
promptflow/scripts/runtime_mgmt/update-runtime.py/0
{ "file_path": "promptflow/scripts/runtime_mgmt/update-runtime.py", "repo_id": "promptflow", "token_count": 1584 }
20
from ruamel.yaml import YAML from pathlib import Path def collect_tools_from_directory(base_dir) -> dict: tools = {} yaml = YAML() for f in Path(base_dir).glob("**/*.yaml"): with open(f, "r") as f: tools_in_file = yaml.load(f) for identifier, tool in tools_in_file.items(): tools[identifier] = tool return tools def list_package_tools(): """List package tools""" yaml_dir = Path(__file__).parents[1] / "yamls" return collect_tools_from_directory(yaml_dir)
promptflow/scripts/tool/templates/utils.py.j2/0
{ "file_path": "promptflow/scripts/tool/templates/utils.py.j2", "repo_id": "promptflow", "token_count": 235 }
21
{ "azure_open_ai_connection": { "type": "AzureOpenAIConnection", "value": { "api_key": "aoai-api-key", "api_base": "aoai-api-endpoint", "api_type": "azure", "api_version": "2023-07-01-preview" }, "module": "promptflow.connections" }, "serp_connection": { "type": "SerpConnection", "value": { "api_key": "serpapi-api-key" }, "module": "promptflow.connections" }, "custom_connection": { "type": "CustomConnection", "value": { "key1": "hey", "key2": "val2" }, "module": "promptflow.connections", "secret_keys": [ "key1" ] }, "gpt2_connection": { "type": "CustomConnection", "value": { "endpoint_url": "custom-endpoint-url", "model_family": "GPT2", "endpoint_api_key": "custom-endpoint-api-key" }, "module": "promptflow.connections", "secret_keys": [ "endpoint_api_key" ] }, "open_source_llm_ws_service_connection": { "type": "CustomConnection", "value": { "service_credential": "service-credential" }, "module": "promptflow.connections", "secret_keys": [ "service_credential" ] }, "open_ai_connection": { "type": "OpenAIConnection", "value": { "api_key": "openai-api-key", "organization": "openai-api-org" }, "module": "promptflow.connections" }, "azure_content_safety_connection": { "type": "AzureContentSafetyConnection", "value": { "api_key": "azure-content-safety-api-key", "endpoint": "azure-content-safety-endpoint-url", "api_version": "2023-10-01", "api_type": "Content Safety", "name": "prompt-flow-acs-tool-test" }, "module": "promptflow.connections" } }
promptflow/src/promptflow-tools/connections.json.example/0
{ "file_path": "promptflow/src/promptflow-tools/connections.json.example", "repo_id": "promptflow", "token_count": 815 }
22
promptflow.tools.embedding.embedding: name: Embedding description: Use Open AI's embedding model to create an embedding vector representing the input text. type: python module: promptflow.tools.embedding function: embedding inputs: connection: type: [AzureOpenAIConnection, OpenAIConnection] deployment_name: type: - string enabled_by: connection enabled_by_type: [AzureOpenAIConnection] capabilities: completion: false chat_completion: false embeddings: true model_list: - text-embedding-ada-002 - text-search-ada-doc-001 - text-search-ada-query-001 model: type: - string enabled_by: connection enabled_by_type: [OpenAIConnection] enum: - text-embedding-ada-002 - text-search-ada-doc-001 - text-search-ada-query-001 allow_manual_entry: true input: type: - string
promptflow/src/promptflow-tools/promptflow/tools/yamls/embedding.yaml/0
{ "file_path": "promptflow/src/promptflow-tools/promptflow/tools/yamls/embedding.yaml", "repo_id": "promptflow", "token_count": 403 }
23
# System: You are a marketing writing assistant.For user: You help come up with creative content ideas and content like marketing emails, blog posts, tweets, ad copy and product descriptions.You write in a friendly yet professional tone but can tailor your writing style that best works for a user-specified audience.If you do not know the answer to a question, respond by saying "I do not know the answer to your question." {% for item in chat_history %} # user: {{item.inputs.user_input}} # assistant: {{item.outputs.response}} {% endfor %} # user: {{user_input}}
promptflow/src/promptflow-tools/tests/test_configs/prompt_templates/marketing_writer/prompt.jinja2/0
{ "file_path": "promptflow/src/promptflow-tools/tests/test_configs/prompt_templates/marketing_writer/prompt.jinja2", "repo_id": "promptflow", "token_count": 153 }
24
{ "azure_open_ai_connection": { "type": "AzureOpenAIConnection", "value": { "api_key": "aoai-api-key", "api_base": "aoai-api-endpoint", "api_type": "azure", "api_version": "2023-07-01-preview" }, "module": "promptflow.connections" }, "bing_config": { "type": "BingConnection", "value": { "api_key": "bing-api-key" }, "module": "promptflow.connections" }, "bing_connection": { "type": "BingConnection", "value": { "api_key": "bing-api-key" }, "module": "promptflow.connections" }, "azure_content_safety_config": { "type": "AzureContentSafetyConnection", "value": { "api_key": "content-safety-api-key", "endpoint": "https://content-safety-canary-test.cognitiveservices.azure.com", "api_version": "2023-04-30-preview" }, "module": "promptflow.connections" }, "serp_connection": { "type": "SerpConnection", "value": { "api_key": "serpapi-api-key" }, "module": "promptflow.connections" }, "translate_connection": { "type": "CustomConnection", "value": { "api_key": "<your-key>", "api_endpoint": "https://api.cognitive.microsofttranslator.com/", "api_region": "global" }, "module": "promptflow.connections", "module": "promptflow.connections", "secret_keys": [ "api_key" ] }, "custom_connection": { "type": "CustomConnection", "value": { "key1": "hey", "key2": "val2" }, "module": "promptflow.connections", "secret_keys": [ "key1" ] }, "custom_strong_type_connection": { "type": "CustomConnection", "value": { "api_key": "<your-key>", "api_base": "This is my first custom connection.", "promptflow.connection.custom_type": "MyFirstConnection", "promptflow.connection.module": "my_tool_package.connections" }, "module": "promptflow.connections", "secret_keys": [ "api_key" ] }, "open_ai_connection": { "type": "OpenAIConnection", "value": { "api_key": "<your-key>", "organization": "<your-organization>" }, "module": "promptflow.connections" } }
promptflow/src/promptflow/dev-connections.json.example/0
{ "file_path": "promptflow/src/promptflow/dev-connections.json.example", "repo_id": "promptflow", "token_count": 988 }
25