promptflow_repo/promptflow/src/promptflow/.env.example

DEFAULT_SUBSCRIPTION_ID="your-subscription-id"
DEFAULT_RESOURCE_GROUP_NAME="your-resource-group-name"
DEFAULT_WORKSPACE_NAME="your-workspace-name"
DEFAULT_RUNTIME_NAME="test-runtime-ci"
PROMPT_FLOW_TEST_MODE="replay"
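The keys above are simple `KEY="value"` pairs. As a minimal stdlib sketch (illustrative only — promptflow itself loads `.env` files with `python-dotenv`, as listed in `setup.py` below), parsing them might look like:

```python
# Minimal sketch of parsing KEY="value" lines like those in .env.example.
# Illustrative only: the project itself uses python-dotenv for this.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = 'PROMPT_FLOW_TEST_MODE="replay"\nDEFAULT_RUNTIME_NAME="test-runtime-ci"'
print(parse_env(sample)["PROMPT_FLOW_TEST_MODE"])  # replay
```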
promptflow_repo/promptflow/src/promptflow/README.md
# Prompt flow
[PyPI](https://pypi.org/project/promptflow/)
[Python versions](https://pypi.python.org/pypi/promptflow/)
[CLI reference](https://microsoft.github.io/promptflow/reference/pf-command-reference.html)
[VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow)
[Documentation](https://microsoft.github.io/promptflow/index.html)
[Open an issue](https://github.com/microsoft/promptflow/issues/new/choose)
[Contributing](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md)
[License](https://github.com/microsoft/promptflow/blob/main/LICENSE)
> You're welcome to help make prompt flow better by
> participating in [discussions](https://github.com/microsoft/promptflow/discussions),
> opening [issues](https://github.com/microsoft/promptflow/issues/new/choose),
> or submitting [PRs](https://github.com/microsoft/promptflow/pulls).
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
With prompt flow, you will be able to:
- **Create and iteratively develop flow**
- Create executable [flows](https://microsoft.github.io/promptflow/concepts/concept-flows.html) that link LLMs, prompts, Python code and other [tools](https://microsoft.github.io/promptflow/concepts/concept-tools.html) together.
- Debug and iterate your flows, especially the [interaction with LLMs](https://microsoft.github.io/promptflow/concepts/concept-connections.html) with ease.
- **Evaluate flow quality and performance**
- Evaluate your flow's quality and performance with larger datasets.
- Integrate the testing and evaluation into your CI/CD system to ensure quality of your flow.
- **Streamlined development cycle for production**
- Deploy your flow to the serving platform you choose or integrate into your app's code base easily.
- (Optional but highly recommended) Collaborate with your team by leveraging the cloud version of [prompt flow in Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
------
## Installation
Ensure you have a working Python environment; `python=3.9` is recommended.
```sh
pip install promptflow promptflow-tools
```
## Quick Start ⚡
**Create a chatbot with prompt flow**
Run the following command to initiate a prompt flow from a chat template. It creates a folder named `my_chatbot` and generates the required files within it:
```sh
pf flow init --flow ./my_chatbot --type chat
```
**Set up a connection for your API key**
For an OpenAI key, establish a connection by running the following command, which uses the `openai.yaml` file in the `my_chatbot` folder to store your OpenAI key:
```sh
# Override keys with --set to avoid yaml file changes
pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection
```
For an Azure OpenAI key, establish the connection by running the following command, which uses the `azure_openai.yaml` file:
```sh
pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
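The `--set key=value` flags override fields from the YAML file at creation time instead of editing the file itself. A hedged sketch of that override behavior (the helper name is hypothetical, not promptflow's actual implementation):

```python
# Hypothetical sketch of applying "--set key=value" overrides to a loaded
# connection config dict; promptflow's real implementation differs.
def apply_overrides(config: dict, overrides: list) -> dict:
    updated = dict(config)  # leave the original (and the YAML file) untouched
    for item in overrides:
        key, _, value = item.partition("=")  # split on the first '='
        updated[key] = value
    return updated

conn = {"name": "open_ai_connection", "api_key": "<placeholder>"}
conn = apply_overrides(conn, ["api_key=sk-example", "api_base=https://example.openai.azure.com"])
print(conn["api_key"])  # sk-example
```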
**Chat with your flow**
In the `my_chatbot` folder, there's a `flow.dag.yaml` file that outlines the flow, including its inputs/outputs, nodes, connections, the LLM model, etc.
> Note that in the `chat` node, we're using a connection named `open_ai_connection` (specified in the `connection` field) and the `gpt-35-turbo` model (specified in the `deployment_name` field). The `deployment_name` field specifies the OpenAI model, or the Azure OpenAI deployment resource.
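As a hypothetical illustration of those fields, a chat node in `flow.dag.yaml` might look like this (placeholder values, not the generated file verbatim):

```yaml
- name: chat
  type: llm
  source:
    type: code
    path: chat.jinja2
  inputs:
    deployment_name: gpt-35-turbo   # OpenAI model, or Azure OpenAI deployment
    chat_history: ${inputs.chat_history}
    question: ${inputs.question}
  connection: open_ai_connection    # must match the connection created earlier
  api: chat
```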
Interact with your chatbot by running the following command (press `Ctrl + C` to end the session):
```sh
pf flow test --flow ./my_chatbot --interactive
```
#### Continue to delve deeper into [prompt flow](https://github.com/microsoft/promptflow).
promptflow_repo/promptflow/src/promptflow/setup.py

# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os
import re
from pathlib import Path
from typing import Any, Match, cast
from setuptools import find_packages, setup
PACKAGE_NAME = "promptflow"
PACKAGE_FOLDER_PATH = Path(__file__).parent / "promptflow"
with open(os.path.join(PACKAGE_FOLDER_PATH, "_version.py"), encoding="utf-8") as f:
    version = cast(Match[Any], re.search(r'^VERSION\s*=\s*[\'"]([^\'"]*)[\'"]', f.read(), re.MULTILINE)).group(1)

with open("README.md", encoding="utf-8") as f:
    readme = f.read()

with open("CHANGELOG.md", encoding="utf-8") as f:
    changelog = f.read()
REQUIRES = [
    "psutil",  # get process information when bulk run
    "httpx>=0.25.1",  # used to send http requests asynchronously
    "openai",  # promptflow._core.api_injector
    "flask>=2.2.3,<4.0.0",  # serving endpoint requirements
    "sqlalchemy>=1.4.48,<3.0.0",  # sqlite requirements
    # note that pandas 1.5.3 is the only version tested in ci before promptflow 0.1.0b7 is released,
    # and pandas 2.x.x will be the only version tested in ci after that.
    "pandas>=1.5.3,<3.0.0",  # load data requirements
    "python-dotenv>=1.0.0,<2.0.0",  # control plane sdk requirements, to load .env file
    "keyring>=24.2.0,<25.0.0",  # control plane sdk requirements, to access system keyring service
    "pydash>=6.0.0,<8.0.0",  # control plane sdk requirements, to support parameter overrides in schema
    # vulnerability: https://github.com/advisories/GHSA-5cpq-8wj7-hf2v
    "cryptography>=41.0.3,<42.0.0",  # control plane sdk requirements, to support connection encryption
    "colorama>=0.4.6,<0.5.0",  # producing colored terminal text for testing chat flow
    "tabulate>=0.9.0,<1.0.0",  # control plane sdk requirements, to print table in console
    "filelock>=3.4.0,<4.0.0",  # control plane sdk requirements, to lock for multiprocessing
    # we need to pin the version due to the issue: https://github.com/hwchase17/langchain/issues/5113
    "marshmallow>=3.5,<4.0.0",
    "gitpython>=3.1.24,<4.0.0",  # uses git info to generate flow id
    "tiktoken>=0.4.0",
    "strictyaml>=1.5.0,<2.0.0",  # used to identify exact location of validation error
    "waitress>=2.1.2,<3.0.0",  # used to serve local service
    "opencensus-ext-azure<2.0.0",  # configure opencensus to send telemetry to azure monitor
    "ruamel.yaml>=0.17.10,<1.0.0",  # used to generate connection templates with preserved comments
    "pyarrow>=14.0.1,<15.0.0",  # used to read parquet file with pandas.read_parquet
    "pillow>=10.1.0,<11.0.0",  # used to generate icon data URI for package tool
    "filetype>=1.2.0",  # used to detect the mime type for multimedia input
    "jsonschema>=4.0.0,<5.0.0",  # used to validate tool
    "docutils",  # used to generate description for tools
]
setup(
    name=PACKAGE_NAME,
    version=version,
    description="Prompt flow Python SDK - build high-quality LLM apps",
    long_description_content_type="text/markdown",
    long_description=readme + "\n\n" + changelog,
    license="MIT License",
    author="Microsoft Corporation",
    author_email="[email protected]",
    url="https://github.com/microsoft/promptflow",
    classifiers=[
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires="<4.0,>=3.8",
    install_requires=REQUIRES,
    extras_require={
        "azure": [
            "azure-core>=1.26.4,<2.0.0",
            "azure-storage-blob[aio]>=12.13.0,<13.0.0",  # add [aio] for async run download feature
            "azure-identity>=1.12.0,<2.0.0",
            "azure-ai-ml>=1.11.0,<2.0.0",
            "pyjwt>=2.4.0,<3.0.0",  # requirement of control plane SDK
        ],
        "executable": ["pyinstaller>=5.13.2", "streamlit>=1.26.0", "streamlit-quill<0.1.0", "bs4"],
        "pfs": [
            "flask-restx>=1.2.0,<2.0.0",
        ],
        "azureml-serving": [
            # AzureML connection dependencies
            "azure-identity>=1.12.0,<2.0.0",
            "azure-ai-ml>=1.11.0,<2.0.0",
            # OTel dependencies for monitoring
            "opentelemetry-api>=1.21.0,<2.0.0",
            "opentelemetry-sdk>=1.21.0,<2.0.0",
            "azure-monitor-opentelemetry>=1.1.1,<2.0.0",
            # MDC dependencies for monitoring
            "azureml-ai-monitoring>=0.1.0b3,<1.0.0",
        ],
    },
    packages=find_packages(),
    scripts=[
        "pf",
        "pf.bat",
    ],
    entry_points={
        "console_scripts": [
            "pfazure = promptflow._cli._pf_azure.entry:main",
            "pfs = promptflow._sdk._service.entry:main",
        ],
    },
    include_package_data=True,
    project_urls={
        "Bug Reports": "https://github.com/microsoft/promptflow/issues",
        "Source": "https://github.com/microsoft/promptflow",
    },
)
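The version-extraction regex in the `setup.py` above can be exercised on a sample `_version.py` body to confirm what it captures:

```python
# Demonstrates the VERSION regex from setup.py on a sample _version.py body.
import re

sample = 'VERSION = "1.2.3"\n'
match = re.search(r'^VERSION\s*=\s*[\'"]([^\'"]*)[\'"]', sample, re.MULTILINE)
print(match.group(1))  # 1.2.3
```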
promptflow_repo/promptflow/src/promptflow/pf.bat

@echo off
setlocal
SET PF_INSTALLER=PIP
IF EXIST "%~dp0\python.exe" (
"%~dp0\python.exe" -m promptflow._cli._pf.entry %*
) ELSE (
python -m promptflow._cli._pf.entry %*
)
promptflow_repo/promptflow/src/promptflow/NOTICE.txt

NOTICES AND INFORMATION
Do Not Translate or Localize
This software incorporates material from third parties.
Microsoft makes certain open source code available at https://3rdpartysource.microsoft.com,
or you may send a check or money order for US $5.00, including the product name,
the open source component name, platform, and version number, to:
Source Code Compliance Team
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
USA
Notwithstanding any other terms, you may reverse engineer this software to the extent
required to debug changes to any libraries licensed under the GNU Lesser General Public License.
---------------------------------------------------------
openai 0.27.8 - MIT
Copyright (c) OpenAI (https://openai.com)
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
flask 2.2.3 - BSD-2-Clause AND BSD-3-Clause
Copyright 2010 Pallets
copyright 2010 Pallets
Copyright (c) 2015 CERN.
(c) Copyright 2010 by http://domain.invalid/'>
BSD-2-Clause AND BSD-3-Clause
---------------------------------------------------------
---------------------------------------------------------
dataset 1.6.0 - MIT
Copyright (c) 2013, Open Knowledge Foundation, Friedrich Lindenberg, Gregor Aisch
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
sqlalchemy 1.4.48 - MIT
(c) Zeno Rocha
Copyright (c) Microsoft
Copyright Sphinx contributors
Copyright 2007-2023 by the Sphinx team
Copyright SQLAlchemy 1.4 Documentation
(c) OpenJS Foundation and other contributors
Copyright (c) 2005-2023 Michael Bayer and contributors
Copyright (c) 2010 Gaetan de Menten [email protected]
Copyright 2005-2023 SQLAlchemy authors and contributors
Copyright (c) Microsoft Corporation', Microsoft SQL Azure
Copyright (c) 2021 the SQLAlchemy authors and contributors
Copyright (c) 2010-2011 Gaetan de Menten [email protected]
Copyright 2007-2023, the SQLAlchemy authors and contributors
copyright u'2007-2023, the SQLAlchemy authors and contributors
Copyright (c) 2005-2023 the SQLAlchemy authors and contributors
Copyright (c) 2006-2023 the SQLAlchemy authors and contributors
Copyright (c) 2009-2023 the SQLAlchemy authors and contributors
Copyright (c) 2010-2023 the SQLAlchemy authors and contributors
Copyright (c) 2013-2023 the SQLAlchemy authors and contributors
Copyright (c) 2020-2023 the SQLAlchemy authors and contributors
copyright (c) 2007 Fisch Asset Management AG https://www.fam.ch
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
pandas 2.1.1 - BSD-2-Clause AND BSD-3-Clause
Copyright (c) 2009', join
Copyright 2014-2019, xarray
Copyright (c) 2012 Google Inc.
Copyright (c) 2015 Jared Hobbs
Copyright (c) 1994 David Burren
Copyright (c) 2011 Szabolcs Nagy
Copyright (c) 2011 Valentin Ochs
Copyright (c) 2017 Anthony Sottile
Copyright (c) 2005-2014 Rich Felker
Copyright (c) 2010, Albert Sweigart
Copyright (c) 2002 Michael Ringgaard
Copyright (c) 2003-2011 David Schultz
Copyright (c) 2008 Stephen L. Moshier
Copyright (c) 2011 by Enthought, Inc.
Copyright 2017- dateutil contributors
Copyright (c) 2003-2009 Bruce D. Evans
Copyright (c) 2001-2008 Ville Laurikari
Copyright (c) 2003-2009 Steven G. Kargl
Copyright (c) 1993,2004 Sun Microsystems
Copyright (c) 2001, 2002 Enthought, Inc.
Copyright (c) 2003-2012 SciPy Developers
Copyright (c) 2012, Lambda Foundry, Inc.
Copyright (c) 1994 Sun Microsystems, Inc.
Copyright (c) 2005-2011, NumPy Developers
Copyright (c) 2017 - dateutil contributors
Copyright (c) 2015- - dateutil contributors
Copyright (c) 2016, PyData Development Team
Copyright (c) 2020, PyData Development Team
Copyright 2017- Paul Ganssle <[email protected]>
Copyright (c) 2011-2022, Open source contributors
Copyright (c) 2008 The Android Open Source Project
Copyright (c) 2015- - Paul Ganssle <[email protected]>
Copyright (c) 2010-2012 Archipel Asset Management AB.
Copyright (c) 2007 Nick Galbreath nickg at modp dot com
Copyright (c) Donald Stufft and individual contributors
Copyright (c) 2014-2016 - Yaron de Leeuw <[email protected]>
Copyright (c) 2019 Hadley Wickham RStudio and Evan Miller
Copyright (c) 2008- Attractive Chaos <[email protected]>
Copyright (c) 2003-2011 - Gustavo Niemeyer <[email protected]>
Copyright (c) 1988-1993 The Regents of the University of California
Copyright (c) 2011-2013, ESN Social Software AB and Jonas Tarnstrom
Copyright (c) 2012-2014 - Tomi Pievilainen <[email protected]>
Copyright (c) 1995-2001 Corporation for National Research Initiatives
Copyright (c) 2008, 2009, 2011 by Attractive Chaos <[email protected]>
Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, The Netherlands
Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Python Software Foundation
Copyright (c) 2008-2011, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
BSD-2-Clause AND BSD-3-Clause
---------------------------------------------------------
---------------------------------------------------------
python-dotenv 1.0.0 - BSD-2-Clause AND BSD-3-Clause
Copyright (c) 2014, Saurabh Kumar
BSD-2-Clause AND BSD-3-Clause
---------------------------------------------------------
---------------------------------------------------------
keyring 24.2.0 - MIT
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
cryptography 41.0.3 - Apache-2.0 OR (Apache-2.0 AND BSD-3-Clause)
Copyright 2013-2023
copyright 2013-2023, Individual
Copyright (c) Individual contributors
Copyright (c) 2005-2020, NumPy Developers
Apache-2.0 OR (Apache-2.0 AND BSD-3-Clause)
---------------------------------------------------------
---------------------------------------------------------
colorama 0.4.6 - BSD-2-Clause AND BSD-3-Clause
Copyright Jonathan Hartley 2013
Copyright (c) 2010 Jonathan Hartley
Copyright Jonathan Hartley & Arnon Yaari, 2013-2020
BSD-2-Clause AND BSD-3-Clause
---------------------------------------------------------
---------------------------------------------------------
tabulate 0.9.0 - MIT
Copyright (c) 2011-2020 Sergey Astanin and contributors
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
filelock 3.12.2 - Unlicense
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>
---------------------------------------------------------
---------------------------------------------------------
azure-core 1.25.1 - MIT
Copyright (c) Microsoft Corporation
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
azure-storage-blob 12.13.1 - MIT
Copyright (c) Microsoft Corporation
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
azure-identity 1.11.0 - MIT
Copyright (c) Microsoft Corporation
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
azure-ai-ml 1.9.0 - MIT
Copyright (c) Microsoft Corporation
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
pyjwt 2.5.0 - MIT
Copyright 2015-2022 Jose Padilla
copyright 2015-2022, Jose Padilla
Copyright (c) 2015-2022 Jose Padilla
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
---------------------------------------------------------
---------------------------------------------------------
pathspec 0.10.1 - MPL-2.0
Copyright (c) 2013-2022 Caleb P. Burns credits dahlia <https://github.com/dahlia>
Mozilla Public License Version 2.0
1. Definitions
1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software.
1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution" means Covered Software of a particular Contributor.
1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof.
1.5. "Incompatible With Secondary Licenses" means
(a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License.
1.6. "Executable Form" means any form of the work other than Source Code Form.
1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software.
1.8. "License" means this document.
1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License.
1.10. "Modifications" means any of the following:
(a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or
(b) any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form" means the form of the work preferred for making modifications.
1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
(a) under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:
(a) for any code that a Contributor has removed from Covered Software; or
(b) for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or
(c) under Patent Claims infringed by Covered Software in the absence of its Contributions.
This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0.
---------------------------------------------------------
promptflow_repo/promptflow/src/promptflow/CHANGELOG.md
---------------------------------------------------------

# Release History
## 1.5.0 (Upcoming)
### Features Added
### Bugs Fixed
- [SDK/CLI] The inputs of node test now allow a referenced node's output value to be passed in directly.
### Improvements
- [SDK/CLI] For `pf run delete` and `pf connection delete`, introduce an option to skip confirmation prompts.
## 1.4.0 (2024.01.22)
### Features Added
- [Executor] Calculate system_metrics recursively in api_calls.
- [Executor] Add flow root level api_calls, so that user can overview the aggregated metrics of a flow.
- [Executor] Add @trace decorator to make it possible to log traces for functions that are called by tools.
- [SDK/CLI][azure] Switch automatic runtime's session provision to system wait.
- [SDK/CLI] Add `--skip-open-browser` option to `pf flow serve` to skip opening browser.
- [SDK/CLI][azure] Support submit flow to sovereign cloud.
- [SDK/CLI] Support `pf run delete` to delete a run irreversibly.
- [SDK/CLI][azure] Automatically put requirements.txt to flow.dag.yaml if exists in flow snapshot.
- [SDK/CLI] Support `pf upgrade` to upgrade prompt flow to the latest version.
- [SDK/CLI] Support env variables in yaml file.
### Bugs Fixed
- Fix unaligned inputs & outputs or pandas exception during get details against run in Azure.
- Fix loose flow path validation for run schema.
- Fix "Without Import Data" in run visualize page results from invalid JSON value (`-Infinity`, `Infinity` and `NaN`).
- Fix "ValueError: invalid width -1" when show-details against long column(s) in narrow terminal window.
- Fix invalid tool code generated when initializing the script tool with icon.
### Improvements
- [SDK/CLI] For `pfazure flow create`:
- If used by non-msft tenant user, use user name instead of user object id in the remote flow folder path. (e.g. `Users/<user-name>/promptflow`).
- When flow has unknown attributes, log warning instead of raising error.
- Use local flow folder name and timestamp as the azure flow file share folder name.
- [SDK/CLI] For `pf/pfazure run create`, when run has unknown attribute, log warning instead of raising error.
- Replace `pyyaml` with `ruamel.yaml` to adopt YAML 1.2 specification.
## 1.3.0 (2023.12.27)
### Features Added
- [SDK/CLI] Support `pfazure run cancel` to cancel a run on Azure AI.
- Add support to configure prompt flow home directory via environment variable `PF_HOME_DIRECTORY`.
- Please set before importing `promptflow`, otherwise it won't take effect.
- [Executor] Handle KeyboardInterrupt in flow test so that the final state is Canceled.
### Bugs Fixed
- [SDK/CLI] Fix single node run not working when consuming a sub-item of an upstream node.
### Improvements
- Change `ruamel.yaml` lower bound to 0.17.10.
- [SDK/CLI] Improve `pfazure run download` to handle large run data files.
- [Executor] Exit the process when all async tools are done or exceeded timeout after cancellation.
## 1.2.0 (2023.12.14)
### Features Added
- [SDK/CLI] Support `pfazure run download` to download run data from Azure AI.
- [SDK/CLI] Support `pf run create` to create a local run record from downloaded run data.
### Bugs Fixed
- [SDK/CLI] Remove telemetry warning when running commands.
- Empty node stdout & stderr to avoid large visualize HTML.
- Hide unnecessary fields in run list for better readability.
- Fix bug that ignores timeout lines in batch run status summary.
## 1.1.1 (2023.12.1)
### Bugs Fixed
- [SDK/CLI] Fix compatibility issue with `semantic-kernel==0.4.0.dev0` and `azure-ai-ml==1.12.0`.
- [SDK/CLI] Add back workspace information in CLI telemetry.
- [SDK/CLI] Disable the feature to customize user agent in CLI to avoid changes on operation context.
- Fix openai metrics calculator to adapt openai v1.
## 1.1.0 (2023.11.30)
### Features Added
- Add `pfazure flow show/list` to show or list flows from Azure AI.
- Display node status in run visualize page graph view.
- Add support for image input and output in prompt flow.
- [SDK/CLI] SDK/CLI will collect telemetry by default, user can use `pf config set telemetry.enabled=false` to opt out.
- Add `raise_on_error` for the stream run API; by default we raise for a failed run.
- Flow as function: consume a flow like a function with parameters mapped to flow inputs.
- Enable specifying the default output path for run.
- Use `pf config set run.output_path=<output-path>` to specify, and the run output path will be `<output-path>/<run-name>`.
- Introduce macro `${flow_directory}` for `run.output_path` in config, which will be replaced with corresponding flow directory.
- The flow directory cannot be set as run output path, which means `pf config set run.output_path='${flow_directory}'` is invalid; but you can use child folder, e.g. `pf config set run.output_path='${flow_directory}/.runs'`.
- Support pfazure run create with remote flow.
- For remote workspace flow: `pfazure run create --flow azureml:<flow-name>`
- For remote registry flow: `pfazure run create --flow azureml://registries/<registry-name>/models/<flow-name>/versions/<flow-version>`
- Support set logging level via environment variable `PF_LOGGING_LEVEL`, valid values includes `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`, default to `INFO`.
- Remove openai version restrictions
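The `run.output_path` rules above can be sketched in a few lines. The helper below is a hypothetical illustration of the `${flow_directory}` macro expansion and the "flow directory itself is invalid" check; `resolve_run_output_path` is not part of the promptflow SDK.

```python
from pathlib import Path

def resolve_run_output_path(configured: str, flow_directory: str) -> Path:
    # Hypothetical sketch: expand the ${flow_directory} macro, then enforce
    # the rule that the flow directory itself cannot be the run output path.
    expanded = Path(configured.replace("${flow_directory}", flow_directory))
    if expanded.resolve() == Path(flow_directory).resolve():
        raise ValueError("run.output_path cannot be the flow directory itself")
    return expanded

# A child folder of the flow directory is allowed:
print(resolve_run_output_path("${flow_directory}/.runs", "/tmp/my_flow"))
```

In the real CLI, the equivalent configuration would be set with `pf config set run.output_path='${flow_directory}/.runs'`.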
### Bugs Fixed
- [SDK/CLI] Fix node test raising "Required input(s) missing" when a node input is a dict.
- [SDK/CLI] Use run name as display name when display name is not specified (previously the flow folder name was used).
- [SDK/CLI] Fix `pf flow build` creating an unexpected extra layer in the dist folder.
- [SDK/CLI] Fix prompt flow deployment when the connections value may be None.
### Improvements
- Force 'az login' if using azureml connection provider in cli command.
- Add env variable 'PF_NO_INTERACTIVE_LOGIN' to disable interactive login if using azureml connection provider in promptflow sdk.
- Improved CLI invoke time.
- Bump `pydash` upper bound to 8.0.0.
- Bump `SQLAlchemy` upper bound to 3.0.0.
- Bump `flask` upper bound to 4.0.0, `flask-restx` upper bound to 2.0.0.
- Bump `ruamel.yaml` upper bound to 1.0.0.
## 1.0.0 (2023.11.09)
### Features Added
- [Executor] Add `enable_kwargs` tag in tools.json for customer python tool.
- [SDK/CLI] Support `pfazure flow create`. Create a flow on Azure AI from local flow folder.
- [SDK/CLI] Changed the behavior of column mapping `${run.inputs.xx}`: it now refers to the run's data columns instead of the run's input columns.
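A minimal sketch of how such a column mapping might be resolved. This is an illustrative, hypothetical helper (not the promptflow implementation); it only shows that, after this change, `${run.inputs.xx}` reads from the referenced run's data columns.

```python
import re

def resolve_column_mapping(mapping: dict, data_row: dict, run_data_row: dict) -> dict:
    # Hypothetical sketch: ${data.x} reads the current data row, while
    # ${run.inputs.x} now reads the referenced run's *data* columns.
    resolved = {}
    for name, value in mapping.items():
        match = re.fullmatch(r"\$\{(data|run\.inputs)\.(\w+)\}", str(value))
        if match is None:
            resolved[name] = value  # literal values pass through unchanged
        elif match.group(1) == "data":
            resolved[name] = data_row[match.group(2)]
        else:
            resolved[name] = run_data_row[match.group(2)]
    return resolved

print(resolve_column_mapping(
    {"url": "${data.url}", "groundtruth": "${run.inputs.answer}"},
    data_row={"url": "https://example.com"},
    run_data_row={"answer": "App"},
))
```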
### Bugs Fixed
- [SDK/CLI] Keep original format in run output.jsonl.
- [Executor] Fix the bug that raises an error when an aggregation node references a bypassed node
### Improvements
- [Executor] Set the outputs of the bypassed nodes as None
## 0.1.0b8 (2023.10.26)
### Features Added
- [Executor] Add average execution time and estimated execution time to batch run logs
- [SDK/CLI] Support `pfazure run archive/restore/update`.
- [SDK/CLI] Support custom strong type connection.
- [SDK/CLI] Enable telemetry support, disabled by default; use `pf config set cli.telemetry_enabled=true` to opt in.
- [SDK/CLI] Exposed function `from promptflow import load_run` to load run object from local YAML file.
- [Executor] Support `ToolProvider` for script tools.
### Bugs Fixed
- **pf config set**:
- Fix bug for workspace `connection.provider=azureml` doesn't work as expected.
- [SDK/CLI] Fix the bug that batch run logs were not displayed correctly when submitted via SDK/CLI.
- [SDK/CLI] Fix encoding issues when input is non-English with `pf flow test`.
- [Executor] Fix the bug that files containing "Private Use" Unicode characters could not be read.
- [SDK/CLI] Fix string-type data being converted to integer/float.
- [SDK/CLI] Remove the max rows limitation when loading data.
- [SDK/CLI] Fix `--set` not taking effect when creating a run from a file.
### Improvements
- [SDK/CLI] Experience improvements in `pf run visualize` page:
- Add column status.
- Support opening flow file by clicking run id.
## 0.1.0b7.post1 (2023.09.28)
### Bugs Fixed
- Fix extra dependency bug when importing `promptflow` without `azure-ai-ml` installed.
## 0.1.0b7 (2023.09.27)
### Features Added
- **pf flow validate**: support validating a flow
- **pf config set**: support set user-level promptflow config.
- Support workspace connection provider, usage: `pf config set connection.provider=azureml://subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace_name>`
- Support override openai connection's model when submitting a flow. For example: `pf run create --flow ./ --data ./data.jsonl --connection llm.model=xxx --column-mapping url='${data.url}'`
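The workspace connection-provider value shown above follows a fixed resource-ID pattern. Below is a small illustrative helper (not part of the promptflow API) that assembles it from the three workspace identifiers:

```python
def azureml_connection_provider(subscription_id: str, resource_group: str, workspace: str) -> str:
    # Illustrative helper: build the connection.provider value in the format
    # shown above; Microsoft.MachineLearningServices is the AzureML
    # resource provider.
    return (
        f"azureml://subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.MachineLearningServices"
        f"/workspaces/{workspace}"
    )

print(azureml_connection_provider("<subscription_id>", "<resource_group>", "<workspace_name>"))
```

The resulting string would then be passed to `pf config set connection.provider=<value>`.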
### Bugs Fixed
- [Flow build] Fix flow build file name and environment variable name when connection name contains space.
- Preserve the `.promptflow` folder when dumping a run snapshot.
- Read/write log file with encoding specified.
- Avoid inconsistent error message when executor exits abnormally.
- Align inputs & outputs row numbers so that a partially completed run does not break `pfazure run show-details`.
- Fix bug that failed to parse portal url for run data when the form is an asset id.
- Fix the issue of process hanging for a long time when running the batch run.
### Improvements
- [Executor][Internal] Improve error message with more details and actionable information.
- [SDK/CLI] `pf/pfazure run show-details`:
- Add `--max-results` option to control the number of results to display.
- Add `--all-results` option to display all results.
- Add validation for azure `PFClient` constructor in case wrong parameter is passed.
## 0.1.0b6 (2023.09.15)
### Features Added
- [promptflow][Feature] Store token metrics in run properties
### Bugs Fixed
- Refine error message body for flow_validator.py
- Refine error message body for run_tracker.py
- [Executor][Internal] Add some unit test to improve code coverage of log/metric
- [SDK/CLI] Update portal link to remove flight.
- [Executor][Internal] Improve inputs mapping's error message.
- [API] Resolve warnings/errors of sphinx build
## 0.1.0b5 (2023.09.08)
### Features Added
- **pf run visualize**: support lineage graph & display name in visualize page
### Bugs Fixed
- Add missing requirement `psutil` in `setup.py`
## 0.1.0b4 (2023.09.04)
### Features Added
- Support `pf flow build` commands
## 0.1.0b3 (2023.08.30)
- Minor bug fixes.
## 0.1.0b2 (2023.08.29)
- First preview version with major CLI & SDK features.
### Features Added
- **pf flow**: init/test/serve/export
- **pf run**: create/update/stream/list/show/show-details/show-metrics/visualize/archive/restore/export
- **pf connection**: create/update/show/list/delete
- Azure AI support:
- **pfazure run**: create/list/stream/show/show-details/show-metrics/visualize
## 0.1.0b1 (2023.07.20)
- Stub version on PyPI.
---------------------------------------------------------
promptflow_repo/promptflow/src/promptflow/tests/conftest.py
---------------------------------------------------------

import importlib
import json
import os
import tempfile
from multiprocessing import Lock
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from _constants import (
CONNECTION_FILE,
DEFAULT_REGISTRY_NAME,
DEFAULT_RESOURCE_GROUP_NAME,
DEFAULT_RUNTIME_NAME,
DEFAULT_SUBSCRIPTION_ID,
DEFAULT_WORKSPACE_NAME,
ENV_FILE,
)
from _pytest.monkeypatch import MonkeyPatch
from dotenv import load_dotenv
from filelock import FileLock
from pytest_mock import MockerFixture
from sdk_cli_azure_test.recording_utilities import SanitizedValues, is_replay
from promptflow._cli._utils import AzureMLWorkspaceTriad
from promptflow._constants import PROMPTFLOW_CONNECTIONS
from promptflow._core.connection_manager import ConnectionManager
from promptflow._core.openai_injector import inject_openai_api
from promptflow._utils.context_utils import _change_working_dir
from promptflow.connections import AzureOpenAIConnection
load_dotenv()
@pytest.fixture(scope="session", autouse=True)
def modify_work_directory():
os.chdir(Path(__file__).parent.parent.absolute())
@pytest.fixture(autouse=True, scope="session")
def mock_build_info():
"""Mock BUILD_INFO environment variable in pytest.
BUILD_INFO is set as an environment variable in the docker image, but not in local tests,
so we need to mock it in the test scenario. Rule - build_number is set as
ci-<BUILD_BUILDNUMBER> in the CI pipeline, and as local-pytest in local dev tests."""
if "BUILD_INFO" not in os.environ:
m = MonkeyPatch()
build_number = os.environ.get("BUILD_BUILDNUMBER", "")
build_info = {"build_number": f"ci-{build_number}" if build_number else "local-pytest"}
m.setenv("BUILD_INFO", json.dumps(build_info))
yield m
@pytest.fixture(autouse=True, scope="session")
def inject_api():
"""Inject OpenAI API during test session.
AOAI calls in promptflow should involve trace logging and header injection; inject
the function into API calls in the test scenario."""
inject_openai_api()
@pytest.fixture
def dev_connections() -> dict:
with open(CONNECTION_FILE, "r") as f:
return json.load(f)
@pytest.fixture
def use_secrets_config_file(mocker: MockerFixture):
mocker.patch.dict(os.environ, {PROMPTFLOW_CONNECTIONS: CONNECTION_FILE})
@pytest.fixture
def env_with_secrets_config_file():
_lock = Lock()
with _lock:
with open(ENV_FILE, "w") as f:
f.write(f"{PROMPTFLOW_CONNECTIONS}={CONNECTION_FILE}\n")
yield ENV_FILE
if os.path.exists(ENV_FILE):
os.remove(ENV_FILE)
@pytest.fixture
def azure_open_ai_connection() -> AzureOpenAIConnection:
return ConnectionManager().get("azure_open_ai_connection")
@pytest.fixture
def temp_output_dir() -> str:
with tempfile.TemporaryDirectory() as temp_dir:
yield temp_dir
@pytest.fixture
def prepare_symbolic_flow() -> str:
flows_dir = Path(__file__).parent / "test_configs" / "flows"
target_folder = flows_dir / "web_classification_with_symbolic"
source_folder = flows_dir / "web_classification"
with _change_working_dir(target_folder):
for file_name in os.listdir(source_folder):
if not Path(file_name).exists():
os.symlink(source_folder / file_name, file_name)
return target_folder
@pytest.fixture(scope="session")
def install_custom_tool_pkg():
# The tests could be running in parallel. Use a lock to prevent race conditions.
lock = FileLock("custom_tool_pkg_installation.lock")
with lock:
try:
import my_tool_package # noqa: F401
except ImportError:
import subprocess
import sys
subprocess.check_call([sys.executable, "-m", "pip", "install", "test-custom-tools==0.0.2"])
@pytest.fixture
def mocked_ws_triple() -> AzureMLWorkspaceTriad:
return AzureMLWorkspaceTriad("mock_subscription_id", "mock_resource_group", "mock_workspace_name")
@pytest.fixture(scope="session")
def mock_list_func():
"""Mock function object for dynamic list testing."""
def my_list_func(prefix: str = "", size: int = 10, **kwargs):
return [
{
"value": "fig0",
"display_value": "My_fig0",
"hyperlink": "https://www.bing.com/search?q=fig0",
"description": "this is 0 item",
},
{
"value": "kiwi1",
"display_value": "My_kiwi1",
"hyperlink": "https://www.bing.com/search?q=kiwi1",
"description": "this is 1 item",
},
]
return my_list_func
@pytest.fixture(scope="session")
def mock_module_with_list_func(mock_list_func):
"""Mock module object for dynamic list testing."""
mock_module = MagicMock()
mock_module.my_list_func = mock_list_func
mock_module.my_field = 1
original_import_module = importlib.import_module # Save this to prevent recursion
with patch.object(importlib, "import_module") as mock_import:
def side_effect(module_name, *args, **kwargs):
if module_name == "my_tool_package.tools.tool_with_dynamic_list_input":
return mock_module
else:
return original_import_module(module_name, *args, **kwargs)
mock_import.side_effect = side_effect
yield
# below fixtures are used for pfazure and global config tests
@pytest.fixture(scope="session")
def subscription_id() -> str:
if is_replay():
return SanitizedValues.SUBSCRIPTION_ID
else:
return os.getenv("PROMPT_FLOW_SUBSCRIPTION_ID", DEFAULT_SUBSCRIPTION_ID)
@pytest.fixture(scope="session")
def resource_group_name() -> str:
if is_replay():
return SanitizedValues.RESOURCE_GROUP_NAME
else:
return os.getenv("PROMPT_FLOW_RESOURCE_GROUP_NAME", DEFAULT_RESOURCE_GROUP_NAME)
@pytest.fixture(scope="session")
def workspace_name() -> str:
if is_replay():
return SanitizedValues.WORKSPACE_NAME
else:
return os.getenv("PROMPT_FLOW_WORKSPACE_NAME", DEFAULT_WORKSPACE_NAME)
@pytest.fixture(scope="session")
def runtime_name() -> str:
return os.getenv("PROMPT_FLOW_RUNTIME_NAME", DEFAULT_RUNTIME_NAME)
@pytest.fixture(scope="session")
def registry_name() -> str:
return os.getenv("PROMPT_FLOW_REGISTRY_NAME", DEFAULT_REGISTRY_NAME)
@pytest.fixture
def enable_logger_propagate():
"""This is for test cases that need to check the log output."""
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
original_value = logger.propagate
logger.propagate = True
yield
logger.propagate = original_value
---------------------------------------------------------
promptflow_repo/promptflow/src/promptflow/tests/_constants.py
---------------------------------------------------------

from pathlib import Path
PROMOTFLOW_ROOT = Path(__file__).parent.parent
RUNTIME_TEST_CONFIGS_ROOT = Path(PROMOTFLOW_ROOT / "tests/test_configs/runtime")
EXECUTOR_REQUESTS_ROOT = Path(PROMOTFLOW_ROOT / "tests/test_configs/executor_api_requests")
MODEL_ROOT = Path(PROMOTFLOW_ROOT / "tests/test_configs/e2e_samples")
CONNECTION_FILE = (PROMOTFLOW_ROOT / "connections.json").resolve().absolute().as_posix()
ENV_FILE = (PROMOTFLOW_ROOT / ".env").resolve().absolute().as_posix()
# below constants are used for pfazure and global config tests
DEFAULT_SUBSCRIPTION_ID = "96aede12-2f73-41cb-b983-6d11a904839b"
DEFAULT_RESOURCE_GROUP_NAME = "promptflow"
DEFAULT_WORKSPACE_NAME = "promptflow-eastus2euap"
DEFAULT_RUNTIME_NAME = "test-runtime-ci"
DEFAULT_REGISTRY_NAME = "promptflow-preview"
---------------------------------------------------------
promptflow_repo/promptflow/src/promptflow/tests/sdk_cli_global_config_test/conftest.py
---------------------------------------------------------

# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import pytest
from promptflow import PFClient
from promptflow._sdk._configuration import Configuration
AZUREML_RESOURCE_PROVIDER = "Microsoft.MachineLearningServices"
RESOURCE_ID_FORMAT = "/subscriptions/{}/resourceGroups/{}/providers/{}/workspaces/{}"
@pytest.fixture
def pf() -> PFClient:
return PFClient()
@pytest.fixture
def global_config(subscription_id: str, resource_group_name: str, workspace_name: str) -> None:
config = Configuration.get_instance()
if Configuration.CONNECTION_PROVIDER in config._config:
return
config.set_config(
Configuration.CONNECTION_PROVIDER,
"azureml:"
+ RESOURCE_ID_FORMAT.format(subscription_id, resource_group_name, AZUREML_RESOURCE_PROVIDER, workspace_name),
)
---------------------------------------------------------
promptflow_repo/promptflow/src/promptflow/tests/sdk_cli_global_config_test/e2etests/test_global_config.py
---------------------------------------------------------

from pathlib import Path
import pytest
FLOWS_DIR = Path(__file__).parent.parent.parent / "test_configs" / "flows"
DATAS_DIR = Path(__file__).parent.parent.parent / "test_configs" / "datas"
@pytest.mark.usefixtures("global_config")
@pytest.mark.e2etest
class TestGlobalConfig:
def test_basic_flow_bulk_run(self, pf) -> None:
data_path = f"{DATAS_DIR}/webClassification3.jsonl"
run = pf.run(flow=f"{FLOWS_DIR}/web_classification", data=data_path)
assert run.status == "Completed"
# Test repeated flow run execution
run = pf.run(flow=f"{FLOWS_DIR}/web_classification", data=data_path)
assert run.status == "Completed"
def test_connection_operations(self, pf) -> None:
connections = pf.connections.list()
assert len(connections) > 0, f"No connection found. Provider: {pf._connection_provider}"
# Assert create/update/delete not supported.
with pytest.raises(NotImplementedError):
pf.connections.create_or_update(connection=connections[0])
with pytest.raises(NotImplementedError):
pf.connections.delete(name="test_connection")
---------------------------------------------------------
promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/utils.py
---------------------------------------------------------

# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import contextlib
import getpass
import json
from typing import Any, Dict, List
from unittest import mock
import werkzeug
from flask.testing import FlaskClient
@contextlib.contextmanager
def check_activity_end_telemetry(
*,
expected_activities: List[Dict[str, Any]] = None,
**kwargs,
):
    if expected_activities is None:
        expected_activities = [kwargs] if kwargs else []
with mock.patch("promptflow._sdk._telemetry.activity.log_activity_end") as mock_telemetry:
yield
actual_activities = [call.args[0] for call in mock_telemetry.call_args_list]
    assert mock_telemetry.call_count == len(expected_activities), (
        f"telemetry should be called {len(expected_activities)} time(s) but got {mock_telemetry.call_count}:\n"
        f"{json.dumps(actual_activities, indent=2)}\n"
    )
default_expected_call = {
"first_call": True,
"activity_type": "PublicApi",
"completion_status": "Success",
"user_agent": f"promptflow-sdk/0.0.1 Werkzeug/{werkzeug.__version__} local_pfs/0.0.1",
}
for i, expected_activity in enumerate(expected_activities):
temp = default_expected_call.copy()
temp.update(expected_activity)
expected_activity = temp
for key, expected_value in expected_activity.items():
value = actual_activities[i][key]
assert (
value == expected_value
), f"{key} mismatch in {i+1}th call: expect {expected_value} but got {value}"
class PFSOperations:
CONNECTION_URL_PREFIX = "/v1.0/Connections"
RUN_URL_PREFIX = "/v1.0/Runs"
TELEMETRY_PREFIX = "/v1.0/Telemetries"
def __init__(self, client: FlaskClient):
self._client = client
def remote_user_header(self):
return {"X-Remote-User": getpass.getuser()}
def heartbeat(self):
return self._client.get("/heartbeat")
# connection APIs
def connection_operation_with_invalid_user(self, status_code=None):
response = self._client.get(f"{self.CONNECTION_URL_PREFIX}/", headers={"X-Remote-User": "invalid_user"})
if status_code:
assert status_code == response.status_code, response.text
return response
def list_connections(self, status_code=None):
response = self._client.get(f"{self.CONNECTION_URL_PREFIX}/", headers=self.remote_user_header())
if status_code:
assert status_code == response.status_code, response.text
return response
def delete_connection(self, name: str, status_code=None):
response = self._client.delete(f"{self.CONNECTION_URL_PREFIX}/{name}", headers=self.remote_user_header())
if status_code:
assert status_code == response.status_code, response.text
return response
def list_connections_by_provider(self, working_dir, status_code=None):
response = self._client.get(
f"{self.CONNECTION_URL_PREFIX}/",
query_string={"working_directory": working_dir},
headers=self.remote_user_header(),
)
if status_code:
assert status_code == response.status_code, response.text
return response
def get_connection(self, name: str, status_code=None):
response = self._client.get(f"{self.CONNECTION_URL_PREFIX}/{name}", headers=self.remote_user_header())
if status_code:
assert status_code == response.status_code, response.text
return response
def get_connections_by_provider(self, name: str, working_dir, status_code=None):
response = self._client.get(
f"{self.CONNECTION_URL_PREFIX}/{name}",
data={"working_directory": working_dir},
headers=self.remote_user_header(),
)
if status_code:
assert status_code == response.status_code, response.text
return response
def get_connection_with_secret(self, name: str, status_code=None):
response = self._client.get(
f"{self.CONNECTION_URL_PREFIX}/{name}/listsecrets", headers=self.remote_user_header()
)
if status_code:
assert status_code == response.status_code, response.text
return response
def get_connection_specs(self, status_code=None):
response = self._client.get(f"{self.CONNECTION_URL_PREFIX}/specs")
if status_code:
assert status_code == response.status_code, response.text
return response
# run APIs
def list_runs(self, status_code=None):
# TODO: add query parameters
response = self._client.get(f"{self.RUN_URL_PREFIX}/", headers=self.remote_user_header())
if status_code:
assert status_code == response.status_code, response.text
return response
def submit_run(self, request_body, status_code=None):
response = self._client.post(f"{self.RUN_URL_PREFIX}/submit", json=request_body)
if status_code:
assert status_code == response.status_code, response.text
return response
def update_run(
self, name: str, display_name: str = None, description: str = None, tags: str = None, status_code=None
):
request_body = {
"display_name": display_name,
"description": description,
"tags": tags,
}
response = self._client.put(f"{self.RUN_URL_PREFIX}/{name}", json=request_body)
if status_code:
assert status_code == response.status_code, response.text
return response
def archive_run(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/archive")
if status_code:
assert status_code == response.status_code, response.text
return response
def restore_run(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/restore")
if status_code:
assert status_code == response.status_code, response.text
return response
def delete_run(self, name: str, status_code=None):
response = self._client.delete(f"{self.RUN_URL_PREFIX}/{name}")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_run_visualize(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/visualize")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_run(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_child_runs(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/childRuns")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_node_runs(self, name: str, node_name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/nodeRuns/{node_name}")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_run_metadata(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/metaData")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_run_log(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/logContent")
if status_code:
assert status_code == response.status_code, response.text
return response
def get_run_metrics(self, name: str, status_code=None):
response = self._client.get(f"{self.RUN_URL_PREFIX}/{name}/metrics")
if status_code:
assert status_code == response.status_code, response.text
return response
# telemetry APIs
def create_telemetry(self, *, body, headers, status_code=None):
response = self._client.post(
f"{self.TELEMETRY_PREFIX}/",
headers={
**self.remote_user_header(),
**headers,
},
json=body,
)
if status_code:
assert status_code == response.status_code, response.text
return response
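Every `PFSOperations` method above repeats the same idiom: perform the request, assert the status code only when the caller supplies one, and return the raw response either way. The idiom in isolation — `FakeResponse` and `FakeClient` are hypothetical stand-ins for the Flask test client, used only for this sketch:

```python
# FakeResponse/FakeClient stand in for flask.testing.FlaskClient here,
# purely to show the optional status-code assertion idiom from PFSOperations.
class FakeResponse:
    def __init__(self, status_code: int, text: str = ""):
        self.status_code = status_code
        self.text = text


class FakeClient:
    def get(self, url: str) -> FakeResponse:
        return FakeResponse(200, "ok")


def list_runs(client: FakeClient, status_code=None) -> FakeResponse:
    response = client.get("/v1.0/Runs/")
    if status_code:
        # Assert only when the caller opts in; the response is returned regardless.
        assert status_code == response.status_code, response.text
    return response


resp = list_runs(FakeClient(), status_code=200)
```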
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/conftest.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import pytest
from flask.app import Flask
from promptflow import PFClient
from .utils import PFSOperations
@pytest.fixture
def app() -> Flask:
from promptflow._sdk._service.app import create_app
app, _ = create_app()
app.config.update({"TESTING": True})
yield app
@pytest.fixture
def pfs_op(app: Flask) -> PFSOperations:
client = app.test_client()
return PFSOperations(client)
@pytest.fixture(scope="session")
def pf_client() -> PFClient:
return PFClient()
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/.coveragerc
[run]
source =
*/promptflow/_sdk/_service/*
omit =
*/promptflow/_cli/*
*/promptflow/azure/*
*/promptflow/entities/*
*/promptflow/operations/*
*__init__.py*
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/__init__.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_run_apis.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import uuid
from dataclasses import fields
from pathlib import Path
import pytest
from promptflow import PFClient
from promptflow._sdk.entities import Run
from promptflow._sdk.operations._local_storage_operations import LocalStorageOperations
from promptflow.contracts._run_management import RunMetadata
from ..utils import PFSOperations, check_activity_end_telemetry
FLOW_PATH = "./tests/test_configs/flows/print_env_var"
DATA_PATH = "./tests/test_configs/datas/env_var_names.jsonl"
def create_run_against_multi_line_data(client: PFClient) -> Run:
return client.run(flow=FLOW_PATH, data=DATA_PATH)
@pytest.mark.usefixtures("use_secrets_config_file")
@pytest.mark.e2etest
class TestRunAPIs:
@pytest.fixture(autouse=True)
def _submit_run(self, pf_client):
self.run = create_run_against_multi_line_data(pf_client)
def test_list_runs(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.list"):
response = pfs_op.list_runs(status_code=200).json
assert len(response) >= 1
@pytest.mark.skip(reason="Task 2917711: cli command will give strange stdout in ci; re-enable after switch to sdk")
def test_submit_run(self, pfs_op: PFSOperations) -> None:
# run submit is done via cli, so no telemetry will be detected here
with check_activity_end_telemetry(expected_activities=[]):
response = pfs_op.submit_run(
{
"flow": Path(FLOW_PATH).absolute().as_posix(),
"data": Path(DATA_PATH).absolute().as_posix(),
},
status_code=200,
)
with check_activity_end_telemetry(activity_name="pf.runs.get"):
run_from_pfs = pfs_op.get_run(name=response.json["name"]).json
assert run_from_pfs
    def test_update_run(self, pfs_op: PFSOperations) -> None:
display_name = "new_display_name"
tags = {"key": "value"}
with check_activity_end_telemetry(activity_name="pf.runs.update"):
run_from_pfs = pfs_op.update_run(
name=self.run.name, display_name=display_name, tags=json.dumps(tags), status_code=200
).json
assert run_from_pfs["display_name"] == display_name
assert run_from_pfs["tags"] == tags
def test_archive_restore_run(self, pf_client: PFClient, pfs_op: PFSOperations) -> None:
run = create_run_against_multi_line_data(pf_client)
with check_activity_end_telemetry(
expected_activities=[
{"activity_name": "pf.runs.get", "first_call": False},
{"activity_name": "pf.runs.archive"},
]
):
pfs_op.archive_run(name=run.name, status_code=200)
runs = pfs_op.list_runs().json
assert not any([item["name"] == run.name for item in runs])
with check_activity_end_telemetry(
expected_activities=[
{"activity_name": "pf.runs.get", "first_call": False},
{"activity_name": "pf.runs.restore"},
]
):
pfs_op.restore_run(name=run.name, status_code=200)
runs = pfs_op.list_runs().json
assert any([item["name"] == run.name for item in runs])
def test_delete_run(self, pf_client: PFClient, pfs_op: PFSOperations) -> None:
run = create_run_against_multi_line_data(pf_client)
local_storage = LocalStorageOperations(run)
path = local_storage.path
assert path.exists()
with check_activity_end_telemetry(
expected_activities=[
{"activity_name": "pf.runs.get", "first_call": False},
{"activity_name": "pf.runs.delete"},
]
):
pfs_op.delete_run(name=run.name, status_code=204)
runs = pfs_op.list_runs().json
assert not any([item["name"] == run.name for item in runs])
assert not path.exists()
def test_visualize_run(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(
expected_activities=[
{"activity_name": "pf.runs.get", "first_call": False},
{"activity_name": "pf.runs.get", "first_call": False},
{"activity_name": "pf.runs.get_metrics", "first_call": False},
{"activity_name": "pf.runs.visualize"},
]
):
response = pfs_op.get_run_visualize(name=self.run.name, status_code=200)
assert response.data
def test_get_not_exist_run(self, pfs_op: PFSOperations) -> None:
random_name = str(uuid.uuid4())
with check_activity_end_telemetry(activity_name="pf.runs.get", completion_status="Failure"):
response = pfs_op.get_run(name=random_name)
assert response.status_code == 404
def test_get_run(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.get"):
run_from_pfs = pfs_op.get_run(name=self.run.name, status_code=200).json
assert run_from_pfs["name"] == self.run.name
def test_get_child_runs(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.get"):
run_from_pfs = pfs_op.get_child_runs(name=self.run.name, status_code=200).json
assert len(run_from_pfs) == 1
assert run_from_pfs[0]["parent_run_id"] == self.run.name
def test_get_node_runs(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.get"):
run_from_pfs = pfs_op.get_node_runs(name=self.run.name, node_name="print_env", status_code=200).json
assert len(run_from_pfs) == 1
assert run_from_pfs[0]["node"] == "print_env"
def test_get_run_log(self, pfs_op: PFSOperations, pf_client: PFClient) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.get"):
log = pfs_op.get_run_log(name=self.run.name, status_code=200)
assert not log.data.decode("utf-8").startswith('"')
def test_get_run_metrics(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.get"):
metrics = pfs_op.get_run_metrics(name=self.run.name, status_code=200).json
assert metrics is not None
def test_get_run_metadata(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(activity_name="pf.runs.get"):
metadata = pfs_op.get_run_metadata(name=self.run.name, status_code=200).json
for field in fields(RunMetadata):
assert field.name in metadata
assert metadata["name"] == self.run.name
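`test_get_run_metadata` above validates the payload by iterating `dataclasses.fields` of `RunMetadata`. The same pattern shown with a made-up dataclass:

```python
from dataclasses import dataclass, fields


@dataclass
class Meta:  # hypothetical stand-in for promptflow's RunMetadata
    name: str
    display_name: str


payload = {"name": "run1", "display_name": "Run 1", "extra": True}
# Every declared field must be present; extra keys in the payload are allowed.
missing = [f.name for f in fields(Meta) if f.name not in payload]
assert not missing
```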
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_telemetry_apis.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import pytest
from ..utils import PFSOperations, check_activity_end_telemetry
@pytest.mark.usefixtures("use_secrets_config_file")
@pytest.mark.e2etest
class TestTelemetryAPIs:
def test_post_telemetry(self, pfs_op: PFSOperations) -> None:
from promptflow._sdk._telemetry.activity import generate_request_id
request_id = generate_request_id()
user_agent = "prompt-flow-extension/1.8.0 (win32; x64) VS/0.0.1"
_ = pfs_op.create_telemetry(
body={
"eventType": "Start",
"timestamp": "2021-01-01T00:00:00Z",
"metadata": {
"activityName": "pf.flow.test",
"activityType": "InternalCall",
},
},
status_code=200,
headers={
"x-ms-promptflow-request-id": request_id,
"User-Agent": user_agent,
},
).json
with check_activity_end_telemetry(
activity_name="pf.flow.test",
activity_type="InternalCall",
user_agent=f"{user_agent} local_pfs/0.0.1",
request_id=request_id,
):
response = pfs_op.create_telemetry(
body={
"eventType": "End",
"timestamp": "2021-01-01T00:00:00Z",
"metadata": {
"activityName": "pf.flow.test",
"activityType": "InternalCall",
"completionStatus": "Success",
"durationMs": 1000,
},
},
headers={
"x-ms-promptflow-request-id": request_id,
"User-Agent": user_agent,
},
status_code=200,
).json
assert len(response) >= 1
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_connection_apis.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import tempfile
import uuid
from pathlib import Path
import mock
import pytest
from sdk_cli_azure_test.recording_utilities import is_replay
from promptflow import PFClient
from promptflow._sdk.entities import CustomConnection
from ..utils import PFSOperations, check_activity_end_telemetry
def create_custom_connection(client: PFClient) -> str:
name = str(uuid.uuid4())
connection = CustomConnection(name=name, configs={"api_base": "test"}, secrets={"api_key": "test"})
client.connections.create_or_update(connection)
return name
@pytest.mark.e2etest
class TestConnectionAPIs:
def test_list_connections(self, pf_client: PFClient, pfs_op: PFSOperations) -> None:
create_custom_connection(pf_client)
with check_activity_end_telemetry(activity_name="pf.connections.list"):
connections = pfs_op.list_connections().json
assert len(connections) >= 1
def test_get_connection(self, pf_client: PFClient, pfs_op: PFSOperations) -> None:
name = create_custom_connection(pf_client)
with check_activity_end_telemetry(activity_name="pf.connections.get"):
conn_from_pfs = pfs_op.get_connection(name=name, status_code=200).json
assert conn_from_pfs["name"] == name
assert conn_from_pfs["configs"]["api_base"] == "test"
assert "api_key" in conn_from_pfs["secrets"]
# get connection with secret
with check_activity_end_telemetry(activity_name="pf.connections.get"):
conn_from_pfs = pfs_op.get_connection_with_secret(name=name, status_code=200).json
assert not conn_from_pfs["secrets"]["api_key"].startswith("*")
def test_delete_connection(self, pf_client: PFClient, pfs_op: PFSOperations) -> None:
len_connections = len(pfs_op.list_connections().json)
name = create_custom_connection(pf_client)
with check_activity_end_telemetry(
expected_activities=[
{"activity_name": "pf.connections.delete", "first_call": True},
]
):
pfs_op.delete_connection(name=name, status_code=204)
len_connections_after = len(pfs_op.list_connections().json)
assert len_connections_after == len_connections
def test_list_connection_with_invalid_user(self, pfs_op: PFSOperations) -> None:
# TODO: should we record telemetry for this case?
with check_activity_end_telemetry(expected_activities=[]):
conn_from_pfs = pfs_op.connection_operation_with_invalid_user()
assert conn_from_pfs.status_code == 403
def test_get_connection_specs(self, pfs_op: PFSOperations) -> None:
with check_activity_end_telemetry(expected_activities=[]):
specs = pfs_op.get_connection_specs(status_code=200).json
assert len(specs) > 1
@pytest.mark.skipif(is_replay(), reason="connection provider test, skip in non-live mode.")
    def test_get_connection_by_provider(self, pfs_op, subscription_id, resource_group_name, workspace_name):
target = "promptflow._sdk._pf_client.Configuration.get_connection_provider"
provider_url_target = (
"promptflow._sdk.operations._local_azure_connection_operations."
"LocalAzureConnectionOperations._extract_workspace"
)
mock_provider_url = (subscription_id, resource_group_name, workspace_name)
with mock.patch(target) as mocked_config, mock.patch(provider_url_target) as mocked_provider_url:
mocked_config.return_value = "azureml"
mocked_provider_url.return_value = mock_provider_url
connections = pfs_op.list_connections(status_code=200).json
assert len(connections) > 0
connection = pfs_op.get_connection(name=connections[0]["name"], status_code=200).json
assert connection["name"] == connections[0]["name"]
target = "promptflow._sdk._pf_client.Configuration.get_config"
with tempfile.TemporaryDirectory() as temp:
config_file = Path(temp) / ".azureml" / "config.json"
config_file.parent.mkdir(parents=True, exist_ok=True)
with open(config_file, "w") as f:
config = {
"subscription_id": subscription_id,
"resource_group": resource_group_name,
"workspace_name": workspace_name,
}
json.dump(config, f)
with mock.patch(target) as mocked_config:
mocked_config.return_value = "azureml"
connections = pfs_op.list_connections_by_provider(working_dir=temp, status_code=200).json
assert len(connections) > 0
connection = pfs_op.get_connections_by_provider(
name=connections[0]["name"], working_dir=temp, status_code=200
).json
assert connection["name"] == connections[0]["name"]
                # this test checks 2 cases:
                # 1. if the working directory does not exist, the call should return 400
                # 2. the working directory is encoded and decoded correctly, so the previous call can pass validation
error_message = pfs_op.list_connections_by_provider(
working_dir=temp + "not exist", status_code=400
).json
assert error_message == {
"errors": {"working_directory": "Invalid working directory."},
"message": "Input payload validation failed",
}
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_general_apis.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import pytest
from promptflow._sdk._utils import get_promptflow_sdk_version
from ..utils import PFSOperations
@pytest.mark.e2etest
class TestGeneralAPIs:
def test_heartbeat(self, pfs_op: PFSOperations) -> None:
response = pfs_op.heartbeat()
assert response.status_code == 200
response_json = response.json
assert isinstance(response_json, dict)
assert "promptflow" in response_json
assert response_json["promptflow"] == get_promptflow_sdk_version()
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/test_cli.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import subprocess
import sys
from time import sleep
import pytest
import requests
from promptflow._sdk._service.entry import main
from promptflow._sdk._service.utils.utils import get_port_from_config, get_random_port, kill_exist_service
@pytest.mark.e2etest
class TestPromptflowServiceCLI:
def _run_pfs_command(self, *args):
"""Run a pfs command with the given arguments."""
origin_argv = sys.argv
try:
sys.argv = ["pfs"] + list(args)
main()
finally:
sys.argv = origin_argv
def _test_start_service(self, port=None, force=False):
command = f"pfs start --port {port}" if port else "pfs start"
if force:
command = f"{command} --force"
start_pfs = subprocess.Popen(command, shell=True)
# Wait for service to be started
sleep(5)
assert self._is_service_healthy()
start_pfs.terminate()
start_pfs.wait(10)
def _is_service_healthy(self, port=None):
port = port or get_port_from_config()
response = requests.get(f"http://localhost:{port}/heartbeat")
return response.status_code == 200
def test_start_service(self):
try:
# start pfs by pf.yaml
self._test_start_service()
# Start pfs by specified port
random_port = get_random_port()
self._test_start_service(port=random_port, force=True)
# Force start pfs
start_pfs = subprocess.Popen("pfs start", shell=True)
# Wait for service to be started
sleep(5)
self._test_start_service(force=True)
# previous pfs is killed
assert start_pfs.poll() is not None
finally:
port = get_port_from_config()
kill_exist_service(port=port)
def test_show_service_status(self, capsys):
with pytest.raises(SystemExit):
self._run_pfs_command("show-status")
start_pfs = subprocess.Popen("pfs start", shell=True)
# Wait for service to be started
sleep(5)
self._run_pfs_command("show-status")
output, _ = capsys.readouterr()
assert str(get_port_from_config()) in output
start_pfs.terminate()
start_pfs.wait(10)
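`_run_pfs_command` above drives the CLI entry point in-process by temporarily swapping `sys.argv`. A generic sketch of that technique — `demo_entry` stands in for a real entry point such as `promptflow._sdk._service.entry.main`:

```python
import sys


def run_cli(entry_point, *args, prog: str = "pfs"):
    # Swap sys.argv for the duration of the call, restoring it even on error.
    origin_argv = sys.argv
    try:
        sys.argv = [prog] + list(args)
        return entry_point()
    finally:
        sys.argv = origin_argv


def demo_entry():  # hypothetical stand-in for a CLI main() function
    return sys.argv[1:]


before = sys.argv
print(run_cli(demo_entry, "show-status"))
```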
# File: promptflow_repo/promptflow/src/promptflow/tests/sdk_pfs_test/e2etests/__init__.py
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/form_recognizer_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/FormRecognizerConnection.schema.json
name: my_form_recognizer_connection
type: form_recognizer
api_key: "<to-be-replaced>"
endpoint: "endpoint"
api_version: "2023-07-31"
api_type: Form Recognizer
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/openai_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: my_open_ai_connection
type: open_ai
api_key: "<to-be-replaced>"
organization: "org"
base_url: ""
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/azure_openai_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: my_azure_open_ai_connection
type: azure_open_ai # snake case
api_key: "<to-be-replaced>"
api_base: "aoai-api-endpoint"
api_type: "azure"
api_version: "2023-07-01-preview"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/.env
aaa=bbb
ccc=ddd
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/qdrant_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/QdrantConnection.schema.json
name: my_qdrant_connection
type: qdrant
api_key: "<to-be-replaced>"
api_base: "endpoint"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/update_custom_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: my_custom_connection
type: custom
configs:
key1: "new_value"
secrets: # must-have
  key2: "******" # Use the scrubbed value to verify that key2 is not updated
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/update_custom_strong_type_connection.yaml
name: my_custom_strong_type_connection
type: custom
custom_type: MyFirstConnection
module: my_tool_package.connections
package: test-custom-tools
package_version: 0.0.2
configs:
api_base: "new_value"
secrets: # must-have
  api_key: "******"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/update_azure_openai_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: my_azure_open_ai_connection
type: azure_open_ai # snake case
api_key: "******" # Use the scrubbed value to verify that api_key is not updated
api_base: "new_value"
api_type: "azure"
api_version: "2023-07-01-preview"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/cognitive_search_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CognitiveSearchConnection.schema.json
name: my_cognitive_search_connection
type: cognitive_search # snake case
api_key: "<to-be-replaced>"
api_base: "endpoint"
api_version: "2023-07-01-Preview"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/custom_strong_type_connection.yaml
name: my_custom_strong_type_connection
type: custom
custom_type: MyFirstConnection
module: my_tool_package.connections
package: test-custom-tools
package_version: 0.0.2
configs:
api_base: "This is my first connection."
secrets: # must-have
  api_key: "<to-be-replaced>"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/custom_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: my_custom_connection
type: custom
configs:
key1: "test1"
secrets: # must-have
key2: "test2"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/serp_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/SerpConnection.schema.json
name: my_serp_connection
type: serp
api_key: "<to-be-replaced>"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/weaviate_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/WeaviateConnection.schema.json
name: my_weaviate_connection
type: weaviate
api_key: "<to-be-replaced>"
api_base: "endpoint"
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/openai_connection_base_url.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: my_open_ai_connection
type: open_ai
api_key: "<to-be-replaced>"
organization: "org"
base_url: custom_base_url
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/connections/azure_content_safety_connection.yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureContentSafetyConnection.schema.json
name: my_azure_content_safety_connection
type: azure_content_safety # snake case
api_key: "<to-be-replaced>"
endpoint: "endpoint"
api_version: "2023-04-30-preview"
api_type: Content Safety
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool_and_aggregate/aggregate_num.py
import statistics
from typing import List
from promptflow import tool
@tool
def aggregate_num(num: List[int]) -> float:
return statistics.mean(num)
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool_and_aggregate/divide_num.py
from promptflow import tool
@tool
def divide_num(num: int) -> int:
    return int(num / 2)
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_python_tool_and_aggregate/flow.dag.yaml
inputs:
num:
type: int
outputs:
content:
type: string
reference: ${divide_num.output}
aggregate_content:
type: string
reference: ${aggregate_num.output}
nodes:
- name: divide_num
type: python
source:
type: code
path: divide_num.py
inputs:
num: ${inputs.num}
- name: aggregate_num
type: python
source:
type: code
path: aggregate_num.py
inputs:
num: ${divide_num.output}
aggregation: True
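In the DAG above, `aggregate_num` is marked `aggregation: True`, so the runtime collects each line's `${divide_num.output}` into a list before calling the aggregation node once. A rough, simplified sketch of that execution order (the real runtime does much more — batching, tracing, error handling):

```python
import statistics


def divide_num(num: int) -> int:  # per-line node
    return int(num / 2)


def aggregate_num(nums) -> float:  # aggregation node sees the whole batch
    return statistics.mean(nums)


line_outputs = [divide_num(n) for n in (4, 8, 10)]  # one call per input line
result = aggregate_num(line_outputs)                # one call per run
print(line_outputs, result)
```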
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_package_tool_with_custom_connection/data.jsonl
{"text": "Hello World!"}
# File: promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_package_tool_with_custom_connection/flow.dag.yaml
inputs:
text:
type: string
default: Hello!
outputs:
out:
type: string
reference: ${my_first_tool.output}
nodes:
- name: my_first_tool
type: python
source:
type: package
tool: my_tool_package.tools.my_tool_1.my_tool
inputs:
connection: custom_connection_3
input_text: ${inputs.text}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_non_english_input/data.jsonl | {"text": "Hello 123 日本語"}
{"text": "World 123 日本語"}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_non_english_input/hello.jinja2 | {{text}} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_non_english_input/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
text:
type: string
default: Hello 日本語
outputs:
output:
type: string
reference: ${hello_prompt.output}
nodes:
- name: hello_prompt
type: prompt
source:
type: code
path: hello.jinja2
inputs:
text: ${inputs.text} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/classify_with_llm.jinja2 | Your task is to classify a given url into one of the following types:
Movie, App, Academic, Channel, Profile, PDF or None based on the text content information.
The classification will be based on the url, the webpage text content summary, or both.
Here are a few examples:
{% for ex in examples %}
URL: {{ex.url}}
Text content: {{ex.text_content}}
OUTPUT:
{"category": "{{ex.category}}", "evidence": "{{ex.evidence}}"}
{% endfor %}
For a given URL: {{url}}, and text content: {{text_content}}.
Classify the above url to complete the category and indicate evidence.
OUTPUT:
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/convert_to_dict.py | import json
import time
from promptflow import tool
# use this to test the timeout
time.sleep(2)
@tool
def convert_to_dict(input_str: str):
try:
return json.loads(input_str)
except Exception as e:
print("input is not valid, error: {}".format(e))
return {"category": "None", "evidence": "None"}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/summarize_text_content__variant_1.jinja2 | Please summarize some keywords of this paragraph and have some details of each keywords.
Do not add any information that is not in the text.
Text: {{text}}
Summary:
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/flow.dag.yaml | inputs:
url:
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
outputs:
category:
reference: ${convert_to_dict.output.category}
evidence:
type: string
reference: ${convert_to_dict.output.evidence}
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
- name: summarize_text_content
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '128'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
text: ${fetch_text_content_from_url.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs:
- name: classify_with_llm
type: llm
source:
type: code
path: ./classify_with_llm.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '128'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
url: ${inputs.url}
examples: ${prepare_examples.output}
text_content: ${summarize_text_content.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
- name: convert_to_dict
type: python
source:
type: code
path: convert_to_dict.py
inputs:
input_str: ${classify_with_llm.output}
- name: open_source_llm_tool
type: custom_llm
source:
type: package_with_prompt
tool: promptflow.tools.azure_translator.get_translation
path: classify_with_llm.jinja2
node_variants:
summarize_text_content:
default_variant_id: variant_1
variants:
variant_0:
node:
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '128'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
text: ${fetch_text_content_from_url.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
variant_1:
node:
type: llm
source:
type: code
path: summarize_text_content__variant_1.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '256'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
text: ${fetch_text_content_from_url.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
additional_includes:
- ../external_files/fetch_text_content_from_url.py
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/prepare_examples.py | import time
from pathlib import Path
from promptflow import tool
@tool
def prepare_examples():
if not Path("summarize_text_content.jinja2").exists():
raise Exception("Cannot find summarize_text_content.jinja2")
return [
{
"url": "https://play.google.com/store/apps/details?id=com.spotify.music",
"text_content": "Spotify is a free music and podcast streaming app with millions of songs, albums, and original podcasts. It also offers audiobooks, so users can enjoy thousands of stories. It has a variety of features such as creating and sharing music playlists, discovering new music, and listening to popular and exclusive podcasts. It also has a Premium subscription option which allows users to download and listen offline, and access ad-free music. It is available on all devices and has a variety of genres and artists to choose from.",
"category": "App",
"evidence": "Both",
},
{
"url": "https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw",
"text_content": "NFL Sunday Ticket is a service offered by Google LLC that allows users to watch NFL games on YouTube. It is available in 2023 and is subject to the terms and privacy policy of Google LLC. It is also subject to YouTube's terms of use and any applicable laws.",
"category": "Channel",
"evidence": "URL",
},
{
"url": "https://arxiv.org/abs/2303.04671",
"text_content": "Visual ChatGPT is a system that enables users to interact with ChatGPT by sending and receiving not only languages but also images, providing complex visual questions or visual editing instructions, and providing feedback and asking for corrected results. It incorporates different Visual Foundation Models and is publicly available. Experiments show that Visual ChatGPT opens the door to investigating the visual roles of ChatGPT with the help of Visual Foundation Models.",
"category": "Academic",
"evidence": "Text content",
},
{
"url": "https://ab.politiaromana.ro/",
"text_content": "There is no content available for this text.",
"category": "None",
"evidence": "None",
},
]
raise Exception("Met error on purpose")
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_invalid/samples.json | [
{
"line_number": 0,
"variant_id": "variant_0",
"groundtruth": "App",
"prediction": "App"
}
]
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image/pick_an_image.py | import random
from promptflow.contracts.multimedia import Image
from promptflow import tool
@tool
def pick_an_image(image_1: Image, image_2: Image) -> Image:
if random.choice([True, False]):
return image_1
else:
return image_2
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image/flow.dag.yaml | inputs:
image:
type: image
default: logo.jpg
outputs:
output:
type: image
reference: ${python_node_2.output}
nodes:
- name: python_node
type: python
source:
type: code
path: pick_an_image.py
inputs:
image_1: ${inputs.image}
image_2: logo_2.png
- name: python_node_2
type: python
source:
type: code
path: pick_an_image.py
inputs:
image_1: ${python_node.output}
image_2: logo_2.png
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image/inputs.jsonl | {"image": {"data:image/png;path":"logo.jpg"}}
{"image": {"data:image/png;path":"logo_2.png"}} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/python_tool_with_simple_image/image_inputs/inputs.jsonl | {"image": {"data:image/png;path":"logo_1.png"}}
{"image": {"data:image/png;path":"logo_2.png"}} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/chat_flow/show_answer.py | from promptflow import tool
@tool
def show_answer(chat_answer: str):
print("print:", chat_answer)
return chat_answer
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/chat_flow/chat.jinja2 | system:
You are a helpful assistant.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/chat_flow/flow.dag.yaml | inputs:
chat_history:
type: list
question:
type: string
is_chat_input: true
default: What is ChatGPT?
outputs:
answer:
type: string
reference: ${show_answer.output}
is_chat_output: true
nodes:
- inputs:
deployment_name: gpt-35-turbo
max_tokens: "256"
temperature: "0.7"
chat_history: ${inputs.chat_history}
question: ${inputs.question}
name: chat_node
type: llm
source:
type: code
path: chat.jinja2
api: chat
provider: AzureOpenAI
connection: azure_open_ai_connection
- name: show_answer
type: python
source:
type: code
path: show_answer.py
inputs:
chat_answer: ${chat_node.output}
node_variants: {}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/concurrent_execution_flow/wait_short.py | import threading
from time import sleep
from promptflow import tool
@tool
def wait(**kwargs) -> int:
if kwargs["throw_exception"]:
raise Exception("test exception")
for i in range(10):
print(f"Thread {threading.get_ident()} write test log number {i}")
sleep(2)
return 0
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/concurrent_execution_flow/wait_long.py | from time import sleep
from promptflow import tool
@tool
def wait(**args) -> str:
sleep(5)
return str(args)
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/concurrent_execution_flow/flow.dag.yaml | name: TestPythonToolLongWaitTime
inputs:
input1:
type: bool
input2:
type: bool
input3:
type: bool
input4:
type: bool
outputs:
output:
type: int
reference: ${wait_long_1.output}
nodes:
- name: wait_1
type: python
source:
type: code
path: wait_short.py
inputs:
throw_exception: ${inputs.input1}
- name: wait_2
type: python
source:
type: code
path: wait_short.py
inputs:
throw_exception: ${inputs.input2}
- name: wait_3
type: python
source:
type: code
path: wait_short.py
inputs:
throw_exception: ${inputs.input3}
- name: wait_4
type: python
source:
type: code
path: wait_short.py
inputs:
throw_exception: ${inputs.input4}
- name: wait_long_1
type: python
source:
type: code
path: wait_long.py
inputs:
text_1: ${wait_1.output}
text_2: ${wait_2.output}
text_3: ${wait_3.output}
text_4: ${wait_4.output}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/concurrent_execution_flow/inputs.json | {
"input1": "False",
"input2": "False",
"input3": "False",
"input4": "False"
} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment/flow.dag.yaml | inputs:
key:
type: string
outputs:
output:
type: string
reference: ${print_env.output.value}
nodes:
- name: print_env
type: python
source:
type: code
path: print_env.py
inputs:
key: ${inputs.key}
environment:
python_requirements_txt: requirements
image: python:3.8-slim
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment/requirements | tensorflow | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment/print_env.py | import os
from promptflow import tool
@tool
def get_env_var(key: str):
print(os.environ.get(key))
# get from env var
return {"value": os.environ.get(key)}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_environment/.promptflow/flow.tools.json | {
"package": {},
"code": {
"print_env.py": {
"type": "python",
"inputs": {
"key": {
"type": [
"string"
]
}
},
"function": "get_env_var"
}
}
}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/my_python_tool.py | from promptflow import tool
@tool
def my_python_tool(idx: int) -> int:
return idx | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/samples_all_timeout.json | [{"idx": 5}, {"idx": 5}] | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/expected_status_summary.json | {
"__pf__.nodes.my_python_tool.completed": 3,
"__pf__.nodes.my_python_tool_with_failed_line.completed": 2,
"__pf__.nodes.my_python_tool_with_failed_line.failed": 1,
"__pf__.lines.completed": 2,
"__pf__.lines.failed": 1
} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/flow.dag.yaml | inputs:
idx:
type: int
outputs:
output:
type: int
reference: ${my_python_tool_with_failed_line.output}
nodes:
- name: my_python_tool
type: python
source:
type: code
path: my_python_tool.py
inputs:
idx: ${inputs.idx}
- name: my_python_tool_with_failed_line
type: python
source:
type: code
path: my_python_tool_with_failed_line.py
inputs:
idx: ${my_python_tool.output} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/my_python_tool_with_failed_line.py | from promptflow import tool
import time
@tool
def my_python_tool_with_failed_line(idx: int, mod=5) -> int:
if idx % mod == 0:
while True:
time.sleep(60)
return idx
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/one_line_of_bulktest_timeout/samples.json | [{"idx": 1}, {"idx": 4}, {"idx": 10}] | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/eval_flow_with_simple_image/merge_images.py | from promptflow import tool
@tool
def merge_images(image_1: list, image_2: list, image_3: list):
res = set()
res.add(image_1[0])
res.add(image_2[0])
res.add(image_3[0])
return list(res)
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/eval_flow_with_simple_image/pick_an_image.py | import random
from promptflow.contracts.multimedia import Image
from promptflow import tool
@tool
def pick_an_image(image_1: Image, image_2: Image) -> Image:
if random.choice([True, False]):
return image_1
else:
return image_2
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/eval_flow_with_simple_image/flow.dag.yaml | inputs:
image:
type: image
default: logo.jpg
outputs:
output:
type: image
reference: ${python_node.output}
nodes:
- name: python_node
type: python
source:
type: code
path: pick_an_image.py
inputs:
image_1: ${inputs.image}
image_2:
data:image/png;path: logo_2.png
- name: aggregate
type: python
source:
type: code
path: merge_images.py
inputs:
image_1:
- data:image/jpg;path: logo.jpg
image_2: ${inputs.image}
image_3: ${python_node.output}
aggregation: true
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/eval_flow_with_simple_image/inputs.jsonl | {"image": {"data:image/png;path":"logo.jpg"}}
{"image": {"data:image/png;path":"logo_2.png"}} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_ten_inputs/data.jsonl | {"input": "atom", "index": 0}
{"input": "atom", "index": 6}
{"input": "atom", "index": 12}
{"input": "atom", "index": 18}
{"input": "atom", "index": 24}
{"input": "atom", "index": 30}
{"input": "atom", "index": 36}
{"input": "atom", "index": 42}
{"input": "atom", "index": 48}
{"input": "atom", "index": 54}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_ten_inputs/python_node.py | from promptflow import tool
import time
@tool
def python_node(input: str, index: int) -> str:
time.sleep(index + 5)
return input
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_ten_inputs/flow.dag.yaml | id: template_standard_flow
name: Template Standard Flow
inputs:
input:
type: string
is_chat_input: false
index:
type: int
is_chat_input: false
outputs:
output:
type: string
reference: ${python_node.output}
nodes:
- name: python_node
type: python
source:
type: code
path: python_node.py
inputs:
index: ${inputs.index}
input: ${inputs.input}
use_variants: false
node_variants: {}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/simple_flow_with_ten_inputs/samples.json | [
{
"input": "atom",
"index": 0
},
{
"input": "atom",
"index": 6
},
{
"input": "atom",
"index": 12
},{
"input": "atom",
"index": 18
},{
"input": "atom",
"index": 24
},{
"input": "atom",
"index": 30
},{
"input": "atom",
"index": 36
},{
"input": "atom",
"index": 42
},{
"input": "atom",
"index": 48
},{
"input": "atom",
"index": 54
}
] | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/print_input_flow/print_input.py | from promptflow import tool
import sys
@tool
def print_inputs(
text: str = None,
):
print(f"STDOUT: {text}")
print(f"STDERR: {text}", file=sys.stderr)
return text
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/print_input_flow/flow.dag.yaml | inputs:
text:
type: string
outputs:
output_text:
type: string
reference: ${print_input.output}
nodes:
- name: print_input
type: python
source:
type: code
path: print_input.py
inputs:
text: ${inputs.text}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/print_input_flow/inputs.jsonl | {"text": "text_0"}
{"text": "text_1"}
{"text": "text_2"}
{"text": "text_3"}
{"text": "text_4"}
{"text": "text_5"}
{"text": "text_6"}
{"text": "text_7"}
{"text": "text_8"}
{"text": "text_9"} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/prompt_tool_with_duplicated_inputs/flow.dag.yaml | inputs:
text:
type: string
outputs:
output_prompt:
type: string
reference: ${prompt_tool_with_duplicated_inputs.output}
nodes:
- name: prompt_tool_with_duplicated_inputs
type: prompt
source:
type: code
path: prompt_with_duplicated_inputs.jinja2
inputs:
text: ${inputs.text} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/prompt_tool_with_duplicated_inputs/prompt_with_duplicated_inputs.jinja2 | {{template}} | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/web_classification_with_invalid_additional_include/flow.dag.yaml | inputs:
url:
type: string
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
evidence:
type: string
reference: ${convert_to_dict.output.evidence}
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
- name: summarize_text_content
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '128'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
text: ${fetch_text_content_from_url.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs: {}
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '128'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
url: ${inputs.url}
examples: ${prepare_examples.output}
text_content: ${summarize_text_content.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
- name: convert_to_dict
type: python
source:
type: code
path: convert_to_dict.py
inputs:
input_str: ${classify_with_llm.output}
node_variants:
summarize_text_content:
default_variant_id: variant_1
variants:
variant_0:
node:
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '128'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
text: ${fetch_text_content_from_url.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
variant_1:
node:
type: llm
source:
type: code
path: summarize_text_content__variant_1.jinja2
inputs:
deployment_name: gpt-35-turbo
suffix: ''
max_tokens: '256'
temperature: '0.2'
top_p: '1.0'
logprobs: ''
echo: 'False'
stop: ''
presence_penalty: '0'
frequency_penalty: '0'
best_of: '1'
logit_bias: ''
text: ${fetch_text_content_from_url.output}
provider: AzureOpenAI
connection: azure_open_ai_connection
api: completion
module: promptflow.tools.aoai
additional_includes:
- ../invalid/file/path | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/connection_as_input/conn_tool.py | from promptflow import tool
from promptflow.connections import AzureOpenAIConnection
@tool
def conn_tool(conn: AzureOpenAIConnection):
assert isinstance(conn, AzureOpenAIConnection)
return conn.api_base | 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/connection_as_input/flow.dag.yaml | inputs: {}
outputs:
output:
type: string
reference: ${conn_node.output}
nodes:
- name: conn_node
type: python
source:
type: code
path: conn_tool.py
inputs:
conn: azure_open_ai_connection
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_sys_inject/hello.py | import os
import sys
from promptflow import tool
sys.path.append(os.path.dirname(__file__))
from custom_lib.foo import foo
@tool
def my_python_tool(input1: str) -> str:
return foo(param=input1)
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_sys_inject/flow.dag.yaml | inputs:
text:
type: string
outputs:
output_prompt:
type: string
reference: ${echo_my_prompt.output}
nodes:
- inputs:
input1: ${inputs.text}
name: echo_my_prompt
type: python
source:
type: code
path: hello.py
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_sys_inject | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_sys_inject/custom_lib/foo.py | def foo(param: str) -> str:
return f"{param} from func foo"
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/two/mod_two.py | from promptflow import tool
@tool
def mod_two(number: int):
if number % 2 != 0:
raise Exception("cannot mod 2!")
return {"value": number}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/two/flow.dag.yaml | inputs:
number:
type: int
outputs:
output:
type: int
reference: ${mod_two.output.value}
nodes:
- name: mod_two
type: python
source:
type: code
path: mod_two.py
inputs:
number: ${inputs.number}
| 0 |
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/two | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/two/.promptflow/flow.tools.json | {
"code": {
"mod_two.py": {
"type": "python",
"inputs": {
"number": {
"type": [
"int"
]
}
},
"source": "mod_two.py",
"function": "mod_two"
}
},
"package": {
"promptflow.tools.aoai_gpt4v.AzureOpenAI.chat": {
"name": "Azure OpenAI GPT-4 Turbo with Vision",
"description": "Use Azure OpenAI GPT-4 Turbo with Vision to leverage AOAI vision ability.",
"type": "custom_llm",
"module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI",
"function": "chat",
"tool_state": "preview",
"icon": {
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg==",
"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC"
},
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting images and responding to questions about the image.\nRemember to provide accurate answers based on the information present in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection"
]
},
"deployment_name": {
"type": [
"string"
]
},
"temperature": {
"default": 1,
"type": [
"double"
]
},
"top_p": {
"default": 1,
"type": [
"double"
]
},
"max_tokens": {
"default": 512,
"type": [
"int"
]
},
"stop": {
"default": "",
"type": [
"list"
]
},
"presence_penalty": {
"default": 0,
"type": [
"double"
]
},
"frequency_penalty": {
"default": 0,
"type": [
"double"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.azure_content_safety.analyze_text": {
"module": "promptflow.tools.azure_content_safety",
"function": "analyze_text",
"inputs": {
"connection": {
"type": [
"AzureContentSafetyConnection"
]
},
"hate_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
},
"self_harm_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
},
"sexual_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
},
"text": {
"type": [
"string"
]
},
"violence_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
}
},
"name": "Content Safety (Text Analyze)",
"description": "Use Azure Content Safety to detect harmful content.",
"type": "python",
"deprecated_tools": [
"content_safety_text.tools.content_safety_text_tool.analyze_text"
],
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.embedding.embedding": {
"name": "Embedding",
"description": "Use Open AI's embedding model to create an embedding vector representing the input text.",
"type": "python",
"module": "promptflow.tools.embedding",
"function": "embedding",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection",
"OpenAIConnection"
]
},
"deployment_name": {
"type": [
"string"
],
"enabled_by": "connection",
"enabled_by_type": [
"AzureOpenAIConnection"
],
"capabilities": {
"completion": false,
"chat_completion": false,
"embeddings": true
},
"model_list": [
"text-embedding-ada-002",
"text-search-ada-doc-001",
"text-search-ada-query-001"
]
},
"model": {
"type": [
"string"
],
"enabled_by": "connection",
"enabled_by_type": [
"OpenAIConnection"
],
"enum": [
"text-embedding-ada-002",
"text-search-ada-doc-001",
"text-search-ada-query-001"
],
"allow_manual_entry": true
},
"input": {
"type": [
"string"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.openai_gpt4v.OpenAI.chat": {
"name": "OpenAI GPT-4V",
"description": "Use OpenAI GPT-4V to leverage vision ability.",
"type": "custom_llm",
"module": "promptflow.tools.openai_gpt4v",
"class_name": "OpenAI",
"function": "chat",
"tool_state": "preview",
"icon": {
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg==",
"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC"
},
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting images and responding to questions about the image.\nRemember to provide accurate answers based on the information present in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"inputs": {
"connection": {
"type": [
"OpenAIConnection"
]
},
"model": {
"enum": [
"gpt-4-vision-preview"
],
"allow_manual_entry": true,
"type": [
"string"
]
},
"temperature": {
"default": 1,
"type": [
"double"
]
},
"top_p": {
"default": 1,
"type": [
"double"
]
},
"max_tokens": {
"default": 512,
"type": [
"int"
]
},
"stop": {
"default": "",
"type": [
"list"
]
},
"presence_penalty": {
"default": 0,
"type": [
"double"
]
},
"frequency_penalty": {
"default": 0,
"type": [
"double"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.open_model_llm.OpenModelLLM.call": {
"name": "Open Model LLM",
"description": "Use an open model from the Azure Model catalog, deployed to an AzureML Online Endpoint for LLM Chat or Completion API calls.",
"icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"type": "custom_llm",
"module": "promptflow.tools.open_model_llm",
"class_name": "OpenModelLLM",
"function": "call",
"inputs": {
"endpoint_name": {
"type": [
"string"
],
"dynamic_list": {
"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"
},
"allow_manual_entry": true,
"is_multi_select": false
},
"deployment_name": {
"default": "",
"type": [
"string"
],
"dynamic_list": {
"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [
{
"name": "endpoint",
"type": [
"string"
],
"optional": true,
"reference": "${inputs.endpoint}"
}
]
},
"allow_manual_entry": true,
"is_multi_select": false
},
"api": {
"enum": [
"chat",
"completion"
],
"type": [
"string"
]
},
"temperature": {
"default": 1.0,
"type": [
"double"
]
},
"max_new_tokens": {
"default": 500,
"type": [
"int"
]
},
"top_p": {
"default": 1.0,
"advanced": true,
"type": [
"double"
]
},
"model_kwargs": {
"default": "{}",
"advanced": true,
"type": [
"object"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.serpapi.SerpAPI.search": {
"name": "Serp API",
"description": "Use Serp API to obtain search results from a specific search engine.",
"inputs": {
"connection": {
"type": [
"SerpConnection"
]
},
"engine": {
"default": "google",
"enum": [
"google",
"bing"
],
"type": [
"string"
]
},
"location": {
"default": "",
"type": [
"string"
]
},
"num": {
"default": "10",
"type": [
"int"
]
},
"query": {
"type": [
"string"
]
},
"safe": {
"default": "off",
"enum": [
"active",
"off"
],
"type": [
"string"
]
}
},
"type": "python",
"module": "promptflow.tools.serpapi",
"class_name": "SerpAPI",
"function": "search",
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"my_tool_package.tools.my_tool_1.my_tool": {
"function": "my_tool",
"inputs": {
"connection": {
"type": [
"CustomConnection"
],
"custom_type": [
"MyFirstConnection",
"MySecondConnection"
]
},
"input_text": {
"type": [
"string"
]
}
},
"module": "my_tool_package.tools.my_tool_1",
"name": "My First Tool",
"description": "This is my first tool",
"type": "python",
"package": "test-custom-tools",
"package_version": "0.0.2"
},
"my_tool_package.tools.my_tool_2.MyTool.my_tool": {
"class_name": "MyTool",
"function": "my_tool",
"inputs": {
"connection": {
"type": [
"CustomConnection"
],
"custom_type": [
"MySecondConnection"
]
},
"input_text": {
"type": [
"string"
]
}
},
"module": "my_tool_package.tools.my_tool_2",
"name": "My Second Tool",
"description": "This is my second tool",
"type": "python",
"package": "test-custom-tools",
"package_version": "0.0.2"
},
"my_tool_package.tools.my_tool_with_custom_strong_type_connection.my_tool": {
"function": "my_tool",
"inputs": {
"connection": {
"custom_type": [
"MyCustomConnection"
],
"type": [
"CustomConnection"
]
},
"input_param": {
"type": [
"string"
]
}
},
"module": "my_tool_package.tools.my_tool_with_custom_strong_type_connection",
"name": "Tool With Custom Strong Type Connection",
"description": "This is my tool with custom strong type connection.",
"type": "python",
"package": "test-custom-tools",
"package_version": "0.0.2"
}
}
}
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/three/mod_three.py

from promptflow import tool
@tool
def mod_three(number: int):
if number % 3 != 0:
raise Exception("cannot mod 3!")
return {"value": number}
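A standalone sketch of the `mod_three` tool's behavior (the `@tool` decorator from promptflow is omitted here so the snippet runs anywhere):

```python
def mod_three(number: int):
    # Mirrors mod_three.py above: reject any input not divisible by 3.
    if number % 3 != 0:
        raise Exception("cannot mod 3!")
    return {"value": number}

print(mod_three(9))  # {'value': 9}
```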
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/three/flow.dag.yaml

inputs:
number:
type: int
outputs:
output:
type: int
reference: ${mod_three.output.value}
nodes:
- name: mod_three
type: python
source:
type: code
path: mod_three.py
inputs:
number: ${inputs.number}
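The DAG above wires `${inputs.number}` into the node and `${mod_three.output.value}` into the flow output. A toy resolver for this `${...}` reference syntax (an illustration only, not promptflow's actual implementation) can be sketched as:

```python
import re

def resolve(ref: str, scope: dict):
    # Resolve a "${a.b.c}"-style reference against nested dicts;
    # anything that is not a reference is returned as a literal.
    m = re.fullmatch(r"\$\{(.+?)\}", ref)
    if m is None:
        return ref
    value = scope
    for key in m.group(1).split("."):
        value = value[key]
    return value

# Scope shaped like the flow above: flow inputs plus one node's output.
scope = {"inputs": {"number": 9}, "mod_three": {"output": {"value": 9}}}
print(resolve("${inputs.number}", scope))           # 9
print(resolve("${mod_three.output.value}", scope))  # 9
```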
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/three | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/mod-n/three/.promptflow/flow.tools.json

{
"code": {
"mod_three.py": {
"type": "python",
"inputs": {
"number": {
"type": [
"int"
]
}
},
"source": "mod_three.py",
"function": "mod_three"
}
},
"package": {
"promptflow.tools.aoai_gpt4v.AzureOpenAI.chat": {
"name": "Azure OpenAI GPT-4 Turbo with Vision",
"description": "Use Azure OpenAI GPT-4 Turbo with Vision to leverage AOAI vision ability.",
"type": "custom_llm",
"module": "promptflow.tools.aoai_gpt4v",
"class_name": "AzureOpenAI",
"function": "chat",
"tool_state": "preview",
"icon": {
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg==",
"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC"
},
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting images and responding to questions about the image.\nRemember to provide accurate answers based on the information present in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection"
]
},
"deployment_name": {
"type": [
"string"
]
},
"temperature": {
"default": 1,
"type": [
"double"
]
},
"top_p": {
"default": 1,
"type": [
"double"
]
},
"max_tokens": {
"default": 512,
"type": [
"int"
]
},
"stop": {
"default": "",
"type": [
"list"
]
},
"presence_penalty": {
"default": 0,
"type": [
"double"
]
},
"frequency_penalty": {
"default": 0,
"type": [
"double"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.azure_content_safety.analyze_text": {
"module": "promptflow.tools.azure_content_safety",
"function": "analyze_text",
"inputs": {
"connection": {
"type": [
"AzureContentSafetyConnection"
]
},
"hate_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
},
"self_harm_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
},
"sexual_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
},
"text": {
"type": [
"string"
]
},
"violence_category": {
"default": "medium_sensitivity",
"enum": [
"disable",
"low_sensitivity",
"medium_sensitivity",
"high_sensitivity"
],
"type": [
"string"
]
}
},
"name": "Content Safety (Text Analyze)",
"description": "Use Azure Content Safety to detect harmful content.",
"type": "python",
"deprecated_tools": [
"content_safety_text.tools.content_safety_text_tool.analyze_text"
],
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.embedding.embedding": {
"name": "Embedding",
"description": "Use Open AI's embedding model to create an embedding vector representing the input text.",
"type": "python",
"module": "promptflow.tools.embedding",
"function": "embedding",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection",
"OpenAIConnection"
]
},
"deployment_name": {
"type": [
"string"
],
"enabled_by": "connection",
"enabled_by_type": [
"AzureOpenAIConnection"
],
"capabilities": {
"completion": false,
"chat_completion": false,
"embeddings": true
},
"model_list": [
"text-embedding-ada-002",
"text-search-ada-doc-001",
"text-search-ada-query-001"
]
},
"model": {
"type": [
"string"
],
"enabled_by": "connection",
"enabled_by_type": [
"OpenAIConnection"
],
"enum": [
"text-embedding-ada-002",
"text-search-ada-doc-001",
"text-search-ada-query-001"
],
"allow_manual_entry": true
},
"input": {
"type": [
"string"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.openai_gpt4v.OpenAI.chat": {
"name": "OpenAI GPT-4V",
"description": "Use OpenAI GPT-4V to leverage vision ability.",
"type": "custom_llm",
"module": "promptflow.tools.openai_gpt4v",
"class_name": "OpenAI",
"function": "chat",
"tool_state": "preview",
"icon": {
"light": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAx0lEQVR4nJWSwQ2CQBBFX0jAcjgqXUgPJNiIsQQrIVCIFy8GC6ABDcGDX7Mus9n1Xz7zZ+fPsLPwH4bUg0dD2wMPcbR48Uxq4AKU4iSTDwZ1LhWXipN/B3V0J6hjBTvgLHZNonewBXrgDpzEvXSIjN0BE3AACmmF4kl5F6tNzcCoLpW0SvGovFvsb4oZ2AANcAOu4ka6axCcINN3rg654sww+CYsPD0OwjcozFNh/Qcd78tqVbCIW+n+Fky472Bh/Q6SYb1EEy8tDzd+9IsVPAAAAABJRU5ErkJggg==",
"dark": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAA2ElEQVR4nJXSzW3CQBAF4DUSTjk+Al1AD0ikESslpBIEheRALhEpgAYSWV8OGUublf/yLuP3PPNmdndS+gdwXZrYDmh7fGE/W+wXbaYd8IYm4rxJPnZ0boI3wZcdJxs/n+AwV7DFK7aFyfQdYIMLPvES8YJNf5yp4jMeeEYdWh38gXOR35YGHe5xabvQdsHv6PLi8qV6gycc8YH3iMfQu6Lh4ASr+F5Hh3XwVWnQYzUkVlX1nccplAb1SN6Y/sfgmlK64VS8wimldIv/0yj2QLkHizG0iWP4AVAfQ34DVQONAAAAAElFTkSuQmCC"
},
"default_prompt": "# system:\nAs an AI assistant, your task involves interpreting images and responding to questions about the image.\nRemember to provide accurate answers based on the information present in the image.\n\n# user:\nCan you tell me what the image depicts?\n\n",
"inputs": {
"connection": {
"type": [
"OpenAIConnection"
]
},
"model": {
"enum": [
"gpt-4-vision-preview"
],
"allow_manual_entry": true,
"type": [
"string"
]
},
"temperature": {
"default": 1,
"type": [
"double"
]
},
"top_p": {
"default": 1,
"type": [
"double"
]
},
"max_tokens": {
"default": 512,
"type": [
"int"
]
},
"stop": {
"default": "",
"type": [
"list"
]
},
"presence_penalty": {
"default": 0,
"type": [
"double"
]
},
"frequency_penalty": {
"default": 0,
"type": [
"double"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.open_model_llm.OpenModelLLM.call": {
"name": "Open Model LLM",
"description": "Use an open model from the Azure Model catalog, deployed to an AzureML Online Endpoint for LLM Chat or Completion API calls.",
"icon": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACgElEQVR4nGWSz2vcVRTFP/e9NzOZ1KDGohASslLEH6VLV0ak4l/QpeDCrfQPcNGliODKnVm4EBdBsIjQIlhciKW0ycKFVCSNbYnjdDLtmPnmO/nO9917XcxMkjYX3uLx7nnn3HOuMK2Nix4fP78ZdrYXVkLVWjf3l3B1B+HpcjzGFtmqa6cePz7/x0dnn1n5qhj3iBJPYREIURAJuCtpY8PjReDbrf9WG7H1fuefwQU9qKztTcMJT+PNnEFvjGVDBDlSsH6p/9MLzy6+NxwVqI8RAg4IPmWedMckdLYP6O6UpIaQfvyyXG012+e79/ZfHukoS1ISMT2hGTB1RkUmNgQ5QZ0w+a2VWDq73MbdEWmfnnv6UWe7oNzPaLapl5CwuLTXK9WUGBuCjqekzhP+z52ZXOrKMD3OJg0Hh778aiOuvpnYvp05d6GJO4iAO4QAe/eV36/X5LFRV4Zmn+AdkqlL8Vjp3oVioOz+WTPzzYEgsN+fgPLYyJVheSbPPVl2ikeGZRjtG52/8rHuaV9VOlpP2OtKyVndcRVCSqOhsvxa4vW359i6OuKdD+aP8Q4SYPdOzS/flGjt1JUSaMqZ5nwa1Y8qWb/Ud/eZZkHisYezEM0m+fcelDr8F1SqW2LNK6r1jXQwyLzy1hxvrLXZulry7ocL+FS6G4QIu3fG/Px1gdYeW7LIgXU2P/115TOA5G7e3Rmj2aS/m7l5pThiZzrCcE/d1XHzbln373nw7y6veeoUm5KCNKT/IPPwbiY1hYd/l5MIT65BMFt87sU4v9D7/JMflr44uV6hGh1+L4RCkg6z5iK2tAhNLeLsNGwYA4fDYnC/drvuuFxe86NV/x+Ut27g0FvykgAAAABJRU5ErkJggg==",
"type": "custom_llm",
"module": "promptflow.tools.open_model_llm",
"class_name": "OpenModelLLM",
"function": "call",
"inputs": {
"endpoint_name": {
"type": [
"string"
],
"dynamic_list": {
"func_path": "promptflow.tools.open_model_llm.list_endpoint_names"
},
"allow_manual_entry": true,
"is_multi_select": false
},
"deployment_name": {
"default": "",
"type": [
"string"
],
"dynamic_list": {
"func_path": "promptflow.tools.open_model_llm.list_deployment_names",
"func_kwargs": [
{
"name": "endpoint",
"type": [
"string"
],
"optional": true,
"reference": "${inputs.endpoint}"
}
]
},
"allow_manual_entry": true,
"is_multi_select": false
},
"api": {
"enum": [
"chat",
"completion"
],
"type": [
"string"
]
},
"temperature": {
"default": 1.0,
"type": [
"double"
]
},
"max_new_tokens": {
"default": 500,
"type": [
"int"
]
},
"top_p": {
"default": 1.0,
"advanced": true,
"type": [
"double"
]
},
"model_kwargs": {
"default": "{}",
"advanced": true,
"type": [
"object"
]
}
},
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"promptflow.tools.serpapi.SerpAPI.search": {
"name": "Serp API",
"description": "Use Serp API to obtain search results from a specific search engine.",
"inputs": {
"connection": {
"type": [
"SerpConnection"
]
},
"engine": {
"default": "google",
"enum": [
"google",
"bing"
],
"type": [
"string"
]
},
"location": {
"default": "",
"type": [
"string"
]
},
"num": {
"default": "10",
"type": [
"int"
]
},
"query": {
"type": [
"string"
]
},
"safe": {
"default": "off",
"enum": [
"active",
"off"
],
"type": [
"string"
]
}
},
"type": "python",
"module": "promptflow.tools.serpapi",
"class_name": "SerpAPI",
"function": "search",
"package": "promptflow-tools",
"package_version": "1.0.2"
},
"my_tool_package.tools.my_tool_1.my_tool": {
"function": "my_tool",
"inputs": {
"connection": {
"type": [
"CustomConnection"
],
"custom_type": [
"MyFirstConnection",
"MySecondConnection"
]
},
"input_text": {
"type": [
"string"
]
}
},
"module": "my_tool_package.tools.my_tool_1",
"name": "My First Tool",
"description": "This is my first tool",
"type": "python",
"package": "test-custom-tools",
"package_version": "0.0.2"
},
"my_tool_package.tools.my_tool_2.MyTool.my_tool": {
"class_name": "MyTool",
"function": "my_tool",
"inputs": {
"connection": {
"type": [
"CustomConnection"
],
"custom_type": [
"MySecondConnection"
]
},
"input_text": {
"type": [
"string"
]
}
},
"module": "my_tool_package.tools.my_tool_2",
"name": "My Second Tool",
"description": "This is my second tool",
"type": "python",
"package": "test-custom-tools",
"package_version": "0.0.2"
},
"my_tool_package.tools.my_tool_with_custom_strong_type_connection.my_tool": {
"function": "my_tool",
"inputs": {
"connection": {
"custom_type": [
"MyCustomConnection"
],
"type": [
"CustomConnection"
]
},
"input_param": {
"type": [
"string"
]
}
},
"module": "my_tool_package.tools.my_tool_with_custom_strong_type_connection",
"name": "Tool With Custom Strong Type Connection",
"description": "This is my tool with custom strong type connection.",
"type": "python",
"package": "test-custom-tools",
"package_version": "0.0.2"
}
}
}
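Registries like the `flow.tools.json` files above repeat a small set of metadata keys per package tool. A quick sanity check for those keys (key names taken from the dumps above, not an official schema) might look like:

```python
import json

# Keys every package-tool entry in the dumps above appears to carry.
REQUIRED = {"name", "description", "type", "package", "package_version"}

def check_package_tools(tools_json: str):
    # Return {tool_id: [missing keys]} for any incomplete entries.
    meta = json.loads(tools_json)
    problems = {}
    for tool_id, spec in meta.get("package", {}).items():
        missing = REQUIRED - spec.keys()
        if missing:
            problems[tool_id] = sorted(missing)
    return problems

sample = json.dumps({
    "code": {},
    "package": {
        "promptflow.tools.serpapi.SerpAPI.search": {
            "name": "Serp API",
            "description": "Use Serp API to obtain search results.",
            "type": "python",
            "package": "promptflow-tools",
            "package_version": "1.0.2",
        },
        "my_pkg.broken_tool": {"name": "Broken"},
    },
})
print(check_package_tools(sample))
```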
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/test_langchain_traces.py

import os
from langchain.chat_models import AzureChatOpenAI
from langchain_core.messages import HumanMessage
from langchain.agents.agent_types import AgentType
from langchain.agents.initialize import initialize_agent
from langchain.agents.load_tools import load_tools
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection
from promptflow.integrations.langchain import PromptFlowCallbackHandler
@tool
def test_langchain_traces(question: str, conn: AzureOpenAIConnection):
os.environ["AZURE_OPENAI_API_KEY"] = conn.api_key
os.environ["OPENAI_API_VERSION"] = conn.api_version
os.environ["AZURE_OPENAI_ENDPOINT"] = conn.api_base
model = AzureChatOpenAI(
temperature=0.7,
azure_deployment="gpt-35-turbo",
)
tools = load_tools(["llm-math"], llm=model)
    # Please keep using an agent here to enable the customized callback handler
agent = initialize_agent(
tools, model, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False,
callbacks=[PromptFlowCallbackHandler()]
)
message = HumanMessage(
content=question
)
try:
return agent.run(message)
except Exception as e:
return str(e)
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/code_first_input.csv

question
What is 2 to the 10th power?
What is the sum of 2 and 2?
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/data_inputs.json

{
"data": "code_first_input.csv"
}
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/flow.dag.yaml

inputs:
question:
type: string
outputs:
output:
type: string
reference: ${test_langchain_traces.output}
nodes:
- name: test_langchain_traces
type: python
source:
type: code
path: test_langchain_traces.py
inputs:
question: ${inputs.question}
conn: azure_open_ai_connection
promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows | promptflow_repo/promptflow/src/promptflow/tests/test_configs/flows/flow_with_langchain_traces/inputs.jsonl

{"question": "What is 2 to the 10th power?"}
{"question": "What is the sum of 2 and 2?"}
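The `inputs.jsonl` batch inputs above are plain JSON Lines; a minimal reader sketch:

```python
import json

def read_jsonl(text: str):
    # Parse one JSON object per non-empty line, as in inputs.jsonl above.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

rows = read_jsonl(
    '{"question": "What is 2 to the 10th power?"}\n'
    '{"question": "What is the sum of 2 and 2?"}\n'
)
print([r["question"] for r in rows])
```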