zulipbot
=========
The unofficial Python API for Zulip bots. A Zulip email and API key are needed to use this.
To get your bot's Zulip email and API key, go to
your Settings page and scroll down to the Your Bots section.
Usage
-----
To install: ``pip install zulipbot``
Initialization
^^^^^^^^^^^^^^
Identify your Zulip bot's email and API key, and make a new Bot object.
.. code-block:: python
>>> import zulipbot
>>> email = '[email protected]'
>>> key = 'spammyeggs'
>>> my_bot = zulipbot.Bot(email, key)
Bot Methods
^^^^^^^^^^^
For an example of how the methods below work together in a program, check out `example.py`_.
.. _example.py: https://github.com/stephsamson/zulipbot/blob/master/example.py
``subscribe_to_all_streams`` - Subscribes to all Zulip streams.
``send_private_message`` and ``send_stream_message`` both have the same parameters: ``message_info`` and ``content``, i.e. the function signature for the two aforementioned methods is ``send_[private|stream]_message(message_info, content)``.
``message_info`` is the message meta-info dictionary that you acquire from the function callback that processes messages. ``content`` is the response that you want your bot to make.
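For illustration, here is a minimal (hypothetical) sketch of how these methods might be called together; ``message_info`` would come from your message-processing callback.

.. code-block:: python

    >>> my_bot.subscribe_to_all_streams()
    >>> # message_info: dict obtained from your message callback
    >>> my_bot.send_stream_message(message_info, 'Hello, stream!')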
MIT License
Copyright (c) 2020 Derrick Gilland
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
<p align="center">
<a href="https://github.com/daleal/zum">
<img src="https://zum.daleal.dev/assets/images/zum-300x300.png">
</a>
</p>
<h1 align="center">Zum</h1>
<p align="center">
<em>
Stop writing scripts to interact with your APIs. Call them as CLIs instead.
</em>
</p>
<p align="center">
<a href="https://pypi.org/project/zum" target="_blank">
<img src="https://img.shields.io/pypi/v/zum?label=version&logo=python&logoColor=%23fff&color=306998" alt="PyPI - Version">
</a>
<a href="https://github.com/daleal/zum/actions?query=workflow%3Atests" target="_blank">
<img src="https://img.shields.io/github/workflow/status/daleal/zum/tests?label=tests&logo=python&logoColor=%23fff" alt="Tests">
</a>
<a href="https://codecov.io/gh/daleal/zum" target="_blank">
<img src="https://img.shields.io/codecov/c/gh/daleal/zum?label=coverage&logo=codecov&logoColor=ffffff" alt="Coverage">
</a>
<a href="https://github.com/daleal/zum/actions?query=workflow%3Alinters" target="_blank">
<img src="https://img.shields.io/github/workflow/status/daleal/zum/linters?label=linters&logo=github" alt="Linters">
</a>
</p>
**Zum** (German word roughly meaning "_to the_" or "_to_" depending on the context, pronounced `/tsʊm/`) is a tool that lets you describe a web API using a [TOML](https://toml.io/en/) file and then interact with that API using your command line. This means that **the days of writing custom scripts to help you interact and develop each of your APIs are over**. Just create a `zum.toml`, describe your API and forget about maintaining more code!
## Why Zum?
While there are tools out there with goals similar to `zum`, the scopes are quite different. The common contenders are [OpenAPI](http://spec.openapis.org/oas/v3.0.3)-based tools (like [SwaggerUI](https://swagger.io/tools/swagger-ui/)) and [cURL](https://curl.se/). To me, using an OpenAPI-based documentation tool is essential on any large enough API, but the description method is **very** verbose and quite complex, so oftentimes it is added only once the API already has quite a few endpoints. On the other hand, cURL gets very verbose and tedious very fast when querying APIs, so I don't like to use it when developing my APIs. As a comparison, here's a `curl` command to query a local endpoint with a JSON body:
```sh
curl --header "Content-Type: application/json" \
--request POST \
--data '{"name": "Dani", "city": "Santiago"}' \
http://localhost:8000/living-beings
```
And here is the `zum` command to achieve the same result:
```sh
zum create application/json Dani Santiago
```
Now, imagine having to run this command hundreds of times during API development, changing only the values in the request body, for example. You can see how using cURL is **not ideal**.
The [complete documentation](https://zum.daleal.dev/docs/) is available on the [official website](https://zum.daleal.dev/).
## Installation
Install using pip!
```sh
pip install zum
```
## Usage
### Basic Usage
The basic principle is simple:
1. Describe your API using a `zum.toml` file.
2. Use the `zum` CLI to interact with your API.
We get more _in-depth_ with how to structure the `zum.toml` file and how to use the `zum` CLI on [the complete documentation](https://zum.daleal.dev/docs/), but for now let's see a very basic example. Imagine that you are developing an API that gets the URL of [a song on YouTube](https://youtu.be/6xlsR1c8yh4). This API, for now, has only 1 endpoint: `GET /song` (clearly a [WIP](https://www.urbandictionary.com/define.php?term=Wip)). To describe your API, you would have to write a `zum.toml` file similar to this one:
```toml
[metadata]
server = "http://localhost:8000"
[endpoints.dada]
route = "/song"
method = "get"
```
Now, to get your song's URL, all you need to do is to run:
```sh
zum dada
```
Notice that, after the `zum` command, we passed an argument, which in this case was `dada`. This argument tells `zum` that it should interact with the endpoint described in the `dada` endpoint section, denoted by the header `[endpoints.dada]`. As a rule, to access an endpoint described by the header `[endpoints.{my-endpoint-name}]`, you will call the `zum` command with the `{my-endpoint-name}` argument:
```sh
zum {my-endpoint-name}
```
### `params`, `headers` and `body`
**Beware!** There are some nuances on these attribute definitions, so reading [the complete documentation](https://zum.daleal.dev/docs/) is **highly recommended**.
#### The `params` of an endpoint
In the previous example, the `route` was static, which means that `zum` will **always** query the same route. For some things this might not be the best of ideas (for example, for querying entities on REST APIs), and you might want to interpolate a value into the `route` string. Let's say that there's a collection of songs, and you wanted to get the song with `id` _57_. Your endpoint definition should look like the following:
```toml
[endpoints.get-song]
route = "/songs/{id}"
method = "get"
params = ["id"]
```
As you can see, the element inside `params` matches the element inside the brackets in the `route`. This means that whatever parameter you pass to the `zum` CLI will be interpolated into the `route` _on demand_:
```sh
zum get-song 57
```
Now, `zum` will send a `GET` HTTP request to `http://localhost:8000/songs/57`. Pretty cool!
#### The `headers` of an endpoint
The `headers` are defined **exactly** the same as the `params`. Let's see a small example to illustrate how to use them. Imagine that you have an API that requires [JWT](https://jwt.io/introduction) authorization to `GET` the songs of its catalog. Let's define that endpoint:
```toml
[endpoints.get-authorized-catalog]
route = "/catalog"
method = "get"
headers = ["Authorization"]
```
Now, to acquire the catalog, we would need to run:
```sh
zum get-authorized-catalog "Bearer super-secret-token"
```
> ⚠ **Warning**: Notice that, for the first time, we surrounded something with quotes on the CLI. Without the quotes, the console has no way of knowing whether you want to pass one parameter with a space in the middle or multiple parameters, so it defaults to treating the words as multiple parameters. Surrounding the string with quotes makes the whole string be interpreted as a single parameter, space included. This will come in handy in future examples, so **keep it in mind**.
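For instance (a sketch of the shell-quoting behavior described above):

```sh
# Interpreted as two parameters: "Bearer" and "super-secret-token"
zum get-authorized-catalog Bearer super-secret-token

# Interpreted as one parameter: "Bearer super-secret-token"
zum get-authorized-catalog "Bearer super-secret-token"
```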
The `zum get-authorized-catalog` command above will send a `GET` request to `http://localhost:8000/catalog` with the following headers:
```json
{
"Authorization": "Bearer super-secret-token"
}
```
And now you have your authorization-protected music catalog!
#### The `body` of an endpoint
Just like `params` and `headers`, the `body` (the body of the request) gets defined as an array:
```toml
[endpoints.create-living-being]
route = "/living-beings"
method = "post"
body = ["name", "city"]
```
To run this endpoint, you just need to run:
```sh
zum create-living-being Dani Santiago
```
This will send a `POST` request to `http://localhost:8000/living-beings` with the following request body:
```json
{
"name": "Dani",
"city": "Santiago"
}
```
**Notice that you can also cast the parameters to different types**. You can read more about this on the complete documentation's section about [the request body](https://zum.daleal.dev/docs/config-file.html#the-body-of-an-endpoint).
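For instance, a typed `body` attribute is declared inline in the TOML array. This sketch assumes an illustrative `age` field; the syntax mirrors the "Combining" example below:

```toml
body = [
    "name",
    { name = "age", type = "integer" }
]
```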
#### Combining `params`, `headers` and `body`
Of course, sometimes you need to use some `params`, some `headers` **and** a `body`. For example, if you wanted to create a song inside an authorization-protected album (a _nested entity_), you would need to use the album's id as a `param`, the "Authorization" key inside the `headers` to get the authorization and the new song's data as the `body`. For this example, the song has a `name` (which is a string) and a `duration` in seconds (which is an integer). Let's describe this situation!
```toml
[endpoints.create-song]
route = "/albums/{id}/songs"
method = "post"
params = ["id"]
headers = ["Authorization"]
body = [
"name",
{ name = "duration", type = "integer" }
]
```
Now, you can call the endpoint using:
```sh
zum create-song 8 "Bearer super-secret-token" "Con Altura" 161
```
This will call `POST /albums/8/songs` with the following headers:
```json
{
"Authorization": "Bearer super-secret-token"
}
```
And the following request body:
```json
{
"name": "Con Altura",
"duration": 161
}
```
As you can probably tell, `zum` receives the `params` first on the CLI, then the `headers` and then the `body`. In _pythonic_ terms, what `zum` does is that it kind of _unpacks_ the three arrays consecutively, something like the following:
```py
arguments = [*params, *headers, *body]
zum(arguments)
```
## Developing
Clone the repository:
```sh
git clone https://github.com/daleal/zum.git
cd zum
```
Recreate environment:
```sh
make get-poetry
make build-env
```
Run the linters:
```sh
make black flake8 isort mypy pylint
```
Run the tests:
```sh
make tests
```
## Resources
- [Official Website](https://zum.daleal.dev/)
- [Issue Tracker](https://github.com/daleal/zum/issues/)
- [Contributing Guidelines](.github/CONTRIBUTING.md)
Zumanji
=======
A web application for handling performance test results with heavy GitHub integration.
Integrates with `nose-performance <https://github.com/disqus/nose-performance>`_ to report and archive results.
See the included application in ``example/`` for a sample setup.
.. note:: Zumanji is designed to work with GitHub, and will likely explode into a million little errors if your repo
does not exist on GitHub.
Usage
-----
Import JSON data from your test runner::
$ python manage.py import_performance_json <json file> --project=disqus/gargoyle
Goals
-----
Zumanji's mission is to be an extensible build reporting interface. Its primary target is improved
statistics around your tests, but it may evolve into a generic build dashboard (as it has all of the
required features).
It should report things quickly and accurately, allowing you to see precisely what problems may exist within
a particular build that weren't there before (whether that's a failing condition or a regression). The
system will also cater to alerting via trigger metrics (e.g. alert me when the number of SQL calls
exceeds 15 in these tests).
Screenshots
-----------
Aggregate Report
~~~~~~~~~~~~~~~~
.. image:: https://github.com/disqus/zumanji/raw/master/screenshots/branch.png
Individual Test
~~~~~~~~~~~~~~~
.. image:: https://github.com/disqus/zumanji/raw/master/screenshots/leaf.png
Caveats
-------
This is still an evolving prototype and APIs are not stable, nor is the implementation the most efficient it could be.
<p align="center"><img src="https://raw.githubusercontent.com/zumcoin/zum-assets/master/ZumCoin/zumcoin_logo_design/3d_green_lite_bg/ZumLogo_800x200px_lite_bg.png" width="400"></p>
# ZUM Services Python API Interface
This wrapper allows you to easily interact with the [ZUM Services](https://zum.services) 1.0.1 API to quickly develop applications that interact with the [ZumCoin](https://zumcoin.org) Network.
# Table of Contents
1. [Installation](#installation)
2. [Initialization](#initialization)
3. [Documentation](#documentation)
    1. [Methods](#methods)
# Installation
```bash
pip install zumservices-api-py
```
# Initialization
```python
import os
from ZUMservices import ZS
os.environ["ZUM_SERVICES_TOKEN"] = "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoieW8iLCJhcHBJZCI6MjAsInVzZXJJZCI6MiwicGVybWlzc2lvbnMiOlsiYWRkcmVzczpuZXciLCJhZGRyZXNzOnZpZXciLCJhZGRyZXNzOmFsbCIsImFkZHJlc3M6c2NhbiIsImFkZHJlc3M6ZGVsZXRlIiwidHJhbnNmZXI6bmV3IiwidHJhbnNmZXI6dmlldyJdLCJpYXQiOjE1Mzk5OTQ4OTgsImV4cCI6MTU3MTU1MjQ5OCwiYXVkIjoiZ2FuZy5jb20iLCJpc3MiOiJUUlRMIFNlcnZpY2VzIiwianRpIjoiMjIifQ.KkKyg18aqZfLGMGTnUDhYQmVSUoocrr4CCdLBm2K7V87s2T-3hTtM2MChJB2UdbDLWnf58GiMa_t8xp9ZjZjIg"
os.environ["ZUM_SERVICES_TIMEOUT"] = 2000
```
Generate a token with the ZUM Services [Dashboard](https://zum.services) and store it as the variable ``ZUM_SERVICES_TOKEN`` in your os environment, along with ``ZUM_SERVICES_TIMEOUT`` if you wish to change the default timeout of 2000 (environment variable values must be strings).
# Documentation
API documentation is available at https://zum.services/documentation
## Methods
### createAddress()
Create a new ZUM address
```python
ZS.createAddress()
```
### getAddress(address)
Get address details by address
```python
ZS.getAddress("Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC")
```
### deleteAddress(address)
Delete a selected ZUM address
```python
ZS.deleteAddress("Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC")
```
### getAddresses()
View all addresses.
```python
ZS.getAddresses()
```
### scanAddress(address, blockIndex)
Scan an address for transactions between a 100 block range starting from the specified blockIndex.
```python
ZS.scanAddress("Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC", 899093)
```
### getAddressKeys(address)
Get the public and secret spend key of an address.
```python
ZS.getAddressKeys("Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC")
```
### integrateAddress(address, paymentId)
Create an integrated address with an address and payment ID.
```python
ZS.integrateAddress("Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC", "7d89a2d16365a1198c46db5bbe1af03d2b503a06404f39496d1d94a0a46f8804")
```
### getIntegratedAddresses(address)
Get all integrated addresses by address.
```python
ZS.getIntegratedAddresses("Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC")
```
### getFee(amount)
Calculate the ZUM Services fee for an amount specified in ZUM, to two decimal places.
```python
ZS.getFee(1.23)
```
### createTransfer(sender, receiver, amount, fee, paymentId, extra)
Send a ZUM transaction from an address, with the amount specified to two decimal places.
```python
ZS.createTransfer(
"Zum1yfSrdpfiSNG5CtYmckgpGe1FiAc9gLCEZxKq29puNCX92DUkFYFfEGKugPS6EhWaJXmhAzhePGs3jXvNgK4NbWXG4yaGBHC",
"Zum1yhbRwHsXj19c1hZgFzgxVcWDywsJcDKURDud83MqMNKoDTvKEDf6k7BoHnfCiPbj4kY2arEmQTwiVmhoELPv3UKhjYjCMcm",
1234.56,
1.23,
"7d89a2d16365a1198c46db5bbe1af03d2b503a06404f39496d1d94a0a46f8804",
"3938f915a11582f62d93f82f710df9203a029f929fd2f915f2701d947f920f99"
)
```
#### You can leave the last two fields (paymentId and extra) blank.
### getTransfer(address)
Get transaction details by transaction hash.
```python
ZS.getTransfer("EohMUzR1DELyeQM9RVVwpmn5Y1DP0lh1b1ZpLQrfXQsgtvGHnDdJSG31nX2yESYZ")
```
### getWallet()
Get wallet container info and health check.
```python
ZS.getWallet()
```
### getStatus()
Get the current status of the ZUM Services infrastructure.
```python
ZS.getStatus()
```
# License
```
Copyright (c) 2019 ZumCoin Development Team
Please see the included LICENSE file for more information.
```
from . import _get
from . import _post
from . import _delete
import json
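# NOTE: _get, _post and _delete (imported above) are assumed to be this
# package's thin HTTP helper functions for the ZUM Services REST API.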
class ZS(object):
def __init__(self, id):
self.id = id
# Create Address
def createAddress():
data = {}
response = _post('address', data)
return response
# Get Address
def getAddress(address):
response = _get('address/' + address)
return json.dumps(response)
# Delete Address
def deleteAddress(address):
response = _delete('address/' + address)
return json.dumps(response)
# Get Addresses
def getAddresses():
response = _get('address/all')
return json.dumps(response)
# Scan Address
def scanAddress(address, blockIndex):
response = _get('address/scan/' + address + '/' + str(blockIndex))
return response
# Get Address Keys
    def getAddressKeys(address):
response = _get('address/keys/' + address)
return json.dumps(response)
    # Integrate Address
    def integrateAddress(address, paymentId):
        # NOTE: the payload field names below are an assumption based on
        # the method signature documented in the README; adjust them if
        # the API expects different names.
        data = {
            'address': address,
            'paymentId': paymentId
        }
        response = _post('address/integrate', data)
        return json.dumps(response)
# Get Integrated Addresses
def getIntegratedAddresses(address):
response = _get('address/integrate/' + address)
return json.dumps(response)
# Get Fee
def getFee(amount):
response = _get('transfer/fee/' + str(amount))
return response
# Create Transfer
def createTransfer(from_address, to_address, amount, fee, paymentId=None, extra=None):
if paymentId is None:
paymentId = ''
if extra is None:
extra = ''
data = {
'from': from_address,
'to': to_address,
'amount': float(amount),
'fee': float(fee),
'paymentId': paymentId,
'extra': extra
}
response = _post('transfer', data)
return response
# Get Transfer
def getTransfer(transactionHash):
response = _get('transfer/' + transactionHash)
return json.dumps(response)
    # Get Wallet
    def getWallet():
        # NOTE: the 'wallet' endpoint path is assumed from the README's
        # getWallet() description.
        response = _get('wallet')
        return json.dumps(response)
# Get Status
def getStatus():
response = _get('status')
        return response
==========
zun-ui
==========
Zun UI
* Free software: Apache license
* Source: https://opendev.org/openstack/zun-ui
* Bugs: https://bugs.launchpad.net/zun-ui
Manual Installation
-------------------
Install according to `Zun UI documentation <https://docs.openstack.org/zun-ui/latest/install/index.html>`_.
Enabling in DevStack
--------------------
Add this repo as an external repository into your ``local.conf`` file::
[[local|localrc]]
enable_plugin zun-ui https://github.com/openstack/zun-ui
============
Installation
============
Manual Installation
-------------------
Install Horizon according to `Horizon documentation <https://docs.openstack.org/horizon/>`_.
.. note::
If your Horizon was installed by python3, Zun UI needs to be installed by
python3 as well. For example, replace ``pip`` with ``pip3`` and replace
``python`` with ``python3`` for commands below.
Clone Zun UI from git repository, checkout branch same as Horizon and Zun, and install it::
git clone https://github.com/openstack/zun-ui
cd zun-ui
git checkout <branch which you want>
pip install .
Enable Zun UI in your Horizon::
cp zun_ui/enabled/* <path to your horizon>/openstack_dashboard/local/enabled/
Run collectstatic command::
python <path to your horizon>/manage.py collectstatic
Compress static files (if enabled)::
python <path to your horizon>/manage.py compress
Then restart your Horizon.
After restarting Horizon, force-reload the dashboard using [Ctrl + F5] or similar in your browser.
Enabling in DevStack
--------------------
Add this repo as an external repository into your ``local.conf`` file::
[[local|localrc]]
enable_plugin zun-ui https://github.com/openstack/zun-ui
=============
Configuration
=============
Image for Cloud Shell
---------------------
The image for Cloud Shell is set to `gbraad/openstack-client:alpine`
by default. If you want to use another image, edit the `CLOUD_SHELL_IMAGE`
variable in the file `_0330_cloud_shell_settings.py.sample`, copy it
to `horizon/openstack_dashboard/local/local_settings.d/_0330_cloud_shell_settings.py`,
and restart Horizon.
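For example, the copied settings file would contain a single line like this
(the value shown is just the default, as an illustration)::

    CLOUD_SHELL_IMAGE = "gbraad/openstack-client:alpine"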
For more configurations
-----------------------
See
`Configuration Guide
<https://docs.openstack.org/horizon/latest/configuration/index.html>`__
in the Horizon documentation.
from __future__ import print_function

import optparse
import os
import subprocess
import sys
class InstallVenv(object):
def __init__(self, root, venv, requirements,
test_requirements, py_version,
project):
self.root = root
self.venv = venv
self.requirements = requirements
self.test_requirements = test_requirements
self.py_version = py_version
self.project = project
def die(self, message, *args):
print(message % args, file=sys.stderr)
sys.exit(1)
def check_python_version(self):
if sys.version_info < (2, 6):
self.die("Need Python Version >= 2.6")
def run_command_with_code(self, cmd, redirect_output=True,
check_exit_code=True):
"""Runs a command in an out-of-process shell.
Returns the output of that command. Working directory is self.root.
"""
if redirect_output:
stdout = subprocess.PIPE
else:
stdout = None
proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout)
output = proc.communicate()[0]
if check_exit_code and proc.returncode != 0:
self.die('Command "%s" failed.\n%s', ' '.join(cmd), output)
return (output, proc.returncode)
def run_command(self, cmd, redirect_output=True, check_exit_code=True):
return self.run_command_with_code(cmd, redirect_output,
check_exit_code)[0]
def get_distro(self):
if (os.path.exists('/etc/fedora-release') or
os.path.exists('/etc/redhat-release')):
return Fedora(
self.root, self.venv, self.requirements,
self.test_requirements, self.py_version, self.project)
else:
return Distro(
self.root, self.venv, self.requirements,
self.test_requirements, self.py_version, self.project)
def check_dependencies(self):
self.get_distro().install_virtualenv()
def create_virtualenv(self, no_site_packages=True):
"""Creates the virtual environment and installs PIP.
Creates the virtual environment and installs PIP only into the
virtual environment.
"""
if not os.path.isdir(self.venv):
print('Creating venv...', end=' ')
if no_site_packages:
self.run_command(['virtualenv', '-q', '--no-site-packages',
self.venv])
else:
self.run_command(['virtualenv', '-q', self.venv])
print('done.')
        else:
            print("venv already exists...")
def pip_install(self, *args):
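        # Runs pip inside the virtualenv via the tools/with_venv.sh wrapper,
        # so packages are installed into the venv rather than the system Python.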
self.run_command(['tools/with_venv.sh',
'pip', 'install', '--upgrade'] + list(args),
redirect_output=False)
def install_dependencies(self):
print('Installing dependencies with pip (this can take a while)...')
# First things first, make sure our venv has the latest pip and
# setuptools and pbr
self.pip_install('pip>=1.4')
self.pip_install('setuptools')
self.pip_install('pbr')
self.pip_install('-r', self.requirements, '-r', self.test_requirements)
def parse_args(self, argv):
"""Parses command-line arguments."""
parser = optparse.OptionParser()
parser.add_option('-n', '--no-site-packages',
action='store_true',
help="Do not inherit packages from global Python "
"install.")
return parser.parse_args(argv[1:])[0]
class Distro(InstallVenv):
def check_cmd(self, cmd):
return bool(self.run_command(['which', cmd],
check_exit_code=False).strip())
def install_virtualenv(self):
if self.check_cmd('virtualenv'):
return
if self.check_cmd('easy_install'):
print('Installing virtualenv via easy_install...', end=' ')
if self.run_command(['easy_install', 'virtualenv']):
print('Succeeded')
return
else:
print('Failed')
self.die('ERROR: virtualenv not found.\n\n%s development'
' requires virtualenv, please install it using your'
' favorite package management tool' % self.project)
class Fedora(Distro):
"""This covers all Fedora-based distributions.
Includes: Fedora, RHEL, CentOS, Scientific Linux
"""
def check_pkg(self, pkg):
return self.run_command_with_code(['rpm', '-q', pkg],
check_exit_code=False)[1] == 0
def install_virtualenv(self):
if self.check_cmd('virtualenv'):
return
if not self.check_pkg('python-virtualenv'):
self.die("Please install 'python-virtualenv'.")
        super(Fedora, self).install_virtualenv()
'use strict';
var fs = require('fs');
var path = require('path');
var child_process = require("child_process");
module.exports = function (config) {
// This tox venv is setup in the post-install npm step
var pythonVersion = "python3.";
var stdout = child_process.execFileSync("python3", ["--version"]);
pythonVersion += stdout.toString().split(".")[1];
var toxPath = '../.tox/karma/lib/' + pythonVersion + '/site-packages/';
console.log("Karma will check on directory: ", toxPath);
process.env.PHANTOMJS_BIN = 'node_modules/phantomjs-prebuilt/bin/phantomjs';
config.set({
preprocessors: {
// Used to collect templates for preprocessing.
// NOTE: the templates must also be listed in the files section below.
'./static/**/*.html': ['ng-html2js'],
// Used to indicate files requiring coverage reports.
'./static/**/!(*.spec).js': ['coverage'],
},
// Sets up module to process templates.
ngHtml2JsPreprocessor: {
prependPrefix: '/',
moduleName: 'templates'
},
basePath: './',
// Contains both source and test files.
files: [
/*
* shim, partly stolen from /i18n/js/horizon/
* Contains expected items not provided elsewhere (dynamically by
* Django or via jasmine template.
*/
'../test-shim.js',
// from jasmine.html
toxPath + 'xstatic/pkg/jquery/data/jquery.js',
toxPath + 'xstatic/pkg/angular/data/angular.js',
toxPath + 'xstatic/pkg/angular/data/angular-route.js',
toxPath + 'xstatic/pkg/angular/data/angular-mocks.js',
toxPath + 'xstatic/pkg/angular/data/angular-cookies.js',
toxPath + 'xstatic/pkg/angular_bootstrap/data/angular-bootstrap.js',
toxPath + 'xstatic/pkg/angular_gettext/data/angular-gettext.js',
toxPath + 'xstatic/pkg/angular/data/angular-sanitize.js',
toxPath + 'xstatic/pkg/d3/data/d3.js',
toxPath + 'xstatic/pkg/rickshaw/data/rickshaw.js',
toxPath + 'xstatic/pkg/angular_smart_table/data/smart-table.js',
toxPath + 'xstatic/pkg/angular_lrdragndrop/data/lrdragndrop.js',
toxPath + 'xstatic/pkg/spin/data/spin.js',
toxPath + 'xstatic/pkg/spin/data/spin.jquery.js',
toxPath + 'xstatic/pkg/tv4/data/tv4.js',
toxPath + 'xstatic/pkg/objectpath/data/ObjectPath.js',
toxPath + 'xstatic/pkg/angular_schema_form/data/schema-form.js',
// TODO: These should be mocked.
toxPath + 'horizon/static/horizon/js/horizon.js',
/**
* Include framework source code from horizon that we need.
* Otherwise, karma will not be able to find them when testing.
* These files should be mocked in the foreseeable future.
*/
toxPath + 'horizon/static/framework/**/*.module.js',
toxPath + 'horizon/static/framework/**/!(*.spec|*.mock).js',
toxPath + 'openstack_dashboard/static/**/*.module.js',
toxPath + 'openstack_dashboard/static/**/!(*.spec|*.mock).js',
toxPath + 'openstack_dashboard/dashboards/**/static/**/*.module.js',
toxPath + 'openstack_dashboard/dashboards/**/static/**/!(*.spec|*.mock).js',
/**
* First, list all the files that defines application's angular modules.
* Those files have extension of `.module.js`. The order among them is
* not significant.
*/
'./static/**/*.module.js',
/**
* Followed by other JavaScript files that defines angular providers
* on the modules defined in files listed above. And they are not mock
* files or spec files defined below. The order among them is not
* significant.
*/
'./static/**/!(*.spec|*.mock).js',
/**
* Then, list files for mocks with `mock.js` extension. The order
* among them should not be significant.
*/
toxPath + 'openstack_dashboard/static/**/*.mock.js',
/**
* Finally, list files for spec with `spec.js` extension. The order
* among them should not be significant.
*/
'./static/**/*.spec.js',
/**
* Angular external templates
*/
'./static/**/*.html'
],
autoWatch: true,
frameworks: ['jasmine'],
browsers: ['Firefox'],
browserNoActivityTimeout: 60000,
reporters: ['progress', 'coverage', 'threshold'],
plugins: [
'karma-firefox-launcher',
'karma-jasmine',
'karma-ng-html2js-preprocessor',
'karma-coverage',
'karma-threshold-reporter'
],
// Places coverage report in HTML format in the subdirectory below.
coverageReporter: {
type: 'html',
dir: '../cover/karma/'
},
// Coverage threshold values.
thresholdReporter: {
statements: 10, // target 100
branches: 0, // target 100
functions: 10, // target 100
lines: 10 // target 100
}
});
};
import logging
import shlex
from django.conf import settings
from horizon import exceptions
from horizon.utils.memoized import memoized
from openstack_dashboard.api import base
from neutronclient.v2_0 import client as neutron_client
from zunclient import api_versions
from zunclient.common import template_format
from zunclient.common import utils
from zunclient.v1 import client as zun_client
LOG = logging.getLogger(__name__)
CONTAINER_CREATE_ATTRS = zun_client.containers.CREATION_ATTRIBUTES
CAPSULE_CREATE_ATTRS = zun_client.capsules.CREATION_ATTRIBUTES
IMAGE_PULL_ATTRS = zun_client.images.PULL_ATTRIBUTES
API_VERSION = api_versions.APIVersion(api_versions.DEFAULT_API_VERSION)
def get_auth_params_from_request(request):
"""Extracts properties needed by zunclient call from the request object.
These will be used to memoize the calls to zunclient.
"""
endpoint_override = ""
try:
endpoint_override = base.url_for(request, 'container')
except exceptions.ServiceCatalogException:
LOG.debug('No Container Management service is configured.')
return None
return (
request.user.username,
request.user.token.id,
request.user.tenant_id,
endpoint_override
)
@memoized
def zunclient(request):
(
username,
token_id,
project_id,
endpoint_override
) = get_auth_params_from_request(request)
insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
LOG.debug('zunclient connection created using the token "%s" and url'
' "%s"' % (token_id, endpoint_override))
api_version = API_VERSION
if API_VERSION.is_latest():
c = zun_client.Client(
username=username,
project_id=project_id,
auth_token=token_id,
endpoint_override=endpoint_override,
insecure=insecure,
cacert=cacert,
api_version=api_versions.APIVersion("1.1"),
)
api_version = api_versions.discover_version(c, api_version)
c = zun_client.Client(username=username,
project_id=project_id,
auth_token=token_id,
endpoint_override=endpoint_override,
insecure=insecure,
cacert=cacert,
api_version=api_version)
return c
def get_auth_params_from_request_neutron(request):
"""Extracts properties needed by neutronclient call from the request object.
These will be used to memoize the calls to neutronclient.
"""
return (
request.user.token.id,
base.url_for(request, 'network'),
base.url_for(request, 'identity')
)
@memoized
def neutronclient(request):
(
token_id,
neutron_url,
auth_url
) = get_auth_params_from_request_neutron(request)
insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
LOG.debug('neutronclient connection created using the token "%s" and url'
' "%s"' % (token_id, neutron_url))
c = neutron_client.Client(token=token_id,
auth_url=auth_url,
endpoint_url=neutron_url,
insecure=insecure, ca_cert=cacert)
return c
def _cleanup_params(attrs, check, **params):
args = {}
run = False
for (key, value) in params.items():
if key == "run":
run = value
elif key == "cpu":
args[key] = float(value)
elif key == "memory" or key == "disk":
args[key] = int(value)
elif key == "interactive" or key == "mounts" or key == "nets" \
or key == "security_groups" or key == "hints"\
or key == "auto_remove" or key == "auto_heal":
args[key] = value
elif key == "restart_policy":
args[key] = utils.check_restart_policy(value)
elif key == "environment" or key == "labels":
values = {}
vals = value.split(",")
for v in vals:
kv = v.split("=", 1)
values[kv[0]] = kv[1]
args[str(key)] = values
elif key == "command":
args[key] = shlex.split(value)
elif key in attrs:
if value is None:
value = ''
args[str(key)] = str(value)
elif check:
raise exceptions.BadRequest(
"Key must be in %s" % ",".join(attrs))
return args, run
def _delete_attributes_with_same_value(old, new):
    '''Delete attributes with the same value from the new dict.

    If an attribute in the new dict has the same value as in the old
    dict, remove it from the new dict.
    '''
for k in old.keys():
if k in new:
if old[k] == new[k]:
del new[k]
return new
def container_create(request, **kwargs):
args, run = _cleanup_params(CONTAINER_CREATE_ATTRS, True, **kwargs)
response = None
if run:
response = zunclient(request).containers.run(**args)
else:
response = zunclient(request).containers.create(**args)
return response
def container_update(request, id, **kwargs):
    '''Update a Container.

    Fetch the current Container attributes, drop values that are
    unchanged, and send only the remaining attributes to update.
    '''
# get current data
container = zunclient(request).containers.get(id).to_dict()
if container["memory"] is not None:
container["memory"] = int(container["memory"].replace("M", ""))
args, run = _cleanup_params(CONTAINER_CREATE_ATTRS, True, **kwargs)
# remove same values from new params
_delete_attributes_with_same_value(container, args)
# do update
if len(args):
zunclient(request).containers.update(id, **args)
return args
def container_delete(request, **kwargs):
return zunclient(request).containers.delete(**kwargs)
def container_list(request, limit=None, marker=None, sort_key=None,
sort_dir=None):
return zunclient(request).containers.list(limit, marker, sort_key,
sort_dir)
def container_show(request, id):
return zunclient(request).containers.get(id)
def container_logs(request, id):
args = {}
args["stdout"] = True
args["stderr"] = True
return zunclient(request).containers.logs(id, **args)
def container_start(request, id):
return zunclient(request).containers.start(id)
def container_stop(request, id, timeout):
return zunclient(request).containers.stop(id, timeout)
def container_restart(request, id, timeout):
return zunclient(request).containers.restart(id, timeout)
def container_rebuild(request, id, **kwargs):
return zunclient(request).containers.rebuild(id, **kwargs)
def container_pause(request, id):
return zunclient(request).containers.pause(id)
def container_unpause(request, id):
return zunclient(request).containers.unpause(id)
def container_execute(request, id, command):
args = {"command": command}
return zunclient(request).containers.execute(id, **args)
def container_kill(request, id, signal=None):
return zunclient(request).containers.kill(id, signal)
def container_attach(request, id):
return zunclient(request).containers.attach(id)
def container_resize(request, id, width, height):
return zunclient(request).containers.resize(id, width, height)
def container_network_attach(request, id):
network = request.DATA.get("network") or None
zunclient(request).containers.network_attach(id, network)
return {"container": id, "network": network}
def container_network_detach(request, id):
network = request.DATA.get("network") or None
zunclient(request).containers.network_detach(id, network)
return {"container": id, "network": network}
def port_update_security_groups(request):
port = request.DATA.get("port") or None
security_groups = request.DATA.get("security_groups") or None
kwargs = {"security_groups": security_groups}
neutronclient(request).update_port(port, body={"port": kwargs})
return {"port": port, "security_group": security_groups}
def availability_zone_list(request):
list = zunclient(request).availability_zones.list()
return list
def capsule_list(request, limit=None, marker=None, sort_key=None,
sort_dir=None):
return zunclient(request).capsules.list(limit, marker, sort_key,
sort_dir)
def capsule_show(request, id):
return zunclient(request).capsules.get(id)
def capsule_create(request, **kwargs):
args, run = _cleanup_params(CAPSULE_CREATE_ATTRS, True, **kwargs)
args["template"] = template_format.parse(args["template"])
return zunclient(request).capsules.create(**args)
def capsule_delete(request, **kwargs):
return zunclient(request).capsules.delete(**kwargs)
def image_list(request, limit=None, marker=None, sort_key=None,
sort_dir=None):
return zunclient(request).images.list(limit, marker, sort_key,
sort_dir)
def image_create(request, **kwargs):
args, run = _cleanup_params(IMAGE_PULL_ATTRS, True, **kwargs)
return zunclient(request).images.create(**args)
def image_delete(request, id, **kwargs):
return zunclient(request).images.delete(id, **kwargs)
def host_list(request, limit=None, marker=None, sort_key=None,
sort_dir=None):
return zunclient(request).hosts.list(limit, marker, sort_key,
sort_dir)
def host_show(request, id):
return zunclient(request).hosts.get(id) | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/api/client.py | client.py |
from django.views import generic
from zun_ui.api import client
from openstack_dashboard.api.rest import urls
from openstack_dashboard.api.rest import utils as rest_utils
def change_to_id(obj):
"""Change key named 'uuid' to 'id'
Zun returns objects with a field called 'uuid' many of Horizons
directives however expect objects to have a field called 'id'.
"""
obj['id'] = obj.pop('uuid')
return obj
@urls.register
class Container(generic.View):
"""API for retrieving a single container"""
url_regex = r'zun/containers/(?P<id>[^/]+)$'
@rest_utils.ajax()
def get(self, request, id):
"""Get a specific container"""
return change_to_id(client.container_show(request, id).to_dict())
@rest_utils.ajax(data_required=True)
def patch(self, request, id):
"""Update a Container.
Returns the Container object on success.
"""
args = client.container_update(request, id, **request.DATA)
return args
@urls.register
class ContainerActions(generic.View):
"""API for retrieving a single container"""
url_regex = r'zun/containers/(?P<id>[^/]+)/(?P<action>[^/]+)$'
@rest_utils.ajax()
def get(self, request, id, action):
"""Get a specific container info"""
if action == 'logs':
return client.container_logs(request, id)
@rest_utils.ajax()
def post(self, request, id, action):
"""Execute a action of the Containers."""
if action == 'start':
return client.container_start(request, id)
elif action == 'stop':
timeout = request.DATA.get("timeout") or 10
return client.container_stop(request, id, timeout)
elif action == 'restart':
timeout = request.DATA.get("timeout") or 10
return client.container_restart(request, id, timeout)
elif action == 'rebuild':
return client.container_rebuild(request, id, **request.DATA)
elif action == 'pause':
return client.container_pause(request, id)
elif action == 'unpause':
return client.container_unpause(request, id)
elif action == 'execute':
command = request.DATA.get("command")
return client.container_execute(request, id, command)
elif action == 'kill':
signal = request.DATA.get("signal") or None
return client.container_kill(request, id, signal)
elif action == 'attach':
return client.container_attach(request, id)
elif action == 'resize':
width = request.DATA.get("width") or 500
height = request.DATA.get("height") or 400
return client.container_resize(request, id, width, height)
elif action == 'network_attach':
return client.container_network_attach(request, id)
elif action == 'network_detach':
return client.container_network_detach(request, id)
elif action == 'port_update_security_groups':
return client.port_update_security_groups(request)
@rest_utils.ajax(data_required=True)
def delete(self, request, id, action):
"""Delete specified Container with option.
Returns HTTP 204 (no content) on successful deletion.
"""
opts = {'id': id}
if action == 'force':
opts['force'] = True
elif action == 'stop':
opts['stop'] = True
return client.container_delete(request, **opts)
@urls.register
class Containers(generic.View):
"""API for Zun Containers"""
url_regex = r'zun/containers/$'
@rest_utils.ajax()
def get(self, request):
"""Get a list of the Containers for a project.
The returned result is an object with property 'items' and each
item under this is a Container.
"""
result = client.container_list(request)
return {'items': [change_to_id(n.to_dict()) for n in result]}
@rest_utils.ajax(data_required=True)
def delete(self, request):
"""Delete one or more Containers by id.
Returns HTTP 204 (no content) on successful deletion.
"""
for id in request.DATA:
opts = {'id': id}
client.container_delete(request, **opts)
@rest_utils.ajax(data_required=True)
def post(self, request):
"""Create a new Container.
Returns the new Container object on success.
        If the 'run' attribute is set to true, 'run' is performed instead of 'create'.
"""
new_container = client.container_create(request, **request.DATA)
return rest_utils.CreatedResponse(
'/api/zun/container/%s' % new_container.uuid,
new_container.to_dict())
@urls.register
class AvailabilityZones(generic.View):
"""API for Zun AvailabilityZones"""
url_regex = r'zun/availability_zones/$'
@rest_utils.ajax()
def get(self, request):
"""Get a list of the Zun AvailabilityZones.
The returned result is an object with property 'items' and each
item under this is a Zun AvailabilityZones.
"""
result = client.availability_zone_list(request)
return {'items': [i.to_dict() for i in result]}
@urls.register
class Capsules(generic.View):
"""API for Capsules"""
url_regex = r'zun/capsules/$'
@rest_utils.ajax()
def get(self, request):
"""Get a list of the Capsules.
The returned result is an object with property 'items' and each
item under this is a Capsules.
"""
result = client.capsule_list(request)
return {'items': [i.to_dict() for i in result]}
@rest_utils.ajax(data_required=True)
def delete(self, request):
"""Delete one or more Capsules by id.
Returns HTTP 204 (no content) on successful deletion.
"""
for id in request.DATA:
opts = {'id': id}
client.capsule_delete(request, **opts)
@rest_utils.ajax(data_required=True)
def post(self, request):
"""Create a new Capsule.
Returns the new Capsule object on success.
"""
new_capsule = client.capsule_create(request, **request.DATA)
return rest_utils.CreatedResponse(
'/api/zun/capsules/%s' % new_capsule.uuid,
new_capsule.to_dict())
@urls.register
class Capsule(generic.View):
"""API for retrieving a single capsule"""
url_regex = r'zun/capsules/(?P<id>[^/]+)$'
@rest_utils.ajax()
def get(self, request, id):
"""Get a specific capsule"""
return change_to_id(client.capsule_show(request, id).to_dict())
@urls.register
class Images(generic.View):
"""API for Zun Images"""
url_regex = r'zun/images/$'
@rest_utils.ajax()
def get(self, request):
"""Get a list of the Images for admin users.
The returned result is an object with property 'items' and each
item under this is a Image.
"""
result = client.image_list(request)
return {'items': [change_to_id(i.to_dict()) for i in result]}
@rest_utils.ajax(data_required=True)
def delete(self, request):
"""Delete one or more Images by id.
Returns HTTP 204 (no content) on successful deletion.
"""
for id in request.DATA:
client.image_delete(request, id)
@rest_utils.ajax(data_required=True)
def post(self, request):
"""Create a new Image.
Returns the new Image object on success.
"""
new_image = client.image_create(request, **request.DATA)
return rest_utils.CreatedResponse(
'/api/zun/image/%s' % new_image.uuid,
new_image.to_dict())
@urls.register
class Hosts(generic.View):
"""API for Zun Hosts"""
url_regex = r'zun/hosts/$'
@rest_utils.ajax()
def get(self, request):
"""Get a list of the Hosts for admin users.
The returned result is an object with property 'items' and each
        item under this is a Host.
"""
result = client.host_list(request)
return {'items': [change_to_id(i.to_dict()) for i in result]}
@urls.register
class Host(generic.View):
"""API for retrieving a single host"""
url_regex = r'zun/hosts/(?P<id>[^/]+)$'
@rest_utils.ajax()
def get(self, request, id):
"""Get a specific host"""
        return change_to_id(client.host_show(request, id).to_dict())
(function() {
"use strict";
angular
.module('horizon.cloud-shell')
.controller('horizon.cloud-shell.controller', cloudShellController);
cloudShellController.$inject = [
'$scope',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.webRoot',
'horizon.framework.util.http.service'
];
function cloudShellController(
$scope,
zun,
webRoot,
http
) {
var ctrl = this;
ctrl.openInNewWindow = openInNewWindow;
ctrl.close = closeShell;
ctrl.consoleUrl = null;
ctrl.container = {};
ctrl.resizeTerminal = resizeTerminal;
// close existing shell
closeShell();
// default size for shell
var cols = 80;
var rows = 24;
// get clouds.yaml for OpenStack Client
var cloudsYaml;
http.get('/project/api_access/clouds.yaml/').then(function(response) {
// cloud.yaml to be set to .config/openstack/clouds.yaml in container
cloudsYaml = response.data;
ctrl.user = cloudsYaml.match(/username: "(.+)"/)[1];
ctrl.project = cloudsYaml.match(/project_name: "(.+)"/)[1];
ctrl.userDomain = cloudsYaml.match(/user_domain_name: "(.+)"/);
ctrl.projectDomain = cloudsYaml.match(/project_domain_name: "(.+)"/);
ctrl.domain = (ctrl.userDomain.length === 2) ? ctrl.userDomain[1] : ctrl.projectDomain[1];
ctrl.region = cloudsYaml.match(/region_name: "(.+)"/)[1];
// container name
ctrl.containerLabel = "cloud-shell-" + ctrl.user + "-" + ctrl.project +
"-" + ctrl.domain + "-" + ctrl.region;
// get container
zun.getContainers().then(findContainer);
});
function findContainer(response) {
var container = response.data.items.find(function(item) {
return item.labels['cloud-shell'] === ctrl.containerLabel;
});
if (typeof (container) === 'undefined') {
onFailGetContainer();
} else {
onGetContainer({data: container});
}
}
function onGetContainer(response) {
ctrl.container = response.data;
// attach console to existing container
ctrl.consoleUrl = webRoot + "containers/" + ctrl.container.id + "/console";
var console = $("<p>To display console, interactive mode needs to be enabled " +
"when this container was created.</p>");
if (ctrl.container.status !== "Running") {
console = $("<p>Container is not running. Please wait for starting up container.</p>");
} else if (ctrl.container.interactive) {
console = $("<iframe id=\"console_embed\" src=\"" + ctrl.consoleUrl +
"\" style=\"width:100%;height:100%\"></iframe>");
// execute openrc.sh on the container
var command = "sh -c 'printf \"" + cloudsYaml + "\" > ~/.config/openstack/clouds.yaml'";
zun.executeContainer(ctrl.container.id, {command: command}).then(function() {
var command = "sh -c 'printf \"export OS_CLOUD=openstack\" > ~/.bashrc'";
zun.executeContainer(ctrl.container.id, {command: command}).then(function() {
angular.noop();
});
});
}
// append shell content
angular.element("#shell-content").append(console);
}
// watcher for iframe contents loading, seems to emit once.
$scope.$watch(function() {
return angular.element("#shell-content > iframe").contents()
.find("#terminalNode").attr("termCols");
}, resizeTerminal);
// event handler to resize console according to window resize.
angular.element(window).bind('resize', resizeTerminal);
// also, add resizeTerminal into callback attribute for resizer directive
function resizeTerminal() {
var shellIframe = angular.element("#shell-content > iframe");
var newCols = shellIframe.contents().find("#terminalNode").attr("termCols");
var newRows = shellIframe.contents().find("#terminalNode").attr("termRows");
if ((newCols !== cols || newRows !== rows) && newCols > 0 && newRows > 0 &&
ctrl.container.id) {
// resize tty
zun.resizeContainer(ctrl.container.id, {width: newCols, height: newRows}).then(function() {
cols = newCols;
rows = newRows;
});
}
}
function onFailGetContainer() {
// create new container and attach console to it.
var image = angular.element("#cloud-shell-menu").attr("cloud-shell-image");
var model = {
image: image,
command: "/bin/bash",
interactive: true,
run: true,
environment: "OS_CLOUD=openstack",
labels: "cloud-shell=" + ctrl.containerLabel
};
zun.createContainer(model).then(function (response) {
// attach
onGetContainer({data: {id: response.data.id}});
});
}
function openInNewWindow() {
// open shell in new window
window.open(ctrl.consoleUrl, "_blank");
closeShell();
}
function closeShell() {
// close shell
angular.element("#cloud-shell").remove();
angular.element("#cloud-shell-resizer").remove();
}
}
})();
(function () {
'use strict';
angular
.module('horizon.app.core.openstack-service-api')
.factory('horizon.app.core.openstack-service-api.zun', ZunAPI);
ZunAPI.$inject = [
'$location',
'horizon.framework.util.http.service',
'horizon.framework.widgets.toast.service',
'horizon.framework.util.i18n.gettext'
];
function ZunAPI($location, apiService, toastService, gettext) {
var containersPath = '/api/zun/containers/';
var zunAvailabilityZonesPath = '/api/zun/availability_zones/';
var capsulesPath = '/api/zun/capsules/';
var imagesPath = '/api/zun/images/';
var hostsPath = '/api/zun/hosts/';
var service = {
createContainer: createContainer,
updateContainer: updateContainer,
getContainer: getContainer,
getContainers: getContainers,
deleteContainer: deleteContainer,
deleteContainers: deleteContainers,
deleteContainerForce: deleteContainerForce,
deleteContainerStop: deleteContainerStop,
startContainer: startContainer,
stopContainer: stopContainer,
logsContainer: logsContainer,
restartContainer: restartContainer,
rebuildContainer: rebuildContainer,
pauseContainer: pauseContainer,
unpauseContainer: unpauseContainer,
executeContainer: executeContainer,
killContainer: killContainer,
resizeContainer: resizeContainer,
attachNetwork: attachNetwork,
detachNetwork: detachNetwork,
updatePortSecurityGroup: updatePortSecurityGroup,
getZunAvailabilityZones: getZunAvailabilityZones,
getCapsules: getCapsules,
getCapsule: getCapsule,
createCapsule: createCapsule,
deleteCapsule: deleteCapsule,
pullImage: pullImage,
getImages: getImages,
deleteImage: deleteImage,
getHosts: getHosts,
getHost: getHost,
isAdmin: isAdmin
};
return service;
///////////////
// Containers //
///////////////
function createContainer(params) {
var msg = gettext('Unable to create Container.');
return apiService.post(containersPath, params).error(error(msg));
}
function updateContainer(id, params) {
var msg = gettext('Unable to update Container.');
return apiService.patch(containersPath + id, params).error(error(msg));
}
function getContainer(id, suppressError) {
var promise = apiService.get(containersPath + id);
return suppressError ? promise : promise.catch(function onError() {
var msg = gettext('Unable to retrieve the Container.');
toastService.add('error', msg);
});
}
function getContainers() {
var msg = gettext('Unable to retrieve the Containers.');
return apiService.get(containersPath).catch(function onError() {
error(msg);
});
}
function deleteContainer(id, suppressError) {
var promise = apiService.delete(containersPath, [id]);
return suppressError ? promise : promise.catch(function onError() {
var msg = gettext('Unable to delete the Container with id: %(id)s');
toastService.add('error', interpolate(msg, { id: id }, true));
});
}
// FIXME(shu-mutou): Unused for batch-delete in Horizon framework in Feb, 2016.
function deleteContainers(ids) {
var msg = gettext('Unable to delete the Containers.');
return apiService.delete(containersPath, ids).catch(function onError() {
error(msg);
});
}
function deleteContainerForce(id, suppressError) {
var promise = apiService.delete(containersPath + id + '/force', [id]);
return suppressError ? promise : promise.catch(function onError() {
var msg = gettext('Unable to delete forcely the Container with id: %(id)s');
toastService.add('error', interpolate(msg, { id: id }, true));
});
}
function deleteContainerStop(id, suppressError) {
var promise = apiService.delete(containersPath + id + '/stop', [id]);
return suppressError ? promise : promise.catch(function onError() {
var msg = gettext('Unable to stop and delete the Container with id: %(id)s');
toastService.add('error', interpolate(msg, { id: id }, true));
});
}
function startContainer(id) {
var msg = gettext('Unable to start Container.');
return apiService.post(containersPath + id + '/start').catch(function onError() {
error(msg);
});
}
function stopContainer(id, params) {
var msg = gettext('Unable to stop Container.');
return apiService.post(containersPath + id + '/stop', params).catch(function onError() {
error(msg);
});
}
function logsContainer(id) {
var msg = gettext('Unable to get logs of Container.');
return apiService.get(containersPath + id + '/logs').catch(function onError() {
error(msg);
});
}
function restartContainer(id, params) {
var msg = gettext('Unable to restart Container.');
return apiService.post(containersPath + id + '/restart', params).catch(function onError() {
error(msg);
});
}
function rebuildContainer(id, params) {
var msg = gettext('Unable to rebuild Container.');
return apiService.post(containersPath + id + '/rebuild', params).catch(function onError() {
error(msg);
});
}
function pauseContainer(id) {
      var msg = gettext('Unable to pause Container.');
return apiService.post(containersPath + id + '/pause').catch(function onError() {
error(msg);
});
}
function unpauseContainer(id) {
      var msg = gettext('Unable to unpause Container.');
return apiService.post(containersPath + id + '/unpause').catch(function onError() {
error(msg);
});
}
function executeContainer(id, params) {
var msg = gettext('Unable to execute the command.');
return apiService.post(containersPath + id + '/execute', params).catch(function onError() {
error(msg);
});
}
function killContainer(id, params) {
var msg = gettext('Unable to send kill signal.');
return apiService.post(containersPath + id + '/kill', params).catch(function onError() {
error(msg);
});
}
function resizeContainer(id, params) {
var msg = gettext('Unable to resize console.');
return apiService.post(containersPath + id + '/resize', params).catch(function onError() {
error(msg);
});
}
function attachNetwork(id, net) {
var msg = gettext('Unable to attach network.');
return apiService.post(containersPath + id + '/network_attach', {network: net})
.catch(function onError() {
error(msg);
});
}
function detachNetwork(id, net) {
var msg = gettext('Unable to detach network.');
return apiService.post(containersPath + id + '/network_detach', {network: net})
.catch(function onError() {
error(msg);
});
}
function updatePortSecurityGroup(id, port, sgs) {
var msg = interpolate(
gettext('Unable to update security groups %(sgs)s for port %(port)s.'),
{port: port, sgs: sgs}, true);
return apiService.post(containersPath + id + '/port_update_security_groups',
{port: port, security_groups: sgs})
.catch(function onError() {
error(msg);
});
}
//////////////////////////////
// Zun AvailabilityZones //
//////////////////////////////
function getZunAvailabilityZones() {
var msg = gettext('Unable to retrieve the Zun Availability Zones.');
return apiService.get(zunAvailabilityZonesPath).catch(function onError() {
error(msg);
});
}
//////////////
// Capsules //
//////////////
function getCapsules() {
var msg = gettext('Unable to retrieve the Capsules.');
return apiService.get(capsulesPath).catch(function onError() {
error(msg);
});
}
function getCapsule(id) {
var msg = gettext('Unable to retrieve the Capsule.');
return apiService.get(capsulesPath + id).catch(function onError() {
error(msg);
});
}
function createCapsule(params) {
var msg = gettext('Unable to create Capsule.');
return apiService.post(capsulesPath, params).catch(function onError() {
error(msg);
});
}
function deleteCapsule(id, suppressError) {
var promise = apiService.delete(capsulesPath, [id]);
return suppressError ? promise : promise.catch(function onError() {
var msg = gettext('Unable to delete the Capsule with id: %(id)s');
toastService.add('error', interpolate(msg, { id: id }, true));
});
}
////////////
// Images //
////////////
function pullImage(params) {
var msg = gettext('Unable to pull Image.');
return apiService.post(imagesPath, params).catch(function onError() {
error(msg);
});
}
function getImages() {
var msg = gettext('Unable to retrieve the Images.');
return apiService.get(imagesPath).catch(function onError() {
error(msg);
});
}
function deleteImage(id, suppressError) {
var promise = apiService.delete(imagesPath, [id]);
return suppressError ? promise : promise.catch(function onError() {
var msg = gettext('Unable to delete the Image with id: %(id)s');
toastService.add('error', interpolate(msg, { id: id }, true));
});
}
///////////
// Hosts //
///////////
function getHosts() {
var msg = gettext('Unable to retrieve the Hosts.');
return apiService.get(hostsPath).catch(function onError() {
error(msg);
});
}
function getHost(id) {
var msg = gettext('Unable to retrieve the Host.');
return apiService.get(hostsPath + id).catch(function onError() {
error(msg);
});
}
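  // Returns a rejection callback that reports the given message via a toast;
  // shared by the API wrappers above as their common error handler.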
function error(message) {
return function() {
toastService.add('error', message);
};
}
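  // Heuristic: the session is treated as admin when the current URL points at
  // the admin dashboard, either directly or via the ?nav= parameter.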
function isAdmin() {
var isAdmin = false;
if ($location.url().startsWith("/admin") ||
$location.url().endsWith("?nav=%2Fadmin%2Fcontainer%2Fcontainers%2F")
) {
isAdmin = true;
}
return isAdmin;
}
}
}()); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/zun.service.js | zun.service.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @name horizon.dashboard.container.containers
* @ngModule
* @description
* Provides all the services and widgets require to display the container
* panel
*/
angular
.module('horizon.dashboard.container.containers', [
'ngRoute',
'horizon.dashboard.container.containers.actions',
'horizon.dashboard.container.containers.details'
])
.constant('horizon.dashboard.container.containers.events', events())
.constant('horizon.dashboard.container.containers.validStates', validStates())
.constant('horizon.dashboard.container.containers.adminActions', adminActions())
.constant('horizon.dashboard.container.containers.resourceType', 'OS::Zun::Container')
.run(run)
.config(config);
/**
* @ngdoc constant
* @name horizon.dashboard.container.containers.events
* @description A list of events used by Container
* @returns {Object} Event constants
*/
function events() {
return {
CREATE_SUCCESS: 'horizon.dashboard.container.containers.CREATE_SUCCESS',
DELETE_SUCCESS: 'horizon.dashboard.container.containers.DELETE_SUCCESS'
};
}
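  /**
   * @ngdoc constant
   * @name horizon.dashboard.container.containers.validStates
   * @description Maps each container action to the container statuses in
   * which that action may be invoked.
   * @returns {Object} Valid-state lists keyed by action name
   */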
function validStates() {
var states = {
ERROR: 'Error', RUNNING: 'Running', STOPPED: 'Stopped',
PAUSED: 'Paused', UNKNOWN: 'Unknown', CREATING: 'Creating',
CREATED: 'Created', DELETED: 'Deleted', DELETING: 'Deleting',
REBUILDING: 'Rebuilding', DEAD: 'Dead', RESTARTING: 'Restarting'
};
return {
update: [states.CREATED, states.RUNNING, states.STOPPED, states.PAUSED],
start: [states.CREATED, states.STOPPED, states.ERROR],
stop: [states.RUNNING],
restart: [states.CREATED, states.RUNNING, states.STOPPED, states.ERROR],
rebuild: [states.CREATED, states.RUNNING, states.STOPPED, states.ERROR],
pause: [states.RUNNING],
unpause: [states.PAUSED],
execute: [states.RUNNING],
kill: [states.RUNNING],
delete: [states.CREATED, states.ERROR, states.STOPPED, states.DELETED, states.DEAD],
      /* NOTE(shu-mutow): Docker does not allow us to delete a PAUSED container.
       * There are server-side ways to delete a paused container,
       * but we follow Docker's policy for now.
       */
delete_force: [
states.CREATED, states.CREATING, states.ERROR, states.RUNNING,
states.STOPPED, states.UNKNOWN, states.DELETED, states.DEAD,
states.RESTARTING, states.REBUILDING, states.DELETING
],
delete_stop: [
states.RUNNING, states.CREATED, states.ERROR, states.STOPPED,
states.DELETED, states.DEAD
],
manage_security_groups: [states.CREATED, states.RUNNING, states.STOPPED, states.PAUSED]
};
}
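  /**
   * @ngdoc constant
   * @name horizon.dashboard.container.containers.adminActions
   * @description Container actions that remain available from the admin
   * dashboard; every other action is offered on the project panel only.
   * @returns {Array} Names of admin-permitted actions
   */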
function adminActions() {
return ["update", "start", "stop", "restart", "rebuild", "kill", "delete_force"];
}
run.$inject = [
'horizon.framework.conf.resource-type-registry.service',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.basePath',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.service'
];
function run(registry, zun, basePath, resourceType, containerService) {
registry.getResourceType(resourceType)
.setNames(gettext('Container'), gettext('Containers'))
.setSummaryTemplateUrl(basePath + 'details/drawer.html')
.setDefaultIndexUrl(containerService.getDefaultIndexUrl())
.setProperties(containerProperties())
.setListFunction(containerService.getContainersPromise)
.tableColumns
.append({
id: 'name',
priority: 1,
sortDefault: true,
filters: ['noName'],
urlFunction: containerService.getDetailsPath
})
.append({
id: 'id',
priority: 3
})
.append({
id: 'image',
priority: 2
})
.append({
id: 'status',
priority: 2
})
.append({
id: 'task_state',
priority: 2
});
// for magic-search
registry.getResourceType(resourceType).filterFacets
.append({
'label': gettext('Name'),
'name': 'name',
'singleton': true
})
.append({
'label': gettext('ID'),
'name': 'id',
'singleton': true
})
.append({
'label': gettext('Image'),
'name': 'image',
'singleton': true
})
.append({
'label': gettext('Status'),
'name': 'status',
'singleton': true
})
.append({
'label': gettext('Task State'),
'name': 'task_state',
'singleton': true
});
}
function containerProperties() {
return {
'addresses': { label: gettext('Addresses'), filters: ['noValue', 'json'] },
'auto_heal': { label: gettext('Auto Heal'), filters: ['yesno'] },
'auto_remove': { label: gettext('Auto Remove'), filters: ['yesno'] },
'command': { label: gettext('Command'), filters: ['noValue'] },
'cpu': { label: gettext('CPU'), filters: ['noValue'] },
'disk': { label: gettext('Disk'), filters: ['gb', 'noValue'] },
'environment': { label: gettext('Environment'), filters: ['noValue', 'json'] },
'host': { label: gettext('Host'), filters: ['noValue'] },
'hostname': { label: gettext('Hostname'), filters: ['noValue'] },
'id': {label: gettext('ID'), filters: ['noValue'] },
'image': {label: gettext('Image'), filters: ['noValue'] },
'image_driver': {label: gettext('Image Driver'), filters: ['noValue'] },
'image_pull_policy': {label: gettext('Image Pull Policy'), filters: ['noValue'] },
'interactive': {label: gettext('Interactive'), filters: ['yesno'] },
'labels': {label: gettext('Labels'), filters: ['noValue', 'json'] },
'links': {label: gettext('Links'), filters: ['noValue', 'json'] },
'memory': {label: gettext('Memory'), filters: ['noValue'] },
'name': {label: gettext('Name'), filters: ['noName'] },
'ports': {label: gettext('Ports'), filters: ['noValue', 'json'] },
'restart_policy': {label: gettext('Restart Policy'), filters: ['noValue', 'json'] },
'runtime': {label: gettext('Runtime'), filters: ['noName'] },
'security_groups': {label: gettext('Security Groups'), filters: ['noValue', 'json'] },
'status': {label: gettext('Status'), filters: ['noValue'] },
'status_detail': {label: gettext('Status Detail'), filters: ['noValue'] },
'status_reason': {label: gettext('Status Reason'), filters: ['noValue'] },
'task_state': {label: gettext('Task State'), filters: ['noValue'] },
'workdir': {label: gettext('Workdir'), filters: ['noValue'] }
};
}
config.$inject = [
'$provide',
'$windowProvider',
'$routeProvider'
];
/**
* @name config
* @param {Object} $provide
* @param {Object} $windowProvider
* @param {Object} $routeProvider
* @description Routes used by this module.
* @returns {undefined} Returns nothing
*/
function config($provide, $windowProvider, $routeProvider) {
var path = $windowProvider.$get().STATIC_URL + 'dashboard/container/containers/';
$provide.constant('horizon.dashboard.container.containers.basePath', path);
$routeProvider
.when('/project/container/containers', {
templateUrl: path + 'panel.html'
})
.when('/admin/container/containers', {
templateUrl: path + 'panel.html'
});
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/containers.module.js | containers.module.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @ngname horizon.dashboard.container.containers.actions
*
* @description
* Provides all of the actions for containers.
*/
angular.module('horizon.dashboard.container.containers.actions',
[
'horizon.framework',
'horizon.dashboard.container'
])
.run(registerContainerActions);
registerContainerActions.$inject = [
'horizon.framework.conf.resource-type-registry.service',
'horizon.framework.util.i18n.gettext',
'horizon.dashboard.container.containers.create.service',
'horizon.dashboard.container.containers.update.service',
'horizon.dashboard.container.containers.delete.service',
'horizon.dashboard.container.containers.delete-force.service',
'horizon.dashboard.container.containers.delete-stop.service',
'horizon.dashboard.container.containers.start.service',
'horizon.dashboard.container.containers.stop.service',
'horizon.dashboard.container.containers.restart.service',
'horizon.dashboard.container.containers.rebuild.service',
'horizon.dashboard.container.containers.pause.service',
'horizon.dashboard.container.containers.unpause.service',
'horizon.dashboard.container.containers.execute.service',
'horizon.dashboard.container.containers.kill.service',
'horizon.dashboard.container.containers.refresh.service',
'horizon.dashboard.container.containers.manage-security-groups.service',
'horizon.dashboard.container.containers.resourceType'
];
function registerContainerActions(
registry,
gettext,
createContainerService,
updateContainerService,
deleteContainerService,
deleteContainerForceService,
deleteContainerStopService,
startContainerService,
stopContainerService,
restartContainerService,
rebuildContainerService,
pauseContainerService,
unpauseContainerService,
executeContainerService,
killContainerService,
refreshContainerService,
manageSecurityGroupService,
resourceType
) {
var containersResourceType = registry.getResourceType(resourceType);
containersResourceType.globalActions
.append({
id: 'createContainerAction',
service: createContainerService,
template: {
type: 'create',
text: gettext('Create Container')
}
});
containersResourceType.batchActions
.append({
id: 'batchDeleteContainerAction',
service: deleteContainerService,
template: {
type: 'delete-selected',
text: gettext('Delete Containers')
}
});
containersResourceType.itemActions
.append({
id: 'refreshContainerAction',
service: refreshContainerService,
template: {
text: gettext('Refresh')
}
})
.append({
id: 'updateContainerAction',
service: updateContainerService,
template: {
text: gettext('Update Container')
}
})
.append({
id: 'manageSecurityGroupService',
service: manageSecurityGroupService,
template: {
text: gettext('Manage Security Groups')
}
})
.append({
id: 'startContainerAction',
service: startContainerService,
template: {
text: gettext('Start Container')
}
})
.append({
id: 'stopContainerAction',
service: stopContainerService,
template: {
text: gettext('Stop Container')
}
})
.append({
id: 'restartContainerAction',
service: restartContainerService,
template: {
text: gettext('Restart Container')
}
})
.append({
id: 'rebuildContainerAction',
service: rebuildContainerService,
template: {
text: gettext('Rebuild Container')
}
})
.append({
id: 'pauseContainerAction',
service: pauseContainerService,
template: {
text: gettext('Pause Container')
}
})
.append({
id: 'unpauseContainerAction',
service: unpauseContainerService,
template: {
text: gettext('Unpause Container')
}
})
.append({
id: 'executeContainerAction',
service: executeContainerService,
template: {
text: gettext('Execute Command')
}
})
.append({
id: 'killContainerAction',
service: killContainerService,
template: {
text: gettext('Send Kill Signal')
}
})
.append({
id: 'deleteContainerAction',
service: deleteContainerService,
template: {
type: 'delete',
text: gettext('Delete Container')
}
})
.append({
id: 'deleteContainerStopAction',
service: deleteContainerStopService,
template: {
type: 'delete',
text: gettext('Stop and Delete Container')
}
})
.append({
id: 'deleteContainerForceAction',
service: deleteContainerForceService,
template: {
type: 'delete',
          text: gettext('Force Delete Container')
}
});
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions.module.js | actions.module.js |
(function() {
"use strict";
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.service', containersService);
containersService.$inject = [
'$location',
'horizon.app.core.detailRoute',
'horizon.app.core.openstack-service-api.zun',
];
/*
* @ngdoc factory
* @name horizon.cluster.containers.service
*
* @description
* This service provides functions that are used through
* the containers features.
*/
function containersService($location, detailRoute, zun) {
return {
getDefaultIndexUrl: getDefaultIndexUrl,
getDetailsPath: getDetailsPath,
getContainerPromise: getContainerPromise,
getContainersPromise: getContainersPromise
};
function getDefaultIndexUrl() {
var dashboard;
var path = "/container/containers";
if (zun.isAdmin()) {
dashboard = "/admin";
} else {
dashboard = "/project";
}
var url = dashboard + path + "/";
return url;
}
/*
* @ngdoc function
* @name getDetailsPath
* @param item {Object} - The container object
* @description
* Returns the relative path to the details view.
*/
function getDetailsPath(item) {
var detailsPath = detailRoute + 'OS::Zun::Container/' + item.id;
if ($location.url() === '/admin/container/containers') {
detailsPath = detailsPath + "?nav=/admin/container/containers/";
}
return detailsPath;
}
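    // Illustrative example: a container {id: 'c1'} resolves to
    // detailRoute + 'OS::Zun::Container/c1', with a
    // '?nav=/admin/container/containers/' suffix when viewed as admin.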
/*
* @ngdoc function
* @name getContainerPromise
* @description
* Given an id, returns a promise for the container data.
*/
function getContainerPromise(identifier) {
return zun.getContainer(identifier);
}
/*
* @ngdoc function
* @name getContainersPromise
* @description
* Given filter/query parameters, returns a promise for the matching
* containers. This is used in displaying lists of containers.
*/
function getContainersPromise(params) {
return zun.getContainers(params).then(modifyResponse);
}
function modifyResponse(response) {
return {data: {items: response.data.items.map(modifyItem)}};
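      // Suffix a timestamp onto each row's trackBy key so the table
      // re-renders every row on each poll, even when the id is unchanged.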
function modifyItem(item) {
var timestamp = new Date();
item.trackBy = item.id.concat(timestamp.getTime());
return item;
}
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/containers.service.js | containers.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.containers.delete.service
* @Description
* Brings up the delete containers confirmation modal dialog.
* On submit, delete selected resources.
* On cancel, do nothing.
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.delete.service', deleteService);
deleteService.$inject = [
'$location',
'$q',
'$rootScope',
'horizon.app.core.openstack-service-api.zun',
'horizon.app.core.openstack-service-api.policy',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.modal.deleteModalService',
'horizon.framework.widgets.table.events',
'horizon.framework.widgets.toast.service',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.events',
'horizon.dashboard.container.containers.validStates'
];
function deleteService(
$location, $q, $rootScope, zun, policy, actionResult, gettext, $qExtensions, deleteModal,
tableEvents, toast, adminActions, resourceType, events, validStates
) {
var scope;
var context = {
labels: null,
deleteEntity: deleteEntity,
successEvent: events.DELETE_SUCCESS
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
var notAllowedMessage = gettext("You are not allowed to delete containers: %s");
return service;
//////////////
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("delete") >= 0;
}
// only row actions pass in container
// otherwise, assume it is a batch action
var state;
if (container) {
state = $qExtensions.booleanAsPromise(
validStates.delete.indexOf(container.status) >= 0
);
} else {
state = $qExtensions.booleanAsPromise(true);
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
state
]);
}
// delete selected resource objects
function perform(selected, newScope) {
scope = newScope;
selected = angular.isArray(selected) ? selected : [selected];
context.labels = labelize(selected.length);
return $qExtensions.allSettled(selected.map(checkPermission)).then(afterCheck);
}
function labelize(count) {
return {
title: ngettext('Confirm Delete Container',
'Confirm Delete Containers', count),
/* eslint-disable max-len */
message: ngettext('You have selected "%s". Please confirm your selection. Deleted container is not recoverable.',
'You have selected "%s". Please confirm your selection. Deleted containers are not recoverable.', count),
/* eslint-enable max-len */
submit: ngettext('Delete Container',
'Delete Containers', count),
success: ngettext('Deleted Container: %s.',
'Deleted Containers: %s.', count),
error: ngettext('Unable to delete Container: %s.',
'Unable to delete Containers: %s.', count)
};
}
// for batch delete
function checkPermission(selected) {
return {promise: allowed(selected), context: selected};
}
// for batch delete
function afterCheck(result) {
var outcome = $q.reject().catch(angular.noop); // Reject the promise by default
if (result.fail.length > 0) {
toast.add('error', getMessage(notAllowedMessage, result.fail));
outcome = $q.reject(result.fail).catch(angular.noop);
}
if (result.pass.length > 0) {
outcome = deleteModal.open(scope, result.pass.map(getEntity), context).then(createResult);
}
return outcome;
}
function createResult(deleteModalResult) {
// To make the result of this action generically useful, reformat the return
// from the deleteModal into a standard form
var result = actionResult.getActionResult();
deleteModalResult.pass.forEach(function markDeleted(item) {
result.updated(resourceType, getEntity(item).id);
});
deleteModalResult.fail.forEach(function markFailed(item) {
result.failed(resourceType, getEntity(item).id);
});
var indexPath = '/project/container/containers';
var currentPath = $location.path();
if (result.result.failed.length === 0 && result.result.updated.length > 0 &&
currentPath !== indexPath) {
$location.path(indexPath);
} else {
$rootScope.$broadcast(tableEvents.CLEAR_SELECTIONS);
return result.result;
}
}
function getMessage(message, entities) {
return interpolate(message, [entities.map(getName).join(", ")]);
}
function getName(result) {
return getEntity(result).name;
}
// for batch delete
function getEntity(result) {
return result.context;
}
// call delete REST API
function deleteEntity(id) {
return zun.deleteContainer(id, true);
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/delete.service.js | delete.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.containers.rebuild.service
* @Description
* rebuild container.
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.rebuild.service', rebuildService);
rebuildService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.basePath',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function rebuildService(
$q, zun, adminActions, basePath, resourceType, validStates,
actionResult, gettext, $qExtensions, modal, toast
) {
var imageDrivers = [
{value: "", name: gettext("Select image driver for changing image.")},
{value: "docker", name: gettext("Docker Hub")},
{value: "glance", name: gettext("Glance")}
];
// model
var model = {
id: "",
name: "",
image: "",
image_driver: ""
};
// schema
var schema = {
type: "object",
properties: {
image: {
title: gettext("Image"),
type: "string"
},
image_driver: {
title: gettext("Image Driver"),
type: "string"
}
}
};
// form
var form = [
{
type: 'section',
htmlClass: 'row',
items: [
{
type: 'section',
htmlClass: 'col-sm-12',
items: [
{
"key": "image",
"placeholder": gettext("Specify an image to change.")
},
{
"key": "image_driver",
"type": "select",
"titleMap": imageDrivers,
"condition": model.image !== ""
}
]
}
]
}
];
var message = {
success: gettext('Container %s was successfully rebuilt.')
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
return service;
//////////////
// include this function in your service
// if you plan to emit events to the parent controller
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("rebuild") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.rebuild.indexOf(container.status) >= 0
)
]);
}
function perform(selected) {
model.id = selected.id;
model.name = selected.name;
// modal config
var config = {
"title": gettext('Rebuild Container'),
"submitText": gettext('Rebuild'),
"schema": schema,
"form": form,
"model": model
};
return modal.open(config).then(submit);
function submit(context) {
var id = context.model.id;
var name = context.model.name;
delete context.model.id;
delete context.model.name;
return zun.rebuildContainer(id, context.model).then(function() {
toast.add('success', interpolate(message.success, [name]));
var result = actionResult.getActionResult().updated(resourceType, id);
return result.result;
});
}
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/rebuild.service.js | rebuild.service.js |
(function() {
'use strict';
/**
* @ngdoc factory
* @name horizon.dashboard.container.containers.kill.service
* @description
* Service to send kill signals to the container
*/
angular
.module('horizon.dashboard.container.containers')
.factory(
'horizon.dashboard.container.containers.kill.service',
killContainerService);
killContainerService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.basePath',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function killContainerService(
$q, zun, adminActions, basePath, resourceType, validStates,
actionResult, gettext, $qExtensions, modal, toast
) {
// schema
var schema = {
type: "object",
properties: {
signal: {
title: gettext("Kill Signal"),
type: "string"
}
}
};
// form
var form = [
{
type: 'section',
htmlClass: 'row',
items: [
{
type: 'section',
htmlClass: 'col-sm-6',
items: [
{
"key": "signal",
"placeholder": gettext("The kill signal to send.")
}
]
},
{
type: 'template',
templateUrl: basePath + 'actions/kill.help.html'
}
]
}
];
// model
var model;
var message = {
success: gettext('Kill signal was successfully sent to container %s.')
};
var service = {
initAction: initAction,
perform: perform,
allowed: allowed
};
return service;
//////////////
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("kill") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.kill.indexOf(container.status) >= 0
)
]);
}
function perform(selected) {
model = {
id: selected.id,
name: selected.name,
signal: ''
};
// modal config
var config = {
"title": gettext('Send Kill Signal'),
"submitText": gettext('Send'),
"schema": schema,
"form": form,
"model": model
};
return modal.open(config).then(submit);
}
function submit(context) {
var id = context.model.id;
var name = context.model.name;
delete context.model.id;
delete context.model.name;
return zun.killContainer(id, context.model).then(function() {
toast.add('success', interpolate(message.success, [name]));
var result = actionResult.getActionResult().updated(resourceType, id);
return result.result;
});
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/kill.service.js | kill.service.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @name horizon.dashboard.container.containers.create.service
* @description Service for the container create modal
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.create.service', createService);
createService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.policy',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.workflow',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function createService(
$q, policy, zun, adminActions, resourceType, workflow,
actionResult, gettext, $qExtensions, modal, toast
) {
var message = {
success: gettext('Container %s was successfully created.')
};
var service = {
initAction: initAction,
perform: perform,
allowed: allowed
};
return service;
//////////////
function initAction() {
}
function perform() {
var title, submitText;
title = gettext('Create Container');
submitText = gettext('Create');
var config = workflow.init('create', title, submitText);
return modal.open(config).then(submit);
}
function allowed() {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("create") >= 0;
}
return $q.all([
policy.ifAllowed({ rules: [['container', 'add_container']] }),
$qExtensions.booleanAsPromise(adminAction)
]);
}
function submit(context) {
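      // Reshape the workflow model into the Zun API payload: fold the retry
      // count into restart_policy (e.g. "on-failure:5"), normalize mounts,
      // networks and security groups, then strip the form-only fields.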
delete context.model.exit_policy;
if (context.model.restart_policy === "on-failure") {
if (!context.model.restart_policy_max_retry) {
delete context.model.restart_policy_max_retry;
} else {
context.model.restart_policy =
context.model.restart_policy + ":" +
context.model.restart_policy_max_retry;
}
}
delete context.model.restart_policy_max_retry;
context.model.mounts = setMounts(context.model.mounts);
delete context.model.availableCinderVolumes;
context.model.nets = setNetworksAndPorts(context.model);
context.model.security_groups = setSecurityGroups(context.model);
delete context.model.networks;
delete context.model.ports;
delete context.model.availableNetworks;
delete context.model.allocatedNetworks;
delete context.model.availableSecurityGroups;
delete context.model.allocatedSecurityGroups;
delete context.model.availablePorts;
context.model.hints = setSchedulerHints(context.model);
delete context.model.availableHints;
delete context.model.hintsTree;
context.model = cleanNullProperties(context.model);
return zun.createContainer(context.model).then(success);
}
function success(response) {
response.data.id = response.data.uuid;
toast.add('success', interpolate(message.success, [response.data.name]));
var result = actionResult.getActionResult().created(resourceType, response.data.name);
return result.result;
}
function cleanNullProperties(model) {
      // Strip fields that carry no value: not only null, but empty strings,
      // the form-only "tabs" key, and a false auto_remove flag as well.
      for (var key in model) {
        if (model.hasOwnProperty(key) && (model[key] === null || model[key] === "" ||
            key === "tabs" || (key === "auto_remove" && model[key] === false))) {
delete model[key];
}
}
return model;
}
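    // Translate the workflow's volume rows into Zun mount specs: existing
    // Cinder volumes keep their source id, while new volumes are requested
    // by size with an empty source.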
function setMounts(mounts) {
var mnts = [];
mounts.forEach(function(mount) {
if (mount.type === "cinder-available") {
mnts.push({source: mount.source, destination: mount.destination});
} else if (mount.type === "cinder-new") {
mnts.push({source: "", size: mount.size.toString(), destination: mount.destination});
}
});
return mnts;
}
function setNetworksAndPorts(model) {
// pull out the ids from the security groups objects
var nets = [];
model.networks.forEach(function(network) {
nets.push({network: network.id});
});
model.ports.forEach(function(port) {
nets.push({port: port.id});
});
return nets;
}
function setSecurityGroups(model) {
// pull out the ids from the security groups objects
var securityGroups = [];
model.security_groups.forEach(function(securityGroup) {
securityGroups.push(securityGroup.name);
});
return securityGroups;
}
function setSchedulerHints(model) {
var schedulerHints = {};
if (model.hintsTree) {
var hints = model.hintsTree.getExisting();
if (!angular.equals({}, hints)) {
angular.forEach(hints, function(value, key) {
schedulerHints[key] = value + '';
});
}
}
return schedulerHints;
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/create.service.js | create.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.containers.restart.service
* @Description
* restart container.
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.restart.service', restartService);
restartService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.basePath',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function restartService(
$q, zun, adminActions, basePath, resourceType, validStates,
actionResult, gettext, $qExtensions, modal, toast
) {
// schema
var schema = {
type: "object",
properties: {
timeout: {
title: gettext("Restart Container"),
type: "number",
minimum: 1
}
}
};
// form
var form = [
{
type: 'section',
htmlClass: 'row',
items: [
{
type: 'section',
htmlClass: 'col-sm-12',
items: [
{
"key": "timeout",
"placeholder": gettext("Specify a shutdown timeout in seconds. (default: 10)")
}
]
}
]
}
];
// model
var model;
var message = {
success: gettext('Container %s was successfully restarted.')
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
return service;
//////////////
// include this function in your service
// if you plan to emit events to the parent controller
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("restart") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.restart.indexOf(container.status) >= 0
)
]);
}
function perform(selected) {
model = {
id: selected.id,
name: selected.name,
timeout: null
};
// modal config
var config = {
"title": gettext('Restart Container'),
"submitText": gettext('Restart'),
"schema": schema,
"form": form,
"model": model
};
return modal.open(config).then(submit);
function submit(context) {
var id = context.model.id;
var name = context.model.name;
delete context.model.id;
delete context.model.name;
return zun.restartContainer(id, context.model).then(function() {
toast.add('success', interpolate(message.success, [name]));
var result = actionResult.getActionResult().updated(resourceType, id);
return result.result;
});
}
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/restart.service.js | restart.service.js |
(function() {
'use strict';
/**
* @ngdoc factory
* @name horizon.dashboard.container.containers.execute.service
* @description
* Service for the command execution in the container
*/
angular
.module('horizon.dashboard.container.containers')
.factory(
'horizon.dashboard.container.containers.execute.service',
executeContainerService);
executeContainerService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.modal-wait-spinner.service',
'horizon.framework.widgets.toast.service'
];
function executeContainerService(
$q, zun, adminActions, resourceType, validStates, actionResult,
gettext, $qExtensions, modal, waitSpinner, toast
) {
// schema
var schema = {
type: "object",
properties: {
command: {
title: gettext("Command"),
type: "string"
},
output: {
title: gettext("Output"),
type: "string"
}
}
};
// form
var form = [
{
type: "section",
htmlClass: "col-sm-12",
items: [
{ // for result message
type: "help",
helpvalue: "",
condition: true
},
{
key: "command",
placeholder: gettext("The command to execute."),
required: true
},
{ // for exit code
type: "help",
helpvalue: "",
condition: true
},
{
key: "output",
type: "textarea",
readonly: true,
condition: true
}
]
}
];
// model
var model = {
id: '',
name: '',
command: ''
};
// modal config
var config = {
title: gettext("Execute Command"),
submitText: gettext("Execute"),
schema: schema,
form: angular.copy(form),
model: model
};
var message = {
success: gettext("Command was successfully executed at container %s."),
exit_code: gettext("Exit Code")
};
var service = {
initAction: initAction,
perform: perform,
allowed: allowed
};
return service;
//////////////
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("execute") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.execute.indexOf(container.status) >= 0
)
]);
}
function perform(selected) {
config.model.id = selected.id;
config.model.name = selected.name;
config.model.command = '';
config.model.output = '';
config.form = angular.copy(form);
modal.open(config).then(submit);
}
function submit(context) {
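      // Run the command, then reopen the modal pre-filled with the captured
      // output and exit code so the user can iterate on further commands.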
var id = context.model.id;
var name = context.model.name;
delete context.model.id;
delete context.model.name;
delete context.model.output;
waitSpinner.showModalSpinner(gettext('Executing'));
return zun.executeContainer(id, context.model).then(function(response) {
config.model = {
id: id,
name: name,
command: context.model.command,
output: response.data.output
};
config.form = angular.copy(form);
// for result message
config.form[0].items[0].helpvalue = "<div class='alert alert-success'>" +
interpolate(message.success, [name]) + "</div>";
config.form[0].items[0].condition = false;
// for exit_code
var resClass = 'success';
if (response.data.exit_code !== 0) {
resClass = 'danger';
}
config.form[0].items[2].condition = false;
config.form[0].items[2].helpvalue = "<div class='alert alert-" + resClass + "'>" +
message.exit_code + " : " + String(response.data.exit_code) + "</div>";
// for output
config.form[0].items[3].condition = false;
// display new dialog
waitSpinner.hideModalSpinner();
modal.open(config).then(submit);
var result = actionResult.getActionResult().updated(resourceType, id);
        return result.result;
}, function(response) {
        // close spinner and display toast
waitSpinner.hideModalSpinner();
toast.add('error', response.data.split("(")[0].trim() + ".");
var result = actionResult.getActionResult().failed(resourceType, id);
        return result.result;
});
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/execute.service.js | execute.service.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @name horizon.dashboard.container.containers.update.service
* @description Service for the container update modal
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.update.service', updateService);
updateService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.policy',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.dashboard.container.containers.workflow',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function updateService(
$q, policy, zun, adminActions, resourceType, validStates, workflow,
actionResult, gettext, $qExtensions, modal, toast
) {
var message = {
success: gettext('Container %s was successfully updated.'),
successAttach: gettext('Network %s was successfully attached to container %s.'),
successDetach: gettext('Network %s was successfully detached from container %s.')
};
var service = {
initAction: initAction,
perform: perform,
allowed: allowed
};
return service;
//////////////
function initAction() {
}
function perform(selected) {
var title, submitText;
title = gettext('Update Container');
submitText = gettext('Update');
var config = workflow.init('update', title, submitText, selected.id);
return modal.open(config).then(submit);
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("update") >= 0;
}
return $q.all([
policy.ifAllowed({ rules: [['container', 'edit_container']] }),
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.update.indexOf(container.status) >= 0
)
]);
}
function submit(context) {
var id = context.model.id;
var newNets = [];
context.model.networks.forEach(function (newNet) {
newNets.push(newNet.id);
});
changeNetworks(id, context.model.allocatedNetworks, newNets);
delete context.model.networks;
delete context.model.availableNetworks;
delete context.model.allocatedNetworks;
context.model = cleanUpdateProperties(context.model);
return $q.all([
zun.updateContainer(id, context.model).then(success)
]);
}
function success(response) {
response.data.id = response.data.uuid;
toast.add('success', interpolate(message.success, [response.data.name]));
var result = actionResult.getActionResult().updated(resourceType, response.data.name);
return result.result;
}
function cleanUpdateProperties(model) {
      // Strip fields that carry no value (null or empty string) and any
      // field other than the editable ones: name, cpu, memory and nets.
      for (var key in model) {
        if (model.hasOwnProperty(key) && (model[key] === null || model[key] === "" ||
            (key !== "name" && key !== "cpu" && key !== "memory" && key !== "nets"))) {
delete model[key];
}
}
return model;
}
function changeNetworks(container, oldNets, newNets) {
// attach and detach networks
var attachedNets = [];
var detachedNets = [];
newNets.forEach(function(newNet) {
if (!oldNets.includes(newNet)) {
attachedNets.push(newNet);
}
});
oldNets.forEach(function(oldNet) {
if (!newNets.includes(oldNet)) {
detachedNets.push(oldNet);
}
});
attachedNets.forEach(function (net) {
zun.attachNetwork(container, net).then(successAttach);
});
detachedNets.forEach(function (net) {
zun.detachNetwork(container, net).then(successDetach);
});
}
function successAttach(response) {
toast.add('success', interpolate(message.successAttach,
[response.data.network, response.data.container]));
var result = actionResult.getActionResult().updated(resourceType, response.data.container);
return result.result;
}
function successDetach(response) {
toast.add('success', interpolate(message.successDetach,
[response.data.network, response.data.container]));
var result = actionResult.getActionResult().updated(resourceType, response.data.container);
return result.result;
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/update.service.js | update.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.containers.delete-stop.service
* @Description
* Brings up the stop and delete container confirmation modal dialog.
* On submit, delete after stop selected resources.
* On cancel, do nothing.
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.delete-stop.service', deleteStopService);
deleteStopService.$inject = [
'$location',
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.app.core.openstack-service-api.policy',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.modal.deleteModalService',
'horizon.framework.widgets.toast.service',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.events',
'horizon.dashboard.container.containers.validStates'
];
function deleteStopService(
$location, $q, zun, policy, actionResult, gettext, $qExtensions, deleteModal,
toast, adminActions, resourceType, events, validStates
) {
var scope;
var context = {
labels: null,
deleteEntity: deleteEntity,
successEvent: events.DELETE_SUCCESS
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
var notAllowedMessage = gettext("You are not allowed to stop and delete container: %s");
return service;
//////////////
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("delete_stop") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.delete_stop.indexOf(container.status) >= 0
)
]);
}
// delete selected resource objects
function perform(selected, newScope) {
scope = newScope;
selected = angular.isArray(selected) ? selected : [selected];
context.labels = labelize(selected.length);
return $qExtensions.allSettled(selected.map(checkPermission)).then(afterCheck);
}
function labelize(count) {
return {
        title: ngettext('Confirm Stop and Delete Container',
          'Confirm Stop and Delete Containers', count),
        /* eslint-disable max-len */
        message: ngettext('You have selected "%s". Please confirm your selection. The container will be stopped before deleting. Deleted container is not recoverable.',
          'You have selected "%s". Please confirm your selection. The containers will be stopped before deleting. Deleted containers are not recoverable.', count),
        /* eslint-enable max-len */
        submit: ngettext('Stop and Delete Container',
          'Stop and Delete Containers', count),
        success: ngettext('Stopped and Deleted Container: %s.',
          'Stopped and Deleted Containers: %s.', count),
        error: ngettext('Unable to stop and delete Container: %s.',
          'Unable to stop and delete Containers: %s.', count)
};
}
// for batch delete
function checkPermission(selected) {
return {promise: allowed(selected), context: selected};
}
// for batch delete
function afterCheck(result) {
var outcome = $q.reject().catch(angular.noop); // Reject the promise by default
if (result.fail.length > 0) {
toast.add('error', getMessage(notAllowedMessage, result.fail));
outcome = $q.reject(result.fail).catch(angular.noop);
}
if (result.pass.length > 0) {
outcome = deleteModal.open(scope, result.pass.map(getEntity), context).then(createResult);
}
return outcome;
}
function createResult(deleteModalResult) {
// To make the result of this action generically useful, reformat the return
// from the deleteModal into a standard form
var result = actionResult.getActionResult();
deleteModalResult.pass.forEach(function markDeleted(item) {
result.updated(resourceType, getEntity(item).id);
});
deleteModalResult.fail.forEach(function markFailed(item) {
result.failed(resourceType, getEntity(item).id);
});
var indexPath = '/project/container/containers';
var currentPath = $location.path();
if (result.result.failed.length === 0 && result.result.updated.length > 0 &&
currentPath !== indexPath) {
$location.path(indexPath);
} else {
return result.result;
}
}
function getMessage(message, entities) {
return interpolate(message, [entities.map(getName).join(", ")]);
}
function getName(result) {
return getEntity(result).name;
}
// for batch delete
function getEntity(result) {
return result.context;
}
// call delete REST API
function deleteEntity(id) {
return zun.deleteContainerStop(id, true);
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/delete-stop.service.js | delete-stop.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.containers.delete-force.service
* @Description
   * Brings up the force-delete container confirmation modal dialog.
* On submit, delete selected resources.
* On cancel, do nothing.
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.delete-force.service', deleteForceService);
deleteForceService.$inject = [
'$location',
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.app.core.openstack-service-api.policy',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.modal.deleteModalService',
'horizon.framework.widgets.toast.service',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.events',
'horizon.dashboard.container.containers.validStates'
];
function deleteForceService(
$location, $q, zun, policy, actionResult, gettext, $qExtensions, deleteModal,
toast, adminActions, resourceType, events, validStates
) {
var scope;
var context = {
labels: null,
deleteEntity: deleteEntity,
successEvent: events.DELETE_SUCCESS
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
var notAllowedMessage = gettext("You are not allowed to delete container forcely: %s");
return service;
//////////////
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("delete_force") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.delete_force.indexOf(container.status) >= 0
)
]);
}
// delete selected resource objects
function perform(selected, newScope) {
scope = newScope;
selected = angular.isArray(selected) ? selected : [selected];
context.labels = labelize(selected.length);
return $qExtensions.allSettled(selected.map(checkPermission)).then(afterCheck);
}
function labelize(count) {
return {
        title: ngettext('Confirm Force Delete Container',
          'Confirm Force Delete Containers', count),
        /* eslint-disable max-len */
        message: ngettext('You have selected "%s". Please confirm your selection. Deleted container is not recoverable.',
          'You have selected "%s". Please confirm your selection. Deleted containers are not recoverable.', count),
        /* eslint-enable max-len */
        submit: ngettext('Force Delete Container',
          'Force Delete Containers', count),
        success: ngettext('Force Deleted Container: %s.',
          'Force Deleted Containers: %s.', count),
        error: ngettext('Unable to force delete Container: %s.',
          'Unable to force delete Containers: %s.', count)
};
}
// for batch delete
function checkPermission(selected) {
return {promise: allowed(selected), context: selected};
}
// for batch delete
function afterCheck(result) {
var outcome = $q.reject().catch(angular.noop); // Reject the promise by default
if (result.fail.length > 0) {
toast.add('error', getMessage(notAllowedMessage, result.fail));
outcome = $q.reject(result.fail).catch(angular.noop);
}
if (result.pass.length > 0) {
outcome = deleteModal.open(scope, result.pass.map(getEntity), context).then(createResult);
}
return outcome;
}
function createResult(deleteModalResult) {
// To make the result of this action generically useful, reformat the return
// from the deleteModal into a standard form
var result = actionResult.getActionResult();
deleteModalResult.pass.forEach(function markDeleted(item) {
result.updated(resourceType, getEntity(item).id);
});
deleteModalResult.fail.forEach(function markFailed(item) {
result.failed(resourceType, getEntity(item).id);
});
var indexPath = '/project/container/containers';
var currentPath = $location.path();
if (result.result.failed.length === 0 && result.result.updated.length > 0 &&
currentPath !== indexPath) {
$location.path(indexPath);
} else {
return result.result;
}
}
function getMessage(message, entities) {
return interpolate(message, [entities.map(getName).join(", ")]);
}
function getName(result) {
return getEntity(result).name;
}
// for batch delete
function getEntity(result) {
return result.context;
}
// call delete REST API
function deleteEntity(id) {
return zun.deleteContainerForce(id, true);
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/delete-force.service.js | delete-force.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.containers.stop.service
* @Description
* Stop container.
*/
angular
.module('horizon.dashboard.container.containers')
.factory('horizon.dashboard.container.containers.stop.service', stopService);
stopService.$inject = [
'$q',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.basePath',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function stopService(
$q, zun, adminActions, basePath, resourceType, validStates,
actionResult, gettext, $qExtensions, modal, toast
) {
// schema
var schema = {
type: "object",
properties: {
timeout: {
title: gettext("Stop Container"),
type: "number",
minimum: 1
}
}
};
// form
var form = [
{
type: 'section',
htmlClass: 'row',
items: [
{
type: 'section',
htmlClass: 'col-sm-12',
items: [
{
"key": "timeout",
"placeholder": gettext("Specify a shutdown timeout in seconds. (default: 10)")
}
]
}
]
}
];
// model
var model;
var message = {
      success: gettext('Container %s was successfully stopped.')
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
return service;
//////////////
// include this function in your service
// if you plan to emit events to the parent controller
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("stop") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.stop.indexOf(container.status) >= 0
)
]);
}
function perform(selected) {
model = {
id: selected.id,
name: selected.name,
timeout: null
};
// modal config
var config = {
"title": gettext('Stop Container'),
"submitText": gettext('Stop'),
"schema": schema,
"form": form,
"model": model
};
return modal.open(config).then(submit);
function submit(context) {
var id = context.model.id;
var name = context.model.name;
delete context.model.id;
delete context.model.name;
return zun.stopContainer(id, context.model).then(function() {
toast.add('success', interpolate(message.success, [name]));
var result = actionResult.getActionResult().updated(resourceType, id);
return result.result;
});
}
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/stop.service.js | stop.service.js |
(function () {
'use strict';
/**
* @ngdoc controller
* @name horizon.dashboard.container.containers.manage-security-groups
* @description
* Controller for the Manage Security Groups dialog.
*/
angular
.module('horizon.dashboard.container.containers')
.controller('horizon.dashboard.container.containers.manage-security-groups',
ManageSecurityGroupsController);
ManageSecurityGroupsController.$inject = [
'$scope',
'horizon.app.core.openstack-service-api.neutron',
'horizon.app.core.openstack-service-api.security-group',
'horizon.dashboard.container.containers.manage-security-groups.delete.service',
'horizon.dashboard.container.containers.manage-security-groups.delete.events'
];
function ManageSecurityGroupsController(
$scope, neutron, securityGroup, deleteSecurityGroupService, events
) {
var ctrl = this;
// form settings to add association of port and security group into table ///////////
// model template
ctrl.modelTemplate = {
id: "",
port: "",
port_name: "",
security_group: "",
security_group_name: ""
};
// initiate model
ctrl.model = angular.copy(ctrl.modelTemplate);
// for port selection
ctrl.availablePorts = [
{id: "", name: gettext("Select port.")}
];
// for security group selection
var message = {
portNotSelected: gettext("Select port first."),
portSelected: gettext("Select security group.")
};
ctrl.availableSecurityGroups = [
{id: "", name: message.portNotSelected, selected: false}
];
ctrl.refreshAvailableSecurityGroups = refreshAvailableSecurityGroups;
// add association into table
ctrl.addSecurityGroup = function(event) {
ctrl.model.id = ctrl.model.port + " " + ctrl.model.security_group;
ctrl.model.port_name = getPortName(ctrl.model.port);
ctrl.model.security_group_name = getSecurityGroupName(ctrl.model.security_group);
var model = angular.copy(ctrl.model);
$scope.model.port_security_groups.push(model);
// clean up form
ctrl.model = angular.copy(ctrl.modelTemplate);
ctrl.disabledSecurityGroup = true;
event.stopPropagation();
event.preventDefault();
refreshAvailableSecurityGroups();
};
// get port name from available ports.
function getPortName(port) {
var result = "";
ctrl.availablePorts.forEach(function (ap) {
if (port === ap.id) {
result = ap.name;
}
});
return result;
}
// get security group name from available security groups.
function getSecurityGroupName(sg) {
var result = "";
ctrl.availableSecurityGroups.forEach(function (asg) {
if (sg === asg.id) {
result = asg.name;
}
});
return result;
}
// refresh available security group selection, according to addition/deletion of associations.
ctrl.disabledSecurityGroup = true;
function refreshAvailableSecurityGroups() {
if (ctrl.model.port) {
        // if a port is selected, enable security group selection.
        ctrl.disabledSecurityGroup = false;
      } else {
        // otherwise disable security group selection.
ctrl.disabledSecurityGroup = true;
}
// set "selected" to true, if the security group already added into table.
ctrl.availableSecurityGroups.forEach(function (sg) {
sg.selected = false;
ctrl.items.forEach(function (item) {
if (sg.id === item.security_group && ctrl.model.port === item.port) {
// mark already selected
sg.selected = true;
}
});
});
}
// enable "Add Security Group" button, if both of port and security group are selected.
ctrl.validateSecurityGroup = function () {
return !(ctrl.model.port && ctrl.model.security_group);
};
// retrieve available ports and security groups ///////////////////////////
// get security groups first, then get networks
securityGroup.query().then(onGetSecurityGroups).then(getNetworks);
function onGetSecurityGroups(response) {
angular.forEach(response.data.items, function (item) {
ctrl.availableSecurityGroups.push({id: item.id, name: item.name, selected: false});
// if association of port and security group in $scope.model.port_security_groups,
// push it into table for update.
if ($scope.model.port_security_groups.includes(item.id)) {
ctrl.security_groups.push(item);
}
});
return response;
}
// get available neutron networks and ports
function getNetworks() {
return neutron.getNetworks().then(onGetNetworks).then(getPorts);
}
function onGetNetworks(response) {
return response;
}
function getPorts(networks) {
networks.data.items.forEach(function(network) {
return neutron.getPorts({network_id: network.id}).then(
function(ports) {
onGetPorts(ports, network);
}
);
});
return networks;
}
function onGetPorts(ports, network) {
ports.data.items.forEach(function(port) {
        // a port with no device_owner, or one owned by "compute:kuryr", can be
        // associated with a security group
if ((port.device_owner === "" || port.device_owner === "compute:kuryr") &&
port.admin_state === "UP") {
port.subnet_names = getPortSubnets(port, network.subnets);
port.network_name = network.name;
if ($scope.model.ports.includes(port.id)) {
var portName = port.network_name + " - " + port.subnet_names + " - " + port.name;
ctrl.availablePorts.push({
id: port.id,
name: portName});
port.security_groups.forEach(function (sgId) {
var sgName;
ctrl.availableSecurityGroups.forEach(function (sg) {
if (sgId === sg.id) {
sgName = sg.name;
}
});
$scope.model.port_security_groups.push({
id: port.id + " " + sgId,
port: port.id,
port_name: portName,
security_group: sgId,
security_group_name: sgName
});
});
}
}
});
}
    // helper function that returns a comma-separated string of "IP: subnet-name" pairs for the port
function getPortSubnets(port, subnets) {
var subnetNames = "";
port.fixed_ips.forEach(function (ip) {
subnets.forEach(function (subnet) {
if (ip.subnet_id === subnet.id) {
if (subnetNames) {
subnetNames += ", ";
}
subnetNames += ip.ip_address + ": " + subnet.name;
}
});
});
return subnetNames;
}
// settings for table of added security groups ////////////////////////////
ctrl.items = $scope.model.port_security_groups;
ctrl.config = {
selectAll: false,
expand: false,
trackId: 'id',
columns: [
{id: 'port_name', title: gettext('Port')},
{id: 'security_group_name', title: gettext('Security Group')}
]
};
ctrl.itemActions = [
{
id: 'deleteSecurityGroupAction',
service: deleteSecurityGroupService,
template: {
type: "delete",
text: gettext("Delete")
}
}
];
// register watcher for security group deletion from table
var deleteWatcher = $scope.$on(events.DELETE_SUCCESS, deleteSecurityGroup);
$scope.$on('$destroy', function destroy() {
deleteWatcher();
});
// on delete security group from table
function deleteSecurityGroup(event, deleted) {
// delete security group from table
ctrl.items.forEach(function (sg, index) {
if (sg.id === deleted.port + " " + deleted.security_group) {
          ctrl.items.splice(index, 1);
}
});
// enable deleted security group in selection
refreshAvailableSecurityGroups();
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/manage-security-groups/manage-security-groups.controller.js | manage-security-groups.controller.js |
(function() {
"use strict";
angular
.module("horizon.dashboard.container.containers")
.factory("horizon.dashboard.container.containers.manage-security-groups.service",
manageSecurityGroup);
manageSecurityGroup.$inject = [
"$q",
"horizon.app.core.openstack-service-api.neutron",
"horizon.app.core.openstack-service-api.security-group",
"horizon.app.core.openstack-service-api.zun",
"horizon.dashboard.container.basePath",
'horizon.dashboard.container.containers.adminActions',
'horizon.dashboard.container.containers.resourceType',
'horizon.dashboard.container.containers.validStates',
'horizon.framework.util.actions.action-result.service',
"horizon.framework.util.i18n.gettext",
"horizon.framework.util.q.extensions",
"horizon.framework.widgets.form.ModalFormService",
"horizon.framework.widgets.toast.service"
];
function manageSecurityGroup(
$q, neutron, securityGroup, zun, basePath, adminActions, resourceType, validStates,
actionResult, gettext, $qExtensions, modal, toast
) {
// title for dialog
var title = gettext("Manage Security Groups: container %s");
// schema
var schema = {
type: "object",
properties: {
signal: {
title: gettext("Manage Security Groups"),
type: "string"
}
}
};
// form
var form = [
{
type: 'section',
htmlClass: 'row',
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
type: "template",
/* eslint-disable max-len */
templateUrl: basePath + "containers/actions/manage-security-groups/manage-security-groups.html"
}
]
}
]
}
];
// model
var model = {};
var message = {
success: gettext("Changes security groups %(sgs)s for port %(port)s was successfully submitted.")
};
var service = {
initAction: initAction,
perform: perform,
allowed: allowed
};
return service;
//////////////
function initAction() {
}
function allowed(container) {
var adminAction = true;
if (zun.isAdmin()) {
adminAction = adminActions.indexOf("manage_security_groups") >= 0;
}
return $q.all([
$qExtensions.booleanAsPromise(adminAction),
$qExtensions.booleanAsPromise(
validStates.manage_security_groups.indexOf(container.status) >= 0
)
]);
}
function perform(selected) {
model.id = selected.id;
model.name = selected.name;
model.ports = [];
      Object.keys(selected.addresses).forEach(function(key) {
selected.addresses[key].forEach(function (addr) {
model.ports.push(addr.port);
});
});
model.port_security_groups = [];
// modal config
var config = {
"title": interpolate(title, [model.name]),
"submitText": gettext("Submit"),
"schema": schema,
"form": form,
"model": model
};
return modal.open(config).then(submit);
}
function submit(context) {
var id = context.model.id;
var portSgs = context.model.port_security_groups;
var aggregatedPortSgs = {};
// initialize port list
model.ports.forEach(function (port) {
aggregatedPortSgs[port] = [];
});
// add security groups for each port
portSgs.forEach(function (portSg) {
aggregatedPortSgs[portSg.port].push(portSg.security_group);
});
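      // submit the aggregated list for each port; every update call sends the
      // port's complete security group list (an empty list clears all groups).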
      Object.keys(aggregatedPortSgs).forEach(function (port) {
return zun.updatePortSecurityGroup(id, port, aggregatedPortSgs[port]).then(function() {
var sgs = gettext("(empty)");
if (aggregatedPortSgs[port].length) {
sgs = aggregatedPortSgs[port];
}
toast.add(
'success',
interpolate(message.success, {port: port, sgs: sgs}, true));
var result = actionResult.getActionResult().updated(resourceType, id);
return result.result;
});
});
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/manage-security-groups/manage-security-groups.service.js | manage-security-groups.service.js |
(function() {
"use strict";
angular
.module("horizon.dashboard.container.containers")
.factory("horizon.dashboard.container.containers.workflow", workflow);
workflow.$inject = [
"horizon.app.core.openstack-service-api.cinder",
"horizon.app.core.openstack-service-api.neutron",
"horizon.app.core.openstack-service-api.security-group",
'horizon.app.core.openstack-service-api.zun',
"horizon.dashboard.container.basePath",
"horizon.framework.util.i18n.gettext",
'horizon.framework.util.q.extensions',
"horizon.framework.widgets.metadata.tree.service"
];
function workflow(
cinder, neutron, securityGroup, zun, basePath, gettext,
$qExtensions, treeService
) {
var workflow = {
init: init
};
function init(action, title, submitText, id) {
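      // Builds the modal config (schema/form/model) for the container
      // workflows: 'action' is either "create" or "update", and 'id'
      // identifies the container to load when updating.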
var push = Array.prototype.push;
var schema, form, model;
var imageDrivers = [
{value: "docker", name: gettext("Docker Hub")},
{value: "glance", name: gettext("Glance")}
];
var imagePullPolicies = [
{value: "", name: gettext("Select policy.")},
{value: "ifnotpresent", name: gettext("If not present")},
{value: "always", name: gettext("Always")},
{value: "never", name: gettext("Never")}
];
var exitPolicies = [
{value: "", name: gettext("Select policy.")},
{value: "no", name: gettext("No")},
{value: "on-failure", name: gettext("Restart on failure")},
{value: "always", name: gettext("Always restart")},
{value: "unless-stopped", name: gettext("Restart if stopped")},
{value: "remove", name: gettext("Remove container")}
];
var availabilityZones = [
{value: "", name: gettext("Select availability zone.")}
];
// schema
schema = {
type: "object",
properties: {
// info
name: {
title: gettext("Name"),
type: "string"
},
image: {
title: gettext("Image"),
type: "string"
},
image_driver: {
title: gettext("Image Driver"),
type: "string"
},
image_pull_policy: {
title: gettext("Image Pull Policy"),
type: "string"
},
command: {
title: gettext("Command"),
type: "string"
},
run: {
title: gettext("Start container after creation"),
type: "boolean"
},
// spec
hostname: {
title: gettext("Hostname"),
type: "string"
},
runtime: {
title: gettext("Runtime"),
type: "string"
},
cpu: {
title: gettext("CPU"),
type: "number",
minimum: 0
},
memory: {
title: gettext("Memory"),
type: "number",
minimum: 4
},
disk: {
title: gettext("Disk"),
type: "number",
minimum: 0
},
availability_zone: {
title: gettext("Availability Zone"),
type: "string"
},
exit_policy: {
title: gettext("Exit Policy"),
type: "string"
},
restart_policy_max_retry: {
title: gettext("Max Retry"),
type: "number",
minimum: 0
},
auto_heal: {
title: gettext("Enable auto heal"),
type: "boolean"
},
// misc
workdir: {
title: gettext("Working Directory"),
type: "string"
},
environment: {
title: gettext("Environment Variables"),
type: "string"
},
interactive: {
title: gettext("Enable interactive mode"),
type: "boolean"
},
// labels
labels: {
title: gettext("Labels"),
type: "string"
}
}
};
// form
form = [
{
type: "tabs",
tabs: [
{
title: gettext("Info"),
help: basePath + "containers/actions/workflow/info.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
key: "name",
placeholder: gettext("Name of the container to create.")
},
{
key: "image",
placeholder: gettext("Name or ID of the container image."),
readonly: action === "update",
required: true
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "image_driver",
readonly: action === "update",
type: "select",
titleMap: imageDrivers
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "image_pull_policy",
type: "select",
readonly: action === "update",
titleMap: imagePullPolicies
}
]
},
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
key: "command",
placeholder: gettext("A command that will be sent to the container."),
readonly: action === "update"
},
{
key: "run",
readonly: action === "update"
}
]
}
]
},
{
title: gettext("Spec"),
help: basePath + "containers/actions/workflow/spec.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "hostname",
placeholder: gettext("The host name of this container."),
readonly: action === "update"
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "runtime",
placeholder: gettext("The runtime to create container with."),
readonly: action === "update"
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "cpu",
step: 0.1,
placeholder: gettext("The number of virtual cpu for this container.")
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "memory",
step: 128,
placeholder: gettext("The container memory size in MiB.")
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "disk",
step: 1,
placeholder: gettext("The disk size in GiB for per container.")
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "availability_zone",
readonly: action === "update",
type: "select",
titleMap: availabilityZones
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "exit_policy",
type: "select",
readonly: action === "update",
titleMap: exitPolicies,
onChange: function() {
var notOnFailure = model.exit_policy !== "on-failure";
if (notOnFailure) {
model.restart_policy_max_retry = "";
}
form[0].tabs[1].items[7].items[0].readonly = notOnFailure;
                    // set auto_remove depending on whether exit_policy is "remove".
                    // if exit_policy is "remove", clear restart_policy;
                    // otherwise, set restart_policy to the same value as exit_policy.
model.auto_remove = (model.exit_policy === "remove");
if (model.auto_remove) {
model.restart_policy = "";
} else {
model.restart_policy = model.exit_policy;
}
}
}
]
},
{
type: "section",
htmlClass: "col-xs-6",
items: [
{
key: "restart_policy_max_retry",
placeholder: gettext("Retry times for 'Restart on failure' policy."),
readonly: true
}
]
},
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
key: "auto_heal",
readonly: action === "update"
}
]
}
]
},
{
"title": gettext("Volumes"),
help: basePath + "containers/actions/workflow/mounts/mounts.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
type: "template",
templateUrl: basePath + "containers/actions/workflow/mounts/mounts.html"
}
]
}
],
condition: action === "update"
},
{
"title": gettext("Networks"),
help: basePath + "containers/actions/workflow/networks/networks.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
type: "template",
templateUrl: basePath + "containers/actions/workflow/networks/networks.html"
}
]
}
]
},
{
"title": gettext("Ports"),
help: basePath + "containers/actions/workflow/ports/ports.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
type: "template",
templateUrl: basePath + "containers/actions/workflow/ports/ports.html"
}
]
}
],
condition: action === "update"
},
{
"title": gettext("Security Groups"),
/* eslint-disable max-len */
help: basePath + "containers/actions/workflow/security-groups/security-groups.help.html",
            /* eslint-enable max-len */
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
type: "template",
/* eslint-disable max-len */
templateUrl: basePath + "containers/actions/workflow/security-groups/security-groups.html"
                    /* eslint-enable max-len */
}
]
}
],
condition: action === "update"
},
{
"title": gettext("Miscellaneous"),
help: basePath + "containers/actions/workflow/misc.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
key: "workdir",
placeholder: gettext("The working directory for commands to run in."),
readonly: action === "update"
},
{
key: "environment",
placeholder: gettext("KEY1=VALUE1,KEY2=VALUE2..."),
readonly: action === "update"
},
{
key: "interactive",
readonly: action === "update"
}
]
}
]
},
{
title: gettext("Labels"),
help: basePath + "containers/actions/workflow/labels.help.html",
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
key: "labels",
placeholder: gettext("KEY1=VALUE1,KEY2=VALUE2..."),
readonly: action === "update"
}
]
}
]
},
{
"title": gettext("Scheduler Hints"),
/* eslint-disable max-len */
help: basePath + "containers/actions/workflow/scheduler-hints/scheduler-hints.help.html",
            /* eslint-enable max-len */
type: "section",
htmlClass: "row",
items: [
{
type: "section",
htmlClass: "col-xs-12",
items: [
{
type: "template",
/* eslint-disable max-len */
templateUrl: basePath + "containers/actions/workflow/scheduler-hints/scheduler-hints.html"
                    /* eslint-enable max-len */
}
]
}
],
condition: action === "update"
}
]
}
];
// model
model = {
// info
name: "",
image: "",
image_driver: "docker",
image_pull_policy: "",
command: "",
run: true,
// spec
hostname: "",
runtime: "",
cpu: "",
memory: "",
disks: "",
availability_zone: "",
exit_policy: "",
restart_policy: "",
restart_policy_max_retry: "",
auto_remove: false,
auto_heal: false,
// mounts
mounts: [],
// networks
networks: [],
// ports
ports: [],
// security groups
security_groups: [],
// misc
workdir: "",
environment: "",
interactive: true,
// labels
labels: "",
// hints
availableHints: [],
hintsTree: null,
hints: {}
};
// initialize tree object for scheduler hints.
model.hintsTree = new treeService.Tree(model.availableHints, {});
// available cinder volumes
model.availableCinderVolumes = [
{id: "", name: gettext("Select available Cinder volume")}
];
// networks
model.availableNetworks = [];
model.allocatedNetworks = [];
// available ports
model.availablePorts = [];
// security groups
model.availableSecurityGroups = [];
model.allocatedSecurityGroups = [];
// get resources
getContainer(action, id).then(function () {
getVolumes();
getNetworks();
securityGroup.query().then(onGetSecurityGroups);
zun.getZunAvailabilityZones().then(onGetZunServices);
});
// get container when action equals "update"
function getContainer (action, id) {
if (action === 'create') {
return $qExtensions.booleanAsPromise(true);
} else {
return zun.getContainer(id).then(onGetContainer);
}
}
// get container for update
function onGetContainer(response) {
model.id = id;
model.name = response.data.name
? response.data.name : "";
model.image = response.data.image
? response.data.image : "";
model.image_driver = response.data.image_driver
? response.data.image_driver : "docker";
model.image_pull_policy = response.data.image_pull_policy
? response.data.image_pull_policy : "";
model.command = response.data.command
? response.data.command : "";
model.hostname = response.data.hostname
? response.data.hostname : "";
model.runtime = response.data.runtime
? response.data.runtime : "";
model.cpu = response.data.cpu
? response.data.cpu : "";
model.memory = response.data.memory
? parseInt(response.data.memory, 10) : "";
      model.restart_policy = response.data.restart_policy && response.data.restart_policy.Name
        ? response.data.restart_policy.Name : "";
      model.restart_policy_max_retry =
        response.data.restart_policy && response.data.restart_policy.MaximumRetryCount
          ? parseInt(response.data.restart_policy.MaximumRetryCount, 10) : null;
      // read auto_remove from the response before deriving exit_policy,
      // so the derived value reflects the actual container state.
      model.auto_remove = response.data.auto_remove
        ? response.data.auto_remove : false;
      if (model.auto_remove) {
        model.exit_policy = "remove";
      } else {
        model.exit_policy = model.restart_policy;
      }
model.allocatedNetworks = getAllocatedNetworks(response.data.addresses);
model.allocatedSecurityGroups = response.data.security_groups;
model.workdir = response.data.workdir
? response.data.workdir : "";
model.environment = response.data.environment
? hashToString(response.data.environment) : "";
model.interactive = response.data.interactive
? response.data.interactive : false;
model.labels = response.data.labels
? hashToString(response.data.labels) : "";
return response;
}
function getAllocatedNetworks(addresses) {
var allocated = [];
Object.keys(addresses).forEach(function (id) {
allocated.push(id);
});
return allocated;
}
function hashToString(hash) {
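      // serialize a {key: value} object into the "k1=v1,k2=v2" string form
      // used by the environment and labels form fields.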
var str = "";
for (var key in hash) {
if (hash.hasOwnProperty(key)) {
if (str.length > 0) {
str += ",";
}
str += key + "=" + hash[key];
}
}
return str;
}
// get available cinder volumes
function getVolumes() {
return cinder.getVolumes().then(onGetVolumes);
}
function onGetVolumes(response) {
push.apply(model.availableCinderVolumes,
response.data.items.filter(function(volume) {
return volume.status === "available";
}));
model.availableCinderVolumes.forEach(function(volume) {
volume.selected = false;
return volume;
});
return response;
}
// get available neutron networks and ports
function getNetworks() {
return neutron.getNetworks().then(onGetNetworks).then(getPorts);
}
function onGetNetworks(response) {
push.apply(model.availableNetworks,
response.data.items.filter(function(network) {
return network.subnets.length > 0;
}));
      // if a network is in model.allocatedNetworks, push it to model.networks for update
model.availableNetworks.forEach(function (available) {
model.allocatedNetworks.forEach(function (allocated) {
if (available.id === allocated) {
model.networks.push(available);
}
});
});
return response;
}
function getPorts(networks) {
networks.data.items.forEach(function(network) {
return neutron.getPorts({network_id: network.id}).then(
function(ports) {
onGetPorts(ports, network);
}
);
});
return networks;
}
function onGetPorts(ports, network) {
ports.data.items.forEach(function(port) {
// no device_owner means that the port can be attached
if (port.device_owner === "" && port.admin_state === "UP") {
port.subnet_names = getPortSubnets(port, network.subnets);
port.network_name = network.name;
model.availablePorts.push(port);
}
});
}
// helper function to return an object of IP:NAME pairs for subnet mapping
function getPortSubnets(port, subnets) {
var subnetNames = {};
port.fixed_ips.forEach(function (ip) {
subnets.forEach(function (subnet) {
if (ip.subnet_id === subnet.id) {
subnetNames[ip.ip_address] = subnet.name;
}
});
});
return subnetNames;
}
// get security groups
function onGetSecurityGroups(response) {
angular.forEach(response.data.items, function (item) {
// 'default' is a special security group in neutron. It can not be
// deleted and is guaranteed to exist. It by default contains all
// of the rules needed for an instance to reach out to the network
// so the instance can provision itself.
if (item.name === 'default' && action === "create") {
model.security_groups.push(item);
}
        // if the security group is in model.allocatedSecurityGroups,
        // push it to model.security_groups for update
else if (model.allocatedSecurityGroups.includes(item.id)) {
model.security_groups.push(item);
}
});
push.apply(model.availableSecurityGroups, response.data.items);
return response;
}
// get availability zones from zun services
function onGetZunServices(response) {
var azs = [];
response.data.items.forEach(function (service) {
azs.push({value: service.availability_zone, name: service.availability_zone});
});
push.apply(availabilityZones, azs);
}
var config = {
title: title,
submitText: submitText,
schema: schema,
form: form,
model: model
};
return config;
}
return workflow;
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/workflow/workflow.service.js | workflow.service.js |
(function () {
'use strict';
/**
* @ngdoc controller
* @name horizon.dashboard.container.containers.workflow.networks
* @description
* Controller for the Create Container - Networks Step.
*/
angular
.module('horizon.dashboard.container.containers')
.controller('horizon.dashboard.container.containers.workflow.networks',
NetworksController);
NetworksController.$inject = [
'$scope',
'horizon.framework.widgets.action-list.button-tooltip.row-warning.service'
];
function NetworksController($scope, tooltipService) {
var ctrl = this;
ctrl.networkStatuses = {
'ACTIVE': gettext('Active'),
'DOWN': gettext('Down')
};
ctrl.networkAdminStates = {
'UP': gettext('Up'),
'DOWN': gettext('Down')
};
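    // data bindings for the transfer-table widget: 'available' holds the
    // candidate networks and 'allocated' the user's current selection.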
ctrl.tableDataMulti = {
available: $scope.model.availableNetworks,
allocated: $scope.model.networks,
displayedAvailable: [],
displayedAllocated: []
};
ctrl.tableLimits = {
maxAllocation: -1
};
ctrl.tableHelpText = {
allocHelpText: gettext('Select networks from those listed below.')
};
ctrl.tooltipModel = tooltipService;
/**
* Filtering - client-side MagicSearch
*/
// All facets for network step
ctrl.networkFacets = [
{
label: gettext('Name'),
name: 'name',
singleton: true
},
{
label: gettext('Shared'),
name: 'shared',
singleton: true,
options: [
{ label: gettext('No'), key: false },
{ label: gettext('Yes'), key: true }
]
},
{
label: gettext('Admin State'),
name: 'admin_state',
singleton: true,
options: [
{ label: gettext('Up'), key: "UP" },
{ label: gettext('Down'), key: "DOWN" }
]
},
{
label: gettext('Status'),
name: 'status',
singleton: true,
options: [
{ label: gettext('Active'), key: "ACTIVE"},
{ label: gettext('Down'), key: "DOWN" }
]
}
];
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/workflow/networks/networks.controller.js | networks.controller.js |
(function () {
'use strict';
/**
* @ngdoc controller
* @name horizon.dashboard.container.containers.workflow.mounts
* @description
* Controller for the Create Container - Mounts Step.
*/
angular
.module('horizon.dashboard.container.containers')
.controller('horizon.dashboard.container.containers.workflow.mounts',
MountsController);
MountsController.$inject = [
'$scope',
'horizon.dashboard.container.containers.workflow.mounts.delete-volume.service',
'horizon.dashboard.container.containers.workflow.mounts.delete-volume.events'
];
function MountsController($scope, deleteVolumeService, events) {
var ctrl = this;
ctrl.id = 0;
ctrl.initModel = {
type: "cinder-available",
source: "",
size: null,
destination: ""
};
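    // 'type' decides which field applies: an existing volume uses 'source',
    // a new volume uses 'size'; addVolume() clears the unused field.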
// form for adding volume
ctrl.model = angular.copy(ctrl.initModel);
ctrl.types = [
{value: "cinder-available", label: gettext("Existing Cinder Volume")},
{value: "cinder-new", label: gettext("New Cinder Volume")}
];
ctrl.availableCinderVolumes = $scope.model.availableCinderVolumes;
// add volume to table
ctrl.addVolume = function(event) {
var model = angular.copy(ctrl.model);
ctrl.id++;
model.id = ctrl.id;
if (model.type === "cinder-available") {
model.size = null;
} else if (model.type === "cinder-new") {
model.source = "";
}
$scope.model.mounts.push(model);
// maintain available cinder volume array
$scope.model.availableCinderVolumes.forEach(function (volume) {
if (model.type === "cinder-available" && volume.id === model.source) {
// mark selected volume
volume.selected = true;
// add selected volume name on table
$scope.model.mounts.forEach(function (allocated) {
if (allocated.source === volume.id) {
allocated.name = volume.name;
}
});
}
});
// clean up form
ctrl.model = angular.copy(ctrl.initModel);
event.stopPropagation();
event.preventDefault();
};
// register watcher for volume deletion from table
var deleteWatcher = $scope.$on(events.DELETE_SUCCESS, deleteVolume);
$scope.$on('$destroy', function destroy() {
deleteWatcher();
});
// on delete volume from table
function deleteVolume(event, deleted) {
// delete volume from table
ctrl.items.forEach(function (volume, index) {
if (volume.id === deleted.id) {
          ctrl.items.splice(index, 1);
}
});
// enable deleted volume in source selection
$scope.model.availableCinderVolumes.forEach(function (volume) {
if (volume.id === deleted.source) {
// mark not selected volume
volume.selected = false;
}
});
}
// settings for table of added volumes
ctrl.items = $scope.model.mounts;
ctrl.config = {
selectAll: false,
expand: false,
trackId: 'id',
columns: [
{id: 'type', title: gettext('Type')},
{id: 'source', title: gettext('Source'), filters: ['noValue']},
{id: 'name', title: gettext('Name'), filters: ['noValue']},
{id: 'size', title: gettext('Size (GB)'), filters: ['noValue']},
{id: 'destination', title: gettext('Destination')}
]
};
ctrl.itemActions = [
{
id: 'deleteVolumeAction',
service: deleteVolumeService,
template: {
type: "delete",
text: gettext("Delete")
}
}
];
ctrl.validateVolume = function () {
return !((ctrl.model.type === "cinder-available" && ctrl.model.source) ||
(ctrl.model.type === "cinder-new" && ctrl.model.size)) ||
!ctrl.model.destination;
};
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/containers/actions/workflow/mounts/mounts.controller.js | mounts.controller.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @name horizon.dashboard.container.images
* @ngModule
* @description
* Provides all the services and widgets require to display the images
* panel
*/
angular
.module('horizon.dashboard.container.images', [
'ngRoute',
'horizon.dashboard.container.images.actions'
])
.constant('horizon.dashboard.container.images.events', events())
.constant('horizon.dashboard.container.images.resourceType', 'OS::Zun::Image')
.run(run)
.config(config);
/**
* @ngdoc constant
* @name horizon.dashboard.container.images.events
* @description A list of events used by Images
* @returns {Object} Event constants
*/
function events() {
return {
CREATE_SUCCESS: 'horizon.dashboard.container.images.CREATE_SUCCESS',
DELETE_SUCCESS: 'horizon.dashboard.container.images.DELETE_SUCCESS'
};
}
run.$inject = [
'horizon.framework.conf.resource-type-registry.service',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.images.basePath',
'horizon.dashboard.container.images.resourceType',
'horizon.dashboard.container.images.service'
];
function run(registry, zun, basePath, resourceType, imageService) {
registry.getResourceType(resourceType)
.setNames(gettext('Image'), gettext('Images'))
// for detail summary view on table row.
.setSummaryTemplateUrl(basePath + 'drawer.html')
.setDefaultIndexUrl('/admin/container/images/')
// for table row items and detail summary view.
.setProperties(imageProperties())
.setListFunction(imageService.getImagesPromise)
.tableColumns
.append({
id: 'id',
priority: 2
})
.append({
id: 'repo',
priority: 1,
sortDefault: true
})
.append({
id: 'tag',
priority: 1
})
.append({
id: 'size',
priority: 1
})
.append({
id: 'host',
priority: 1
})
.append({
id: 'image_id',
priority: 3
})
.append({
id: 'project_id',
priority: 2
});
// for magic-search
registry.getResourceType(resourceType).filterFacets
.append({
'label': gettext('Image'),
'name': 'repo',
'singleton': true
})
.append({
'label': gettext('Tag'),
'name': 'tag',
'singleton': true
})
.append({
'label': gettext('Host'),
'name': 'host',
'singleton': true
})
.append({
'label': gettext('ID'),
'name': 'id',
'singleton': true
})
.append({
'label': gettext('Image ID'),
'name': 'image_id',
'singleton': true
})
.append({
'label': gettext('Project ID'),
'name': 'project_id',
'singleton': true
});
}
function imageProperties() {
return {
'id': {label: gettext('ID'), filters: ['noValue'] },
'repo': { label: gettext('Image'), filters: ['noValue'] },
'tag': { label: gettext('Tag'), filters: ['noValue'] },
'host': { label: gettext('Host'), filters: ['noValue'] },
'size': { label: gettext('Size'), filters: ['noValue', 'bytes'] },
'image_id': { label: gettext('Image ID'), filters: ['noValue'] },
'project_id': { label: gettext('Project ID'), filters: ['noValue'] }
};
}
config.$inject = [
'$provide',
'$windowProvider',
'$routeProvider'
];
/**
* @name config
* @param {Object} $provide
* @param {Object} $windowProvider
* @param {Object} $routeProvider
* @description Routes used by this module.
* @returns {undefined} Returns nothing
*/
function config($provide, $windowProvider, $routeProvider) {
var path = $windowProvider.$get().STATIC_URL + 'dashboard/container/images/';
$provide.constant('horizon.dashboard.container.images.basePath', path);
$routeProvider.when('/admin/container/images', {
templateUrl: path + 'panel.html'
});
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/images/images.module.js | images.module.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.images.actions.delete.service
* @Description
* Brings up the delete images confirmation modal dialog.
* On submit, delete selected resources.
* On cancel, do nothing.
*/
angular
.module('horizon.dashboard.container.images')
.factory('horizon.dashboard.container.images.actions.delete.service', deleteService);
deleteService.$inject = [
'$location',
'$q',
'$rootScope',
'horizon.app.core.openstack-service-api.zun',
'horizon.app.core.openstack-service-api.policy',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.modal.deleteModalService',
'horizon.framework.widgets.table.events',
'horizon.framework.widgets.toast.service',
'horizon.dashboard.container.images.resourceType',
'horizon.dashboard.container.images.events'
];
function deleteService(
$location, $q, $rootScope, zun, policy, actionResult, gettext, $qExtensions, deleteModal,
tableEvents, toast, resourceType, events
) {
var scope;
var context = {
labels: null,
deleteEntity: deleteEntity,
successEvent: events.DELETE_SUCCESS
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
var notAllowedMessage = gettext("You are not allowed to delete images: %s");
return service;
//////////////
function initAction() {
}
function allowed() {
return $qExtensions.booleanAsPromise(true);
}
// delete selected resource objects
function perform(selected, newScope) {
scope = newScope;
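      // normalize input: row actions pass a single image, batch actions an array.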
selected = angular.isArray(selected) ? selected : [selected];
context.labels = labelize(selected.length);
return $qExtensions.allSettled(selected.map(checkPermission)).then(afterCheck);
}
function labelize(count) {
return {
title: ngettext('Confirm Delete Image',
'Confirm Delete Images', count),
/* eslint-disable max-len */
message: ngettext('You have selected "%s". Please confirm your selection. Deleted image is not recoverable.',
'You have selected "%s". Please confirm your selection. Deleted images are not recoverable.', count),
/* eslint-enable max-len */
submit: ngettext('Delete Image',
'Delete Images', count),
success: ngettext('Deleted Image: %s.',
'Deleted Images: %s.', count),
error: ngettext('Unable to delete Image: %s.',
'Unable to delete Images: %s.', count)
};
}
// for batch delete
function checkPermission(selected) {
return {promise: allowed(selected), context: selected};
}
// for batch delete
function afterCheck(result) {
var outcome = $q.reject().catch(angular.noop); // Reject the promise by default
if (result.fail.length > 0) {
toast.add('error', getMessage(notAllowedMessage, result.fail));
outcome = $q.reject(result.fail).catch(angular.noop);
}
if (result.pass.length > 0) {
outcome = deleteModal.open(scope, result.pass.map(getEntity), context).then(createResult);
}
return outcome;
}
function createResult(deleteModalResult) {
// To make the result of this action generically useful, reformat the return
// from the deleteModal into a standard form
var result = actionResult.getActionResult();
deleteModalResult.pass.forEach(function markDeleted(item) {
result.updated(resourceType, getEntity(item).id);
});
deleteModalResult.fail.forEach(function markFailed(item) {
result.failed(resourceType, getEntity(item).id);
});
var indexPath = '/admin/container/images';
var currentPath = $location.path();
if (result.result.failed.length === 0 && result.result.updated.length > 0 &&
currentPath !== indexPath) {
$location.path(indexPath);
} else {
$rootScope.$broadcast(tableEvents.CLEAR_SELECTIONS);
return result.result;
}
}
function getMessage(message, entities) {
return interpolate(message, [entities.map(getName).join(", ")]);
}
function getName(result) {
return getEntity(result).name;
}
// for batch delete
function getEntity(result) {
return result.context;
}
// call delete REST API
function deleteEntity(id) {
return zun.deleteImage(id);
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/images/actions/delete.service.js | delete.service.js |
(function() {
'use strict';
/**
* @ngdoc factory
* @name horizon.dashboard.container.images.workflow
* @description
* Workflow for pulling image
*/
angular
.module('horizon.dashboard.container.images.actions')
.factory('horizon.dashboard.container.images.actions.workflow', workflow);
workflow.$inject = [
'horizon.app.core.openstack-service-api.zun',
'horizon.framework.util.i18n.gettext'
];
function workflow(zun, gettext) {
var workflow = {
init: init
};
function init(actionType, title, submitText) {
var push = Array.prototype.push;
var schema, form, model;
var hosts = [
{value: "", name: gettext("Select host that stores the image.")}
];
// schema
schema = {
type: 'object',
properties: {
repo: {
title: gettext('Image'),
type: 'string'
},
host: {
title: gettext('Host'),
type: 'string'
}
}
};
// form
form = [
{
type: 'section',
htmlClass: 'row',
items: [
{
type: 'section',
htmlClass: 'col-sm-12',
items: [
{
key: 'repo',
placeholder: gettext('Name of the image.'),
required: true
},
{
key: 'host',
type: "select",
titleMap: hosts,
required: true
}
]
}
]
}
]; // form
model = {
repo: '',
host: ''
};
// get hosts for zun
zun.getHosts().then(onGetZunHosts);
function onGetZunHosts(response) {
var hs = [];
response.data.items.forEach(function (host) {
hs.push({value: host.id, name: host.hostname});
});
push.apply(hosts, hs);
}
var config = {
title: title,
submitText: submitText,
schema: schema,
form: form,
model: model
};
return config;
}
return workflow;
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/images/actions/workflow.service.js | workflow.service.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @name horizon.dashboard.container.hosts
* @ngModule
* @description
* Provides all the services and widgets require to display the hosts
* panel
*/
angular
.module('horizon.dashboard.container.hosts', [
'ngRoute',
'horizon.dashboard.container.hosts.details'
])
.constant('horizon.dashboard.container.hosts.resourceType', 'OS::Zun::Host')
.run(run)
.config(config);
run.$inject = [
'horizon.framework.conf.resource-type-registry.service',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.hosts.basePath',
'horizon.dashboard.container.hosts.resourceType',
'horizon.dashboard.container.hosts.service'
];
function run(registry, zun, basePath, resourceType, hostService) {
registry.getResourceType(resourceType)
.setNames(gettext('Host'), gettext('Hosts'))
// for detail summary view on table row.
.setSummaryTemplateUrl(basePath + 'drawer.html')
// for table row items and detail summary view.
.setDefaultIndexUrl('/admin/container/hosts/')
.setProperties(hostProperties())
.setListFunction(hostService.getHostsPromise)
.tableColumns
.append({
id: 'id',
priority: 3
})
.append({
id: 'hostname',
priority: 1,
sortDefault: true,
urlFunction: hostService.getDetailsPath
})
.append({
id: 'mem_total',
priority: 2
})
.append({
id: 'cpus',
priority: 2
})
.append({
id: 'disk_total',
priority: 2
});
// for magic-search
registry.getResourceType(resourceType).filterFacets
.append({
'label': gettext('Hostname'),
        'name': 'hostname',
'singleton': true
})
.append({
'label': gettext('ID'),
'name': 'id',
'singleton': true
});
}
function hostProperties() {
return {
'id': {label: gettext('ID'), filters: ['noValue'] },
'hostname': { label: gettext('Hostname'), filters: ['noValue'] },
'mem_total': { label: gettext('Memory Total'), filters: ['noValue', 'mb'] },
'mem_used': { label: gettext('Memory Used'), filters: ['noValue', 'mb'] },
'cpus': { label: gettext('CPU Total'), filters: ['noValue'] },
'cpu_used': { label: gettext('CPU Used'), filters: ['noValue'] },
'disk_total': { label: gettext('Disk Total'), filters: ['noValue', 'gb'] },
'disk_used': { label: gettext('Disk Used'), filters: ['noValue', 'gb'] },
'disk_quota_supported': { label: gettext('Disk Quota Supported'),
filters: ['noValue', 'yesno'] },
'total_containers': { label: gettext('Total Containers'), filters: ['noValue'] },
'os': { label: gettext('OS'), filters: ['noValue'] },
'os_type': { label: gettext('OS Type'), filters: ['noValue'] },
'architecture': { label: gettext('Architecture'), filters: ['noValue'] },
'kernel_version': { label: gettext('Kernel Version'), filters: ['noValue'] },
'runtimes': { label: gettext('Runtimes'), filters: ['noValue', 'json'] },
'labels': { label: gettext('Labels'), filters: ['noValue', 'json'] },
'links': { label: gettext('Links'), filters: ['noValue', 'json'] }
};
}
config.$inject = [
'$provide',
'$windowProvider',
'$routeProvider'
];
/**
* @name config
* @param {Object} $provide
* @param {Object} $windowProvider
* @param {Object} $routeProvider
* @description Routes used by this module.
* @returns {undefined} Returns nothing
*/
function config($provide, $windowProvider, $routeProvider) {
var path = $windowProvider.$get().STATIC_URL + 'dashboard/container/hosts/';
$provide.constant('horizon.dashboard.container.hosts.basePath', path);
$routeProvider.when('/admin/container/hosts', {
templateUrl: path + 'panel.html'
});
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/hosts/hosts.module.js | hosts.module.js |
(function() {
"use strict";
angular
.module('horizon.dashboard.container.hosts')
.controller('horizon.dashboard.container.hosts.OverviewController', controller);
controller.$inject = [
'$scope'
];
function controller(
$scope
) {
var ctrl = this;
ctrl.chartSettings = {
innerRadius: 24,
outerRadius: 48,
titleClass: "pie-chart-title-medium",
showTitle: false,
showLabel: true,
showLegend: false,
tooltipIcon: 'fa-square'
};
    // Chart data is watched by the pie-chart directive, so to refresh a chart
    // after retrieving data we replace the whole 'data' object.
ctrl.chartDataMem = {
maxLimit: 10,
data: []
};
ctrl.chartDataCpu = {
maxLimit: 10,
data: []
};
ctrl.chartDataDisk = {
maxLimit: 10,
data: []
};
// container for temporal chart data
var dataMem = [];
var dataCpu = [];
var dataDisk = [];
$scope.context.loadPromise.then(onGetHost);
function onGetHost(host) {
ctrl.host = host.data;
// set data for memory chart
dataMem = [
{label: gettext("Used"), value: host.data.mem_used, colorClass: "exists"},
{label: gettext("Margin"), value: host.data.mem_total - host.data.mem_used,
colorClass: "margin"}
];
ctrl.chartDataMem = generateChartData(dataMem, gettext("Memory"));
// set data for CPU chart
dataCpu = [
{label: gettext("Used"), value: host.data.cpu_used, colorClass: "exists"},
{label: gettext("Margin"), value: host.data.cpus - host.data.cpu_used,
colorClass: "margin"}
];
ctrl.chartDataCpu = generateChartData(dataCpu, gettext("CPU"));
// set data for disk chart
dataDisk = [
{label: gettext("Used"), value: host.data.disk_used, colorClass: "exists"},
{label: gettext("Margin"), value: host.data.disk_total - host.data.disk_used,
colorClass: "margin"}
];
ctrl.chartDataDisk = generateChartData(dataDisk, gettext("Disk"));
}
function generateChartData(data, title) {
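      // build the structure consumed by the pie-chart directive; 'overMax'
      // flags usage above 100% of the host's capacity.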
var sum = data[0].value;
var max = data[0].value + data[1].value;
var percent = Math.round(sum / max * 100);
var overMax = percent > 100;
var result = {
title: title,
label: percent + '%',
maxLimit: max,
overMax: overMax,
data: data
};
return result;
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/hosts/details/overview.controller.js | overview.controller.js |
(function() {
'use strict';
/**
* @ngdoc overview
* @name horizon.dashboard.container.capsules
* @ngModule
* @description
* Provides all the services and widgets require to display the capsules
* panel
*/
angular
.module('horizon.dashboard.container.capsules', [
'ngRoute',
'horizon.dashboard.container.capsules.actions',
'horizon.dashboard.container.capsules.details'
])
.constant('horizon.dashboard.container.capsules.events', events())
.constant('horizon.dashboard.container.capsules.resourceType', 'OS::Zun::Capsule')
.run(run)
.config(config);
/**
* @ngdoc constant
* @name horizon.dashboard.container.capsules.events
* @description A list of events used by Capsules
* @returns {Object} Event constants
*/
function events() {
return {
CREATE_SUCCESS: 'horizon.dashboard.container.capsules.CREATE_SUCCESS',
DELETE_SUCCESS: 'horizon.dashboard.container.capsules.DELETE_SUCCESS'
};
}
run.$inject = [
'$filter',
'horizon.framework.conf.resource-type-registry.service',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.capsules.basePath',
'horizon.dashboard.container.capsules.resourceType',
'horizon.dashboard.container.capsules.service'
];
function run($filter, registry, zun, basePath, resourceType, capsuleService) {
registry.getResourceType(resourceType)
.setNames(gettext('Capsule'), gettext('Capsules'))
.setSummaryTemplateUrl(basePath + 'drawer.html')
.setDefaultIndexUrl('/project/container/capsules/')
.setProperties(capsuleProperties())
.setListFunction(capsuleService.getCapsulesPromise)
.tableColumns
.append({
id: 'name',
priority: 1,
sortDefault: true,
urlFunction: capsuleService.getDetailsPath
})
.append({
id: 'id',
priority: 2
})
.append({
id: 'status',
priority: 1
})
.append({
id: 'cpu',
priority: 3
})
.append({
id: 'memory',
priority: 3
});
// for magic-search
registry.getResourceType(resourceType).filterFacets
.append({
'label': gettext('Capsule ID'),
'name': 'capsule_id',
'singleton': true
})
.append({
'label': gettext('Name'),
'name': 'name',
'singleton': true
})
.append({
'label': gettext('Status'),
'name': 'status',
'singleton': true
});
}
function capsuleProperties() {
return {
'addresses': {label: gettext('Addresses'), filters: ['noValue', 'json'] },
'capsule_versionid': {label: gettext('Capsule Version ID'), filters: ['noValue'] },
'containers': {label: gettext('Containers'), filters: ['noValue', 'json'] },
'container_uuids': {label: gettext('Container UUIDs'), filters: ['noValue', 'json'] },
'cpu': {label: gettext('CPU'), filters: ['noValue'] },
'created_at': { label: gettext('Created'), filters: ['simpleDate'] },
'id': {label: gettext('ID'), filters: ['noValue'] },
'links': {label: gettext('Links'), filters: ['noValue', 'json'] },
'memory': { label: gettext('Memory'), filters: ['noValue'] },
'meta_labels': {label: gettext('Labels'), filters: ['noValue', 'json'] },
'name': { label: gettext('Name'), filters: ['noName'] },
'project_id': { label: gettext('Project ID'), filters: ['noValue'] },
'restart_policy': { label: gettext('Restart Policy'), filters: ['noValue'] },
'status': { label: gettext('Status'), filters: ['noValue'] },
'status_reason': { label: gettext('Status Reason'), filters: ['noValue'] },
'updated_at': { label: gettext('Updated'), filters: ['simpleDate'] },
'user_id': { label: gettext('User ID'), filters: ['noValue'] },
'volumes_info': {label: gettext('Volumes Info'), filters: ['noValue', 'json'] }
};
}
config.$inject = [
'$provide',
'$windowProvider',
'$routeProvider'
];
/**
* @name config
* @param {Object} $provide
* @param {Object} $windowProvider
* @param {Object} $routeProvider
* @description Routes used by this module.
* @returns {undefined} Returns nothing
*/
function config($provide, $windowProvider, $routeProvider) {
var path = $windowProvider.$get().STATIC_URL + 'dashboard/container/capsules/';
$provide.constant('horizon.dashboard.container.capsules.basePath', path);
$routeProvider.when('/project/container/capsules', {
templateUrl: path + 'panel.html'
});
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/capsules/capsules.module.js | capsules.module.js |
(function() {
"use strict";
angular
.module('horizon.dashboard.container.capsules')
.factory('horizon.dashboard.container.capsules.service', capsulesService);
capsulesService.$inject = [
'horizon.app.core.detailRoute',
'horizon.app.core.openstack-service-api.zun'
];
/*
* @ngdoc factory
* @name horizon.dashboard.container.capsules.service
*
* @description
* This service provides functions that are used through
* the capsules of container features.
*/
function capsulesService(detailRoute, zun) {
return {
getDetailsPath: getDetailsPath,
getCapsulePromise: getCapsulePromise,
getCapsulesPromise: getCapsulesPromise
};
/*
* @ngdoc function
* @name getDetailsPath
* @param item {Object} - The capsule object
* @description
* Returns the relative path to the details view.
*/
function getDetailsPath(item) {
return detailRoute + 'OS::Zun::Capsule/' + item.id;
}
/*
* @ngdoc function
* @name getCapsulePromise
* @description
* Given an id, returns a promise for the capsule data.
*/
function getCapsulePromise(identifier) {
return zun.getCapsule(identifier).then(modifyDetails);
}
function modifyDetails(response) {
return {data: modifyItem(response.data)};
}
/*
* @ngdoc function
* @name getCapsulesPromise
* @description
* Given filter/query parameters, returns a promise for the matching
* capsules. This is used in displaying lists of capsules.
*/
function getCapsulesPromise(params) {
return zun.getCapsules(params).then(modifyResponse);
}
function modifyResponse(response) {
return {data: {items: response.data.items.map(modifyItem)}};
}
function modifyItem(item) {
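      // normalize capsule attributes for the table; 'trackBy' includes
      // 'updated_at' so a row re-renders whenever its capsule changes.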
item.name = item.meta_name;
item.capsule_id = item.id;
item.id = item.uuid ? item.uuid : item.capsule_id;
item.trackBy = item.id.concat(item.updated_at);
return item;
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/capsules/capsules.service.js | capsules.service.js |
(function() {
'use strict';
/**
* @ngDoc factory
* @name horizon.dashboard.container.capsules.actions.delete.service
* @Description
* Brings up the delete capsules confirmation modal dialog.
* On submit, delete selected resources.
* On cancel, do nothing.
*/
angular
.module('horizon.dashboard.container.capsules.actions')
.factory('horizon.dashboard.container.capsules.actions.delete.service', deleteService);
deleteService.$inject = [
'$location',
'$q',
'$rootScope',
'horizon.app.core.openstack-service-api.zun',
'horizon.app.core.openstack-service-api.policy',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.modal.deleteModalService',
'horizon.framework.widgets.table.events',
'horizon.framework.widgets.toast.service',
'horizon.dashboard.container.capsules.resourceType',
'horizon.dashboard.container.capsules.events'
];
function deleteService(
$location, $q, $rootScope, zun, policy, actionResult, gettext, $qExtensions, deleteModal,
tableEvents, toast, resourceType, events
) {
var scope;
var context = {
labels: null,
deleteEntity: deleteEntity,
successEvent: events.DELETE_SUCCESS
};
var service = {
initAction: initAction,
allowed: allowed,
perform: perform
};
var notAllowedMessage = gettext("You are not allowed to delete capsules: %s");
return service;
//////////////
function initAction() {
}
function allowed() {
// only row actions pass in capsule
// otherwise, assume it is a batch action
return $qExtensions.booleanAsPromise(true);
}
// delete selected resource objects
function perform(selected, newScope) {
scope = newScope;
selected = angular.isArray(selected) ? selected : [selected];
context.labels = labelize(selected.length);
return $qExtensions.allSettled(selected.map(checkPermission)).then(afterCheck);
}
function labelize(count) {
return {
title: ngettext('Confirm Delete Capsule',
'Confirm Delete Capsules', count),
/* eslint-disable max-len */
message: ngettext('You have selected "%s". Please confirm your selection. Deleted capsule is not recoverable.',
'You have selected "%s". Please confirm your selection. Deleted capsules are not recoverable.', count),
/* eslint-enable max-len */
submit: ngettext('Delete Capsule',
'Delete Capsules', count),
success: ngettext('Deleted Capsule: %s.',
'Deleted Capsules: %s.', count),
error: ngettext('Unable to delete Capsule: %s.',
'Unable to delete Capsules: %s.', count)
};
}
// for batch delete
function checkPermission(selected) {
return {promise: allowed(selected), context: selected};
}
// for batch delete
function afterCheck(result) {
var outcome = $q.reject().catch(angular.noop); // Reject the promise by default
if (result.fail.length > 0) {
toast.add('error', getMessage(notAllowedMessage, result.fail));
outcome = $q.reject(result.fail).catch(angular.noop);
}
if (result.pass.length > 0) {
outcome = deleteModal.open(scope, result.pass.map(getEntity), context).then(createResult);
}
return outcome;
}
function createResult(deleteModalResult) {
// To make the result of this action generically useful, reformat the return
// from the deleteModal into a standard form
var result = actionResult.getActionResult();
deleteModalResult.pass.forEach(function markDeleted(item) {
result.updated(resourceType, getEntity(item).id);
});
deleteModalResult.fail.forEach(function markFailed(item) {
result.failed(resourceType, getEntity(item).id);
});
var indexPath = '/project/container/capsules';
var currentPath = $location.path();
if (result.result.failed.length === 0 && result.result.updated.length > 0 &&
currentPath !== indexPath) {
$location.path(indexPath);
} else {
$rootScope.$broadcast(tableEvents.CLEAR_SELECTIONS);
return result.result;
}
}
function getMessage(message, entities) {
return interpolate(message, [entities.map(getName).join(", ")]);
}
function getName(result) {
return getEntity(result).name;
}
// for batch delete
function getEntity(result) {
return result.context;
}
// call delete REST API
function deleteEntity(id) {
return zun.deleteCapsule(id, true);
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/capsules/actions/delete.service.js | delete.service.js |
(function() {
'use strict';
/**
* @ngdoc factory
* @name horizon.dashboard.container.capsules.actions.create.service
* @description
* Service for the create capsule modal
*/
angular
.module('horizon.dashboard.container.capsules.actions')
.factory('horizon.dashboard.container.capsules.actions.create.service', createCapsuleService);
createCapsuleService.$inject = [
'horizon.app.core.openstack-service-api.policy',
'horizon.app.core.openstack-service-api.zun',
'horizon.dashboard.container.capsules.actions.workflow',
'horizon.dashboard.container.capsules.resourceType',
'horizon.framework.util.actions.action-result.service',
'horizon.framework.util.i18n.gettext',
'horizon.framework.util.q.extensions',
'horizon.framework.widgets.form.ModalFormService',
'horizon.framework.widgets.toast.service'
];
function createCapsuleService(
policy, zun, workflow, resourceType,
actionResult, gettext, $qExtensions, modal, toast
) {
var message = {
success: gettext('Request to create capsule %s has been accepted.')
};
var service = {
initAction: initAction,
perform: perform,
allowed: allowed
};
return service;
//////////////
function initAction() {
}
function perform() {
var title, submitText;
title = gettext('Create Capsule');
submitText = gettext('Create');
var config = workflow.init('create', title, submitText);
return modal.open(config).then(submit);
}
function allowed() {
return policy.ifAllowed({ rules: [['capsule', 'create_capsule']] });
}
function submit(context) {
      return zun.createCapsule(context.model, true).then(success);
}
function success(response) {
toast.add('success', interpolate(message.success, [response.data.id]));
var result = actionResult.getActionResult().created(resourceType, response.data.name);
return result.result;
}
}
})(); | zun-ui | /zun-ui-11.0.0.tar.gz/zun-ui-11.0.0/zun_ui/static/dashboard/container/capsules/actions/create.service.js | create.service.js |
========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/tc/badges/zun.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. image:: https://www.openstack.org/themes/openstack/images/project-mascots/Zun/OpenStack_Project_Zun_mascot.jpg
.. Change things from this point on
===
Zun
===
OpenStack Containers service
Zun (formerly known as Higgins) is the OpenStack Containers service. It aims
to provide an API service for running application containers without the need
to manage servers or clusters.
* Free software: Apache license
* Get Started: https://docs.openstack.org/zun/latest/contributor/quickstart.html
* Documentation: https://docs.openstack.org/zun/latest/
* Source: https://opendev.org/openstack/zun
* Bugs: https://bugs.launchpad.net/zun
* Blueprints: https://blueprints.launchpad.net/zun
* REST Client: https://opendev.org/openstack/python-zunclient
Features
--------
* TODO
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/README.rst | README.rst |
The source repository for this project can be found at:
https://opendev.org/openstack/zun
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/zun
For more specific information about contributing to this repository, see the
zun contributor guide:
https://docs.openstack.org/zun/latest/contributor/contributing.html
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/CONTRIBUTING.rst | CONTRIBUTING.rst |
Zun Style Commandments
======================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
Zun Specific Commandments
-------------------------
- [Z302] Replace assertEqual(A is not None) with an optimal assert such as
  assertIsNotNone(A).
- [Z310] timeutils.utcnow() wrapper must be used instead of direct calls to
datetime.datetime.utcnow() to make it easy to override its return value.
- [Z316] Replace assertTrue(isinstance(A, B)) with an optimal assert such as
  assertIsInstance(A, B).
- [Z322] Method's default argument shouldn't be mutable.
- [Z323] Replace assertEqual(True, A) or assertEqual(False, A) with an optimal
  assert such as assertTrue(A) or assertFalse(A)
- [Z336] Must use a dict comprehension instead of a dict constructor
with a sequence of key-value pairs.
- [Z338] Use assertIn/NotIn(A, B) rather than assertEqual(A in B, True/False).
- [Z339] Don't use xrange()
- [Z352] LOG.warn is deprecated. Enforce use of LOG.warning.
- [Z353] Don't translate logs.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/HACKING.rst | HACKING.rst |
Build Zun Service Docker Image
==============================
By using this dockerfile, you can build your own Zun service docker container eaily and quickly.
Build docker container image
for example, we build a docker container image named `zun-service-img`.
```docker build -t zun-service-img .```
Run Zun service container
Start a container using the image built above:
```docker run --name zun-service \
--net=host \
-v /var/run:/var/run \
zun-service-img```
Note: You should enter the container and edit the Zun configuration files under /etc/zun/.
For more information about the configuration, please refer to the installation docs.
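For example, you can enter the running container with a shell to edit the configuration:

```docker exec -it zun-service sh```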
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/contrib/quick-start/README.md | README.md |
===================================
Legacy Init Script for Ubuntu 14.04
===================================
#. Clone the Zun repository:
.. code-block:: console
$ git clone https://opendev.org/openstack/zun.git
#. Enable and start zun-api:
.. code-block:: console
# cp zun/contrib/legacy-ubuntu-init/etc/init/zun-api.conf \
/etc/init/zun-api.conf
# start zun-api
#. Enable and start zun-wsproxy:
.. code-block:: console
# cp zun/contrib/legacy-ubuntu-init/etc/init/zun-wsproxy.conf \
/etc/init/zun-wsproxy.conf
# start zun-wsproxy
#. Enable and start zun-compute:
.. code-block:: console
# cp zun/contrib/legacy-ubuntu-init/etc/init/zun-compute.conf \
/etc/init/zun-compute.conf
# start zun-compute
#. Verify that zun services are running:
.. code-block:: console
# status zun-api
# status zun-wsproxy
# status zun-compute
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/contrib/legacy-ubuntu-init/README.rst | README.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===============================
Welcome to Zun's documentation!
===============================
What is Zun?
=============
Zun is an OpenStack Container service. It aims to provide an API service for
running application containers without the need to manage servers or clusters.
It requires the following additional OpenStack services for basic function:
* `Keystone <https://docs.openstack.org/keystone/latest/>`__
* `Neutron <https://docs.openstack.org/neutron/latest/>`__
* `Kuryr-libnetwork <https://docs.openstack.org/kuryr-libnetwork/latest/>`__
It can also integrate with other services to include:
* `Cinder <https://docs.openstack.org/cinder/latest/>`__
* `Heat <https://docs.openstack.org/heat/latest/>`__
* `Glance <https://docs.openstack.org/glance/latest/>`__
For End Users
=============
As an end user of Zun, you'll use Zun to create and manage containerized
workloads with either tools or the API directly.
All end user (and some administrative) features of Zun are exposed via a REST
API, which can be consumed directly. The following resources will help you get
started with consuming the API directly.
* `API Reference <https://docs.openstack.org/api-ref/application-container/>`_
Alternatively, end users can consume the REST API via various tools or SDKs.
These tools are collected below.
* `Horizon
<https://docs.openstack.org/zun-ui/latest/>`_: The
official web UI for the OpenStack Project.
* `OpenStack Client
<https://docs.openstack.org/python-openstackclient/latest/>`_: The official
CLI for OpenStack Projects.
* `Zun Client
<https://docs.openstack.org/python-zunclient/latest/>`_: The Python client
for consuming Zun's API.
For Operators
=============
Installation
------------
The detailed install guide for Zun. A functioning Zun will also require
having installed `Keystone
<https://docs.openstack.org/keystone/latest/install/>`__, `Neutron
<https://docs.openstack.org/neutron/latest/install/>`__, and `Kuryr-libnetwork
<https://docs.openstack.org/kuryr-libnetwork/latest/install/>`__.
Please ensure that you follow their install guides first.
.. toctree::
:maxdepth: 2
install/index
For Contributors
================
If you are new to Zun, the developer quick-start guide should help you quickly
setup the development environment and get started.
There are also a number of technical references on various topics
collected in contributors guide.
.. toctree::
:glob:
:maxdepth: 2
contributor/quickstart
contributor/index
Additional Material
===================
.. toctree::
:glob:
:maxdepth: 2
cli/index
admin/index
configuration/index
user/filter-scheduler
reference/index
.. only:: html
Search
======
* :ref:`Zun document search <search>`: Search the contents of this document.
* `OpenStack wide search <https://docs.openstack.org>`_: Search the wider
set of OpenStack documentation, including forums.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/index.rst | index.rst |
==========
zun-status
==========
-------------------------------------
CLI interface for Zun status commands
-------------------------------------
Synopsis
========
::
zun-status <category> <command> [<args>]
Description
===========
:program:`zun-status` is a tool that provides routines for checking the
status of a Zun deployment.
Options
=======
The standard pattern for executing a :program:`zun-status` command is::
zun-status <category> <command> [<args>]
Run without arguments to see a list of available command categories::
zun-status
Categories are:
* ``upgrade``
Detailed descriptions are below:
You can also run with a category argument such as ``upgrade`` to see a list of
all commands in that category::
zun-status upgrade
These sections describe the available categories and arguments for
:program:`zun-status`.
Upgrade
~~~~~~~
.. _zun-status-checks:
``zun-status upgrade check``
Performs a release-specific readiness check before restarting services with
new code. For example, missing or changed configuration options,
incompatible object states, or other conditions that could lead to
failures while upgrading.
**Return Codes**
.. list-table::
:widths: 20 80
:header-rows: 1
* - Return code
- Description
* - 0
- All upgrade readiness checks passed successfully and there is nothing
to do.
* - 1
- At least one check encountered an issue and requires further
investigation. This is considered a warning but the upgrade may be OK.
* - 2
- There was an upgrade status check failure that needs to be
investigated. This should be considered something that stops an
upgrade.
* - 255
- An unexpected error occurred.
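
For example, an operator can run the readiness check and then inspect the
exit code to decide whether it is safe to proceed with an upgrade::

    zun-status upgrade check
    echo $?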
**History of Checks**
**3.0.0 (Stein)**
* Sample check to be filled in with checks as they are added in Stein.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/cli/zun-status.rst | zun-status.rst |
===============================================
Zun Installation Documentation (source/install)
===============================================
Introduction:
-------------
This directory is intended to hold any installation documentation for Zun.
Documentation that explains how to bring Zun up to the point that it is
ready to use in an OpenStack or standalone environment should be put
in this directory.
The full spec for organization of documentation may be seen in the
`OS Manuals Migration Spec
<https://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html>`_.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/README.rst | README.rst |
==========================
Container service overview
==========================
The Container service consists of the following components:
``zun-api``
An OpenStack-native REST API that processes API requests by sending
them to the ``zun-compute`` over Remote Procedure Call (RPC).
``zun-compute``
A worker daemon that creates and terminates containers or capsules (pods)
through the container engine API. It manages containers, capsules, and
compute resources on the local host.
``zun-wsproxy``
Provides a proxy for accessing running containers through a websocket
connection.
``zun-cni-daemon``
Provides a CNI daemon service that provides implementation for the Zun CNI
plugin.
Optionally, one may wish to utilize the following associated projects for
additional functionality:
python-zunclient_
A command-line interface (CLI) and python bindings for interacting with the
Container service.
zun-ui_
The Horizon plugin for providing Web UI for Zun.
.. _python-zunclient: https://docs.openstack.org/python-zunclient/latest/
.. _zun-ui: https://docs.openstack.org/zun-ui/latest/
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/get_started.rst | get_started.rst |
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Container service
on the controller node for Ubuntu 16.04 (LTS) and CentOS 7.
Prerequisites
-------------
Before you install and configure Zun, you must create a database,
service credentials, and API endpoints.
#. To create the database, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
# mysql
* Create the ``zun`` database:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE zun;
* Grant proper access to the ``zun`` database:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'localhost' \
IDENTIFIED BY 'ZUN_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON zun.* TO 'zun'@'%' \
IDENTIFIED BY 'ZUN_DBPASS';
Replace ``ZUN_DBPASS`` with a suitable password.
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to
admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
#. To create the service credentials, complete these steps:
* Create the ``zun`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt zun
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | ca2e175b851943349be29a328cc5e360 |
| name | zun |
+-----------+----------------------------------+
* Add the ``admin`` role to the ``zun`` user:
.. code-block:: console
$ openstack role add --project service --user zun admin
.. note::
This command provides no output.
* Create the ``zun`` service entities:
.. code-block:: console
$ openstack service create --name zun \
--description "Container Service" container
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Container Service |
| enabled | True |
| id | 727841c6f5df4773baa4e8a5ae7d72eb |
| name | zun |
| type | container |
+-------------+----------------------------------+
#. Create the Container service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
container public http://controller:9517/v1
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 3f4dab34624e4be7b000265f25049609 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 727841c6f5df4773baa4e8a5ae7d72eb |
| service_name | zun |
| service_type | container |
| url | http://controller:9517/v1 |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
container internal http://controller:9517/v1
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 9489f78e958e45cc85570fec7e836d98 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 727841c6f5df4773baa4e8a5ae7d72eb |
| service_name | zun |
| service_type | container |
| url | http://controller:9517/v1 |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
container admin http://controller:9517/v1
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 76091559514b40c6b7b38dde790efe99 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 727841c6f5df4773baa4e8a5ae7d72eb |
| service_name | zun |
| service_type | container |
| url | http://controller:9517/v1 |
+--------------+-----------------------------------------+
Install and configure components
--------------------------------
#. Create zun user and necessary directories:
* Create user:
.. code-block:: console
# groupadd --system zun
# useradd --home-dir "/var/lib/zun" \
--create-home \
--system \
--shell /bin/false \
-g zun \
zun
* Create directories:
.. code-block:: console
# mkdir -p /etc/zun
# chown zun:zun /etc/zun
#. Install the following dependencies:
For Ubuntu, run:
.. code-block:: console
# apt-get install python3-pip git
For CentOS, run:
.. code-block:: console
# yum install python3-pip git python3-devel libffi-devel gcc openssl-devel
#. Clone and install zun:
.. code-block:: console
# cd /var/lib/zun
# git clone https://opendev.org/openstack/zun.git
# chown -R zun:zun zun
# git config --global --add safe.directory /var/lib/zun/zun
# cd zun
# pip3 install -r requirements.txt
# python3 setup.py install
#. Generate a sample configuration file:
.. code-block:: console
# su -s /bin/sh -c "oslo-config-generator \
--config-file etc/zun/zun-config-generator.conf" zun
# su -s /bin/sh -c "cp etc/zun/zun.conf.sample \
/etc/zun/zun.conf" zun
#. Copy api-paste.ini:
.. code-block:: console
# su -s /bin/sh -c "cp etc/zun/api-paste.ini /etc/zun" zun
#. Edit the ``/etc/zun/zun.conf``:
* In the ``[DEFAULT]`` section,
configure ``RabbitMQ`` message queue access:
.. code-block:: ini
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
* In the ``[api]`` section, configure the IP address that the Zun API
server is going to listen on:
.. code-block:: ini
[api]
...
host_ip = 10.0.0.11
port = 9517
Replace ``10.0.0.11`` with the management interface IP address
of the controller node if different.
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mysql+pymysql://zun:ZUN_DBPASS@controller/zun
Replace ``ZUN_DBPASS`` with the password you chose for
the zun database.
* In the ``[keystone_auth]`` section, configure
Identity service access:
.. code-block:: ini
[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL
* In the ``[keystone_authtoken]`` section, configure
Identity service access:
.. code-block:: ini
[keystone_authtoken]
...
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL
Replace ZUN_PASS with the password you chose for the zun user in the
Identity service.
* In the ``[oslo_concurrency]`` section, configure the ``lock_path``:
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/zun/tmp
* In the ``[oslo_messaging_notifications]`` section, configure the
``driver``:
.. code-block:: ini
[oslo_messaging_notifications]
...
driver = messaging
* In the ``[websocket_proxy]`` section, configure the IP address that
the websocket proxy is going to listen to:
.. code-block:: ini
[websocket_proxy]
...
wsproxy_host = 10.0.0.11
wsproxy_port = 6784
base_url = ws://controller:6784/
.. note::
This ``base_url`` will be used by end users to access the console of
their containers so make sure this URL is accessible from your
intended users and the port ``6784`` is not blocked by firewall.
Replace ``10.0.0.11`` with the management interface IP address
of the controller node if different.
.. note::
Make sure that ``/etc/zun/zun.conf`` still have the correct
permissions. You can set the permissions again with:
# chown zun:zun /etc/zun/zun.conf
#. Populate Zun database:
.. code-block:: console
# su -s /bin/sh -c "zun-db-manage upgrade" zun
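
#. (Optional) Verify that the tables were created, for example by listing
them with the database credentials configured earlier:

.. code-block:: console

   # mysql -u zun -pZUN_DBPASS zun -e "SHOW TABLES;"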
Finalize installation
---------------------
#. Create an upstart config, it could be named as
``/etc/systemd/system/zun-api.service``:
.. note::
CentOS might install binary files into ``/usr/bin/``.
If it does, replace ``/usr/local/bin/`` directory with the correct
in the following example files.
.. code-block:: bash
[Unit]
Description = OpenStack Container Service API
[Service]
ExecStart = /usr/local/bin/zun-api
User = zun
[Install]
WantedBy = multi-user.target
#. Create an upstart config, it could be named as
``/etc/systemd/system/zun-wsproxy.service``:
.. code-block:: bash
[Unit]
Description = OpenStack Container Service Websocket Proxy
[Service]
ExecStart = /usr/local/bin/zun-wsproxy
User = zun
[Install]
WantedBy = multi-user.target
#. Enable and start zun-api and zun-wsproxy:
.. code-block:: console
# systemctl enable zun-api
# systemctl enable zun-wsproxy
.. code-block:: console
# systemctl start zun-api
# systemctl start zun-wsproxy
#. Verify that zun-api and zun-wsproxy services are running:
.. code-block:: console
# systemctl status zun-api
# systemctl status zun-wsproxy
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/controller-install.rst | controller-install.rst |
.. _launch-container:
Launch a container
~~~~~~~~~~~~~~~~~~
In environments that include the Container service, you can launch a
container.
#. Source the ``demo`` credentials to perform
the following steps as a non-administrative project:
.. code-block:: console
$ . demo-openrc
#. Determine available networks.
.. code-block:: console
$ openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------+--------------------------------------+
| 4716ddfe-6e60-40e7-b2a8-42e57bf3c31c | selfservice | 2112d5eb-f9d6-45fd-906e-7cabd38b7c7c |
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+-------------+--------------------------------------+
.. note::
This output may differ from your environment.
#. Set the ``NET_ID`` environment variable to reflect the ID of a network.
For example, using the selfservice network:
.. code-block:: console
$ export NET_ID=$(openstack network list | awk '/ selfservice / { print $2 }')
#. Run a CirrOS container on the selfservice network:
.. code-block:: console
$ openstack appcontainer run --name container --net network=$NET_ID cirros ping 8.8.8.8
#. After a short time, verify successful creation of the container:
.. code-block:: console
$ openstack appcontainer list
+--------------------------------------+-----------+--------+---------+------------+-------------------------------------------------+-------+
| uuid | name | image | status | task_state | addresses | ports |
+--------------------------------------+-----------+--------+---------+------------+-------------------------------------------------+-------+
| 4ec10d48-1ed8-492a-be5a-402be0abc66a | container | cirros | Running | None | 10.0.0.11, fd13:fd51:ebe8:0:f816:3eff:fe9c:7612 | [] |
+--------------------------------------+-----------+--------+---------+------------+-------------------------------------------------+-------+
#. Access the container and verify access to the internet:
.. code-block:: console
$ openstack appcontainer exec --interactive container /bin/sh
# ping -c 4 openstack.org
# exit
#. Stop and delete the container.
.. code-block:: console
$ openstack appcontainer stop container
$ openstack appcontainer delete container
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/launch-container.rst | launch-container.rst |
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service on a
compute node.
.. note::
This section assumes that you are following the instructions in this guide
step-by-step to configure the first compute node. If you want to configure
additional compute nodes, prepare them in a similar fashion. Each additional
compute node requires a unique IP address.
Prerequisites
-------------
Before you install and configure Zun, you must have Docker and
Kuryr-libnetwork installed properly on the compute node, and have Etcd
installed properly on the controller node. Refer to `Get Docker
<https://docs.docker.com/engine/install#supported-platforms>`_
for Docker installation, the `Kuryr libnetwork installation guide
<https://docs.openstack.org/kuryr-libnetwork/latest/install>`_, and the
`Etcd installation guide
<https://docs.openstack.org/install-guide/environment-etcd.html>`_.
Install and configure components
--------------------------------
#. Create zun user and necessary directories:
* Create user:
.. code-block:: console
# groupadd --system zun
# useradd --home-dir "/var/lib/zun" \
--create-home \
--system \
--shell /bin/false \
-g zun \
zun
* Create directories:
.. code-block:: console
# mkdir -p /etc/zun
# chown zun:zun /etc/zun
* Create CNI directories:
.. code-block:: console
# mkdir -p /etc/cni/net.d
# chown zun:zun /etc/cni/net.d
#. Install the following dependencies:
For Ubuntu, run:
.. code-block:: console
# apt-get install python3-pip git numactl
For CentOS, run:
.. code-block:: console
# yum install python3-pip git python3-devel libffi-devel gcc openssl-devel numactl
#. Clone and install zun:
.. code-block:: console
# cd /var/lib/zun
# git clone https://opendev.org/openstack/zun.git
# chown -R zun:zun zun
# git config --global --add safe.directory /var/lib/zun/zun
# cd zun
# pip3 install -r requirements.txt
# python3 setup.py install
#. Generate a sample configuration file:
.. code-block:: console
# su -s /bin/sh -c "oslo-config-generator \
--config-file etc/zun/zun-config-generator.conf" zun
# su -s /bin/sh -c "cp etc/zun/zun.conf.sample \
/etc/zun/zun.conf" zun
# su -s /bin/sh -c "cp etc/zun/rootwrap.conf \
/etc/zun/rootwrap.conf" zun
# su -s /bin/sh -c "mkdir -p /etc/zun/rootwrap.d" zun
# su -s /bin/sh -c "cp etc/zun/rootwrap.d/* \
/etc/zun/rootwrap.d/" zun
# su -s /bin/sh -c "cp etc/cni/net.d/* /etc/cni/net.d/" zun
#. Configure sudoers for ``zun`` users:
.. note::
CentOS might install binary files into ``/usr/bin/``.
If it does, replace ``/usr/local/bin/`` directory with the correct
in the following command.
.. code-block:: console
# echo "zun ALL=(root) NOPASSWD: /usr/local/bin/zun-rootwrap \
/etc/zun/rootwrap.conf *" | sudo tee /etc/sudoers.d/zun-rootwrap
#. Edit the ``/etc/zun/zun.conf``:
* In the ``[DEFAULT]`` section,
configure ``RabbitMQ`` message queue access:
.. code-block:: ini
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the
``openstack`` account in ``RabbitMQ``.
* In the ``[DEFAULT]`` section,
configure the path that is used by Zun to store the states:
.. code-block:: ini
[DEFAULT]
...
state_path = /var/lib/zun
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mysql+pymysql://zun:ZUN_DBPASS@controller/zun
Replace ``ZUN_DBPASS`` with the password you chose for
the zun database.
* In the ``[keystone_auth]`` section, configure
Identity service access:
.. code-block:: ini
[keystone_auth]
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
auth_version = v3
auth_protocol = http
service_token_roles_required = True
endpoint_type = internalURL
* In the ``[keystone_authtoken]`` section, configure
Identity service access:
.. code-block:: ini
[keystone_authtoken]
...
memcached_servers = controller:11211
www_authenticate_uri = http://controller:5000
project_domain_name = default
project_name = service
user_domain_name = default
password = ZUN_PASS
username = zun
auth_url = http://controller:5000
auth_type = password
Replace ZUN_PASS with the password you chose for the zun user in the
Identity service.
* In the ``[oslo_concurrency]`` section, configure the ``lock_path``:
.. code-block:: ini
[oslo_concurrency]
...
lock_path = /var/lib/zun/tmp
* (Optional) If you want to run both containers and nova instances in
this compute node, in the ``[compute]`` section,
configure the ``host_shared_with_nova``:
.. code-block:: ini
[compute]
...
host_shared_with_nova = true
.. note::
Make sure that ``/etc/zun/zun.conf`` still have the correct
permissions. You can set the permissions again with:
# chown zun:zun /etc/zun/zun.conf
#. Configure Docker and Kuryr:
* Create the directory ``/etc/systemd/system/docker.service.d``
.. code-block:: console
# mkdir -p /etc/systemd/system/docker.service.d
* Create the file ``/etc/systemd/system/docker.service.d/docker.conf``.
Configure docker to listen to port 2375 as well as the default
unix socket. Also, configure docker to use etcd3 as storage backend:
.. code-block:: ini
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379
* Restart Docker:
.. code-block:: console
# systemctl daemon-reload
# systemctl restart docker
* Edit the Kuryr config file ``/etc/kuryr/kuryr.conf``.
Set ``capability_scope`` to ``global`` and
``process_external_connectivity`` to ``False``:
.. code-block:: ini
[DEFAULT]
...
capability_scope = global
process_external_connectivity = False
* Restart Kuryr-libnetwork:
.. code-block:: console
# systemctl restart kuryr-libnetwork
#. Configure containerd:
* Generate config file for containerd:
.. code-block:: console
# containerd config default > /etc/containerd/config.toml
* Edit the ``/etc/containerd/config.toml``. In the ``[grpc]`` section,
configure the ``gid`` as the group ID of the ``zun`` user:
.. code-block:: ini
[grpc]
...
gid = ZUN_GROUP_ID
Replace ``ZUN_GROUP_ID`` with the real group ID of ``zun`` user.
You can retrieve the ID by (for example):
.. code-block:: console
# getent group zun | cut -d: -f3
.. note::
Make sure that ``/etc/containerd/config.toml`` still have the correct
permissions. You can set the permissions again with:
# chown zun:zun /etc/containerd/config.toml
* Restart containerd:
.. code-block:: console
# systemctl restart containerd
#. Configure CNI:
* Download and install the standard loopback plugin:
.. code-block:: console
# mkdir -p /opt/cni/bin
# curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz \
| tar -C /opt/cni/bin -xzvf - ./loopback
* Install the Zun CNI plugin:
.. code-block:: console
# install -o zun -m 0555 -D /usr/local/bin/zun-cni /opt/cni/bin/zun-cni
.. note::
CentOS might install binary files into ``/usr/bin/``.
If it does, replace ``/usr/local/bin/zun-cni`` with the correct path
in the command above.
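
* Verify that both the loopback and Zun CNI plugins are now in place:

.. code-block:: console

   # ls -l /opt/cni/bin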
Finalize installation
---------------------
#. Create an upstart config for zun compute, it could be named as
``/etc/systemd/system/zun-compute.service``:
.. note::
CentOS might install binary files into ``/usr/bin/``.
If it does, replace ``/usr/local/bin/`` directory with the correct
in the following example file.
.. code-block:: bash
[Unit]
Description = OpenStack Container Service Compute Agent
[Service]
ExecStart = /usr/local/bin/zun-compute
User = zun
[Install]
WantedBy = multi-user.target
#. Create an upstart config for zun cni daemon, it could be named as
``/etc/systemd/system/zun-cni-daemon.service``:
.. note::
CentOS might install binary files into ``/usr/bin/``.
If it does, replace ``/usr/local/bin/`` directory with the correct
in the following example file.
.. code-block:: bash
[Unit]
Description = OpenStack Container Service CNI daemon
[Service]
ExecStart = /usr/local/bin/zun-cni-daemon
User = zun
[Install]
WantedBy = multi-user.target
#. Enable and start zun-compute:
.. code-block:: console
# systemctl enable zun-compute
# systemctl start zun-compute
#. Enable and start zun-cni-daemon:
.. code-block:: console
# systemctl enable zun-cni-daemon
# systemctl start zun-cni-daemon
#. Verify that zun-compute and zun-cni-daemon services are running:
.. code-block:: console
# systemctl status zun-compute
# systemctl status zun-cni-daemon
Enable Kata Containers (Optional)
---------------------------------
By default, ``runc`` is used as the container runtime.
If you want to use Kata Containers instead, this section describes the
additional configuration steps.
.. note::
Kata Containers requires nested virtualization or bare metal.
See the `official document
<https://github.com/kata-containers/documentation/tree/master/install#prerequisites>`_
for details.
#. Enable the repository for Kata Containers:
For Ubuntu, run:
.. code-block:: console
# curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/xUbuntu_$(lsb_release -rs)/Release.key | apt-key add -
# add-apt-repository "deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/xUbuntu_$(lsb_release -rs)/ /"
For CentOS, run:
.. code-block:: console
# yum-config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/CentOS_7/home:katacontainers:releases:$(arch):master.repo"
#. Install Kata Containers:
For Ubuntu, run:
.. code-block:: console
# apt-get update
# apt install kata-runtime kata-proxy kata-shim
For CentOS, run:
.. code-block:: console
# yum install kata-runtime kata-proxy kata-shim
#. Configure Docker to add Kata Container as runtime:
* Edit the file ``/etc/systemd/system/docker.service.d/docker.conf``.
Append ``--add-runtime`` option to add kata-runtime to Docker:
.. code-block:: ini
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --group zun -H tcp://compute1:2375 -H unix:///var/run/docker.sock --cluster-store etcd://controller:2379 --add-runtime kata=/usr/bin/kata-runtime
* Restart Docker:
.. code-block:: console
# systemctl daemon-reload
# systemctl restart docker
#. Configure containerd to add Kata Containers as runtime:
* Edit the ``/etc/containerd/config.toml``.
In the ``[plugins.cri.containerd]`` section,
add the kata runtime configuration:
.. code-block:: ini
[plugins]
...
[plugins.cri]
...
[plugins.cri.containerd]
...
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
* Restart containerd:
.. code-block:: console
# systemctl restart containerd
#. Configure Zun to use Kata runtime:
* Edit the ``/etc/zun/zun.conf``. In the ``[DEFAULT]`` section,
configure ``container_runtime`` as kata:
.. code-block:: ini
[DEFAULT]
...
container_runtime = kata
* Restart zun-compute:
.. code-block:: console
# systemctl restart zun-compute
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/compute-install.rst | compute-install.rst |
.. _verify:
Verify operation
~~~~~~~~~~~~~~~~
Verify operation of the Container service.
.. note::
Perform these commands on the controller node.
#. Install python-zunclient:
.. code-block:: console
# pip3 install python-zunclient
#. Source the ``admin`` tenant credentials:
.. code-block:: console
$ . admin-openrc
#. List service components to verify successful launch and
registration of each process:
.. code-block:: console
$ openstack appcontainer service list
+----+-----------------------+-------------+-------+----------+-----------------+---------------------------+--------------------+
| Id | Host | Binary | State | Disabled | Disabled Reason | Updated At | Availability Zone |
+----+-----------------------+-------------+-------+----------+-----------------+---------------------------+--------------------+
| 1 | localhost.localdomain | zun-compute | up | False | None | 2018-03-13 14:15:40+00:00 | nova |
+----+-----------------------+-------------+-------+----------+-----------------+---------------------------+--------------------+
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/verify.rst | verify.rst |
========
Overview
========
The Container service provides OpenStack-native API for launching and managing
application containers without any virtual machine managements.
Also known as the ``zun`` project, the OpenStack Container service may,
depending upon configuration, interact with several other OpenStack services.
This includes:
- The OpenStack Identity service (``keystone``) for request authentication and
to locate other OpenStack services
- The OpenStack Networking service (``neutron``) for DHCP and network
configuration
- The Docker remote network driver for OpenStack (``kuryr-libnetwork``)
- The OpenStack Placement service (``placement``) for resource tracking and
container allocation claiming.
- The OpenStack Block Storage (``cinder``) provides volumes for container
(optional).
- The OpenStack Image service (``glance``) from which to retrieve container
images (optional).
- The OpenStack Dashboard service (``horizon``) for providing the web UI
(optional).
- The OpenStack Orchestration service (``heat``) for providing orchestration
between containers and other OpenStack resources (optional).
Zun requires at least two nodes (Controller node and Compute node) to run
a container. Optional services such as Block Storage require additional nodes.
Controller
----------
The controller node runs the Identity service, Image service, the management
portion of Zun, the management portion of Networking, various Networking
agents, and the Dashboard. It also includes supporting services such as an SQL
database, message queue, and Network Time Protocol (NTP).
Optionally, the controller node runs portions of the Block Storage, Object
Storage, and Orchestration services.
The controller node requires a minimum of two network interfaces.
Compute
-------
The compute node runs the engine portion of Zun that operates containers.
By default, Zun uses Docker as container engine. The compute node also runs
a Networking service agent that connects containers to virtual networks and
provides firewalling services to instances via security groups.
You can deploy more than one compute node. Each node requires a minimum of two
network interfaces.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/install/overview.rst | overview.rst |
===========================================
How to use private docker registry with Zun
===========================================
By default, Zun pulls container images from Docker Hub.
However, it is possible to configure Zun to pull images from a
private registry.
This document provides an example to deploy and configure a
docker registry for Zun. For a comprehensive guide about deploying
a docker registry, see `here <https://docs.docker.com/registry/deploying/>`_
Deploy Private Docker Registry
==============================
A straightforward approach to install a private docker registry is to
deploy it as a Zun container:
.. code-block:: console
$ openstack appcontainer create \
--restart always \
--expose-port 443 \
--name registry \
--environment REGISTRY_HTTP_ADDR=0.0.0.0:443 \
--environment REGISTRY_HTTP_TLS_CERTIFICATE=/domain.crt \
--environment REGISTRY_HTTP_TLS_KEY=/domain.key \
registry:2
.. note::
Depending on the configuration of your tenant network, you might need
to make sure the container is accessible from other tenants of your cloud.
For example, you might need to associate a floating IP to the container.
In order to make your registry accessible to external hosts,
you must use a TLS certificate (issued by a certificate authority) or create
self-signed certificates. This document shows you how to generate and use
self-signed certificates:
.. code-block:: console
$ mkdir -p certs
$ cat > certs/domain.conf <<EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[req_distinguished_name]
CN = zunregistry.com
[req_ext]
subjectAltName = IP:172.24.4.49
EOF
$ openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt -config certs/domain.conf
.. note::
Replace ``zunregistry.com`` with the domain name of your registry.
.. note::
Replace ``172.24.4.49`` with the IP address of your registry.
.. note::
You need to make sure the domain name (i.e. ``zunregistry.com``)
will be resolved to the IP address (i.e. ``172.24.4.49``).
For example, you might need to edit ``/etc/hosts`` accordingly.
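
For example, you could add a static entry on each host that needs to
resolve the registry (adjust the address and name to your environment):

.. code-block:: console

   # echo "172.24.4.49 zunregistry.com" >> /etc/hosts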
Copy the certificates to registry:
.. code-block:: console
$ openstack appcontainer cp certs/domain.key registry:/
$ openstack appcontainer cp certs/domain.crt registry:/
Configure docker daemon to accept the certificates:
.. code-block:: console
# mkdir -p /etc/docker/certs.d/zunregistry.com
# cp certs/domain.crt /etc/docker/certs.d/zunregistry.com/ca.crt
.. note::
Replace ``zunregistry.com`` with the domain name of your registry.
.. note::
Perform these steps on every compute node.
Start the registry:
.. code-block:: console
$ openstack appcontainer start registry
Verify the registry is working:
.. code-block:: console
$ docker pull ubuntu:16.04
$ docker tag ubuntu:16.04 zunregistry.com/my-ubuntu
$ docker push zunregistry.com/my-ubuntu
$ openstack appcontainer run --interactive zunregistry.com/my-ubuntu /bin/bash
.. note::
Replace ``zunregistry.com`` with the domain name of your registry.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/admin/private_registry.rst | private_registry.rst |
=========================
Manage container security
=========================
Security groups are sets of IP filter rules that define networking access to
the container. Group rules are project specific; project members can edit the
default rules for their group and add new rule sets.
All projects have a ``default`` security group which is applied to any
container that has no other defined security group. Unless you change the
default, this security group denies all incoming traffic and allows only
outgoing traffic to your container.
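
You can review the rules of the ``default`` security group at any time, as
follows:

.. code-block:: console

   $ openstack security group show default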
Create a container with security group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When adding a new security group, you should pick a descriptive but brief name.
This name shows up in brief descriptions of the containers that use it where
the longer description field often does not. For example, seeing that a
container is using security group "http" is much easier to understand than
"bobs\_group" or "secgrp1".
#. Add the new security group, as follows:
.. code-block:: console
$ openstack security group create SEC_GROUP_NAME --description Description
For example:
.. code-block:: console
$ openstack security group create global_http --description "Allows Web traffic anywhere on the Internet."
+-----------------+--------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------------------------------+
| created_at | 2016-11-03T13:50:53Z |
| description | Allows Web traffic anywhere on the Internet. |
| headers | |
| id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
| name | global_http |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| revision_number | 1 |
| rules | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv4', id='4d8cec94-e0ee-4c20-9f56-8fb67c21e4df', |
| | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z' |
| | created_at='2016-11-03T13:50:53Z', direction='egress', ethertype='IPv6', id='31be2ad1-be14-4aef-9492-ecebede2cf12', |
| | project_id='5669caad86a04256994cdf755df4d3c1', revision_number='1', updated_at='2016-11-03T13:50:53Z' |
| updated_at | 2016-11-03T13:50:53Z |
+-----------------+--------------------------------------------------------------------------------------------------------------------------+
#. Add a new group rule, as follows:
.. code-block:: console
$ openstack security group rule create SEC_GROUP_NAME \
--protocol PROTOCOL --dst-port FROM_PORT:TO_PORT --remote-ip CIDR
The ``FROM_PORT`` and ``TO_PORT`` values of the ``--dst-port`` option
specify the local port range connections are allowed to access, not the
source and destination ports of the connection. For example:
.. code-block:: console
$ openstack security group rule create global_http \
--protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-11-06T14:02:00Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| headers | |
| id | 2ba06233-d5c8-43eb-93a9-8eaa94bc9eb5 |
| port_range_max | 80 |
| port_range_min | 80 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| project_id | 5669caad86a04256994cdf755df4d3c1 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | c0b92b20-4575-432a-b4a9-eaf2ad53f696 |
| updated_at | 2016-11-06T14:02:00Z |
+-------------------+--------------------------------------+
#. Create a container with the new security group, as follows:
.. code-block:: console
$ openstack appcontainer run --security-group SEC_GROUP_NAME IMAGE
For example:
.. code-block:: console
$ openstack appcontainer run --security-group global_http nginx
Find container's security groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you cannot access your application inside the container, you might want to
check the security groups of the container to ensure the rules don't block
the traffic.
#. List the containers, as follows:
.. code-block:: console
$ openstack appcontainer list
+--------------------------------------+--------------------+-------+---------+------------+-----------+-------+
| uuid | name | image | status | task_state | addresses | ports |
+--------------------------------------+--------------------+-------+---------+------------+-----------+-------+
| 6595aff8-6c1c-4e64-8aad-bfd3793efa54 | delta-24-container | nginx | Running | None | 10.5.0.14 | [80] |
+--------------------------------------+--------------------+-------+---------+------------+-----------+-------+
#. Find all your container's ports, as follows:
.. code-block:: console
$ openstack port list --fixed-ip ip-address=10.5.0.14
+--------------------------------------+-----------------------------------------------------------------------+-------------------+--------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+-----------------------------------------------------------------------+-------------------+--------------------------------------------------------------------------+--------+
| b02df384-fd58-43ee-a44a-f17be9dd4838 | 405061f9eeda5dbfa10701a72051c91a5555d19f6ef7b3081078d102fe6f60ab-port | fa:16:3e:52:3c:0c | ip_address='10.5.0.14', subnet_id='7337ad8b-7314-4a33-ba54-7362f0a7a680' | ACTIVE |
+--------------------------------------+-----------------------------------------------------------------------+-------------------+--------------------------------------------------------------------------+--------+
#. View the details of each port to retrieve the list of security groups,
as follows:
.. code-block:: console
$ openstack port show b02df384-fd58-43ee-a44a-f17be9dd4838
+-----------------------+--------------------------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | None |
| binding_profile | None |
| binding_vif_details | None |
| binding_vif_type | None |
| binding_vnic_type | normal |
| created_at | 2018-05-11T21:58:42Z |
| data_plane_status | None |
| description | |
| device_id | 6595aff8-6c1c-4e64-8aad-bfd3793efa54 |
| device_owner | compute:kuryr |
| dns_assignment | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.5.0.14', subnet_id='7337ad8b-7314-4a33-ba54-7362f0a7a680' |
| id | b02df384-fd58-43ee-a44a-f17be9dd4838 |
| ip_address | None |
| mac_address | fa:16:3e:52:3c:0c |
| name | 405061f9eeda5dbfa10701a72051c91a5555d19f6ef7b3081078d102fe6f60ab-port |
| network_id | 695aff90-66c6-4383-b37c-7484c4046a64 |
| option_name | None |
| option_value | None |
| port_security_enabled | True |
| project_id | c907162152fe41f288912e991762b6d9 |
| qos_policy_id | None |
| revision_number | 9 |
| security_group_ids | ba20b63e-8a61-40e4-a1a3-5798412cc36b |
| status | ACTIVE |
| subnet_id | None |
| tags | kuryr.port.existing |
| trunk_details | None |
| updated_at | 2018-05-11T21:58:47Z |
+-----------------------+--------------------------------------------------------------------------+
#. View the rules of security group showed up at ``security_group_ids`` field
of the port, as follows:
.. code-block:: console
$ openstack security group rule list ba20b63e-8a61-40e4-a1a3-5798412cc36b
+--------------------------------------+-------------+-----------+------------+-----------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+-----------------------+
| 24ebfdb8-591c-40bb-a7d3-f5b5eadc72ca | None | None | | None |
| 907bf692-3dbb-4b34-ba7a-22217e6dbc4f | None | None | | None |
| bbcd3b46-0214-4966-8050-8b5d2f9121d1 | tcp | 0.0.0.0/0 | 80:80 | None |
+--------------------------------------+-------------+-----------+------------+-----------------------+
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/admin/security-groups.rst | security-groups.rst |
=======================
Clear Containers in Zun
=======================
Zun now supports running Clear Containers with regular Docker containers.
Clear Containers run containers as very lightweight virtual machines,
which boot up quickly and have low memory footprints. They provide
security to the containers through an isolated environment. You can read
more about Clear Containers `here <https://github.com/clearcontainers/runtime/wiki>`_.
Installation with DevStack
==========================
It is possible to run Clear Containers with Zun. Follow the
:doc:`/contributor/quickstart` to download DevStack, Zun code and copy the
local.conf file. Now perform the following steps to install Clear Containers
with DevStack::
cd /opt/stack/devstack
echo "ENABLE_CLEAR_CONTAINER=true" >> local.conf
./stack.sh
Verify the installation by::
$ sudo docker info | grep Runtimes
Runtimes: cor runc
Using Clear Containers with Zun
===============================
To create Clear Containers with Zun, specify the `--runtime` option::
zun run --name clear-container --runtime cor cirros ping -c 4 8.8.8.8
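
To confirm which runtime a container uses, you can inspect it with Docker on
the compute node (replace the container ID with your own)::

    $ sudo docker inspect --format '{{.HostConfig.Runtime}}' <container-id>
    cor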
.. note::
Clear Containers support in Zun is not production ready. It is recommended
not to run Clear Containers and runc containers on the same host.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/admin/clear-containers.rst | clear-containers.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
======================
Use OSProfiler in Zun
======================
This is the demo for Zun integrating with osprofiler. `Zun
<https://wiki.openstack.org/wiki/Zun>`_ is an OpenStack container
management services, while `OSProfiler
<https://docs.openstack.org/osprofiler/latest/>`_ provides
a tiny but powerful library that is used by most OpenStack projects and
their python clients.
Install Redis database
----------------------
Since osprofiler 1.4.0, users can choose MongoDB or Redis as the backend
storage option without using Ceilometer. Redis is used here as an example;
users can also choose MongoDB, Elasticsearch, and `other drivers
<https://opendev.org/openstack/osprofiler/src/branch/master/osprofiler/drivers>`_.
Install Redis as the `centralized collector
<https://docs.openstack.org/osprofiler/latest/user/collectors.html>`_.
Redis is easy to launch in a container: choose the `Redis Docker image
<https://hub.docker.com/_/redis/>`_ and run::
$ docker run --name some-redis -p 6379:6379 -d redis
Now there is a redis database which has an expose port to access. OSProfiler
will send data to this key-value database.
Change the configuration file
------------------------------
Edit ``/etc/zun/zun.conf``, add the following lines, and change
``<ip-address>`` to the real IP::
[profiler]
enabled = True
trace_sqlalchemy = True
hmac_keys = SECRET_KEY
connection_string = redis://<ip-address>:6379/
Then restart zun-api and zun-compute. (Attention: newer versions of
Zun run the zun-api service under the apache2 server, so restarting the
service in a screen session is not enough; use "systemctl restart apache2"
instead.)
Use the command below to generate trace information::
$ zun --profile SECRET_KEY list
The output includes a <TRACE-ID>; use it to retrieve the trace::
$ osprofiler trace show <TRACE-ID> --connection-string=redis://<ip-address>:6379 --html
Troubleshooting
---------------
How to check whether the integration is fine:
Stop the Redis container, then run the command::
$ zun --profile SECRET_KEY list
In the zun-api log, you will see "ConnectionError: Error 111 connecting to
<ip-address>:6379. ECONNREFUSED." This means that osprofiler tried to write
the trace data to Redis but could not connect to it, so the integration
itself is working.
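
To verify that the collector itself is reachable, you can ping Redis
directly (assuming the redis-cli tool is installed)::

    $ redis-cli -h <ip-address> -p 6379 ping
    PONG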
When the /etc/zun/api-paste.ini file changes (for example, the pipeline), you
need to re-deploy the zun service.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/admin/osprofiler.rst | osprofiler.rst |
=======================
Keep Containers Alive
=======================
By default, the Docker daemon shuts down all running containers
during daemon downtime. Starting with Docker Engine 1.12, users can
configure the daemon so that containers remain running when the
docker service becomes unavailable. This functionality is called
live restore. You can read more about Live Restore
`here <https://docs.docker.com/config/containers/live-restore>`_.
Installation with DevStack
==========================
It is possible to keep containers alive. Follow the
:doc:`/contributor/quickstart` to download DevStack, Zun code and copy the
local.conf file. Now perform the following steps to install Zun with DevStack::
cd /opt/stack/devstack
echo "ENABLE_LIVE_RESTORE=true" >> local.conf
./stack.sh
Verify the installation by::
$ sudo docker info | grep "Live Restore"
Live Restore Enabled: true
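
Outside of DevStack, live restore can typically be enabled by adding the
option to ``/etc/docker/daemon.json``::

    {
      "live-restore": true
    }

Then reload the Docker daemon so the setting takes effect without restarting
the running containers::

    $ sudo systemctl reload docker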
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/admin/keep-containers-alive.rst | keep-containers-alive.rst |
Continuous Integration with Jenkins
===================================
Zun uses a `Jenkins <http://jenkins-ci.org>`_ server to automate development
tasks.
Jenkins performs tasks such as:
`gate-zun-pep8-ubuntu-xenial`
Run PEP8 checks on proposed code changes that have been reviewed.
`gate-zun-python27-ubuntu-xenial`
Run unit tests using python2.7 on proposed code changes that have been
reviewed.
`gate-zun-python35`
Run unit tests using python3.5 on proposed code changes that have been
reviewed.
`gate-zun-docs-ubuntu-xenial`
Build this documentation and push it to `OpenStack Zun
<https://docs.openstack.org/zun/latest/>`_.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/jenkins.rst | jenkins.rst |
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system,
how we communicate as a community, etc.
Below will cover the more project specific information you need to get started
with Zun.
Communication
~~~~~~~~~~~~~
- IRC channel: #openstack-zun
- Mailing list's prefix: [zun]
- Office Hours:
This is the general Zun team meeting. Anyone can bring up a topic to discuss
with the Zun team.
- time: http://eavesdrop.openstack.org/#Zun_Team_Meeting
Contacting the Core Team
~~~~~~~~~~~~~~~~~~~~~~~~
The list of current Zun core reviewers is available on `gerrit
<https://review.opendev.org/#/admin/groups/1382,members>`_.
New Feature Planning
~~~~~~~~~~~~~~~~~~~~
The Zun team uses Launchpad to propose new features.
A blueprint should be submitted in Launchpad first.
Such blueprints need to be discussed and approved by the `Zun
driver team <https://launchpad.net/~zun-drivers>`_
Task Tracking
~~~~~~~~~~~~~
We track our tasks in Launchpad
https://bugs.launchpad.net/zun
If you're looking for some smaller, easier work item to pick up and get started
on, search for the 'low-hanging-fruit' tag.
.. NOTE: If your tag is not 'low-hanging-fruit' please change the text above.
Reporting a Bug
~~~~~~~~~~~~~~~
You found an issue and want to make sure we are aware of it? You can do so on
`Launchpad
<https://bugs.launchpad.net/zun>`_.
Getting Your Patch Merged
~~~~~~~~~~~~~~~~~~~~~~~~~
All changes proposed to the Zun project
require one or two +2 votes from Zun core reviewers before one of the core
reviewers can approve patch by giving ``Workflow +1`` vote.
Project Team Lead Duties
~~~~~~~~~~~~~~~~~~~~~~~~
All common PTL duties are enumerated in the `PTL guide
<https://docs.openstack.org/project-team-guide/ptl.html>`_.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/contributing.rst | contributing.rst |
API Microversions
=================
Background
----------
Zun uses a framework we call 'API Microversions' for allowing changes
to the API while preserving backward compatibility. The basic idea is
that a user has to explicitly ask for their request to be treated with
a particular version of the API. So breaking changes can be added to
the API without breaking users who don't specifically ask for it. This
is done with an HTTP header ``OpenStack-API-Version`` which has as its
value a string containing the name of the service, ``container``, and a
monotonically increasing semantic version number starting from ``1.1``.
The full form of the header takes the form::
OpenStack-API-Version: container 1.1
If a user makes a request without specifying a version, they will get
the ``BASE_VER`` as defined in
``zun/api/controllers/versions.py``. This value is currently ``1.1`` and
is expected to remain so for quite a long time.
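
For illustration, a client can opt in to a specific microversion by setting
the header explicitly. The sketch below assumes a valid Keystone token and
the ``containers`` endpoint of a deployed Zun API::

    import requests

    # Placeholders; adjust to your deployment.
    ZUN_ENDPOINT = 'http://controller:9517/v1'
    TOKEN = '<keystone-token>'

    response = requests.get(
        ZUN_ENDPOINT + '/containers',
        headers={
            'X-Auth-Token': TOKEN,
            # Explicitly opt in to a specific microversion.
            'OpenStack-API-Version': 'container 1.1',
        },
    )
    print(response.status_code)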
When do I need a new Microversion?
----------------------------------
A microversion is needed when the contract to the user is
changed. The user contract covers many kinds of information such as:
- the Request
- the list of resource urls which exist on the server
Example: adding a new container/{ID}/foo which didn't exist in a
previous version of the code
- the list of query parameters that are valid on urls
Example: adding a new parameter ``is_yellow`` container/{ID}?is_yellow=True
- the list of query parameter values for non free form fields
Example: parameter filter_by takes a small set of constants/enums "A",
"B", "C". Adding support for new enum "D".
- new headers accepted on a request
- the list of attributes and data structures accepted.
Example: adding a new attribute 'locked': True/False to the request body
- the Response
- the list of attributes and data structures returned
Example: adding a new attribute 'locked': True/False to the output
of container/{ID}
- the allowed values of non free form fields
Example: adding a new allowed ``status`` to container/{ID}
- the list of status codes allowed for a particular request
Example: an API previously could return 200, 400, 403, 404 and the
change would make the API now also be allowed to return 409.
See [#f2]_ for the 400, 403, 404 and 415 cases.
- changing a status code on a particular response
Example: changing the return code of an API from 501 to 400.
.. note:: Fixing a bug so that a 400+ code is returned rather than a 500 or
503 does not require a microversion change. It's assumed that clients are
not expected to handle a 500 or 503 response and therefore should not
need to opt-in to microversion changes that fixes a 500 or 503 response
from happening.
According to the OpenStack API Working Group, a
**500 Internal Server Error** should **not** be returned to the user for
failures due to user error that can be fixed by changing the request on
the client side. See [#f1]_.
- new headers returned on a response
The following flow chart attempts to walk through the process of "do
we need a microversion".
.. graphviz::
digraph states {
label="Do I need a microversion?"
silent_fail[shape="diamond", style="", group=g1, label="Did we silently
fail to do what is asked?"];
ret_500[shape="diamond", style="", group=g1, label="Did we return a 500
before?"];
new_error[shape="diamond", style="", group=g1, label="Are we changing what
status code is returned?"];
new_attr[shape="diamond", style="", group=g1, label="Did we add or remove an
attribute to a payload?"];
new_param[shape="diamond", style="", group=g1, label="Did we add or remove
an accepted query string parameter or value?"];
new_resource[shape="diamond", style="", group=g1, label="Did we add or remove a
resource url?"];
no[shape="box", style=rounded, label="No microversion needed"];
yes[shape="box", style=rounded, label="Yes, you need a microversion"];
no2[shape="box", style=rounded, label="No microversion needed, it's
a bug"];
silent_fail -> ret_500[label=" no"];
silent_fail -> no2[label="yes"];
ret_500 -> no2[label="yes [1]"];
ret_500 -> new_error[label=" no"];
new_error -> new_attr[label=" no"];
new_error -> yes[label="yes"];
new_attr -> new_param[label=" no"];
new_attr -> yes[label="yes"];
new_param -> new_resource[label=" no"];
new_param -> yes[label="yes"];
new_resource -> no[label=" no"];
new_resource -> yes[label="yes"];
{rank=same; yes new_attr}
{rank=same; no2 ret_500}
{rank=min; silent_fail}
}
**Footnotes**
.. [#f1] When fixing 500 errors that previously caused stack traces, try
to map the new error into the existing set of errors that API call
could previously return (400 if nothing else is appropriate). Changing
the set of allowed status codes from a request is changing the
contract, and should be part of a microversion (except in [#f2]_).
The reason why we are so strict on contract is that we'd like
application writers to be able to know, for sure, what the contract is
at every microversion in Zun. If they do not, they will need to write
conditional code in their application to handle ambiguities.
When in doubt, consider application authors. If it would work with no
client side changes on both Zun versions, you probably don't need a
microversion. If, on the other hand, there is any ambiguity, a
microversion is probably needed.
.. [#f2] The exception to not needing a microversion when returning a
previously unspecified error code is the 400, 403, 404 and 415 cases. This is
considered OK to return even if previously unspecified in the code since
it's implied given keystone authentication can fail with a 403 and API
validation can fail with a 400 for an invalid JSON request body. A request to
a url/resource that does not exist always fails with 404. Invalid content
types are handled before API methods are called, which results in a 415.
.. note:: When in doubt about whether or not a microversion is required
for changing an error response code, consult the `Zun Team`_.
.. _Zun Team: https://wiki.openstack.org/wiki/Zun
When a microversion is not needed
---------------------------------
A microversion is not needed in the following situation:
- the response
- Changing the error message without changing the response code
does not require a new microversion.
- Removing an inapplicable HTTP header, for example, suppose the Retry-After
HTTP header is being returned with a 4xx code. This header should only be
returned with a 503 or 3xx response, so it may be removed without bumping
the microversion.
In Code
-------
In ``zun/api/controllers/base.py`` we define an ``@api_version`` decorator
which is intended to be used on top-level Controller methods. It is
not appropriate for lower-level methods. Some examples:
Adding a new API method
~~~~~~~~~~~~~~~~~~~~~~~
In the controller class::
@base.Controller.api_version("1.2")
def my_api_method(self, req, id):
....
This method would only be available if the caller had specified an
``OpenStack-API-Version`` of >= ``1.2``. If they had specified a
lower version (or not specified it and received the default of ``1.1``)
the server would respond with ``HTTP/406``.
Removing an API method
~~~~~~~~~~~~~~~~~~~~~~
In the controller class::
@base.Controller.api_version("1.2", "1.3")
def my_api_method(self, req, id):
....
This method would only be available if the caller had specified an
``OpenStack-API-Version`` of >= ``1.2`` and
``OpenStack-API-Version`` of <= ``1.3``. If ``1.4`` or later
is specified the server will respond with ``HTTP/406``.
Changing a method's behavior
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the controller class::
@base.Controller.api_version("1.2", "1.3")
def my_api_method(self, req, id):
.... method_1 ...
@base.Controller.api_version("1.4") # noqa
def my_api_method(self, req, id):
.... method_2 ...
If a caller specified ``1.2``, ``1.3`` (or received the default
of ``1.1``) they would see the result from ``method_1``,
and for ``1.4`` or later they would see the result from ``method_2``.
It is vital that the two methods have the same name, so the second of
them will need ``# noqa`` to avoid failing flake8's ``F811`` rule. The
two methods may be different in any kind of semantics (schema
validation, return values, response codes, etc.).
When not using decorators
~~~~~~~~~~~~~~~~~~~~~~~~~
When you don't want to use the ``@api_version`` decorator on a method
or you want to change behavior within a method (say it leads to
simpler or simply a lot less code) you can directly test for the
requested version with a method as long as you have access to the api
request object (commonly accessed with ``pecan.request``). Every API
method has a versions object attached to the request object and that
can be used to modify behavior based on its value::
def index(self):
<common code>
req_version = pecan.request.version
req1_min = versions.Version('', '', '', "1.1")
req1_max = versions.Version('', '', '', "1.5")
req2_min = versions.Version('', '', '', "1.6")
req2_max = versions.Version('', '', '', "1.10")
if req_version.matches(req1_min, req1_max):
....stuff....
        elif req_version.matches(req2_min, req2_max):
....other stuff....
elif req_version > versions.Version("1.10"):
....more stuff.....
<common code>
The first argument to the ``matches`` method is the minimum acceptable version
and the second is the maximum acceptable version. If the specified minimum
and maximum versions are null, a ``ValueError`` is raised.
Other necessary changes
-----------------------
If you are adding a patch which adds a new microversion, it is
necessary to add changes to other places which describe your change
(a sketch of the version-file update follows this list):
* Update ``REST_API_VERSION_HISTORY`` in
``zun/api/controllers/versions.py``
* Update ``CURRENT_MAX_VER`` in
``zun/api/controllers/versions.py``
* Add a verbose description to
``zun/api/rest_api_version_history.rst``. There should
be enough information that it could be used by the docs team for
release notes.
* Update ``min_microversion`` in ``.zuul.yaml``.
* Update the expected versions in affected tests, for example in
``zun/tests/unit/api/controllers/test_root.py``.
* Update ``CURRENT_VERSION`` in ``zun/tests/unit/api/base.py``.
* Make a new commit to python-zunclient and update corresponding
files to enable the newly added microversion API.
* If the microversion changes the response schema, a new schema and test for
the microversion must be added to Tempest.
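A sketch of the version-file update (the new number ``1.12`` and the history
text are hypothetical; the constant names come from
``zun/api/controllers/versions.py``)::

    # zun/api/controllers/versions.py (excerpt; surrounding code elided)

    REST_API_VERSION_HISTORY = """REST API Version History:

        * 1.1 - Initial version
        * 1.12 - Add the hypothetical new capability described above
    """

    CURRENT_MAX_VER = '1.12'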
Allocating a microversion
-------------------------
If you are adding a patch which adds a new microversion, it is
necessary to allocate the next microversion number. Except under
extremely unusual circumstances (and this would have been mentioned in
the zun spec for the change), the minor number of ``CURRENT_MAX_VER``
will be incremented. This will also be the new microversion number for
the API change.
It is possible that multiple microversion patches would be proposed in
parallel and the microversions would conflict between patches. This
will cause a merge conflict. We don't reserve a microversion for each
patch in advance as we don't know the final merge order. Developers
may need over time to rebase their patch calculating a new version
number as above based on the updated value of ``CURRENT_MAX_VER``.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/api-microversion.rst | api-microversion.rst |
Contributing Documentation to Zun
=================================
Zun's documentation has been moved from the openstack-manuals repository
to the ``docs`` directory in the Zun repository. This makes it even more
important that Zun add and maintain good documentation.
This page provides guidance on how to provide documentation for those
who may not have previously been active writing documentation for
OpenStack.
Using RST
---------
OpenStack documentation uses reStructuredText to write documentation.
The files end with a ``.rst`` extension. The ``.rst`` files are then
processed by Sphinx to build HTML based on the RST files.
.. note::
Files that are to be included using the ``.. include::`` directive in an
RST file should use the ``.inc`` extension. If you instead use the ``.rst``
this will result in the RST file being processed twice during the build and
cause Sphinx to generate a warning during the build.
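For example, a shared snippet would be pulled in like this (the file name is
hypothetical):

.. code-block:: rst

   .. include:: common-prerequisites.inc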
reStructuredText is a powerful language for generating web pages. The
documentation team has put together an `RST conventions`_ page with information
and links related to RST.
.. _RST conventions: https://docs.openstack.org/contributor-guide/rst-conv.html
Building Zun's Documentation
----------------------------
To build documentation the following command should be used:
.. code-block:: console
tox -e docs,pep8
When building documentation it is important to also run pep8 as it is easy
to introduce pep8 failures when adding documentation. Currently, we do not
have the build configured to treat warnings as errors, so it is also important
to check the build output to ensure that no warnings are produced by Sphinx.
.. note::
Many Sphinx warnings result in improperly formatted pages being generated.
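If you want Sphinx warnings to fail the build locally, you can also invoke
Sphinx directly with warnings treated as errors (a local convenience, not
part of the official job):

.. code-block:: console

   sphinx-build -W -b html doc/source doc/build/html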
During the documentation build a number of things happen:
* All of the RST files under ``doc/source`` are processed and built.
* The openstackdocs theme is applied to all of the files so that they
will look consistent with all the other OpenStack documentation.
* The resulting HTML is put into ``doc/build/html``.
* Sample files like zun.conf.sample are generated and put into
  ``doc/source/_static``.
After the build completes the results may be accessed via a web browser in
the ``doc/build/html`` directory structure.
Review and Release Process
--------------------------
Documentation changes go through the same review process as all other changes.
.. note::
Reviewers can see the resulting web page output by clicking on
``gate-zun-docs-ubuntu-xenial``!
Once a patch is approved it is immediately released to the docs.openstack.org
website and can be seen under Zun's Documentation Page at
https://docs.openstack.org/zun/latest . When a new release is cut a
snapshot of that documentation will be kept at
https://docs.openstack.org/zun/<release> . Changes from master can be
backported to previous branches if necessary.
Doc Directory Structure
-----------------------
The main location for Zun's documentation is the ``doc/source`` directory.
The top level index file that is seen at
`https://docs.openstack.org/zun/latest`_ resides here as well as the
``conf.py`` file which is used to set a number of parameters for the build
of OpenStack's documentation.
Each of the directories under source are for specific kinds of documentation
as is documented in the ``README`` in each directory:
.. toctree::
:maxdepth: 1
../admin/README
../cli/README
../configuration/README
../contributor/README
../install/README
.. _https://docs.openstack.org/zun/latest: https://docs.openstack.org/zun/latest
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/documentation.rst | documentation.rst |
Release notes
=============
The release notes for a patch should be included in the patch.
If the following applies to the patch, a release note is required:
* Upgrades
* The deployer needs to take an action when upgrading
* A new config option is added that the deployer should consider changing
from the default
* A configuration option is deprecated or removed
* Features
* A new feature or driver is implemented
* Feature is deprecated or removed
* Current behavior is changed
* Bugs
* A security bug is fixed
* A long-standing or important bug is fixed
* APIs
* REST API changes
Zun uses `reno <https://docs.openstack.org/reno/latest/>`_ to
generate release notes. Please read the docs for details. In summary, use
.. code-block:: bash
$ tox -e venv -- reno new <bug-,bp-,whatever>
Then edit the sample file that was created and push it with your change.
To see the results:
.. code-block:: bash
$ git commit # Commit the change because reno scans git log.
$ tox -e releasenotes
Then look at the generated release notes files in releasenotes/build/html in
your favorite browser.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/releasenotes.rst | releasenotes.rst |
Contributor's Guide
===================
In this section you will find information on how to contribute to Zun.
Content includes architectural overviews, tips and tricks for setting up
a development environment, and information on Zun's lower level programming
APIs.
HowTos and Tutorials
--------------------
If you are new to Zun, this section contains information that should help
you quickly get started.
.. toctree::
:maxdepth: 1
Setting Up Your Development Environment <quickstart>
There are documents that should help you develop and contribute to the
project.
.. toctree::
:maxdepth: 1
Developer Contribution Guide <contributing>
Setting Up Your Development Environment Under Mod WSGI <mod-wsgi>
Running Tempest Tests <tempest-tests>
Running Unit Tests <unit-tests>
Multinode Devstack <multinode-devstack>
There are some other important documents also that helps new contributors to
contribute effectively towards code standards to the project.
.. toctree::
:maxdepth: 1
Adding a New API Method <api-microversion>
Changing Zun DB Objects <objects>
Documentation Contribution
--------------------------
.. toctree::
:maxdepth: 2
documentation
Other Resources
---------------
.. toctree::
:maxdepth: 3
launchpad
gerrit
jenkins
releasenotes
capsule
vision-reflection
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/index.rst | index.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===================
Multi-host Devstack
===================
This is a guide for developers who want to setup Zun in more than one hosts.
Prerequisite
============
You need to deploy Zun in a devstack environment in the first host.
Refer to the ``Exercising the Services Using Devstack`` section of the `Developer
Quick-Start Guide <https://docs.openstack.org/zun/latest/contributor/quickstart.html#exercising-the-services-using-devstack>`_
for details.
Enable additional zun host
==========================
Refer to the `Multi-Node lab
<https://docs.openstack.org/devstack/latest/guides/multinode-lab.html>`__
for more information.
On the second host, clone devstack::
# Create a root directory for devstack if needed
$ sudo mkdir -p /opt/stack
$ sudo chown $USER /opt/stack
$ git clone https://opendev.org/openstack/devstack /opt/stack/devstack
The second host will only need zun-compute service along with kuryr-libnetwork
support. You also need to tell devstack where the SERVICE_HOST is::
$ SERVICE_HOST=<controller's ip>
$ HOST_IP=<your ip>
$ git clone https://opendev.org/openstack/zun /opt/stack/zun
$ cat /opt/stack/zun/devstack/local.conf.subnode.sample \
| sed "s/HOST_IP=.*/HOST_IP=$HOST_IP/" \
| sed "s/SERVICE_HOST=.*/SERVICE_HOST=$SERVICE_HOST/" \
> /opt/stack/devstack/local.conf
Run devstack::
$ cd /opt/stack/devstack
$ ./stack.sh
On the controller host, you can see 2 zun-compute hosts available::
$ zun service-list
+----+-------------+-------------+-------+----------+-----------------+---------------------------+---------------------------+
| Id | Host | Binary | State | Disabled | Disabled Reason | Updated At | Availability Zone |
+----+-------------+-------------+-------+----------+-----------------+---------------------------+---------------------------+
| 1 | zun-hosts-1 | zun-compute | up | False | None | 2018-03-13 14:15:40+00:00 | Nova |
| 2 | zun-hosts-2 | zun-compute | up | False | None | 2018-03-13 14:15:41+00:00 | Nova |
+----+-------------+-------------+-------+----------+-----------------+---------------------------+---------------------------+
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/multinode-devstack.rst | multinode-devstack.rst |
Project hosting with Launchpad
==============================
`Launchpad`_ hosts the Zun project. The Zun project homepage on
Launchpad is http://launchpad.net/zun.
Mailing list
------------
The mailing list email is ``[email protected]``. This is a common
mailing list across the OpenStack projects. To participate in the mailing list:
#. Subscribe to the list at
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
The mailing list archives are at http://lists.openstack.org/pipermail/openstack/.
Bug tracking
------------
Report Zun bugs at https://bugs.launchpad.net/zun
Launchpad credentials
---------------------
Creating a login on Launchpad is important even if you don't use the Launchpad
site itself, since Launchpad credentials are used for logging in on several
OpenStack-related sites.
.. only:: html
These sites include:
* `Wiki`_
* Gerrit (see :doc:`gerrit`)
* Jenkins (see :doc:`jenkins`)
Feature requests (Blueprints)
-----------------------------
Zun uses Launchpad Blueprints to track feature requests. Blueprints are at
https://blueprints.launchpad.net/zun.
Technical support (Answers)
---------------------------
Zun no longer uses Launchpad Answers to track Zun technical support questions.
Note that `Ask OpenStack`_ (which is not hosted on Launchpad) can
be used for technical support requests.
.. _Launchpad: https://launchpad.net
.. _Wiki: https://wiki.openstack.org/wiki/Main_Page
.. _Zun Team: https://launchpad.net/~zun
.. _OpenStack Team: https://launchpad.net/~openstack
.. _Ask OpenStack: https://ask.openstack.org
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/launchpad.rst | launchpad.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
====================
Capsule Quick Start
====================
Capsule is a container composition unit that includes a sandbox container,
multiple application containers and multiple volumes. All containers inside
the capsule share the same network, ipc, and pid namespaces. In general, it is
the same kind of unit as an Azure Container Instance (ACI) or a Kubernetes Pod.
The diagram below is an overview of the structure of ``capsule``.
::
+-----------------------------------------------------------+
| +-----------+ |
| | | |
| | Sandbox | |
| | | |
| +-----------+ |
| |
| |
| +-------------+ +-------------+ +-------------+ |
| | | | | | | |
| | Container | | Container | | Container | |
| | | | | | | |
| +-------------+ +-------------+ +-------------+ |
| |
| |
| +----------+ +----------+ |
| | | | | |
| | Volume | | Volume | |
| | | | | |
| +----------+ +----------+ |
| |
+-----------------------------------------------------------+
The Capsule API is currently in the v1 phase.
Basic capsule functions are now supported. Capsule API methods:
* Create: Create a capsule based on a yaml or json file.
* Delete: Delete an existing capsule.
* Describe: Get detailed information about selected capsule.
* List: List all the capsules with essential fields.
.. note::
   Volumes are not yet supported, but they are on the roadmap. They will be
   implemented after Zun volume support has been finished.
If you need to access a capsule port, you might need to open the port in the
security group rules and access the port via the floating IP that is assigned
to the capsule. The capsule example below assumes that a capsule has been
launched with the security group "default" and the user wants to access ports
22, 80 and 3306:
.. code-block:: yaml
# use "-" because that the fields have many items
capsuleVersion: beta
kind: capsule
metadata:
name: template
labels:
app: web
foo: bar
restartPolicy: Always
spec:
containers:
- image: ubuntu
command:
- "/bin/bash"
imagePullPolicy: ifnotpresent
workDir: /root
ports:
- name: ssh-port
containerPort: 22
hostPort: 22
protocol: TCP
resources:
requests:
cpu: 1
memory: 1024
env:
ENV1: /usr/local/bin
ENV2: /usr/sbin
volumeMounts:
- name: volume1
mountPath: /data1
readOnly: True
- image: centos
command:
- "/bin/bash"
args:
- "-c"
- "\"while true; do echo hello world; sleep 1; done\""
imagePullPolicy: ifnotpresent
workDir: /root
ports:
- name: nginx-port
containerPort: 80
hostPort: 80
protocol: TCP
- name: mysql-port
containerPort: 3306
hostPort: 3306
protocol: TCP
resources:
requests:
cpu: 1
memory: 1024
env:
ENV2: /usr/bin/
volumeMounts:
- name: volume2
mountPath: /data2
- name: volume3
mountPath: /data3
volumes:
- name: volume1
cinder:
size: 5
autoRemove: True
- name: volume2
cinder:
volumeID: 9f81cbb2-10f9-4bab-938d-92fe33c57a24
- name: volume3
cinder:
volumeID: 67618d54-dd55-4f7e-91b3-39ffb3ba7f5f
Note that volume2 and volume3 referred to in the above YAML must already be
created by Cinder. Also, capsules do not support Cinder multiple attach at
this time: a volume can only be attached to one container.
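For reference, a pre-existing volume such as volume2 can be created with a
command like the following (the size is illustrative); the ID printed in the
output is what goes into the ``volumeID`` field:

.. code-block:: console

   $ openstack volume create --size 5 volume2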
Capsule management commands in detail:

Create a capsule; it will create a capsule based on capsule.yaml:
.. code-block:: console
$ source ~/devstack/openrc demo demo
$ zun capsule-create -f capsule.yaml
If you want to get access to the port, you need to set the security group
rules for it.
.. code-block:: console
$ openstack security group rule create default \
--protocol tcp --dst-port 3306:3306 --remote-ip 0.0.0.0/0
$ openstack security group rule create default \
--protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
$ openstack security group rule create default \
--protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
Delete capsule:
.. code-block:: console
$ zun capsule-delete <uuid>
$ zun capsule-delete <capsule-name>
List capsule:
.. code-block:: console
$ zun capsule-list
Describe capsule:
.. code-block:: console
$ zun capsule-describe <uuid>
$ zun capsule-describe <capsule-name>
TODO
---------
`Add security group set to Capsule`
Build this documentation and push it to .
`Add Gophercloud support for Capsule`
See `Gophercloud support for Zun
<https://blueprints.launchpad.net/zun/+spec/golang-client>`_
`Add Kubernetes connect to Capsule`
see `zun connector for k8s
<https://blueprints.launchpad.net/zun/+spec/zun-connector-for-k8s>`_.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/capsule.rst | capsule.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
============================
Installing the API via WSGI
============================
This document provides two WSGI deployments as examples: uwsgi and mod_wsgi.
.. seealso::
https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html#uwsgi-vs-mod-wsgi
Installing the API behind mod_wsgi
==================================
Zun comes with a few example files for configuring the API
service to run behind Apache with ``mod_wsgi``.
app.wsgi
========
The file ``zun/api/app.wsgi`` sets up the V1 API WSGI
application. The file is installed with the rest of the zun
application code, and should not need to be modified.
etc/apache2/zun.conf
======================
The ``etc/apache2/zun.conf`` file contains example settings that
work with a copy of zun installed via devstack.
.. literalinclude:: ../../../etc/apache2/zun.conf.template
1. On deb-based systems copy or symlink the file to
``/etc/apache2/sites-available``. For rpm-based systems the file will go in
``/etc/httpd/conf.d``.
2. Modify the ``WSGIDaemonProcess`` directive to set the ``user`` and
``group`` values to an appropriate user on your server. In many
installations ``zun`` will be correct. Modify the ``WSGIScriptAlias``
directive to set the path of the wsgi script. If you are using devstack,
the value should be ``/opt/stack/zun/zun/api/app.wsgi``. In the
``ErrorLog`` and ``CustomLog`` directives, replace ``%APACHE_NAME%`` with
``apache2``.
3. Enable the zun site. On deb-based systems::
$ a2ensite zun
$ service apache2 reload
On rpm-based systems::
$ service httpd reload
Installing the API with uwsgi
=============================
Create zun-uwsgi.ini file::
[uwsgi]
http = 0.0.0.0:9517
wsgi-file = <path_to_zun>/zun/api/app.wsgi
plugins = python
# This is running standalone
master = true
# Set die-on-term & exit-on-reload so that uwsgi shuts down
exit-on-reload = true
die-on-term = true
# uwsgi recommends this to prevent thundering herd on accept.
thunder-lock = true
# Override the default size for headers from the 4k default. (mainly for keystone token)
buffer-size = 65535
enable-threads = true
# Set the number of threads usually with the returns of command nproc
threads = 8
# Make sure the client doesn't try to re-use the connection.
add-header = Connection: close
    # Set uid and gid to an appropriate user on your server. In many
# installations ``zun`` will be correct.
uid = zun
gid = zun
Then start the uwsgi server::
uwsgi ./zun-uwsgi.ini
Or start in background with::
uwsgi -d ./zun-uwsgi.ini
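Once uwsgi is running, you can sanity-check the deployment; the root URL of
the Zun API returns the version document (port as configured above)::

    curl http://127.0.0.1:9517/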
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/mod-wsgi.rst | mod-wsgi.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
========================
Technical Vision for Zun
========================
This document is a self-evaluation of Zun with regard to the
Technical Committee's `technical vision`_.
.. _technical vision: https://governance.openstack.org/tc/reference/technical-vision.html
Mission Statement
=================
Zun's mission is to provide an OpenStack containers service that integrates
with various container technologies for managing application containers on
OpenStack.
Vision for OpenStack
====================
Self-service
------------
Zun is self-service. It provides users with the ability to deploy
containerized applications on demand without having to wait for human action.
Zun containers are isolated between tenants. Containers controlled by
one tenant are not accessible by other tenants.
Quotas are used to limit the number of containers or compute resources
(i.e. CPU, RAM) within a tenant.
Application Control
-------------------
Zun allows application control of containers by offering RESTful API,
CLI, and Python API binding. In addition, there are third-party tools
like `Gophercloud`_ that provide API binding for other programming
languages. Access to Zun's API is secured by Keystone, so applications
that can authenticate with Keystone can access Zun's API securely.
.. _Gophercloud: https://github.com/gophercloud/gophercloud
Interoperability
----------------
Zun containers (and other API resources) are designed to be deployable and
portable across a variety of public and private OpenStack clouds.
Zun's API hides differences between container engines and
exposes standard container abstraction.
Bidirectional Compatibility
---------------------------
Zun implements `API microversion`_.
API consumers can query the min/max API version that an OpenStack cloud
supports, as well as pinning a specific API version to guarantee consistent
API behavior across different versions of OpenStack.
.. _API microversion: https://docs.openstack.org/zun/latest/reference/api-microversion-history.html
Cross-Project Dependencies
--------------------------
Zun depends on Keystone for authentication, Neutron for container networks,
Cinder for container volumes. Zun aims to integrate with Placement for
tracking compute resources and retrieving allocation candidates.
Therefore, Placement is expected to be another dependency of Zun
in the near future.
Partitioning
------------
It is totally fine to deploy Zun in multiple OpenStack regions,
and each region could have a Zun endpoint in Keystone service catalog.
Zun also supports the concept of 'availability zones' - groupings within
a region that share no common points of failure.
Basic Physical Data Center Management
-------------------------------------
Zun interfaces with external systems like Docker engine, which
consumes compute resources in data center and offers compute
capacity to end-users in the form of containers.
Zun APIs provide a consistent interface to various container technologies,
which can be implemented by different Open Source projects.
Hardware Virtualisation
-----------------------
Similar to Nova, Zun also aims to virtualize hardware resources
(essentially physical servers) and provide them to users via a
vendor-independent API. The difference is that Zun delivers
compute resources in the form of containers instead of VMs.
Operators have a choice of container runtimes which could be
a hypervisor-based runtime (i.e. Kata Container) or a traditional
runtime (i.e. runc). The choice of container runtime is a trade-off
between tenant isolation and performance.
Plays Well With Others
----------------------
Zun plays well with Container Orchestration Engines like Kubernetes.
In particular, there is an `OpenStack provider`_ for Virtual Kubelet,
which mimics Kubelet to register itself as a node in a Kubernetes cluster.
The OpenStack provider leverages Zun to launch container workloads that
Kubernetes schedules to the virtual node.
.. _OpenStack provider: https://github.com/virtual-kubelet/virtual-kubelet/tree/master/providers/openstack
Infinite, Continuous Scaling
----------------------------
Zun facilitates infinite and continuous scaling of applications.
It allows users to scale up their applications by spinning up containers
on demand (without pre-creating a container host or cluster).
Containers allow sharing of physical resources in data center at a more
fine-grained level than a VM thus resulting in a better utilization of
resources.
Built-in Reliability and Durability
-----------------------------------
Unlike VMs, containers are usually transient and allowed to be deleted
and re-created in response to failure.
In this context, Zun aims to provide primitives for deployers
to deploy a highly available applications. For example, it allows deployers
to deploy their applications across different availability zones.
It supports health check of containers so that orchestrators can quickly
detect failure and perform recover actions.
Customizable Integration
------------------------
Zun is integrated with Heat, which allows users to 'wire' containers
with resources provided by other services (i.e. networks, volumes,
security groups, floating IPs, load balancers, or even VMs).
In addition, the Kubernetes integration feature provides another option
to 'wire' containers to customize the topology of application deployments.
Graphical User Interface
------------------------
Zun has a Horizon plugin, which allows users to consume Zun services
through a graphical user interface provided by Horizon.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/vision-reflection.rst | vision-reflection.rst |
Code Reviews with Gerrit
========================
Zun uses the `Gerrit`_ tool to review proposed code changes. The review site
is https://review.opendev.org.
Gerrit is a complete replacement for GitHub pull requests. *All GitHub pull
requests to the Zun repository will be ignored*.
See `Gerrit Workflow Quick Reference`_ for information about how to get
started using Gerrit. See `Development Workflow`_ for more detailed
documentation on how to work with Gerrit.
.. _Gerrit: https://bugs.chromium.org/p/gerrit/
.. _Development Workflow: https://docs.openstack.org/infra/manual/developers.html#development-workflow
.. _Gerrit Workflow Quick Reference: https://docs.openstack.org/infra/manual/developers.html#development-workflow
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/gerrit.rst | gerrit.rst |
.. _quickstart:
=====================
Developer Quick-Start
=====================
This is a quick walkthrough to get you started developing code for Zun.
This assumes you are already familiar with submitting code reviews to
an OpenStack project.
.. seealso::
https://docs.openstack.org/infra/manual/developers.html
Exercising the Services Using Devstack
======================================
This guide has been tested on Ubuntu 16.04 (Xenial) only.
Clone devstack::
# Create a root directory for devstack if needed
$ sudo mkdir -p /opt/stack
$ sudo chown $USER /opt/stack
$ git clone https://opendev.org/openstack/devstack /opt/stack/devstack
We will run devstack with minimal local.conf settings required to enable
required OpenStack services::
$ HOST_IP=<your ip>
$ git clone https://opendev.org/openstack/zun /opt/stack/zun
$ cat /opt/stack/zun/devstack/local.conf.sample \
| sed "s/HOST_IP=.*/HOST_IP=$HOST_IP/" \
> /opt/stack/devstack/local.conf
.. note::
By default, *KURYR_CAPABILITY_SCOPE=global*. It will work in both
all-in-one and multi-node scenario. You still can change it to *local*
(in **all-in-one scenario only**)::
$ sed -i "s/KURYR_CAPABILITY_SCOPE=.*/KURYR_CAPABILITY_SCOPE=local/" /opt/stack/devstack/local.conf
More devstack configuration information can be found at `Devstack Configuration
<https://docs.openstack.org/devstack/latest/configuration.html>`_
More neutron configuration information can be found at `Devstack Neutron
Configuration <https://docs.openstack.org/devstack/latest/guides/neutron.html>`_
Run devstack::
$ cd /opt/stack/devstack
$ ./stack.sh
.. note::
   If the developer has a previous devstack environment and wants to re-stack
   the environment, they need to uninstall the pip packages before restacking::
$ ./unstack.sh
$ ./clean.sh
$ pip freeze | grep -v '^\-e' | xargs sudo pip uninstall -y
$ ./stack.sh
Prepare your session to be able to use the various openstack clients including
nova, neutron, and glance. Create a new shell, and source the devstack openrc
script::
$ source /opt/stack/devstack/openrc admin admin
Using the service
=================
We will create and run a container that pings the address 8.8.8.8 four times::
$ zun run --name test cirros ping -c 4 8.8.8.8
Above command will use the Docker image ``cirros`` from DockerHub which is a
public image repository. Alternatively, you can use Docker image from Glance
which serves as a private image repository::
$ docker pull cirros
$ docker save cirros | openstack image create cirros --public --container-format docker --disk-format raw
$ zun run --image-driver glance cirros ping -c 4 8.8.8.8
You should see a similar output to::
$ zun list
+--------------------------------------+------+--------+---------+------------+------------+-------+
| uuid | name | image | status | task_state | addresses | ports |
+--------------------------------------+------+--------+---------+------------+------------+-------+
| 46dd001b-7474-412c-a0f4-7adc047aaedf | test | cirros | Stopped | None | 172.17.0.2 | [] |
+--------------------------------------+------+--------+---------+------------+------------+-------+
$ zun logs test
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=40 time=25.513 ms
64 bytes from 8.8.8.8: seq=1 ttl=40 time=25.348 ms
64 bytes from 8.8.8.8: seq=2 ttl=40 time=25.226 ms
64 bytes from 8.8.8.8: seq=3 ttl=40 time=25.275 ms
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 25.226/25.340/25.513 ms
Delete the container::
$ zun delete test
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/quickstart.rst | quickstart.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Versioned Objects
=================
Zun uses the `oslo.versionedobjects library
<https://docs.openstack.org/oslo.versionedobjects/latest/>`_ to
construct an object model that can be communicated via RPC. These objects have
a version history and functionality to convert from one version to a previous
version. This allows for 2 different levels of the code to still pass objects
to each other, as in the case of rolling upgrades.
Object Version Testing
----------------------
In order to ensure object versioning consistency is maintained,
oslo.versionedobjects has a fixture to aid in testing object versioning.
`oslo.versionedobjects.fixture.ObjectVersionChecker
<https://docs.openstack.org/oslo.versionedobjects/latest/reference/fixture.html#objectversionchecker>`_
generates fingerprints of each object, which is a combination of the current
version number of the object, along with a hash of the RPC-critical parts of
the object (fields and remotable methods).
The tests hold a static mapping of the fingerprints of all objects. When an
object is changed, the hash generated in the test will differ from that held in
the static mapping. This will signal to the developer that the version of the
object needs to be increased. Following this version increase, the fingerprint
that is then generated by the test can be copied to the static mapping in the
tests. This symbolizes that if the code change is approved, this is the new
state of the object to compare against.
Object Change Example
'''''''''''''''''''''
The following example shows the unit test workflow when changing an object
(Container was updated to hold a new 'foo' field)::
tox -e py27 zun.tests.unit.objects.test_objects
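The change itself would look roughly like the following sketch (the field
type is an assumption, and the many existing fields are elided):

.. code-block:: python

    @base.ZunObjectRegistry.register
    class Container(base.ZunPersistentObject, base.ZunObject,
                    base.ZunObjectDictCompat):

        fields = {
            # ... existing fields elided ...
            'foo': fields.StringField(nullable=True),  # hypothetical new field
        }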
This results in a unit test failure with the following output:
.. code-block:: python
   testtools.matchers._impl.MismatchError: !=:
     reference = {'Container': '1.10-35edde13ad178e9419e7ea8b6d580bcd'}
     actual = {'Container': '1.10-22b40e8eed0414561ca921906b189820'}
.. code-block:: console
: Fields or remotable methods in some objects have changed. Make sure the versions of the objects has been bumped, and update the hashes in the static fingerprints tree (object_data). For more information, read https://docs.openstack.org/zun/latest/.
This is an indication that adding the 'foo' field to Container means I need
to bump the version of Container, so I increase the version and add a comment
saying what I changed in the new version:
.. code-block:: python
@base.ZunObjectRegistry.register
class Container(base.ZunPersistentObject, base.ZunObject,
base.ZunObjectDictCompat):
# Version 1.0: Initial version
# Version 1.1: Add container_id column
# Version 1.2: Add memory column
# Version 1.3: Add task_state column
# Version 1.4: Add cpu, workdir, ports, hostname and labels columns
# Version 1.5: Add meta column
# Version 1.6: Add addresses column
# Version 1.7: Add host column
# Version 1.8: Add restart_policy
# Version 1.9: Add status_detail column
# Version 1.10: Add tty, stdin_open
# Version 1.11: Add image_driver
VERSION = '1.11'
Now that I have updated the version, I will run the tests again and let the
test tell me the fingerprint that I now need to put in the static tree:
.. code-block:: python
testtools.matchers._impl.MismatchError: !=:
reference = {'Container': '1.10-35edde13ad178e9419e7ea8b6d580bcd'}
actual = {'Container': '1.11-ddffeb42cb5472decab6d73534fe103f'}
I can now copy the new fingerprint needed
(1.11-ddffeb42cb5472decab6d73534fe103f), to the object_data map within
zun/tests/unit/objects/test_objects.py:
.. code-block:: python
object_data = {
'Container': '1.11-ddffeb42cb5472decab6d73534fe103f',
'Image': '1.0-0b976be24f4f6ee0d526e5c981ce0633',
'NUMANode': '1.0-cba878b70b2f8b52f1e031b41ac13b4e',
'NUMATopology': '1.0-b54086eda7e4b2e6145ecb6ee2c925ab',
'ResourceClass': '1.0-2c41abea55d0f7cb47a97bdb345b37fd',
'ResourceProvider': '1.0-92b427359d5a4cf9ec6c72cbe630ee24',
'ZunService': '1.0-2a19ab9987a746621b2ada02d8aadf22',
}
Running the unit tests now shows no failure.
If I did not update the version, and rather just copied the new hash to the
object_data map, the review would show the hash (but not the version) was
updated in object_data. At that point, a reviewer should point this out, and
mention that the object version needs to be updated.
If a remotable method were added/changed, the same process is followed, because
this will also cause a hash change.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/contributor/objects.rst | objects.rst |
====================
Policy configuration
====================
Configuration
~~~~~~~~~~~~~
.. warning::
JSON formatted policy file is deprecated since Zun 7.0.0 (Wallaby).
This `oslopolicy-convert-json-to-yaml`__ tool will migrate your existing
JSON-formatted policy file to YAML in a backward-compatible way.
.. __: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html
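For example, an existing JSON policy file can be converted with a command
like this (paths assumed):

.. code-block:: console

   oslopolicy-convert-json-to-yaml --namespace zun \
     --policy-file /etc/zun/policy.json \
     --output-file /etc/zun/policy.yaml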
The following is an overview of all available policies in Zun; it can be used
as a reference for writing a sample policy configuration file.
.. show-policy::
:config-file: ../../etc/zun/zun-policy-generator.conf
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/configuration/policy.rst | policy.rst |
Filter Scheduler
================
The **Filter Scheduler** supports `filtering` zun compute hosts to make
decisions on where a new container should be created.
Filtering
---------
Filter Scheduler iterates over all found compute hosts, evaluating each host
against a set of filters. The Scheduler then chooses a host for the requested
container. A specific filter can decide whether to pass or filter out a
specific host. The decision is made based on the user request specification,
the state of the host, and/or some extra information.
If the Scheduler cannot find candidates for the container, it means that
there are no appropriate hosts where that container can be scheduled.
The Filter Scheduler has a set of ``filters`` that are built-in. If the
built-in filters are insufficient, you can implement your own filters with your
filtering algorithm.
There are many standard filter classes which may be used
(:mod:`zun.scheduler.filters`):
* CPUFilter - filters based on CPU core utilization. It passes hosts with
sufficient number of CPU cores.
* RamFilter - filters hosts by their RAM. Only hosts with sufficient RAM
to host the instance are passed.
* LabelFilter - filters hosts based on whether host has the CLI specified
labels.
* ComputeFilter - filters hosts that are operational and enabled. In general,
you should always enable this filter.
* RuntimeFilter - filters hosts by their runtime. It passes hosts with
the specified runtime.
Configuring Filters
-------------------
To use filters you specify two settings:
* ``filter_scheduler.available_filters`` - Defines filter classes made
available to the scheduler.
* ``filter_scheduler.enabled_filters`` - Of the available filters, defines
those that the scheduler uses by default.
The default values for these settings in zun.conf are:
::
--filter_scheduler.available_filters=zun.scheduler.filters.all_filters
--filter_scheduler.enabled_filters=RamFilter,CPUFilter,ComputeFilter,RuntimeFilter
With this configuration, all filters in ``zun.scheduler.filters``
would be available, and by default the RamFilter, CPUFilter, ComputeFilter
and RuntimeFilter would be used.
Writing Your Own Filter
-----------------------
To create **your own filter** you must inherit from
BaseHostFilter and implement one method:
``host_passes``. This method should return ``True`` if the host passes the
filter.
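A minimal sketch of a custom filter follows (the module path and the
``host_passes`` signature are assumed to match the built-in filters such as
CPUFilter; the ``labels`` attribute on the host state is also an assumption):

::

    from zun.scheduler import filters


    class TrustedHostFilter(filters.BaseHostFilter):
        """Only pass hosts that carry a hypothetical 'trusted' label."""

        run_filter_once_per_request = True

        def host_passes(self, host_state, container, extra_spec):
            # Hypothetical criterion; replace with your own check.
            return host_state.labels.get('trusted') == 'true'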
P.S.: you can find more examples of using Filter Scheduler and standard filters
in :mod:`zun.tests.scheduler`.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/doc/source/user/filter-scheduler.rst | filter-scheduler.rst |
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==================
Zun Release Notes
==================
.. toctree::
:maxdepth: 1
unreleased
zed
yoga
xena
wallaby
victoria
ussuri
train
stein
rocky
queens
pike
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/releasenotes/source/index.rst | index.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
===========================
Container SR-IOV networking
===========================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/container-sr-iov-networking
Problem description
===================
SR-IOV (Single-root input/output virtualization) is a technique that allows
a single physical PCIe (Peripheral Component Interconnect Express) device
to be shared across several clients (VMs or containers). SR-IOV networking
allows Nova VMs and containers access to virtual networks via SR-IOV NICs.
Each such SR-IOV NIC would have a single PF (Physical Function) and multiple
VFs (Virtual Functions). Essentially the PF and VFs appears as multiple PCIe
SR-IOV based NICs. With SR-IOV networking, the traditional virtual bridge is
no longer required, and thus higher networking performance can be achieved.
This is an important requirement for most Virtualized Network Functions (VNFs).
To support VNF application deployment over Zun containers, it is desirable to
support SR-IOV networking feature for Zun.
To enable SR-IOV networking for Zun containers, Zun should provide a data
model for PCI passthrough devices, and a filter to enable Zun scheduler
locate the Zun computes that have access to the PCI passthrough devices.
These two dependencies are addressed under separated blueprint[1][2].
Kuryr driver is used for Zun container networking. Kuryr implements a
libnetwork remote network driver and maps its calls to OpenStack Neutron.
It works as a translator between libnetwork's Container Network Model (CNM)
and Neutron's networking model. Kuryr also acts as a libnetwork IPAM driver.
This design will try to use the existing functions provided by Kuryr and
identify the new requirements for Kuryr and Zun for the SR-IOV support.
With Kuryr driver, Zun will implement SR-IOV networking following the
procedure below:
- Cloud admin enables PCI-passthrough filter at /etc/zun.conf [1];
- Cloud admin creates a new PCI-passthrough alias [2];
- Cloud admin configures PCI passthrough whitelist at Zun compute [2];
- Cloud admin enables sriovnicswitch on Neutron server (with existing Neutron
support)[3];
- Cloud admin configures supported_pci_vendor_devs at
/etc/neutron/plugins/ml2/ml2_conf_sriov.ini (with existing Neutron
support)[3];
- Cloud admin configures sriov_nic.physical_device_mappings at
/etc/neutron/plugins/ml2/ml2_conf_sriov.ini on Zun compute nodes (with
existing Neutron support)[3];
- Cloud admin creates a network that SR-IOV NIC connects (with existing
Neutron support)[3];
- Cloud user creates an SR-IOV network port with an IP address (with existing
  Neutron support)[3]; Usually, the IP address is optional for creating a
  Neutron port. But Kuryr currently (in release Ocata) only supports matching
  the IP address to find the Neutron port to be used for a container, thus the
  IP address becomes a mandatory parameter here. This limitation will be
  removed once Kuryr can take a neutron port-id as input.
- Cloud user creates a new container bound with the SR-IOV network port.
This design spec focuses on the last step above.
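As a concrete illustration of the user-facing part of this workflow (the
network, subnet, names and addresses below are hypothetical; the
``--nets port=...`` syntax is the one proposed in the next section)::

    $ openstack port create --network sriov-net --vnic-type direct \
        --fixed-ip subnet=sriov-subnet,ip-address=192.168.2.50 sriov-port
    $ zun create --name vnf-container --nets port=<port-uuid> nginx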
Proposed change
===============
1. Introduce a new option to allow a user to specify the neutron port-id when
   Zun creates a container, for example:
zun create --name container_name --nets port=port_uuid ...
   For a neutron SR-IOV port, the vnic_type attribute of the port is "direct".
   Ideally, kuryr_libnetwork should use the vnic_type to decide which port
   driver will be used to attach the neutron port to the container. However,
   kuryr can only support one port driver per host with the current Ocata
   release. This means that if we enable the new SR-IOV port driver through
   the existing configuration at kuryr.conf, zun can only create containers
   using SR-IOV ports on that host. We expect kuryr will remove this
   limitation in the future.
2. Implement a new sriov_port_driver at kuryr_libnetwork.
   The sriov_port_driver implements the abstract function create_host_iface().
   The function allocates an SR-IOV VF on the host for the specified neutron
   port (passed in as a parameter). Then the port is bound to the
   corresponding network subsystem[5].
The sriov_port_driver should also implement delete_host_iface(). The
function de-allocates the SR-IOV VF and adds it back to the available VF
pool.
The sriov_port_driver also implements get_container_iface_name(). This
function should return the name of the VF instance.
   Once an SR-IOV network port is available, Docker will call the
   kuryr-libnetwork API to bind the neutron port to a network interface
   attached to the container. The name of the allocated VF instance will be
   passed to Docker[6]. The VF interface name represents the actual OS-level
   VF interface that should be moved by LibNetwork into the sandbox[7].
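A structural sketch of such a driver is shown below (the base class, method
signatures and internals are assumptions for illustration; the real
interfaces live in kuryr-libnetwork's port_driver package)::

    class SriovPortDriver(object):
        """Sketch only; not the actual kuryr-libnetwork interface."""

        def create_host_iface(self, endpoint_id, neutron_port, subnets):
            # Allocate a free VF on the host, bind it to the given neutron
            # port, and attach it to the network subsystem.
            pass

        def delete_host_iface(self, endpoint_id, neutron_port):
            # Release the VF and return it to the available VF pool.
            pass

        def get_container_iface_name(self, neutron_port):
            # Return the OS-level VF interface name that libnetwork will
            # move into the container sandbox.
            pass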
Alternatives
------------
Two other alternatives have been considered for the design. But both options
requires significant changes in existing Kuryr implementations. The above
design is believed to have minimum impact on Kuryr.
Option-1:
Zun creates containers with pre-configured SR-IOV port, and also manages
the VF resources. This design option is very similar to the proposed
design. The only difference is that VFs are managed or allocated by Zun.
Zun then pass the VF name to Kuryr. The potential benefit of this option
is to reuse all (PCI-passthrough and SR-IOV) resource management functions
to be implemented for non-networking application.
Option-2:
Zun creates containers with both network and SR-IOV networking option.
Kuryr driver will integrate with Docker SR-IOV driver[8] and create Docker
network with SR-IOV VF interfaces as needed. This design offloads the SR-IOV
specific implementation to the Docker SR-IOV driver, but at the same time
introduces additional work to integrate with the driver. With this design,
the VF resources are managed by the Docker SR-IOV driver.
Data model impact
-----------------
Refer to [2].
REST API impact
---------------
The proposed design relies on an API change in container creation. The change
allows user to specify an pre-created neutron port to be used for the
container. The implementation of the change is in progress[9].
The existing container CRUD APIs will allow a set of new parameters for
neutron networks with port-ID, for example:
::
"nets": [
{
"v4-fixed-ip": "",
"network": "",
"v6-fixed-ip": "",
"port": "890699a9-4690-4bd6-8b70-3a9c1be77ecb"
}
]
Security impact
---------------
The security group feature is not supported on SR-IOV ports. The same limitation
applies to SR-IOV networking with Nova virtual machines.
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
1. Change the networking option to allow port as an option when creating
containers;
2. Implement sriov_port_driver at kuryr-libnetwork
Assignee(s)
-----------
Primary assignee:
TBD
Other contributors:
Bin Zhou
Hongbin Lu
Work Items
----------
Implement container creation with existing neutron port[9].
Dependencies
============
SR-IOV port driver implementation at Kuryr-libnetwork.
Testing
=======
Each patch will have unit tests, and Tempest functional tests covered.
Documentation Impact
====================
A user guide will be required to describe the full configurations and
operations.
References
==========
[1] https://blueprints.launchpad.net/zun/+spec/container-pci-device-modeling
[2] https://blueprints.launchpad.net/zun/+spec/support-pcipassthroughfilter
[3] https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
[4] https://docs.openstack.org/kuryr-libnetwork/latest/devref/libnetwork_remote_driver_design.html
[5] https://github.com/openstack/kuryr-libnetwork/tree/master/kuryr_libnetwork/port_driver
[6] https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py
[7] https://github.com/Docker/libnetwork/blob/master/docs/remote.md#join
[8] https://github.com/Mellanox/Docker-passthrough-plugin
[9] https://review.openstack.org/481861
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/container-SRIOV-networking.rst | container-SRIOV-networking.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
============================
Local Volume Integration
============================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/support-volume-binds
Zun has introduced an option for users to bind-mount Cinder volumes
to containers.
However, users can't bind-mount file or directory in local file system
into the container. This function is like the option '-v' of docker run/create:
$ docker run -v /host/path:/container/path <image>
The above command will bind-mount the directory with path '/host/path'
into path '/container/path' inside the container.
Problem description
===================
Some special application containers need to use files/directories on the
local host for their initialization process or for loading a large amount of
data. So Zun should implement this option, and the option should work well
together with Cinder volumes.
Proposed change
===============
This spec proposes the following changes.
1. It's unsafe to mount a host directory into a container, so only admins
   can bind-mount files or directories in the local file system into the
   container.
2. We leverage the --mount option for Cinder volume bind-mounts. It is better
   to reuse this option for bind-mounting the local file system.
For example:
$ zun run --mount type=<local|cinder>,source=...,destination=... <image>
3. Zun introduces a config file (called 'allowed_mount_path.conf').
   Operators can tune this config to restrict the paths allowed for
   bind-mounting (a hypothetical example follows this list).
4. The administrator would know which nodes a special container should be
   scheduled on. Users may combine the --mount and --hint options to create
   a container.
Workflow
=============
The typical workflow to create a container with a Local volume will be as
following:
1. A user calls Zun APIs to create a container with a local volume::
$ zun run --mount type=local,source=/proc,destination=/proc \
--hint <key=value> centos
2. After receiving this request, Zun will check whether the mount info
contains local volumes. Then it will check that the user has administrator
permissions for this operation.
3. Zun will create an item for the local volume, and store it in the
volume_mapping table.
4. Zun will choose a node based on the --hint option, and check whether the
local volume path is allowed by allowed_mount_path.conf.
5. Zun calls the Docker API to create a container using the option "-v".
$ docker run -d -v /proc:/proc centos
Security impact
---------------
1. Only admins can bind-mount files or directories from the local file
system into a container.
2. Zun introduces a config file (called 'allowed_mount_path.conf') to check
whether files/directories can be bind-mounted. When the config is unset or
empty, Zun will raise an exception when the bind-mount option is used.
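
For illustration, the check against this allowlist could look like the
following minimal sketch. The config format (one allowed path prefix per
line) and the helper name are assumptions of this sketch, not part of the
spec:

.. code-block:: python

    import os

    def validate_mount_source(source,
                              conf='/etc/zun/allowed_mount_path.conf'):
        """Raise if `source` is not under an allowed path prefix (sketch)."""
        with open(conf) as f:
            allowed = [line.strip() for line in f if line.strip()]
        if not allowed:
            # An unset or empty config disables local bind-mounts entirely.
            raise Exception('bind-mounting local paths is disabled')
        real = os.path.realpath(source)
        if not any(real == p or real.startswith(p.rstrip('/') + '/')
                   for p in allowed):
            raise Exception('%s is not allowed to be bind-mounted' % source)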
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
Deployers need to deploy Cinder.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Feng Shengqin
Other contributors:
Dependencies
============
Testing
=======
Each patch will have unit tests, and the feature will be covered by Tempest
functional tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
References
==========
[1] https://docker-py.readthedocs.io/en/stable/containers.html#container-objects.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/local-volume-integration.rst | local-volume-integration.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
=================
Container Sandbox
=================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/neutron-integration
Zun needs to manage containers as well as their associated IaaS resources,
such as IP addresses, security groups, ports, volumes, etc. To decouple the
management of containers from their associated resources, we proposed to
introduce a new concept called ``sandbox``.
A sandbox represents an isolated environment for one or multiple containers.
The primary responsibility of sandbox is to provision and manage IaaS
resources associated with a container or a group of containers. In this model,
each container must have a sandbox, and resources (i.e. Neutron ports) are
attached to sandboxes (instead of directly being attached to containers).
The implementation of sandbox is driver-specific. Each driver needs to
implement the sandbox interface (as well as the container interface).
For docker driver, sandbox can be implemented by using docker container itself.
In this case, creating a container in Zun might create two docker containers
internally: a sandbox container and a 'real' container. The real container
might use the sandbox container as a proxy for networking, storage, or others.
Alternatively, users might create a container in an existing sandbox if they
don't want to create an extra sandbox.
Problem description
===================
Zun containers and Nova instances share the common needs for networking,
storage, scheduling, host management, quota management, and many others.
On the one hand, it is better to consolidate the management of containers
and Nova instances to minimize duplication, but on the other hand,
Zun needs to expose container-specific features that might go beyond the
Nova APIs.
To provide a full-featured container experience with minimized duplication
with Nova, an approach is to decouple the management of containers (implemented
in Zun) from management of other resources (implemented in Nova). This
motivates the introduction of sandbox that can be implemented as a docker
container provisioned by Nova. As a result, we obtain flexibility to offload
complex tasks to Nova.
Proposed change
===============
1. Introduce a new abstraction called ``sandbox``. Sandbox represents an
isolated environment for one or multiple containers. All drivers are
required to implement the sandbox abstraction with business logic to create
and manage the isolated environment. For Linux container, an isolated
environment can be implemented by using various Linux namespaces
(i.e. pid, net, or ipc namespace).
2. For docker container, its sandbox could be implemented by using docker
container itself. The sandbox container might not do anything real, but
has one or multiple Neutron ports (or other resources) attached.
The sandbox container is provisioned and managed by Nova (with a
Zun-provided docker virt-driver). After the sandbox container is created,
the real container can be created with the options
``--net=container:<sandbox>``, ``--ipc=container:<sandbox>``,
``--pid=container:<sandbox>`` and/or ``--volumes-from=<sandbox>``.
This will create a container that joins the namespaces of the sandbox
container so that resources in the sandbox container can be shared (see
the sketch after this list).
3. The design should be extensible so that operators can plug-in their
own drivers if they are not satisfied by the built-in sandbox
implementation(s).
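
To make the namespace-joining concrete, below is a minimal docker-py sketch
of creating the 'real' container inside an existing sandbox. The image, the
command and the ``sandbox_id`` value are illustrative assumptions:

.. code-block:: python

    import docker

    def create_in_sandbox(docker_client, sandbox_id, image, command):
        # Join the network, IPC and PID namespaces of the sandbox container.
        host_config = docker_client.create_host_config(
            network_mode='container:%s' % sandbox_id,
            ipc_mode='container:%s' % sandbox_id,
            pid_mode='container:%s' % sandbox_id)
        container = docker_client.create_container(
            image=image, command=command, host_config=host_config)
        docker_client.start(container)
        return container

    client = docker.APIClient(base_url='unix://var/run/docker.sock')
    create_in_sandbox(client, 'my-sandbox', 'cirros', 'sleep 3600')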
The diagram below offers an overview of the system architecture. The Zun
service may communicate with Nova to create a sandbox that is actually a
docker container. Like normal Nova instances, sandboxes are scheduled by Nova
scheduler and have Neutron ports attached for providing networking.
Containers are created by Zun, and run inside the namespaces of sandboxes.
Note that a sandbox can contain one or multiple containers. All containers
in a sandbox will be co-located and share the Linux namespaces of the sandbox.
::
+---------+
| CLI |
+----+----+
|
+----+----+
+-------- Nova -------+ +-+ REST +----- Zun -----+
| | | +---------+ |
| +--+ |
| | | |
+-----------+---------+ +---------------+-----------+
| |
+-----------|----+ Compute Host ---------|-----------+
| +------+-------+ +-----+-----+ |
| +--+ Nova Compute +--+ +---+ Zun Agent +-+ |
| | +--------------+ | | +-----------+ | |
| | | | | |
| | +-----+-------|---+ | |
| | | | | | |
| +-- Sandbox -+ +-- Sandbox --|-+ +-- Sandbox --|-+ |
| | | | | | | | | |
| | | | +-----------+ | | +-----------+ | |
| | | | | Container | | | | Container | | |
| | | | +-----------+ | | +-----------+ | |
| | | | | | +-----------+ | |
| | | | | | | Container | | |
| | | | | | +-----------+ | |
| | | | | | | |
| +------------+ +---------------+ +---------------+ |
| |
+----------------------------------------------------+
Design Principles
-----------------
1. Minimum duplication between Nova and Zun. If possible, reuse everything
that has been implemented in Nova.
2. Similar user experience between VMs and containers. In particular, the ways
to configure networking of a container should be similar as the VM
equivalent.
3. Full-featured container APIs.
Alternatives
------------
1. Directly bind resources (i.e. Neutron ports) to containers. This will have a
large amount of duplication between Nova and Zun.
2. Use Kuryr. Kuryr is designed for users who use native tool (i.e. docker
CLI) to manage containers. In particular, what Kuryr provided to translate
API calls to docker to API calls to Neutron. Zun doesn't expose native APIs
to end-users so Kuryr cannot address Zun's use cases.
Data model impact
-----------------
Add a 'sandbox' table. Add a foreign key 'sandbox_id' to the existing table
'container'.
REST API impact
---------------
1. Add an new API endpoint /sandboxes to the REST API interface.
2. In the API endpoint of creating a container, add a new option to specify
the sandbox where the container will be created from. If the sandbox is not
specified, Zun will create a new sandbox for the container.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
A performance penalty is expected since provisioning sandboxes takes extra
compute resources. In addition, the Nova API will be used to create
sandboxes, which might also incur a performance penalty.
Other deployer impact
---------------------
Deployers need to deploy a custom Nova virt-driver for provisioning sandboxes.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Hongbin Lu
Other contributors:
Work Items
----------
1. Implement a custom Nova virt-driver to provision sandboxes.
2. Implement a new API endpoint for sandboxes.
3. Implement unit/integration test.
Dependencies
============
Add a dependency to Nova
Testing
=======
Each patch will have unit tests, and the feature will be covered by Tempest
functional tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/container-sandbox.rst | container-sandbox.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
==========================
Supporting CPU sets in ZUN
==========================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/cpuset-container
ZUN presently does not have a way to allow users to specify dedicated
resources for workloads that require higher performance. Such workloads
can be classified as Network Function Virtualization (NFV) based, AI
based or HPC based. This spec takes a first step towards supporting
such workloads with dedicated resources. The first of such resources
can be the cpusets or cores on a given physical host.
Problem description
===================
Exposing cpusets to cloud users cannot be done in raw form, because exposing
such parameters to the end user breaks the cloud model of doing things.
Exposing cpusets can be broadly thought of as a combination of user policies
and host capabilities.
The user policies are applied against compute host capabilities, and if they
match, the user is allowed to perform CRUD operations on a container.
Proposed change
===============
1. Piggy back on the work done for host capabilities.
More details of this work would be covered on a separate blueprint:
https://blueprints.launchpad.net/zun/+spec/expose-host-capabilities
2. Hydrate the schema with information obtained via calling driver specific
methods that obtain the details of a host inventory. For cpusets, lscpu -p
can be used to obtain the required information. Implement a periodic task
that inventories the host at regular intervals.
3. Define 2 user cpu-policies called "dedicated" and "shared". The first
policy signifies that the user wants to use dedicated cpusets for their
workloads. The shared mode is very similar to the default behavior. Unless
specified, the behavior defaults to "shared".
4. Write driver methods to provision containers with dedicated cpusets.
The logic of 'what' cpusets should be picked up for a given requests lies
in the control of the zun code and is not exposed to the user.
5. The cpu-policy parameter is specified in conjunction with the vcpus field
for container creation. The number of vcpus shall determine the number of
cpusets requested for dedicated usage.
6. If this feature is being used with the zun scheduler, then the scheduler
needs to be aware of the host capabilities to choose the right host.
For example::
$ zun run -i -t --name test --cpu 4 --cpu-policy dedicated <image>
We would try to support scheduling using both of these policies on the same
host.
How does it work internally?
Once the user specifies the number of cpus, we would try to select a NUMA
node that has at least that many unpinned cpusets available to satisfy the
request.
Once the cpusets and their corresponding NUMA node are determined by the
scheduler, a driver method should be called for the actual provisioning of
the request on the compute node. Corresponding updates would be made to the
inventory table.
In case of the docker driver - this can be achieved by a docker run
equivalent::
$ docker run -d --cpuset-cpus="1,3" --cpuset-mems="1,3" ubuntu
The cpuset-mems option would allow memory access for the cpusets to
stay localized.
If the container is in a paused/stopped state, the DB will still continue to
hold the pinned cpuset information for the container instead of releasing it.
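
As an illustration of the selection logic described above, a simplified
sketch follows. The structure of the NUMA topology records is an assumption
of this sketch, not the final data model:

.. code-block:: python

    def pick_dedicated_cpusets(numa_nodes, requested_cpus):
        """Pick unpinned cpusets from a single NUMA node (illustrative)."""
        for node in numa_nodes:
            free = sorted(set(node['cpuset']) - set(node['pinned_cpus']))
            if len(free) >= requested_cpus:
                # Pin the chosen cpus on this node; memory stays local.
                return node['id'], free[:requested_cpus]
        raise Exception('No NUMA node can satisfy the request')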
Design Principles
-----------------
1. Build a host capability model that can be leveraged by the zun scheduler.
2. Create abstract user policies for the cloud user instead of raw
values.
Alternatives
------------
None
Data model impact
-----------------
- Introduce a new field in the container object called 'cpu_policy'.
- Add a new numa.py object to store the inventory information.
- Add numa topology obtained to the compute_node object.
REST API impact
---------------
The existing container CRUD APIs should allow a new parameter
for cpu policy.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Sudipta Biswas ([email protected])
Other contributors:
Hongbin Lu, Pradeep Singh
Work Items
----------
1. Create the new schema.
2. Add cpu_policy field in the REST APIs and zun client.
3. Write logic to hydrate the inventory tables.
4. Implement a periodic task that inventories the host.
5. Write logic to check the cpusets of a given host.
6. Implement unit/integration test.
Dependencies
============
Testing
=======
Each patch will have unit tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/cpuset-container.rst | cpuset-container.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
============================
Container Cinder Integration
============================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/cinder-zun-integration
Zun needs to manage containers as well as their associated IaaS resources,
such as IP addresses, security groups, ports, volumes, etc.
Zun containers should be able to use volumes backed by multiple storage
vendors or plugins.
As Zun is a project in the OpenStack ecosystem, it has integration
with Cinder, which is the block storage service.
Problem description
===================
Persisting data in a container is infeasible. The root file system of a Docker
container is a Union File System. Data residing in a Union File System is
ephemeral because the data will be lost whenever the container is deleted or
the host goes down. In addition, the performance of persisting a large amount
of data into a Union File System is suboptimal.
To address the use cases that require persisting a large amount of data,
a common solution is to leverage the Docker data volume. A data volume is a
specially-designated directory within one or more containers that bypasses
the Union File System [1]. It is designed for storing data and share the data
across containers. Data volume can be provisioned by directly mounting a host
directory, or by a volume plugin that interfaces with a cloud storage backend.
Proposed change
===============
This spec proposes the following changes.
1. Enhance existing API to support bind-mounting Cinder volumes to a container
as data volumes.
2. Define a pluggable interface that can be implemented by different volume
drivers. A volume driver is a module that is responsible for managing Cinder
volumes for containers. Initially, we are going to provide two drivers:
a Cinder driver and a Fuxi driver.
Cinder driver
=============
This driver is responsible to manage the bind-mounting of Cinder volumes.
If users want to create a container with a volume, they are required to
pre-create the volume in Cinder. Zun will perform the necessary steps to make
the Cinder volume available to the container, which typically includes
retrieving volume information from Cinder, connecting to the volume by using
os-brick library, mounting the connected volume into a directory in the
host's filesystem, and calling Docker API to bind-mount the specific directory
into the container.
The typical workflow to create a container with a Cinder volume will be as
following:
1. A user calls Zun APIs to create a container with a volume::
$ zun run --volume-driver=cinder -v my-cinder-volume:/data cirros
2. After receiving this request, Zun will make an API call to Cinder to
reserve the volume. This step will update the status of the volume to
"attaching" in Cinder to ensure it cannot be used by other users::
cinderclient.volumes.reserve(volume)
3. Zun makes an API call to Cinder to retrieve the connection information::
conn_info = cinderclient.volumes.initialize_connection(volume, ...)
4. Zun uses os-brick library with the returned connection to do the connect.
A successful connection should return the device information that will be
used for mounting::
device_info = brick_connector.connect_volume(conn_info)
5. Zun makes an API call to Cinder to finalize the volume connection.
This will update the status of the volume from "attaching" to "attached"
in Cinder::
cinderclient.volumes.attach(volume)
6. Zun mounts the storage device (provided by step 4) into a directory in the
host's filesystem, and calls Docker API to create a container and use
that directory as a data volume::
$ docker run -d -v /opt/stack/data/zun/mnt/<uuid>:/data cirros
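
Putting steps 2-5 together, a simplified attach sequence could look like the
sketch below. It assumes an authenticated python-cinderclient instance and
omits error handling and rollback; ``my_ip`` is the compute host's IP:

.. code-block:: python

    from os_brick.initiator import connector

    def attach_volume(cinderclient, volume, mountpoint, my_ip):
        # Step 2: reserve the volume so it cannot be used by others.
        cinderclient.volumes.reserve(volume)
        # Step 3: retrieve the connection information from Cinder.
        props = connector.get_connector_properties(
            'sudo', my_ip, multipath=False, enforce_multipath=False)
        conn_info = cinderclient.volumes.initialize_connection(volume, props)
        # Step 4: connect via os-brick; returns the local device info.
        brick = connector.InitiatorConnector.factory(
            conn_info['driver_volume_type'], 'sudo')
        device_info = brick.connect_volume(conn_info['data'])
        # Step 5: finalize the attachment in Cinder.
        cinderclient.volumes.attach(volume, instance_uuid=None,
                                    mountpoint=mountpoint)
        return device_info['path']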
The typical workflow to delete a container with a Cinder volume will be as
following:
1. A user calls Zun APIs to delete a container::
$ zun delete my-container
2. After receiving this request, Zun will make an API call to Cinder to
begin detaching the volume. This will update the status of the volume to
"detaching" state::
cinderclient.volumes.begin_detaching(volume)
3. Zun uses os-brick library to disconnect the volume::
device_info = brick_connector.disconnect_volume(conn_info)
4. Zun makes an API call to Cinder to terminate the connection::
conn_info = cinderclient.volumes.terminate_connection(volume, ...)
5. Zun makes an API call to Cinder to finalize the volume disconnection.
This will update the status of the volume from "detaching" to "available"
in Cinder::
cinderclient.volumes.detach(volume)
Fuxi driver
===========
Fuxi is a new OpenStack project which aims to integrate Cinder with Docker
volumes. Fuxi can be used as the unified persistent storage provider for
various storage services such as Cinder and Manila.
The Cinder integration is enabled using the Fuxi driver from Zun. We need
to implement a Cinder driver in Zun which manages volumes, and let Fuxi
control mounting/unmounting volumes from Docker containers.
There are two approaches Docker provides to add a volume to a container.
1. Using Docker run::
$ docker run -d --volume-driver=fuxi -v my-named-volume:/data --name web_app <image>
2. Create volume first & then add it to Container::
$ docker volume create --driver fuxi \
--name my-named-volume \
-o size=1 \
-o fstype=ext4 \
-o multiattach=true
$ docker run -d --name web_app -v my-named-volume:/data <image>
I think we can support both.
1. To implement the first approach, we need the following changes
- Introduce fields in the Container API - volume-driver, vol-name, vol-size.
- We pass the call to the volume driver to create the volume.
- The volume driver connects to Cinder & handles the volume creation.
- Once the volume is created in Cinder, we finally set the volume driver
to Fuxi & add the volume name that was created in Cinder.
- Fuxi should be installed on the Docker host and configured with the Cinder
engine.
2. To implement the second approach, we need the following changes
- Introduce a Volume API in Zun which has the fields volume-driver,
volume-name, volume-size, etc.
- The Volume API will connect to the volume driver, which will sit under
/zun/volume/driver.py.
- The volume driver connects to Cinder and handles the volume creation in
Cinder.
- Once the volume is created in Cinder, it communicates with the Docker
volume API to attach the created volume in Docker.
- The Docker volume API uses --driver=fuxi, which talks to Cinder and
attaches the created volume in Docker.
- The prerequisite here is that Fuxi should be installed on the Docker host
and configured with Cinder. If not, a 500 response is returned.
- We also need to introduce a new volume table which contains the vol-driver,
vol-name, and vol-size fields.
- We need to add a storage section in the conf file, where we can specify
some default attributes like the storage engine (Cinder), the Cinder
endpoint, etc.
- We also need to configure the Cinder endpoint in the Fuxi conf file.
- We can use the same implementation for Flocker as well, since it supports
Cinder.
- I think we can create a separate CinderDriver which is called from the
volume driver. This approach enables a way to implement support for multiple
storage backends in the future, and we can plug in multiple storage
implementations.
The diagram below offers an overview of the system architecture. The Zun
service may communicate with Fuxi, and Fuxi talks to Cinder for volumes.
::
+---------+
| CLI |
+----+----+
|
+----+----+
|+-+ REST +----- Zun ----+|
|+-- --+|
|+------------+----------+|
|
|+--------------------+ Volume Driver+-------------+|
|| | | |
|| | | |
|| | | |
|| +-----------+ +---------------+ |
|| | Cinder | | Docker Volume | |
|| +-----------+ +---------------+ |
|| | | |
|| +---------+ +-----------+ |
|| | Fuxi | | Flocker | |
|| +----+----+ +-----------+ |
|+------------+ +---------------+ +----------------+|
| |
+---------------------------------------------------+
Design Principles
-----------------
1. Similar user experience between VMs and containers. In particular, the ways
to configure volumes of a container should be similar to the VM equivalent.
2. Full-featured container APIs.
Alternatives
------------
1. We can use rexray [2] for storage support, but it is again a third-party
tool, which increases dependencies.
Data model impact
-----------------
Add volume-driver, vol-name, and size fields to the volume table.
We need to add a volume_id field to the container table.
REST API impact
---------------
We need to add below APIs
1. Create a volume - POST /v1/volumes
2. List volumes - GET /v1/volumes
3. Inspect volume - GET /v1/volumes/<uuid>
4. Delete Volume - DELETE /v1/volumes/<uuid>
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
Deployers need to deploy a Fuxi and Cinder.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Digambar
Other contributors:
Work Items
----------
1. We need to introduce new Volume API.
2. Implement volume driver in Zun.
3. Implement Cinder calls under the volume driver.
4. Implement Docker volume support in Zun.
5. Add volume section in zun.conf.
6. Add volume-driver support in CLI.
7. Implement unit/integration test.
Dependencies
============
Add a dependency to Cinder.
Testing
=======
Each patch will have unit tests, and the feature will be covered by Tempest
functional tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
References
==========
[1] https://docs.docker.com/engine/tutorials/dockervolumes/
[2] https://github.com/codedellemc/rexray
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/cinder-integration.rst | cinder-integration.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
===============================
PCI passthrough device modeling
===============================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/container-pci-device-modeling
PCI passthrough enables full access and direct control of a physical PCI
device in a Zun container. With PCI passthrough, the full physical device
is assigned to only one container and cannot be shared. This mechanism is
generic for any kind of PCI devices. For example, it runs with a Network
Interface Card (NIC), Graphics Processing Unit (GPU), or any other devices
that can be attached to a PCI bus. To properly use the devices, containers
are required to install the correct driver of the device.
Some PCI devices provide Single Root I/O Virtualization and Sharing (SR-IOV)
capabilities to share the PCI device among different VMs or containers. When
SR-IOV is used, a physical device is virtualized and appears as multiple PCI
devices. Virtual PCI devices are assigned to the same or different containers.
Since the Ocata release, OpenStack Nova enables flavor-based PCI passthrough
device management. This design will not depend on Nova's implementation of
PCI passthrough. However, the design and implementation in Zun will refer to
the existing Nova work and try to be consistent with Nova's implementation.
Problem description
===================
Currently, the Zun scheduler can only schedule workloads with requests for
regular compute resources, such as CPUs and RAM. There are some emerging use
cases requiring containers to access resources such as GPUs, NICs and so on.
PCI passthrough and SR-IOV are the common technologies to enable such use
cases.
To support the new use cases, the new resources will be added to Zun compute
resource model, and allow Zun scheduler to place the work load according to
the availability of the resources.
Proposed change
===============
1. Introduce a new configuration to abstract the PCI devices at Zun Compute.
A new PCI passthrough whitelist configuration will allow cloud
administrators to explicitly define a list PCI devices to be available
for Zun Compute services. The whitelist of PCI devices should be common
for both network PCI devices and other compute or storage PCI devices.
The PCI whitelist can be implemented exactly as Nova[1]:
In zun.conf, whitelist entries are defined as the following:
::
pci_passthrough_whitelist = {<entry>}
Each whitelist entry is defined in the format:
["vendor_id": "<id>",]
["product_id": "<id>",]
["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
"devname": "PCI Device Name",]
["tag":"<tag_value>",]
The valid key values are:
"vendor_id": Vendor ID of the device in hexadecimal.
"product_id": Product ID of the device in hexadecimal.
"address": PCI address of the device.
"devname": Device name of the device (for e.g. interface name). Not all
PCI devices have a name.
"<tag>": Additional <tag> and <tag_value> used for matching PCI devices.
Supported <tag>: "physical_network". The pre-defined tag
"physical_network" is used to define the physical network, that
the SR-IOV NIC devices are attached to.
2. Introduce a new configuration pci alias to allow zun to specify the PCI
device without needing to repeat all the PCI property requirement.
For example,
::
alias = {
"name": "QuickAssist",
"product_id": "0443",
"vendor_id": "8086",
"device_type": "type-PCI"
}
defines an alias for the Intel QuickAssist card. Valid key values are
"name": Name of the PCI alias.
"product_id": Product ID of the device in hexadecimal.
"vendor_id": Vendor ID of the device in hexadecimal.
"device_type": Type of PCI device. Valid values are: "type-PCI",
"type-PF" and "type-VF".
The typical workflow will be as following:
1. A cloud admin configures PCI-PASSTHROUGH alias at /etc/zun.conf on the
openstack controller nodes.
::
[default]
pci_alias = {
"name": "QuickAssist",
"product_id": "0443",
"vendor_id": "8086",
"device_type": "type-PCI"
}
2. Cloud admin enables the PCI-PASSTHROUGH filter to /etc/zun.conf at
openstack controller nodes.
::
scheduler_available_filters=zun.scheduler.filters.all_filters
scheduler_default_filters= ..., PciPassthroughFilter
3. Cloud admin restarts the Zun-API service to make the configuration
effective;
4. Cloud admin adds available PCI Passthrough devices to the whitelist of
/etc/zun.conf at Zun compute nodes. An example can be the following:
::
[default]
pci_passthrough_whitelist = {
"product_id": "0443",
"vendor_id": "8086",
"device_type": "type-PCI",
"address": ":0a:00."
}
All PCI devices matching the vendor_id and product_id are added to the pool
of PCI devices available for passthrough to Zun containers.
5. Cloud admin restarts Zun Compute service to make the configuration
effective.
6. Each Zun Compute service updates the PCI passthrough devices' availability
to the Zun Scheduler periodically.
7. A cloud user creates a new container with a request for PCI passthrough
devices. For example, the following command will create a test_QuickAssist
container with one PCI device named "QuickAssist" attached. The design and
implementation details of creating a workload with PCI passthrough devices
are out of the scope of this design spec. Please refer to the other
blueprints (TBD) for more details.
$ zun create --pci_passthrough QuickAssist:1 test_QuickAssist
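
For illustration, a much-simplified sketch of matching a discovered PCI
device against the whitelist follows. Unlike Nova's real matching logic,
this sketch only handles exact matches (no partial address globs), and the
dict layout is an assumption:

.. code-block:: python

    def device_assignable(device, whitelist):
        """Return True if `device` matches any whitelist entry (sketch)."""
        keys = ('vendor_id', 'product_id', 'address', 'devname')
        for entry in whitelist:
            if all(device.get(k) == v
                   for k, v in entry.items() if k in keys):
                return True
        return False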
Alternatives
------------
It is more desirable to define workloads using flavors. PCI passthrough
configurations, in particular pci_alias, can be included in the flavor
configuration [2][3]. Thus users would use the flavor to specify the PCI
device to be used for a container.
Integration with OpenStack Cyborg is another mid-to-long-term alternative [4].
Cyborg, as a service for managing accelerators of any kind, needs to cooperate
with Zun on two planes: first, Cyborg should inform Zun about the resources
through the placement API [5], so that the scheduler can translate a user
request for particular functionality into the assignment of a specific
resource, using a resource provider which possesses an accelerator; second,
Cyborg should be able to provide information on how Zun compute can attach a
particular resource to containers.
Data model impact
-----------------
- Introduce a new object list pci-alias, which is a list of alias object:
::
fields = {
"name" : fields.StringField(nullable=False),
"vendor_id": fields.StringField(nullable=False),
"product_id": fields.StringField(nullable=False),
"device_type": fields.StringField(nullable=False)
}
- Introduce a new field in the container object called 'pci-alias-usage',
for example:
"pci_alias_name": fields.StringField(nullable=False),
"count": fields.IntField(nullable=True)
- Add pci-devices to the compute_node object. Each pci-device should have
the following fields as an example:
::
{
"vendor_id": fields.StringField(nullable=False),
"product_id": fields.StringField(nullable=False),
"address": fields.StringField(nullable=True),
"devname": fields.StringField(nullable=True),
"physical_network": fields.StringField(nullable=True),
}
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Other contributors:
Work Items
----------
1. Implement code to read and validate the pci_alias configuration;
2. Implement code to read and validate the pci_whitelist configuration;
3. Implement the new pci-alias model, and verify whether a pci_alias matches
a given pci_whitelist entry when a new Zun compute service becomes available;
4. Implement unit/integration tests.
Dependencies
============
The full PCI passthrough functionality will depend on other components in
Zun or outside of Zun, such as Neutron and Kuryr;
Supporting GPU PCI passthrough will require support from the NVIDIA Docker
runtime;
Supporting SR-IOV NIC PCI passthrough will require SR-IOV port binding from
Kuryr.
Testing
=======
Each patch will have unit tests, and the feature will be covered by Tempest
functional tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
References
==========
[1] https://docs.openstack.org/nova/latest/admin/pci-passthrough.html
[2] PCI flavor-based device assignment https://docs.google.com/
document/d/1vadqmurlnlvZ5bv3BlUbFeXRS_wh-dsgi5plSjimWjU
[3] https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
[4] https://review.openstack.org/#/c/448228/
[5] https://docs.openstack.org/nova/latest/user/placement.html
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/pci-device-model.rst | pci-device-model.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
==============
API Validation
==============
`bp api-json-input-validation <https://blueprints.launchpad.net/zun/+spec/api-json-input-validation>`_
The purpose of this blueprint is to track the progress of validating the
request bodies sent to the Zun API server, accepting requests
that fit the resource schema and rejecting requests that do not fit the
schema. Depending on the content of the request body, the request should
be accepted or rejected consistently regardless of the resource the request
is for.
Problem Description
===================
Currently, Zun validates each type of resource in the request body by
defining a type class for that resource. Although such an approach works, it
is not very scalable. It also requires conversion of the request body to a
controller object and vice versa.
Use Case: As an End User, I want to observe consistent API validation and
values passed to the Zun API server.
Proposed Change
===============
One possible way to validate the Zun API is to use jsonschema
(https://pypi.org/project/jsonschema/). A jsonschema validator object can
be used to check each resource against an appropriate schema for that
resource. If the validation passes, the request can follow the existing flow
of control to the resource manager. If the request body parameters fail the
validation specified by the resource schema, a validation error will be
returned from the server.
Example:
"Invalid input for field 'cpu'. The value is 'some invalid cpu value'.
We can build in some sort of truncation check in case the value of the
attribute is too long. For example, if someone tries to pass in a
300-character container name, we should check for that case and return only a
useful message, instead of spamming the logs. Truncating a really long
container name might not help readability for the user, so return a message
to the user with what failed validation.
Example:
"Invalid input for field 'name'."
Some notes on doing this implementation:
* Common parameter types can be leveraged across all Zun resources. An
example of this would be as follows::
from zun.common.validation import parameter_types
<snip>
CREATE = {
'type': 'object',
'properties': {
'name': parameter_types.name,
'image': parameter_types.image,
'command': parameter_types.command,
<snip>
},
'required': ['image'],
'additionalProperties': True,
}
* The validation can take place at the controller layer.
* When adding a new extension to Zun, the new extension must be proposed
with its appropriate schema.
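
As an illustration, validation at the controller layer could be wrapped in a
small helper like the sketch below; in Zun the ValueError would be raised as
a proper API exception instead:

.. code-block:: python

    import jsonschema

    def validate(request_body, schema):
        """Validate a request body against a schema (illustrative sketch)."""
        try:
            jsonschema.validate(request_body, schema)
        except jsonschema.ValidationError as ex:
            # Build the "Invalid input for field ..." message shown above.
            field = '.'.join(str(p) for p in ex.absolute_path) or '<body>'
            raise ValueError("Invalid input for field '%s'." % field)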
Alternatives
------------
`Voluptuous <https://github.com/alecthomas/voluptuous>`_ might be another
option for input validation.
Data Model Impact
-----------------
This blueprint shouldn't require a database migration or schema change.
REST API Impact
---------------
This blueprint shouldn't affect the existing API.
Security Impact
---------------
None
Notifications Impact
--------------------
None
Other End User Impact
---------------------
None
Performance Impact
------------------
Changes required for request validation do not require any locking mechanisms.
Other Deployer Impact
---------------------
None
Developer Impact
----------------
This will require developers contributing new extensions to Zun to have
a proper schema representing the extension's API.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
pksingh (Pradeep Kumar Singh <[email protected]>)
Work Items
----------
1. Initial validator implementation, which will contain common validator code
designed to be shared across all resource controllers validating request
bodies.
2. Introduce validation schemas for existing core API resources.
3. Enforce validation on proposed core API additions and extensions.
Dependencies
============
None
Testing
=======
Tempest tests can be added as each resource is validated against its schema.
Documentation Impact
====================
None
References
==========
Useful Links:
* [Understanding JSON Schema] (https://spacetelescope.github.io/understanding-json-schema/reference/object.html)
* [Nova Validation Examples] (https://opendev.org/openstack/nova/src/branch/master/nova/api/validation)
* [JSON Schema on PyPI] (https://pypi.org/project/jsonschema/)
* [JSON Schema core definitions and terminology] (https://tools.ietf.org/html/draft-zyp-json-schema-04)
* [JSON Schema Documentation] (https://json-schema.org/documentation.html)
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/zun-api-validation.rst | zun-api-validation.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================
Container Composition
=====================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/introduce-compose
Kubernetes pods [1] and Docker Compose [2] are popular for deploying
applications or application components that span multiple containers. A pod
is a basic unit for scheduling and resource allocation. This spec proposes to
support a similar feature in Zun: we can put multiple containers, a sandbox
and other related resources into one unit, and we name the unit ``capsule``.
The containers in a ``capsule`` are relatively tightly coupled; they share
the capsule's context such as Linux namespaces and cgroups, and they work
together closely to form a cohesive unit of service.
Problem description
===================
Currently, running or deploying a single container for an operation is not
very effective in microservices, while running multiple different containers
as an integrated unit is widely used in different scenarios, such as the pod
in Kubernetes. The pod has independent network and storage, while Compose
provides an easy way of defining and running multi-container Docker
applications. They are becoming the basic units for container application
scenarios.

Nowadays Zun doesn't support creating and running multiple containers as an
integrated unit, so we will introduce the new object ``capsule`` to realize
this function. A ``capsule`` is the basic unit for Zun to provide services
externally. The ``capsule`` will be designed based on some similar concepts
such as the pod and Compose. For example, a ``capsule`` can be specified in a
yaml file whose format might be similar to that of a k8s pod manifest.
However, the specification of ``capsule`` will be exclusive to Zun. The
details will be shown in the following section.
Proposed change
===============
A ``capsule`` has the following properties:
* Structure: It can contain one or multiple containers, and has a sandbox
container which will provide the network namespace for the capsule.
* Scheduler: Containers inside a capsule are scheduled as a unit; thus all
containers inside a capsule are co-located. All containers inside a capsule
will be launched on one compute host.
* Network: Containers inside a capsule share the same network namespace, so
they share IP address(es) and can find each other via localhost by using
different remapping network port. Capsule IP address(es) will re-use the
sandbox IP. Containers communication between different capsules will use
capsules IP and port.
* LifeCycle: A capsule has different statuses:
Starting: The capsule is created, but one or more containers inside the
capsule are still being created.
Running: The capsule is created, and all the containers are running.
Finished: All containers inside the capsule have successfully executed
and exited.
Failed: Capsule creation failed
* Restart Policy: A capsule will have a restart policy just like a container.
The restart policy relies on the container restart policy to execute.
* Health checker:
In the first step of the realization, each container inside the capsule will
send its status to the capsule when its status changes.
* Upgrade and rollback:
Upgrade: Support capsule update (different from zun update). That means the
container image will be updated: launch the new capsule from the new image,
then destroy the old capsule. The capsule IP address will change. For
volumes, this needs to be clarified after the Cinder integration.
Rollback: When the update fails, roll back to the original status.
* CPU and memory resources: Given the host resource allocation, CPU and
memory support will be implemented.
Implementation:
1. Introduce a new abstraction called ``capsule``. It represents a tightly
coupled unit of multiple containers and other resources like a sandbox. The
containers in a capsule share the capsule's context like Linux namespaces
and cgroups.
2. Support the CRUD operations against the capsule object; a capsule should
be a basic unit for scheduling and spawning. To be more specific, all
containers in a capsule should be scheduled to and spawned on the same host.
The server side will keep the information in the DB.
3. Add functions about yaml file parser in the CLI side. After parsing the
yaml, send the REST to API server side, scheduler will decide which host to
run the capsule.
4. Introduce new REST API for capsule. The capsule creation workflow is:
CLI Parsing capsule information from yaml file -->
API server do the CRUD operation, call scheduler to launch the capsule,
from Cinder to get volume, from Kuryr to get network support -->
Compute host launch the capsule, attach the volume -->
Send the status to API server, update the DB.
5. Capsule creation will finally depend on the backend container driver.
The Docker driver is chosen first.
6. Define a yaml file structure for capsule. The yaml file will be compatible
with Kubernetes pod yaml file, at the same time Zun will define the
available properties, metadata and template of the yaml file. In the first
step, only essential properties will be defined.
The diagram below offers an overview of the architecture of ``capsule``.
::
+-----------------------------------------------------------+
| +-----------+ |
| | | |
| | Sandbox | |
| | | |
| +-----------+ |
| |
| |
| +-------------+ +-------------+ +-------------+ |
| | | | | | | |
| | Container | | Container | | Container | |
| | | | | | | |
| +-------------+ +-------------+ +-------------+ |
| |
| |
| +----------+ +----------+ |
| | | | | |
| | Volume | | Volume | |
| | | | | |
| +----------+ +----------+ |
| |
+-----------------------------------------------------------+
Yaml format for ``capsule``:
Sample capsule:
.. code-block:: yaml
apiVersion: beta
kind: capsule
metadata:
name: capsule-example
lables:
app: web
restartPolicy: Always
hostSelector: node1
spec:
containers:
- image: ubuntu:trusty
command: ["echo"]
args: ["Hello World"]
imagePullPolicy: Always
imageDriver: Glance
workDir: /root
labels:
app: web
volumeMounts:
- name: volume1
mountPath: /root/mnt
readOnly: True
ports:
- name: nginx-port
containerPort: 80
hostPort: 80
protocol: TCP
env:
PATH: /usr/local/bin
resources:
requests:
cpu: 1
memory: 2GB
volumes:
- name: volume1
driver: cinder
driverOptions: options
size: 5GB
volumeType: type1
image: ubuntu-xenial
Capsule fields:
* apiVersion(string): the first version is beta
* kind(string): the flag to show yaml file property
* metadata(ObjectMeta): metadata Object
* spec(CapsuleSpec): capsule specifications
* restartPolicy(string): [Always | Never | OnFailure], by default is Always
* hostSelector(string): Specify the host that will launch the capsule
ObjectMeta fields:
* name(string): capsule name
* labels(dict, name: string): labels for the capsule
CapsuleSpec fields:
* containers(Containers array): container info array; one capsule may have
multiple containers
* volumes(Volumes array): volume information
Containers fields:
* name(string): name for container
* image(string): container image for container
* imagePullPolicy(string): [Always | Never | IfNotPresent]
* imageDriver(string): glance or dockerRegistry; the default is taken from
the zun configuration
* command(string): container command when starting
* args(string): container args for the command
* workDir(string): workDir for the container
* labels(dict, name:string): labels for the container
* volumeMounts(VolumeMounts array): volumeMounts information for the container
* ports(Ports array): Port mapping information for container
* env(dict, name:string): environment variables for container
* resources(ResourcesObject): resources that the container needs
VolumeMounts fields:
* name(string): volume name that listed in below field "volumes"
* mountPath(string): mount path that in the container, absolute path
* readOnly(boolean): read only flags
Ports fields:
* name(string): port name, optional
* containerPort(int): port number that container need to listen
* hostPort(int): port number that capsule need to listen
* protocol(string): TCP or UDP, by default is TCP
ResourcesObject fields:

* requests(AllocationObject): the resources that the capsule needs
AllocationObject:
* cpu(string): cpu resources, cores number
* memory(string): memory resources, MB or GB
Volumes fields:
* name(string): volume name
* driver(string): volume drivers
* driverOptions(string): options for volume driver
* size(string): volume size
* volumeType(string): volume type that Cinder needs; by default taken from
the Cinder config
* image(string): the image that Cinder needs to boot the volume from
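
As a rough illustration of the CLI-side yaml parsing mentioned in the
implementation steps, a minimal sketch follows. The checks shown are
illustrative only, not the full validation:

.. code-block:: python

    import yaml

    def load_capsule_spec(path):
        """Parse a capsule yaml file into a dict (illustrative sketch)."""
        with open(path) as f:
            capsule = yaml.safe_load(f)
        if capsule.get('kind') != 'capsule':
            raise ValueError("'kind' must be 'capsule'")
        containers = capsule.get('spec', {}).get('containers') or []
        if not containers:
            raise ValueError('a capsule needs at least one container')
        for container in containers:
            if 'image' not in container:
                raise ValueError('every container needs an image')
        return capsule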
Alternatives
------------
1. Abstract all the information from yaml file and implement the capsule CRUD
in client side.
2. Implement the CRUD in server side.
Data model impact
-----------------
* Add a field to the container object to store the id of the capsule which
includes the container
* Create a 'capsule' table. Each entry in this table is a record of a capsule.
.. code-block:: python
Introduce the capsule Object:
fields = {
'capsuleVersion': fields.StringField(nullable=True),
'kind': fields.StringField(nullable=True),
'id': fields.IntegerField(),
'uuid': fields.UUIDField(nullable=True),
'name': fields.StringField(nullable=True),
'project_id': fields.StringField(nullable=True),
'user_id': fields.StringField(nullable=True),
'status': z_fields.ContainerStatusField(nullable=True),
'status_reason': fields.StringField(nullable=True),
# contains the readable message that shows why the capsule is in this status
# 'key': 'value' --> 'time': 'message'
'message': fields.DictOfStringsField(nullable=True),
'startTime': fields.StringField(nullable=True),
'cpu': fields.FloatField(nullable=True),
'memory': fields.StringField(nullable=True),
'task_state': z_fields.TaskStateField(nullable=True),
'host': fields.StringField(nullable=True),
'restart_policy': fields.DictOfStringsField(nullable=True),
'meta': fields.DictOfStringsField(nullable=True),
'volumes': fields.DictOfStringsField(nullable=True),
'ip': fields.StringField(nullable=True),
'labels': fields.DictOfStringsField(nullable=True),
'ports': z_fields.ListOfIntegersField(nullable=True),
'hostname': fields.StringField(nullable=True),
}
REST API impact
---------------
* Add a new API endpoint /capsule to the REST API interface.
* Capsule API: The capsule is considered to support multiple operations as a
container composition.
* Container API: Many container APIs will be extended to capsules. This
section defines the API usage scope.
::
Capsule API:
list <List all the capsules; add parameters to list capsules with the same labels>
create <-f yaml file><-f directory>
describe <display the details state of one or more resource>
delete
<capsule name>
<-l name=label-name>
<–all>
run <--capsule ... container-image>
If "--capsule .." is set, the container will be created inside the capsule.
Otherwise, it will be created as normal.
Container API:
* show/list allow all containers
* create/delete allow bare container only (disallow in-capsule containers)
* attach/cp/logs/top allow all containers
* start/stop/restart/kill/pause/unpause allow bare container only (disallow in-capsule containers)
* update: for a container in a capsule, the <--capsule> param is needed.
A bare container doesn't need it.
Security impact
---------------
None
Notifications impact
--------------------
Need to support "zun notification" for capsule events
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
The implementation is divided into the following parts:
1. Define the ``capsule`` data structure. Take Kubernetes Pod as a
reference.
2. Define the yaml structure for ``capsule`` and add the parser for the
yaml file. The parser is realized in the CLI: the CLI parses info from the
yaml and then sends it to the API server.
3. Implement a new API endpoint for capsule, including capsule life
cycle and information.
4. Implement the API server side, including DB CRUD, compute node
scheduler, etc.
5. Implement the compute server side, now using Docker Driver first.
The first step will just realize several containers in the same
sandbox sharing the same network namespace. Storage sharing in
the capsule will be added after the Cinder integration.
We will split the implementation into several blueprints for easy task
tracking.
Assignee(s)
-----------
Primary assignee:
Wenzhi Yu <yuywz>
Kevin Zhao <kevinz>
Work Items
----------
1. Implement a new API endpoint for capsules.
2. Implement unit/integration test.
3. Document the new capsule API.
Dependencies
============
1. Need to add support for select host to launch capsule
2. Need to add support for port mapping
3. Need to support "zun notification" for capsule events
Testing
=======
Each patch will have unit tests, and the feature will be covered by Tempest
functional tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
References
==========
[1] https://kubernetes.io/
[2] https://docs.docker.com/compose/
[3] https://etherpad.openstack.org/p/zun-container-composition
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/container-composition.rst | container-composition.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
==================
Container Snapshot
==================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/container-snapshot
Zun needs to snapshot a running container and make it available to users.
Potentially, a user can restore the container from this snapshot image.
Problem description
===================
It is a common requirement from users of containers to save the changes of a
current running container to a new image. Zun currently does not support
taking a snapshot of a container.
Proposed change
===============
1. Introduce a new CLI command to enable a user to take a snapshot of a running
container instance::
$ zun commit <container-name> <image-name>
$ zun help commit
usage: zun commit <container-name> <image-name>
Create a new image by taking a snapshot of a running container.
Positional arguments:
<container-name> Name or ID of container.
<image-name> Name of snapshot.
2. Extend docker driver to enable "docker commit" command to create a
new image.
3. The new image should be accessible from other hosts. There are two
options to support this:
a) upload the image to glance
b) upload the image to docker hub
Option a) will be implemented as default; future enhancement can be
done to support option b).
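
Putting steps 2 and 3a together, a simplified driver-side sketch could look
like the following. The Glance client is assumed to be authenticated already,
and the choice of disk/container formats is illustrative:

.. code-block:: python

    import docker

    def commit_container(container_id, image_name, glance):
        docker_client = docker.APIClient(
            base_url='unix://var/run/docker.sock')
        # Snapshot the container's filesystem into a local Docker image.
        docker_client.commit(container_id, repository=image_name)
        # Export the image tarball and upload it to Glance (option a).
        data = docker_client.get_image(image_name)
        image = glance.images.create(name=image_name, disk_format='raw',
                                     container_format='docker')
        glance.images.upload(image['id'], data)
        return image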
Design Principles
=================
Similar user experience between VMs and containers. In particular,
the way to snapshot a container should be similar to the VM equivalent.
Alternatives
============
1. Using linked volumes to persist changes in a container.
2. Use docker cp to copy data from the container onto the host machine.
Data model impact
=================
None
REST API impact
===============
Creates an image from a container.
Specify the image name in the request body.
After making this request, a user typically must keep polling the status of the
created image from glance to determine whether the request succeeded.
If the operation succeeds, the created image has a status of active. A user
can also see the new image in the image back end that the OpenStack Image
service manages.
Preconditions:
1. The container must exist.
2. User can only create a new image from the container when its status is
Running, Stopped, and Paused.
3. The connection to the Image service is valid.
::
POST /containers/<ID>/commit: commit a container
Example commit
{
"image-name" : "foo-image"
}
Response:
If successful, this method does not return content in the response body.
- Normal response codes: 202
- Error response codes: BadRequest(400), Unauthorized(401), Forbidden(403),
ItemNotFound(404)
Security impact
===============
None
Notifications impact
====================
None
Other end user impact
=====================
None
Performance Impact
==================
None
Other deployer impact
=====================
None
Developer impact
================
None
Implementation
==============
Assignee(s)
Primary assignee: Bin Zhou
Other contributors:
Work Items
1. Extend the docker driver to enable "docker commit".
2. Upload the generated image to glance.
3. Implement a new API endpoint for createImage.
4. Implement unit/integration test.
Dependencies
============
None
Testing
=======
Each patch will have unit tests, and the feature will be covered by Tempest
functional tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/container-snapshot.rst | container-snapshot.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
==========================
Container Interactive mode
==========================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/support-interactive-mode
Zun needs to support interactive mode, which is a basic function in Docker
and rkt. Currently a user can run commands inside a container using Zun.
With interactive mode, users will be able to perform interactive operations
on these containers using Zun APIs.

The implementation of interactive mode is driver-specific. Each driver that
implements this function needs to do different things with the API command
and the interactive interface. The Docker driver is taken as the first
container driver.
Problem description
===================
Zun containers now take Docker as the first container driver. Docker can use
"docker run -it" to enter interactive mode, where a user can do operations
that look like a chroot. Zun uses docker-py as the interface to access the
Docker daemon.

To reach the goal of realizing container interactivity, Zun needs to pass the
correct parameters when creating the container and start the container once
it has been created, so that a tty is created inside the container. The user
can connect stdin, stdout and stderr to get a pseudo-tty on the local client.
Since Kubernetes realizes this function, referring to the Kubernetes
realization is a feasible way:
https://github.com/kubernetes/kubernetes/pull/3763
For Kubectl interactive description, go to:
https://kubernetes.io/docs/user-guide/kubectl/kubectl_run/
Proposed change
===============
1. Let each Docker daemon listen on 0.0.0.0:port, so that zun-api can easily
talk with the Docker daemon. This might reduce the load on zun-compute a bit
and let zun-compute have more room to serve other workloads.
2. For the Docker daemon, two new parameters, tty and stdin_open, should
be added to the container fields and the corresponding database.
3. Zun api will wait for the container to start and pass the websocket link
to the Zun CLI.
4. For python-zunclient, it will filter the parameters and pick up the
correct ones for interactive mode. It will then open the connection to the
container in the local terminal after the "zun run" command returns. The Zun
CLI will directly connect to the websocket link from the Docker daemon (the
authorization problem will be fixed in a follow-up bp/bug).
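
For illustration, a minimal docker-py sketch of step 2 and the resulting
attach endpoint follows. The daemon address, the image and the attach URL
shape are assumptions of this sketch:

.. code-block:: python

    import docker

    docker_client = docker.APIClient(base_url='tcp://compute-host:2375')

    # Create the container with a pseudo-tty and stdin kept open.
    container = docker_client.create_container(
        image='cirros', command='/bin/sh', tty=True, stdin_open=True)
    docker_client.start(container)

    # The websocket attach URL handed back to the Zun CLI would look like:
    # ws://compute-host:2375/containers/<id>/attach/ws?stdin=1&stdout=1&stderr=1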
The following example gives an overview of the interactive mode flow,
e.g.: zun run -i --name test cirros /bin/sh
The sequence diagram is in this link:
https://github.com/kevin-zhaoshuai/workflow
Design Principles
-----------------
1. Keep commonality between Docker and other container drivers, for easy
integration of other drivers.
2. Take into account all the interactive conditions.
3. Pty connection functions need to be independent and extensible
Alternatives
------------
None
Data model impact
-----------------
Add some fields to the container object, including "tty", "stdin_open" and a
flag showing whether the container has been attached, e.g.
"attached" = "true"/"false".
REST API impact
---------------
Add a new API for "zun attach".
Zun CLIs will first send the message to zun-api, and zun-api will talk
directly with the Docker daemon. After the container is successfully started,
zun-api notifies the Zun CLIs with the attach URL. Zun CLIs will then attach
their stdin, stdout and stderr to the container. Since the Zun CLIs connect
to the websocket of the Docker daemon, the HTTP request (from Zun CLIs to
zun-api) is not hijacked, and the user can run other Zun API commands in
another terminal.
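A minimal sketch of the CLI side, assuming the ``websocket-client`` library
and Docker's ``/containers/{id}/attach/ws`` endpoint (the actual URL is
whatever zun-api returns)::

    import websocket

    url = ('ws://<docker_host>:2375/containers/<id>/attach/ws'
           '?stream=1&stdin=1&stdout=1&stderr=1')
    ws = websocket.create_connection(url)
    ws.send('echo hello\n')   # forwarded to the container's stdin
    print(ws.recv())          # stdout/stderr coming back
    ws.close()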
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
Future integrations with other container drivers will need to tweak some code
around the client pty connection.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kevin Zhao
Other contributors:
Work Items
----------
1. Implement a function for connecting to the tty inside the container.
2. Modify the zun run and zun exec code about the interactive.
3. Implement unit/integration test.
Dependencies
============
None
Testing
=======
Each patch will have unit tests.
Documentation Impact
====================
A set of documentation for this new feature will be required.
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/container-interactive-mode.rst | container-interactive-mode.rst |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
=================
Kuryr Integration
=================
Related Launchpad Blueprint:
https://blueprints.launchpad.net/zun/+spec/kuryr-integration
Zun provides APIs for managing application containers, and the implementation
of the APIs is provided by drivers. Currently, Zun has two built-in drivers:
the native Docker driver and the Nova Docker driver. The Nova driver leverages
existing Nova capability to provide networking for containers. However, the
native Docker driver doesn't have an ideal solution for networking yet.
This spec proposed to leverage Kuryr-libnetwork [1] for providing networking
for containers. Generally speaking, Kuryr-libnetwork is a remote Docker
networking plugin that receives requests from Docker engine and interfaces
with Neutron for managing the network resources. Kuryr-libnetwork provides
several features, such as IPAM, Neutron port binding, etc., all of which
could be leveraged by Zun.
Problem description
===================
Currently, the native Docker driver doesn't integrate with Neutron. It uses
the default networking capability provided by Docker engine. Containers
created by that driver have limited networking capabilities, and they
are not able to communicate with other OpenStack resources (i.e. Nova
instances).
Proposed change
===============
1. Introduce a network abstraction in Zun. Implement an API for users to
create/delete/manage container networks backed by Kuryr. If the container
runtime is Docker, creating a container network will call docker network
APIs to create a Docker network by using the pre-created Neutron resources
(this capability is provided by Kuryr [2][3]). Deleting a container network
will be similar as create.
2. Support creating a container from a network. If a user specifies a network
on creating a container, the container will be created from the network.
If not, the driver will take care the networking of the container. For
example, some drivers might choose to create the container from a default
network that might be specified in a config file or hard-coded. It is up to
individual driver to decide how to create a container from a network.
On Zun's Docker driver, this is implemented by adding the --net=<ID> option
when creating the sandbox [4] of the container.
The typical workflow will be as following:
1. Users call Zun APIs to create a container network by passing a name/uuid of
a neutron network::
$ zun network-create --neutron-net private --name foo
2. After receiving this request, Zun will make several API calls to Neutron
to retrieve the necessary information about the specified network
('private' in this example). In particular, Zun will list all the subnets
that belong to 'private' network. The number of subnets under a network
should only be one or two. If the number of subnets is two, they must be
an ipv4 subnet and an ipv6 subnet respectively. Zun will retrieve the
cidr/gateway/subnetpool of each subnet and pass these information to
Docker to create a Docker network. The API call will be similar to::
$ docker network create -d kuryr --ipam-driver=kuryr \
--subnet <ipv4_cidr> \
--gateway <ipv4_gateway> \
-ipv6 --subnet <ipv6_cidr> \
--gateway <ipv6_gateway> \
-o neutron.net.uuid=<network_uuid> \
-o neutron.pool.uuid=<ipv4_pool_uuid> \
--ipam-opt neutron.pool.uuid=<ipv4_pool_uuid> \
-o neutron.pool.v6.uuid=<ipv6_pool_uuid> \
--ipam-opt neutron.pool.v6.uuid=<ipv6_pool_uuid> \
foo
NOTE: In this step, docker engine will check the list of registered network
plugins and find the API endpoint of Kuryr, then make a call to Kuryr to create
a network with existing Neutron resources (i.e. network, subnetpool, etc.).
This example assumed that the Neutron resources were pre-created by cloud
administrator (which should be the case at most of the clouds). If this is
not true, users need to manually create the resources.
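Programmatically, Zun's driver could issue the same call through docker-py
(a sketch only; the option keys mirror the CLI example above, and the IPv6
pool is omitted for brevity)::

    import docker
    from docker.types import IPAMConfig, IPAMPool

    client = docker.from_env()
    ipam = IPAMConfig(
        driver='kuryr',
        pool_configs=[IPAMPool(subnet='<ipv4_cidr>',
                               gateway='<ipv4_gateway>')],
        options={'neutron.pool.uuid': '<ipv4_pool_uuid>'})
    client.networks.create(
        'foo', driver='kuryr', ipam=ipam,
        options={'neutron.net.uuid': '<network_uuid>',
                 'neutron.pool.uuid': '<ipv4_pool_uuid>'})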
3. Users call Zun APIs to create a container from the container network 'foo'::
$ zun run --net=foo nginx
4. Under the hood, Zun will perform several steps to configure the networking.
First, call neutron API to create a port from the specified neutron
network::
$ neutron port-create private
5. Then, Zun will retrieve information about the created neutron port,
   including its IP address(es). A port could have one or two IP addresses: an ipv4
address and/or an ipv6 address. Then, call Docker APIs to create the
container by using the IP address(es) of the neutron port. This is
equivalent to::
    $ docker run --net=foo --ip <ipv4_address> --ip6 <ipv6_address> \
        kubernetes/pause
NOTE: In this step, docker engine will make a call to Kuryr to set up the
networking of the container. After receiving the request from Docker, Kuryr
will perform the necessary steps to connect the container to the neutron port.
This might include something like: create a veth pair, connect one end of the
veth pair to the container, connect the other end of the veth pair to a
neutron-created bridge, etc.
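In docker-py terms, steps 4 and 5 could look like the sketch below
(python-neutronclient and the low-level docker APIClient are assumptions of
this example, not necessarily the final driver code)::

    import docker
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=keystone_session)  # auth omitted
    port = neutron.create_port({'port': {'network_id': '<network_uuid>'}})
    ipv4_address = port['port']['fixed_ips'][0]['ip_address']

    api = docker.APIClient()
    networking_config = api.create_networking_config(
        {'foo': api.create_endpoint_config(ipv4_address=ipv4_address)})
    container = api.create_container(
        'kubernetes/pause', networking_config=networking_config)
    api.start(container)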
6. Users call Zun API to list/show the created network(s)::
$ zun network-list
$ zun network-show foo
7. Upon completion, users call Zun API to remove the container and the network::
$ zun delete <container_id>
$ zun network-delete foo
Alternatives
------------
1. Directly integrate with Neutron (without Kuryr-libnetwork). This approach
   basically re-invents Kuryr functionality in Zun, which is unnecessary.
2. Use alternative networking solution (i.e. Flannel) instead of Neutron.
This doesn't provide a good OpenStack integration.
Data model impact
-----------------
* Create a 'network' table. Each entry in this table is a record of a network.
A network must belong to an OpenStack project so there will be a 'project_id'
column in this table.
* Create a 'network_attachment' table. Each entry in this table is a record of
an attachment between a network and a container. In fact, this table defines
a many-to-many relationship between networks and containers.
REST API impact
---------------
1. Add a new API endpoint /networks to the REST API interface.
2. In the API endpoint of creating a container, add a new option to specify
the network where the container will be created from.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
Deployers need to deploy Kuryr-libnetwork as a prerequisite of using this
feature.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Hongbin Lu
Other contributors:
Sudipta Biswas
Work Items
----------
1. Implement a new API endpoint for networks.
2. Extend the Docker driver to support creating containers from a network.
3. Implement unit/integration test.
4. Document the new network API.
Dependencies
============
Add a dependency on Kuryr-libnetwork and Neutron.
Testing
=======
Each patch will have unit tests, and Tempest functional tests covered.
Documentation Impact
====================
A set of documentation for this new feature will be required.
References
==========
[1] https://opendev.org/openstack/kuryr-libnetwork
[2] https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network
[3] https://blueprints.launchpad.net/kuryr-libnetwork/+spec/existing-subnetpool
[4] https://opendev.org/openstack/zun/src/branch/master/specs/container-sandbox.rst
| zun | /zun-11.0.0.tar.gz/zun-11.0.0/specs/kuryr-integration.rst | kuryr-integration.rst |
Zunda Python
===================
|pyversion| |version| |license|
Zunda: Japanese Enhanced Modality Analyzer client for Python.
Zunda is an extended modality analyzer for Japanese.
For details about Zunda, see https://jmizuno.github.io/zunda/ (written in Japanese).
This module requires installing Zunda (available at https://github.com/jmizuno/zunda/releases), CaboCha (https://taku910.github.io/cabocha/), and MeCab (http://taku910.github.io/mecab/).
Contributions are welcome!
Installation
==============
::
# Install Zunda
wget https://github.com/jmizuno/zunda/archive/2.0b4.tar.gz
tar xzf zunda-2.0b4.tar.gz
rm zunda-2.0b4.tar.gz
cd zunda-2.0b4
./configure
make
sudo make install
cd ../
rm -rf zunda-2.0b4
# Install zunda-python
pip install zunda-python
Example
===========
.. code:: python
import zunda
parser = zunda.Parser()
parser.parse('花子は太郎を食事に誘った裕子が嫌いだった')
# => [{'assumptional': '0',
'authenticity': '成立',
'chunks': [{'func': 'に',
'head': '食事',
'link_from': [],
'link_to': 3,
'score': 1.883877,
'words': [{'feature': '名詞,サ変接続,*,*,*,*,食事,ショクジ,ショクジ',
'funcexp': 'O',
'surface': '食事'},
{'feature': '助詞,格助詞,一般,*,*,*,に,ニ,ニ',
'funcexp': 'B:判断',
'surface': 'に'}]}],
'sentiment': '0',
'source': '筆者',
'tense': '非未来',
'type': '叙述',
'word': '食事',
'words': '食事に'},
{'assumptional': '0',
'authenticity': '成立',
'chunks': [{'func': 'を',
'head': '太郎',
'link_from': [],
'link_to': 3,
'score': 1.640671,
'words': [{'feature': '名詞,固有名詞,地域,一般,*,*,太郎,タロウ,タロー',
'funcexp': 'O',
'surface': '太郎'},
{'feature': '助詞,格助詞,一般,*,*,*,を,ヲ,ヲ', 'funcexp': 'O', 'surface': 'を'}]},
{'func': 'に',
'head': '食事',
'link_from': [],
'link_to': 3,
'score': 1.883877,
'words': [{'feature': '名詞,サ変接続,*,*,*,*,食事,ショクジ,ショクジ',
'funcexp': 'O',
'surface': '食事'},
{'feature': '助詞,格助詞,一般,*,*,*,に,ニ,ニ', 'funcexp': 'B:判断', 'surface': 'に'}]},
{'func': 'た',
'head': '誘っ',
'link_from': [1, 2],
'link_to': 4,
'score': 1.565227,
'words': [{'feature': '動詞,自立,*,*,五段・ワ行促音便,連用タ接続,誘う,サソッ,サソッ',
'funcexp': 'O',
'surface': '誘っ'},
{'feature': '助動詞,*,*,*,特殊・タ,基本形,た,タ,タ',
'funcexp': 'B:完了',
'surface': 'た'}]}],
'sentiment': '0',
'source': '筆者',
'tense': '非未来',
'type': '叙述',
'word': '誘っ',
'words': '太郎を食事に誘った'},
{'assumptional': '0',
'authenticity': '成立',
'chunks': [{'func': 'は',
'head': '花子',
'link_from': [],
'link_to': 5,
'score': -1.81792,
'words': [{'feature': '名詞,固有名詞,人名,名,*,*,花子,ハナコ,ハナコ',
'funcexp': 'O',
'surface': '花子'},
{'feature': '助詞,係助詞,*,*,*,*,は,ハ,ワ', 'funcexp': 'O', 'surface': 'は'}]},
{'func': 'が',
'head': '裕子',
'link_from': [3],
'link_to': 5,
'score': -1.81792,
'words': [{'feature': '名詞,固有名詞,人名,名,*,*,裕子,ユウコ,ユーコ',
'funcexp': 'O',
'surface': '裕子'},
{'feature': '助詞,格助詞,一般,*,*,*,が,ガ,ガ', 'funcexp': 'O', 'surface': 'が'}]},
{'func': 'た',
'head': '嫌い',
'link_from': [0, 4],
'link_to': -1,
'score': 0.0,
'words': [{'feature': '名詞,形容動詞語幹,*,*,*,*,嫌い,キライ,キライ',
'funcexp': 'O',
'surface': '嫌い'},
{'feature': '助動詞,*,*,*,特殊・ダ,連用タ接続,だ,ダッ,ダッ',
'funcexp': 'B:判断',
'surface': 'だっ'},
{'feature': '助動詞,*,*,*,特殊・タ,基本形,た,タ,タ',
'funcexp': 'B:完了',
'surface': 'た'}]}],
'sentiment': '0',
'source': '筆者',
'tense': '非未来',
'type': '叙述',
'word': '嫌い',
'words': '花子は裕子が嫌いだった'}]
LICENSE
=========
MIT License
Copyright
=============
Zunda Python
(c) 2019- Yukino Ikegami. All Rights Reserved.
Zunda (Original version)
(c) 2013- @jmizuno
ACKNOWLEDGEMENT
=================
This module uses Zunda.
I thank @jmizuno and the Tohoku University Inui-Okazaki Lab.
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/zunda-python.svg
.. |version| image:: https://img.shields.io/pypi/v/zunda-python.svg
:target: http://pypi.python.org/pypi/zunda-python/
:alt: latest version
.. |license| image:: https://img.shields.io/pypi/l/zunda-python.svg
:target: http://pypi.python.org/pypi/zunda-python/
:alt: license
| zunda-python | /zunda-python-0.1.3.tar.gz/zunda-python-0.1.3/README.rst | README.rst |
from subprocess import Popen, PIPE
class Parser(object):
"""Zunda: Japanese Enhanced Modality Analyzer
Zunda is an extended modality analyzer for Japanese.
Please see details in https://jmizuno.github.io/zunda/ (written in Japanese)
And this module requires installing Zunda, which is available at https://github.com/jmizuno/zunda/releases
>>> import zunda
>>> parser = zunda.Parser()
>>> parser.parse('花子は太郎を食事に誘った裕子が嫌いだった')
[{'assumptional': '0',
'authenticity': '成立',
'chunks': [{'func': 'に',
'head': '食事',
'link_from': [],
'link_to': 3,
'score': 1.883877,
'words': [{'feature': '名詞,サ変接続,*,*,*,*,食事,ショクジ,ショクジ',
'funcexp': 'O',
'surface': '食事'},
{'feature': '助詞,格助詞,一般,*,*,*,に,ニ,ニ',
'funcexp': 'B:判断',
'surface': 'に'}]}],
'sentiment': '0',
'source': '筆者',
'tense': '非未来',
'type': '叙述',
'word': '食事',
'words': '食事に'},
{'assumptional': '0',
'authenticity': '成立',
'chunks': [{'func': 'を',
'head': '太郎',
'link_from': [],
'link_to': 3,
'score': 1.640671,
'words': [{'feature': '名詞,固有名詞,地域,一般,*,*,太郎,タロウ,タロー',
'funcexp': 'O',
'surface': '太郎'},
{'feature': '助詞,格助詞,一般,*,*,*,を,ヲ,ヲ', 'funcexp': 'O', 'surface': 'を'}]},
{'func': 'に',
'head': '食事',
'link_from': [],
'link_to': 3,
'score': 1.883877,
'words': [{'feature': '名詞,サ変接続,*,*,*,*,食事,ショクジ,ショクジ',
'funcexp': 'O',
'surface': '食事'},
{'feature': '助詞,格助詞,一般,*,*,*,に,ニ,ニ', 'funcexp': 'B:判断', 'surface': 'に'}]},
{'func': 'た',
'head': '誘っ',
'link_from': [1, 2],
'link_to': 4,
'score': 1.565227,
'words': [{'feature': '動詞,自立,*,*,五段・ワ行促音便,連用タ接続,誘う,サソッ,サソッ',
'funcexp': 'O',
'surface': '誘っ'},
{'feature': '助動詞,*,*,*,特殊・タ,基本形,た,タ,タ',
'funcexp': 'B:完了',
'surface': 'た'}]}],
'sentiment': '0',
'source': '筆者',
'tense': '非未来',
'type': '叙述',
'word': '誘っ',
'words': '太郎を食事に誘った'},
{'assumptional': '0',
'authenticity': '成立',
'chunks': [{'func': 'は',
'head': '花子',
'link_from': [],
'link_to': 5,
'score': -1.81792,
'words': [{'feature': '名詞,固有名詞,人名,名,*,*,花子,ハナコ,ハナコ',
'funcexp': 'O',
'surface': '花子'},
{'feature': '助詞,係助詞,*,*,*,*,は,ハ,ワ', 'funcexp': 'O', 'surface': 'は'}]},
{'func': 'が',
'head': '裕子',
'link_from': [3],
'link_to': 5,
'score': -1.81792,
'words': [{'feature': '名詞,固有名詞,人名,名,*,*,裕子,ユウコ,ユーコ',
'funcexp': 'O',
'surface': '裕子'},
{'feature': '助詞,格助詞,一般,*,*,*,が,ガ,ガ', 'funcexp': 'O', 'surface': 'が'}]},
{'func': 'た',
'head': '嫌い',
'link_from': [0, 4],
'link_to': -1,
'score': 0.0,
'words': [{'feature': '名詞,形容動詞語幹,*,*,*,*,嫌い,キライ,キライ',
'funcexp': 'O',
'surface': '嫌い'},
{'feature': '助動詞,*,*,*,特殊・ダ,連用タ接続,だ,ダッ,ダッ',
'funcexp': 'B:判断',
'surface': 'だっ'},
{'feature': '助動詞,*,*,*,特殊・タ,基本形,た,タ,タ',
'funcexp': 'B:完了',
'surface': 'た'}]}],
'sentiment': '0',
'source': '筆者',
'tense': '非未来',
'type': '叙述',
'word': '嫌い',
'words': '花子は裕子が嫌いだった'}]
"""
def __init__(self, zunda_args='', encoding='utf-8'):
"""
Params:
zunda_args (str) : argument for zunda
encoding (str) : character encoding (default utf-8)
"""
self.zunda_args = zunda_args
self.encoding = encoding
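    # Zunda prints CaboCha-style analysis: "#FUNCEXP" lines carry the
    # functional-expression label of every token, "#EVENT" lines describe
    # each detected event (source, tense, assumptional, type, authenticity,
    # sentiment), "* " lines open a chunk with its dependency link and score,
    # and the remaining lines are "surface\tfeature" token pairs terminated
    # by "EOS". _parse_zunda_return folds all of this into event dicts.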
def _parse_zunda_return(self, zunda_return):
events = []
chunks = []
word_count = 0
for line in zunda_return.splitlines()[:-1]: # The last line is EOS
if not line:
continue
elif line.startswith('#FUNCEXP'):
funcexp_str = line.split('\t')[1]
funcexp = funcexp_str.split(',')
elif line.startswith('#EVENT'):
event_info = line.split('\t')
event = {'word': int(event_info[1]), 'source': event_info[2].split(':')[1],
'tense': event_info[3], 'assumptional': event_info[4],
'type': event_info[5], 'authenticity': event_info[6],
'sentiment': event_info[7], 'chunks': []}
events.append(event)
elif line.startswith('* '):
chunk_info = line.split(' ')
chunk = {'link_to': int(chunk_info[2][:-1]), 'link_from': [],
'head': int(chunk_info[3].split('/')[0]),
'func': int(chunk_info[3].split('/')[1]),
'score': float(chunk_info[4]), 'words': []}
chunks.append(chunk)
else:
(surface, feature) = line.split('\t')
chunks[-1]['words'].append({'surface': surface, 'feature': feature,
'funcexp': funcexp[word_count]})
word_count += 1
for (i, chunk) in enumerate(chunks):
if chunk['link_to'] != -1:
chunks[chunk['link_to']]['link_from'].append(i)
chunks[i]['head'] = chunk['words'][chunks[i]['head']]['surface']
chunks[i]['func'] = chunk['words'][chunks[i]['func']]['surface']
word_count = 0
for (i, event) in enumerate(events):
for (j, chunk) in enumerate(chunks):
for word in chunk['words']:
if event['word'] == word_count:
events[i]['word'] = word['surface']
for link_chunk in chunk['link_from']:
events[i]['chunks'].append(chunks[link_chunk])
events[i]['chunks'].append(chunk)
events[i]['words'] = ''.join([word['surface'] for chunk in events[i]['chunks'] for word in chunk['words']])
word_count += 1
word_count = 0
return events
def parse(self, sentence):
"""Parse the sentence
Param:
sentence (str)
Return:
events (list of dict)
"""
cmd = 'echo %s| zunda %s' % (sentence, self.zunda_args)
with Popen(cmd, shell=True, stdout=PIPE) as proc:
zunda_return = proc.communicate()[0].decode(self.encoding)
events = self._parse_zunda_return(zunda_return)
return events | zunda-python | /zunda-python-0.1.3.tar.gz/zunda-python-0.1.3/zunda/zunda.py | zunda.py |
# ZundamonGPTonYouTube
English | [日本語](README2.md)<br><br>
Intelligent Zundamon replies to YouTube chat with a GPT brain.
<br><br>
## First of all
- This application is Japanese-only since it depends on the Japanese voice engine "VOICEVOX", but you can customize it by modifying the MIT-licensed source code.
## This application works on
- Windows OS (tested on Windows 10)
- .Net Framework v.4 (tested on v4.7.2)
- a machine on which [VOICEVOX](https://voicevox.hiroshiba.jp/) is installed (tested on v.0.14.6)
The core module is implemented in Python, so it can be adapted to other OSes or voice generators.
<br><br>
## This application can
- automatically pick up messages from YouTube chat and make Zundamon speak the GPT answers to those messages aloud. <br>
Even when non-Japanese messages are given, Zundamon answers in Japanese.
- display all YouTube chat comments, the picked-up comments, and the answers to the picked-up comments.
- display Zundamon portrait with transparent background.
- You can use not only Zundamon's voice and image but also others. <br>
[](https://www.youtube.com/embed/wpTGk_0Yf3M)
## Usage
- Install [VOICEVOX](https://voicevox.hiroshiba.jp/)
- Get an OpenAI API key. Please refer to [here (English)](https://www.howtogeek.com/885918/how-to-get-an-openai-api-key/) or [here (Japanese)](https://laboratory.kazuuu.net/how-to-get-an-openai-api-key/)
- If you want to launch from the .exe file:
  - Click [here](https://github.com/GeneralYadoc/ZundamonGPTonYouTube/releases) to download the newest version.
  - Unzip the downloaded "ZundamonGPTonYouTube.zip" file.
  - Open "ZundamonGPTonYouTube" and double-click ZundamonGPTonYouTube.exe.
- If you want to launch from the source code:
- Install ffmpeg.<br>
<b>For Linux:</b> Execute the following command.
```ffmpeg installation for Linux
$ sudo apt-get install ffmpeg
```
<b>For Windows:</b> Access [here](https://github.com/BtbN/FFmpeg-Builds/releases), download '*-win64-gpl.zip', extract the zip and move the three exe files (ffmpeg.exe, ffprobe.exe, ffplay.exe) to the folder where you'll run the application or to a folder on your PATH.<br>
<br>
<b>For Mac:</b> Access [here](https://brew.sh/), copy the installation command into your terminal, press the Enter key, and then execute the following command.
```
brew install ffmpeg
```
- Clone the repository.<br>
```clone
git clone https://github.com/GeneralYadoc/ZundamonGPTonYouTube.git
```
- Move to the ZundamonGPTonYouTube directory.
  ```cd
  cd ZundamonGPTonYouTube
```
- Install the application.
```install
pip install .
```
- Start the application.
```
python3 ZundamonGPTonYouTube.py
```
- Check the Video ID of the target YouTube stream.<br>

- Fill in the Video ID blank of the start form (use Ctrl+V to paste).
- Fill in the API Key (of OpenAI) blank of the start form (use Ctrl+V to paste).
- Click the "すたーと" button, which means "start".<br>

### Notice
- The OpenAI API key and Video ID are recorded in "variable_cache.yaml", so you can skip either or both from the second time on.
- Handle "variable_cache.yaml" carefully in order to avoid leaking your OpenAI API key.
<br><br>
## The GUI consists of
### Main window
- You can toggle the visibility of the chat window with the "ちゃっと" button, the asking window with the "しつもん" button, the answering window with the "こたえ" button, and the portrait window with the "立ち絵" button.
- You can change the voice volume with the slide bar at the bottom of the window.
- You can also change the volume by entering a value in the text box just to the right of the slide bar and pressing the Enter key.
- You can exit the application by closing this window (the "x" button at the top right).<br>

### Portrait window
- You can display a portrait of any avatar you like by specifying its path in the setting file.
- You can switch between an opaque and a transparent background by double-clicking the avatar.
- You can resize the avatar in opaque-background mode; erase the background after adjusting the avatar size if you want.
- Minimizing the window is also available in opaque-background mode.<br>
- The application keeps running even if this window is closed, so you can close it if unnecessary.<br>

### YouTube chat monitor window
- Almost all messages are shown in this window.
- Messages containing only emoticons are ignored.
- Some messages that fall into a polling gap may be lost.
- You can toggle the visibility of the window frame by double-clicking the message area.
- Please turn on the frame when resizing the window.
- The application keeps running even if this window is closed, so you can close it if unnecessary.<br>

### Window for asking
- All picked-up messages that will be answered by ChatAI are shown in this window.
- You can toggle the visibility of the window frame by double-clicking the message area.
- Please turn on the frame when resizing the window.
- The application keeps running even if this window is closed, so you can close it if unnecessary.<br>

### Window for answering
- ChatAI answers to the picked-up messages are shown in this window.
- You can toggle the visibility of the window frame by double-clicking the message area.
- Please turn on the frame when resizing the window.
- The application keeps running even if this window is closed, so you can close it if unnecessary.<br>
<br>
### Notice
- The following window is VOICEVOX, an external application.<br>
  It is necessary for generating Zundamon voices, so please don't close it (minimize it if you want to hide it).<br>
<br>
<br><br>
# Settings
You can customize the application with "setting.yaml", which resides in the same folder as the application exe file.
```setting.yaml
# VoiceVoxの設定
voicevox_path: ''
# チャット欄ウィンドウの設定
display_user_name_on_chat_window: true
chat_window_title: 'ちゃっとらん'
chat_window_padx : 9
chat_window_pady : 9
chat_window_color: '#ffffff'
chat_font_color: '#000000'
chat_font_size: 10
chat_font_type: 'Courier'
chat_rendering_method: 'normal'
# 質問ウィンドウの設定
display_user_name_on_ask_window: false
ask_window_title: 'ぐみんのしつもん'
ask_window_padx : 9
ask_window_pady : 9
ask_window_color: '#354c87'
ask_font_color: '#ffe4fb'
ask_font_size: 12
ask_font_type: 'Courier'
ask_rendering_method: 'refresh'
# 回答ウィンドウの設定
answer_window_title: 'てんさいずんだもんのこたえ'
answer_window_padx : 9
answer_window_pady : 9
answer_window_color: '#ffe4e0'
answer_font_color: '#004cF7'
answer_font_size: 13
answer_font_type: 'Helvetica'
answer_rendering_method: 'incremental'
# 立ち絵ウインドウの設定
image_window_title: '立ち絵'
image_window_refresh_rate: 30
image_window_transparent_color: '#00ff00'
image_window_font_color: '#0000ff'
image_window_font_size: 11
image_window_font_type: 'Helvetica'
image_window_label: 'ダブルクリックで\n背景透過/非透過を\n切り替えられます'
# AIの設定
model: 'gpt-3.5-turbo'
max_tokens_per_request: 1024
ask_interval_sec: 20.0
# 回答キャラクターの設定
speaker_type: 1
volume: 100
system_role: 'あなたはユーザーとの会話を楽しく盛り上げるために存在する、日本語話者の愉快なアシスタントです。'
```
- "voicevox_path" can remain blank if VOICEVOX has been installed to default path.
- You can change AI model by changing "model" value.
- You can change voice actor by changing "speaker_type" value.
- You can change Avatar image by changing "image_file" path.
<br>
The current size and position, frame visibility, and background transparency of these windows are memorized and restored the next time the application runs.<br>
They are recorded in "variable_cache.yaml"; you can also change window size and position by editing the file while the application is not running.
``` variable_cache.yaml
answer_frame_visible: false
answer_window_height: 450
answer_window_visible: true
answer_window_width: 500
answer_window_x: 659
answer_window_y: 521
api_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ask_frame_visible: false
ask_window_height: 250
ask_window_visible: true
ask_window_width: 500
ask_window_x: 663
ask_window_y: 224
chat_frame_visible: false
chat_window_height: 754
chat_window_visible: true
chat_window_width: 350
chat_window_x: 246
chat_window_y: 225
image_bg_visible: false
image_window_height: 816
image_window_visible: true
image_window_width: 522
image_window_x: 1234
image_window_y: 175
video_id: XXXXXXXXXXX
```
<br><br>
# Licence
- The license of this application is MIT, so you can customize it freely.
- The license of the ffmpeg executable files included in the release package is LGPL.
<br><br>
# Links
- [Pixiv page of 坂本アヒル](https://www.pixiv.net/users/12147115)   The static Zundamon portrait used as the material for the gif animation was obtained here.
- [ChatAIStreamer](https://github.com/taizan-hokuto/pytchat)   Python library for getting voiced ChatGPT answers to a YouTube chat stream.
| zundamonai-streamer | /zundamonai-streamer-3.0.0.tar.gz/zundamonai-streamer-3.0.0/README.md | README.md |
from importlib.resources import path
from PIL import Image, ImageTk
import sys
import os
import time
import yaml
import multiprocessing
import tkinter as tk
import tkinter.font as font
multiprocessing.freeze_support()
class TransparentViewer(multiprocessing.Process):
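    """Avatar window running in its own process: shows a static image or
    animated GIF in a resizable Tk window. Double-clicking toggles a
    borderless mode whose background color is made transparent via Tk's
    Windows-only "-transparentcolor" attribute; window geometry and
    visibility are persisted to variable_cache.yaml."""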
def __refresh_image(self):
self.__img_params["base_image"].seek(self.__img_params["cur_frame_index"])
if self.__root.winfo_width() * self.__img_params["base_image"].height / self.__img_params["base_image"].width > self.__root.winfo_height():
new_image_width = self.__root.winfo_height() * self.__img_params["base_image"].width / self.__img_params["base_image"].height
new_image_height = self.__root.winfo_height()
else:
new_image_height = self.__root.winfo_width() * self.__img_params["base_image"].height / self.__img_params["base_image"].width
new_image_width = self.__root.winfo_width()
self.__canvas.delete("image")
self.__img_params["resized_image"] = self.__img_params["base_image"].resize([int(new_image_width), int(new_image_height)], Image.Resampling.HAMMING)
self.__img_params["photo_image"] = ImageTk.PhotoImage(self.__img_params["resized_image"])
self.__canvas.configure(width=self.__root.winfo_width(), height=self.__root.winfo_height())
self.__canvas.create_image( (self.__root.winfo_width() - new_image_width) / 2,
(self.__root.winfo_height() - new_image_height) / 2,
image=self.__img_params["photo_image"], anchor=tk.NW,
tag="image" )
def __refresh_text(self):
self.__canvas.delete("text")
if (self.__txt_params["visible"]):
self.__canvas.create_text( self.__root.winfo_width(),
self.__root.winfo_height(),
text=self.__txt_params["text"],
font=font.Font(size=str(self.__txt_params["font_size"]), family=self.__txt_params["font_type"], weight="bold"),
anchor=tk.SE, justify=tk.RIGHT,
fill=self.__txt_params["font_color"],
tag="text" )
def __refresh_canvas(self):
self.__refresh_image()
self.__refresh_text()
def __show_frames(self):
cur_time = time.time()
offset_time_ms = int((time.time() - self.__start_time) * 1000)
rest_time_ms = offset_time_ms % self.__img_params["total_time_ms"]
frame_index = i = 0
while (rest_time_ms > 0 and i < self.__img_params["base_image"].n_frames - 1):
frame_index = i
rest_time_ms -= self.__img_params["durations"][frame_index]
i += 1
self.__img_params["cur_frame_index"] = frame_index
self.__refresh_image()
window_duration = int(1000 / self.__win_params["refresh_rate"]) - int((cur_time - self.__prev_time) * 1000)
min_win_duration = int(667 / self.__win_params["refresh_rate"])
min_win_duration = 1 if min_win_duration <= 0 else min_win_duration
if window_duration < min_win_duration:
window_duration = min_win_duration
self.__prev_time = cur_time
if not self.__img_params["decimated_animation"]:
self.__root.after(window_duration, self.__show_frames)
else:
self.__img_params["decimated_animation"] = False
self.__root.after(int(1000 / self.__win_params["refresh_rate"]), self.__show_frames)
def __initialize_duration_info(self):
for i in range(self.__img_params["base_image"].n_frames):
self.__img_params["base_image"].seek(i)
self.__img_params["durations"].append(self.__img_params["base_image"].info["duration"])
self.__img_params["total_time_ms"] += self.__img_params["durations"][i]
def __play_animation(self):
if self.__img_params["base_image"].n_frames > 1:
self.__initialize_duration_info()
self.__start_time = time.time()
self.__show_frames()
else:
self.__refresh_image()
def __save_visibility(self, visible):
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
variable_cache["image_window_visible"] = visible
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
def __apply_visibility(self):
if self.__win_params["visible_mem"]:
if self.__win_params["visible_mem"].value:
cur_visible = True
else:
cur_visible = False
elif self.__win_params["visible"]:
cur_visible = True
else:
cur_visible = False
if cur_visible and not self.__win_params["prev_visible"]:
self.__root.deiconify()
self.__save_visibility(True)
self.__win_params["prev_visible"] = True
elif not cur_visible and self.__win_params["prev_visible"]:
self.__root.withdraw()
self.__save_visibility(False)
self.__win_params["prev_visible"] = False
self.__root.after(200, self.__apply_visibility)
def __image_window_is_cached(self):
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
return ("image_window_width" in variable_cache or "image_window_height" in variable_cache or
"image_window_x" in variable_cache or "image_window_y" in variable_cache)
def __createImageWindow(self):
self.__img_params["base_image"] = Image.open(self.__img_params["path"])
self.__root.wm_minsize(width=3, height=3)
self.__root.title(self.__title)
self.__root.iconbitmap(default = self.__icon)
if self.__is_client:
self.__root.protocol("WM_DELETE_WINDOW", self.__changeVisible)
self.__root.bind(sequence="<Configure>", func=lambda event:self.__configureWindow(event=event))
frame = tk.Frame(self.__root)
frame.pack(fill = tk.BOTH)
self.__canvas = tk.Canvas(bg=self.__transparent_color)
self.__canvas.place(x=-2, y=-2)
default_width = self.__win_params["default_width"]
default_height = self.__win_params["default_height"]
default_x = self.__win_params["default_x"]
default_y = self.__win_params["default_y"]
if self.__image_window_is_cached():
width = default_width
height = default_height
elif default_width * self.__img_params["base_image"].height / self.__img_params["base_image"].width > default_height:
width = default_height * self.__img_params["base_image"].width / self.__img_params["base_image"].height
height = default_height
else:
height = default_width * self.__img_params["base_image"].height / self.__img_params["base_image"].width
width = default_width
x = default_x
y = default_y
self.__root.geometry(f"{int(width)}x{int(height)}+{x}+{y}")
self.__root.update()
self.__win_params["prev_width"] = self.__root.winfo_width()
self.__win_params["prev_height"] = self.__root.winfo_height()
if not self.__win_params["default_bg_visible"]:
self.__root.wm_overrideredirect(True)
self.__root.wm_attributes("-transparentcolor", self.__transparent_color)
if self.__is_client:
self.__apply_visibility()
self.__play_animation()
self.__root.bind(sequence="<Button-1>", func=lambda event:self.__clickWindow(event=event))
self.__root.bind(sequence="<B1-Motion>", func=lambda event:self.__moveWindow(event=event))
self.__root.bind(sequence="<Double-Button-1>", func=lambda event:self.__doubleclickWindow(event=event))
def __applySettings(self, workspace):
settings = {}
with open(self.__setting_path, 'r', encoding='shift_jis') as file:
settings = yaml.safe_load(file)
self.__title = settings["image_window_title"]
self.__transparent_color = settings["image_window_transparent_color"]
self.__win_params["refresh_rate"] = settings["image_window_refresh_rate"]
self.__img_params["path"] = os.path.join(workspace, settings["image_file"])
self.__txt_params["font_color"] = settings["image_window_font_color"]
self.__txt_params["font_size"] = settings["image_window_font_size"]
self.__txt_params["font_type"] = settings["image_window_font_type"]
self.__txt_params["text"] = settings["image_window_label"]
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
if "image_window_width" in variable_cache:
self.__win_params["default_width"] = variable_cache["image_window_width"]
if "image_window_height" in variable_cache:
self.__win_params["default_height"] = variable_cache["image_window_height"]
if "image_window_x" in variable_cache:
self.__win_params["default_x"] = variable_cache["image_window_x"]
if "image_window_y" in variable_cache:
self.__win_params["default_y"] = variable_cache["image_window_y"]
if "image_bg_visible" in variable_cache:
self.__win_params["default_bg_visible"] = variable_cache["image_bg_visible"]
self.__txt_params["visible"] = self.__win_params["default_bg_visible"]
def __init__(self, visible_mem=None, is_client=False, workspace="./"):
self.__is_client = is_client
self.__title = "立ち絵"
self.__icon = os.path.join(workspace, "zundamon_icon1.ico")
self.__transparent_color = "#00ff00"
self.__mouse_position = [0, 0]
self.__canvas = None
self.__start_time = 0
self.__prev_time = 0
self.__win_params = {}
self.__win_params["default_width"] = 400
self.__win_params["default_height"] = 700
self.__win_params["prev_width"] = 0
self.__win_params["prev_height"] = 0
self.__win_params["default_x"] = 50
self.__win_params["default_y"] = 50
self.__win_params["visible"] = 1
self.__win_params["prev_visible"] = not self.__win_params["visible"]
self.__win_params["visible_mem"] = None
self.__win_params["default_bg_visible"] = True
if visible_mem:
self.__win_params["visible_mem"] = visible_mem
self.__win_params["prev_visible"] = not visible_mem.value
self.__win_params["refresh_rate"] = 30
self.__img_params = {}
self.__img_params["base_image"] = None
self.__img_params["resized_image"] = None
self.__img_params["photo_image"] = None
self.__img_params["path"] = os.path.join(workspace, "Zundamon.gif")
self.__img_params["cur_frame_index"] = 0
self.__img_params["durations"] = []
self.__img_params["total_time_ms"] = 0
self.__img_params["decimated_animation"] = False
self.__txt_params = {}
self.__txt_params["font_color"] = "#0000ff"
self.__txt_params["font_size"] = "11"
self.__txt_params["font_type"] = "Helvetica"
self.__txt_params["visible"] = "True"
self.__txt_params["text"] = "ダブルクリックで\n背景透過/非透過を\n切り替えられます"
self.__setting_path = os.path.join(workspace, "setting.yaml")
self.__variable_cache_path = os.path.join(workspace, "variable_cache.yaml")
self.__applySettings(workspace)
super(TransparentViewer, self).__init__(daemon=True)
def run(self):
self.__root = tk.Tk()
self.__createImageWindow()
self.__root.mainloop()
def __changeVisible(self):
if self.__win_params["visible_mem"]:
if self.__win_params["visible_mem"].value:
self.__win_params["visible_mem"].value = False
else:
self.__win_params["visible_mem"].value = True
elif self.__win_params["visible"]:
self.__win_params["visible"] = False
else:
self.__win_params["visible"] = True
def __clickWindow(self, event):
self.__mouse_position[0] = event.x_root
self.__mouse_position[1] = event.y_root
def __moveWindow(self, event):
moved_x = event.x_root - self.__mouse_position[0]
moved_y = event.y_root - self.__mouse_position[1]
self.__mouse_position[0] = event.x_root
self.__mouse_position[1] = event.y_root
cur_position_x = self.__root.winfo_x() + moved_x
cur_position_y = self.__root.winfo_y() + moved_y
self.__root.geometry(f"+{cur_position_x}+{cur_position_y}")
def __doubleclickWindow(self, event=None):
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
if self.__root.wm_overrideredirect():
new_bg_visible = True
self.__canvas.create_text( self.__root.winfo_width(),
self.__root.winfo_height(),
text=self.__txt_params["text"],
font=font.Font(size=str(self.__txt_params["font_size"]), family=self.__txt_params["font_type"], weight="bold"),
fill=self.__txt_params["font_color"],
anchor=tk.SE, justify=tk.RIGHT,
tag="text" )
self.__root.wm_attributes("-transparentcolor", "")
self.__root.wm_overrideredirect(False)
else:
new_bg_visible = False
self.__canvas.delete("text")
self.__root.wm_attributes("-transparentcolor", self.__transparent_color)
self.__root.wm_overrideredirect(True)
variable_cache["image_bg_visible"] = new_bg_visible
self.__txt_params["visible"] = new_bg_visible
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
def __configureWindow(self, event=None):
if self.__img_params["base_image"].n_frames > 1:
self.__img_params["decimated_animation"] = True
if self.__win_params["prev_width"] != self.__root.winfo_width() or self.__win_params["prev_height"] != self.__root.winfo_height():
self.__refresh_canvas()
self.__win_params["prev_width"] = self.__root.winfo_width()
self.__win_params["prev_height"] = self.__root.winfo_height()
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
variable_cache["image_window_width"] = self.__root.winfo_width()
variable_cache["image_window_height"] = self.__root.winfo_height()
variable_cache["image_window_x"] = self.__root.winfo_x()
variable_cache["image_window_y"] = self.__root.winfo_y()
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
if __name__ == "__main__":
is_client = False
for arg in sys.argv:
if arg == "-c":
is_client = True
break
manager = multiprocessing.Manager()
visible = manager.Value('b', True)
    ui = TransparentViewer(visible_mem=visible, is_client=is_client)
ui.start()
ui.join() | zundamonai-streamer | /zundamonai-streamer-3.0.0.tar.gz/zundamonai-streamer-3.0.0/src/TransparentViewer.py | TransparentViewer.py |
from dataclasses import dataclass
from typing import Callable
import time
import math
import threading
import queue
import ZundamonAIStreamer as zasr
streamParams = zasr.streamParams
aiParams = zasr.aiParams
streamerParams = zasr.streamerParams
@dataclass
class params(zasr.params):
send_message_cb: Callable[[str, str, bool], None] = None
speaker_type: int = 1
volume: int = 100
@dataclass
class voicedAnswer():
user_message: any = ""
completion: any = ""
voice: any = None
class ZundamonAIStreamerManager(threading.Thread):
# Customized sleep for making available of running flag interruption.
def __interruptibleSleep(self, time_sec):
counter = math.floor(time_sec / 0.10)
frac = time_sec - (counter * 0.10)
for i in range(counter):
if not self.__running:
break
time.sleep(0.10)
if not self.__running:
return
time.sleep(frac)
def __getItemCB(self, c):
self.__send_message_cb(key="chat", name=c.author.name, message=c.message)
def __get_volume_cb(self):
return self.__volume
    # Consumer loop: dequeues voiced answers, displays the question, plays
    # the generated voice, then displays the answer text.
def __speak(self):
while self.__running:
while self.__running and self.__voiced_answers_queue.empty():
self.__interruptibleSleep(0.1)
voiced_answer = self.__voiced_answers_queue.get()
user_message = voiced_answer.user_message
completion = voiced_answer.completion
voice = voiced_answer.voice
self.__send_message_cb(key="ask", name=user_message.extern.author.name, message=user_message.message)
self.__interruptibleSleep(1)
# Play the voice by VoicePlayer of ZundamonAIStreamer
self.__player = None
self.__player = zasr.VoicePlayer(voice, get_volume_cb=self.__get_volume_cb)
self.__interruptibleSleep(1)
self.__player.start()
self.__send_message_cb(key="answer", message=completion.choices[0]["message"]["content"])
# Wait finishing Playng the voice.
self.__player.join()
del self.__player
self.__player = None
self.__interruptibleSleep(0.1)
# callback for getting answer of ChatGPT
# The voice generated by ZundamonGenerator is given.
def __answerWithVoiceCB(self, user_message, completion, voice):
while self.__running and self.__voiced_answers_queue.full():
self.__interruptibleSleep(0.1)
self.__voiced_answers_queue.put(voicedAnswer(user_message=user_message, completion=completion, voice=voice))
@property
def volume(self):
return self.__volume
@volume.setter
def volume(self, volume):
self.__volume = volume
def __init__(self, params):
self.__send_message_cb = params.send_message_cb
self.__player = None
self.__volume = params.volume
# Set params of getting messages from stream source.
params.stream_params.get_item_cb=self.__getItemCB
# Create ZundamonVoiceGenerator
params.streamer_params.voice_generator=zasr.ZundamonGenerator(speaker=params.speaker_type)
params.streamer_params.answer_with_voice_cb=self.__answerWithVoiceCB
# Create ZundamonAIStreamer instance.
# 'voice_generator=' is omittable for English generator.
self.ai_streamer =zasr.ZundamonAIStreamer(params)
self.__voiced_answers_queue = queue.Queue(2)
self.__speaker_thread = threading.Thread(target=self.__speak, daemon=True)
super(ZundamonAIStreamerManager, self).__init__(daemon=True)
def run(self):
self.__running = True
# Wake up internal thread to get chat messages from stream and play VoiceVox voices of reading ChatGPT answers aloud.
self.ai_streamer.start()
self.__speaker_thread.start()
def disconnect(self):
self.__running=False
# Finish generating gTTS voices.
# Internal thread will stop soon.
self.ai_streamer.disconnect()
# terminating internal thread.
self.ai_streamer.join() | zundamonai-streamer | /zundamonai-streamer-3.0.0.tar.gz/zundamonai-streamer-3.0.0/src/ZundamonAIStreamerManager.py | ZundamonAIStreamerManager.py |
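# Minimal usage sketch. Assumptions of this example (not shipped behavior):
# placeholder video_id/api_key values and a print-based callback; the real
# entry point is ZundamonAIStreamerUI, which builds the same params from
# setting.yaml and variable_cache.yaml.
if __name__ == "__main__":
    demo_params = params(
        stream_params=streamParams(video_id="<video_id>"),
        ai_params=aiParams(api_key="<api_key>", model="gpt-3.5-turbo",
                           system_role="You are a helpful assistant.",
                           max_tokens_per_request=1024, interval_sec=20.0),
        send_message_cb=lambda key, name="", message="": print(key, name, message),
        speaker_type=1,
        volume=100)
    manager = ZundamonAIStreamerManager(demo_params)
    manager.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        manager.disconnect()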
from dataclasses import dataclass
import ZundamonAIStreamerManager as zm
import TransparentViewer as tv
import sys
import os
import math
import time
import yaml
import queue
import multiprocessing
import tkinter as tk
import tkinter.font as font
@dataclass
class messageSlot():
message: str
refresh: bool
class ZundamonAIStreamerUI:
def __createStartWindow(self):
self.__root.geometry('336x120')
self.__root.title('なんでもこたえてくれるずんだもん')
self.__root.iconbitmap(default = self.__icon)
self.__clearStartWindow()
self.__widgits_start["video_id_label"] = tk.Label(text='Video ID')
self.__widgits_start["video_id_label"].place(x=30, y=15)
self.__widgits_start["video_id_entry"] = tk.Entry(width=32)
self.__widgits_start["video_id_entry"].place(x=90, y=15)
self.__widgits_start["api_key_label"] = tk.Label(text='API Key')
self.__widgits_start["api_key_label"].place(x=30, y=48)
self.__widgits_start["api_key_entry"] = tk.Entry(width=32)
self.__widgits_start["api_key_entry"].place(x=90, y=48)
self.__widgits_start["button"] = tk.Button(self.__root, text="すたーと", command=self.__start)
self.__widgits_start["button"].place(x=143, y=80)
def __clearStartWindow(self):
for widgit in self.__widgits_start.values():
widgit.destroy()
self.__widgits_start.clear()
def __createMainWindow(self):
self.__root.geometry('336x120')
self.__root.title('めいんういんどう')
self.__root.iconbitmap(default = self.__icon)
self.__root.attributes("-topmost", True)
self.__clearMainWindow()
self.__widgits_main["buttonChat"] = tk.Button(self.__root, text="ちゃっと", width="6", command=lambda:self.__changeVisible("chat"))
self.__widgits_main["buttonChat"].place(x=30, y=20)
self.__widgits_main["buttonAsk"] = tk.Button(self.__root, text="しつもん", width="6", command=lambda:self.__changeVisible("ask"))
self.__widgits_main["buttonAsk"].place(x=104, y=20)
self.__widgits_main["buttonAnswer"] = tk.Button(self.__root, text="こたえ", width="6", command=lambda:self.__changeVisible("answer"))
self.__widgits_main["buttonAnswer"].place(x=178, y=20)
self.__widgits_main["buttonPortrait"] = tk.Button(self.__root, text="立ち絵", width="6", command=self.__changeVisiblePortrait)
self.__widgits_main["buttonPortrait"].place(x=252, y=20)
self.__widgits_main["volumeLabel"] = tk.Label(text='volume')
self.__widgits_main["volumeLabel"].place(x=27, y=70)
self.__widgits_main["scaleVolume"] = tk.Scale( self.__root,
variable = tk.DoubleVar(),
command = self.__changeVolume,
orient=tk.HORIZONTAL,
sliderlength = 20,
length = 200,
from_ = 0,
to = 500,
resolution=5,
tickinterval=250 )
self.__widgits_main["scaleVolume"].set(self.__initial_volume)
self.__widgits_main["scaleVolume"].place(x=72, y=50)
self.__widgits_main["volumeEntry"] = tk.Entry(width=3, justify=tk.RIGHT)
self.__widgits_main["volumeEntry"].bind(sequence="<Return>", func=self.__scaleVolume)
self.__widgits_main["volumeEntry"].place(x=282, y=70)
self.__receiveMessage()
def __clearMainWindow(self):
for widgit in self.__widgits_main.values():
widgit.destroy()
self.__widgits_main.clear()
def __createMessageWindow(self, key):
window = tk.Toplevel()
window.title(self.__sub_window_settings[key]["title"])
width = self.__sub_window_settings[key]["window_default_width"]
height = self.__sub_window_settings[key]["window_default_height"]
x = self.__sub_window_settings[key]["window_default_x"]
y = self.__sub_window_settings[key]["window_default_y"]
window.geometry(f"{width}x{height}+{x}+{y}")
window.protocol("WM_DELETE_WINDOW", lambda:self.__changeVisible(key))
frame = tk.Frame(window)
frame.pack(fill = tk.BOTH)
text = tk.Text( frame,
bg=self.__sub_window_settings[key]["window_color"],
fg=self.__sub_window_settings[key]["font_color"],
selectbackground=self.__sub_window_settings[key]["window_color"],
selectforeground=self.__sub_window_settings[key]["font_color"],
width=800,
height=100,
bd="0",
padx=str(self.__sub_window_settings[key]["window_padx"]),
pady=str(self.__sub_window_settings[key]["window_pady"]),
font=font.Font( size=str(self.__sub_window_settings[key]["font_size"]),
family=self.__sub_window_settings[key]["font_type"],
weight="bold" ),
state="disabled" )
self.__sub_windows[key] = {
"visible" : self.__sub_window_settings[key]["window_default_visible"],
"body" : window,
"text" : text,
"mouse_position" : [0, 0],
"message_queue" : queue.Queue(1000)
}
self.__sub_windows[key]["body"].bind(sequence="<Button-1>", func=lambda event:self.__clickWindow(key=key, event=event))
self.__sub_windows[key]["body"].bind(sequence="<B1-Motion>", func=lambda event:self.__moveWindow(key=key, event=event))
self.__sub_windows[key]["body"].bind(sequence="<Double-Button-1>", func=lambda event:self.__doubleclickWindow(key=key, event=event))
self.__sub_windows[key]["body"].bind(sequence="<Configure>", func=lambda event:self.__configureWindow(key=key, event=event))
self.__sub_windows[key]["text"].pack()
if self.__sub_windows[key]["visible"]:
self.__sub_windows[key]["body"].deiconify()
else:
self.__sub_windows[key]["body"].withdraw()
self.__sub_windows[key]["body"].wm_overrideredirect(not self.__sub_window_settings[key]["frame_default_visible"])
def __interruptibleSleep(self, time_sec):
counter = math.floor(time_sec / 0.10)
frac = time_sec - (counter * 0.10)
for i in range(counter):
if not self.__running:
break
time.sleep(0.10)
if not self.__running:
return
time.sleep(frac)
def __sendMessageCore(self, key, message, refresh):
message_queue = self.__sub_windows[key]["message_queue"]
if not message_queue.full():
slot = messageSlot(message=message, refresh=refresh)
message_queue.put(slot)
def __sendMessage(self, key, name="", message=""):
rendering_method = self.__sub_window_settings[key]["rendering_method"]
display_name = self.__sub_window_settings[key]["display_name"]
if display_name:
message = f"[{name}] {message}"
if rendering_method == "incremental":
for i in range(len(message)):
refresh = True if i == 0 else False
self.__sendMessageCore(key, message[i : i+1], refresh)
self.__interruptibleSleep(0.10)
else:
refresh = True if rendering_method == "refresh" else False
if not refresh:
message = f"\n{message}\n"
self.__sendMessageCore(key, message, refresh)
def __showMessage(self, text, message, refresh):
text.configure(state="normal")
try:
pos = text.index('end')
except:
return
if refresh:
text.delete('1.0', 'end')
pos = text.index('end')
text.insert(pos, message)
text.see("end")
text.configure(state="disabled")
def __receiveMessage(self):
for key in self.__sub_windows.keys():
message_queue = self.__sub_windows[key]["message_queue"]
text = self.__sub_windows[key]["text"]
while not message_queue.empty():
slot = message_queue.get()
self.__showMessage(text, slot.message, slot.refresh)
self.__root.after(ms=33, func=self.__receiveMessage)
def __init__(self, workspace="./"):
self.__running = False
self.__variable_cache_path = os.path.join(workspace, "variable_cache.yaml")
self.__setting_path = os.path.join(workspace, "setting.yaml")
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
with open(self.__setting_path, 'r', encoding='shift_jis') as file:
settings = yaml.safe_load(file)
self.__sub_window_settings = {}
self.__sub_window_settings["chat"] = {}
self.__sub_window_settings["chat"]["display_user_name"] = settings["display_user_name_on_chat_window"]
self.__sub_window_settings["chat"]["title"] = settings["chat_window_title"]
self.__sub_window_settings["chat"]["window_padx"] = settings["chat_window_padx"]
self.__sub_window_settings["chat"]["window_pady"] = settings["chat_window_pady"]
self.__sub_window_settings["chat"]["window_color"] = settings["chat_window_color"]
self.__sub_window_settings["chat"]["font_color"] = settings["chat_font_color"]
self.__sub_window_settings["chat"]["font_size"] = settings["chat_font_size"]
self.__sub_window_settings["chat"]["font_type"] = settings["chat_font_type"]
self.__sub_window_settings["chat"]["rendering_method"] = settings["chat_rendering_method"]
self.__sub_window_settings["chat"]["display_name"] = settings["display_user_name_on_chat_window"]
self.__sub_window_settings["ask"] = {}
self.__sub_window_settings["ask"]["display_user_name"] = settings["display_user_name_on_ask_window"]
self.__sub_window_settings["ask"]["title"] = settings["ask_window_title"]
self.__sub_window_settings["ask"]["window_padx"] = settings["ask_window_padx"]
self.__sub_window_settings["ask"]["window_pady"] = settings["ask_window_pady"]
self.__sub_window_settings["ask"]["window_color"] = settings["ask_window_color"]
self.__sub_window_settings["ask"]["font_color"] = settings["ask_font_color"]
self.__sub_window_settings["ask"]["font_size"] = settings["ask_font_size"]
self.__sub_window_settings["ask"]["font_type"] = settings["ask_font_type"]
self.__sub_window_settings["ask"]["rendering_method"] = settings["ask_rendering_method"]
self.__sub_window_settings["ask"]["display_name"] = settings["display_user_name_on_ask_window"]
self.__sub_window_settings["answer"] = {}
self.__sub_window_settings["answer"]["title"] = settings["answer_window_title"]
self.__sub_window_settings["answer"]["window_padx"] = settings["answer_window_padx"]
self.__sub_window_settings["answer"]["window_pady"] = settings["answer_window_pady"]
self.__sub_window_settings["answer"]["window_color"] = settings["answer_window_color"]
self.__sub_window_settings["answer"]["font_color"] = settings["answer_font_color"]
self.__sub_window_settings["answer"]["font_size"] = settings["answer_font_size"]
self.__sub_window_settings["answer"]["font_type"] = settings["answer_font_type"]
self.__sub_window_settings["answer"]["rendering_method"] = settings["answer_rendering_method"]
self.__sub_window_settings["answer"]["display_name"] = False
self.__sub_window_settings["chat"]["window_default_width"] = 350
self.__sub_window_settings["chat"]["window_default_height"] = 754
self.__sub_window_settings["chat"]["window_default_x"] = 20
self.__sub_window_settings["chat"]["window_default_y"] = 20
self.__sub_window_settings["chat"]["window_default_visible"] = False
self.__sub_window_settings["chat"]["frame_default_visible"] = True
self.__sub_window_settings["ask"]["window_default_width"] = 500
self.__sub_window_settings["ask"]["window_default_height"] = 250
self.__sub_window_settings["ask"]["window_default_x"] = 30
self.__sub_window_settings["ask"]["window_default_y"] = 30
self.__sub_window_settings["ask"]["window_default_visible"] = False
self.__sub_window_settings["ask"]["frame_default_visible"] = True
self.__sub_window_settings["answer"]["window_default_width"] = 500
self.__sub_window_settings["answer"]["window_default_height"] = 450
self.__sub_window_settings["answer"]["window_default_x"] = 40
self.__sub_window_settings["answer"]["window_default_y"] = 40
self.__sub_window_settings["answer"]["window_default_visible"] = False
self.__sub_window_settings["answer"]["frame_default_visible"] = True
if "chat_window_width" in variable_cache:
self.__sub_window_settings["chat"]["window_default_width"] = variable_cache["chat_window_width"]
if "chat_window_height" in variable_cache:
self.__sub_window_settings["chat"]["window_default_height"] = variable_cache["chat_window_height"]
if "chat_window_x" in variable_cache:
self.__sub_window_settings["chat"]["window_default_x"] = variable_cache["chat_window_x"]
if "chat_window_y" in variable_cache:
self.__sub_window_settings["chat"]["window_default_y"] = variable_cache["chat_window_y"]
if "chat_window_visible" in variable_cache:
self.__sub_window_settings["chat"]["window_default_visible"] = variable_cache["chat_window_visible"]
if "chat_frame_visible" in variable_cache:
self.__sub_window_settings["chat"]["frame_default_visible"] = variable_cache["chat_frame_visible"]
if "ask_window_width" in variable_cache:
self.__sub_window_settings["ask"]["window_default_width"] = variable_cache["ask_window_width"]
if "ask_window_height" in variable_cache:
self.__sub_window_settings["ask"]["window_default_height"] = variable_cache["ask_window_height"]
if "ask_window_x" in variable_cache:
self.__sub_window_settings["ask"]["window_default_x"] = variable_cache["ask_window_x"]
if "ask_window_y" in variable_cache:
self.__sub_window_settings["ask"]["window_default_y"] = variable_cache["ask_window_y"]
if "ask_window_visible" in variable_cache:
self.__sub_window_settings["ask"]["window_default_visible"] = variable_cache["ask_window_visible"]
if "ask_frame_visible" in variable_cache:
self.__sub_window_settings["ask"]["frame_default_visible"] = variable_cache["ask_frame_visible"]
if "answer_window_width" in variable_cache:
self.__sub_window_settings["answer"]["window_default_width"] = variable_cache["answer_window_width"]
if "answer_window_height" in variable_cache:
self.__sub_window_settings["answer"]["window_default_height"] = variable_cache["answer_window_height"]
if "answer_window_x" in variable_cache:
self.__sub_window_settings["answer"]["window_default_x"] = variable_cache["answer_window_x"]
if "answer_window_y" in variable_cache:
self.__sub_window_settings["answer"]["window_default_y"] = variable_cache["answer_window_y"]
if "answer_window_visible" in variable_cache:
self.__sub_window_settings["answer"]["window_default_visible"] = variable_cache["answer_window_visible"]
if "answer_frame_visible" in variable_cache:
self.__sub_window_settings["answer"]["frame_default_visible"] = variable_cache["answer_frame_visible"]
if "answer_window_visible" in variable_cache:
self.__sub_window_settings["answer"]["window_default_visible"] = variable_cache["answer_window_visible"]
if "video_id" not in variable_cache:
variable_cache["video_id"] = ""
stream_params = zm.streamParams(
video_id = variable_cache["video_id"],
)
if "api_key" not in variable_cache:
variable_cache["api_key"] = ""
ai_params = zm.aiParams(
api_key = variable_cache["api_key"],
model = settings["model"],
system_role = settings["system_role"],
max_tokens_per_request = settings["max_tokens_per_request"],
interval_sec = settings["ask_interval_sec"]
)
        volume = min(max(settings['volume'], 0), 500)  # clamp to [0, 500]
self.__zm_streamer_params = zm.params( stream_params=stream_params,
ai_params=ai_params,
speaker_type = settings["speaker_type"],
volume=volume,
send_message_cb=self.__sendMessage )
if "voicevox_path" in settings and settings['voicevox_path'] and settings['voicevox_path'] != "":
self.__zm_streamer_params.streamer_params.voicevox_path = settings['voicevox_path']
self.__manager = None
self.__root = tk.Tk()
self.__root.resizable(False, False)
self.__root.protocol("WM_DELETE_WINDOW", self.__close)
self.__initial_volume = volume
self.__icon = os.path.join(workspace, "zundamon_icon1.ico")
self.__widgits_start = {}
self.__widgits_main = {}
self.__sub_windows = {}
self.__portrait_window = None
self.__mem_manager = None
self.__createStartWindow()
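    # Persist the entered video ID / API key to the variable cache, then build the
    # main window, the sub-windows, the portrait viewer, and start the streamer manager.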
def __start(self):
self.__running = True
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
if self.__widgits_start["video_id_entry"].get() == "":
variable_cache["video_id"] = self.__zm_streamer_params.stream_params.video_id
else:
variable_cache["video_id"] = self.__widgits_start["video_id_entry"].get()
self.__zm_streamer_params.stream_params.video_id = variable_cache["video_id"]
if self.__widgits_start["api_key_entry"].get() == "":
variable_cache["api_key"] = self.__zm_streamer_params.ai_params.api_key
else:
variable_cache["api_key"] = self.__widgits_start["api_key_entry"].get()
self.__zm_streamer_params.ai_params.api_key = variable_cache["api_key"]
image_visible = False
if "image_window_visible"in variable_cache:
image_visible = variable_cache["image_window_visible"]
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
        self.__root.title("めいんういんどう")  # "Main window" (hiragana)
self.__clearStartWindow()
self.__createMainWindow()
self.__createMessageWindow(key = "chat")
self.__createMessageWindow(key = "ask")
self.__createMessageWindow(key = "answer")
self.__mem_manager = multiprocessing.Manager()
self.__portrait_visible = self.__mem_manager.Value('b', image_visible)
self.__portrait_window = tv.TransparentViewer(visible_mem=self.__portrait_visible, is_client=True)
self.__portrait_window.start()
self.__manager = zm.ZundamonAIStreamerManager(self.__zm_streamer_params)
self.__manager.start()
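    # Slider callback: mirror the slider value into the entry box and forward it to the streamer.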
def __changeVolume(self, event=None):
volume = int(self.__widgits_main["scaleVolume"].get())
self.__widgits_main["volumeEntry"].delete(0, tk.END)
self.__widgits_main["volumeEntry"].insert(0, str(volume))
if self.__manager:
self.__manager.volume = volume
def __scaleVolume(self, event=None):
volume_str = self.__widgits_main["volumeEntry"].get()
try:
volume = int(volume_str)
        except ValueError:
return
        volume = min(max(volume, 0), 500)  # clamp to [0, 500]
self.__widgits_main["volumeEntry"].delete(0, tk.END)
self.__widgits_main["volumeEntry"].insert(0, str(volume))
self.__widgits_main["scaleVolume"].set(volume)
def __saveVisibility(self, key, visible):
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
if key == "chat":
variable_cache["chat_window_visible"] = visible
elif key == "ask":
variable_cache["ask_window_visible"] = visible
elif key == "answer":
variable_cache["answer_window_visible"] = visible
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
def __changeVisible(self, key):
if self.__sub_windows[key]["visible"]:
self.__sub_windows[key]["visible"] = False
self.__sub_windows[key]["body"].withdraw()
self.__saveVisibility(key, False)
else:
self.__sub_windows[key]["visible"] = True
self.__sub_windows[key]["body"].deiconify()
self.__saveVisibility(key, True)
def __changeVisiblePortrait(self):
self.__portrait_visible.value = not self.__portrait_visible.value
def __clickWindow(self, key, event):
self.__sub_windows[key]["mouse_position"][0] = event.x_root
self.__sub_windows[key]["mouse_position"][1] = event.y_root
def __moveWindow(self, key, event):
moved_x = event.x_root - self.__sub_windows[key]["mouse_position"][0]
moved_y = event.y_root - self.__sub_windows[key]["mouse_position"][1]
self.__sub_windows[key]["mouse_position"][0] = event.x_root
self.__sub_windows[key]["mouse_position"][1] = event.y_root
cur_position_x = self.__sub_windows[key]["body"].winfo_x() + moved_x
cur_position_y = self.__sub_windows[key]["body"].winfo_y() + moved_y
self.__sub_windows[key]["body"].geometry(f"+{cur_position_x}+{cur_position_y}")
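    # Double-clicking toggles the window frame (wm_overrideredirect) and persists the choice.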
def __doubleclickWindow(self, key, event=None):
cur_frame_visible = not self.__sub_windows[key]["body"].wm_overrideredirect()
next_frame_visible = not cur_frame_visible
self.__sub_windows[key]["body"].wm_overrideredirect(not next_frame_visible)
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
if key == "chat":
variable_cache["chat_frame_visible"] = next_frame_visible
elif key == "ask":
variable_cache["ask_frame_visible"] = next_frame_visible
elif key == "answer":
variable_cache["answer_frame_visible"] = next_frame_visible
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
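    # Persist the sub-window geometry (size and position) whenever it changes.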
def __configureWindow(self, key, event=None):
variable_cache = {}
try:
with open(self.__variable_cache_path, 'r') as file:
variable_cache = yaml.safe_load(file)
except:
pass
if key == "chat":
variable_cache["chat_window_width"] = self.__sub_windows["chat"]["body"].winfo_width()
variable_cache["chat_window_height"] = self.__sub_windows["chat"]["body"].winfo_height()
variable_cache["chat_window_x"] = self.__sub_windows["chat"]["body"].winfo_x()
variable_cache["chat_window_y"] = self.__sub_windows["chat"]["body"].winfo_y()
elif key == "ask":
variable_cache["ask_window_width"] = self.__sub_windows["ask"]["body"].winfo_width()
variable_cache["ask_window_height"] = self.__sub_windows["ask"]["body"].winfo_height()
variable_cache["ask_window_x"] = self.__sub_windows["ask"]["body"].winfo_x()
variable_cache["ask_window_y"] = self.__sub_windows["ask"]["body"].winfo_y()
elif key == "answer":
variable_cache["answer_window_width"] = self.__sub_windows["answer"]["body"].winfo_width()
variable_cache["answer_window_height"] = self.__sub_windows["answer"]["body"].winfo_height()
variable_cache["answer_window_x"] = self.__sub_windows["answer"]["body"].winfo_x()
variable_cache["answer_window_y"] = self.__sub_windows["answer"]["body"].winfo_y()
try:
with open(self.__variable_cache_path, 'w', encoding='UTF-8') as file:
yaml.safe_dump(variable_cache, file)
except:
pass
def __close(self):
self.__running = False
if self.__manager:
self.__manager.disconnect()
self.__manager.join()
self.__root.destroy()
def mainloop(self):
self.__root.mainloop()
if self.__portrait_window:
del self.__portrait_window
del self.__mem_manager
if __name__ == "__main__":
ui = ZundamonAIStreamerUI(workspace=os.path.abspath(os.path.dirname(sys.argv[0])))
ui.mainloop() | zundamonai-streamer | /zundamonai-streamer-3.0.0.tar.gz/zundamonai-streamer-3.0.0/src/ZundamonAIStreamerUI.py | ZundamonAIStreamerUI.py |
from dataclasses import dataclass
from deep_translator import GoogleTranslator
import threading
import subprocess
import ctypes
import os
import time
import math
import glob
import tempfile
import json
import requests
import pydub
import pydub.playback as pb
import ChatAIStreamer as casr
ChatAIStreamer = casr.ChatAIStreamer
streamParams = casr.streamParams
userMessage = casr.userMessage
aiParams = casr.aiParams
TMPFILE_POSTFIX = "_ZundamonAIStreamer"
# Data type of voice used by ZundamonAIStreamer
@dataclass
class voiceData:
content: bytes=None
# Concrete voiceGenerator of ZundamonAIStreamer
# 'generate' is overridden.
class ZundamonGenerator(casr.voiceGenerator):
def __init__(self, speaker=1, max_retry=20):
self.__speaker = speaker
self.__max_retry = max_retry
super(ZundamonGenerator, self).__init__()
@property
def speaker(self):
return self.__speaker
@speaker.setter
def speaker(self, speaker):
self.__speaker = speaker
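    # Call the local VOICEVOX HTTP API: /audio_query builds the synthesis payload,
    # then /synthesis renders the wav bytes; both calls are retried up to max_retry times.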
def __generate(self, text):
query_payload = {"text": text, "speaker": self.__speaker}
for i in range(self.__max_retry):
r = requests.post( "http://localhost:50021/audio_query",
params=query_payload, timeout=(10.0, 300.0) )
if r.status_code == 200:
query_data = r.json()
break
time.sleep(1)
if r.status_code != 200:
return r
synth_payload = {"speaker": self.__speaker}
for i in range(self.__max_retry):
r = requests.post( "http://localhost:50021/synthesis", params=synth_payload,
data=json.dumps(query_data), timeout=(10.0, 300.0) )
if r.status_code == 200:
break
time.sleep(1)
return r
def generate(self, text):
voice = None
# Translate into Japanese.
try:
text_ja = GoogleTranslator(target='ja').translate(text=text)
except:
return text, voice
r = self.__generate(text_ja)
if r.status_code == 200:
            # Pack into voiceData format.
            voice = voiceData(content=bytes(r.content))
        # Return the voice along with the translated text.
        return text_ja, voice
# Extend streamerParams and params to hold a voiceGenerator instance.
# voice_generator is initialized by default with a ZundamonGenerator.
@dataclass
class streamerParams(casr.streamerParams):
voice_generator : casr.voiceGenerator = ZundamonGenerator()
voicevox_path : str = os.getenv("LOCALAPPDATA") + "/" + "Programs/VOICEVOX/VOICEVOX.exe"
@dataclass
class params(casr.params):
streamer_params : streamerParams = streamerParams()
class InterruptPlaying(Exception):
pass
# VoicePlayer class which plays the voice generated by ZundamonGenerator.
class VoicePlayer(threading.Thread):
def __init__(self, voice, get_volume_cb):
self.__voice = voice
self.__get_volume_cb = get_volume_cb
super(VoicePlayer, self).__init__(daemon=True)
def run(self):
if self.__voice:
self.__play(self.__voice)
def __play(self, voice):
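        # NamedTemporaryFile is used only to obtain a unique path; the audio is
        # written to that name plus a recognizable postfix so leftovers can be cleaned up.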
with tempfile.NamedTemporaryFile() as file:
file_path = file.name + TMPFILE_POSTFIX
with open(file_path, "wb") as file:
file.write(voice.content)
segment = pydub.AudioSegment.from_wav(file_path)
segment = segment[10:]
self.__interrupted = False
remain_sec = total_sec = segment.duration_seconds
        playing_first_section = True
        while remain_sec > 0.0:
            volume = self.__get_volume_cb()
            if playing_first_section:
                playing_first_section = False
                prev_volume = volume
                tmp_segment = edit_segment(segment, volume, total_sec - remain_sec)
                play_thread = threading.Thread(target=self.play_interruptible, args=[tmp_segment,], daemon=True)
                play_thread.start()
elif volume != prev_volume:
self.__interrupted = True
play_thread.join()
self.__interrupted = False
                tmp_segment = edit_segment(segment, volume, total_sec - remain_sec)
play_thread = threading.Thread(target=self.play_interruptible, args=[tmp_segment,], daemon=True)
play_thread.start()
remain_sec = remain_sec - 0.1
prev_volume = volume
time.sleep(1)
remain_sec = remain_sec - 1.0
play_thread.join()
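    # Poll the interrupt flag and asynchronously raise InterruptPlaying inside the playback thread.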
def __interrupt(self, length_sec, th):
if hasattr(th, '_thread_id'):
th_id = th._thread_id
else:
            for tid, thread in threading._active.items():
                if thread is th:
                    th_id = tid
for i in range(math.ceil(length_sec / 0.01)):
if self.__interrupted:
ctypes.pythonapi.PyThreadState_SetAsyncExc(th_id, ctypes.py_object(InterruptPlaying))
break
time.sleep(0.01)
def __pb_play(self, segment):
try:
pb.play(segment)
except InterruptPlaying:
pass
def play_interruptible(self, segment):
length_sec = segment.duration_seconds
th = threading.Thread(target=self.__pb_play, args=[segment,], daemon=True)
th_interrupt = threading.Thread(target=self.__interrupt, args=[length_sec, th])
th.start()
th_interrupt.start()
th_interrupt.join()
th.join()
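# Drop the already-played prefix and apply the requested volume as a dB gain.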
def edit_segment(segment, volume, start_sec):
delta = pydub.utils.ratio_to_db(volume / 100.)
segment = segment[start_sec * 1000:] + delta
return segment
# ZundamonAIStreamer as a subclass of ChatAIStreamer
class ZundamonAIStreamer(casr.ChatAIStreamer):
def __init__(self, params):
        # Remove leftover temporary files from previous runs.
with tempfile.NamedTemporaryFile() as file:
tmp_dir_path = os.path.dirname(file.name)
for file_path in glob.glob(f"{tmp_dir_path}/*{TMPFILE_POSTFIX}*"):
os.remove(file_path)
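        # Launch VOICEVOX (the local speech-synthesis server) if an executable can be found.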
program_path = params.streamer_params.voicevox_path
if not program_path or program_path == "" or not os.path.exists(program_path):
program_path = os.getenv("LOCALAPPDATA") + "/" + "Programs/VOICEVOX/VOICEVOX.exe"
if os.path.exists(program_path):
subprocess.Popen(program_path)
time.sleep(1)
super(ZundamonAIStreamer, self).__init__(params) | zundamonai-streamer | /zundamonai-streamer-3.0.0.tar.gz/zundamonai-streamer-3.0.0/src/ZundamonAIStreamer.py | ZundamonAIStreamer.py |
# Zungle: Python Web Framework built for learning purposes


Zungle is a Python web framework built for learning purposes.
It's a WSGI framework and can be used with any WSGI application server such as Gunicorn.
## Installation
```shell
pip install zungle
```
### Basic usage:
```python
from zungle.api import API
app = API()
@app.route("/home")
def home(request, response):
response.text = "Hello from the HOME page"
@app.route("/hello/{name}")
def greeting(request, response, name):
response.text = f"Hello, {name}"
@app.route("/book")
class BooksResource:
def get(self, req, resp):
resp.text = "Books Page"
def post(self, req, resp):
resp.text = "Endpoint to create a book"
@app.route("/template")
def template_handler(req, resp):
resp.body = app.template(
"index.html", context={"name": "Zungle", "title": "Best Framework"}).encode()
```
## Unit Tests
The recommended way of writing unit tests is with [pytest](https://docs.pytest.org/en/latest/). There are two built-in fixtures
that you may want to use when writing unit tests with Zungle. The first one is `app`, which is an instance of the main `API` class:
```python
def test_route_overlap_throws_exception(app):
@app.route("/")
def home(req, resp):
resp.text = "Welcome Home."
with pytest.raises(AssertionError):
@app.route("/")
def home2(req, resp):
resp.text = "Welcome Home2."
```
The other one is `client`, which you can use to send HTTP requests to your handlers. It is based on the famous [requests](http://docs.python-requests.org/en/master/) library and should feel very familiar:
```python
def test_parameterized_route(app, client):
@app.route("/{name}")
def hello(req, resp, name):
resp.text = f"hey {name}"
assert client.get("http://testserver/matthew").text == "hey matthew"
```
## Templates
The default folder for templates is `templates`. You can change it when initializing the main `API()` class:
```python
app = API(templates_dir="templates_dir_name")
```
Then you can use HTML files in that folder like so in a handler:
```python
@app.route("/show/template")
def handler_with_template(req, resp):
resp.html = app.template(
"example.html", context={"title": "Awesome Framework", "body": "welcome to the future!"})
```
## Static Files
Just like templates, the default folder for static files is `static` and you can override it:
```python
app = API(static_dir="static_dir_name")
```
Then you can use the files inside this folder in HTML files:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>{{title}}</title>
<link href="/static/main.css" rel="stylesheet" type="text/css">
</head>
<body>
<h1>{{body}}</h1>
<p>This is a paragraph</p>
</body>
</html>
```
## Middleware
You can create custom middleware classes by inheriting from the `zungle.middleware.Middleware` class and overriding its two methods
that are called before and after each request:
```python
from zungle.api import API
from zungle.middleware import Middleware
app = API()
class SimpleCustomMiddleware(Middleware):
def process_request(self, req):
print("Before dispatch", req.url)
def process_response(self, req, res):
print("After dispatch", req.url)
app.add_middleware(SimpleCustomMiddleware)
``` | zungle | /zungle-0.0.4.tar.gz/zungle-0.0.4/README.md | README.md |
ZüNIS: Normalizing flows for neural importance sampling
==============================
ZüNIS (Zürich Neural Importance Sampling) is a work-in-progress PyTorch-based library for Monte Carlo integration
based on neural importance sampling [[1]](https://arxiv.org/abs/1808.03856), developed at ETH Zürich.
In simple terms, we use artificial intelligence to compute integrals faster.
The goal is to provide a flexible library to integrate black-box functions with a level of automation comparable to the
VEGAS Library [[2]](https://pypi.org/project/vegas/), while using state-of-the-art methods that circumvent
the limitations of existing tools.
## Installation
### Using `pip`
The library is available on PyPI:
```bash
pip install zunis
```
The latest version can be installed directly from GitHub:
```bash
pip install 'git+https://github.com/ndeutschmann/zunis#egg=zunis&subdirectory=zunis_lib'
```
### Setting up a development environment
If you would like to contribute to the library, run the benchmarks, or try the examples,
the easiest way is to clone this repository directly and install the extended requirements:
````bash
# Clone the repository
git clone https://github.com/ndeutschmann/zunis.git ./zunis
# Create a virtual environment (recommended)
python3.7 -m venv zunis_venv
source ./zunis_venv/bin/activate
pip install --upgrade pip
# Install the requirements
cd ./zunis
pip install -r requirements.txt
# Run one benchmark (GPU recommended)
cd ./experiments/benchmarks
python benchmark_hypersphere.py
````
## Library usage
For basic applications, the integrator is provided with default choices and can be created and used as follows:
```python
import torch
from zunis.integration import Integrator
device = torch.device("cuda")
d = 2
def f(x):
return x[:,0]**2 + x[:,1]**2
integrator = Integrator(d=d,f=f,device=device)
result, uncertainty, history = integrator.integrate()
```
The function `f` is integrated over the `d`-dimensional unit hypercube and
* takes `torch.Tensor` batched inputs with shape `(N,d)` for arbitrary batch size `N` on `device`
* returns `torch.Tensor` batched outputs with shape `(N,)` for arbitrary batch size `N` on `device`
A more systematic documentation is under construction [here](https://zunis.readthedocs.io). | zunis | /zunis-0.3rc1.tar.gz/zunis-0.3rc1/README.md | README.md |
# Zunzun Framework
Zunzun is a Python framework that builds on other libraries to implement its features. Some of them are:
- [injector](https://pypi.org/project/injector/) It's used to Dependency Injection.
- [click](https://pypi.org/project/click/) For creating commands line interface.
- [blinker](https://pypi.org/project/blinker/) Provides a fast dispatching system.
- [SQLAlchemy](https://pypi.org/project/SQLAlchemy/) The Python SQL Toolkit and Object Relational Mapper.
## Create an application
## Controller
We can create two types of controller, a function controller, or a class controller.
### Function controller
```python
from main import router
@router.post("/")
def register():
return "Register"
```
### Class controller
To create a class controller we can use the following command
```
python zunzun.py maker controller Role --route="/role"
```
Where "Role" will be the name of the controller and "/role" the path to access the controller.
The command will generate the file `app/controllers/role.py`
```python
from zunzun import Response
from main import router
class RoleController:
@router.get('/role')
def index(self):
return "RoleController Index"
```
We can inject dependencies into both class controllers and function controllers.
For example, if we have a service named "PaypalService", we can inject it with the following code.
```python
from main import router
from app.services import PaypalService
@router.post("/")
def register(paypal: PaypalService):
paypal.call_a_method()
return "Register"
```
In a class controller, we can inject dependencies in the constructor or in any method.
```python
from zunzun import Response
from main import router
from app.services import PaypalService, SomeService
class RoleController:
def __init__(self, some_service: SomeService):
self.some_service = some_service
@router.get('/role')
def index(self, paypal: PaypalService):
return "RoleController Index"
```
## Commands
Commands allow us to implement command line features. To do that we can use the following command.
```
python zunzun.py maker command role
```
Where "role" will be the name of the command. This command will create the following file `app/commands/role.py`.
```python
import click
from injector import singleton, inject
from zunzun import Command
@singleton
class roleCommand(Command):
@inject
def __init__(self):
super().__init__("role")
self.add_option("--some-option")
self.add_argument("some-argument")
def handle(self, some_option, some_argument):
click.echo(
f"roleCommand [some_argument: {some_argument}] [some_option: {some_option}]"
)
```
To use the new command we can type the following in the console.
```
python zunzun.py app role "An argument value" --some-option="An option value"
```
## Listener
Listeners allow us to implement the event-dispatcher pattern. To create a new listener together with its signal, we can use the following command.
```
python zunzun.py maker listener Role Role
```
Where the first word "Role" will be the listener name and the second will be the signal name.
The command will generate the following files:
- Signal file `app/signals/role.py`
- Listener file `app/listeners/role.py`
The signal file will have this code.
```python
from zunzun import Signal
from injector import singleton
@singleton
class RoleSignal(Signal):
pass
```
The listener file will have this code.
```python
from injector import singleton
@singleton
class RoleListener:
def __call__(self, sender, **kwargs):
pass
```
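The generated files don't show how the signal and the listener get wired together. Since Zunzun's dispatching is built on [blinker](https://pypi.org/project/blinker/), a minimal sketch could look like the code below. The `connect`/`send` calls assume that `zunzun.Signal` exposes blinker's standard API, and the import paths are assumptions based on the generated file locations — not generated code.
```python
from injector import inject
from app.signals import RoleSignal
from app.listeners import RoleListener


@inject
def wire_role_events(signal: RoleSignal, listener: RoleListener):
    # Subscribe the listener; its __call__(self, sender, **kwargs)
    # matches blinker's receiver contract.
    signal.connect(listener)
    # Later, anywhere in the application, emit the signal with a payload:
    signal.send("role-module", role_name="admin")
```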
## Services
We can create classes to implement any logic that we need. For example to create a service to integrate Paypal we can use the following command.
```
python zunzun.py maker service Paypal
```
The command will create the file `app/services/paypal.py` with the following code.
```python
from injector import singleton, inject
@singleton
class PaypalService:
@inject
def __init__(self):
pass
```
## ORM
Zunzun uses **SQLAlchemy** to implement its ORM features. The framework uses two types of classes.
- The model represents a single row in the database.
- The repository implements the queries to the database.
To create the model and its repository we can use the following command.
```
python zunzun.py orm model_create Role
```
The model will be
```python
from zunzun import orm
from sqlalchemy import Column, Integer
class Role(orm.BaseModel):
__tablename__ = "Role"
id = Column(Integer, primary_key=True)
```
The repository will be
```python
from injector import singleton
from zunzun import orm
from app.model import Role
@singleton
class RoleRepository(orm.BaseRepository):
def new(self, **kwargs):
return Role(**kwargs)
```
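A repository is usually consumed through dependency injection, like any other service. Here is a minimal sketch; the import path and the `id` value are illustrative assumptions, not generated code.
```python
from injector import singleton, inject
from app.repositories import RoleRepository


@singleton
class RoleService:
    @inject
    def __init__(self, roles: RoleRepository):
        self.roles = roles

    def build_admin_role(self):
        # new() builds an (unsaved) Role model instance via the repository.
        return self.roles.new(id=1)
```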
## Dependency injection
The framework uses this pattern to manage dependencies. To learn how to use it, see the documentation for [injector](https://pypi.org/project/injector/) | zunzun | /zunzun-0.0.1.tar.gz/zunzun-0.0.1/README.md | README.md
Design Goals
============
* Keep it simple and small, avoiding extra complexity at all cost. `KISS <http://en.wikipedia.org/wiki/KISS_principle>`_
* Create routes on the fly or by defining regular expressions.
* Support API versions out of the box without altering routes.
* Thread safety.
* Via decorator or in a defined route, accepts only certain `HTTP methods <http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html>`_.
* Follow the single responsibility `principle <http://en.wikipedia.org/wiki/Single_responsibility_principle>`_.
* Be compatible with any WSGI server. Example: `uWSGI <http://uwsgi-docs.readthedocs.org/en/latest/>`_, `Gunicorn <http://gunicorn.org/>`_, `Twisted <http://twistedmatrix.com/>`_, etc.
* Tracing Request-ID "rid" per request.
* Compatibility with Google App Engine. `demo <http://api.zunzun.io>`_
* `Multi-tenant <http://en.wikipedia.org/wiki/Multitenancy>`_ Support.
* Ability to create almost anything easily, for example: support for `chunked transfer encoding <http://en.wikipedia.org/wiki/Chunked_transfer_encoding>`_.
Install
.......
Via pip::
$ pip install zunzuncito
If you don't have pip, after downloading the sources, you can run::
$ python setup.py install
Quick start
...........
* `http://docs.zunzun.io/en/latest/Quickstart.html <http://docs.zunzun.io/en/latest/Quickstart.html>`_
Documentation
..............
* `docs.zunzun.io <http://docs.zunzun.io>`_
* `www.zunzun.io <http://www.zunzun.io>`_
What ?
.......
ZunZuncito is a Python package that lets you create and maintain `REST <http://en.wikipedia.org/wiki/REST>`_ APIs without hassle.
The simplicity of sketching and debugging helps you develop very fast; versioning is inherent by default, which allows you to serve and maintain existing applications while working on new releases, with no need to create separate instances. All applications are WSGI `PEP 333 <http://www.python.org/dev/peps/pep-0333/>`_ compliant, making it possible to migrate existing code to more robust frameworks without modifying it.
Why ?
.....
* The need to upload large files in chunks and support resumable uploads, accomplishing in pure Python something like what the `nginx upload module <http://www.grid.net.ru/nginx/resumable_uploads.en.html>`_ does.
The idea behind ZunZuncito was the need for a very small and light tool (batteries included) that helps to create and deploy REST APIs quickly, without forcing developers to learn or follow a complex flow. Instead, from the very beginning it guides them to properly structure their API, paying special attention to versioned URIs. This gives a solid base for working on different versions within a single ZunZun instance without interrupting service of any existing API `resources <http://en.wikipedia.org/wiki/Web_resource>`_.
| zunzuncito | /zunzuncito-0.1.20.tar.gz/zunzuncito-0.1.20/README.rst | README.rst |