.. _pipeline:
Pipeline
========
A pipeline describes a workflow operation in Zuul. It associates jobs
for a given project with triggering and reporting events.
Its flexible configuration allows for characterizing any number of
workflows, and by specifying each as a named configuration, makes it
easy to apply similar workflow operations to projects or groups of
projects.
By way of example, one of the primary uses of Zuul is to perform
project gating. To do so, one can create a :term:`gate` pipeline
which tells Zuul that when a certain event (such as approval by a code
reviewer) occurs, the corresponding change or pull request should be
enqueued into the pipeline. When that happens, the jobs which have
been configured to run for that project in the gate pipeline are run,
and when they complete, the pipeline reports the results to the user.
Pipeline configuration items may only appear in :term:`config-projects
<config-project>`.
Generally, a Zuul administrator would define a small number of
pipelines which represent the workflow processes used in their
environment. Each project can then be added to the available
pipelines as appropriate.
Here is an example :term:`check` pipeline, which runs whenever a new
patchset is created in Gerrit. If the associated jobs all report
success, the pipeline reports back to Gerrit with ``Verified`` vote of
+1, or if at least one of them fails, a -1:
.. code-block:: yaml

   - pipeline:
       name: check
       manager: independent
       trigger:
         my_gerrit:
           - event: patchset-created
       success:
         my_gerrit:
           Verified: 1
       failure:
         my_gerrit:
           Verified: -1
.. TODO: See TODO for more annotated examples of common pipeline configurations.
.. attr:: pipeline
The attributes available on a pipeline are as follows (all are
optional unless otherwise specified):
.. attr:: name
:required:
This is used later in the project definition to indicate what jobs
should be run for events in the pipeline.
.. attr:: manager
:required:
There are several schemes for managing pipelines. The following
table summarizes their features; each is described in detail
below.
=========== ============================= ============ ===== ============= =========
Manager     Use Case                      Dependencies Merge Shared Queues Window
=========== ============================= ============ ===== ============= =========
Independent :term:`check`, :term:`post`   No           No    No            Unlimited
Dependent   :term:`gate`                  Yes          Yes   Yes           Variable
Serial      :term:`deploy`                No           No    Yes           1
Supercedent :term:`post`, :term:`promote` No           No    Project-ref   1
=========== ============================= ============ ===== ============= =========
.. value:: independent
Every event in this pipeline should be treated as independent
of other events in the pipeline. This is appropriate when
the order of events in the pipeline doesn't matter because
the results of the actions this pipeline performs can not
affect other events in the pipeline. For example, when a
change is first uploaded for review, you may want to run
tests on that change to provide early feedback to reviewers.
At the end of the tests, the change is not going to be
merged, so it is safe to run these tests in parallel without
regard to any other changes in the pipeline. They are
independent.
Another type of pipeline that is independent is a post-merge
pipeline. In that case, the changes have already merged, so
the results can not affect any other events in the pipeline.
.. value:: dependent
The dependent pipeline manager is designed for gating. It
ensures that every change is tested exactly as it is going to
be merged into the repository. An ideal gating system would
test one change at a time, applied to the tip of the
repository, and only if that change passed tests would it be
merged. Then the next change in line would be tested the
same way. In order to achieve parallel testing of changes,
the dependent pipeline manager performs speculative execution
on changes. It orders changes based on their entry into the
pipeline. It begins testing all changes in parallel,
assuming that each change ahead in the pipeline will pass its
tests. If they all succeed, all the changes can be tested
and merged in parallel. If a change near the front of the
pipeline fails its tests, each change behind it ignores
whatever test results have already been produced and is tested
again without the failing change ahead of it. This way gate
tests may run in parallel but changes are still tested
correctly, exactly as they will appear in the repository when
merged.
For more detail on the theory and operation of Zuul's
dependent pipeline manager, see: :doc:`/gating`.
.. value:: serial
This pipeline manager supports shared queues (like dependent
pipelines) but only one item in each shared queue is
processed at a time.
This may be useful for post-merge pipelines which perform
partial production deployments (i.e., there are jobs with
file matchers which only deploy to affected parts of the
system). In such a case it is important for every change to
be processed, but they must still be processed one at a time
in order to ensure that the production system is not
inadvertently regressed. Support for shared queues ensures
that if multiple projects are involved, deployment runs still
execute sequentially.
.. value:: supercedent
This is like an independent pipeline, in that every item is
distinct, except that items are grouped by project and ref,
and only one item for each project-ref is processed at a
time. If more than one additional item is enqueued for the
project-ref, previously enqueued items which have not started
processing are removed.
In other words, this pipeline manager will only run jobs for
the most recent item enqueued for a given project-ref.
This may be useful for post-merge pipelines which perform
artifact builds where only the latest version is of use. In
these cases, build resources can be conserved by avoiding
building intermediate versions.
.. note:: Since this pipeline filters out intermediate buildsets,
   using it in combination with file filters on jobs is dangerous.
   In that case, jobs of intermediate buildsets may be unexpectedly
   skipped entirely. If file filters are needed, the ``independent``
   or ``serial`` pipeline managers should be used.
.. attr:: post-review
:default: false
This is a boolean which indicates that this pipeline executes
code that has been reviewed. Some jobs perform actions which
should not be permitted with unreviewed code. When this value
is ``false`` those jobs will not be permitted to run in the
pipeline. If a pipeline is designed only to be used after
changes are reviewed or merged, set this value to ``true`` to
permit such jobs.
For more information, see :ref:`secret` and
:attr:`job.post-review`.
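For example, a minimal sketch of a pipeline intended only for merged
code, and therefore marked ``post-review`` (the pipeline name,
manager, and trigger event shown here are illustrative):

.. code-block:: yaml

   - pipeline:
       name: deploy
       manager: serial
       post-review: true
       trigger:
         my_gerrit:
           - event: change-merged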
.. attr:: description
This field may be used to provide a textual description of the
pipeline. It may appear in the status page or in documentation.
.. attr:: variant-description
:default: branch name
This field may be used to provide a textual description of the
variant. It may appear in the status page or in documentation.
.. attr:: success-message
:default: Build successful.
The introductory text in reports when all the voting jobs are
successful.
.. attr:: failure-message
:default: Build failed.
The introductory text in reports when at least one voting job
fails.
.. attr:: start-message
:default: Starting {pipeline.name} jobs.
The introductory text in reports when jobs are started.
Three replacement fields are available: ``status_url``, ``pipeline``
and ``change``.
.. attr:: enqueue-message
The introductory text in reports when an item is enqueued.
Empty by default.
.. attr:: merge-conflict-message
:default: Merge failed.
The introductory text in the message reported when a change
fails to merge with the current state of the repository.
Defaults to "Merge failed."
.. attr:: no-jobs-message
The introductory text in reports when an item is dequeued
without running any jobs. Empty by default.
.. attr:: dequeue-message
:default: Build canceled.
The introductory text in reports when an item is dequeued.
The dequeue message only applies if the item was dequeued without
a result.
.. attr:: footer-message
Supplies additional information after test results. Useful for
adding information about the CI system such as debugging and
contact details.
.. attr:: trigger
At least one trigger source must be supplied for each pipeline.
Triggers are not exclusive -- matching events may be placed in
multiple pipelines, and they will behave independently in each
of the pipelines they match.
Triggers are loaded from their connection name. The driver type
of the connection will dictate which options are available. See
:ref:`drivers`.
.. attr:: require
If this section is present, it establishes prerequisites for
any kind of item entering the Pipeline. Regardless of how the
item is to be enqueued (via any trigger or automatic dependency
resolution), the conditions specified here must be met or the
item will not be enqueued. These requirements may vary
depending on the source of the item being enqueued.
Requirements are loaded from their connection name. The driver
type of the connection will dictate which options are available.
See :ref:`drivers`.
.. attr:: reject
If this section is present, it establishes prerequisites that
can block an item from being enqueued. It can be considered a
negative version of :attr:`pipeline.require`.
Requirements are loaded from their connection name. The driver
type of the connection will dictate which options are available.
See :ref:`drivers`.
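As an illustrative sketch only, a gate pipeline might require a
positive code review and reject changes with a negative one. The
connection name ``my_gerrit`` is hypothetical and the exact
requirement options depend on the driver in use (see :ref:`drivers`):

.. code-block:: yaml

   - pipeline:
       name: gate
       manager: dependent
       require:
         my_gerrit:
           approval:
             - Code-Review: 2
       reject:
         my_gerrit:
           approval:
             - Code-Review: -2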
.. attr:: allow-other-connections
:default: true
If this is set to `false` then any change enqueued into the
pipeline (whether it is enqueued to run jobs or merely as a
dependency) must be from one of the connections specified in the
pipeline configuration (this includes any trigger, reporter, or
source requirement). When used in conjunction with
:attr:`pipeline.require`, this can ensure that pipeline
requirements are exhaustive.
.. attr:: supercedes
The name of a pipeline, or a list of names, that this pipeline
supercedes. When a change is enqueued in this pipeline, it will
be removed from the pipelines listed here. For example, a
:term:`gate` pipeline may supercede a :term:`check` pipeline so
that test resources are not spent running near-duplicate jobs
simultaneously.
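A minimal sketch of that arrangement (pipeline names are
illustrative):

.. code-block:: yaml

   - pipeline:
       name: gate
       manager: dependent
       supercedes: check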
.. attr:: dequeue-on-new-patchset
:default: true
Normally, if a new patchset is uploaded to a change that is in a
pipeline, the existing entry in the pipeline will be removed
(with jobs canceled and any dependent changes that can no longer
merge as well). To suppress this behavior (and allow jobs to
continue running), set this to ``false``.
.. attr:: ignore-dependencies
:default: false
In any kind of pipeline (dependent or independent), Zuul will
attempt to enqueue all dependencies ahead of the current change
so that they are tested together (independent pipelines report
the results of each change regardless of the results of changes
ahead). To ignore dependencies completely in an independent
pipeline, set this to ``true``. This option is ignored by
dependent pipelines.
.. attr:: precedence
:default: normal
Indicates how the build scheduler should prioritize jobs for
different pipelines. Each pipeline may have one precedence;
jobs for pipelines with a higher precedence will be run before
those with a lower one. The value should be one of ``high``,
``normal``, or ``low``. Default: ``normal``.
.. _reporters:
The following options configure :term:`reporters <reporter>`.
Reporters are complementary to triggers; where a trigger is an
event on a connection which causes Zuul to enqueue an item, a
reporter is the action performed on a connection when an item is
dequeued after its jobs complete. The actual syntax for a reporter
is defined by the driver which implements it. See :ref:`drivers`
for more information.
.. attr:: success
Describes where Zuul should report to if all the jobs complete
successfully. This section is optional; if it is omitted, Zuul
will run jobs and do nothing on success -- it will not report at
all. If the section is present, the listed :term:`reporters
<reporter>` will be asked to report on the jobs. The reporters
are listed by their connection name. The options available
depend on the driver for the supplied connection.
.. attr:: failure
These reporters describe what Zuul should do if at least one job
fails.
.. attr:: merge-conflict
These reporters describe what Zuul should do if it is unable to
merge the patchset into the current state of the target
branch. If no merge-conflict reporters are listed then the
``failure`` reporters will be used.
.. attr:: config-error
These reporters describe what Zuul should do if it encounters a
configuration error while trying to enqueue the item. If no
config-error reporters are listed then the ``failure`` reporters
will be used.
.. attr:: enqueue
These reporters describe what Zuul should do when an item is
enqueued into the pipeline. This may be used to indicate to a
system or user that Zuul is aware of the triggering event even
though it has not evaluated whether any jobs will run.
.. attr:: start
These reporters describe what Zuul should do when jobs start
running for an item in the pipeline. This can be used, for
example, to reset a previously reported result.
.. attr:: no-jobs
These reporters describe what Zuul should do when an item is
dequeued from a pipeline without running any jobs. This may be
used to indicate to a system or user that the pipeline is not
relevant for a change.
.. attr:: disabled
These reporters describe what Zuul should do when a pipeline is
disabled. See ``disable-after-consecutive-failures``.
.. attr:: dequeue
These reporters describe what Zuul should do if an item is
dequeued. The dequeue reporters will only apply if the item
was dequeued without a result.
The following options can be used to alter Zuul's behavior to
mitigate situations in which jobs are failing frequently (perhaps
due to a problem with an external dependency, or unusually high
non-deterministic test failures).
.. attr:: disable-after-consecutive-failures
If set, a pipeline can enter a *disabled* state if too many
changes in a row fail. When this value is exceeded the pipeline
will stop reporting to any of the **success**, **failure** or
**merge-conflict** reporters and instead only report to the
**disabled** reporters. (No **start** reports are made when a
pipeline is disabled).
.. attr:: window
:default: 20
Dependent pipeline managers only. Zuul can rate limit dependent
pipelines in a manner similar to TCP flow control. Jobs are
only started for items in the queue if they are within the
actionable window for the pipeline. The initial length of this
window is configurable with this value. The value given should
be a positive integer value. A value of ``0`` disables rate
limiting on the :value:`dependent pipeline manager
<pipeline.manager.dependent>`.
.. attr:: window-floor
:default: 3
Dependent pipeline managers only. This is the minimum value for
the window described above. Should be a positive non-zero
integer value.
.. attr:: window-increase-type
:default: linear
Dependent pipeline managers only. This value describes how the
window should grow when changes are successfully merged by Zuul.
.. value:: linear
Indicates that **window-increase-factor** should be added to
the previous window value.
.. value:: exponential
Indicates that **window-increase-factor** should be
multiplied against the previous window value and the result
will become the window size.
.. attr:: window-increase-factor
:default: 1
Dependent pipeline managers only. The value to be added or
multiplied against the previous window value to determine the
new window after successful change merges.
.. attr:: window-decrease-type
:default: exponential
Dependent pipeline managers only. This value describes how the
window should shrink when changes are not able to be merged by
Zuul.
.. value:: linear
Indicates that **window-decrease-factor** should be
subtracted from the previous window value.
.. value:: exponential
Indicates that the previous window value should be divided by
**window-decrease-factor** and the result will become the
window size.
.. attr:: window-decrease-factor
:default: 2
:value:`Dependent pipeline managers
<pipeline.manager.dependent>` only. The value to be subtracted
or divided against the previous window value to determine the
new window after unsuccessful change merges.
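The following sketch shows the window options together on a dependent
pipeline; the values are simply the documented defaults written out
explicitly:

.. code-block:: yaml

   - pipeline:
       name: gate
       manager: dependent
       window: 20
       window-floor: 3
       window-increase-type: linear
       window-increase-factor: 1
       window-decrease-type: exponential
       window-decrease-factor: 2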
.. _nodeset:
Nodeset
=======
A Nodeset is a named collection of nodes for use by a job. Jobs may
specify what nodes they require individually, however, by defining
groups of node types once and referring to them by name, job
configuration may be simplified.
Nodesets, like most configuration items, are unique within a tenant,
though a nodeset may be defined on multiple branches of the same
project as long as the contents are the same. This is to aid in
branch maintenance, so that creating a new branch based on an existing
branch will not immediately produce a configuration error.
.. code-block:: yaml

   - nodeset:
       name: nodeset1
       nodes:
         - name: controller
           label: controller-label
         - name: compute1
           label: compute-label
         - name:
             - compute2
             - web
           label: compute-label
       groups:
         - name: ceph-osd
           nodes:
             - controller
         - name: ceph-monitor
           nodes:
             - controller
             - compute1
             - compute2
         - name: ceph-web
           nodes:
             - web
Nodesets may also be used to express that Zuul should use the first of
multiple alternative node configurations to run a job. When a Nodeset
specifies a list of :attr:`nodeset.alternatives`, Zuul will request the
first Nodeset in the series, and if allocation fails for any reason,
Zuul will re-attempt the request with the subsequent Nodeset and so
on. The first Nodeset which is successfully supplied by Nodepool will
be used to run the job. An example of such a configuration follows.
.. code-block:: yaml

   - nodeset:
       name: fast-nodeset
       nodes:
         - label: fast-label
           name: controller

   - nodeset:
       name: slow-nodeset
       nodes:
         - label: slow-label
           name: controller

   - nodeset:
       name: fast-or-slow
       alternatives:
         - fast-nodeset
         - slow-nodeset
In the above example, a job that requested the `fast-or-slow` nodeset
would receive `fast-label` nodes if a provider was able to supply
them, otherwise it would receive `slow-label` nodes. A Nodeset may
specify nodes and groups, or alternative nodesets, but not both.
.. attr:: nodeset
A Nodeset requires two attributes:
.. attr:: name
:required:
The name of the Nodeset, to be referenced by a :ref:`job`.
This is required when defining a standalone Nodeset in Zuul.
When defining an in-line anonymous nodeset within a job
definition, this attribute should be omitted.
.. attr:: nodes
This attribute is required unless `alternatives` is supplied.
A list of node definitions, each of which has the following format:
.. attr:: name
:required:
The name of the node. This will appear in the Ansible inventory
for the job.
This can also be given as a list of strings. If so, then the hosts in
the Ansible inventory will share a common ansible_host address.
.. attr:: label
:required:
The Nodepool label for the node. Zuul will request a node with
this label.
.. attr:: groups
Additional groups can be defined which are accessible from the ansible
playbooks.
.. attr:: name
:required:
The name of the group to be referenced by an ansible playbook.
.. attr:: nodes
:required:
The nodes that shall be part of the group. This is specified as a list
of strings.
.. attr:: alternatives
:type: list
A list of alternative nodesets for which requests should be
attempted in series. The first request which succeeds will be
used for the job.
The items in the list may be either strings, in which case they
refer to other Nodesets within the layout, or they may be a
dictionary which is a nested anonymous Nodeset definition. The
two types (strings or nested definitions) may be mixed.
An alternative Nodeset definition may in turn refer to other
alternative nodeset definitions. In this case, the tree of
definitions will be flattened in a breadth-first manner to
create the ordered list of alternatives.
A Nodeset which specifies alternatives may not also specify
nodes or groups (this attribute is exclusive with
:attr:`nodeset.nodes` and :attr:`nodeset.groups`).
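As a sketch of mixing the two item types, an alternatives list may
combine a named Nodeset with a nested anonymous definition (the
nodeset name and label below are illustrative):

.. code-block:: yaml

   - nodeset:
       name: fast-or-fallback
       alternatives:
         - fast-nodeset
         - nodes:
             - name: controller
               label: fallback-label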
.. _semaphore:
Semaphore
=========
Semaphores can be used to restrict the number of certain jobs which
are running at the same time. This may be useful for jobs which
access shared or limited resources. A semaphore has a value which
represents the maximum number of jobs which use that semaphore at the
same time.
Semaphores, like most configuration items, are unique within a tenant,
though a semaphore may be defined on multiple branches of the same
project as long as the value is the same. This is to aid in branch
maintenance, so that creating a new branch based on an existing branch
will not immediately produce a configuration error.
Zuul also supports global semaphores (see :ref:`global_semaphore`)
which may only be created by the Zuul administrator, but can be used
to coordinate resources across multiple tenants.
Semaphores are never subject to dynamic reconfiguration. If the value
of a semaphore is changed, it will take effect only when the change
where it is updated is merged. However, Zuul will attempt to validate
the configuration of semaphores in proposed updates, even if they
aren't used.
An example usage of semaphores follows:
.. code-block:: yaml

   - semaphore:
       name: semaphore-foo
       max: 5

   - semaphore:
       name: semaphore-bar
       max: 3
.. attr:: semaphore
The following attributes are available:
.. attr:: name
:required:
The name of the semaphore, referenced by jobs.
.. attr:: max
:default: 1
The maximum number of running jobs which can use this semaphore.
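A minimal sketch of a job using one of the semaphores defined above
via :attr:`job.semaphores` (the job name is illustrative):

.. code-block:: yaml

   - job:
       name: resource-limited-job
       semaphores: semaphore-foo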
.. _project:
Project
=======
A project corresponds to a source code repository with which Zuul is
configured to interact. The main responsibility of the project
configuration item is to specify which jobs should run in which
pipelines for a given project. Within each project definition, a
section for each :ref:`pipeline <pipeline>` may appear. This
project-pipeline definition is what determines how a project
participates in a pipeline.
Multiple project definitions may appear for the same project (for
example, in a central :term:`config projects <config-project>` as well
as in a repo's own ``.zuul.yaml``). In this case, all of the project
definitions for the relevant branch are combined (the jobs listed in
all of the matching definitions will be run). If a project definition
appears in a :term:`config-project`, it will apply to all branches of
the project. If it appears in a branch of an
:term:`untrusted-project` it will only apply to changes on that
branch. In the case of an item which does not have a branch (for
example, a tag), all of the project definitions will be combined.
Consider the following project definition::

    - project:
        name: yoyodyne
        queue: integrated
        check:
          jobs:
            - check-syntax
            - unit-tests
        gate:
          jobs:
            - unit-tests
            - integration-tests
The project has two project-pipeline stanzas, one for the ``check``
pipeline, and one for ``gate``. Each specifies which jobs should run
when a change for that project enters the respective pipeline -- when
a change enters ``check``, the ``check-syntax`` and ``unit-tests`` jobs
are run.
Pipelines which use the dependent pipeline manager (e.g., the ``gate``
example shown earlier) maintain separate queues for groups of
projects. When Zuul serializes a set of changes which represent
future potential project states, it must know about all of the
projects within Zuul which may have an effect on the outcome of the
jobs it runs. If project *A* uses project *B* as a library, then Zuul
must be told about that relationship so that it knows to serialize
changes to A and B together, so that it does not merge a change to B
while it is testing a change to A.
Zuul could simply assume that all projects are related, or even infer
relationships by which projects a job indicates it uses, however, in a
large system that would become unwieldy very quickly, and
unnecessarily delay changes to unrelated projects. To allow for
flexibility in the construction of groups of related projects, the
change queues used by dependent pipeline managers are specified
manually. To group two or more related projects into a shared queue
for a dependent pipeline, set the ``queue`` parameter to the same
value for those projects.
The ``gate`` project-pipeline definition above specifies that this
project participates in the ``integrated`` shared queue for that
pipeline.
.. attr:: project
The following attributes may appear in a project:
.. attr:: name
The name of the project. If Zuul is configured with two or more
unique projects with the same name, the canonical hostname for
the project should be included (e.g., `git.example.com/foo`).
This can also be a regex. In this case the regex must start with ``^``
and match the full project name following the same rule as name without
regex. If not given, it is implicitly derived from the project where this
is defined.
.. attr:: templates
A list of :ref:`project-template` references; the
project-pipeline definitions of each Project Template will be
applied to this project. If more than one template includes
jobs for a given pipeline, they will be combined, as will any
jobs specified in project-pipeline definitions on the project
itself.
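A brief sketch of how a template and a project combine; the template
name and job names are illustrative (see :ref:`project-template`):

.. code-block:: yaml

   - project-template:
       name: standard-checks
       check:
         jobs:
           - unit-tests

   - project:
       name: yoyodyne
       templates:
         - standard-checks
       check:
         jobs:
           - check-syntax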
.. attr:: default-branch
:default: master
The name of a branch that Zuul should check out in jobs if no
better match is found. Typically Zuul will check out the branch
which matches the change under test, or if a job has specified
an :attr:`job.override-checkout`, it will check that out.
However, if there is no matching or override branch, then Zuul
will checkout the default branch.
Each project may only have one ``default-branch`` therefore Zuul
will use the first value that it encounters for a given project
(regardless of in which branch the definition appears). It may
not appear in a :ref:`project-template` definition.
.. attr:: merge-mode
:default: (driver specific)
The merge mode which is used by Git for this project. Be sure
this matches the merge mode used by the remote system which
performs merges (e.g., Gerrit). The requested merge mode will
also be used by the
GitHub and GitLab drivers when performing merges.
The default is :value:`project.merge-mode.merge` for all drivers
except Gerrit, where the default is
:value:`project.merge-mode.merge-resolve`.
Each project may only have one ``merge-mode`` therefore Zuul
will use the first value that it encounters for a given project
(regardless of in which branch the definition appears). It may
not appear in a :ref:`project-template` definition.
It must be one of the following values:
.. value:: merge
Uses the default git merge strategy (recursive). This maps to
the merge mode ``merge`` in GitHub and GitLab.
.. value:: merge-resolve
Uses the resolve git merge strategy. This is a very
conservative merge strategy which most closely matches the
behavior of Gerrit. This maps to the merge mode ``merge`` in
GitHub and GitLab.
.. value:: cherry-pick
Cherry-picks each change onto the branch rather than
performing any merges. This is not supported by GitHub and GitLab.
.. value:: squash-merge
Squash merges each change onto the branch. This maps to the
merge mode ``squash`` in GitHub and GitLab.
.. value:: rebase
Rebases the changes onto the branch. This is only supported
by GitHub and maps to the ``rebase`` merge mode (but
does not alter committer information in the way that GitHub
does in the repos that Zuul prepares for jobs).
.. attr:: vars
:default: None
A dictionary of variables to be made available for all jobs in
all pipelines of this project. For more information see
:ref:`variable inheritance <user_jobs_variable_inheritance>`.
.. attr:: queue
This specifies the
name of the shared queue this project is in. Any projects
which interact with each other in tests should be part of the
same shared queue in order to ensure that they don't merge
changes which break the others. This is a free-form string;
just set the same value for each group of projects.
The name can refer to the name of a :attr:`queue` which allows
further configuration of the queue.
Each pipeline for a project can only belong to one queue,
therefore Zuul will use the first value that it encounters.
It need not appear in the first instance of a :attr:`project`
stanza; it may appear in secondary instances or even in a
:ref:`project-template` definition.
.. note:: This attribute is not evaluated speculatively; a change
   to this setting must be merged before it takes effect.
.. attr:: <pipeline>
Each pipeline that the project participates in should have an
entry in the project. The value for this key should be a
dictionary with the following format:
.. attr:: jobs
:required:
A list of jobs that should be run when items for this project
are enqueued into the pipeline. Each item of this list may
be a string, in which case it is treated as a job name, or it
may be a dictionary, in which case it is treated as a job
variant local to this project and pipeline. In that case,
the format of the dictionary is the same as the top level
:attr:`job` definition. Any attributes set on the job here
will override previous versions of the job.
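For example, a sketch mixing a plain job name with an in-line variant
that overrides an attribute for this project and pipeline only (job
names are illustrative):

.. code-block:: yaml

   - project:
       name: yoyodyne
       check:
         jobs:
           - unit-tests
           - integration-tests:
               voting: false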
.. attr:: debug
If this is set to `true`, Zuul will include debugging
information in reports it makes about items in the pipeline.
This should not normally be set, but in situations where it is
difficult to determine why Zuul did or did not run a certain
job, the additional information this provides may help.
.. attr:: fail-fast
:default: false
If this is set to `true`, Zuul will report a build failure
immediately and abort all still running builds. This can be used
to save resources in resource constrained environments at the cost
of potentially requiring multiple attempts if more than one problem
is present.
Once this is defined, it cannot be overridden afterwards, so it
can be forced to a specific value by, for example, defining it in
a config repo.
.. _project-template:
Project Template
================
A Project Template defines one or more project-pipeline definitions
which can be re-used by multiple projects.
A Project Template uses the same syntax as a :ref:`project`
definition, however, in the case of a template, the
:attr:`project.name` attribute does not refer to the name of a
project, but rather names the template so that it can be referenced in
a :ref:`project` definition.
Because Project Templates may be used outside of the projects where
they are defined, they honor the implied branch :ref:`pragmas <pragma>`
(unlike Projects). The same heuristics described in
:attr:`job.branches` that determine what implied branches a :ref:`job`
will receive apply to Project Templates (with the exception that it is
not possible to explicitly set a branch matcher on a Project Template).
.. _job:
Job
===
A job is a unit of work performed by Zuul on an item enqueued into a
pipeline. Items may run any number of jobs (which may depend on each
other). Each job is an invocation of an Ansible playbook with a
specific inventory of hosts. The actual tasks that are run by the job
appear in the playbook for that job while the attributes that appear in the
Zuul configuration specify information about when, where, and how the
job should be run.
Jobs in Zuul support inheritance. Any job may specify a single parent
job, and any attributes not set on the child job are collected from
the parent job. In this way, a configuration structure may be built
starting with very basic jobs which describe characteristics that all
jobs on the system should have, progressing through stages of
specialization before arriving at a particular job. A job may inherit
from any other job in any project (however, if the other job is marked
as :attr:`job.final`, jobs may not inherit from it). Generally,
attributes on child jobs will override (or completely replace)
attributes on the parent, however some attributes are combined. See
the documentation for individual attributes for these exceptions.
A job with no parent is called a *base job* and may only be defined in
a :term:`config-project`. Every other job must have a parent, and so
ultimately, all jobs must have an inheritance path which terminates at
a base job. Each tenant has a default parent job which will be used
if no explicit parent is specified.
Multiple job definitions with the same name are called variants.
These may have different selection criteria which indicate to Zuul
that, for instance, the job should behave differently on a different
git branch. Unlike inheritance, all job variants must be defined in
the same project. Some attributes of jobs marked :attr:`job.final`
may not be overridden.
When Zuul decides to run a job, it performs a process known as
freezing the job. Because any number of job variants may be
applicable, Zuul collects all of the matching variants and applies
them in the order they appeared in the configuration. The resulting
frozen job is built from attributes gathered from all of the
matching variants. In this way, exactly what is run is dependent on
the pipeline, project, branch, and content of the item.
In addition to the job's main playbook, each job may specify one or
more pre- and post-playbooks. These are run, in order, before and
after (respectively) the main playbook. They may be used to set up
and tear down resources needed by the main playbook. When combined
with inheritance, they provide powerful tools for job construction. A
job only has a single main playbook, and when inheriting from a
parent, the child's main playbook overrides (or replaces) the
parent's. However, the pre- and post-playbooks are appended and
prepended in a nesting fashion. So if a parent job and child job both
specified pre and post playbooks, the sequence of playbooks run would
be:
* parent pre-run playbook
* child pre-run playbook
* child playbook
* child post-run playbook
* parent post-run playbook
* parent cleanup-run playbook
Further inheritance would nest even deeper.
Here is an example of two job definitions:
.. code-block:: yaml

   - job:
       name: base
       pre-run: copy-git-repos
       post-run: copy-logs

   - job:
       name: run-tests
       parent: base
       nodeset:
         nodes:
           - name: test-node
             label: fedora
.. attr:: job
The following attributes are available on a job; all are optional
unless otherwise specified:
.. attr:: name
:required:
The name of the job. By default, Zuul looks for a playbook with
this name to use as the main playbook for the job. This name is
also referenced later in a project pipeline configuration.
.. TODO: figure out how to link the parent default to tenant.default.parent
.. attr:: parent
:default: Tenant default-parent
Specifies a job to inherit from. The parent job can be defined
in this or any other project. Any attributes not specified on a
job will be collected from its parent. If no value is supplied
here, the job specified by :attr:`tenant.default-parent` will be
used. If **parent** is set to ``null`` (which is only valid in
a :term:`config-project`), this is a :term:`base job`.
.. attr:: description
A textual description of the job. Not currently used directly
by Zuul, but it is used by the zuul-sphinx extension to Sphinx
to auto-document Zuul jobs (in which case it is interpreted as
ReStructuredText).
.. attr:: final
:default: false
To prevent other jobs from inheriting from this job, and also to
prevent changing execution-related attributes when this job is
specified in a project's pipeline, set this attribute to
``true``.
.. warning::
It is possible to circumvent the use of `final` in an
:term:`untrusted-project` by creating a change which
`Depends-On` a change which alters `final`. This limitation
does not apply to jobs in a :term:`config-project`.
.. attr:: protected
:default: false
When set to ``true`` only jobs defined in the same project may inherit
from this job. This includes changing execution-related attributes when
this job is specified in a project's pipeline. Once this is set to
``true`` it cannot be reset to ``false``.
.. warning::
It is possible to circumvent the use of `protected` in an
:term:`untrusted-project` by creating a change which
`Depends-On` a change which alters `protected`. This
limitation does not apply to jobs in a
:term:`config-project`.
.. attr:: abstract
:default: false
To indicate a job is not intended to be run directly, but
instead must be inherited from, set this attribute to ``true``.
Once this is set to ``true`` in a job it cannot be reset to
``false`` within the same job by other variants; however jobs
which inherit from it can (and by default do) reset it to
``false``.
.. warning::
It is possible to circumvent the use of `abstract` in an
:term:`untrusted-project` by creating a change which
`Depends-On` a change which alters `abstract`. This
limitation does not apply to jobs in a
:term:`config-project`.
.. attr:: intermediate
:default: false
An intermediate job must be inherited by an abstract job; it can
not be inherited by a final job. All ``intermediate`` jobs
*must* also be ``abstract``; a configuration error will be
raised if not.
Once this is set to ``true`` in a job it cannot be reset to
``false`` within the same job by other variants; however jobs
which inherit from it can (and by default do) reset it to
``false``.
For example, you may define a base abstract job `foo` and create
two abstract jobs that inherit from `foo` called
`foo-production` and `foo-development`. If it would be an error
to accidentally inherit from the base job `foo` instead of
choosing one of the two variants, `foo` could be marked as
``intermediate``.
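A sketch of that arrangement, using the job names from the example
above:

.. code-block:: yaml

   - job:
       name: foo
       abstract: true
       intermediate: true

   - job:
       name: foo-production
       abstract: true
       parent: foo

   - job:
       name: foo-development
       abstract: true
       parent: foo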
.. attr:: success-message
:default: SUCCESS
Normally when a job succeeds, the string ``SUCCESS`` is reported
as the result for the job. If set, this option may be used to
supply a different string.
.. attr:: failure-message
:default: FAILURE
Normally when a job fails, the string ``FAILURE`` is reported as
the result for the job. If set, this option may be used to
supply a different string.
.. attr:: hold-following-changes
:default: false
In a dependent pipeline, this option may be used to indicate
that no jobs should start on any items which depend on the
current item until this job has completed successfully. This
may be used to conserve build resources, at the expense of
inhibiting the parallelization which speeds the processing of
items in a dependent pipeline.
.. attr:: voting
:default: true
Indicates whether the result of this job should be used in
determining the overall result of the item.
.. attr:: semaphore
A deprecated alias of :attr:`job.semaphores`.
.. attr:: semaphores
The name of a :ref:`semaphore` (or list of them) or
:ref:`global_semaphore` which should be acquired and released
when the job begins and ends. If the semaphore is at maximum
capacity, then Zuul will wait until it can be acquired before
starting the job. The format is either a string, a dictionary,
or a list of either of those in the case of multiple
semaphores. If it's a string it references a semaphore using the
default value for :attr:`job.semaphores.resources-first`.
The name of a semaphore can also be any string (without being
previously defined via a `semaphore` directive). In this case
an implicit semaphore with a maximum capacity of 1 is created.
If multiple semaphores are requested, the job will not start
until all have been acquired, and Zuul will wait until all are
available before acquiring any.
When inheriting jobs or applying variants, the list of
semaphores is extended (semaphores specified in a job definition
are added to any supplied by their parents).
.. attr:: name
:required:
The name of the referenced semaphore
.. attr:: resources-first
:default: False
By default a semaphore is acquired before the resources are
requested. However in some cases the user may want to run
cheap jobs as quickly as possible in a consecutive manner. In
this case `resources-first` can be enabled to request the
resources before locking the semaphore. This can lead to some
amount of blocked resources while waiting for the semaphore
so this should be used with caution.
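A sketch of the dictionary form with ``resources-first`` enabled
(the job and semaphore names are illustrative):

.. code-block:: yaml

   - job:
       name: quick-lint
       semaphores:
         - name: semaphore-foo
           resources-first: true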
.. attr:: tags
Metadata about this job. Tags are units of information attached
to the job; they do not affect Zuul's behavior, but they can be
used within the job to characterize the job. For example, a job
which tests a certain subsystem could be tagged with the name of
that subsystem, and if the job's results are reported into a
database, then the results of all jobs affecting that subsystem
could be queried. This attribute is specified as a list of
strings, and when inheriting jobs or applying variants, tags
accumulate in a set, so the result is always a set of all the
tags from all the jobs and variants used in constructing the
frozen job, with no duplication.
.. attr:: provides
A list of free-form strings which identifies resources provided
by this job which may be used by other jobs for other changes
using the :attr:`job.requires` attribute.
When inheriting jobs or applying variants, the list of
`provides` is extended (`provides` specified in a job definition
are added to any supplied by their parents).
.. attr:: requires
A list of free-form strings which identify resources which may
be provided by other jobs for other changes (via the
:attr:`job.provides` attribute) that are used by this job.
When Zuul encounters a job with a `requires` attribute, it
searches for those values in the `provides` attributes of any
jobs associated with any queue items ahead of the current
change. In this way, if a change uses either git dependencies
or a `Depends-On` header to indicate a dependency on another
change, Zuul will be able to determine that the parent change
affects the run-time environment of the child change. If such a
relationship is found, the job with `requires` will not start
until all of the jobs with matching `provides` have completed or
paused. Additionally, the :ref:`artifacts <return_artifacts>`
returned by the `provides` jobs will be made available to the
`requires` job.
When inheriting jobs or applying variants, the list of
`requires` is extended (`requires` specified in a job definition
are added to any supplied by their parents).
For example, a job which produces a builder container image in
one project that is then consumed by a container image build job
in another project might look like this:
.. code-block:: yaml

   - job:
       name: build-builder-image
       provides: images

   - job:
       name: build-final-image
       requires: images

   - project:
       name: builder-project
       check:
         jobs:
           - build-builder-image

   - project:
       name: final-project
       check:
         jobs:
           - build-final-image
.. attr:: secrets
A list of secrets which may be used by the job. A
:ref:`secret` is a named collection of private information
defined separately in the configuration. The secrets that
appear here must be defined in the same project as this job
definition.
Each item in the list may be supplied either as a string,
in which case it references the name of a :ref:`secret` definition,
or as a dict. If an element in this list is given as a dict, it
may have the following fields:
.. attr:: name
:required:
The name to use for the Ansible variable into which the secret
content will be placed.
.. attr:: secret
:required:
The name to use to find the secret's definition in the
configuration.
.. attr:: pass-to-parent
:default: false
A boolean indicating that this secret should be made
available to playbooks in parent jobs. Use caution when
setting this value -- parent jobs may be in different
projects with different security standards. Setting this to
true makes the secret available to those playbooks and
therefore subject to intentional or accidental exposure.
For example:
.. code-block:: yaml

   - secret:
       name: important-secret
       data:
         key: encrypted-secret-key-data

   - job:
       name: amazing-job
       secrets:
         - name: ssh_key
           secret: important-secret
will result in the following being passed as a variable to the playbooks
in ``amazing-job``:
.. code-block:: yaml

   ssh_key:
     key: decrypted-secret-key-data
.. attr:: nodeset
The nodes which should be supplied to the job. This parameter
may be supplied either as a string, in which case it references
a :ref:`nodeset` definition which appears elsewhere in the
configuration, or a dictionary, in which case it is interpreted
in the same way as a Nodeset definition, though the top-level
nodeset ``name`` attribute should be omitted (in essence, it is
an anonymous Nodeset definition unique to this job; the nodes
themselves still require names). See the :ref:`nodeset`
reference for the syntax to use in that case.
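A sketch of both forms follows; the job names and labels are
illustrative. The first job references a named Nodeset, the second
supplies an anonymous in-line definition:

.. code-block:: yaml

   - job:
       name: tests-on-named-nodeset
       nodeset: nodeset1

   - job:
       name: tests-on-anonymous-nodeset
       nodeset:
         nodes:
           - name: controller
             label: controller-label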
If a job has an empty (or no) :ref:`nodeset` definition, it will
still run and is able to perform limited actions within the Zuul
executor sandbox. Note that so-called "executor-only" jobs run with
an empty inventory, and hence Ansible's *implicit localhost*.
This means an executor-only playbook must be written to match
``localhost`` directly; i.e.
.. code-block:: yaml

   - hosts: localhost
     tasks:
       ...
not with ``hosts: all`` (as this does not match the implicit
localhost and the playbook will not run). There are also
caveats around things like enumerating the magic variable
``hostvars`` in this situation. For more information see the
Ansible `implicit localhost documentation
<https://docs.ansible.com/ansible/latest/inventory/implicit_localhost.html>`__.
A useful example of executor-only jobs is saving resources by
directly utilising the prior results from testing a committed
change. For example, a review which updates documentation
source files would generally test validity by building a
documentation tree. When this change is committed, the
pre-built output can be copied in an executor-only job directly
to the publishing location in a post-commit *promote* pipeline;
avoiding having to use a node to rebuild the documentation for
final publishing.
.. attr:: override-checkout
When Zuul runs jobs for a proposed change, it normally checks
out the branch associated with that change on every project
present in the job. If jobs are running on a ref (such as a
branch tip or tag), then that ref is normally checked out. This
attribute is used to override that behavior and indicate that
this job should, regardless of the branch for the queue item,
use the indicated ref (i.e., branch or tag) instead. This can
be used, for example, to run a previous version of the software
(from a stable maintenance branch) under test even if the change
being tested applies to a different branch (this is only likely
to be useful if there is some cross-branch interaction with some
component of the system being tested). See also the
project-specific :attr:`job.required-projects.override-checkout`
attribute to apply this behavior to a subset of a job's
projects.
This value is also used to help select which variants of a job
to run. If ``override-checkout`` is set, then Zuul will use
this value instead of the branch of the item being tested when
collecting jobs to run.
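A minimal sketch; the job and branch names are illustrative:

.. code-block:: yaml

   - job:
       name: test-against-stable
       override-checkout: stable/1.0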
.. attr:: timeout
The time in seconds that the job should be allowed to run before
it is automatically aborted and failure is reported. If no
timeout is supplied, the job may run indefinitely. Supplying a
timeout is highly recommended.
This timeout only applies to the pre-run and run playbooks in a
job.
.. attr:: post-timeout
The time in seconds that each post playbook should be allowed to run
before it is automatically aborted and failure is reported. If no
post-timeout is supplied, the job may run indefinitely. Supplying a
post-timeout is highly recommended.
The post-timeout is handled separately from the above timeout because
the post playbooks are typically where you will copy job logs.
In the event of the pre-run or run playbooks timing out we want to
do our best to copy the job logs in the post-run playbooks.
.. attr:: attempts
:default: 3
When Zuul encounters an error running a job's pre-run playbook,
Zuul will stop and restart the job. Errors during the main or
post-run playbook phase of a job are not affected by this
parameter (they are reported immediately). This parameter
controls the number of attempts to make before an error is
reported.
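A sketch combining the timeout-related options above (the values are
illustrative):

.. code-block:: yaml

   - job:
       name: unit-tests
       timeout: 1800       # pre-run and run playbooks limited to 30 minutes
       post-timeout: 600   # each post-run playbook limited to 10 minutes
       attempts: 2         # stop retrying after two pre-run failures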
.. attr:: pre-run
The name of a playbook or list of playbooks to run before the
main body of a job. Values are either a string describing the
full path to the playbook in the repo where the job is defined,
or a dictionary described below.
When a job inherits from a parent, the child's pre-run playbooks
are run after the parent's. See :ref:`job` for more
information.
If the value is a dictionary, the following attributes are
available:
.. attr:: name
The path to the playbook relative to the root of the repo.
.. attr:: semaphore
The name of a :ref:`semaphore` (or list of them) or
:ref:`global_semaphore` which should be acquired and released
when the playbook begins and ends. If the semaphore is at
maximum capacity, then Zuul will wait until it can be
acquired before starting the playbook. The format is either a
string, or a list of strings.
If multiple semaphores are requested, the playbook will not
start until all have been acquired, and Zuul will wait until
all are available before acquiring any. The time spent
waiting for pre-run playbook semaphores is counted against
the :attr:`job.timeout`.
None of the semaphores specified for a playbook may also be
specified in the same job.
.. attr:: post-run
The name of a playbook or list of playbooks to run after the
main body of a job. Values are either a string describing the
full path to the playbook in the repo where the job is defined,
or a dictionary described below.
When a job inherits from a parent, the child's post-run playbooks
are run before the parent's. See :ref:`job` for more
information.
If the value is a dictionary, the following attributes are
available:
.. attr:: name
The path to the playbook relative to the root of the repo.
.. attr:: semaphore
The name of a :ref:`semaphore` (or list of them) or
:ref:`global_semaphore` which should be acquired and released
when the playbook begins and ends. If the semaphore is at
maximum capacity, then Zuul will wait until it can be
acquired before starting the playbook. The format is either a
string, or a list of strings.
If multiple semaphores are requested, the playbook will not
start until all have been acquired, and Zuul will wait until
all are available before acquiring any. The time spent
waiting for post-run playbook semaphores is counted against
the :attr:`job.post-timeout`.
None of the semaphores specified for a playbook may also be
specified in the same job.
.. attr:: cleanup-run
The name of a playbook or list of playbooks to run after job
execution. Values are either a string describing the full path
to the playbook in the repo where the job is defined, or a
dictionary described below.
The cleanup phase is performed regardless of the job's result,
even when the job is canceled. Cleanup results are not taken
into account when reporting the job result.
When a job inherits from a parent, the child's cleanup-run playbooks
are run before the parent's. See :ref:`job` for more
information.
There is a hard-coded five minute timeout for cleanup playbooks.
If the value is a dictionary, the following attributes are
available:
.. attr:: name
The path to the playbook relative to the root of the repo.
.. attr:: semaphore
The name of a :ref:`semaphore` (or list of them) or
:ref:`global_semaphore` which should be acquired and released
when the playbook begins and ends. If the semaphore is at
maximum capacity, then Zuul will wait until it can be
acquired before starting the playbook. The format is either a
string, or a list of strings.
If multiple semaphores are requested, the playbook will not
start until all have been acquired, and Zuul will wait until
all are available before acquiring any. The time spent
waiting for post-run playbook semaphores is counted against
the cleanup phase timeout.
None of the semaphores specified for a playbook may also be
specified in the same job.
.. attr:: run
The name of a playbook or list of playbooks for this job. If it
is not supplied, the parent's playbook will be used (and
likewise up the inheritance chain). Values are either a string
describing the full path to the playbook in the repo where the
job is defined, or a dictionary described below.
If the value is a dictionary, the following attributes are
available:
.. attr:: name
The path to the playbook relative to the root of the repo.
.. attr:: semaphore
The name of a :ref:`semaphore` (or list of them) or
:ref:`global_semaphore` which should be acquired and released
when the playbook begins and ends. If the semaphore is at
maximum capacity, then Zuul will wait until it can be
acquired before starting the playbook. The format is either a
string, or a list of strings.
If multiple semaphores are requested, the playbook will not
start until all have been acquired, and Zuul will wait until
all are available before acquiring any. The time spent
waiting for run playbook semaphores is counted against
the :attr:`job.timeout`.
None of the semaphores specified for a playbook may also be
specified in the same job.
Example:
.. code-block:: yaml

   run: playbooks/job-playbook.yaml

Or:

.. code-block:: yaml

   run:
     - name: playbooks/job-playbook.yaml
       semaphores: playbook-semaphore
.. attr:: ansible-split-streams
:default: False
Keep stdout/stderr of command and shell tasks separate (the Ansible
default behavior) instead of merging stdout and stderr.
Since version 3, Zuul has combined the stdout and stderr streams
in Ansible command tasks, but will soon switch to using the
normal Ansible behavior. In an upcoming release of Zuul, this
default will change to `True`, and in a later release, this
option will be removed altogether.
This option may be used in the interim to verify playbook
compatibility and facilitate upgrading to the new behavior.
.. attr:: ansible-version
The ansible version to use for all playbooks of the job. This can be
defined at the following layers of configuration where the first match
takes precedence:
* :attr:`job.ansible-version`
* :attr:`tenant.default-ansible-version`
* :attr:`scheduler.default_ansible_version`
* Zuul default version
The supported ansible versions are:
.. program-output:: zuul-manage-ansible -l
.. attr:: roles
.. code-block:: yaml
   :name: job-roles-example

   - job:
       name: myjob
       roles:
         - zuul: myorg/our-roles-project
         - zuul: myorg/ansible-role-foo
           name: foo
A list of Ansible roles to prepare for the job. Because a job
runs an Ansible playbook, any roles which are used by the job
must be prepared and installed by Zuul before the job begins.
This value is a list of dictionaries, each of which indicates
one of two types of roles: a Galaxy role, which is simply a role
that is installed from Ansible Galaxy, or a Zuul role, which is
a role provided by a project managed by Zuul. Zuul roles are
able to benefit from speculative merging and cross-project
dependencies when used by playbooks in untrusted projects.
Roles are added to the Ansible role path in the order they
appear on the job -- roles earlier in the list will take
precedence over those which follow.
This attribute is not overridden on inheritance or variance;
instead roles are added with each new job or variant. In the
case of job inheritance or variance, the roles used for each of
the playbooks run by the job will be only those which were
cumulatively defined up to that point in the inheritance
hierarchy where that playbook was added. If a child job
inherits from a parent which defines a pre and post playbook,
then the pre and post playbooks it inherits from the parent job
will run only with the roles that were defined on the parent.
If the child adds its own pre and post playbooks, then any roles
added by the child will be available to the child's playbooks.
This is so that a job which inherits from a parent does not
inadvertently alter the behavior of the parent's playbooks by
the addition of conflicting roles. Roles added by a child will
appear before those it inherits from its parent.
If a project used for a Zuul role has branches, the usual
process of selecting which branch should be checked out applies.
See :attr:`job.override-checkout` for a description of that
process and how to override it. As a special case, if the role
project is the project in which this job definition appears,
then the branch in which this definition appears will be used.
In other words, a playbook may not use a role from a different
branch of the same project.
If the job is run on a ref (for example, a branch tip or a tag)
then a different form of the branch selection process is used.
There is no single branch context available for selecting an
appropriate branch of the role's repo to check out, so only the
following are considered: First the ref specified by
:attr:`job.required-projects.override-checkout`, or
:attr:`job.override-checkout`. Then if the role repo is the
playbook repo, that branch is used; otherwise the project's
default branch is selected.
.. warning::
Keep this behavior difference in mind when designing jobs
that run on both branches and tags. If the same job must be
used in both circumstances, ensure that any roles from other
repos used by playbooks in the job originate only in
un-branched repositories. Otherwise different branches of
the role repo may be checked out.
A project which supplies a role may be structured in one of two
configurations: a bare role (in which the role exists at the
root of the project), or a contained role (in which the role
exists within the ``roles/`` directory of the project, perhaps
along with other roles). In the case of a contained role, the
``roles/`` directory of the project is added to the role search
path. In the case of a bare role, the project itself is added
to the role search path. In case the name of the project is not
the name under which the role should be installed (and therefore
referenced from Ansible), the ``name`` attribute may be used to
specify an alternate.
A job automatically has the project in which it is defined added
to the roles path if that project appears to contain a role or
``roles/`` directory. By default, the project is added to the
path under its own name, however, that may be changed by
explicitly listing the project in the roles list in the usual
way.
.. attr:: galaxy
.. warning:: Galaxy roles are not yet implemented.
The name of the role in Ansible Galaxy. If this attribute is
supplied, Zuul will search Ansible Galaxy for a role by this
name and install it. Mutually exclusive with ``zuul``;
either ``galaxy`` or ``zuul`` must be supplied.
.. attr:: zuul
The name of a Zuul project which supplies the role. Mutually
exclusive with ``galaxy``; either ``galaxy`` or ``zuul`` must
be supplied.
.. attr:: name
The installation name of the role. In the case of a bare
role, the role will be made available under this name.
Ignored in the case of a contained role.
.. attr:: required-projects
A list of other projects which are used by this job. Any Zuul
projects specified here will also be checked out by Zuul into
the working directory for the job. Speculative merging and
cross-repo dependencies will be honored. If there is not a
change for the project ahead in the pipeline, its repo state as
of the time the item was enqueued will be frozen and used for
all jobs for a given change (see :ref:`global_repo_state`).
This attribute is not overridden by inheritance; instead it is
the union of all applicable parents and variants (i.e., jobs can
expand but not reduce the set of required projects when they
inherit).
The format for this attribute is either a list of strings or
dictionaries. Strings are interpreted as project names; dictionaries,
if used, may have the following attributes:
.. attr:: name
:required:
The name of the required project.
.. attr:: override-checkout
When Zuul runs jobs for a proposed change, it normally checks
out the branch associated with that change on every project
present in the job. If jobs are running on a ref (such as a
branch tip or tag), then that ref is normally checked out.
This attribute is used to override that behavior and indicate
that this job should, regardless of the branch for the queue
item, use the indicated ref (i.e., branch or tag) instead,
for only this project. See also the
:attr:`job.override-checkout` attribute to apply the same
behavior to all projects in a job.
This value is also used to help select which variants of a
job to run. If ``override-checkout`` is set, then Zuul will
use this value instead of the branch of the item being tested
when collecting any jobs to run which are defined in this
project.
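For example, a hypothetical job might combine both forms (the project
names here are purely illustrative):

.. code-block:: yaml

   - job:
       name: integration-tests
       required-projects:
         - myorg/common-library
         - name: myorg/other-component
           override-checkout: stable/2.0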
.. attr:: vars
A dictionary of variables to supply to Ansible. When inheriting
from a job (or creating a variant of a job) vars are merged with
previous definitions. This means a variable definition with the
same name will override a previously defined variable, but new
variable names will be added to the set of defined variables.
When running a trusted playbook, the value of variables will be
frozen at the start of the job. Therefore if the value of the
variable is an Ansible Jinja template, it may only reference
values which are known at the start of the job, and its value
will not change. Untrusted playbooks dynamically evaluate
variables and are not limited by this restriction.
Un-frozen versions of all the original job variables are
available tagged with the ``!unsafe`` YAML tag under the
``unsafe_vars`` variable hierarchy. This tag prevents Ansible
from evaluating them as Jinja templates. For example, the job
variable `myvar` would be available under `unsafe_vars.myvar`.
Advanced users may force Ansible to evaluate these values, but
it is not recommended to do so except in the most controlled of
circumstances. They are almost impossible to render safely.
.. attr:: extra-vars
A dictionary of variables to supply to Ansible with higher
precedence than job, host, or group vars. Note that, despite the name,
this is not passed to Ansible using the `--extra-vars` flag.
.. attr:: host-vars
A dictionary of host variables to supply to Ansible. The keys
of this dictionary are node names as defined in a
:ref:`nodeset`, and the values are dictionaries of variables,
just as in :attr:`job.vars`.
.. attr:: group-vars
A dictionary of group variables to supply to Ansible. The keys
of this dictionary are node groups as defined in a
:ref:`nodeset`, and the values are dictionaries of variables,
just as in :attr:`job.vars`.
An example of three kinds of variables:
.. code-block:: yaml
- job:
name: variable-example
nodeset:
nodes:
- name: controller
label: fedora-27
- name: api1
label: centos-7
- name: api2
label: centos-7
groups:
- name: api
nodes:
- api1
- api2
vars:
foo: "this variable is visible to all nodes"
host-vars:
controller:
bar: "this variable is visible only on the controller node"
group-vars:
api:
baz: "this variable is visible on api1 and api2"
.. attr:: dependencies
A list of other jobs upon which this job depends. Zuul will not
start executing this job until all of its dependencies have
completed successfully or have been paused, and if one or more of
them fail, this job will not be run.
The format for this attribute is either a list of strings or
dictionaries. Strings are interpreted as job names; dictionaries, if
used, may have the following attributes:
.. attr:: name
:required:
The name of the required job.
.. attr:: soft
:default: false
A boolean value which indicates whether this job is a *hard*
or *soft* dependency. A *hard* dependency will cause an
error if the specified job is not run. That is, if job B
depends on job A, but job A is not run for any reason (for
example, it contains a file matcher which does not match),
then Zuul will not run any jobs and report an error. A
*soft* dependency will simply be ignored if the dependent job
is not run.
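For example, a hypothetical job might declare both a hard and a soft
dependency (the job names are illustrative):

.. code-block:: yaml

   - job:
       name: deploy
       dependencies:
         - build
         - name: lint
           soft: true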
.. attr:: allowed-projects
A list of Zuul projects which may use this job. By default, a
job may be used by any other project known to Zuul, however,
some jobs use resources or perform actions which are not
appropriate for other projects. In these cases, a list of
projects which are allowed to use this job may be supplied. If
this list is not empty, then it must be an exhaustive list of
all projects permitted to use the job. The current project
(where the job is defined) is not automatically included, so if
it should be able to run this job, then it must be explicitly
listed. This setting is ignored by :term:`config projects
<config-project>` -- they may add any job to any project's
pipelines. By default, all projects may use the job.
If a :attr:`job.secrets` is used in a job definition in an
:term:`untrusted-project`, `allowed-projects` is automatically
set to the current project only, and can not be overridden.
However, a :term:`config-project` may still add such a job to
any project's pipeline. Apply caution when doing so as other
projects may be able to expose the source project's secrets.
This attribute is not overridden by inheritance; instead it is
the intersection of all applicable parents and variants (i.e.,
jobs can reduce but not expand the set of allowed projects when
they inherit).
.. warning::
It is possible to circumvent the use of `allowed-projects` in
an :term:`untrusted-project` by creating a change which
`Depends-On` a change which alters `allowed-projects`. This
limitation does not apply to jobs in a
:term:`config-project`, or jobs in an `untrusted-project`
which use a secret.
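As an illustration, a job which publishes artifacts might be restricted
to a single (hypothetical) project:

.. code-block:: yaml

   - job:
       name: publish-artifacts
       allowed-projects:
         - myorg/release-tools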
.. attr:: post-review
:default: false
A boolean value which indicates whether this job may only be
used in pipelines where :attr:`pipeline.post-review` is
``true``. This is automatically set to ``true`` if this job
uses a :ref:`secret` and is defined in a :term:`untrusted-project`.
It may be explicitly set to obtain the same behavior for jobs
defined in :term:`config projects <config-project>`. Once this
is set to ``true`` anywhere in the inheritance hierarchy for a job,
it will remain set for all child jobs and variants (it can not be
set to ``false``).
.. warning::
It is possible to circumvent the use of `post-review` in an
:term:`untrusted-project` by creating a change which
`Depends-On` a change which alters `post-review`. This
limitation does not apply to jobs in a
:term:`config-project`, or jobs in an `untrusted-project`
which use a secret.
.. attr:: branches
A :ref:`regular expression <regex>` (or list of regular
expressions) which describe on what branches a job should run
(or in the case of variants, to alter the behavior of a job for
a certain branch).
This attribute is not inherited in the usual manner. Instead,
it is used to determine whether each variant on which it appears
will be used when running the job.
If none of the defined job variants contain a branches setting which
matches the branch of an item, then that job is not run for the item.
Otherwise, all of the job variants which match that branch are
used when freezing the job. However, if
:attr:`job.override-checkout` or
:attr:`job.required-projects.override-checkout` are set for a
project, Zuul will attempt to use the job variants which match
the values supplied in ``override-checkout`` for jobs defined in
those projects. This can be used to run a job defined in one
project on another project without a matching branch.
If a tag item is enqueued, we look up the branches which contain
the commit referenced by the tag. If any of those branches match a
branch matcher, the matcher is considered to have matched.
Additionally in the case of a tag item, if the expression
matches the full name of the ref (e.g., `refs/tags/foo`) then the
job is considered to match. The preceding section still
applies, so the definition must appear in a branch containing
the commit referenced by the tag to be considered, and then the
expression must also match the tag.
This example illustrates a job called *run-tests* which uses a
nodeset based on the current release of an operating system to
perform its tests, except when testing changes to the stable/2.0
branch, in which case it uses an older release:
.. code-block:: yaml
- job:
name: run-tests
nodeset: current-release
- job:
name: run-tests
branches: stable/2.0
nodeset: old-release
In some cases, Zuul uses an implied value for the branch
specifier if none is supplied:
* For a job definition in a :term:`config-project`, no implied
branch specifier is used. If no branch specifier appears, the
job applies to all branches.
* In the case of an :term:`untrusted-project`, if the project
has only one branch, no implied branch specifier is applied to
:ref:`job` definitions. If the project has more than one
branch, the branch containing the job definition is used as an
implied branch specifier.
This allows for the very simple and expected workflow where if a
project defines a job on the ``master`` branch with no branch
specifier, and then creates a new branch based on ``master``,
any changes to that job definition within the new branch only
affect that branch, and likewise, changes to the master branch
only affect it.
See :attr:`pragma.implied-branch-matchers` for how to override
this behavior on a per-file basis. The behavior may also be
configured by a Zuul administrator using
:attr:`tenant.untrusted-projects.<project>.implied-branch-matchers`.
.. attr:: files
This indicates that the job should only run on changes where the
specified files are modified. Unlike **branches**, this value
is subject to inheritance and overriding, so only the final
value is used to determine if the job should run. This is a
:ref:`regular expression <regex>` or list of regular expressions.
.. warning::
File filters will be ignored for refs that don't have any
files. This will be the case for merge commits (e.g. in a post
pipeline) or empty commits created with
``git commit --allow-empty`` (which can be used in order to
run all jobs).
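For example, a hypothetical job which should only run when documentation
changes might use a file matcher such as:

.. code-block:: yaml

   - job:
       name: build-docs
       files:
         - ^doc/.*$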
.. attr:: irrelevant-files
This is a negative complement of **files**. It indicates that
the job should run unless *all* of the files changed match this
list. In other words, if the regular expression ``docs/.*`` is
supplied, then this job will not run if the only files changed
are in the docs directory. A :ref:`regular expression <regex>`
or list of regular expressions.
.. warning::
File filters will be ignored for refs that don't have any
files. This will be the case for merge commits (e.g. in a post
pipeline) or empty commits created with
``git commit --allow-empty`` (which can be used in order to
run all jobs).
.. attr:: match-on-config-updates
:default: true
If this is set to ``true`` (the default), then the job's file
matchers are ignored if a change alters the job's configuration.
This means that changes to jobs with file matchers will be
self-testing without requiring that the file matchers include
the Zuul configuration file defining the job.
.. attr:: deduplicate
:default: auto
In the case of a dependency cycle where multiple changes within
the cycle run the same job, this setting indicates whether Zuul
should attempt to deduplicate the job. If it is deduplicated,
then the job will only run for one queue item within the cycle
and other items which run the same job will use the results of
that build.
This setting determines whether Zuul will consider deduplication.
If it is set to ``false``, Zuul will never attempt to
deduplicate the job. If it is set to ``auto`` (the default),
then Zuul will compare the job with other jobs of other queue
items in the dependency cycle, and if they are equivalent and
meet certain project criteria, it will deduplicate them.
The project criteria that Zuul considers under the ``auto``
setting are either:
* The job must specify :attr:`job.required-projects`.
* Or the queue items must be for the same project.
This is because of the following heuristic: if a job specifies
:attr:`job.required-projects`, it is most likely to be one which
operates in the same way regardless of which project the change
under test belongs to, therefore the result of the same job
running on two queue items in the same dependency cycle should
be the same. If a job does not specify
:attr:`job.required-projects` and runs with two different
projects under test, the outcome is likely different for those
two items.
If this is not true for a job (e.g., the job ignores the project
under test and interacts only with external resources)
:attr:`job.deduplicate` may be set to ``true`` to ignore the
heuristic and deduplicate anyway.
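For example, a hypothetical job that interacts only with external
resources might force deduplication:

.. code-block:: yaml

   - job:
       name: promote-artifacts
       deduplicate: true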
.. attr:: workspace-scheme
:default: golang
The scheme to use when placing git repositories in the
workspace.
.. value:: golang
This writes the repository into a directory based on the
canonical hostname and the full name of the repository. For
example::
src/example.com/organization/project
This is the default and, despite the name, is suitable and
recommended for any language.
.. value:: flat
This writes the repository into a directory based only on the
last component of the name. For example::
src/project
In some cases the ``golang`` scheme can produce collisions
(consider the projects `component` and
`component/subcomponent`). In this case it may be preferable
to use the ``flat`` scheme (which would produce repositories
at `component` and `subcomponent`).
Note, however, that this scheme may produce collisions with
`component` and `component/component`.
.. value:: unique
This writes the repository into a directory based on the
organization name and the ``urllib.parse.quote_plus`` formatted
project name. For example::
src/example.com/organization/organization%2Fproject
This scheme will produce unique workspace paths for every repository
and won't cause collisions.
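For example, a hypothetical job could select the ``flat`` scheme:

.. code-block:: yaml

   - job:
       name: build-component
       workspace-scheme: flat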
.. _pragma:
Pragma
======
The `pragma` item does not behave like the others. It can not be
included or excluded from configuration loading by the administrator,
and does not form part of the final configuration itself. It is used
to alter how the configuration is processed while loading.
A pragma item only affects the current file. The same file in another
branch of the same project will not be affected, nor any other files
or any other projects. The effect is global within that file --
pragma directives may not be set and then unset within the same file.
.. code-block:: yaml
- pragma:
implied-branch-matchers: False
.. attr:: pragma
The pragma item currently supports the following attributes:
.. attr:: implied-branch-matchers
This is a boolean, which, if set, may be used to enable
(``true``) or disable (``false``) the addition of implied branch
matchers to job and project-template definitions. Normally Zuul
decides whether to add these based on heuristics described in
:attr:`job.branches`. This attribute overrides that behavior.
This can be useful if a project has multiple branches, yet the
jobs defined in the master branch should apply to all branches.
The behavior may also be configured by a Zuul administrator
using
:attr:`tenant.untrusted-projects.<project>.implied-branch-matchers`.
This pragma overrides that setting if both are present.
Note that if a job contains an explicit branch matcher, it will
be used regardless of the value supplied here.
.. attr:: implied-branches
This is a list of :ref:`regular expressions <regex>`, just as
:attr:`job.branches`, which may be used to supply the value of
the implied branch matcher for all jobs and project-templates in
a file.
This may be useful if two projects share jobs but have
dissimilar branch names. If, for example, two projects have
stable maintenance branches with dissimilar names, but both
should use the same job variants, this directive may be used to
indicate that all of the jobs defined in the stable branch of
the first project may also be used for the stable branch of the
other. For example:
.. code-block:: yaml
- pragma:
implied-branches:
- stable/foo
- stable/bar
The above code, when added to the ``stable/foo`` branch of a
project would indicate that the job variants described in that
file should not only be used for changes to ``stable/foo``, but
also on changes to ``stable/bar``, which may be in another
project.
Note that if a job contains an explicit branch matcher, it will
be used regardless of the value supplied here.
If this is used in a branch, it should include that branch name
or changes on that branch may be ignored.
Note also that the presence of `implied-branches` does not
automatically set `implied-branch-matchers`. Zuul will still
decide if implied branch matchers are warranted at all, using
the heuristics described in :attr:`job.branches`, and only use
the value supplied here if that is the case. If you want to
declare specific implied branches on, for example, a
:term:`config-project` project (which normally would not use
implied branches), you must set `implied-branch-matchers` as
well.
.. _secret:
Secret
======
A Secret is a collection of private data for use by one or more jobs.
In order to maintain the security of the data, the values are usually
encrypted, however, data which are not sensitive may be provided
unencrypted as well for convenience.
A Secret may only be used by jobs defined within the same project.
Note that they can be used by any branch of that project, so if a
project's branches have different access controls, consider whether
all branches of that project are equally trusted before using secrets.
To use a secret, a :ref:`job` must specify the secret in
:attr:`job.secrets`. With one exception, secrets are bound to the
playbooks associated with the specific job definition where they were
declared. Additional pre or post playbooks which appear in child jobs
will not have access to the secrets, nor will playbooks which override
the main playbook (if any) of the job which declared the secret. This
protects against jobs in other repositories declaring a job with a
secret as a parent and then exposing that secret.
The exception to the above is if the
:attr:`job.secrets.pass-to-parent` attribute is set to true. In that
case, the secret is made available not only to the playbooks in the
current job definition, but to all playbooks in all parent jobs as
well. This allows for jobs which are designed to work with secrets
while leaving it up to child jobs to actually supply the secret. Use
this option with care, as it may allow the authors of parent jobs to
accidentally or intentionally expose secrets. If a secret with
`pass-to-parent` set in a child job has the same name as a secret
available to a parent job's playbook, the secret in the child job will
not override the parent, instead it will simply not be available to
that playbook (but will remain available to others).
It is possible to use secrets for jobs defined in :term:`config
projects <config-project>` as well as :term:`untrusted projects
<untrusted-project>`, however their use differs slightly. Because
playbooks in a config project which use secrets run in the
:term:`trusted execution context` where proposed changes are not used
in executing jobs, it is safe for those secrets to be used in all
types of pipelines. However, because playbooks defined in an
untrusted project are run in the :term:`untrusted execution context`
where proposed changes are used in job execution, it is dangerous to
allow those secrets to be used in pipelines which are used to execute
proposed but unreviewed changes. By default, pipelines are considered
`pre-review` and will refuse to run jobs which have playbooks that use
secrets in the untrusted execution context (including those subject to
:attr:`job.secrets.pass-to-parent` secrets) in order to protect
against someone proposing a change which exposes a secret. To permit
this (for instance, in a pipeline which only runs after code review),
the :attr:`pipeline.post-review` attribute may be explicitly set to
``true``.
In some cases, it may be desirable to prevent a job which is defined
in a config project from running in a pre-review pipeline (e.g., a job
used to publish an artifact). In these cases, the
:attr:`job.post-review` attribute may be explicitly set to ``true`` to
indicate the job should only run in post-review pipelines.
If a job with secrets is unsafe to be used by other projects, the
:attr:`job.allowed-projects` attribute can be used to restrict the
projects which can invoke that job. If a job with secrets is defined
in an `untrusted-project`, `allowed-projects` is automatically set to
that project only, and can not be overridden (though a
:term:`config-project` may still add the job to any project's pipeline
regardless of this setting; do so with caution as other projects may
expose the source project's secrets).
Secrets, like most configuration items, are unique within a tenant,
though a secret may be defined on multiple branches of the same
project as long as the contents are the same. This is to aid in
branch maintenance, so that creating a new branch based on an existing
branch will not immediately produce a configuration error.
When the values of secrets are passed to Ansible, the ``!unsafe`` YAML
tag is added which prevents them from being evaluated as Jinja
expressions. This is to avoid a situation where a child job might
expose a parent job's secrets via template expansion.
However, if it is known that a given secret value can be trusted, then
this limitation can be worked around by using the following construct
in a playbook:
.. code-block:: yaml
- set_fact:
unsafe_var_eval: "{{ hostvars['localhost'].secretname.var }}"
This will force an explicit template evaluation of the `var` attribute
on the `secretname` secret. The result will be stored in
``unsafe_var_eval``.
.. attr:: secret
The following attributes must appear on a secret:
.. attr:: name
:required:
The name of the secret, used in a :ref:`job` definition to
request the secret.
.. attr:: data
:required:
A dictionary which will be added to the Ansible variables
available to the job. The values can be any of the normal YAML
data types (strings, integers, dictionaries or lists) or
encrypted strings. See :ref:`encryption` for more information.
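A minimal sketch of a secret definition follows; the name and values are
illustrative, and sensitive values would normally be encrypted as
described in :ref:`encryption` rather than stored in plain text:

.. code-block:: yaml

   - secret:
       name: example-artifact-server
       data:
         username: artifact-uploader
         url: https://artifacts.example.com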
.. _queue:
Queue
=====
Projects that interact with each other should share a ``queue``.
This is especially used in a :value:`dependent <pipeline.manager.dependent>`
pipeline. The :attr:`project.queue` can optionally refer
to a specific :attr:`queue` object that can further configure the
behavior of the queue.
Here is an example ``queue`` configuration.
.. code-block:: yaml
- queue:
name: integrated
per-branch: false
.. attr:: queue
The attributes available on a queue are as follows (all are
optional unless otherwise specified):
.. attr:: name
:required:
This is used later in the project definition to refer to this queue.
.. attr:: per-branch
:default: false
By default, a queue defines a single shared queue for all projects and
branches that use it. This is especially important if projects want to
do upgrade tests between different branches in the :term:`gate`. If a
set of projects doesn't have this use case, the queue can be configured
to create a shared queue per branch for all projects. This can be useful
for large projects to improve the throughput of a gate pipeline, as it
results in shorter queues and thus less impact when a job fails in the
gate. Note that this means all projects to be gated must have aligned
branch names when using per-branch queues; otherwise, changes that
belong together end up in different queues.
.. attr:: allow-circular-dependencies
:default: false
Determines whether Zuul is allowed to process circular
dependencies between changes for this queue. All projects that
are part of a dependency cycle must share the same change queue.
If Zuul detects a dependency cycle it will ensure that every
change also includes all other changes that are part of the
cycle. However each change will still be a normal item in the
queue with its own jobs.
Reporting of success will be postponed until all items in the cycle
succeed. In the case of a failure in any of those items the whole cycle
will be dequeued.
An error message will be posted to all items of the cycle if some
items fail to report (e.g. merge failure when some items were already
merged). In this case the target branch(es) might be in a broken state.
In general, circular dependencies are considered to be an
antipattern since they add extra constraints to continuous
deployment systems. Additionally, due to the lack of atomicity
in merge operations in code review systems (this includes
Gerrit, even with submitWholeTopic set), it may be possible for
only part of a cycle to be merged. In that case, manual
interventions (such as reverting a commit, or bypassing gating
to force-merge the remaining commits) may be required.
.. warning:: If the remote system is able to merge the first but
unable to merge the second or later change in a
dependency cycle, then the gating system for a
project may be broken and may require an
intervention to correct.
.. attr:: dependencies-by-topic
:default: false
Determines whether Zuul should query the code review system for
changes under the same topic and treat those as a set of
circular dependencies.
Note that the Gerrit code review system supports a setting
called ``change.submitWholeTopic``, which, when set, will cause
all changes under the same topic to be merged simultaneously.
Zuul automatically observes this setting and treats all changes
to be submitted together as circular dependencies. If this
setting is enabled in gerrit, do not enable
``dependencies-by-topic`` in associated Zuul queues.
Because ``change.submitWholeTopic`` is applied system-wide in
Gerrit, some Zuul users may wish to emulate the behavior for
some projects without enabling it for all of Gerrit. In this
case, setting ``dependencies-by-topic`` will cause Zuul to
approximate the Gerrit behavior only for changes enqueued into
queues where this is set.
This setting requires :attr:`queue.allow-circular-dependencies`
to also be set. All of the caveats noted there continue to
apply.
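For example, a hypothetical queue which opts in to both behaviors might
look like:

.. code-block:: yaml

   - queue:
       name: integrated
       allow-circular-dependencies: true
       dependencies-by-topic: true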
#!/bin/bash
# Zuul needs ssl certs to be present to talk to zookeeper before it
# starts.
wait_for_certs() {
echo `date -Iseconds` "Wait for certs to be present"
for i in $(seq 1 120); do
# Introduced for 3.7.0: zookeeper shall wait for certificates to be available
# examples_zk_1.examples_default.pem is the last file created by ./tools/zk-ca.sh
[ -f /var/certs/keystores/examples_zk_1.examples_default.pem ] && return
sleep 1
done;
echo `date -Iseconds` "Timeout waiting for certs"
exit 1
}
wait_for_certs
#!/bin/bash
# Zuul needs to be able to connect to the remote systems in order to
# start.
wait_for_mysql() {
echo `date -Iseconds` "Wait for mysql to start"
for i in $(seq 1 120); do
cat < /dev/null > /dev/tcp/mysql/3306 && return
sleep 1
done
echo `date -Iseconds` "Timeout waiting for mysql"
exit 1
}
wait_for_gerrit() {
echo `date -Iseconds` "Wait for zuul user to be created"
for i in $(seq 1 120); do
[ $(curl -s -o /dev/null -w "%{http_code}" http://admin:secret@gerrit:8080/a/accounts/zuul/sshkeys) = "200" ] && return
sleep 1
done
echo `date -Iseconds` "Timeout waiting for gerrit"
exit 1
}
wait_for_mysql
wait_for_gerrit
:title: Pagure Driver
.. _pagure_driver:
Pagure
======
The Pagure driver supports sources, triggers, and reporters. It can
interact with the public Pagure.io service as well as site-local
installations of Pagure.
Configure Pagure
----------------
The user's API token configured in zuul.conf must have the following
ACL rights:
- "Merge a pull-request" set to on (optional, only for gating)
- "Flag a pull-request" set to on
- "Comment on a pull-request" set to on
- "Modify an existing project" set to on
Each project to be integrated with Zuul needs:
- "Web hook target" set to
http://<zuul-web>/zuul/api/connection/<conn-name>/payload
- "Pull requests" set to on
- "Open metadata access to all" set to off (optional, expected if approval
based on PR a metadata tag)
- "Minimum score to merge pull-request" set to the same value than
the score requierement (optional, expected if score requierement is
defined in a pipeline)
Furthermore, the user must be added as a project collaborator with the
**ticket** access level in order to be able to read the project's
webhook token. This token is used to validate the webhook's payload.
If Zuul is configured to merge pull requests, then the access level
must be **commit**.
Connection Configuration
------------------------
The supported options in ``zuul.conf`` connections are:
.. attr:: <pagure connection>
.. attr:: driver
:required:
.. value:: pagure
The connection must set ``driver=pagure`` for Pagure connections.
.. attr:: api_token
The user's API token with the ``Modify an existing project`` capability.
.. attr:: server
:default: pagure.io
Hostname of the Pagure server.
.. attr:: canonical_hostname
The canonical hostname associated with the git repos on the
Pagure server. Defaults to the value of :attr:`<pagure
connection>.server`. This is used to identify projects from
this connection by name and in preparing repos on the filesystem
for use by jobs. Note that Zuul will still only communicate
with the Pagure server identified by **server**; this option is
useful if users customarily use a different hostname to clone or
pull git repos so that when Zuul places them in the job's
working directory, they appear under this directory name.
.. attr:: baseurl
:default: https://{server}
Path to the Pagure web and API interface.
.. attr:: cloneurl
:default: https://{baseurl}
Path to the Pagure Git repositories. Used to clone.
.. attr:: app_name
:default: Zuul
Display name that will appear as the application name in front
of each CI status flag.
.. attr:: source_whitelist
:default: ''
A comma-separated list of source IP addresses from which webhook
calls are whitelisted. If the source is not whitelisted, then the
call payload's signature is verified using the project webhook
token. Zuul requires admin access to the project to read the token.
Whitelisting a source of hook calls allows Zuul to react to events
without any authorization. This setting should not be used in
production.
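A hypothetical connection entry in ``zuul.conf`` might look like the
following sketch (the connection name and token are placeholders):

.. code-block:: text

   [connection pagure.io]
   driver=pagure
   server=pagure.io
   baseurl=https://pagure.io
   api_token=XXXX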
Trigger Configuration
---------------------
Pagure webhook events can be configured as triggers.
A connection name with the Pagure driver can take multiple events with
the following options.
.. attr:: pipeline.trigger.<pagure source>
The dictionary passed to the Pagure pipeline ``trigger`` attribute
supports the following attributes:
.. attr:: event
:required:
The event from Pagure. Supported events are:
.. value:: pg_pull_request
.. value:: pg_pull_request_review
.. value:: pg_push
.. attr:: action
A :value:`pipeline.trigger.<pagure source>.event.pg_pull_request`
event will have associated action(s) to trigger from. The
supported actions are:
.. value:: opened
Pull request opened.
.. value:: changed
Pull request synchronized.
.. value:: closed
Pull request closed.
.. value:: comment
Comment added to pull request.
.. value:: status
Status set on pull request.
.. value:: tagged
Tag metadata set on pull request.
A :value:`pipeline.trigger.<pagure
source>.event.pg_pull_request_review` event will have associated
action(s) to trigger from. The supported actions are:
.. value:: thumbsup
Positive pull request review added.
.. value:: thumbsdown
Negative pull request review added.
.. attr:: comment
This is only used for ``pg_pull_request`` and ``comment`` actions. It
accepts a list of regexes that are searched for in the comment
string. If any of these regexes matches a portion of the comment
string the trigger is matched. ``comment: retrigger`` will
match when comments containing 'retrigger' somewhere in the
comment text are added to a pull request.
.. attr:: status
This is used for ``pg_pull_request`` and ``status`` actions. It
accepts a list of strings each of which matches the user setting
the status, the status context, and the status itself in the
format of ``status``. For example, ``success`` or ``failure``.
.. attr:: tag
This is used for ``pg_pull_request`` and ``tagged`` actions. It
accepts a list of strings and if one of them is part of the
event tags metadata then the trigger is matched.
.. attr:: ref
This is only used for ``pg_push`` events. This field is treated as
a regular expression and multiple refs may be listed. Pagure
always sends the full ref name, e.g. ``refs/tags/bar``, and this
string is matched against the regular expression.
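For illustration, a check-style pipeline trigger for a hypothetical
Pagure connection named ``pagure.io`` might look like:

.. code-block:: yaml

   - pipeline:
       name: check
       trigger:
         pagure.io:
           - event: pg_pull_request
             action:
               - opened
               - changed
           - event: pg_pull_request
             action: comment
             comment: (?i)^\s*recheck\s*$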
Reporter Configuration
----------------------
Zuul reports back to Pagure via the Pagure API. Available reports include
a PR comment containing the build results, a commit status on start,
success, and failure, and a merge of the PR itself. The status name,
description, and context are taken from the pipeline.
.. attr:: pipeline.<reporter>.<pagure source>
To report to Pagure, the dictionaries passed to any of the pipeline
:ref:`reporter<reporters>` attributes support the following
attributes:
.. attr:: status
String value (``pending``, ``success``, ``failure``) that the
reporter should set as the commit status on Pagure.
.. attr:: status-url
:default: web.status_url or the empty string
String value for a link url to set in the Pagure status. Defaults to the
zuul server status_url, or the empty string if that is unset.
.. attr:: comment
:default: true
Boolean value that determines if the reporter should add a
comment to the pipeline status to the Pagure Pull Request. Only
used for Pull Request based items.
.. attr:: merge
:default: false
Boolean value that determines if the reporter should merge the
pull Request. Only used for Pull Request based items.
Requirements Configuration
--------------------------
As described in :attr:`pipeline.require` pipelines may specify that items meet
certain conditions in order to be enqueued into the pipeline. These conditions
vary according to the source of the project in question. To supply
requirements for changes from a Pagure source named ``pagure``, create a
configuration such as the following:
.. code-block:: yaml
pipeline:
require:
pagure:
score: 1
merged: false
status: success
tags:
- gateit
This indicates that changes originating from the Pagure connection
must have a score of *1*, a CI status of *success*, and must not already
be merged.
.. attr:: pipeline.require.<pagure source>
The dictionary passed to the Pagure pipeline `require` attribute
supports the following attributes:
.. attr:: score
If present, the minimal score a Pull Request must have reached.
.. attr:: status
If present, the CI status a Pull Request must have.
.. attr:: merged
A boolean value (``true`` or ``false``) that indicates whether
the Pull Request must be merged or not in order to be enqueued.
.. attr:: open
A boolean value (``true`` or ``false``) that indicates whether
the Pull Request must be open or closed in order to be enqueued.
.. attr:: tags
If present, the list of tags a Pull Request must have.
Reference pipelines configuration
---------------------------------
Here is an example of standard pipelines you may want to define:
.. literalinclude:: /examples/pipelines/pagure-reference-pipelines.yaml
:language: yaml
:title: SMTP Driver
SMTP
====
The SMTP driver supports reporters only. It is used to send email
when items report.
Connection Configuration
------------------------
.. attr:: <smtp connection>
.. attr:: driver
:required:
.. value:: smtp
The connection must set ``driver=smtp`` for SMTP connections.
.. attr:: server
:default: localhost
SMTP server hostname or address to use.
.. attr:: port
:default: 25
SMTP server port.
.. attr:: default_from
:default: zuul
Who the email should appear to be sent from when emailing the report.
This can be overridden by individual pipelines.
.. attr:: default_to
:default: zuul
Who the report should be emailed to by default.
This can be overridden by individual pipelines.
.. attr:: user
Optional user name used to authenticate to the SMTP server. Used only in
conjunction with a password. If no password is present, this option is
ignored.
.. attr:: password
Optional password used to authenticate to the SMTP server.
.. attr:: use_starttls
:default: false
Issue a STARTTLS request to establish an encrypted channel after having
connected to the SMTP server.
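A sketch of a connection configuration in ``zuul.conf`` follows; the
connection name, server, and addresses are illustrative:

.. code-block:: text

   [connection outgoing_smtp]
   driver=smtp
   server=smtp.example.com
   port=25
   default_from=zuul@example.com
   default_to=ci-reports@example.com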
Reporter Configuration
----------------------
A simple email reporter is also available.
A :ref:`connection<connections>` that uses the smtp driver must be supplied to the
reporter. The connection also may specify a default *To* or *From*
address.
Each pipeline can overwrite the ``subject`` or the ``to`` or ``from`` address by
providing alternatives as arguments to the reporter. For example:
.. code-block:: yaml
- pipeline:
name: post-merge
success:
outgoing_smtp:
to: [email protected]
failure:
internal_smtp:
to: [email protected]
from: [email protected]
subject: Change {change} failed
.. attr:: pipeline.<reporter>.<smtp source>
To report via email, the dictionaries passed to any of the pipeline
:ref:`reporter<reporters>` attributes support the following
attributes:
.. attr:: to
The SMTP recipient address for the report. Multiple addresses
may be specified as one value separated by commas.
.. attr:: from
The SMTP sender address for the report.
.. attr:: subject
The Subject of the report email.
.. TODO: document subject string formatting.
:title: GitLab Driver
.. _gitlab_driver:
GitLab
======
The GitLab driver supports sources, triggers, and reporters. It can
interact with the public GitLab.com service as well as site-local
installations of GitLab.
Configure GitLab
----------------
Zuul needs to interact with projects by:
- receiving events via web-hooks
- performing actions via the API
web-hooks
^^^^^^^^^
Projects to be integrated with Zuul need to send events using webhooks.
This can be enabled at the Group level or Project level in "Settings/Webhooks":
- "URL" set to
``http://<zuul-web>/api/connection/<conn-name>/payload``
- "Merge request events" set to "on"
- "Push events" set to "on"
- "Tag push events" set to "on"
- "Comments" set to "on"
- Define a "Secret Token"
API
^^^
| Even though bot users exist: https://docs.gitlab.com/ce/user/project/settings/project_access_tokens.html#project-bot-users
| They are only available at project level.
In order to manage multiple projects using a single connection, Zuul needs
global access to projects, which can only be achieved by creating a specific
Zuul user. This user counts as a licensed seat.
The API token must be created in the user's Settings, under Access Tokens.
The Zuul user's API token configured in zuul.conf must have the following
ACL rights: "api".
Connection Configuration
------------------------
The supported options in ``zuul.conf`` connections are:
.. attr:: <gitlab connection>
.. attr:: driver
:required:
.. value:: gitlab
The connection must set ``driver=gitlab`` for GitLab connections.
.. attr:: api_token_name
The user's personal access token name (used if **cloneurl** is http(s)).
Set this parameter if authentication is required to clone projects.
.. attr:: api_token
The user's personal access token
.. attr:: webhook_token
The webhook secret token.
.. attr:: server
:default: gitlab.com
Hostname of the GitLab server.
.. attr:: canonical_hostname
The canonical hostname associated with the git repos on the
GitLab server. Defaults to the value of :attr:`<gitlab
connection>.server`. This is used to identify projects from
this connection by name and in preparing repos on the filesystem
for use by jobs. Note that Zuul will still only communicate
with the GitLab server identified by **server**; this option is
useful if users customarily use a different hostname to clone or
pull git repos so that when Zuul places them in the job's
working directory, they appear under this directory name.
.. attr:: baseurl
:default: https://{server}
Path to the GitLab web and API interface.
.. attr:: sshkey
Path to the SSH key to use (used if **cloneurl** is ssh).
.. attr:: cloneurl
:default: {baseurl}
Omit to clone using http(s) or set to ``ssh://git@{server}``.
If **api_token_name** is set and **cloneurl** is either omitted or is
set without credentials, **cloneurl** will be modified to include
credentials as follows: ``http(s)://<api_token_name>:<api_token>@<server>``.
If **cloneurl** is defined with credentials, it will be used as is,
without modification from the driver.
.. attr:: keepalive
:default: 60
TCP connection keepalive timeout; ``0`` disables.
.. attr:: disable_connection_pool
:default: false
Connection pooling improves performance and resource usage under
normal circumstances, but in adverse network conditions it can
be problematic. Set this to ``true`` to disable.
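A hypothetical connection entry in ``zuul.conf`` might look like the
following sketch (the tokens are placeholders):

.. code-block:: text

   [connection gitlab.com]
   driver=gitlab
   server=gitlab.com
   api_token=XXXX
   webhook_token=XXXX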
Trigger Configuration
---------------------
GitLab webhook events can be configured as triggers.
A connection name with the GitLab driver can take multiple events with
the following options.
.. attr:: pipeline.trigger.<gitlab source>
The dictionary passed to the GitLab pipeline ``trigger`` attribute
supports the following attributes:
.. attr:: event
:required:
The event from GitLab. Supported events are:
.. value:: gl_merge_request
.. value:: gl_push
.. attr:: action
A :value:`pipeline.trigger.<gitlab source>.event.gl_merge_request`
event will have associated action(s) to trigger from. The
supported actions are:
.. value:: opened
Merge request opened.
.. value:: changed
Merge request synchronized.
.. value:: merged
Merge request merged.
.. value:: comment
Comment added to merge request.
.. value:: approved
Merge request approved.
.. value:: unapproved
Merge request unapproved.
.. value:: labeled
Merge request labeled.
.. attr:: comment
This is only used for ``gl_merge_request`` and ``comment`` actions. It
accepts a list of regexes that are searched for in the comment
string. If any of these regexes matches a portion of the comment
string the trigger is matched. ``comment: retrigger`` will
match when comments containing 'retrigger' somewhere in the
comment text are added to a merge request.
.. attr:: labels
This is only used for ``gl_merge_request`` and ``labeled``
actions. It accepts a string or a list of strings representing labels
that must have been added for the event to match.
.. attr:: unlabels
This is only used for ``gl_merge_request`` and ``labeled``
actions. It accepts a string or a list of strings representing labels
that must have been removed for the event to match.
.. attr:: ref
This is only used for ``gl_push`` events. This field is treated as
a regular expression and multiple refs may be listed. GitLab
always sends the full ref name, e.g. ``refs/heads/bar``, and this
string is matched against the regular expression.
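For illustration, a check-style trigger for a hypothetical GitLab
connection named ``gitlab.com`` might look like:

.. code-block:: yaml

   - pipeline:
       name: check
       trigger:
         gitlab.com:
           - event: gl_merge_request
             action:
               - opened
               - changed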
Reporter Configuration
----------------------
Zuul reports back to GitLab via the API. Available reports include a Merge
Request comment containing the build results. The status name, description,
and context are taken from the pipeline.
.. attr:: pipeline.<reporter>.<gitlab source>
To report to GitLab, the dictionaries passed to any of the pipeline
:ref:`reporter<reporters>` attributes support the following
attributes:
.. attr:: comment
:default: true
Boolean value that determines if the reporter should add a
comment to the pipeline status to the GitLab Merge Request.
.. attr:: approval
Boolean value that determines whether to report *approve* or *unapprove*
into the merge request approval system. To set an approval, the Zuul user
must be a member of the project with the *Developer* or *Maintainer* role.
If not set, approval won't be reported.
.. attr:: merge
:default: false
Boolean value that determines if the reporter should merge the
Merge Request. To merge a Merge Request, the Zuul user must be a member of
the project with the *Developer* or *Maintainer* role. In the case of
*Developer*, the *Allowed to merge* setting in *protected branches* must be
set to *Developers + Maintainers*.
.. attr:: label
A string or list of strings, each representing a label name
which should be added to the merge request.
.. attr:: unlabel
A string or list of strings, each representing a label name
which should be removed from the merge request.
Requirements Configuration
--------------------------
As described in :attr:`pipeline.require` pipelines may specify that items meet
certain conditions in order to be enqueued into the pipeline. These conditions
vary according to the source of the project in question.
.. code-block:: yaml
pipeline:
require:
gitlab:
open: true
This indicates that changes originating from the GitLab connection must be
in the *opened* state (not merged yet).
.. attr:: pipeline.require.<gitlab source>
The dictionary passed to the GitLab pipeline `require` attribute
supports the following attributes:
.. attr:: open
A boolean value (``true`` or ``false``) that indicates whether
the Merge Request must be open in order to be enqueued.
.. attr:: merged
A boolean value (``true`` or ``false``) that indicates whether
the Merge Request must be merged or not in order to be enqueued.
.. attr:: approved
A boolean value (``true`` or ``false``) that indicates whether
the Merge Request must be approved or not in order to be enqueued.
.. attr:: labels
A list of labels a Merge Request must have in order to be enqueued.
Reference pipelines configuration
---------------------------------
Here is an example of standard pipelines you may want to define:
.. literalinclude:: /examples/pipelines/gitlab-reference-pipelines.yaml
:language: yaml
:title: Timer Driver
Timer
=====
The timer driver supports triggers only. It is used for configuring
pipelines so that jobs run at scheduled times. No connection
configuration is required.
Trigger Configuration
---------------------
Timers don't require a special connection or driver. Instead they can
simply be used by listing ``timer`` as the trigger.
This trigger will run based on a cron-style time specification. It
will enqueue an event into its pipeline for every project and branch
defined in the configuration. Any job associated with the pipeline
will run in response to that event.
Zuul implements the timer using `apscheduler`_; please check the
`apscheduler documentation`_ for more information about the syntax.
.. attr:: pipeline.trigger.timer
The timer trigger supports the following attributes:
.. attr:: time
:required:
The time specification in cron syntax. Only the 5 part syntax
is supported, not the symbolic names. Example: ``0 0 * * *``
runs at midnight.
An optional 6th part specifies seconds. The optional 7th part specifies
a jitter in seconds. This delays the trigger randomly, limited by
the specified value. Example: ``0 0 * * * * 60`` runs at
midnight or randomly up to 60 seconds later. The jitter is
applied individually to each project-branch combination.
.. warning::
Be aware that the day-of-week value differs from cron.
The first weekday is Monday (0), and the last is Sunday (6).
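For example, a hypothetical periodic pipeline might be triggered nightly:

.. code-block:: yaml

   - pipeline:
       name: periodic
       trigger:
         timer:
           - time: '0 0 * * *'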
.. _apscheduler: https://apscheduler.readthedocs.io/
.. _apscheduler documentation: https://apscheduler.readthedocs.io/en/3.x/modules/triggers/cron.html#module-apscheduler.triggers.cron
.. _drivers:
Drivers
=======
Drivers may support any of the following functions:
* Sources -- hosts git repositories for projects. Zuul can clone git
repos for projects and fetch refs.
* Triggers -- emits events to which Zuul may respond. Triggers are
configured in pipelines to cause changes or other refs to be
enqueued.
* Reporters -- outputs information when a pipeline is finished
processing an item.
Zuul includes the following drivers:
.. toctree::
:maxdepth: 2
gerrit
github
pagure
gitlab
git
mqtt
elasticsearch
smtp
timer
zuul
:title: Elasticsearch Driver
Elasticsearch
=============
The Elasticsearch driver supports reporters only. The purpose of the driver is
to export build and buildset results to an Elasticsearch index.
If the index does not exist in Elasticsearch then the driver will create it
with an appropriate mapping for static fields.
The driver can add a job's variables and any data returned to Zuul
via zuul_return into the `job_vars` and `job_returned_vars` fields
of the exported build doc, respectively. Elasticsearch will apply
dynamic data type detection to those fields.
Elasticsearch supports a number of different datatypes for the fields in a
document. Please refer to its `documentation`_.
The Elasticsearch reporter uses the new ES client, which only supports
the `current version`_ of Elasticsearch. The reporter has been tested
on an ES cluster running version 7. Lower versions may work, but we
cannot give any guarantee of that.
.. _documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html
.. _current version: https://www.elastic.co/support/eol
Connection Configuration
------------------------
The connection options for the Elasticsearch driver are:
.. attr:: <Elasticsearch connection>
.. attr:: driver
:required:
.. value:: elasticsearch
The connection must set ``driver=elasticsearch``.
.. attr:: uri
:required:
Database connection information in the form of a comma-separated
list of ``host:port``. The information can also include the protocol
(http/https) or the username and password required to authenticate to
Elasticsearch.
Example:
uri=elasticsearch1.domain:9200,elasticsearch2.domain:9200
or
uri=https://user:password@elasticsearch:9200
where the user and password are optional.
.. attr:: use_ssl
:default: true
Turn on SSL. This option is not required if you set ``https`` in
the uri param.
.. attr:: verify_certs
:default: true
Make sure we verify SSL certificates.
.. attr:: ca_certs
:default: ''
Path to CA certs on disk.
.. attr:: client_cert
:default: ''
Path to the PEM formatted SSL client certificate.
.. attr:: client_key
:default: ''
Path to the PEM formatted SSL client key.
Example of driver configuration:
.. code-block:: text
[connection elasticsearch]
driver=elasticsearch
uri=https://managesf.sftests.com:9200
Additional parameters for authenticating to the Elasticsearch server
can be found in the `client`_ class.
.. _client: https://github.com/elastic/elasticsearch-py/blob/master/elasticsearch/client/__init__.py
Reporter Configuration
----------------------
This reporter is used to store build results in an Elasticsearch index.
The Elasticsearch reporter does nothing on :attr:`pipeline.start` or
:attr:`pipeline.merge-conflict`; it only acts on
:attr:`pipeline.success` or :attr:`pipeline.failure` reporting stages.
.. attr:: pipeline.<reporter>.<elasticsearch source>
The reporter supports the following attributes:
.. attr:: index
:default: zuul
The Elasticsearch index to be used to index the data. To prevent
any name collisions between Zuul tenants, the tenant name is used as
the index name prefix. The real index name will be:
.. code-block::
<index-name>.<tenant-name>-<YYYY>.<MM>.<DD>
The index will be created if it does not exist.
.. attr:: index-vars
:default: false
Boolean value that determines if the reporter should add the job's
variables to the exported build doc. Note that secrets are not
included in the indexed variables.
.. attr:: index-returned-vars
:default: false
Boolean value that determines if the reporter should add the variables
returned via zuul_return to the exported build doc.
For example:
.. code-block:: yaml
- pipeline:
name: check
success:
elasticsearch:
index: 'zuul-index'
:title: Git Driver
Git
===
This driver can be used to load Zuul configuration from public Git repositories,
for instance from ``opendev.org/zuul/zuul-jobs``, which is suitable for use by
any Zuul system. It can also be used to trigger jobs from ``ref-updated`` events
in a pipeline.
Connection Configuration
------------------------
The supported options in ``zuul.conf`` connections are:
.. attr:: <git connection>
.. attr:: driver
:required:
.. value:: git
The connection must set ``driver=git`` for Git connections.
.. attr:: baseurl
Path to the base Git URL. Git repo names will be appended to it.
.. attr:: poll_delay
:default: 7200
The delay in seconds of the Git repositories polling loop.
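A sketch of a connection entry in ``zuul.conf``; the connection name is
illustrative:

.. code-block:: text

   [connection git-server]
   driver=git
   baseurl=https://opendev.org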
Trigger Configuration
---------------------
.. attr:: pipeline.trigger.<git source>
The dictionary passed to the Git pipeline ``trigger`` attribute
supports the following attributes:
.. attr:: event
:required:
Only ``ref-updated`` is supported.
.. attr:: ref
On ref-updated events, a ref such as ``refs/heads/master`` or
``^refs/tags/.*$``. This field is treated as a regular expression,
and multiple refs may be listed.
.. attr:: ignore-deletes
:default: true
When a ref is deleted, a ref-updated event is emitted with a
newrev of all zeros specified. The ``ignore-deletes`` field is a
boolean value that describes whether or not these newrevs
trigger ref-updated events.
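For illustration, a pipeline might trigger on branch updates from a
hypothetical Git connection named ``git-server``:

.. code-block:: yaml

   - pipeline:
       name: post
       trigger:
         git-server:
           - event: ref-updated
             ref: ^refs/heads/.*$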
:title: Zuul Driver
Zuul
====
The Zuul driver supports triggers only. It is used for triggering
pipelines based on internal Zuul events.
Trigger Configuration
---------------------
Zuul events don't require a special connection or driver. Instead they
can simply be used by listing ``zuul`` as the trigger.
.. attr:: pipeline.trigger.zuul
The Zuul trigger supports the following attributes:
.. attr:: event
:required:
The event name. Currently supported events:
.. value:: project-change-merged
When Zuul merges a change to a project, it generates this
event for every open change in the project.
.. warning::
Triggering on this event can cause poor performance when
using the GitHub driver with a large number of
installations.
.. value:: parent-change-enqueued
When Zuul enqueues a change into any pipeline, it generates
this event for every child of that change.
.. attr:: pipeline
Only available for ``parent-change-enqueued`` events. This is
the name of the pipeline in which the parent change was
enqueued.
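For instance, a hypothetical pipeline that reacts when a parent change is
enqueued into the ``gate`` pipeline could use a trigger such as the
following sketch:

.. code-block:: yaml

   - pipeline:
       name: check
       manager: independent
       trigger:
         zuul:
           - event: parent-change-enqueued
             pipeline: gate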
:title: GitHub Driver
.. _github_driver:
GitHub
======
The GitHub driver supports sources, triggers, and reporters. It can
interact with the public GitHub service as well as site-local
installations of GitHub enterprise.
Configure GitHub
----------------
There are two options currently available. The GitHub project owner can either
manually set up a web-hook or install a GitHub Application. In the first case,
the project's owner needs to know the Zuul endpoint and the webhook secret.
Web-Hook
........
To configure a project's `webhook events
<https://developer.github.com/webhooks/creating/>`_:
* Set *Payload URL* to
``http://<zuul-hostname>:<port>/api/connection/<connection-name>/payload``.
* Set *Content Type* to ``application/json``.
* Select the *Events* you are interested in. See below for the supported events.
You will also need a GitHub user created for your Zuul:
* The Zuul public key needs to be added to the GitHub account.
* An api_token needs to be created too, see this `article
<https://help.github.com/articles/creating-an-access-token-for-command-line-use/>`_
Then in the zuul.conf, set webhook_token and api_token.
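A corresponding webhook-based connection in ``zuul.conf`` might look like
the following sketch (the connection name and token values are
placeholders):

.. code-block:: text

   [connection github]
   driver=github
   webhook_token=<webhook secret>
   api_token=<api token>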
Application
...........
To create a `GitHub application
<https://developer.github.com/apps/building-integrations/setting-up-and-registering-github-apps/registering-github-apps/>`_:
* Go to your organization settings page to create the application, e.g.:
https://github.com/organizations/my-org/settings/apps/new
* Set GitHub App name to "my-org-zuul"
* Set Setup URL to your setup documentation; when users install the application
they are redirected to this URL
* Set Webhook URL to
``http://<zuul-hostname>:<port>/api/connection/<connection-name>/payload``.
* Create a Webhook secret
* Set permissions:
* Repository administration: Read
* Checks: Read & Write
* Repository contents: Read & Write (write access lets Zuul merge changes)
* Issues: Read & Write
* Pull requests: Read & Write
* Commit statuses: Read & Write
* Set events subscription:
* Check run
* Commit comment
* Create
* Push
* Release
* Issue comment
* Issues
* Label
* Pull request
* Pull request review
* Pull request review comment
* Status
* Set Where can this GitHub App be installed to "Any account"
* Create the App
* Generate a Private key in the app settings page
Then in the zuul.conf, set webhook_token, app_id and app_key.
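The resulting ``zuul.conf`` connection for a GitHub App might look like
this sketch (the connection name, App ID and paths are placeholders):

.. code-block:: text

   [connection github]
   driver=github
   app_id=1234
   app_key=/etc/zuul/github.pem
   webhook_token=<webhook secret>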
After restarting zuul-scheduler, verify in the 'Advanced' tab that the
Ping payload works (green tick and 200 response).
Users can now install the application using its public page, e.g.:
https://github.com/apps/my-org-zuul
.. note::
GitHub Pull Requests that modify GitHub Actions workflow configuration
files cannot be merged by application credentials (this is any Pull Request
that edits the .github/workflows directory and its contents). These Pull
Requests must be merged by a normal user account. This means that Zuul
will be limited to posting test results and cannot merge these PRs
automatically when they pass testing.
GitHub Actions are still in Beta and this behavior may change.
Connection Configuration
------------------------
There are two forms of operation. Either the Zuul installation can be
configured as a `Github App`_ or it can be configured as a Webhook.
If the `Github App`_ approach is taken, the config settings ``app_id`` and
``app_key`` are required. If the Webhook approach is taken, the ``api_token``
setting is required.
The supported options in ``zuul.conf`` connections are:
.. attr:: <github connection>
.. attr:: driver
:required:
.. value:: github
The connection must set ``driver=github`` for GitHub connections.
.. attr:: app_id
App ID if you are using a *GitHub App*. Can be found under the
**Public Link** on the right hand side labeled **ID**.
.. attr:: app_key
Path to a file containing the secret key Zuul will use to create
tokens for the API interactions. In Github this is known as
**Private key** and must be collected when generated.
.. attr:: api_token
API token for accessing GitHub if Zuul is configured with
Webhooks. See `Creating an access token for command-line use
<https://help.github.com/articles/creating-an-access-token-for-command-line-use/>`_.
.. attr:: webhook_token
Required token for validating the webhook event payloads. In
the GitHub App Configuration page, this is called **Webhook
secret**. See `Securing your webhooks
<https://developer.github.com/webhooks/securing/>`_.
.. attr:: sshkey
:default: ~/.ssh/id_rsa
Path to SSH key to use when cloning github repositories if Zuul
is configured with Webhooks.
.. attr:: server
:default: github.com
Hostname of the github install (such as a GitHub Enterprise).
.. attr:: canonical_hostname
The canonical hostname associated with the git repos on the
GitHub server. Defaults to the value of :attr:`<github
connection>.server`. This is used to identify projects from
this connection by name and in preparing repos on the filesystem
for use by jobs. Note that Zuul will still only communicate
with the GitHub server identified by **server**; this option is
useful if users customarily use a different hostname to clone or
pull git repos so that when Zuul places them in the job's
working directory, they appear under this directory name.
.. attr:: verify_ssl
:default: true
Enable or disable ssl verification for GitHub Enterprise. This
is useful for a connection to a test installation.
.. attr:: rate_limit_logging
:default: true
Enable or disable GitHub rate limit logging. If rate limiting is disabled
in GitHub Enterprise this can save some network round trip times.
.. attr:: repo_cache
To configure Zuul to use a GitHub Enterprise `repository cache
<https://docs.github.com/en/[email protected]/admin/enterprise-management/caching-repositories/about-repository-caching>`_
set this value to the hostname of the cache (e.g.,
``europe-ci.github.example.com``). Zuul will fetch commits as
well as determine the global repo state of repositories used in
jobs from this host.
This setting is incompatible with :attr:`<github
connection>.sshkey`.
Because the repository cache may be several minutes behind the
canonical site, enabling this setting automatically sets the
default :attr:`<github connection>.repo_retry_timeout` to 600
seconds. That setting may still be overridden to specify a
different value.
.. attr:: repo_retry_timeout
This setting is only used if :attr:`<github
connection>.repo_cache` is set. It specifies the amount of time
in seconds that Zuul mergers and executors should spend
attempting to fetch git commits which are not available from the
GitHub repository cache host.
When :attr:`<github connection>.repo_cache` is set, this value
defaults to 600 seconds, but it can be overridden. Zuul retries
git fetches every 30 seconds, and this value will be rounded up
to the next highest multiple of 30 seconds.
.. attr:: max_threads_per_installation
:default: 1
The GitHub driver performs event pre-processing in parallel
before forwarding the events (in the correct order) to the
scheduler for processing. By default, this parallel
pre-processing is restricted to a single request for each GitHub
App installation that Zuul uses when interacting with GitHub.
This is to avoid running afoul of GitHub's abuse detection
mechanisms. Some high-traffic installations of GitHub
Enterprise may wish to increase this value to allow more
parallel requests if resources permit. If GitHub Enterprise
resource usage is not a concern, setting this value to ``10`` or
greater may be reasonable.
Trigger Configuration
---------------------
GitHub webhook events can be configured as triggers.
A connection name with the GitHub driver can take multiple events with
the following options.
.. attr:: pipeline.trigger.<github source>
The dictionary passed to the GitHub pipeline ``trigger`` attribute
supports the following attributes:
.. attr:: event
:required:
The event from github. Supported events are:
.. value:: pull_request
.. value:: pull_request_review
.. value:: push
.. value:: check_run
.. attr:: action
A :value:`pipeline.trigger.<github source>.event.pull_request`
event will have associated action(s) to trigger from. The
supported actions are:
.. value:: opened
Pull request opened.
.. value:: changed
Pull request synchronized.
.. value:: closed
Pull request closed.
.. value:: reopened
Pull request reopened.
.. value:: comment
Comment added to pull request.
.. value:: labeled
Label added to pull request.
.. value:: unlabeled
Label removed from pull request.
.. value:: status
Status set on commit. The syntax is ``user:status:value``.
This can also be a regular expression.
A :value:`pipeline.trigger.<github
source>.event.pull_request_review` event will have associated
action(s) to trigger from. The supported actions are:
.. value:: submitted
Pull request review added.
.. value:: dismissed
Pull request review removed.
A :value:`pipeline.trigger.<github source>.event.check_run`
event will have associated action(s) to trigger from. The
supported actions are:
.. value:: requested
A check run is requested.
.. value:: completed
A check run completed.
.. attr:: branch
The branch associated with the event. Example: ``master``. This
field is treated as a regular expression, and multiple branches
may be listed. Used for ``pull_request`` and
``pull_request_review`` events.
.. attr:: comment
This is only used for ``pull_request`` ``comment`` actions. It
accepts a list of regexes that are searched for in the comment
string. If any of these regexes matches a portion of the comment
string the trigger is matched. ``comment: retrigger`` will
match when comments containing 'retrigger' somewhere in the
comment text are added to a pull request.
.. attr:: label
This is only used for ``labeled`` and ``unlabeled``
``pull_request`` actions. It accepts a list of strings each of
which matches the label name in the event literally. ``label:
recheck`` will match a ``labeled`` action when pull request is
labeled with a ``recheck`` label. ``label: 'do not test'`` will
match a ``unlabeled`` action when a label with name ``do not
test`` is removed from the pull request.
.. attr:: state
This is only used for ``pull_request_review`` events. It
accepts a list of strings each of which is matched to the review
state, which can be one of ``approved``, ``comment``, or
``request_changes``.
.. attr:: status
This is used for ``pull_request`` and ``status`` actions. It
accepts a list of strings each of which matches the user setting
the status, the status context, and the status itself in the
format of ``user:context:status``. For example,
``zuul_github_ci_bot:check_pipeline:success``.
.. attr:: check
This is only used for ``check_run`` events. It works similarly to
the ``status`` attribute and accepts a list of strings, each of
which matches the app requesting or updating the check run, the
check run's name, and the conclusion in the format
``app:name:conclusion``.
To make Zuul properly interact with Github's checks API, each
pipeline that is using the checks API should have at least one
trigger that matches the pipeline's name regardless of the result,
e.g. ``zuul:cool-pipeline:.*``. This will enable the cool-pipeline
to trigger whenever a user requests the ``cool-pipeline`` check
run as part of the ``zuul`` check suite.
Additionally, one could use ``.*:success`` to trigger a pipeline
whenever a successful check run is reported (e.g. useful for
gating).
.. attr:: ref
This is only used for ``push`` events. This field is treated as
a regular expression and multiple refs may be listed. GitHub
always sends the full ref name, e.g. ``refs/tags/bar``, and this
string is matched against the regular expression.
.. attr:: require-status
.. warning:: This is deprecated and will be removed in a future
version. Use :attr:`pipeline.trigger.<github
source>.require` instead.
This may be used for any event. It requires that a certain kind
of status be present for the PR (the status could be added by
the event in question). It follows the same syntax as
:attr:`pipeline.require.<github source>.status`. For each
specified criteria there must exist a matching status.
This is ignored if the :attr:`pipeline.trigger.<github
source>.require` attribute is present.
.. attr:: require
This may be used for any event. It describes conditions that
must be met by the PR in order for the trigger event to match.
Those conditions may be satisfied by the event in question. It
follows the same syntax as :ref:`github_requirements`.
.. attr:: reject
This may be used for any event and is the mirror of
:attr:`pipeline.trigger.<github source>.require`. It describes
conditions that when met by the PR cause the trigger event not
to match. Those conditions may be satisfied by the event in
question. It follows the same syntax as
:ref:`github_requirements`.
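Putting some of these attributes together, a hypothetical check-style
trigger for a GitHub connection named ``my_github`` (the connection name
and the recheck regular expression are illustrative) might look like:

.. code-block:: yaml

   - pipeline:
       name: check
       manager: independent
       trigger:
         my_github:
           - event: pull_request
             action:
               - opened
               - changed
               - reopened
           - event: pull_request
             action: comment
             comment: (?i)^\s*recheck\s*$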
Reporter Configuration
----------------------
Zuul reports back to GitHub via GitHub API. Available reports include a PR
comment containing the build results, a commit status on start, success and
failure, an issue label addition/removal on the PR, and a merge of the PR
itself. Status name, description, and context are taken from the pipeline.
.. attr:: pipeline.<reporter>.<github source>
To report to GitHub, the dictionaries passed to any of the pipeline
:ref:`reporter<reporters>` attributes support the following
attributes:
.. attr:: status
:type: str
:default: None
Report status via the Github `status API
<https://docs.github.com/v3/repos/statuses/>`__. Set to one of
* ``pending``
* ``success``
* ``failure``
This is usually mutually exclusive with a value set in
:attr:`pipeline.<reporter>.<github source>.check`, since this
reports similar results via a different API. This API is older
and results do not show up on the "checks" tab in the Github UI.
It is recommended to use `check` unless you have a specific
reason to use the status API.
.. TODO support role markup in :default: so we can xref
:attr:`web.status_url` below
.. attr:: status-url
:default: link to the build status page
:type: string
URL to set in the Github status.
Defaults to a link to the build status or results page. This
should probably be left blank unless there is a specific reason
to override it.
.. attr:: check
:type: string
Report status via the Github `checks API
<https://docs.github.com/v3/checks/>`__. Set to one of
* ``cancelled``
* ``failure``
* ``in_progress``
* ``neutral``
* ``skipped``
* ``success``
This is usually mutually exclusive with a value set in
:attr:`pipeline.<reporter>.<github source>.status`, since this
reports similar results via a different API.
.. attr:: comment
:default: true
Boolean value that determines if the reporter should add a
comment with the pipeline status to the GitHub pull request. Only
used for Pull Request based items.
.. attr:: review
One of `approve`, `comment`, or `request-changes` that causes the
reporter to submit a review with the specified status on Pull Request
based items. Has no effect on other items.
.. attr:: review-body
Text that will be submitted as the body of the review. Required if review
is set to `comment` or `request-changes`.
.. attr:: merge
:default: false
Boolean value that determines if the reporter should merge the
pull request. Only used for Pull Request based items.
.. attr:: label
List of strings each representing an exact label name which
should be added to the pull request by reporter. Only used for
Pull Request based items.
.. attr:: unlabel
List of strings each representing an exact label name which
should be removed from the pull request by reporter. Only used
for Pull Request based items.
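For example, a sketch of a gate-style pipeline reporting via the checks
API and merging the pull request on success (the connection name
``my_github`` is illustrative):

.. code-block:: yaml

   - pipeline:
       name: gate
       start:
         my_github:
           check: in_progress
       success:
         my_github:
           check: success
           merge: true
       failure:
         my_github:
           check: failure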
.. _Github App: https://developer.github.com/apps/
.. _github_requirements:
Requirements Configuration
--------------------------
As described in :attr:`pipeline.require` and :attr:`pipeline.reject`,
pipelines may specify that items meet certain conditions in order to
be enqueued into the pipeline. These conditions vary according to the
source of the project in question. To supply requirements for changes
from a GitHub source named ``my-github``, create a configuration such
as the following::
pipeline:
require:
my-github:
review:
- type: approved
This indicates that changes originating from the GitHub connection
named ``my-github`` must have an approved code review in order to be
enqueued into the pipeline.
.. attr:: pipeline.require.<github source>
The dictionary passed to the GitHub pipeline `require` attribute
supports the following attributes:
.. attr:: review
This requires that a certain kind of code review be present for
the pull request (it could be added by the event in question).
It takes several sub-parameters, all of which are optional and
are combined together so that there must be a code review
matching all specified requirements.
.. attr:: username
If present, a code review from this username matches. It is
treated as a regular expression.
.. attr:: email
If present, a code review with this email address matches.
It is treated as a regular expression.
.. attr:: older-than
If present, the code review must be older than this amount of
time to match. Provide a time interval as a number with a
suffix of "w" (weeks), "d" (days), "h" (hours), "m"
(minutes), "s" (seconds). Example ``48h`` or ``2d``.
.. attr:: newer-than
If present, the code review must be newer than this amount of
time to match. Same format as "older-than".
.. attr:: type
If present, the code review must match this type (or types).
.. TODO: what types are valid?
.. attr:: permission
If present, the author of the code review must have this
permission (or permissions) to match. The available values
are ``read``, ``write``, and ``admin``.
.. attr:: open
A boolean value (``true`` or ``false``) that indicates whether
the change must be open or closed in order to be enqueued.
.. attr:: merged
A boolean value (``true`` or ``false``) that indicates whether
the change must be merged or not in order to be enqueued.
.. attr:: current-patchset
A boolean value (``true`` or ``false``) that indicates whether
the item must be associated with the latest commit in the pull
request in order to be enqueued.
.. TODO: this could probably be expanded upon -- under what
circumstances might this happen with github
.. attr:: draft
A boolean value (``true`` or ``false``) that indicates whether
or not the change must be marked as a draft in GitHub in order
to be enqueued.
.. attr:: status
A string value that corresponds with the status of the pull
request. The syntax is ``user:status:value``. This can also
be a regular expression.
Zuul does not differentiate between a status reported via
status API or via checks API (which is also how Github behaves
in terms of branch protection and `status checks`_).
Thus, the status could be reported by a
:attr:`pipeline.<reporter>.<github source>.status` or a
:attr:`pipeline.<reporter>.<github source>.check`.
When a status is reported via the status API, Github will add
a ``[bot]`` to the name of the app that reported the status,
resulting in something like ``user[bot]:status:value``. For a
status reported via the checks API, the app's slug will be
used as is.
.. attr:: label
A string value indicating that the pull request must have the
indicated label (or labels).
.. attr:: pipeline.reject.<github source>
The `reject` attribute is the mirror of the `require` attribute and
is used to specify pull requests which should not be enqueued into
a pipeline. It accepts a dictionary under the connection name and
with the following attributes:
.. attr:: review
This requires that a certain kind of code review be absent for
the pull request (it could be removed by the event in question).
It takes several sub-parameters, all of which are optional and
are combined together so that there must not be a code review
matching all specified requirements.
.. attr:: username
If present, a code review from this username matches. It is
treated as a regular expression.
.. attr:: email
If present, a code review with this email address matches.
It is treated as a regular expression.
.. attr:: older-than
If present, the code review must be older than this amount of
time to match. Provide a time interval as a number with a
suffix of "w" (weeks), "d" (days), "h" (hours), "m"
(minutes), "s" (seconds). Example ``48h`` or ``2d``.
.. attr:: newer-than
If present, the code review must be newer than this amount of
time to match. Same format as "older-than".
.. attr:: type
If present, the code review must match this type (or types).
.. TODO: what types are valid?
.. attr:: permission
If present, the author of the code review must have this
permission (or permissions) to match. The available values
are ``read``, ``write``, and ``admin``.
.. attr:: open
A boolean value (``true`` or ``false``) that indicates whether
the change must be open or closed in order to be rejected.
.. attr:: merged
A boolean value (``true`` or ``false``) that indicates whether
the change must be merged or not in order to be rejected.
.. attr:: current-patchset
A boolean value (``true`` or ``false``) that indicates whether
the item must be associated with the latest commit in the pull
request in order to be rejected.
.. TODO: this could probably be expanded upon -- under what
circumstances might this happen with github
.. attr:: draft
A boolean value (``true`` or ``false``) that indicates whether
or not the change must be marked as a draft in GitHub in order
to be rejected.
.. attr:: status
A string value that corresponds with the status of the pull
request. The syntax is ``user:status:value``. This can also
be a regular expression.
Zuul does not differentiate between a status reported via
status API or via checks API (which is also how Github behaves
in terms of branch protection and `status checks`_).
Thus, the status could be reported by a
:attr:`pipeline.<reporter>.<github source>.status` or a
:attr:`pipeline.<reporter>.<github source>.check`.
When a status is reported via the status API, Github will add
a ``[bot]`` to the name of the app that reported the status,
resulting in something like ``user[bot]:status:value``. For a
status reported via the checks API, the app's slug will be
used as is.
.. attr:: label
A string value indicating that the pull request must not have
the indicated label (or labels).
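As an illustrative sketch, a pipeline could refuse to enqueue draft pull
requests from a GitHub connection named ``my-github``:

.. code-block:: yaml

   pipeline:
     reject:
       my-github:
         draft: true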
Reference pipelines configuration
---------------------------------
Branch protection rules
.......................
Branch protection rules prevent pull requests from being merged into the
configured branches unless the rules are met. For instance, a branch might
require that specific statuses be marked as ``success`` before the pull
request may be merged.
Zuul provides the attribute tenant.untrusted-projects.exclude-unprotected-branches.
This attribute is set to ``false`` by default, but we recommend setting it to
``true`` for the whole tenant. By doing so Zuul will benefit from:
- excluding in-repo development branches used to open Pull requests. This
prevents Zuul from fetching and reading unnecessary branch data when looking
for Zuul configuration files.
- reading the protection rules configuration from the Github API for a given
branch to determine whether a Pull request must enter the gate pipeline. As of
now Zuul only takes into account "Require status checks to pass before merging"
and the checked status checkboxes.
With the use of the reference pipelines below, the Zuul project recommends
setting at least the following:
- the attribute tenant.untrusted-projects.exclude-unprotected-branches to
``true`` in the tenant (main.yaml) configuration file.
- on each Github repository, activate the branch protection rules and
configure the names of the protected branches. Furthermore, set
"Require status checks to pass before merging" and check the status label
checkboxes (at least ``<tenant>/check``) that must be marked as success in
order for Zuul to let the Pull request enter the gate pipeline to be merged.
Reference pipelines
...................
Here is an example of standard pipelines you may want to define:
.. literalinclude:: /examples/pipelines/github-reference-pipelines.yaml
:language: yaml
Github Checks API
-----------------
Github provides two distinct methods for reporting results: a "checks"
and a "status" API.
The `checks API`_ provides some additional features compared to the
`status API`_ like file comments and custom actions (e.g. cancel a
running build).
Either can be chosen when configuring Zuul to report for your Github
project. However, there are some considerations to take into account
when choosing the API.
Design decisions
................
The Github checks API defines the concepts of `Check Suites`_ and
`Check Runs`_. *Check suites* are a collection of *check runs* for a
specific commit and summarize a final status.
A priori the check suite appears to be a good mapping for a pipeline
execution in Zuul, where a check run maps to a single job execution
that is part of the pipeline run. Unfortunately, there are a few
problematic restrictions mapping between Github and Zuul concepts.
Github check suites are opaque and the current status, duration and
the overall conclusion are all calculated and set automatically
whenever an included check run is updated. Most importantly, there
can only be one check suite per commit SHA, per app. Thus there is no
facility for Zuul to create multiple check suite results for a
change, e.g. one check suite for each pipeline such as check and gate.
The Github check suite thus does not map well to Zuul's concept of
multiple pipelines for a single change. Since a check suite is unique
and global for the change, it can not be used to flag the status of
arbitrary pipelines. This makes the check suite API insufficient for
recording details that Zuul needs such as "the check pipeline has
passed but the gate pipeline has failed".
Another issue is that Zuul only reports on the results of the whole
pipeline, not individual jobs. Reporting each Zuul job as a separate
check is problematic for a number of reasons.
Zuul often runs the same job for the same change multiple times; for
example in the check and gate pipeline. There is no facility for
these runs to be reported differently in the single check suite for
the Github change.
When configuring branch protection in Github, only a *check run* can
be selected as a required status check. This is in conflict with
managing jobs in pipelines with Zuul. For example, to implement
branch protection on GitHub would mean listing each job as a dedicated
check, leading to a check run list that is not kept in sync with the
project's Zuul pipeline configuration. Additionally, you lose some
of Zuul's features like non-voting jobs, as Github branch protection
has no concept of a non-voting job.
Thus Zuul can integrate with the checks API, but only at a pipeline
level. Each pipeline execution will map to a check-run result
reported to Github.
Behaviour in Zuul
.................
Reporting
~~~~~~~~~
The Github reporter is able to report either a status
:attr:`pipeline.<reporter>.<github source>.status` or a check
:attr:`pipeline.<reporter>.<github source>.check`. While it's possible to
configure a Github reporter to report both, it's recommended to use only one.
Reporting both might result in duplicated status check entries in the Github
PR (the section below the comments).
Trigger
~~~~~~~
The Github driver is able to trigger on a reported check
(:value:`pipeline.trigger.<github source>.event.check_run`) similar to a
reported status (:value:`pipeline.trigger.<github source>.action.status`).
Requirements
~~~~~~~~~~~~
While triggers and reporters differentiate between status and check, the Github
driver does not differentiate between them when it comes to pipeline
requirements. This is mainly because Github also doesn't differentiate between
both in terms of branch protection and `status checks`_.
Actions / Events
................
Github provides a set of default actions for check suites and check runs.
Those actions are available as buttons in the Github UI. Clicking on those
buttons will emit webhook events which will be handled by Zuul.
These actions are only available on failed check runs / check suites. So
far, a running or successful check suite / check run does not provide any
action from the GitHub side.
Available actions are:
Re-run all checks
Github emits a webhook event with type ``check_suite`` and action
``rerequested`` that is meant to re-run all check-runs contained in this
check suite. Github does not provide the list of check-runs in that case,
so it's up to the Github app to decide what should run.
Re-run failed checks
Github emits a webhook event with type ``check_run`` and action
``rerequested`` for each failed check run contained in this suite.
Re-run
Github emits a webhook event with type ``check_run`` and action
``rerequested`` for the specific check run.
Zuul will handle all events except for the `Re-run all checks` event;
it does not make sense in the Zuul model to trigger all pipelines to
run simultaneously.
These events cannot be customized in Github. Github will always report
"You have successfully requested ..." even if nothing is listening to the
event. Therefore, one possible approach is to handle the `Re-run all
checks` event in Zuul like `Re-run failed checks`, even if that means doing
nothing, since Github makes the user believe an action was really triggered.
File comments (annotations)
...........................
Check runs can be used to post file comments directly in the files of the PR.
Those are similar to user comments, but must provide some more information.
Zuul jobs can already return file comments via ``zuul_return``
(see: :ref:`return_values`). We can simply use this return value, build the
necessary annotations (as Github calls them) from it, and attach them to the
check run.
Custom actions
~~~~~~~~~~~~~~
Check runs can provide some custom actions which will result in additional
buttons being available in the Github UI for this specific check run.
Clicking on such a button will emit a webhook event with type ``check_run``
and action ``requested_action`` and will additionally contain the id/name of
the requested action which we can define when creating the action on the
check run.
We could use these custom actions to provide some "Re-run" action on a
running check run (which might otherwise be stuck in case a check run update
fails) or to abort a check run directly from the Github UI.
Restrictions and Recommendations
................................
Although both the checks API and the status API can be activated for a
Github reporter at the same time, it's not recommended to do so, as this might
result in multiple status checks being reported to the PR for the same pipeline
execution (which would result in duplicated entries in the status section below
the comments of a PR).
In case the update on a check run fails (e.g. request timeout when reporting
success or failure to Github), the check run will stay in status "in_progress"
and there will be no way to re-run the check run via the Github UI as the
predefined actions are only available on failed check runs.
Thus, it's recommended to configure a
:value:`pipeline.trigger.<github source>.action.comment` trigger on the
pipeline to still be able to trigger re-run of the stuck check run via e.g.
"recheck".
The check suite will only list check runs that were reported by Zuul. If
the requirements for a certain pipeline are not met and it is not run, the
check run for this pipeline won't be listed in the check suite. However,
this does not affect the required status checks. If the check run is enabled
as required, Github will still show it in the list of required status checks
- even if it didn't run yet - just not in the check suite.
.. _checks API: https://docs.github.com/v3/checks/
.. _status API: https://docs.github.com/v3/repos/statuses/
.. _Check Suites: https://docs.github.com/v3/checks/suites/
.. _Check Runs: https://docs.github.com/v3/checks/runs/
.. _status checks: https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-status-checks#types-of-status-checks-on-github
:title: MQTT Driver
MQTT
====
The MQTT driver supports reporters only. It is used to send MQTT
messages when items report.
Message Schema
--------------
An MQTT report uses this schema:
.. attr:: <mqtt schema>
.. attr:: uuid
The item UUID. Each item enqueued into a Zuul pipeline is
assigned a UUID which remains the same even as Zuul's
speculative execution algorithm re-orders pipeline contents.
.. attr:: action
The reporter action name, e.g.: 'start', 'success', 'failure',
'merge-conflict', ...
.. attr:: tenant
The tenant name.
.. attr:: pipeline
The pipeline name.
.. attr:: project
The project name.
.. attr:: branch
The branch name.
.. attr:: change_url
The change url.
.. attr:: message
The report message.
.. attr:: change
The change number.
.. attr:: patchset
The patchset number.
.. attr:: commit_id
The commit id number.
.. attr:: owner
The owner username of the change.
.. attr:: ref
The change reference.
.. attr:: zuul_ref
The internal zuul change reference.
.. attr:: trigger_time
The timestamp when the event was added to the scheduler.
.. attr:: enqueue_time
The timestamp when the event was added to the pipeline.
.. attr:: buildset
The buildset information.
.. value:: uuid
The buildset global uuid.
.. attr:: result
The buildset result
.. attr:: builds
The list of builds.
.. attr:: job_name
The job name.
.. attr:: voting
The job voting status.
.. attr:: uuid
The build uuid (not present in start report).
.. attr:: execute_time
The build execute time.
.. attr:: start_time
The build start time (not present in start report).
.. attr:: end_time
The build end time (not present in start report).
.. attr:: log_url
The build log url (not present in start report).
.. attr:: web_url
The url to the build result page. Not present in start
report.
.. attr:: result
The build results (not present in start report).
.. attr:: artifacts
:type: list
The build artifacts (not present in start report).
This is a list of dictionaries corresponding to the returned artifacts.
.. attr:: name
The name of the artifact.
.. attr:: url
The url of the artifact.
.. attr:: metadata
:type: dict
The metadata of the artifact. This is a dictionary of
arbitrary key values determined by the job.
Here is an example of a start message:
.. code-block:: javascript
{
'action': 'start',
'tenant': 'openstack.org',
'pipeline': 'check',
'project': 'sf-jobs',
'branch': 'master',
'change_url': 'https://gerrit.example.com/r/3',
'message': 'Starting check jobs.',
'trigger_time': '1524801056.2545864',
'enqueue_time': '1524801093.5689457',
'change': '3',
'patchset': '1',
'commit_id': '2db20c7fb26adf9ac9936a9e750ced9b4854a964',
'owner': 'username',
'ref': 'refs/changes/03/3/1',
'zuul_ref': 'Zf8b3d7cd34f54cb396b488226589db8f',
'buildset': {
'uuid': 'f8b3d7cd34f54cb396b488226589db8f',
'builds': [{
'job_name': 'linters',
'voting': True
}],
},
}
Here is an example of a success message:
.. code-block:: javascript
{
'action': 'success',
'tenant': 'openstack.org',
'pipeline': 'check',
'project': 'sf-jobs',
'branch': 'master',
'change_url': 'https://gerrit.example.com/r/3',
'message': 'Build succeeded.',
'trigger_time': '1524801056.2545864',
'enqueue_time': '1524801093.5689457',
'change': '3',
'patchset': '1',
'commit_id': '2db20c7fb26adf9ac9936a9e750ced9b4854a964',
'owner': 'username',
'ref': 'refs/changes/03/3/1',
'zuul_ref': 'Zf8b3d7cd34f54cb396b488226589db8f',
'buildset': {
'uuid': 'f8b3d7cd34f54cb396b488226589db8f',
'builds': [{
'job_name': 'linters',
'voting': True,
'uuid': '16e3e55aca984c6c9a50cc3c5b21bb83',
'execute_time': 1524801120.75632954,
'start_time': 1524801179.8557224,
'end_time': 1524801208.928095,
'log_url': 'https://logs.example.com/logs/3/3/1/check/linters/16e3e55/',
'web_url': 'https://tenant.example.com/t/tenant-one/build/16e3e55aca984c6c9a50cc3c5b21bb83/',
'result': 'SUCCESS',
'dependencies': [],
'artifacts': [],
}],
},
}
Connection Configuration
------------------------
.. attr:: <mqtt connection>
.. attr:: driver
:required:
.. value:: mqtt
The connection must set ``driver=mqtt`` for MQTT connections.
.. attr:: server
:default: localhost
MQTT server hostname or address to use.
.. attr:: port
:default: 1883
MQTT server port.
.. attr:: keepalive
:default: 60
Maximum period in seconds allowed between communications with the broker.
.. attr:: user
Set a username for optional broker authentication.
.. attr:: password
Set a password for optional broker authentication.
.. attr:: ca_certs
A string path to the Certificate Authority certificate files to enable
TLS connection.
.. attr:: certfile
A string pointing to the PEM encoded client certificate to
enable client TLS based authentication. This option requires keyfile to
be set too.
.. attr:: keyfile
A string pointing to the PEM encoded client private key to
enable client TLS based authentication. This option requires certfile to
be set too.
.. attr:: ciphers
A string specifying which encryption ciphers are allowable for this
connection. More information in this
`openssl doc <https://www.openssl.org/docs/manmaster/man1/ciphers.html>`_.
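A minimal connection definition in ``zuul.conf`` might look like the
following sketch (the hostname and credentials are placeholders):

.. code-block:: text

   [connection mqtt]
   driver=mqtt
   server=mqtt.example.com
   user=zuul
   password=secret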
Reporter Configuration
----------------------
A :ref:`connection<connections>` that uses the mqtt driver must be supplied to the
reporter. Each pipeline must provide a topic name. For example:
.. code-block:: yaml
- pipeline:
name: check
success:
mqtt:
topic: "{tenant}/zuul/{pipeline}/{project}/{branch}/{change}"
qos: 2
.. attr:: pipeline.<reporter>.<mqtt>
To report via MQTT message, the dictionaries passed to any of the pipeline
:ref:`reporter<reporters>` attributes support the following attributes:
.. attr:: topic
The MQTT topic to publish messages. The topic can be a format string that
can use the following parameters: ``tenant``, ``pipeline``, ``project``,
``branch``, ``change``, ``patchset`` and ``ref``.
The MQTT topic can have a hierarchy separated by ``/``; more details in this
`doc <https://mosquitto.org/man/mqtt-7.html>`_
.. attr:: qos
:default: 0
The quality of service level to use, it can be 0, 1 or 2. Read more in this
`guide <https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels>`_
.. attr:: include-returned-data
:default: false
If set to ``true``, Zuul will include any data returned from the
job via :ref:`return_values`.
:title: Gerrit Driver
Gerrit
======
`Gerrit`_ is a code review system. The Gerrit driver supports
sources, triggers, and reporters.
.. _Gerrit: https://www.gerritcodereview.com/
Zuul will need access to a Gerrit user.
Give that user whatever permissions will be needed on the projects you
want Zuul to report on. For instance, you may want to grant
``Verified +/-1`` and ``Submit`` to the user. Additional categories
or values may be added to Gerrit. Zuul is very flexible and can take
advantage of those.
If ``change.submitWholeTopic`` is configured in Gerrit, Zuul will
honor this by enqueuing changes with the same topic as circular
dependencies. However, it is still necessary to enable circular
dependency support in any pipeline queues where such changes may
appear. See :attr:`queue.allow-circular-dependencies` for information
on how to configure this.
Zuul interacts with Gerrit in up to three ways:
* Receiving trigger events
* Fetching source code
* Reporting results
Trigger events arrive over an event stream, either SSH (via the
``gerrit stream-events`` command) or other protocols such as Kafka or
AWS Kinesis.
Fetching source code may happen over SSH or HTTP.
Reporting may happen over SSH or HTTP (strongly preferred).
The appropriate connection methods must be configured to satisfy the
interactions Zuul will have with Gerrit. The recommended
configuration is to configure both SSH and HTTP access.
The section below describes common configuration settings. Specific
settings for different connection methods follow.
Connection Configuration
------------------------
The supported options in ``zuul.conf`` connections are:
.. attr:: <gerrit connection>
.. attr:: driver
:required:
.. value:: gerrit
The connection must set ``driver=gerrit`` for Gerrit connections.
.. attr:: server
:required:
Fully qualified domain name of Gerrit server.
.. attr:: canonical_hostname
The canonical hostname associated with the git repos on the
Gerrit server. Defaults to the value of
:attr:`<gerrit connection>.server`. This is used to identify
projects from this connection by name and in preparing repos on
the filesystem for use by jobs. Note that Zuul will still only
communicate with the Gerrit server identified by ``server``;
this option is useful if users customarily use a different
hostname to clone or pull git repos so that when Zuul places
them in the job's working directory, they appear under this
directory name.
.. attr:: baseurl
:default: https://{server}
Path to Gerrit web interface. Omit the trailing ``/``.
.. attr:: gitweb_url_template
:default: {baseurl}/gitweb?p={project.name}.git;a=commitdiff;h={sha}
Url template for links to specific git shas. By default this will
point at Gerrit's built in gitweb but you can customize this value
to point elsewhere (like cgit or github).
The three values available for string interpolation are baseurl
which points back to Gerrit, project and all of its safe attributes,
and sha which is the git sha1.
.. attr:: user
:default: zuul
User name to use when accessing Gerrit.
SSH Configuration
~~~~~~~~~~~~~~~~~
To prepare for SSH access, create an SSH keypair for Zuul to use if
there isn't one already, and create a Gerrit user with that key::
cat ~/id_rsa.pub | ssh -p29418 review.example.com gerrit create-account --ssh-key - --full-name Zuul zuul
.. note:: If you use an RSA key, ensure it is encoded in the PEM
format (use the ``-t rsa -m PEM`` arguments to
`ssh-keygen`).
If using Gerrit 2.7 or later, make sure the user is a member of a group
that is granted the ``Stream Events`` permission, otherwise it will not
be able to invoke the ``gerrit stream-events`` command over SSH.
.. attr:: <gerrit ssh connection>
.. attr:: ssh_server
If SSH access to the Gerrit server should be via a different
hostname than web access, set this value to the hostname to use
for SSH connections.
.. attr:: port
:default: 29418
Gerrit SSH server port.
.. attr:: sshkey
:default: ~zuul/.ssh/id_rsa
Path to SSH key to use when logging into Gerrit.
.. attr:: keepalive
:default: 60
SSH connection keepalive timeout; ``0`` disables.
.. attr:: git_over_ssh
:default: false
This forces git operation over SSH even if the ``password``
attribute is set. This allows REST API access to the Gerrit
server even when git-over-http operation is disabled on the
server.
HTTP Configuration
~~~~~~~~~~~~~~~~~~
.. attr:: <gerrit ssh connection>
.. attr:: password
The HTTP authentication password for the user. This is
optional, but if it is provided, Zuul will report to Gerrit via
HTTP rather than SSH. It is required in order for file and line
comments to be reported (the Gerrit SSH API only supports review
messages). Retrieve this password from the ``HTTP Password``
section of the ``Settings`` page in Gerrit.
.. attr:: auth_type
:default: basic
The HTTP authentication mechanism.
.. value:: basic
HTTP Basic authentication; the default for most Gerrit
installations.
.. value:: digest
HTTP Digest authentication; only used in versions of Gerrit
prior to 2.15.
.. value:: form
Zuul will submit a username and password to a form in order
to authenticate.
.. value:: gcloud_service
Only valid when running in Google Cloud. This will use the
default service account to authenticate to Gerrit. Note that
this will only be used for interacting with the Gerrit API;
anonymous HTTP access will be used to access the git
repositories, therefore private repos or draft changes will
not be available.
.. attr:: verify_ssl
:default: true
When using a self-signed certificate, this may be set to
``false`` to disable SSL certificate verification.
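As a sketch, a connection configured for both SSH and HTTP access might
look like the following (the hostname, user, key path and password are
placeholders):

.. code-block:: text

   [connection gerrit]
   driver=gerrit
   server=review.example.com
   user=zuul
   sshkey=/var/lib/zuul/.ssh/id_rsa
   password=<http password>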
Kafka Event Support
~~~~~~~~~~~~~~~~~~~
Zuul includes support for Gerrit's `events-kafka` plugin. This may be
used as an alternative to SSH for receiving trigger events.
Kafka does provide event delivery guarantees, so unlike SSH, if all
Zuul schedulers are unable to communicate with Gerrit or Kafka, they
will eventually receive queued events on reconnection.
All Zuul schedulers will attempt to connect to Kafka brokers. There
are some implications for event delivery:
* All events will be delivered to Zuul at least once. In the case of
a disrupted connection, Zuul may receive duplicate events.
* Events should generally arrive in order, however some events in
rapid succession may be received by Zuul out of order.
.. attr:: <gerrit kafka connection>
.. attr:: kafka_bootstrap_servers
:required:
A comma-separated list of Kafka servers (optionally including
port separated with `:`).
.. attr:: kafka_topic
:default: gerrit
The Kafka topic to which Zuul should subscribe.
.. attr:: kafka_client_id
:default: zuul
The Kafka client ID.
.. attr:: kafka_group_id
:default: zuul
The Kafka group ID.
.. attr:: kafka_tls_cert
Path to TLS certificate to use when connecting to a Kafka broker.
.. attr:: kafka_tls_key
Path to TLS certificate key to use when connecting to a Kafka broker.
.. attr:: kafka_tls_ca
Path to TLS CA certificate to use when connecting to a Kafka broker.
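A hypothetical Kafka-enabled Gerrit connection could add the Kafka
settings to an existing connection, as in this sketch (the broker
hostnames are placeholders):

.. code-block:: text

   [connection gerrit]
   driver=gerrit
   server=review.example.com
   user=zuul
   password=<http password>
   kafka_bootstrap_servers=kafka1.example.com:9092,kafka2.example.com:9092
   kafka_topic=gerrit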
AWS Kinesis Event Support
~~~~~~~~~~~~~~~~~~~~~~~~~
Zuul includes support for Gerrit's `events-aws-kinesis` plugin. This
may be used as an alternative to SSH for receiving trigger events.
Kinesis does provide event delivery guarantees, so unlike SSH, if all
Zuul schedulers are unable to communicate with Gerrit or AWS, they
will eventually receive queued events on reconnection.
All Zuul schedulers will attempt to connect to AWS Kinesis, but only
one scheduler will process a given Kinesis shard at a time. There are
some implications for event delivery:
* All events will be delivered to Zuul at least once. In the case of
a disrupted connection, Zuul may receive duplicate events.
* If a connection is disrupted longer than the Kinesis retention
period for a shard, Zuul may skip to the latest event ignoring all
previous events.
* Because shard processing happens in parallel, events may not arrive
in order.
* If a long period with no events elapses and a connection is
disrupted, it may take Zuul some time to catch up to the latest
events.
.. attr:: <gerrit aws kinesis connection>
.. attr:: aws_kinesis_region
:required:
The AWS region name in which the Kinesis stream is located.
.. attr:: aws_kinesis_stream
:default: gerrit
The AWS Kinesis stream name.
.. attr:: aws_kinesis_access_key
The AWS access key to use.
.. attr:: aws_kinesis_secret_key
The AWS secret key to use.
Trigger Configuration
---------------------
.. attr:: pipeline.trigger.<gerrit source>
The dictionary passed to the Gerrit pipeline ``trigger`` attribute
supports the following attributes:
.. attr:: event
:required:
The event name from gerrit. Examples: ``patchset-created``,
``comment-added``, ``ref-updated``. This field is treated as a
regular expression.
.. attr:: branch
The branch associated with the event. Example: ``master``.
This field is treated as a regular expression, and multiple
branches may be listed.
.. attr:: ref
On ref-updated events, the branch parameter is not used, instead
the ref is provided. Currently Gerrit has the somewhat
idiosyncratic behavior of specifying bare refs for branch names
(e.g., ``master``), but full ref names for other kinds of refs
(e.g., ``refs/tags/foo``). Zuul matches this value exactly
against what Gerrit provides. This field is treated as a
regular expression, and multiple refs may be listed.
.. attr:: ignore-deletes
:default: true
When a branch is deleted, a ref-updated event is emitted with a
newrev of all zeros specified. The ``ignore-deletes`` field is a
boolean value that describes whether or not these newrevs
trigger ref-updated events.
.. attr:: approval
This is only used for ``comment-added`` events. It only matches
if the event has a matching approval associated with it.
Example: ``Code-Review: 2`` matches a ``+2`` vote on the code
review category. Multiple approvals may be listed.
.. attr:: email
This is used for any event. It takes a regex applied on the
performer email, i.e. Gerrit account email address. If you want
to specify several email filters, you must use a YAML list.
Make sure to use non-greedy matchers and to escape dots.
Example: ``email: ^.*?@example\.org$``.
.. attr:: username
This is used for any event. It takes a regex applied on the
performer username, i.e. Gerrit account name. If you want to
specify several username filters, you must use a YAML list.
Make sure to use non-greedy matchers and to escape dots.
Example: ``username: ^zuul$``.
.. attr:: comment
This is only used for ``comment-added`` events. It accepts a
list of regexes that are searched for in the comment string. If
any of these regexes matches a portion of the comment string the
trigger is matched. ``comment: retrigger`` will match when
comments containing ``retrigger`` somewhere in the comment text
are added to a change.
.. attr:: require-approval
.. warning:: This is deprecated and will be removed in a future
version. Use :attr:`pipeline.trigger.<gerrit
source>.require` instead.
This may be used for any event. It requires that a certain kind
of approval be present for the current patchset of the change
(the approval could be added by the event in question). It
follows the same syntax as :attr:`pipeline.require.<gerrit
source>.approval`. For each specified criteria there must exist
a matching approval.
This is ignored if the :attr:`pipeline.trigger.<gerrit
source>.require` attribute is present.
.. attr:: reject-approval
.. warning:: This is deprecated and will be removed in a future
version. Use :attr:`pipeline.trigger.<gerrit
source>.reject` instead.
This takes a list of approvals in the same format as
:attr:`pipeline.trigger.<gerrit source>.require-approval` but
the item will fail to enter the pipeline if there is a matching
approval.
This is ignored if the :attr:`pipeline.trigger.<gerrit
source>.reject` attribute is present.
.. attr:: require
This may be used for any event. It describes conditions that
must be met by the change in order for the trigger event to
match. Those conditions may be satisfied by the event in
question. It follows the same syntax as
:ref:`gerrit_requirements`.
.. attr:: reject
This may be used for any event and is the mirror of
:attr:`pipeline.trigger.<gerrit source>.require`. It describes
conditions that when met by the change cause the trigger event
not to match. Those conditions may be satisfied by the event in
question. It follows the same syntax as
:ref:`gerrit_requirements`.
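For example, a sketch of a gate-style trigger that reacts to approval
votes on a connection named ``my-gerrit`` (the connection name and the
``Workflow`` label are illustrative):

.. code-block:: yaml

   - pipeline:
       name: gate
       manager: dependent
       trigger:
         my-gerrit:
           - event: comment-added
             approval:
               - Workflow: 1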
Reporter Configuration
----------------------
.. attr:: pipeline.reporter.<gerrit reporter>
The dictionary passed to the Gerrit reporter is used to provide label
values to Gerrit. To set the `Verified` label to `1`, add ``verified:
1`` to the dictionary.
The following additional keys are recognized:
.. attr:: submit
:default: False
Set this to ``True`` to submit (merge) the change.
.. attr:: comment
:default: True
If this is true (the default), Zuul will leave review messages
on the change (including job results). Set this to false to
disable this behavior (file and line comments will still be sent
if present).
A :ref:`connection<connections>` that uses the gerrit driver must be
supplied to the trigger.
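For example, a sketch of a gate pipeline that reports ``Verified +2`` and
submits the change on success, and ``Verified -2`` on failure (the
connection name ``my-gerrit`` is illustrative):

.. code-block:: yaml

   - pipeline:
       name: gate
       success:
         my-gerrit:
           Verified: 2
           submit: true
       failure:
         my-gerrit:
           Verified: -2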
.. _gerrit_requirements:
Requirements Configuration
--------------------------
As described in :attr:`pipeline.require` and :attr:`pipeline.reject`,
pipelines may specify that items meet certain conditions in order to
be enqueued into the pipeline. These conditions vary according to the
source of the project in question. To supply requirements for changes
from a Gerrit source named ``my-gerrit``, create a configuration such
as the following:
.. code-block:: yaml
pipeline:
require:
my-gerrit:
approval:
- Code-Review: 2
This indicates that changes originating from the Gerrit connection
named ``my-gerrit`` must have a ``Code-Review`` vote of ``+2`` in
order to be enqueued into the pipeline.
.. attr:: pipeline.require.<gerrit source>
The dictionary passed to the Gerrit pipeline `require` attribute
supports the following attributes:
.. attr:: approval
This requires that a certain kind of approval be present for the
current patchset of the change (the approval could be added by
the event in question). Approval is a dictionary or a list of
dictionaries with attributes listed below, all of which are
optional and are combined together so that there must be an approval
matching all specified requirements.
.. attr:: username
If present, an approval from this username is required. It is
treated as a regular expression.
.. attr:: email
If present, an approval with this email address is required. It is
treated as a regular expression.
.. attr:: older-than
If present, the approval must be older than this amount of time
to match. Provide a time interval as a number with a suffix of
"w" (weeks), "d" (days), "h" (hours), "m" (minutes), "s"
(seconds). Example ``48h`` or ``2d``.
.. attr:: newer-than
If present, the approval must be newer than this amount
of time to match. Same format as "older-than".
Any other field is interpreted as a review category and value
pair. For example ``Verified: 1`` would require that the
approval be for a +1 vote in the "Verified" column. The value
may either be a single value or a list: ``Verified: [1, 2]``
would match either a +1 or +2 vote.
.. attr:: open
A boolean value (``true`` or ``false``) that indicates whether
the change must be open or closed in order to be enqueued.
.. attr:: current-patchset
A boolean value (``true`` or ``false``) that indicates whether the
change must be the current patchset in order to be enqueued.
.. attr:: wip
A boolean value (``true`` or ``false``) that indicates whether the
change must be wip or not wip in order to be enqueued.
.. attr:: status
A string value that corresponds with the status of the change
reported by Gerrit.
.. attr:: pipeline.reject.<gerrit source>
The `reject` attribute is the mirror of the `require` attribute. It
also accepts a dictionary under the connection name. This
dictionary supports the following attributes:
.. attr:: approval
This requires that a certain kind of approval not be present for the
current patchset of the change (the approval could be added by
the event in question). Approval is a dictionary or a list of
dictionaries with attributes listed below, all of which are
optional and are combined together so that there must be no approvals
matching all specified requirements.
Example to reject a change with any negative vote:
.. code-block:: yaml
reject:
my-gerrit:
approval:
- Code-Review: [-1, -2]
.. attr:: username
If present, an approval from this username is required. It is
treated as a regular expression.
.. attr:: email
If present, an approval with this email address is required. It is
treated as a regular expression.
.. attr:: older-than
If present, the approval must be older than this amount of time
to match. Provide a time interval as a number with a suffix of
"w" (weeks), "d" (days), "h" (hours), "m" (minutes), "s"
(seconds). Example ``48h`` or ``2d``.
.. attr:: newer-than
If present, the approval must be newer than this amount
of time to match. Same format as "older-than".
Any other field is interpreted as a review category and value
pair. For example ``Verified: 1`` would require that the
approval be for a +1 vote in the "Verified" column. The value
may either be a single value or a list: ``Verified: [1, 2]``
would match either a +1 or +2 vote.
.. attr:: open
A boolean value (``true`` or ``false``) that indicates whether
the change must be open or closed in order to be rejected.
.. attr:: current-patchset
A boolean value (``true`` or ``false``) that indicates whether the
change must be the current patchset in order to be rejected.
.. attr:: wip
A boolean value (``true`` or ``false``) that indicates whether the
change must be wip or not wip in order to be rejected.
.. attr:: status
A string value that corresponds with the status of the change
reported by Gerrit.
Reference Pipelines Configuration
---------------------------------
Here is an example of standard pipelines you may want to define:
.. literalinclude:: /examples/pipelines/gerrit-reference-pipelines.yaml
:language: yaml
Checks Plugin Support (Deprecated)
------------------------------------
The Gerrit driver has support for Gerrit's `checks` plugin. Due to
the deprecation of the checks plugin in Gerrit, support in Zuul is
also deprecated and likely to be removed in a future version. It is
not recommended for use.
Caveats include (but are not limited to):
* This documentation is brief.
* Access control for the `checks` API in Gerrit depends on a single
global administrative permission, ``administrateCheckers``. This is
required in order to use the `checks` API and can not be restricted
by project. This means that any system using the `checks` API can
interfere with any other.
* Checkers are restricted to a single project. This means that a
system with many projects will require many checkers to be defined
in Gerrit -- one for each project+pipeline.
* No support is provided for attaching checks to tags or commits,
meaning that tag, release, and post pipelines are unable to be used
with the `checks` API and must rely on `stream-events`.
* Sub-checks are not implemented yet, so in order to see the results
of individual jobs on a change, users must either follow the
buildset link, or the pipeline must be configured to leave a
traditional comment.
* Familiarity with the `checks` API is recommended.
* Checkers may not be permanently deleted from Gerrit (only
"soft-deleted" so they no longer apply), so any experiments you
perform on a production system will leave data there forever.
In order to use the `checks` API, you must have HTTP access configured
in `zuul.conf`.
There are two ways to configure a pipeline for the `checks` API:
directly referencing the checker UUID, or referencing its scheme. It
is hoped that once multi-repository checks are supported, an
administrator will be able to configure a single checker in Gerrit for
each Zuul pipeline, and those checkers can apply to all repositories.
If and when that happens, we will be able to reference the checker
UUID directly in Zuul's pipeline configuration. If you only have a
single project, you may find this approach acceptable now.
To use this approach, create a checker named ``zuul:check`` and
configure a pipeline like this:
.. code-block:: yaml
- pipeline:
name: check
manager: independent
trigger:
gerrit:
- event: pending-check
uuid: 'zuul:check'
enqueue:
gerrit:
checks-api:
uuid: 'zuul:check'
state: SCHEDULED
message: 'Change has been enqueued in check'
start:
gerrit:
checks-api:
uuid: 'zuul:check'
state: RUNNING
message: 'Jobs have started running'
no-jobs:
gerrit:
checks-api:
uuid: 'zuul:check'
state: NOT_RELEVANT
message: 'Change has no jobs configured'
success:
gerrit:
checks-api:
uuid: 'zuul:check'
state: SUCCESSFUL
message: 'Change passed all voting jobs'
failure:
gerrit:
checks-api:
uuid: 'zuul:check'
state: FAILED
message: 'Change failed'
For a system with multiple repositories and one or more checkers for
each repository, the `scheme` approach is recommended. To use this,
create a checker for each pipeline in each repository. Give them
names such as ``zuul_check:project1``, ``zuul_gate:project1``,
``zuul_check:project2``, etc. The part before the ``:`` is the
`scheme`. Then create a pipeline like this:
.. code-block:: yaml
- pipeline:
name: check
manager: independent
trigger:
gerrit:
- event: pending-check
scheme: 'zuul_check'
enqueue:
gerrit:
checks-api:
scheme: 'zuul_check'
state: SCHEDULED
message: 'Change has been enqueued in check'
start:
gerrit:
checks-api:
scheme: 'zuul_check'
state: RUNNING
message: 'Jobs have started running'
no-jobs:
gerrit:
checks-api:
scheme: 'zuul_check'
state: NOT_RELEVANT
message: 'Change has no jobs configured'
success:
gerrit:
checks-api:
scheme: 'zuul_check'
state: SUCCESSFUL
message: 'Change passed all voting jobs'
failure:
gerrit:
checks-api:
scheme: 'zuul_check'
state: FAILED
message: 'Change failed'
This will match and report to the appropriate checker for a given
repository based on the scheme you provided.
.. The original design doc may be of use during development:
https://gerrit-review.googlesource.com/c/gerrit/+/214733
Optional: Register with an Identity Provider
============================================
By default, there is no public link between your Matrix account and
any identifying information such as your email address. However, you
may wish people to be able to find your Matrix ID by looking up your
email address or phone number. We also have plans to add additional
functionality to our bots if they are able to look up contributors by
email addresses. If you wish to make your account discoverable in
this way, you may perform the following steps to list your account in
one of the public third-party identifier services. Note that these
services are designed to only return results for someone who already
knows your email address or phone number; they take care to ensure
that it is not possible (or nearly so) to "scrape" their data sets to
obtain lists of users.
To get started, open the User Menu and click `All settings`. Under
the `General` section, find `Email addresses`. If you followed the
instructions above, you should already have an email address listed
here. If you don't, enter your address, click `Add`, and follow the
instructions to verify your address. The dialog should look like this
when complete:
.. image:: /images/matrix/id-email-complete.png
:align: center
To make your account discoverable by email, scroll down to the
`Discovery` section.
.. image:: /images/matrix/id-disc.png
:align: center
Read the privacy notice and click the checkbox
next to `Accept`. That will enable the `Continue` button; click that
to proceed.
.. image:: /images/matrix/id-disc-accept.png
:align: center
The `Discovery` section will be replaced with the email address you
registered above.
.. image:: /images/matrix/id-disc-accept.png
:align: center
Click the `Share` button next to the address. The system will send an
email to you, and meanwhile the dialog will show this:
.. image:: /images/matrix/id-disc-verify-wait.png
:align: center
You will receive an email like this:
.. image:: /images/matrix/id-disc-verify-email.png
:align: center
Follow the link in the email to verify it really is you making the
request.
.. image:: /images/matrix/id-disc-verify-success.png
:align: center
Then return to the settings page and click the `Complete` button.
.. image:: /images/matrix/id-disc-verify-wait.png
:align: center
Once everything is finished, the complete button should change to read
`Revoke`.
.. image:: /images/matrix/id-disc-verify-complete.png
:align: center
If you see that, you're all done. If you change your mind and don't
want your account to be discoverable via email, you can click the
`Revoke` button at any time.
Configuring Microsoft Authentication
====================================
This document explains how to configure Zuul in order to enable
authentication with Microsoft Login.
Prerequisites
-------------
* The Zuul instance must be able to query Microsoft's OAUTH API servers. This
  generally means that the Zuul instance must be able to send and
  receive HTTPS data to and from the Internet.
* You must have an Active Directory instance in Azure and the ability
to create an App Registration.
By convention, we will assume Zuul's Web UI's base URL is
``https://zuul.example.com/``.
Creating the App Registration
-----------------------------
Navigate to the Active Directory instance in Azure and select `App
registrations` under ``Manage``. Select ``New registration``. This
will open a dialog to register an application.
Enter a name of your choosing (e.g., ``Zuul``), and select which
account types should have access. Under ``Redirect URI`` select
``Single-page application(SPA)`` and enter
``https://zuul.example.com/auth_callback`` as the redirect URI. Press
the ``Register`` button.
You should now be at the overview of the Zuul App registration. This
page displays several values which will be used later. Record the
``Application (client) ID`` and ``Directory (tenant) ID``. When we need
to construct values including these later, we will refer to them with
all caps (e.g., ``CLIENT_ID`` and ``TENANT_ID`` respectively).
Select ``Authentication`` under ``Manage``. You should see a
``Single-page application`` section with the redirect URI previously
configured during registration; if not, correct that now.
Under ``Implicit grant and hybrid flows`` select both ``Access
tokens`` and ``ID tokens``, then Save.
Back at the Zuul App Registration menu, select ``Expose an API``, then
press ``Set`` and then press ``Save`` to accept the default
Application ID URI (it should look like ``api://CLIENT_ID``).
Press ``Add a scope`` and enter ``zuul`` as the scope name. Enter
``Access zuul`` for both the ``Admin consent display name`` and
``Admin consent description``. Leave ``Who can consent`` set to
``Admins only``, then press ``Add scope``.
Optional: Include Groups Claim
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to include group information in the token sent to Zuul,
select ``Token configuration`` under ``Manage`` and then ``Add groups
claim``.
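With the groups claim in place, the group object IDs appear in the token and
can then be referenced from Zuul's tenant configuration, for example in an
authorization rule. The following is a minimal sketch; the rule name, tenant
name, and group object ID are placeholders:

.. code-block:: yaml

   # Tenant configuration (sketch); the group object ID is a placeholder.
   - authorization-rule:
       name: azure-admins
       conditions:
         - groups: "00000000-0000-0000-0000-000000000000"

   - tenant:
       name: example-tenant
       admin-rules:
         - azure-admins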
Setting up Zuul
---------------
Edit the ``/etc/zuul/zuul.conf`` to add the microsoft authenticator:
.. code-block:: ini
[auth microsoft]
default=true
driver=OpenIDConnect
realm=zuul.example.com
authority=https://login.microsoftonline.com/TENANT_ID/v2.0
issuer_id=https://sts.windows.net/TENANT_ID/
client_id=CLIENT_ID
scope=openid profile api://CLIENT_ID/zuul
audience=api://CLIENT_ID
load_user_info=false
Restart Zuul services (scheduler, web).
Head to your tenant's status page. If all went well, you should see a
`Sign in` button in the upper right corner of the
page. Congratulations!
Chatting with Matrix
====================
The Zuul community uses mailing lists for long-form communication and
Matrix for real-time (or near real-time) chat.
This guide will walk you through getting started with Matrix and how
to use it to join communities like Zuul on IRC.
Familiar with Matrix already and want to jump straight to the room?
Follow this link: `https://matrix.to/#/#zuul:opendev.org <https://matrix.to/#/#zuul:opendev.org>`_
Why Use Matrix?
---------------
Matrix has a number of clients available including feature-rich web,
desktop and mobile clients, as well as integration with the popular
text-based weechat client. This provides plenty of choices based on
your own preference. This guide will focus on using the Element web
client.
Matrix supports persistent presence in "rooms". Once you join a room,
your homeserver will keep you connected to that room and save all of
the messages sent to it, so that if you close your client and return
later, you won't miss anything. You don't need to run your own server
to use Matrix; you are welcome to use the public server at matrix.org.
But if you are a member of an organization that already runs a
homeserver, or you would like to run one yourself, you may do so and
federate with the larger Matrix network. This guide will walk you
through setting up an account on the matrix.org homeserver.
Matrix is an open (in every sense of the word) federated communication
system. Because of this it's possible to bridge the Matrix network to
other networks (including IRC, slack, etc). That makes it the perfect
system to use to communicate with various communities using a single
interface.
Create An Account
-----------------
If you don't already have an account on a Matrix homeserver, go to
https://app.element.io/ to create one, then click `Create Account`.
.. image:: /images/matrix/account-welcome.png
:align: center
You can create an account with an email address or one of the
supported authentication providers.
.. image:: /images/matrix/account-create.png
:align: center
You'll be asked to accept the terms and conditions of the service.
.. image:: /images/matrix/account-accept.png
:align: center
If you are registering an account via email, you will be prompted to
verify your email address.
.. image:: /images/matrix/account-verify.png
:align: center
You will receive an email like this:
.. image:: /images/matrix/account-verify-email.png
:align: center
Once you click the link in the email, your account will be created.
.. image:: /images/matrix/account-success.png
:align: center
You can follow the link to sign in.
.. image:: /images/matrix/account-signin.png
:align: center
Join the #zuul Room
-------------------
Click the plus icon next to `Rooms` on the left of the screen, then
click `Explore public rooms` in the dropdown that appears.
.. image:: /images/matrix/account-rooms-dropdown.png
:align: center
A popup dialog will appear; enter ``#zuul:opendev.org`` into the
search box.
.. image:: /images/matrix/account-rooms-zuul.png
:align: center
It will display `No results for "#zuul:opendev.org"` since the room is
hosted on a federated homeserver, but it's really there. Disregard
that and hit `enter` or click `Join`, and you will join the room.
Go ahead and say hi, introduce yourself, and let us know what you're
working on or any questions you have. Keep in mind that the Zuul
community is world-wide and we may be away from our desks when you
join. Because Matrix keeps a message history, we'll see your message
and you'll see any responses, even if you close your browser and log
in later.
Optional Next Steps
-------------------
The following steps are optional. You don't need to do these just to
hop in with a quick question, but if you plan on spending more than a
brief amount of time interacting with communities in Matrix, they will
improve your experience.
.. toctree::
:maxdepth: 1
matrix-encryption
matrix-id
matrix-irc
Optional: Save Encryption Keys
==============================
The Matrix protocol supports end-to-end encryption. We don't have
this enabled for the ``#zuul`` room (there's little point as it's a
public room), but if you start direct chats with other Matrix users,
your communication will be encrypted by default. Since it's
*end-to-end* encryption, that means your encryption keys are stored on
your client, and the server has no way to decrypt those messages. But
that also means that if you sign out of your client or switch
browsers, you will lose your encryption keys along with access to any
old messages that were encrypted with them. To avoid this, you can
back up your keys to the server (in an encrypted form, of course) so
that if you log in from another session, you can restore those keys
and pick up where you left off. To set this up, open the User Menu by
clicking on your name at the top left of the screen.
.. image:: /images/matrix/user-menu.png
:align: center
Click the `Security & privacy` menu item in the dropdown.
.. image:: /images/matrix/user-menu-dropdown.png
:align: center
Click the `Set up` button under `Encryption` / `Secure Backup` in the
dialog that pops up.
.. image:: /images/matrix/user-encryption.png
:align: center
Follow the prompts to back up your encryption keys.
:title: Project Testing Interface
.. _pti:
Project Testing Interface
=========================
The following sections describe an example PTI (Project Testing Interface)
implementation. The goal is to set up a consistent interface for driving tests
and other necessary tasks to successfully configure :ref:`project_gating` within
your organization.
Projects layout
---------------
A proper PTI needs at least two projects:
* org-config: a :term:`config-project` curated by administrators, and
* org-jobs: an :term:`untrusted-project` to hold common jobs.
The projects that are being tested or deployed are also
:term:`untrusted-projects <untrusted-project>`, and for the purpose of this
example we will use a couple of integrated projects named:
* org-server
* org-client
org-config
~~~~~~~~~~
The config project needs careful scrutiny as it defines privileged Zuul
configurations that are shared by all your projects:
The :ref:`pipeline` triggers and requirements let you define when a change is
tested and what the conditions for merging code are. Requiring approval from
core reviewers, or special labels that indicate a change is good to go, is part
of the pipeline configuration.
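For example, a gate pipeline defined in org-config might only enqueue changes
that a core reviewer has approved. This is a minimal sketch; the connection
name ``gerrit`` and the label names and values are assumptions:

.. code-block:: yaml

   # org-config/zuul.d/pipelines.yaml (sketch)
   - pipeline:
       name: gate
       manager: dependent
       require:
         gerrit:
           approval:
             - Code-Review: 2
       trigger:
         gerrit:
           - event: comment-added
             approval:
               - Workflow: 1
       success:
         gerrit:
           Verified: 2
           submit: true
       failure:
         gerrit:
           Verified: -2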
The base job lets you define how the test environment is validated before
the actual job is executed. The base job also defines how and where the job
artifacts are stored.
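A minimal base job definition in org-config might look like the sketch below;
the playbook paths are assumptions, and artifact handling would typically live
in the post-run playbook:

.. code-block:: yaml

   # org-config/zuul.d/jobs.yaml (sketch)
   - job:
       name: base
       parent: null
       pre-run: playbooks/base/pre.yaml
       post-run: playbooks/base/post.yaml
       timeout: 1800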
More importantly, a config-project may enforce a set of integration jobs to
be executed on behalf of the other projects. A regular (untrusted) project can
only manage its own configuration, and as part of a PTI implementation, you
want to ensure your projects' changes undergo the validation that is defined
globally by your organization.
Because those configuration settings are privileged, config-project changes
only take effect once they are merged.
org-jobs
~~~~~~~~
Job definitions are not privileged Zuul settings, so jobs can be
defined in a regular :term:`untrusted-project`.
As a matter of fact, it is recommended to define jobs outside of the config
project so that job updates can be tested before being merged.
In this example, we are using a dedicated org-jobs project.
Projects content
----------------
In this example PTI, the organization requirements are a consistent code style
and an integration test to validate that org-client and org-server work
according to a reference implementation.
In the org-jobs project, we define a couple of jobs:
.. code-block:: yaml
# org-jobs/zuul.yaml
- job:
name: org-codestyle
parent: run-test-command
vars:
test_command: $code-style-tool $org-check-argument
# e.g.: linters --max-column 120
- job:
name: org-integration-test
run: integration-tests.yaml
required-projects:
- org-server
- org-client
The integration-tests.yaml playbook needs to implement an integration test
that checks both the server and client code.
In the org-config project, we define a project template:
.. code-block:: yaml
# org-config/zuul.d/pti.yaml
- project-template:
name: org-pti
queue: integrated
check:
jobs:
- org-codestyle
- org-integration-test
gate:
jobs:
- org-codestyle
- org-integration-test
Finally, in the org-config project, we set up the PTI template on both projects:
.. code-block:: yaml
# org-config/zuul.d/projects.yaml
- project:
name: org-server
templates:
- org-pti
- project:
name: org-client
templates:
- org-pti
Usage
-----
With the above layout, the organization projects use a consistent testing
interface.
The org-client and org-server projects do not need extra settings; all new
contributions must pass the codestyle and integration-test jobs as defined by
the organization admin.
Project tests
~~~~~~~~~~~~~
Projects may add extra jobs on top of the PTI.
For example, the org-client project can add a user interface test:
.. code-block:: yaml
# org-client/.zuul.yaml
- job:
name: org-client-ui-validation
- project:
check:
jobs:
- org-client-ui-validation
gate:
jobs:
- org-client-ui-validation
In this example, a new org-client change will run the PTI's jobs as well as the
org-client-ui-validation job.
Updating PTI test
~~~~~~~~~~~~~~~~~
Once the PTI is in place, if a project needs adjustment,
it can proceed as follows:
First, a change on org-jobs is proposed to modify a job. For example, update a
codestyle check using a commit such as:
.. code-block:: text
# org-jobs/change-url
Update codestyle to enforce CamelCase.
Then, without merging this proposal, it can be tested across the projects using
a commit such as:
.. code-block:: text
# org-client/change-url
Validate new codestyle.
Depends-On: org-jobs/change-url
Lastly, the org-jobs change may be enriched with:
.. code-block:: text
# org-jobs/change-url
Update codestyle to enforce CamelCase.
Needed-By: org-client/change-url
.. note:: Extra care is required when updating PTI jobs as they affect all
   the projects. Ideally, the org-jobs project would use an org-jobs-check
   job to run PTI job changes against every project.
Cross project gating
--------------------
The org-pti template is using the "integrated" queue to ensure projects change
are gated by the zuul scheduler. Though, the jobs need extra care to properly
test projects as they are prepared by Zuul. For example, the
org-integration-test playbook need to ensure the client and server are installed
from the zuul src_root.
This is called sibling installation, and it is a critical piece to ensure cross
project gating.
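For example, the playbook can install both siblings from the sources Zuul
prepared. This is a minimal sketch; the connection hostname
``gerrit.example.com`` and the use of pip-installable projects are assumptions:

.. code-block:: yaml

   # org-jobs/integration-tests.yaml (sketch)
   - hosts: all
     tasks:
       - name: Install siblings from the Zuul-prepared source tree
         pip:
           name: "{{ ansible_user_dir }}/{{ zuul.projects[item].src_dir }}"
         loop:
           - gerrit.example.com/org-server
           - gerrit.example.com/org-client
       # ... followed by tasks exercising the installed client against the server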
.. _howto-zookeeper:
ZooKeeper Administration
========================
This section will cover some basic tasks and recommendations when
setting up ZooKeeper for use with Zuul. A complete tutorial for
ZooKeeper is out of scope for this documentation.
Configuration
-------------
The following general configuration setting in
``/etc/zookeeper/zoo.cfg`` is recommended:
.. code-block::
autopurge.purgeInterval=6
This instructs ZooKeeper to purge old snapshots every 6 hours. This
will avoid filling the disk.
.. _zk-encrypted-connections:
Encrypted Connections
---------------------
Zuul requires that its connections to ZooKeeper be TLS encrypted.
ZooKeeper version 3.5.1 or greater is required for TLS support.
ZooKeeper performs hostname validation for all ZooKeeper servers
("quorum members"), therefore each member of the ZooKeeper cluster
should have its own certificate. This does not apply to clients which
may share a certificate.
ZooKeeper performs certificate validation on all connections (server
and client). If you use a private Certificate Authority (CA) (which
is generally recommended and discussed below), then these TLS
certificates not only serve to encrypt traffic, but also to
authenticate and authorize clients to the cluster. Only clients with
certificates authorized by a CA explicitly trusted by your ZooKeeper
installation will be able to connect.
.. note:: The instructions below direct you to sign certificates with
a CA that you create specifically for Zuul's ZooKeeper
cluster. If you use a CA you share with other users in your
organization, any certificate signed by that CA will be able
to connect to your ZooKeeper cluster. In this case, you may
need to take additional steps such as network isolation to
protect your ZooKeeper cluster. These are beyond the scope
of this document.
The ``tools/zk-ca.sh`` script in the Zuul source code repository can
be used to quickly and easily generate self-signed certificates for
all ZooKeeper cluster members and clients.
Make a directory for it to store the certificates and CA data, and run
it once for each client:
.. code-block::
mkdir /etc/zookeeper/ca
tools/zk-ca.sh /etc/zookeeper/ca zookeeper1.example.com
tools/zk-ca.sh /etc/zookeeper/ca zookeeper2.example.com
tools/zk-ca.sh /etc/zookeeper/ca zookeeper3.example.com
Add the following to ``/etc/zookeeper/zoo.cfg``:
.. code-block::
# Necessary for TLS support
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
# Client TLS configuration
secureClientPort=2281
ssl.keyStore.location=/etc/zookeeper/ca/keystores/zookeeper1.example.com.pem
ssl.trustStore.location=/etc/zookeeper/ca/certs/cacert.pem
# Server TLS configuration
sslQuorum=true
ssl.quorum.keyStore.location=/etc/zookeeper/ca/keystores/zookeeper1.example.com.pem
ssl.quorum.trustStore.location=/etc/zookeeper/ca/certs/cacert.pem
Change the name of the certificate filenames as appropriate for the
host (e.g., ``zookeeper1.example.com.pem``).
In order to disable plaintext connections, ensure that the
``clientPort`` option does not appear in ``zoo.cfg``. Use the new
method of specifying ZooKeeper quorum servers, which looks like this:
.. code-block::
server.1=zookeeper1.example.com:2888:3888
server.2=zookeeper2.example.com:2888:3888
server.3=zookeeper3.example.com:2888:3888
This format normally includes ``;2181`` at the end of each line,
signifying that the server should listen on port 2181 for plaintext
client connections (this is equivalent to the ``clientPort`` option).
Omit it to disable plaintext connections. The earlier addition of
``secureClientPort`` to the config file instructs ZooKeeper to listen
for encrypted connections on port 2281.
Be sure to specify port 2281 rather than the standard 2181 in the
:attr:`zookeeper.hosts` setting in ``zuul.conf``.
Finally, add the :attr:`zookeeper.tls_cert`,
:attr:`zookeeper.tls_key`, and :attr:`zookeeper.tls_ca` options. Your
``zuul.conf`` file should look like:
.. code-block::
[zookeeper]
hosts=zookeeper1.example.com:2281,zookeeper2.example.com:2281,zookeeper3.example.com:2281
tls_cert=/etc/zookeeper/ca/certs/client.pem
tls_key=/etc/zookeeper/ca/keys/clientkey.pem
tls_ca=/etc/zookeeper/ca/certs/cacert.pem
How-To Guides
=============
.. toctree::
:maxdepth: 1
pti
badges
matrix
zookeeper
Configuring Google Authentication
=================================
This document explains how to configure Zuul in order to enable authentication
with Google.
Prerequisites
-------------
* The Zuul instance must be able to query Google's OAUTH API servers. This
  generally means that the Zuul instance must be able to send and
  receive HTTPS data to and from the Internet.
* You must set up a project in `Google's developers console <https://console.developers.google.com/>`_.
Setting up credentials with Google
----------------------------------
In the developers console, choose your project and click `APIs & Services`.
Choose `Credentials` in the menu on the left, then click `Create Credentials`.
Choose `Create OAuth client ID`. You might need to configure a consent screen first.
Create OAuth client ID
......................
Choose `Web application` as Application Type.
In `Authorized JavaScript Origins`, add the base URL of Zuul's Web UI. For example,
if you are running a yarn development server on your computer, it would be
`http://localhost:3000` .
In `Authorized redirect URIs`, write down the base URL of Zuul's Web UI followed
by "/t/<tenant>/auth_callback", for each tenant on which you want to enable
authentication. For example, if you are running a yarn development server on
your computer and want to set up authentication for tenant "local",
write `http://localhost:3000/t/local/auth_callback` .
Click Save. Google will generate a Client ID and a Client secret for your new
credentials; we will only need the Client ID for the rest of this How-To.
Configure Zuul
..............
Edit the ``/etc/zuul/zuul.conf`` to add the google authenticator:
.. code-block:: ini
[auth google_auth]
default=true
driver=OpenIDConnect
realm=my_realm
issuer_id=https://accounts.google.com
client_id=<your Google Client ID>
Restart Zuul services (scheduler, web).
Head to your tenant's status page. If all went well, you should see a "Sign in"
button in the upper right corner of the page. Congratulations!
Further Reading
---------------
This How-To is based on `Google's documentation on their implementation of OpenID Connect <https://developers.google.com/identity/protocols/oauth2/openid-connect>`_.
Configuring Keycloak Authentication
===================================
This document explains how to configure Zuul and Keycloak in order to enable
authentication in Zuul with Keycloak.
Prerequisites
-------------
* The Zuul instance must be able to query Keycloak over HTTPS.
* Authenticating users must be able to reach Keycloak's web UI.
* Have a realm set up in Keycloak.
`Instructions on how to do so can be found here <https://www.keycloak.org/docs/latest/server_admin/#configuring-realms>`_ .
By convention, we will assume the Keycloak server's FQDN is ``keycloak``, and
Zuul's Web UI's base URL is ``https://zuul/``. We will use the realm ``my_realm``.
Most operations below regarding the configuration of Keycloak can be performed through
Keycloak's admin CLI. The following steps must be performed as an admin on Keycloak's
GUI.
Setting up Keycloak
-------------------
Create a client
...............
Choose the realm ``my_realm``, then click ``clients`` in the Configure panel.
Click ``Create``.
Name your client as you please. We will pick ``zuul`` for this example. Make sure
to fill the following fields:
* Client Protocol: ``openid-connect``
* Access Type: ``public``
* Implicit Flow Enabled: ``ON``
* Valid Redirect URIs: ``https://zuul/*``
* Web Origins: ``https://zuul/``
Click "Save" when done.
Create a client scope
......................
Keycloak maps the client ID to a specific claim, instead of the usual `aud` claim.
We need to configure Keycloak to add our client ID to the `aud` claim by creating
a custom client scope for our client.
Choose the realm ``my_realm``, then click ``client scopes`` in the Configure panel.
Click ``Create``.
Name your scope as you please. We will name it ``zuul_aud`` for this example.
Make sure you fill the following fields:
* Protocol: ``openid-connect``
* Include in Token Scope: ``ON``
Click "Save" when done.
On the Client Scopes page, click on ``zuul_aud`` to configure it; click on
``Mappers`` then ``create``.
Make sure to fill the following:
* Mapper Type: ``Audience``
* Included Client Audience: ``zuul``
* Add to ID token: ``ON``
* Add to access token: ``ON``
Then save.
Finally, go back to the clients list and pick the ``zuul`` client again. Click
on ``Client Scopes``, and add the ``zuul_aud`` scope to the ``Assigned Default
Client Scopes``.
Configuring JWT signing algorithms
..................................
.. note::
Skip this step if you are using a keycloak version prior to 18.0.
Due to current limitations with the pyJWT library, Zuul does not support every default
signing algorithm used by Keycloak.
Go to `my_realm->Settings->Keys`, then choose `rsa-enc-generated` (this should be mapped
to "RSA-OAEP") if available. Then set `enabled` to false and save your changes.
(Optional) Set up a social identity provider
............................................
Keycloak can delegate authentication to predefined social networks. Follow
`these steps to find out how. <https://www.keycloak.org/docs/latest/server_admin/index.html#social-identity-providers>`_
If you don't set up authentication delegation, make sure to create at least one
user in your realm, or allow self-registration. See Keycloak's documentation section
on `user management <https://www.keycloak.org/docs/latest/server_admin/index.html#assembly-managing-users_server_administration_guide>`_
for more details on how to do so.
Setting up Zuul
---------------
Edit the ``/etc/zuul/zuul.conf`` to add the keycloak authenticator:
.. code-block:: ini
[auth keycloak]
default=true
driver=OpenIDConnect
realm=my_realm
issuer_id=https://keycloak/auth/realms/my_realm
client_id=zuul
Restart Zuul services (scheduler, web).
Head to your tenant's status page. If all went well, you should see a "Sign in"
button in the upper right corner of the page. Congratulations!
Further Reading
---------------
This How-To is based on `Keycloak's documentation <https://www.keycloak.org/documentation.html>`_,
specifically `the documentation about clients <https://www.keycloak.org/docs/latest/server_admin/#assembly-managing-clients_server_administration_guide>`_.
:title: Badges
.. We don't need no stinking badges
.. _badges:
Badges
======
You can embed a badge declaring that your project is gated and therefore by
definition always has a working build. Since there is only one status to
report, it is a simple static file:
.. image:: https://zuul-ci.org/gated.svg
:alt: Zuul: Gated
To use it, simply put ``https://zuul-ci.org/gated.svg`` into an RST or
markdown formatted README file, or use it in an ``<img>`` tag in HTML.
For advanced usage Zuul also supports generating dynamic badges via the
REST API. This can be useful if you want to display the status of, e.g., the
periodic pipelines of a project. To use it, use a URL like
``https://zuul.opendev.org/api/tenant/zuul/badge?project=zuul/zuul-website&pipeline=post``
instead of the above-mentioned URL. It supports filtering by ``project``,
``pipeline`` and ``branch``.
Optional: Join an IRC Room
==========================
The Matrix community maintains bridges to most major IRC networks.
You can use the same Matrix account and client to join IRC channels as
well as Zuul's Matrix Room. You will benefit from the persistent
connection and history features as well. Follow the instructions
below to join an IRC channel. The example below is for the
``#opendev`` channel on OFTC, but the process is similar for other
channels or networks.
Click the plus icon next to `Rooms` on the left of the screen, then
click `Explore public rooms` in the dropdown that appears.
.. image:: /images/matrix/account-rooms-dropdown.png
:align: center
A popup dialog will appear; below the search bar in the dialog, click
the dropdown selector labeled `Matrix rooms (matrix.org)` and change
it to `OFTC rooms (matrix.org)`. Then enter ``#opendev`` into the search
box.
.. image:: /images/matrix/account-rooms-opendev.png
:align: center
It will display `No results for "#opendev"` which is an unfortunate
consequence of one of the anti-spam measures that is necessary on IRC.
Disregard that and hit `enter` or click `Join`, and you will join the
room.
If this is your first time joining an OFTC channel, you will also
receive an invitation to join the `OFTC IRC Bridge status` room.
.. image:: /images/matrix/account-rooms-invite.png
:align: center
Accept the invitation.
.. image:: /images/matrix/account-rooms-accept.png
:align: center
This is a private control channel between you and the system that
operates the OFTC bridge. Here you can perform some IRC commands such
as changing your nickname and setting up nick registration. That is
out of scope for this HOWTO, but advanced IRC users may be interested
in doing so.
You may repeat this procedure for any other IRC channels on the OFTC,
Freenode, or libera.chat networks.
Data Model
==========
It all starts with the :py:class:`~zuul.model.Pipeline`. A Pipeline is the
basic organizational structure that everything else hangs off.
.. autoclass:: zuul.model.Pipeline
Pipelines have a configured
:py:class:`~zuul.manager.PipelineManager` which controls how
the :py:class:`Ref <zuul.model.Ref>` objects are enqueued and
processed.
There are currently two:
:py:class:`~zuul.manager.dependent.DependentPipelineManager` and
:py:class:`~zuul.manager.independent.IndependentPipelineManager`.
.. autoclass:: zuul.manager.PipelineManager
.. autoclass:: zuul.manager.dependent.DependentPipelineManager
.. autoclass:: zuul.manager.independent.IndependentPipelineManager
A :py:class:`~zuul.model.Pipeline` has one or more
:py:class:`~zuul.model.ChangeQueue` objects.
.. autoclass:: zuul.model.ChangeQueue
A :py:class:`~zuul.model.Job` represents the definition of what to do. A
:py:class:`~zuul.model.Build` represents a single run of a
:py:class:`~zuul.model.Job`. A :py:class:`~zuul.model.JobGraph` is used to
encapsulate the dependencies between one or more :py:class:`~zuul.model.Job`
objects.
.. autoclass:: zuul.model.Job
.. autoclass:: zuul.model.JobGraph
.. autoclass:: zuul.model.Build
The :py:class:`~zuul.manager.base.PipelineManager` enqueues each
:py:class:`Ref <zuul.model.Ref>` into the
:py:class:`~zuul.model.ChangeQueue` in a :py:class:`~zuul.model.QueueItem`.
.. autoclass:: zuul.model.QueueItem
As the Changes are processed, each :py:class:`~zuul.model.Build` is put into
a :py:class:`~zuul.model.BuildSet`
.. autoclass:: zuul.model.BuildSet
Changes
~~~~~~~
.. autoclass:: zuul.model.Change
.. autoclass:: zuul.model.Ref
Filters
~~~~~~~
.. autoclass:: zuul.model.RefFilter
.. autoclass:: zuul.model.EventFilter
Tenants
~~~~~~~
An abide is a collection of tenants.
.. autoclass:: zuul.model.Tenant
.. autoclass:: zuul.model.UnparsedAbideConfig
.. autoclass:: zuul.model.UnparsedConfig
.. autoclass:: zuul.model.ParsedConfig
Other Global Objects
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: zuul.model.Project
.. autoclass:: zuul.model.Layout
.. autoclass:: zuul.model.RepoFiles
.. autoclass:: zuul.model.TriggerEvent
Ansible Integration
===================
Zuul contains Ansible modules and plugins to control the execution of Ansible
Job content.
Zuul provides realtime build log streaming to end users so that users
can watch long-running jobs in progress.
Streaming job output
--------------------
All jobs run with the :py:mod:`zuul.ansible.base.callback.zuul_stream` callback
plugin enabled, which writes the build log to a file so that the
:py:class:`zuul.lib.log_streamer.LogStreamer` can provide the data on demand
over the finger protocol. Finally, :py:class:`zuul.web.LogStreamHandler`
exposes that log stream over a websocket connection as part of
:py:class:`zuul.web.ZuulWeb`.
.. autoclass:: zuul.ansible.base.callback.zuul_stream.CallbackModule
:members:
.. autoclass:: zuul.lib.log_streamer.LogStreamer
.. autoclass:: zuul.web.LogStreamHandler
.. autoclass:: zuul.web.ZuulWeb
In addition to real-time streaming, Zuul also installs another callback module,
:py:mod:`zuul.ansible.base.callback.zuul_json.CallbackModule` that collects all
of the information about a given run into a json file which is written to the
work dir so that it can be published along with build logs.
.. autoclass:: zuul.ansible.base.callback.zuul_json.CallbackModule
Since the streaming log is by necessity a single text stream, choices
have to be made for readability about what data is shown and what is
not shown. The json log file is intended to allow for a richer more
interactive set of data to be displayed to the user.
.. _zuul_console_streaming:
Capturing live command output
-----------------------------
As jobs may execute long-running shell scripts or other commands,
additional effort is expended to stream ``stdout`` and ``stderr`` of
shell tasks as they happen rather than waiting for the command to
finish.
The global job configuration should run the ``zuul_console`` task as a
very early prerequisite step.
.. automodule:: zuul.ansible.base.library.zuul_console
This will start a daemon that listens on TCP port 19885 on the testing
node. This daemon can be queried to stream back the output of shell
tasks as described below.
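In a base job this is typically handled by an early pre-run task; a minimal
sketch (the play targeting is an assumption, and 19885 is the default port
mentioned above) looks like:

.. code-block:: yaml

   - hosts: all
     tasks:
       - name: Start the zuul_console log streaming daemon
         zuul_console:
           port: 19885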
Zuul contains a modified version of Ansible's
:ansible:module:`command` module that overrides the default
implementation.
.. automodule:: zuul.ansible.base.library.command
This library will capture the output of the running
command and write it to a temporary file on the host the command is
running on. These files are named in the format
``/tmp/console-<uuid>-<task_id>-<host>.log``
The ``zuul_stream`` callback mentioned above will send a request to
the remote ``zuul_console`` daemon, providing the uuid and task id of
the task it is currently processing. The ``zuul_console`` daemon will
then read the logfile from disk and stream the data back as it
appears, which ``zuul_stream`` will then present as described above.
The ``zuul_stream`` callback will indicate to the ``zuul_console``
daemon when it has finished reading the task, which prompts the remote
side to remove the temporary streaming output files. In some cases,
aborting the Ansible process may not give the ``zuul_stream`` callback
the chance to send this notice, leaking the temporary files. If nodes
are ephemeral this makes little difference, but these files may be
visible on static nodes.
Zuul Dashboard Javascript
=========================
zuul-web has an html, css and javascript component, `zuul-dashboard`, that
is managed using Javascript toolchains. It is intended to be served by zuul-web
directly from zuul/web/static in the simple case, or to be published to
an alternate static web location, such as an Apache server.
The web dashboard is written in `React`_ and `PatternFly`_ and is
managed by `create-react-app`_ and `yarn`_ which in turn both assume a
functioning and recent `nodejs`_ installation.
.. note::
The web dashboard source code and package.json are located in the ``web``
directory. All the yarn commands need to be executed from the ``web``
directory.
For the impatient who don't want to deal with javascript toolchains
--------------------------------------------------------------------
tl;dr - You have to build stuff with javascript tools.
The best thing would be to get familiar with the tools, there are a lot of
good features available. If you're going to hack on the Javascript, you should
get to know them.
If you don't want to hack on Javascript and just want to run Zuul's tests,
``tox`` has been set up to handle it for you.
If you do not have `yarn`_ installed, ``tox`` will use `nodeenv`_ to install
node into the active python virtualenv, and then will install `yarn`_ into
that virtualenv as well.
yarn dependency management
--------------------------
`yarn`_ manages the javascript dependencies. That means the first step is
getting `yarn`_ installed.
.. code-block:: console
tools/install-js-tools.sh
The ``tools/install-js-tools.sh`` script will add apt or yum repositories and
install `nodejs`_ and `yarn`_ from them. For RPM-based distros it needs to know
which repo description file to download, so it calls out to
``tools/install-js-repos-rpm.sh``.
Once yarn is installed, getting dependencies installed is:
.. code-block:: console
yarn install
The ``yarn.lock`` file contains all of the specific versions that were
installed before. Since this is an application it has been added to the repo.
To add new runtime dependencies:
.. code-block:: console
yarn add awesome-package
To add new build-time dependencies:
.. code-block:: console
yarn add -D awesome-package
To remove dependencies:
.. code-block:: console
yarn remove terrible-package
Adding or removing packages will add the logical dependency to ``package.json``
and will record the version of the package and any of its dependencies that
were installed into ``yarn.lock`` so that other users can simply run
``yarn install`` and get the same environment.
To update a dependency:
.. code-block:: console
yarn add awesome-package
Dependencies are installed into the ``node_modules`` directory. Deleting that
directory and re-running ``yarn install`` should always be safe.
Dealing with yarn.lock merge conflicts
--------------------------------------
Since ``yarn.lock`` is generated, it can create merge conflicts. Resolving
them at the ``yarn.lock`` level is too hard, but `yarn`_ itself is
deterministic. The best procedure for dealing with ``yarn.lock`` merge
conflicts is to first resolve the conflicts, if any, in ``package.json``. Then:
.. code-block:: console
yarn install --force
git add yarn.lock
This causes yarn to discard the ``yarn.lock`` file, recalculate the
dependencies, and write new content.
React Components and Styling
----------------------------
Each page is a React Component. For instance the status.html page code
is ``web/src/pages/status.jsx``. It is usually a good idea not to put
too much markup in those page components and create different
components for this instead. This way, the page component can deal
with the logic like reloading data if needed or evaluating URL
parameters and the child components can deal with the markup. Thus,
you will find a lot of components in the ``web/src/containers``
directory that mainly deal with the markup.
Mapping of pages/urls to components can be found in the route list in
``web/src/routes.js``.
The best way to get started is to check out the libraries that glue
everything together. Those are `React`__, `react-router`_ and
`Redux`_.
.. _React-getting-started: https://reactjs.org/docs/getting-started.html
__ React-getting-started_
For the visual part we are using `PatternFly`_. For a list of available
PatternFly React components, take a look at the `Components`_ section in their
documentation. If a single component is not enough, you could also take a
look at the `Demos`_ sections which provides some more advanced examples
incorporating multiple components and their interaction.
If you are unsure which component you should use for your purpose, you might
want to check out the `Usage and behaviour`_ section in their design guidelines.
There is also a list of available `icons`_ including some recommendations on
when to use which icon. In case you don't find an appropriate icon there, you
could check out the `FontAwesome Free`_ icons, as most of them are included in
PatternFly. To find out if an icon is available, simply try to import it from
the ``@patternfly/react-icons`` package.
For example if you want to use the `address-book`_ icon (which is not listed in
the PatternFly icon list) you can import it via the following statement:
.. code-block:: javascript
import { AddressBookIcon } from '@patternfly/react-icons'
Please note that the spelling of the icon name changes to CamelCase and is
always extended by ``Icon``.
Development
-----------
Building the code can be done with:
.. code-block:: bash
yarn build
zuul-web has a ``static`` route defined which serves files from
``zuul/web/static``. ``yarn build`` will put the build output files
into the ``zuul/web/static`` directory so that zuul-web can serve them.
A development server that handles things like reloading and
hot-updating of code can be started with:
.. code-block:: bash
yarn start
This will build the code and launch the dev server on `localhost:3000`. Fake
API responses need to be placed in the ``web/public/api`` directory.
.. code-block:: bash
mkdir public/api/
for route in info status jobs builds; do
curl -o public/api/${route} https://zuul.openstack.org/api/${route}
done
To use an existing Zuul API, use the REACT_APP_ZUUL_API environment
variable:
.. code-block:: bash
# Use openstack zuul's api:
yarn start:openstack
# Use software-factory multi-tenant zuul's api:
yarn start:multi
# Use a custom zuul:
REACT_APP_ZUUL_API="https://zuul.example.com/api/" yarn start
To run eslint tests locally:
.. code-block:: bash
yarn lint
Authentication
~~~~~~~~~~~~~~
The docker-compose file in ``doc/source/examples/keycloak`` can be
used to run a Keycloak server for use with a development build of the
web app. The default values in that file are already set up for the
web app running on localhost. See the Keycloak tutorial for details.
Deploying
---------
The web application is a set of static files and is designed to be served
by zuul-web from its ``static`` route. In order to make sure this works
properly, the javascript build needs to be performed so that the javascript
files are in the ``zuul/web/static`` directory. Because the javascript
build outputs into the ``zuul/web/static`` directory, as long as
``yarn build`` has been done before ``pip install .`` or
``python setup.py sdist``, all the files will be where they need to be.
As long as `yarn`_ is installed, the installation of zuul will run
``yarn build`` appropriately.
.. _yarn: https://yarnpkg.com/en/
.. _nodejs: https://nodejs.org/
.. _webpack: https://webpack.js.org/
.. _devtool: https://webpack.js.org/configuration/devtool/#devtool
.. _nodeenv: https://pypi.org/project/nodeenv
.. _React: https://reactjs.org/
.. _react-router: https://reactrouter.com/web/guides/philosophy
.. _Redux: https://redux.js.org/introduction/core-concepts
.. _PatternFly: https://www.patternfly.org/
.. _create-react-app: https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md
.. _Components: https://www.patternfly.org/v4/documentation/react/components/aboutmodal
.. _Demos: https://www.patternfly.org/v4/documentation/react/demos/bannerdemo
.. _Usage and behaviour: https://www.patternfly.org/v4/design-guidelines/usage-and-behavior/about-modal
.. _icons: https://www.patternfly.org/v4/guidelines/icons
.. _FontAwesome Free: https://fontawesome.com/icons?d=gallery&m=free
.. _address-book: https://fontawesome.com/icons/address-book?style=solid
By default, zuul-web provides a Progressive Web Application but does
not run a Service Worker. For deployers who would like to enable one,
set the environment variable
``REACT_APP_ENABLE_SERVICE_WORKER=true`` during installation.
Testing
=======
Zuul provides an extensive framework for performing functional testing
on the system from end-to-end with major external components replaced
by fakes for ease of use and speed.
Test classes that subclass :py:class:`~tests.base.ZuulTestCase` have
access to a number of attributes useful for manipulating or inspecting
the environment being simulated in the test:
.. autofunction:: tests.base.simple_layout
.. autoclass:: tests.base.ZuulTestCase
:members:
.. autoclass:: tests.base.FakeGerritConnection
:members:
:inherited-members:
.. autoclass:: tests.base.RecordingExecutorServer
:members:
.. autoclass:: tests.base.FakeBuild
:members:
.. autoclass:: tests.base.BuildHistory
:members:
ZooKeeper
=========
Overview
--------
Zuul has a microservices architecture designed with the goal of having no
single point of failure.
Zuul is an event driven system with several event loops that interact
with each other:
* Driver event loop: Drivers like GitHub or Gerrit have their own event loops.
They perform preprocessing of the received events and add events into the
scheduler event loop.
* Scheduler event loop: This event loop processes the pipelines and
reconfigurations.
Each of these event loops persists data in ZooKeeper so that other
components can share or resume processing.
A key aspect of scalability is maintaining an event queue per
pipeline. This makes it easy to process several pipelines in
parallel. A new driver event is first processed in the driver event
queue. This adds a new event into the scheduler event queue. The
scheduler event queue then checks which pipeline may be interested in
this event according to the tenant configuration and layout. Based on
this the event is dispatched to all matching pipeline queues.
In order to make reconfigurations efficient we store the parsed branch
config in Zookeeper. This makes it possible to create the current
layout without the need to ask the mergers multiple times for the
configuration. This is used by zuul-web to keep an up-to-date layout
for API requests.
We store the pipeline state in Zookeeper. This contains the complete
information about queue items, jobs and builds, as well as a separate
abbreviated state for quick access by zuul-web for the status page.
Driver Event Ingestion
----------------------
There are three types of event receiving mechanisms in Zuul:
* Active event gathering: The connection actively listens to events (Gerrit)
or generates them itself (git, timer, zuul)
* Passive event gathering: The events are sent to Zuul from outside (GitHub
webhooks)
* Internal event generation: The events are generated within Zuul itself and
typically get injected directly into the scheduler event loop.
The active event gathering needs to be handled differently from
passive event gathering.
Active Event Gathering
~~~~~~~~~~~~~~~~~~~~~~
This is mainly done by the Gerrit driver. We actively maintain a
connection to the target and receive events. We utilize a leader
election to make sure there is exactly one instance receiving the
events.
Passive Event Gathering
~~~~~~~~~~~~~~~~~~~~~~~
In case of passive event gathering the events are sent to Zuul
typically via webhooks. These types of events are received in zuul-web
which then stores them in Zookeeper. This type of event gathering is
used by GitHub and other drivers. In this case we can have multiple
instances but still receive only one event so that we don't need to
take special care of event deduplication or leader election. Multiple
instances behind a load balancer are safe to use and recommended for
such passive event gathering.
Configuration Storage
---------------------
ZooKeeper is not designed as a database for large amounts of data,
so we should store as little as possible in it. Thus we only
store the per-project-branch unparsed config in ZooKeeper. From this,
every part of Zuul, like the scheduler or zuul-web, can quickly
recalculate the layout of each tenant and keep it up to date by
watching for changes in the unparsed project-branch-config.
We store the actual config sharded across multiple nodes, and those nodes
are stored under per-project and per-branch znodes. This is needed because
of the 1MB limit per znode in ZooKeeper. It further makes it less
expensive to cache the global config in each component as this cache
is updated incrementally.
Executor and Merger Queues
--------------------------
The executors and mergers each have an execution queue (and in the
case of executors, optionally per-zone queues). This makes it easy
for executors and mergers to simply pick the next job to run without
needing to inspect the entire pipeline state. The scheduler is
responsible for submitting job requests as the state changes.
Zookeeper Map
-------------
This is a reference for object layout in Zookeeper.
.. path:: zuul
All ephemeral data stored here. Remove the entire tree to "reset"
the system.
.. path:: zuul/cache/connection/<connection>
The connection cache root. Each connection has a dedicated space
for its caches. Two types of caches are currently implemented:
change and branch.
.. path:: zuul/cache/connection/<connection>/branches
The connection branch cache root. Contains the cache itself and a
lock.
.. path:: zuul/cache/connection/<connection>/branches/data
:type: BranchCacheZKObject (sharded)
The connection branch cache data. This is a single sharded JSON blob.
.. path:: zuul/cache/connection/<connection>/branches/lock
:type: RWLock
The connection branch cache read/write lock.
.. path:: zuul/cache/connection/<connection>/cache
The connection change cache. Each node under this node is an entry
in the change cache. The node ID is a sha256 of the cache key, the
contents are the JSON serialization of the cache entry metadata.
One of the included items is the `data_uuid` which is used to
retrieve the actual change data.
When a cache entry is updated, a new data node is created without
deleting the old data node. They are eventually garbage collected.
.. path:: zuul/cache/connection/<connection>/data
Data for the change cache. These nodes are identified by a UUID
referenced from the cache entries.
These are sharded JSON blobs of the change data.
.. path:: zuul/cache/blob/data
Data for the blob store. These nodes are identified by a
sha256sum of the secret content.
These are sharded blobs of data.
.. path:: zuul/cache/blob/lock
Side-channel lock directory for the blob store. The store locks
by key id under this znode when writing.
.. path:: zuul/cleanup
This node holds locks for the cleanup routines to make sure that
only one scheduler runs them at a time.
.. path:: build_requests
.. path:: connection
.. path:: general
.. path:: merge_requests
.. path:: node_request
.. path:: sempahores
.. path:: zuul/components
The component registry. Each Zuul process registers itself under
the appropriate node in this hierarchy so the system has a holistic
view of what's running. The name of the node is based on the
hostname but is a sequence node in order to handle multiple
processes. The nodes are ephemeral so an outage is automatically
detected.
The contents of each node contain information about the running
process and may be updated periodically.
.. path:: executor
.. path:: fingergw
.. path:: merger
.. path:: scheduler
.. path:: web
.. path:: zuul/config/cache
The unparsed config cache. This contains the contents of every
Zuul config file returned by the mergers for use in configuration.
Organized by repo canonical name, branch, and filename. The files
themselves are sharded.
.. path:: zuul/config/lock
Locks for the unparsed config cache.
.. path:: zuul/events/connection/<connection>/events
:type: ConnectionEventQueue
The connection event queue root. Each connection has an event
queue where incoming events are recorded before being moved to the
tenant event queue.
.. path:: zuul/events/connection/<connection>/events/queue
The actual event queue. Entries in the queue reference separate
data nodes. These are sequence nodes to maintain the event order.
.. path:: zuul/events/connection/<connection>/events/data
Event data nodes referenced by queue items. These are sharded.
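A hedged sketch of appending an event to such a queue follows; the node names
and payload format are illustrative assumptions, not Zuul's exact layout:

.. code-block:: python

   import json

   from kazoo.client import KazooClient

   def append_event(zk: KazooClient, connection: str, event: dict) -> str:
       root = f"/zuul/events/connection/{connection}/events"
       # The payload goes into a separate data node.
       data_path = zk.create(f"{root}/data/event-",
                             json.dumps(event).encode("utf-8"),
                             sequence=True, makepath=True)
       # A small sequence node in the queue references the data node so
       # that ordering is preserved.
       ref = json.dumps({"data_path": data_path}).encode("utf-8")
       return zk.create(f"{root}/queue/queue-", ref,
                        sequence=True, makepath=True)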
.. path:: zuul/events/connection/<connection>/events/election
An election to determine which scheduler processes the event queue
and moves events to the tenant event queues.
Drivers may have additional elections as well. For example, Gerrit
has an election for the watcher and poller.
.. path:: zuul/events/tenant/<tenant>
Tenant-specific event queues. Each queue described below has a
data and queue subnode.
.. path:: zuul/events/tenant/<tenant>/management
The tenant-specific management event queue.
.. path:: zuul/events/tenant/<tenant>/trigger
The tenant-specific trigger event queue.
.. path:: zuul/events/tenant/<tenant>/pipelines
Holds a set of queues for each pipeline.
.. path:: zuul/events/tenant/<tenant>/pipelines/<pipeline>/management
The pipeline management event queue.
.. path:: zuul/events/tenant/<tenant>/pipelines/<pipeline>/result
The pipeline result event queue.
.. path:: zuul/events/tenant/<tenant>/pipelines/<pipeline>/trigger
The pipeline trigger event queue.
.. path:: zuul/executor/unzoned
:type: JobRequestQueue
The unzoned executor build request queue. The generic description
of a job request queue follows:
.. path:: requests/<request uuid>
Requests are added by UUID. Consumers watch the entire tree and
order the requests by znode creation time.
.. path:: locks/<request uuid>
:type: Lock
A consumer will create a lock under this node before processing
a request. The znode containing the lock and the request znode
have the same UUID. This is a side-channel lock so that the
lock can be held while the request itself is deleted.
.. path:: params/<request uuid>
Parameters can be quite large, so they are kept in a separate
znode and only read when needed, and may be removed during
request processing to save space in ZooKeeper. The data may be
sharded.
.. path:: result-data/<request uuid>
When a job is complete, the results of the merge are written
here. The results may be quite large, so they are sharded.
.. path:: results/<request uuid>
Since writing sharded data is not atomic, once the results are
written to ``result-data``, a small znode is written here to
indicate the results are ready to read. The submitter can watch
this znode to be notified that it is ready.
.. path:: waiters/<request uuid>
:ephemeral:
A submitter who requires the results of the job creates an
ephemeral node here to indicate their interest in the results.
This is used by the cleanup routines to ensure that they don't
prematurely delete the result data. Used for merge jobs.
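An illustrative consumer loop for such a job request queue might look like the
following sketch; it is not Zuul's executor code, but shows the
ordering-by-creation-time and side-channel locking described above:

.. code-block:: python

   from kazoo.client import KazooClient

   def next_request(zk: KazooClient, root: str):
       # List the outstanding requests and order them by creation time.
       requests = []
       for uuid in zk.get_children(f"{root}/requests"):
           _data, stat = zk.get(f"{root}/requests/{uuid}")
           requests.append((stat.ctime, uuid))
       # Try to grab the side-channel lock for the oldest unlocked request.
       for _ctime, uuid in sorted(requests):
           lock = zk.Lock(f"{root}/locks/{uuid}")
           if lock.acquire(blocking=False):
               # The caller processes the request and then releases the lock.
               return uuid, lock
       return None, None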
.. path:: zuul/executor/zones/<zone>
A zone-specific executor build request queue. The contents are the
same as above.
.. path:: zuul/layout/<tenant>
The layout state for the tenant. Contains the cache and time data
needed for a component to determine if its in-memory layout is out
of date and update it if so.
.. path:: zuul/layout-data/<layout uuid>
Additional information about the layout. This is sharded data for
each layout UUID.
.. path:: zuul/locks
Holds various types of locks so that multiple components can coordinate.
.. path:: zuul/locks/connection
Locks related to connections.
.. path:: zuul/locks/connection/<connection>
Locks related to a single connection.
.. path:: zuul/locks/connection/database/migration
:type: Lock
Only one component should run a database migration; this lock
ensures that.
.. path:: zuul/locks/events
Locks related to tenant event queues.
.. path:: zuul/locks/events/trigger/<tenant>
:type: Lock
The scheduler locks the trigger event queue for each tenant before
processing it. This lock is only needed when processing and
removing items from the queue; no lock is required to add items.
.. path:: zuul/locks/events/management/<tenant>
:type: Lock
The scheduler locks the management event queue for each tenant
before processing it. This lock is only needed when processing and
removing items from the queue; no lock is required to add items.
.. path:: zuul/locks/pipeline
Locks related to pipelines.
.. path:: zuul/locks/pipeline/<tenant>/<pipeline>
:type: Lock
The scheduler obtains a lock before processing each pipeline.
.. path:: zuul/locks/tenant
Tenant configuration locks.
.. path:: zuul/locks/tenant/<tenant>
:type: RWLock
A write lock is obtained at this location before creating a new
tenant layout and storing its metadata in ZooKeeper. Components
which later determine that they need to update their tenant
configuration to match the state in ZooKeeper will obtain a read
lock at this location to ensure the state isn't mutated again while
the components are updating their layout to match.
.. path:: zuul/ltime
An empty node which serves to coordinate logical timestamps across
the cluster. Components may update this znode which will cause the
latest ZooKeeper transaction ID to appear in the zstat for this
znode. This is known as the `ltime` and can be used to communicate
that any subsequent transactions have occurred after this `ltime`.
This is frequently used for cache validation. Any cache which was
updated after a specified `ltime` may be determined to be
sufficiently up-to-date for use without invalidation.
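For example, a component could obtain a new `ltime` roughly as follows (a
sketch using kazoo, not the exact Zuul code):

.. code-block:: python

   from kazoo.client import KazooClient

   def get_ltime(zk: KazooClient) -> int:
       # Touch the coordination znode; the returned stat carries the
       # ZooKeeper transaction id of that write, which serves as the ltime.
       stat = zk.set("/zuul/ltime", b"")
       return stat.mzxid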
.. path:: zuul/merger
:type: JobRequestQueue
A JobRequestQueue for mergers. See :path:`zuul/executor/unzoned`.
.. path:: zuul/nodepool
:type: NodepoolEventElection
An election to decide which scheduler will monitor nodepool
requests and generate node completion events as they are completed.
.. path:: zuul/results/management
Stores results from management events (such as an enqueue event).
.. path:: zuul/scheduler/timer-election
:type: SessionAwareElection
An election to decide which scheduler will generate events for
timer pipeline triggers.
.. path:: zuul/scheduler/stats-election
:type: SchedulerStatsElection
An election to decide which scheduler will report system-wide stats
(such as total node requests).
.. path:: zuul/global-semaphores/<semaphore>
:type: SemaphoreHandler
Represents a global semaphore (shared by multiple tenants).
Information about which builds hold the semaphore is stored in the
znode data.
.. path:: zuul/semaphores/<tenant>/<semaphore>
:type: SemaphoreHandler
Represents a semaphore. Information about which builds hold the
semaphore is stored in the znode data.
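A rough sketch of acquiring such a semaphore with an optimistic version check
follows; the stored field names are assumptions, not Zuul's exact format:

.. code-block:: python

   import json

   from kazoo.client import KazooClient
   from kazoo.exceptions import BadVersionError, NoNodeError

   def acquire_semaphore(zk: KazooClient, path: str,
                         holder: str, maximum: int) -> bool:
       while True:
           try:
               data, stat = zk.get(path)
           except NoNodeError:
               zk.ensure_path(path)
               continue
           state = json.loads(data) if data else {"holders": []}
           if holder in state["holders"]:
               return True
           if len(state["holders"]) >= maximum:
               return False
           state["holders"].append(holder)
           try:
               # Optimistic update: fails if another component raced us.
               zk.set(path, json.dumps(state).encode("utf-8"),
                      version=stat.version)
               return True
           except BadVersionError:
               continue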
.. path:: zuul/system
:type: SystemConfigCache
System-wide configuration data.
.. path:: conf
The serialized version of the unparsed abide configuration as
well as system attributes (such as the tenant list).
.. path:: conf-lock
:type: WriteLock
A lock to be acquired before updating :path:`zuul/system/conf`
.. path:: zuul/tenant/<tenant>
Tenant-specific information here.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>
Pipeline state.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/dirty
A flag indicating that the pipeline state is "dirty"; i.e., it
needs to have the pipeline processor run.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/queue
Holds queue objects.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>
Items belong to queues, but are held in their own hierarchy since
they may shift to different queues during reconfiguration.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>
There will only be one buildset under the buildset/ node. If we
reset it, we will get a new uuid and delete the old one. Any
external references to it will be automatically invalidated.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/repo_state
The global repo state for the buildset is kept in its own node
since it can be large, and is also common for all jobs in this
buildset.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/job/<job name>
The frozen job.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/job/<job name>/build/<build uuid>
Information about this build of the job. Similar to buildset,
there should only be one entry, and using the UUID automatically
invalidates any references.
.. path:: zuul/tenant/<tenant>/pipeline/<pipeline>/item/<item uuid>/buildset/<buildset uuid>/job/<job name>/build/<build uuid>/parameters
Parameters for the build; these can be large so they're in their
own znode and will be read only if needed.
Release Notes
=============
Zuul uses `reno`_ for release note management. When adding a noteworthy
feature, fixing a noteworthy bug or introducing a behavior change that a
user or operator should know about, it is a good idea to add a release note
to the same patch.
Installing reno
---------------
reno has a command, ``reno``, that is expected to be run by developers
to create a new release note. The simplest thing to do is to install it locally
with pip:
.. code-block:: bash
pip install --user reno
Adding a new release note
-------------------------
Adding a new release note is easy:
.. code-block:: bash
reno new releasenote-slug
Where ``releasenote-slug`` is a short identifier for the release note.
reno will then create a file in ``releasenotes/notes`` that contains an
initial template with the available sections.
The file it creates is a yaml file. All of the sections except for ``prelude``
contain lists, which will be combined with the lists from similar sections in
other note files to create a bulleted list that will then be processed by
Sphinx.
The ``prelude`` section is a single block of text that will also be
combined with any other prelude sections into a single chunk.
.. _reno: https://docs.openstack.org/reno/latest/
Developer's Guide
=================
This section contains information for Developers who wish to work on
Zuul itself. This information is not necessary for the operation of
Zuul, though advanced users may find it interesting.
.. autoclass:: zuul.scheduler.Scheduler
.. toctree::
:maxdepth: 1
datamodel
drivers
triggers
testing
metrics
docs
ansible
javascript
specs/index
zookeeper
model-changelog
releasenotes
Triggers
========
Triggers must inherit from :py:class:`~zuul.trigger.BaseTrigger` and, at a minimum,
implement the :py:meth:`~zuul.trigger.BaseTrigger.getEventFilters` method.
.. autoclass:: zuul.trigger.BaseTrigger
:members:
The current list of triggers is:
.. autoclass:: zuul.driver.gerrit.gerrittrigger.GerritTrigger
:members:
.. autoclass:: zuul.driver.timer.timertrigger.TimerTrigger
:members:
.. autoclass:: zuul.driver.zuul.zuultrigger.ZuulTrigger
:members:
Drivers
=======
Zuul provides an API for extending its functionality to interact with
other systems.
.. autoclass:: zuul.driver.Driver
:members:
.. autoclass:: zuul.driver.ConnectionInterface
:members:
.. autoclass:: zuul.driver.SourceInterface
:members:
.. autoclass:: zuul.driver.TriggerInterface
:members:
.. autoclass:: zuul.driver.ReporterInterface
:members:
Data Model Changelog
====================
Record changes to the ZooKeeper data model which require API version
increases here.
When making a model change:
* Increment the value of ``MODEL_API`` in ``model_api.py``.
* Update code to use the new API by default and add
backwards-compatibility handling for older versions. This makes it
easier to clean up backwards-compatibility handling in the future.
* Make sure code that special cases model versions either references a
  ``model_api`` variable or has a comment like `MODEL_API: >
  {version}` so that we can grep for that and clean up compatibility
  code that is no longer needed (an illustrative sketch of this pattern
  follows after this list).
* Add a test to ``test_model_upgrade.py``.
* Add an entry to this log so we can decide when to remove
backwards-compatibility handlers.
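The following is a hypothetical illustration of the version-guard pattern
described above; the function and attribute names are examples, not actual
Zuul code:

.. code-block:: python

   # MODEL_API: > 13
   def serialize_build(build, model_api):
       data = {"uuid": build.uuid}
       if model_api >= 14:
           # Newer components understand the pre_fail attribute.
           data["pre_fail"] = build.pre_fail
       return data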
Version 0
---------
:Prior Zuul version: 4.11.0
:Description: This is an implied version as of Zuul 4.12.0 to
initialize the series.
Version 1
---------
:Prior Zuul version: 4.11.0
:Description: No change since Version 0. This explicitly records the
component versions in ZooKeeper.
Version 2
---------
:Prior Zuul version: 5.0.0
:Description: Changes the semaphore handle format from `<item_uuid>-<job_name>`
to a dictionary with buildset path and job name.
Version 3
---------
:Prior Zuul version: 5.0.0
:Description: Add a new `SupercedeEvent` and use that for dequeuing of
superceded items from other pipelines. This only affects the
schedulers.
Version 4
---------
:Prior Zuul version: 5.1.0
:Description: Adds QueueItem.dequeued_missing_requirements and sets it to True
if a change no longer meets merge requirements in dependent
pipelines. This only affects schedulers.
Version 5
---------
:Prior Zuul version: 5.1.0
:Description: Changes the result data attributes on Build from
ResultData to JobData instances and uses the
inline/offloading paradigm from FrozenJob. This affects
schedulers and executors.
Version 6
---------
:Prior Zuul version: 5.2.0
:Description: Stores the complete layout min_ltimes in /zuul/layout-data.
This only affects schedulers.
Version 7
---------
:Prior Zuul version: 5.2.2
:Description: Adds the blob store and stores large secrets in it.
Playbook secret references are now either an integer
index into the job secret list, or a dict with a blob
store key. This affects schedulers and executors.
Version 8
---------
:Prior Zuul version: 6.0.0
:Description: Deduplicates jobs in dependency cycles. Affects
schedulers only.
Version 9
---------
:Prior Zuul version: 6.3.0
:Description: Adds nodeset_alternatives and nodeset_index to frozen job.
Removes nodeset from frozen job. Affects schedulers and executors.
Version 10
----------
:Prior Zuul version: 6.4.0
:Description: Renames admin_rules to authz_rules in unparsed abide.
Affects schedulers and web.
Version 11
----------
:Prior Zuul version: 8.0.1
:Description: Adds merge_modes to branch cache. Affects schedulers and web.
Version 12
----------
:Prior Zuul version: 8.0.1
:Description: Adds job_versions and build_versions to BuildSet.
Affects schedulers.
Version 13
----------
:Prior Zuul version: 8.2.0
:Description: Stores only the necessary event info as part of a queue item
instead of the full trigger event.
Affects schedulers.
Version 14
----------
:Prior Zuul version: 8.2.0
:Description: Adds the pre_fail attribute to builds.
Affects schedulers.
Version 15
----------
:Prior Zuul version: 9.0.0
:Description: Adds ansible_split_streams to FrozenJob.
Affects schedulers and executors.
Documentation
=============
This is a brief style guide for Zuul documentation.
ReStructuredText Conventions
----------------------------
Code Blocks
~~~~~~~~~~~
When showing a YAML example, use the ``.. code-block:: yaml``
directive so that the sample appears as a code block with the correct
syntax highlighting.
Literal Values
~~~~~~~~~~~~~~
Filenames and literal values (such as when we instruct a user to type
a specific string into a configuration file) should use the RST
````literal```` syntax.
YAML supports boolean values expressed with or without an initial
capital letter. In examples and documentation, use ``true`` and
``false`` in lowercase type because the resulting YAML is easier for
users to type and read.
Terminology
~~~~~~~~~~~
Zuul employs some specialized terminology. To help users become
acquainted with it, we employ a glossary. Observe the following:
* Specialized terms should have entries in the glossary.
* If the term is being defined in the text, don't link to the glossary
(that would be redundant), but do emphasize it with ``*italics*``
the first time it appears in that definition. Subsequent uses
within the same subsection should be in regular type.
* If it's being used (but not defined) in the text, link the first
usage within a subsection to the glossary using the ``:term:`` role,
but subsequent uses should be in regular type.
* Be cognizant of how readers may jump to link targets within the
text, so be liberal in considering that once you cross a link
target, you may be in a new "subsection" for the above guideline.
Zuul Sphinx Directives
----------------------
The following extra Sphinx directives are available in the ``zuul``
domain. The ``zuul`` domain is configured as the default domain, so the
``zuul:`` prefix may be omitted.
zuul:attr::
~~~~~~~~~~~
This should be used when documenting Zuul configuration attributes.
Zuul configuration is heavily hierarchical, and this directive
facilitates documenting these by emphasising the hierarchy as
appropriate. It will annotate each configuration attribute with a
nice header with its own unique hyperlink target. It displays the
entire hierarchy of the attribute, but emphasises the last portion
(i.e., the field being documented).
To use the hierarchical features, simply nest with indentation in the
normal RST manner.
It supports the ``required`` and ``default`` options and will annotate
the header appropriately. Example:
.. code-block:: rst
.. attr:: foo
Some text about ``foo``.
.. attr:: bar
:required:
:default: 42
Text about ``foo.bar``.
.. attr:: foo
:noindex:
Some text about ``foo``.
.. attr:: bar
:noindex:
:required:
:default: 42
Text about ``foo.bar``.
zuul:value::
~~~~~~~~~~~~
Similar to zuul:attr, but used when documenting a literal value of an
attribute.
.. code-block:: rst
.. attr:: foo
Some text about foo. It supports the following values:
.. value:: bar
One of the supported values for ``foo`` is ``bar``.
.. value:: baz
Another supported value for ``foo`` is ``baz``.
.. attr:: foo
:noindex:
Some text about foo. It supports the following values:
.. value:: bar
:noindex:
One of the supported values for ``foo`` is ``bar``.
.. value:: baz
:noindex:
Another supported value for ``foo`` is ``baz``.
zuul:var::
~~~~~~~~~~
Also similar to zuul:attr, but used when documenting an Ansible
variable which is available to a job's playbook. In these cases, it's
often necessary to indicate the variable may be an element of a list
or dictionary, so this directive supports a ``type`` option. It also
supports the ``hidden`` option so that complex data structure
definitions may continue across sections. To use this, set the hidden
option on a ``zuul:var::`` directive with the root of the data
structure as the name. Example:
.. code-block:: rst
.. var:: foo
Foo is a dictionary with the following keys:
.. var:: items
:type: list
Items is a list of dictionaries with the following keys:
.. var:: bar
Text about bar
Section Boundary
.. var:: foo
:hidden:
.. var:: baz
Text about baz
.. End of code block; start example
.. var:: foo
:noindex:
Foo is a dictionary with the following keys:
.. var:: items
:noindex:
:type: list
Items is a list of dictionaries with the following keys:
.. var:: bar
:noindex:
Text about bar
Section Boundary
.. var:: foo
:noindex:
:hidden:
.. var:: baz
:noindex:
Text about baz
.. End of example
Zuul Sphinx Roles
-----------------
The following extra Sphinx roles are available. Use these within the
text when referring to attributes, values, and variables defined with
the directives above. Use these roles for the first appearance of an
object within a subsection, but use the ````literal```` role in
subsequent uses.
\:zuul:attr:
~~~~~~~~~~~~
This creates a reference to the named attribute. Provide the fully
qualified name (e.g., ``:attr:`pipeline.manager```)
\:zuul:value:
~~~~~~~~~~~~~
This creates a reference to the named value. Provide the fully
qualified name (e.g., ``:attr:`pipeline.manager.dependent```)
\:zuul:var:
~~~~~~~~~~~
This creates a reference to the named variable. Provide the fully
qualified name (e.g., ``:var:`zuul.executor.name```)
:title: Metrics
Metrics
=======
Event Overview
--------------
The following table illustrates the event and pipeline processing
sequence as it relates to some of the metrics described in
:ref:`statsd`. This is intended as general guidance only and is not
an exhaustive list.
+----------------------------------------+------+------+------+--------------------------------------+
| Event | Metrics | Attribute |
+========================================+======+======+======+======================================+
| Event generated by source | | | | event.timestamp |
+----------------------------------------+------+ + +--------------------------------------+
| Enqueued into driver queue | | | | |
+----------------------------------------+------+ + +--------------------------------------+
| Enqueued into tenant trigger queue | | | | event.arrived_at_scheduler_timestamp |
+----------------------------------------+ + [8] + +--------------------------------------+
| Forwarded to matching pipelines | [1] | | | |
+----------------------------------------+ + + +--------------------------------------+
| Changes enqueued ahead | | | | |
+----------------------------------------+ + + +--------------------------------------+
| Change enqueued | | | | item.enqueue_time |
+----------------------------------------+------+------+ +--------------------------------------+
| Changes enqueued behind | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Set item configuration | | | | build_set.configured_time |
+----------------------------------------+------+------+ +--------------------------------------+
| Request files changed (if needed) | | | | |
+----------------------------------------+ +------+ +--------------------------------------+
| Request merge | [2] | | | |
+----------------------------------------+ +------+ +--------------------------------------+
| Wait for merge (and files if needed) | | | [9] | |
+----------------------------------------+------+------+ +--------------------------------------+
| Generate dynamic layout (if needed) | [3] | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Freeze job graph | [4] | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Request global repo state (if needed) | | | | build_set.repo_state_request_time |
+----------------------------------------+ [5] +------+ +--------------------------------------+
| Wait for global repo state (if needed) | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Deduplicate jobs | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Acquire semaphore (non-resources-first)| | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Request nodes | | | | request.created_time |
+----------------------------------------+ [6] +------+ +--------------------------------------+
| Wait for nodes | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Acquire semaphore (resources-first) | | | | |
+----------------------------------------+------+------+ +--------------------------------------+
| Enqueue build request | | | | build.execute_time |
+----------------------------------------+ [7] +------+ +--------------------------------------+
| Executor starts job | | | | build.start_time |
+----------------------------------------+------+------+------+--------------------------------------+
====== =============================
Metric Name
====== =============================
1 event_enqueue_processing_time
2 merge_request_time
3 layout_generation_time
4 job_freeze_time
5 repo_state_time
6 node_request_time
7 job_wait_time
8 event_enqueue_time
9 event_job_time
====== =============================
================================================
Enhanced regional distribution of zuul-executors
================================================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
Problem description
===================
When running large distributed deployments it can be desirable to keep traffic
as local as possible. To facilitate this, Zuul supports zoning of zuul-executors.
When using zones, executors only process jobs on nodes that are running in the
same zone. This works well in many cases. However, there is currently a
limitation around live log streaming that makes it impossible to use this
feature in certain environments.
Live log streaming via zuul-web or zuul-fingergw requires that each executor
is directly addressable from zuul-web or zuul-fingergw. This is not the case
if
* zuul-executors are behind a NAT. In this case one would need to create a NAT
rule per executor on different ports which can become a maintenance nightmare.
* zuul-executors run in a different Kubernetes or OpenShift cluster. In this
  case one would need an Ingress/Route or NodePort per executor, which also
  makes maintenance really hard.
Proposed change
---------------
In both use cases it would be desirable to have one service in each zone that
can further dispatch log streams within its own zone. Addressing a single
service is much more feasible, e.g. with a single NAT rule or a Route or NodePort
service in Kubernetes.
.. graphviz::
:align: center
graph {
graph [fontsize=10 fontname="Verdana"];
node [fontsize=10 fontname="Verdana"];
user [ label="User" ];
subgraph cluster_1 {
node [style=filled];
label = "Zone 1";
web [ label="Web" ];
executor_1 [ label="Executor 1" ];
}
subgraph cluster_2 {
node [style=filled];
label = "Zone 2";
route [ label="Route/Ingress/NAT" ]
fingergw_zone2 [ label="Fingergw Zone 2"];
executor_2 [ label="Executor 2" ];
executor_3 [ label="Executor 3" ];
}
user -- web [ constraint=false ];
web -- executor_1
web -- route [ constraint=false ]
route -- fingergw_zone2
fingergw_zone2 -- executor_2
fingergw_zone2 -- executor_3
}
Current log streaming is essentially the same for zuul-web and zuul-fingergw and
works like this:
* Fingergw gets a stream request from a user
* Fingergw resolves stream address by calling get_job_log_stream_address and
supplying a build uuid
* Scheduler responds with the executor hostname and port on which the build
is running.
* Fingergw connects to the stream address, supplies the build uuid and connects
the streams.
The proposed process is almost the same:
* Fingergw gets a stream request from a user
* Fingergw resolves stream address by calling get_job_log_stream_address and
supplying the build uuid *and the zone of the fingergw (optional)*
* Scheduler responds:
* Address of executor if the zone provided with the request matches the zone
of the executor running the build, or the executor is un-zoned.
* Address of fingergw in the target zone otherwise.
* Fingergw connects to the stream address, supplies the build uuid and connects
the streams.
In case the build runs in a different zone, the fingergw in the target zone
will follow the exact same process and reach the executor stream directly,
since that executor is in its own zone.
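The dispatch decision described above could be sketched as follows; the data
structures here are assumptions for illustration only, not proposed code:

.. code-block:: python

   from dataclasses import dataclass

   @dataclass
   class StreamTarget:
       host: str
       port: int

   def resolve_stream_address(executor, zone_fingergws, requester_zone):
       # Stream directly from the executor if the requester is in the same
       # zone, or if the executor is un-zoned.
       if executor.zone is None or executor.zone == requester_zone:
           return StreamTarget(executor.host, executor.port)
       # Otherwise hand off to the fingergw registered for the executor's
       # zone, which repeats the same resolution within that zone.
       gw = zone_fingergws[executor.zone]
       return StreamTarget(gw.host, gw.port)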
In order to facilitate this the following changes need to be made:
* The fingergw registers itself in the zk component registry and offers its
hostname, port and optionally zone. The hostname further needs to be
configurable like it is for the executors.
* Zuul-web and fingergw need a new optional config parameter containing their
zone.
While zuul-web and zuul-fingergw will be aware of what zone they are running in,
end-users will not need this information; the user-facing instances of those
services will continue to serve the entirety of the Zuul system regardless of
which zone they reside in, all from a single public URL or address.
Gearman
-------
The easiest and most standard way of getting non-HTTP traffic into a
Kubernetes/OpenShift cluster is using Ingress/Routes in combination with TLS and
SNI (Server Name Indication). SNI is used in this case for dispatching the
connection to the correct service. Gearman currently doesn't support SNI, which
makes it harder to route it into a Kubernetes/OpenShift cluster from outside.
Security considerations
-----------------------
Live log streams can potentially contain sensitive data. Especially when
transferring them between different datacenters encryption would be useful.
So we should support optionally encrypting the finger streams using TLS with
optional client auth like we do with gearman. The mechanism should also support
SNI (Server name indication).
Circular Dependencies
=====================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
The current assumption in Zuul is that dependencies form a Directed Acyclic
Graph (DAG). This is also what should be considered best practice. However,
there can be cases where we have circular dependencies and with that no longer
a DAG.
The current implementation to detect and prevent cycles will visit all vertices
of the dependency graph and bail out if an item is encountered twice. This
method is no longer feasible when we want to allow circular dependencies
between changes.
Instead, we need to find the `strongly connected components`_ (changes) of a
given dependency graph. The individual changes in those subgraphs need to know
about each other.
Circular dependency handling needs to be configurable on a per tenant and
project basis.
.. _strongly connected components: https://en.wikipedia.org/wiki/Strongly_connected_component
Proposed change
---------------
By default, Zuul will retain the current behavior of preventing dependency
cycles. The circular dependency handling must be explicitly enabled in the
tenant configuration.
.. code-block:: yaml
allow-circular-dependencies: true
In addition, the tenant default may be overridden on a per-project basis:
.. code-block:: yaml
[...]
untrusted-projects:
- org/project:
allow-circular-dependencies: true
[...]
Changes with cross-repo circular dependencies are required to share the same
change queue. We would still enqueue one queue item per change but hold back
reporting of the cycle until all items have finished. All the items in a cycle
would reference a shared bundle item.
A different approach would be to allow the enqueuing of changes across change
queues. This, however, would be a very substantial change with a lot of edge
cases and will therefore not be considered.
Dependencies are currently expressed with a ``Depends-On`` in the footer of a
commit message or pull-request body. This information is already used for
detecting cycles in the dependency graph.
A cycle is created by having a mutual ``Depends-On`` for the changes that
depend on each other.
We might need a way to prevent changes from being enqueued before all changes
that are part of a cycle are prepared. For this, we could introduce a special
value (e.g. ``null``) for the ``Depends-On`` to indicate that the cycle is not
complete yet. This is necessary since we don't know the change URLs ahead of time.
From a user's perspective this would look as follows:
1. Set ``Depends-On: null`` on the first change that is uploaded.
2. Reference the change URL of the previous change in the ``Depends-On``.
Repeat this for all changes that are part of the cycle.
3. Set the ``Depends-On`` (e.g. pointing to the last uploaded change) to
complete the cycle.
Implementation
--------------
1. Detect strongly connected changes using e.g. `Tarjan's algorithm`_, when
   enqueuing a change and its dependencies (an illustrative sketch follows
   after this list).
.. _Tarjan's algorithm: https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
2. Introduce a new class (e.g. ``Bundle``) that will hold a list of strongly
connected components (changes) in the order in which they need to be merged.
In case a circular dependency is detected all instances of ``QueueItem``
that are strongly connected will hold a reference to the same ``Bundle``
instance. In case there is no cycle, this reference will be ``None``.
3. The merger call for a queue item that has an associated bundle item will
always include all changes in the bundle.
However each ``QueueItem`` will only have and execute the job graph for a
particular change.
4. Hold back reporting of a ``QueueItem`` in case it has an associated
``Bundle`` until all related ``QueueItem`` have finished.
Report the individual job results for a ``QueueItem`` as usual. The last
reported item will also report a summary of the overall bundle result to
each related change.
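The following is an illustrative implementation of the detection in step 1
using Tarjan's algorithm; it is a sketch for this spec, not Zuul code. The
graph is given as a mapping from a change to the changes it depends on:

.. code-block:: python

   def strongly_connected_components(graph):
       """Return the SCCs of a graph given as {change: [dependencies]}."""
       index_counter = [0]
       index, lowlink = {}, {}
       stack, on_stack = [], set()
       result = []

       def strongconnect(node):
           index[node] = lowlink[node] = index_counter[0]
           index_counter[0] += 1
           stack.append(node)
           on_stack.add(node)
           for dep in graph.get(node, ()):
               if dep not in index:
                   strongconnect(dep)
                   lowlink[node] = min(lowlink[node], lowlink[dep])
               elif dep in on_stack:
                   lowlink[node] = min(lowlink[node], index[dep])
           if lowlink[node] == index[node]:
               component = []
               while True:
                   member = stack.pop()
                   on_stack.discard(member)
                   component.append(member)
                   if member == node:
                       break
               result.append(component)

       for node in graph:
           if node not in index:
               strongconnect(node)
       return result

   # Example: A and B depend on each other and form one component; C is
   # its own component.
   print(strongly_connected_components(
       {"A": ["B"], "B": ["A"], "C": ["A"]}))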
Challenges
----------
Ordering of changes
Usually, the order of changes in a strongly connected component doesn't
matter. However, for sources that have the concept of a parent-child
relationship (e.g. Gerrit changes) we need to keep the order and report a
parent change before the child.
This information is available in ``Change.git_needs_changes``.
To not change the reporting logic too much (currently only the first item in
the queue can report), the changes need to be enqueued in the correct order.
Due to the recursive implementation of ``PipelineManager.addChange()``, this
could mean that we need to allow enqueuing changes ahead of others.
Window size in the dependent pipeline manager
Since we need to postpone reporting until all items of a bundle have
finished those items will be kept in the queue. This will prevent new
changes from entering the active window. It might even lead to a deadlock in
case the number of changes within the strongly connected component is larger
than the current window size.
One solution would be to increase the size of the window by one every time
we hold an item that has finished but is still waiting for other items in a
bundle.
Reporting of bundle items
The current logic will try to report an item as soon as all jobs have
finished. In case this item is part of a bundle we have to hold back the
reporting until all items that are part of the bundle have succeeded or we
know that the whole bundle will fail.
In case the first item of a bundle did already succeed but a subsequent item
fails we must not reset the builds of queue items that are part of this
bundle, as it would currently happen when the jobs are canceled. Instead, we
need to keep the existing results for all items in a bundle.
When reporting a queue item that is part of a bundle, we need to make sure
to also report information related to the bundle as a whole. Otherwise, the
user might not be able to identify why a failure is reported even though all
jobs succeeded.
The reporting of the bundle summary needs to be done in the last item of a
bundle because only then we know if the complete bundle was submitted
successfully or not.
Recovering from errors
Allowing circular dependencies introduces the risk of ending up in a broken
state when something goes wrong during the merge of the bundled changes.
Currently, there is no way to more or less atomically submit multiple
changes at once. Gerrit offers an option to submit a complete topic. This,
however, also doesn't offer any guarantees for being atomic across
repositories [#atomic]_. When considering changes with a circular
dependency, spanning multiple sources (e.g. Gerrit + Github) this seems no
longer possible at all.
Given those constraints, Zuul can only work on a best effort basis by
trying hard to make sure to not start merging the chain of dependent
changes unless it is safe to assume that the merges will succeed.
Even in those cases, there is a chance that e.g. due to a network issue,
Zuul fails to submit all changes of a bundle.
In those cases, the best way would be to automatically recover from the
situation. However, this might mean pushing a revert or force-pushing to
the target branch and reopening changes, which will introduce a new set of
problems on its own. In addition, the recovery might be affected by e.g.
network issues as well and can potentially fail.
All things considered, it's probably best to perform a gate reset as with a
normal failing item and require human intervention to bring the
repositories back into a consistent state. Zuul can assist in that by
logging detailed information about the performed steps and encountered
errors to the affected change pages.
Execution overhead
Without any de-duplication logic, every change that is part of a bundle
will have its jobs executed. For circular dependent changes with the same
jobs configured this could mean executing the same jobs twice.
.. rubric:: Footnotes
.. [#atomic] https://groups.google.com/forum/#!topic/repo-discuss/OuCXboAfEZQ
Kubernetes Operator
===================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
While Zuul can be happily deployed in a Kubernetes environment, it is
a complex enough system that a Kubernetes Operator could provide value
to deployers. A Zuul Operator would allow a deployer to create, manage
and operate "A Zuul" in their Kubernetes and leave the details of how
that works to the Operator.
To that end, the Zuul Project should create and maintain a Kubernetes
Operator for running Zuul. Given the close ties between Zuul and Ansible,
we should use `Ansible Operator`_ to implement the Operator. Our existing
community is already running Zuul in both Kubernetes and OpenShift, so
we should ensure our Operator works in both. When we're happy with it,
we should publish it to `OperatorHub`_.
That's the easy part. The remainder of the document is for hammering out
some of the finer details.
.. _Ansible Operator: https://github.com/operator-framework/operator-sdk/blob/master/doc/ansible/user-guide.md
.. _OperatorHub: https://www.operatorhub.io/
Custom Resource Definitions
---------------------------
One of the key parts of making an Operator is to define one or more
Custom Resource Definition (CRD). These allow a user to say "hey k8s,
please give me a Thing". It is then the Operator's job to take the
appropriate actions to make sure the Thing exists.
For Zuul, there should definitely be a Zuul CRD. It should be namespaced
with ``zuul-ci.org``. There should be a section for each service for
managing service config as well as capacity:
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
.. note:: Until the distributed scheduler exists in the underlying Zuul
implementation, the ``count`` parameter for the scheduler service
cannot be set to anything greater than 1.
Zuul requires Nodepool to operate. While there are friendly people
using Nodepool without Zuul, from the context of the Operator, the Nodepool
services should just be considered part of Zuul.
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
# Because of nodepool config sharding, count is not valid for launcher.
launcher:
builder:
count: 2
Images
------
The Operator should, by default, use the ``docker.io/zuul`` images that
are published. To support locally built or overridden images, the Operator
should have optional config settings for each image.
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
image: docker.io/example/zuul-merger
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
launcher:
builder:
count: 2
External Dependencies
---------------------
Zuul needs some services, such as a RDBMS and a Zookeeper, that themselves
are resources that should or could be managed by an Operator. It is out of
scope (and inappropriate) for Zuul to provide these itself. Instead, the Zuul
Operator should use CRDs provided by other Operators.
On Kubernetes installs that support the Operator Lifecycle Manager, external
dependencies can be declared in the Zuul Operator's OLM metadata. However,
not all Kubernetes installs can handle this, so it should also be possible
for a deployer to manually install a list of documented operators and CRD
definitions before installing the Zuul Operator.
For each external service dependency where the Zuul Operator would be relying
on another Operator to create and manage the given service, there should be
a config override setting to allow a deployer to say "I already have one of
these that's located at Location, please don't create one." The config setting
should be the location and connection information for the externally managed
version of the service, and not providing that information should be taken
to mean the Zuul Operator should create and manage the resource.
::
---
apiVersion: v1
kind: Secret
metadata:
name: externalDatabase
type: Opaque
stringData:
dburi: mysql+pymysql://zuul:[email protected]/zuul
---
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
# If the database section is omitted, the Zuul Operator will create
# and manage the database.
database:
secretName: externalDatabase
key: dburi
While Zuul supports multiple backends for RDBMS, the Zuul Operator should not
attempt to support managing both. If the user chooses to let the Zuul Operator
create and manage RDBMS, the `Percona XtraDB Cluster Operator`_ should be
used. Deployers who wish to use a different one should use the config override
setting pointing to the DB location.
.. _Percona XtraDB Cluster Operator: https://operatorhub.io/operator/percona-xtradb-cluster-operator
Zuul Config
-----------
Zuul config files that do not contain information that the Operator needs to
do its job, or that do not contain information into which the Operator might
need to add data, should be handled by ConfigMap resources and not as
parts of the CRD. The CRD should take references to the ConfigMap objects.
Completely external files like ``clouds.yaml`` and ``kube/config``
should be in Secrets referenced in the config. Zuul files like
``nodepool.yaml`` and ``main.yaml`` that contain no information the Operator
needs should be in ConfigMaps and referenced.
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
executor:
count: 5
web:
count: 1
fingergw:
count: 1
scheduler:
count: 1
config: zuulYamlConfig
launcher:
config: nodepoolYamlConfig
builder:
config: nodepoolYamlConfig
externalConfig:
openstack:
secretName: cloudsYaml
kubernetes:
secretName: kubeConfig
amazon:
secretName: botoConfig
Zuul files like ``/etc/nodepool/secure.conf`` and ``/etc/zuul/zuul.conf``
should be managed by the Operator and their options should be represented in
the CRD.
The Operator will shard the Nodepool config by provider-region using a utility
pod and create a new ConfigMap for each provider-region with only the subset of
config needed for that provider-region. It will then create a pod for each
provider-region.
Because the Operator needs to make decisions based on what's going on with
the ``zuul.conf``, or needs to directly manage some of it on behalf of the
deployer (such as RDBMS and Zookeeper connection info), the ``zuul.conf``
file should be managed by and expressed in the CRD.
Connections should each have a stanza that is mostly a passthrough
representation of what would go in the corresponding section of ``zuul.conf``.
Due to the nature of secrets in kubernetes, fields that would normally contain
either a secret string or a path to a file containing secret information
should instead take the name of a kubernetes secret and the key name of the
data in that secret that the deployer will have previously defined. The
Operator will use this information to mount the appropriate secrets into a
utility container, construct appropriate config files for each service,
reupload those into kubernetes as additional secrets, and then mount the
config secrets and the needed secrets containing file content only in the
pods that need them.
::
---
apiVersion: v1
kind: Secret
metadata:
name: gerritSecrets
type: Opaque
data:
sshkey: YWRtaW4=
http_password: c2VjcmV0Cg==
---
apiVersion: v1
kind: Secret
metadata:
name: githubSecrets
type: Opaque
data:
app_key: aRnwpen=
webhook_token: an5PnoMrlw==
---
apiVersion: v1
kind: Secret
metadata:
name: pagureSecrets
type: Opaque
data:
api_token: Tmf9fic=
---
apiVersion: v1
kind: Secret
metadata:
name: smtpSecrets
type: Opaque
data:
password: orRn3V0Gwm==
---
apiVersion: v1
kind: Secret
metadata:
name: mqttSecrets
type: Opaque
data:
password: YWQ4QTlPO2FpCg==
ca_certs: PVdweTgzT3l5Cg==
certfile: M21hWF95eTRXCg==
keyfile: JnhlMElpNFVsCg==
---
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
merger:
count: 5
git_user_email: [email protected]
git_user_name: Example Zuul
executor:
count: 5
manage_ansible: false
web:
count: 1
status_url: https://zuul.example.org
fingergw:
count: 1
scheduler:
count: 1
connections:
gerrit:
driver: gerrit
server: gerrit.example.com
sshkey:
# If the key name in the secret matches the connection key name,
# it can be omitted.
secretName: gerritSecrets
password:
secretName: gerritSecrets
# If they do not match, the key must be specified.
key: http_password
user: zuul
baseurl: http://gerrit.example.com:8080
auth_type: basic
github:
driver: github
app_key:
secretName: githubSecrets
key: app_key
webhook_token:
secretName: githubSecrets
key: webhook_token
rate_limit_logging: false
app_id: 1234
pagure:
driver: pagure
api_token:
secretName: pagureSecrets
key: api_token
smtp:
driver: smtp
server: smtp.example.com
port: 25
default_from: [email protected]
default_to: [email protected]
user: zuul
password:
secretName: smtpSecrets
mqtt:
driver: mqtt
server: mqtt.example.com
user: zuul
password:
secretName: mqttSecrets
ca_certs:
secretName: mqttSecrets
certfile:
secretName: mqttSecrets
keyfile:
secretName: mqttSecrets
Executor job volume
-------------------
To manage the executor job volumes, the CR also accepts a list of volumes
to be bind mounted in the job bubblewrap contexts:
::
name: Text
context: <trusted | untrusted>
access: <ro | rw>
path: /path
volume: Kubernetes.Volume
For example, to expose a GCP authdaemon token, the Zuul CR can be defined as
::
apiVersion: zuul-ci.org/v1alpha1
kind: Zuul
spec:
...
jobVolumes:
- context: trusted
access: ro
path: /authdaemon/token
volume:
name: gcp-auth
hostPath:
path: /var/authdaemon/executor
type: DirectoryOrCreate
Which would result in a new executor mountpath along with this zuul.conf change:
::
trusted_ro_paths=/authdaemon/token
Logging
-------
By default, the Zuul Operator should perform no logging config which should
result in Zuul using its default of logging to ``INFO``. There should be a
simple config option to switch that to enable ``DEBUG`` logging. There should
also be an option to allow specifying a named ``ConfigMap`` with a logging
config. If a logging config ``ConfigMap`` is given, it should override the
``DEBUG`` flag.
Specifications
==============
This section contains specifications for future Zuul development. As
we work on implementing significant changes, these document our plans
for those changes and help us work on them collaboratively. Once a
specification is implemented, it should be removed. All relevant
details for implemented work must be reflected correctly in Zuul's
documentation instead.
.. warning:: These are not authoritative documentation. These
features are not currently available in Zuul. They may change
significantly before final implementation, or may never be fully
completed.
.. toctree::
:maxdepth: 1
circular-dependencies
community-matrix
enhanced-regional-executors
kubernetes-operator
nodepool-in-zuul
tenant-resource-quota
tenant-scoped-admin-web-API
tracing
zuul-runner
Use Matrix for Chat
===================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
We just switched IRC networks from Freenode to OFTC. This was
done quickly because remaining on Freenode was untenable due to recent
changes, and the OpenDev community had an existing plan prepared to
move to OFTC should such a situation arise.
Now that the immediate issue is addressed, we can take a considered
approach to evaluating whether an alternative to IRC such as Matrix
would be more suited.
Requirements
------------
Here are some concerns that affect us as a community:
* Some users like to stay connected all the time so they can read
messages from when they are away.
* Others are only interested in connecting when they have something to
say.
* On Freenode, nick registration was required to join #zuul in order
to mitigate spam. It is unclear whether the same will be true for
OFTC.
* Some users prefer simple text-based clients.
* Others prefer rich messaging and browser or mobile clients.
* We rely heavily on gerritbot.
* We use the logs recorded by eavesdrop from time to time.
* We benefit from the OpenDev statusbot.
* We collaborate with a large number of people in the OpenDev
community in various OFTC channels. We also collaborate with folks
in Ansible and other communities in libera.chat channels.
* Users must be able to access our chat using Free and Open-Source
Software.
* The software running the chat system itself should be Free and
Open-Source as well if possible. Both of these are natural
extensions of the Open Infrastructure community's Four Opens, as
well as OpenDev's mantra that Free Software needs Free Tools.
Benefits Offered by Matrix
--------------------------
* The Matrix architecture associates a user with a "homeserver", and
that homeserver is responsible for storing messages in all of the
rooms the user is present. This means that every Matrix user has
the ability to access messages received while their client is
disconnected. Users don't need to set up separate "bouncers".
* Authentication happens with the Matrix client and homeserver, rather
than through a separate nickserv registration system. This process
is familiar to all users of web services, so should reduce barriers
to access for new users.
* Matrix has a wide variety of clients available, including the
Element web/desktop/mobile clients, as well as the weechat-matrix
plugin. This addresses users of simple text clients and rich media.
* Bots are relatively simple to implement with Matrix.
* The Matrix community is dedicated to interoperability. That drives
their commitment to open standards, open source software, federation
using Matrix itself, and bridging to other communities which
themselves operate under open standards. That aligns very well with
our four-opens philosophy, and leads directly to the next point:
* Bridges exist to OFTC, libera.chat, and, at least for the moment,
Freenode. That means that any of our users who have invested in
establishing a presence in Matrix can relatively easily interact
with communities who call those other networks home.
* End-to-end encrypted channels for private chats. While clearly the
#zuul channel is our main concern, and it will be public and
unencrypted, the ability for our community members to have ad-hoc
chats about sensitive matters (such as questions which may relate to
security) is a benefit. If Matrix becomes more widely used such
that employees of companies feel secure having private chats in the
same platform as our public community interactions, we all benefit
from the increased availability and accessibility of people who no
longer need to split their attention between multiple platforms.
Reasons to Move
---------------
We could continue to call the #zuul channel on OFTC home, and
individual users could still use Matrix on their own to obtain most of
those benefits by joining the portal room on the OFTC matrix.org
bridge. The reasons to move to a native Matrix room are:
* Eliminate a potential failure point. If many/most of us are
connected via Matrix and the bridge, then either a Matrix or an OFTC
outage would affect us.
* Eliminate a source of spam. Spammers find IRC networks very easy to
attack. Matrix is not immune to this, but it is more difficult.
* Isolate ourselves from OFTC-related technology or policy changes.
For example, if we find we need to require registration to speak in
channel, that would take us back to the state where we have to teach
new users about nick registration.
* Elevating the baseline level of functionality expected from our chat
platform. By saying that our home is Matrix, we communicate to
users that the additional functionality offered by the platform is
an expected norm. Rather than tailoring our interactions to the
lowest-common-denominator of IRC, we indicate that the additional
features available in Matrix are welcomed.
* Provide a consistent and unconfusing message for new users. Rather
than saying "we're on OFTC, use Matrix to talk to us for a better
experience", we can say simply "use Matrix".
* Lead by example. Because of the recent fragmentation in the Free
and Open-Source software communities, Matrix is a natural way to
frictionlessly participate in a multitude of communities. Let's
show people how that can work.
Reasons to Stay
---------------
All of the work to move to OFTC has been done, and for the moment at
least, the OFTC matrix.org bridge is functioning well. Moving to a
native room will require some work.
Implementation Plan
-------------------
To move to a native Matrix room, we would do the following:
* Create a homeserver to host our room and bots. Technically, this is
not necessary, but having a homeserver allows us more control over
the branding, policy, and technology of our room. It means we are
isolated from policy decisions by the admins of matrix.org, and it
fully utilizes the federated nature of the technology.
We should ask the OpenDev collaboratory to host a homeserver for
this purpose. That could either be accomplished by running a
synapse server on a VM in OpenDev's infrastructure, or the
Foundation could subscribe to a hosted server run by Element.
At this stage, we would not necessarily host any user accounts on
the homeserver; it would only be used for hosting rooms and bot
accounts.
The homeserver would likely be for opendev.org; so our room would be
#zuul:opendev.org, and we might expect bot accounts like
@gerrit:opendev.org.
The specifics of this step are out of scope for this document. To
accomplish this, we will start an OpenDev spec to come to agreement
on the homeserver.
* Ensure that the OpenDev service bots upon which we rely (gerrit, and
status) support matrix. This is also under the domain of OpenDev;
but it is a pre-requisite for us to move.
We also rely somewhat on eavesdrop. Matrix does support searching,
but that doesn't cause it to be indexed by search engines, and
searching a decade worth of history may not work as well, so we
should also include eavesdrop in that list.
OpenDev also runs a meeting bot, but we haven't used it in years.
* Create the #zuul room.
* Create instructions to tell users how to join it. We will recommend
that if they do not already have a Matrix homeserver, they register
with matrix.org.
* Announce the move, and retire the OFTC channel.
Potential Future Enhancements
-----------------------------
Most of this is out of scope for the Zuul community, and instead
relates to OpenDev, but we should consider these possibilities when
weighing our decision.
It would be possible for OpenDev and/or the Foundation to host user
accounts on the homeserver. This might be more comfortable for new
users who are joining Matrix at the behest of our community.
If that happens, user accounts on the homeserver could be tied to a
future OpenDev single-sign-on system, meaning that registration could
become much simpler and be shared with all OpenDev services.
It's also possible for OpenDev and/or the Foundation to run multiple
homeservers in multiple locations in order to aid users who may live
in jurisdictions with policy or technical requirements that prohibit
their accessing the matrix.org homeserver.
All of these, if they come to pass, would be very far down the road,
but they do illustrate some of the additional flexibility our
communities could obtain by using Matrix.
Nodepool in Zuul
================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
The following specification describes a plan to move Nodepool's
functionality into Zuul and end development of Nodepool as a separate
application. This will allow for more node and image related features
as well as simpler maintenance and deployment.
Introduction
------------
Nodepool exists as a distinct application from Zuul largely due to
historical circumstances: it was originally a process for launching
nodes, attaching them to Jenkins, detaching them from Jenkins and
deleting them. Once Zuul grew its own execution engine, Nodepool
could have been adopted into Zuul at that point, but the existing
loose API meant it was easy to maintain them separately and combining
them wasn't particularly advantageous.
However, now we find ourselves with a very robust framework in Zuul
for dealing with ZooKeeper, multiple components, web services and REST
APIs. All of these are lagging behind in Nodepool, and it is time to
address that one way or another. We could of course upgrade
Nodepool's infrastructure to match Zuul's, or even separate out these
frameworks into third-party libraries. However, there are other
reasons to consider tighter coupling between Zuul and Nodepool, and
these tilt the scales in favor of moving Nodepool functionality into
Zuul.
Designing Nodepool as part of Zuul would allow for more features
related to Zuul's multi-tenancy. Zuul is quite good at
fault-tolerance as well as scaling, so designing Nodepool around that
could allow for better cooperation between node launchers. Finally,
as part of Zuul, Nodepool's image lifecycle can be more easily
integrated with Zuul-based workflow.
There are two Nodepool components: nodepool-builder and
nodepool-launcher. We will address the functionality of each in the
following sections on Image Management and Node Management.
This spec contemplates a new Zuul component to handle image and node
management: zuul-launcher. Much of the Nodepool configuration will
become Zuul configuration as well. That is detailed in its own
section, but for now, it's enough to know that the Zuul system as a
whole will know what images and node labels are present in the
configuration.
Image Management
----------------
Part of nodepool-builder's functionality is important to have as a
long-running daemon, and part of what it does would make more sense as
a Zuul job. By moving the actual image build into a Zuul job, we can
make the activity more visible to users of the system. It will be
easier for users to test changes to image builds (inasmuch as they can
propose a change and a check job can run on that change to see if the
image builds successfully). Build history and logs will be visible in
the usual way in the Zuul web interface.
A frequently requested feature is the ability to verify images before
putting them into service. This is not practical with the current
implementation of Nodepool because of the loose coupling with Zuul.
However, once we are able to include Zuul jobs in the workflow of
image builds, it is easier to incorporate Zuul jobs to validate those
images as well. This spec includes a mechanism for that.
The parts of nodepool-builder that make sense as a long-running
daemon are the parts dealing with image lifecycles. Uploading builds
to cloud providers, keeping track of image builds and uploads,
deciding when those images should enter or leave service, and deleting
them are all better done with state management and long-running
processes (we should know -- early versions of Nodepool attempted to
do all of that with Jenkins jobs with limited success).
The sections below describe how we will implement image management in
Zuul.
First, a reminder that using custom images is optional with Zuul.
Many Zuul systems will be able to operate using only stock cloud
provider images. One of the strengths of nodepool-builder is that it
can build an image for Zuul without relying on any particular cloud
provider images. A Zuul system whose operator wants to use custom
images will need to bootstrap that process, and under the proposed
system where images are built in Zuul jobs, that would need to be done
using a stock cloud image. In other words, to bootstrap a system such
as OpenDev from scratch, the operators would need to use a stock cloud
image to run the job to build the custom image. Once a custom image
is available, further image builds could be run on either the stock
cloud image or the custom image. That decision is left to the
operator and involves consideration of fault tolerance and disaster
recovery scenarios.
To build a custom image, an operator will define a fairly typical Zuul
job for each image they would like to produce. For example, a system
may have one job to build a debian-stable image, a second job for
debian-unstable, a third job for ubuntu-focal, a fourth job for
ubuntu-jammy. Zuul's job inheritance system could be very useful here
to deal with many variations of a similar process.
Currently nodepool-builder will build an image under three
circumstances: 1) the image (or the image in a particular format) is
missing; 2) a user has directly requested a build; 3) on an automatic
interval (typically daily). To map this into Zuul, we will use Zuul's
existing pipeline functionality, but we will add a new trigger for
case #1. Case #2 can be handled by a manual Zuul enqueue command, and
case #3 by a periodic pipeline trigger.
Since Zuul knows what images are configured and what their current
states are, it will be able to emit trigger events when it detects
that a new image (or image format) has been added to its
configuration. In these cases, the `zuul` driver in Zuul will enqueue
an `image-build` trigger event on startup or reconfiguration for every
missing image. The event will include the image name. Pipelines will
be configured to trigger on `image-build` events as well as on a timer
trigger.
Jobs will include an extra attribute to indicate they build a
particular image. This serves two purposes: first, in the case of an
`image-build` trigger event, it will act as a matcher so that only
jobs matching the image that needs building are run. Second, it will
allow Zuul to determine which formats are needed for that image (based
on which providers are configured to use it) and include that
information as job data.
The job will be responsible for building the image and uploading the
result to some storage system. The URLs for each image format built
should be returned to Zuul as artifacts.
Finally, the `zuul` driver reporter will accept parameters which will
tell it to search the result data for these artifact URLs and update
the internal image state accordingly.
An example configuration for a simple single-stage image build:
.. code-block:: yaml
- pipeline:
name: image
trigger:
zuul:
events:
- image-build
timer:
time: 0 0 * * *
success:
zuul:
image-built: true
image-validated: true
- job:
name: build-debian-unstable-image
image-build-name: debian-unstable
This job would run whenever Zuul determines it needs a new
debian-unstable image or daily at midnight. Once the job completes,
because of the ``image-built: true`` report, it will look for artifact
data like this:
.. code-block:: yaml
artifacts:
- name: raw image
url: https://storage.example.com/new_image.raw
metadata:
type: zuul_image
image_name: debian-unstable
format: raw
- name: qcow2 image
url: https://storage.example.com/new_image.qcow2
metadata:
type: zuul_image
image_name: debian-unstable
format: qcow2
Zuul will update internal records in ZooKeeper for the image to record
the storage URLs. The zuul-launcher process will then start
background processes to download the images from the storage system
and upload them to the configured providers (much as nodepool-builder
does now with files on disk). As a special case, it may detect that
the image files are stored in a location that a provider can access
directly for import and may be able to import directly from the
storage location rather than downloading locally first.
To handle image validation, a flag will be stored for each image
upload indicating whether it has been validated. The example above
specifies ``image-validated: true`` and therefore Zuul will put the
image into service as soon as all image uploads are complete.
However, if it were false, then Zuul would emit an `image-validate`
event after each upload is complete. A second pipeline can be
configured to perform image validation. It can run any number of
jobs, and since Zuul has complete knowledge of image states, it will
supply nodes using the new image upload (which is not yet in service
for normal jobs). An example of this might look like:
.. code-block:: yaml
- pipeline:
name: image-validate
trigger:
zuul:
events:
- image-validate
success:
zuul:
image-validated: true
- job:
name: validate-debian-unstable-image
image-build-name: debian-unstable
nodeset:
nodes:
- name: node
label: debian
The label should specify the same image that is being validated. Its
node request will be made with extra specifications so that it is
fulfilled with a node built from the image under test. This process
may repeat for each of the providers using that image (normal pipeline
queue deduplication rules may need a special case to allow this).
Once the validation jobs pass, the entry in ZooKeeper will be updated
and the image will go into regular service.
A more specific process definition follows:
After a buildset reports with ``image-built: true``, Zuul will scan
result data and for each artifact it finds, it will create an entry in
ZooKeeper at `/zuul/images/<image_name>/<sequence>`. Zuul will know
not to emit any more `image-build` events for that image at this
point.
For every provider using that image, Zuul will create an entry in
ZooKeeper at
`/zuul/image-uploads/<image_name>/<image_number>/provider/<provider_name>`.
It will set the remote image ID to null and the `image-validated` flag
to whatever was specified in the reporter.
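To make the layout concrete, here is a rough, non-authoritative sketch of
the kind of data those ZooKeeper records might hold; the field names are
illustrative assumptions rather than part of this spec.

.. code-block:: python

   # Illustrative sketch only; field names are assumptions, not a schema.
   image_record = {
       # Stored at /zuul/images/debian-unstable/0000000001
       "artifacts": [
           {"url": "https://storage.example.com/new_image.qcow2",
            "format": "qcow2"},
           {"url": "https://storage.example.com/new_image.raw",
            "format": "raw"},
       ],
   }

   upload_record = {
       # Stored at /zuul/image-uploads/debian-unstable/0000000001/provider/rax-dfw
       "external_id": None,   # remote image ID, null until uploaded
       "validated": False,    # the image-validated flag from the reporter
   }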
Whenever zuul-launcher observes a new `image-upload` record without an
ID, it will:
* Lock the whole image
* Lock each upload it can handle
* Unlock the image while retaining the upload locks
* Download the artifact (if needed) and upload the image to the provider
* If the upload requires validation, enqueue an `image-validate` zuul driver trigger event
* Unlock the upload
The locking sequence is so that a single launcher can perform multiple
uploads from a single artifact download if it has the opportunity.
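As a non-authoritative illustration of that sequence, a launcher's handling
of new upload records might look roughly like the following sketch; the
lock paths and helper functions are assumptions made for the example.

.. code-block:: python

   from kazoo.client import KazooClient


   def download_artifact(image_path):
       # Placeholder: fetch the image artifact from cloud storage.
       return b"image-data"


   def upload_to_provider(upload_path, artifact):
       # Placeholder: upload the artifact to the provider for this record.
       pass


   def process_image_uploads(zk: KazooClient, image_path, upload_paths):
       upload_locks = []
       image_lock = zk.Lock(image_path + "/lock")
       with image_lock:
           # While holding the image lock, take every upload we can handle.
           for path in upload_paths:
               lock = zk.Lock(path + "/lock")
               if lock.acquire(blocking=False):
                   upload_locks.append((path, lock))
       # The image lock is released here while the upload locks are retained,
       # so a single launcher can serve several uploads from one download.
       artifact = download_artifact(image_path)
       for path, lock in upload_locks:
           try:
               upload_to_provider(path, artifact)
               # If this upload requires validation, an `image-validate`
               # trigger event would be enqueued here (not shown).
           finally:
               lock.release()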
Once more than two builds of an image are in service, the oldest is
deleted. The image's ZooKeeper record is set to the `deleting` state.
Zuul-launcher will delete the uploads from the providers, and the `zuul`
driver will emit an `image-delete` event with item data for the image
artifact. This will trigger an image-delete job that can delete the
artifact from the cloud storage.
All of these pipeline definitions should typically be in a single
tenant (but need not be), but the images they build are potentially
available to each tenant that includes the image definition
configuration object (see the Configuration section below). Any repo
in a tenant with an image build pipeline will be able to cause images
to be built and uploaded to providers.
Snapshot Images
~~~~~~~~~~~~~~~
Nodepool does not currently support snapshot images, but the spec for
the current version of Nodepool does contemplate the possibility of a
snapshot based nodepool-builder process. Likewise, this spec does not
require us to support snapshot image builds, but in case we want to
add support in the future, we should have a plan for it.
The image build job in Zuul could, instead of running
diskimage-builder, act on the remote node to prepare it for a
snapshot. A special job attribute could indicate that it is a
snapshot image job, and instead of having the zuul-launcher component
delete the node at the end of the job, it could snapshot the node and
record that information in ZooKeeper. Unlike an image-build job, an
image-snapshot job would need to run in each provider (similar to how
it is proposed that an image-validate job will run in each provider).
An image-delete job would not be required.
Node Management
---------------
The techniques we have developed for cooperative processing in Zuul
can be applied to the node lifecycle. This is a good time to make a
significant change to the nodepool protocol. We can achieve several
long-standing goals:
* Scaling and fault-tolerance: rather than having a 1:N relationship
of provider:nodepool-launcher, we can have multiple zuul-launcher
processes, each of which is capable of handling any number of
providers.
* More intentional request fulfillment: almost no intelligence goes
into selecting which provider will fulfill a given node request; by
assigning providers intentionally, we can more efficiently utilize
providers.
* Fulfilling node requests from multiple providers: by designing
zuul-launcher for cooperative work, we can have nodesets that
request nodes which are fulfilled by different providers. Generally
we should favor the same provider for a set of nodes (since they may
need to communicate over a LAN), but if that is not feasible,
allowing multiple providers to fulfill a request will permit
nodesets with diverse node types (e.g., VM + static, or VM +
container).
Each zuul-launcher process will execute a number of processing loops
in series; first a global request processing loop, and then a
processing loop for each provider. Each one will involve obtaining a
ZooKeeper lock so that only one zuul-launcher process will perform
each function at a time.
Zuul-launcher will need to know about every connection in the system
so that it may have a full copy of the configuration, but operators
may wish to localize launchers to specific clouds. To support this,
zuul-launcher will take an optional command-line argument to indicate
on which connections it should operate.
Currently a node request as a whole may be declined by providers. We
will make that more granular and store information about each node in
the request (in other words, individual nodes may be declined by
providers).
All drivers for providers should implement the state machine
interface. Any state machine information currently stored in memory
in nodepool-launcher will need to move to ZooKeeper so that other
launchers can resume state machine processing.
The individual provider loop will:
* Lock a provider in ZooKeeper (`/zuul/provider/<name>`)
* Iterate over every node assigned to that provider in a `building` state
* Drive the state machine
* If success, update request
* If failure, determine if it's a temporary or permanent failure
and update the request accordingly
* If quota available, unpause provider (if paused)
The global queue process will:
* Lock the global queue
* Iterate over every pending node request, and every node within that request
* If all providers have failed the request, clear all temp failures
* If all providers have permanently failed the request, return error
* Identify providers capable of fulfilling the request
* Assign nodes to any provider with sufficient quota
* If no providers with sufficient quota, assign it to first (highest
priority) provider that can fulfill it later and pause that
provider
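As a rough, non-authoritative sketch of the provider-selection step at the
end of that loop (the ``Provider`` class and its fields are assumptions for
illustration):

.. code-block:: python

   from dataclasses import dataclass


   @dataclass
   class Provider:
       name: str
       priority: int          # lower value is higher priority
       available_quota: int   # remaining instances
       paused: bool = False


   def assign_node(label_providers, needed=1):
       """Pick a provider for one node, as described in the list above."""
       candidates = sorted(label_providers, key=lambda p: p.priority)
       for provider in candidates:
           if provider.available_quota >= needed:
               provider.available_quota -= needed
               return provider
       # No provider has quota right now: assign to the highest priority
       # capable provider and pause it until quota frees up.
       provider = candidates[0]
       provider.paused = True
       return provider


   providers = [Provider("rax-dfw", 1, 0), Provider("rax-ord", 2, 5)]
   print(assign_node(providers).name)   # rax-ord, since it has free quota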
Configuration
-------------
The configuration currently handled by Nodepool will be refactored and
added to Zuul's configuration syntax. It will be loaded directly from
git repos like most Zuul configuration, however it will be
non-speculative (like pipelines and semaphores -- changes must merge
before they take effect).
Information about connecting to a cloud will be added to ``zuul.conf``
as a ``connection`` entry. The rate limit setting will be moved to
the connection configuration. Providers will then reference these
connections by name.
Because providers and images reference global (i.e., outside tenant
scope) concepts, ZooKeeper paths for data related to those should
include the canonical name of the repo where these objects are
defined. For example, a `debian-unstable` image in the
`opendev/images` repo should be stored at
``/zuul/zuul-images/opendev.org%2fopendev%2fimages/``. This avoids
collisions if different tenants contain different image objects with
the same name.
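As a sketch of how such a path might be derived (the exact quoting scheme
here is an assumption; the spec only requires that the canonical project
name be encoded unambiguously):

.. code-block:: python

   import urllib.parse


   def image_path(canonical_project, image_name):
       # Encode the canonical project name so it is safe as a single
       # ZooKeeper path component.
       quoted = urllib.parse.quote(canonical_project, safe="").lower()
       return "/zuul/zuul-images/%s/%s" % (quoted, image_name)


   print(image_path("opendev.org/opendev/images", "debian-unstable"))
   # /zuul/zuul-images/opendev.org%2fopendev%2fimages/debian-unstable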
The actual Zuul config objects will be tenant scoped. Image
definitions which should be available to a tenant should be included
in that tenant's config. Again using the OpenDev example, the
hypothetical `opendev/images` repository should be included in every
OpenDev tenant so all of those images are available.
Within a tenant, image names must be unique (otherwise it is a tenant
configuration error, similar to a job name collision).
The diskimage-builder related configuration items will no longer be
necessary since they will be encoded in Zuul jobs. This will reduce
the complexity of the configuration significantly.
The provider configuration will change as we take the opportunity to
make it more "Zuul-like". Instead of a top-level dictionary, we will
use lists. We will standardize on attributes used across drivers
where possible, as well as attributes which may be located at
different levels of the configuration.
The goals of this reorganization are:
* Allow projects to manage their own image lifecycle (if permitted by
site administrators).
* Manage access control to labels, images and flavors via standard
Zuul mechanisms (whether an item appears within a tenant).
* Reduce repetition and boilerplate for systems with many clouds,
labels, or images.
The new configuration objects are:
Image
This represents any kind of image (A Zuul image built by a job
described above, or a cloud image). By using one object to
represent both, we open the possibility of having a label in one
provider use a cloud image and in another provider use a Zuul image
(because the label will reference the image by short-name which may
resolve to a different image object in different tenants). A given
image object will specify what type it is, and any relevant
information about it (such as the username to use, etc).
Flavor
This is a new abstraction layer to reference instance types across
different cloud providers. Much like labels today, these probably
won't have much information associated with them other than to
reserve a name for other objects to reference. For example, a site
could define a `small` and a `large` flavor. These would later be
mapped to specific instance types on clouds.
Label
Unlike the current Nodepool ``label`` definitions, these labels will
also specify the image and flavor to use. These reference the two
objects above, which means that labels themselves contain the
high-level definition of what will be provided (e.g., a `large
ubuntu` node) while the specific mapping of what `large` and
`ubuntu` mean are left to the more specific configuration levels.
Section
This looks a lot like the current ``provider`` configuration in
Nodepool (but also a little bit like a ``pool``). Several parts of
the Nodepool configuration (such as separating out availability
zones from providers into pools) were added as an afterthought, and
we can take the opportunity to address that here.
A ``section`` is part of a cloud. It might be a region (if a cloud
has regions). It might be one or more availability zones within a
region. A lot of the specifics about images, flavors, subnets,
etc., will be specified here. Because a cloud may have many
sections, we will implement inheritance among sections.
Provider
This is mostly a mapping of labels to sections and is similar to a
provider pool in the current Nodepool configuration. It exists as a
separate object so that site administrators can restrict ``section``
definitions to central repos and allow tenant administrators to
control their own image and labels by allowing certain projects to
define providers.
It mostly consists of a list of labels, but may also include images.
When launching a node, relevant attributes may come from several
sources (the image, flavor, label, section, or provider). Not all attributes
make sense in all locations, but where we can support them in multiple
locations, the order of application (later items override earlier
ones) will be:
* ``image`` stanza
* ``flavor`` stanza
* ``label`` stanza
* ``section`` stanza (top level)
* ``image`` within ``section``
* ``flavor`` within ``section``
* ``provider`` stanza (top level)
* ``label`` within ``provider``
This reflects that the configuration is built upwards from general and
simple objects toward more specific ones: image, flavor, label,
section, provider. Generally speaking, inherited scalar values will
override, dicts will merge, lists will concatenate.
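A minimal sketch of that layering, assuming each stanza is represented as a
simple dictionary (the names and values here are illustrative only):

.. code-block:: python

   def merge(base, override):
       # Scalars override, dicts merge, lists concatenate.
       result = dict(base)
       for key, value in override.items():
           old = result.get(key)
           if isinstance(old, dict) and isinstance(value, dict):
               result[key] = {**old, **value}
           elif isinstance(old, list) and isinstance(value, list):
               result[key] = old + value
           else:
               result[key] = value
       return result


   # Applied in the documented order: image, flavor, label, section,
   # section.image, section.flavor, provider, provider.label.
   layers = [
       {"tags": {"image-info": "a"}},                  # image stanza
       {"key-name": "default-key", "tags": {"x": 1}},  # section stanza
       {"key-name": "infra-root-keys-2020-05-13"},     # provider stanza
   ]
   attrs = {}
   for layer in layers:
       attrs = merge(attrs, layer)
   print(attrs)
   # {'tags': {'image-info': 'a', 'x': 1}, 'key-name': 'infra-root-keys-2020-05-13'}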
An example configuration follows. First, some configuration which may
appear in a central project and shared among multiple tenants:
.. code-block:: yaml
# Images, flavors, and labels are the building blocks of the
# configuration.
- image:
name: centos-7
type: zuul
# Any other image-related info such as:
# username: ...
# python-path: ...
# shell-type: ...
# A default that can be overridden by a provider:
# config-drive: true
- image:
name: ubuntu
type: cloud
- flavor:
name: large
- label:
name: centos-7
min-ready: 1
flavor: large
image: centos-7
- label:
name: ubuntu
flavor: small
image: ubuntu
# A section for each cloud+region+az
- section:
name: rax-base
abstract: true
connection: rackspace
boot-timeout: 120
launch-timeout: 600
key-name: infra-root-keys-2020-05-13
# The launcher will apply the minimum of the quota reported by the
# driver (if available) or the values here.
quota:
instances: 2000
subnet: some-subnet
tags:
section-info: foo
# We attach both kinds of images to providers in order to provide
# image-specific info (like config-drive) or username.
images:
- name: centos-7
config-drive: true
# This is a Zuul image
- name: ubuntu
# This is a cloud image, so the specific cloud image name is required
image-name: ibm-ubuntu-20-04-3-minimal-amd64-1
# Other information may be provided
# username ...
# python-path: ...
# shell-type: ...
flavors:
- name: small
cloud-flavor: "Performance 8G"
- name: large
cloud-flavor: "Performance 16G"
- section:
name: rax-dfw
parent: rax-base
region: 'DFW'
availability-zones: ["a", "b"]
# A provider to indicate what labels are available to a tenant from
# a section.
- provider:
name: rax-dfw-main
section: rax-dfw
labels:
- name: centos-7
- name: ubuntu
key-name: infra-root-keys-2020-05-13
tags:
provider-info: bar
The following configuration might appear in a repo that is only used
in a single tenant:
.. code-block:: yaml
- image:
name: devstack
type: zuul
- label:
name: devstack
- provider:
name: rax-dfw-devstack
section: rax-dfw
# Images can be attached to the provider just as they are to a section.
image:
- name: devstack
config-drive: true
labels:
- name: devstack
Here is a potential static node configuration:
.. code-block:: yaml
- label:
name: big-static-node
- section:
name: static-nodes
connection: null
nodes:
- name: static.example.com
labels:
- big-static-node
host-key: ...
username: zuul
- provider:
name: static-provider
section: static-nodes
labels:
- big-static-node
Each of the above stanzas may only appear once in a tenant for a
given name (like pipelines or semaphores, they are singleton objects).
If they appear in more than one branch of a project, the definitions
must be identical; otherwise, or if they appear in more than one repo,
the second definition is an error. These are meant to be used in
unbranched repos. Whatever tenants they appear in will be permitted
to access those respective resources.
The purpose of the ``provider`` stanza is to associate labels, images,
and sections. Much of the configuration related to launching an
instance (including the availability of zuul or cloud images) may be
supplied in the ``provider`` stanza and will apply to any labels
within. The ``section`` stanza also allows configuration of the same
information except for the labels themselves. The ``section``
supplies default values and the ``provider`` can override them or add
any missing values. Images are additive -- any images that appear in
a ``provider`` will augment those that appear in a ``section``.
The result is a modular scheme for configuration, where a single
``section`` instance can be used to set as much information as
possible that applies globally to a provider. A simple configuration
may then have a single ``provider`` instance to attach labels to that
section. A more complex installation may define a "standard" pool
that is present in every tenant, and then tenant-specific pools as
well. These pools will all attach to the same section.
References to sections, images and labels will be internally converted
to canonical repo names to avoid ambiguity. Under the current
Nodepool system, labels are truly a global object, but under this
proposal, a label short name in one tenant may be different than one
in another. Therefore the node request will internally specify the
canonical label name instead of the short name. Users will never use
canonical names, only short names.
For static nodes, there is some repetition to labels: first, labels
must be associated with the individual nodes defined on the section,
then the labels must appear again on a provider. This allows an
operator to define a collection of static nodes centrally on a
section, then include tenant-specific sets of labels in a provider.
For the simple case where all static node labels in a section should
be available in a provider, we could consider adding a flag to the
provider to allow that (e.g., ``include-all-node-labels: true``).
Static nodes themselves are configured on a section with a ``null``
connection (since there is no cloud provider associated with static
nodes). In this case, the additional ``nodes`` section attribute
becomes available.
Upgrade Process
---------------
Most users of diskimages will need to create new jobs to build these
images. This proposal also includes significant changes to the node
allocation system which come with operational risks.
To make the transition as minimally disruptive as possible, we will
support both systems in Zuul, and allow for selection of one system or
the other on a per-label and per-tenant basis.
By default, if a nodeset specifies a label that is not defined by a
``label`` object in the tenant, Zuul will use the old system and place
a ZooKeeper request in ``/nodepool``. If a matching ``label`` is
available in the tenant, The request will use the new system and be
sent to ``/zuul/node-requests``. Once a tenant has completely
converted, a configuration flag may be set in the tenant configuration
and that will allow Zuul to treat nodesets that reference unknown
labels as configuration errors. A later version of Zuul will remove
the backwards compatibility and make this the standard behavior.
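A minimal sketch of that routing decision follows; the tenant fields and
the feature-flag name are assumptions made for illustration.

.. code-block:: python

   from dataclasses import dataclass, field


   @dataclass
   class Tenant:
       labels: set = field(default_factory=set)   # new-style `label` objects
       new_labels_required: bool = False          # hypothetical feature flag


   def request_root(tenant, label):
       if label in tenant.labels:
           return "/zuul/node-requests"     # new system
       if tenant.new_labels_required:
           raise ValueError("Unknown label: %s" % label)
       return "/nodepool"                   # old system


   print(request_root(Tenant(labels={"ubuntu"}), "ubuntu"))  # /zuul/node-requests
   print(request_root(Tenant(), "ubuntu"))                   # /nodepool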
Because each of the systems will have unique metadata, they will not
recognize each other's nodes, and it will appear to each that another
system is using part of their quota. Nodepool is already designed to
handle this case (at least, handle it as well as possible).
Library Requirements
--------------------
The new zuul-launcher component will need most of Nodepool's current
dependencies, which will entail adding many third-party cloud provider
interfaces. As of writing, this uses another 420M of disk space.
Since our primary method of distribution at this point is container
images, if the additional space is a concern, we could restrict the
installation of these dependencies to only the zuul-launcher image.
Diskimage-Builder Testing
-------------------------
The diskimage-builder project team has come to rely on Nodepool in its
testing process. It uses Nodepool to upload images to a devstack
cloud, launch nodes from those instances, and verify that they
function. To aid in continuity of testing in the diskimage-builder
project, we will extract the OpenStack image upload and node launching
code into a simple Python script that can be used in diskimage-builder
test jobs in place of Nodepool.
Work Items
----------
* In existing Nodepool convert the following drivers to statemachine:
gce, kubernetes, openshift, openstack (openstack is the
only one likely to require substantial effort, the others should be
trivial)
* Replace Nodepool with an image upload script in diskimage-builder
test jobs
* Add roles to zuul-jobs to build images using diskimage-builder
* Implement node-related config items in Zuul config and Layout
* Create zuul-launcher executable/component
* Add image-name item data
* Add image-build-name attribute to jobs
* Including job matcher based on item image-name
* Include image format information based on global config
* Add zuul driver pipeline trigger/reporter
* Add image lifecycle manager to zuul-launcher
* Emit image-build events
* Emit image-validate events
* Emit image-delete events
* Add Nodepool driver code to Zuul
* Update zuul-launcher to perform image uploads and deletion
* Implement node launch global request handler
* Implement node launch provider handlers
* Update Zuul nodepool interface to handle both Nodepool and
zuul-launcher node request queues
* Add tenant feature flag to switch between them
* Release a minor version of Zuul with support for both
* Remove Nodepool support from Zuul
* Release a major version of Zuul with only zuul-launcher support
* Retire Nodepool
=========================
Resource Quota per Tenant
=========================
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
Problem Description
===================
Zuul is inherently built to be tenant scoped and can be operated as a shared CI
system for a large number of more or less independent projects. As such, one of
its goals is to provide each tenant a fair amount of resources.
If Zuul, and more specifically Nodepool, are pooling build nodes from shared
providers (e.g. a limited number of OpenStack clouds) the principle of a fair
resource share across tenants can hardly be met by the Nodepool side. In large
Zuul installations, it is not uncommon that some tenants request far more
resources and at a higher rate from the Nodepool providers than other tenants.
While Zuul's "fair scheduling" mechanism makes sure each queue item gets treated
justly, there is no mechanism to limit allocated resources on a per-tenant
level. This, however, would be useful in different ways.
For one, in a shared pool of computing resources, it can be necessary to
enforce resource budgets allocated to tenants. That is, a tenant shall only be
able to allocate resources within a defined and paid limit. This is not easily
possible at the moment as Nodepool is not inherently tenant-aware. While it can
limit the number of servers, CPU cores, and RAM allocated on a per-pool level,
this does not directly translate to Zuul tenants. Configuring a separate pool
per tenant would not only lead to much more complex Nodepool configurations,
but also induce performance penalties as each pool runs in its own Python
thread.
Also, in scenarios where Zuul and auxiliary services (e.g. GitHub or
Artifactory) are operated near or at their limits, the system can become
unstable. In such a situation, a common measure is to lower Nodepool's resource
quota to limit the number of concurrent builds and thereby reduce the load on
Zuul and other involved services. However, this can currently be done only on
a per-provider or per-pool level, most probably affecting all tenants. This
would contradict the principle of fair resource pooling as there might be less
eager tenants that do not, or rather insignificantly, contribute to the overall
high load. It would therefore be more advisable to limit only those tenants'
resources that induce the most load.
Therefore, it is suggested to implement a mechanism in Nodepool that allows
operators to define and enforce limits on currently allocated resources on a per-tenant
level. This specification describes how resource quota can be enforced in
Nodepool with minimal additional configuration and execution overhead and with
little to no impact on existing Zuul installations. A per-tenant resource limit
is then applied additionally to already existing pool-level limits and treated
globally across all providers.
Proposed Change
===============
The proposed change consists of several parts in both Zuul and Nodepool. As
Zuul is the only source of truth for tenants, it must pass the name of the
tenant with each NodeRequest to Nodepool. The Nodepool side must consider this
information and adhere to any resource limits configured for the corresponding
tenant. However, this shall be backwards compatible, i.e., if no tenant name is
passed with a NodeRequest, tenant quotas shall be ignored for this request.
Conversely, if no resource limit is configured for a tenant, the tenant on the
NodeRequest does not add any additional behaviour.
To keep a record of currently consumed resources globally, i.e., across all
providers, the number of CPU cores and main memory (RAM) of a Node shall be
stored with its representation in ZooKeeper by Nodepool. This allows for
a cheap and provider agnostic aggregation of the currently consumed resources
per tenant from any provider. The OpenStack driver already stores the resources
in terms of cores, ram, and instances per ``zk.Node`` in a separate property in
ZooKeeper. This is to be expanded to other drivers where applicable (cf.
"Implementation Caveats" below).
Make Nodepool Tenant Aware
--------------------------
1. Add ``tenant`` attribute to ``zk.NodeRequest`` (applies to Zuul and
Nodepool)
2. Add ``tenant`` attribute to ``zk.Node`` (applies to Nodepool)
Introduce Tenant Quotas in Nodepool
-----------------------------------
1. introduce new top-level config item ``tenant-resource-limits`` for Nodepool
config
.. code-block:: yaml
tenant-resource-limits:
- tenant-name: tenant1
max-servers: 10
max-cores: 200
max-ram: 800
- tenant-name: tenant2
max-servers: 100
max-cores: 1500
max-ram: 6000
2. for each node request that has the tenant attribute set and a corresponding
``tenant-resource-limits`` config exists
- get quota information from current active and planned nodes of same tenant
- if quota for current tenant would be exceeded
- defer node request
- do not pause the pool (as opposed to exceeded pool quota)
- leave the node request unfulfilled (REQUESTED state)
- return from handler for another iteration to fulfill request when tenant
quota allows eventually
- if quota for current tenant would not be exceeded
- proceed with normal process
3. for each node request that does not have the tenant attribute or a tenant
for which no ``tenant-resource-limits`` config exists
- do not calculate the per-tenant quota and proceed with normal process
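As a non-authoritative sketch of the decision flow above (the resource
record layout mirrors the ``resources`` fields discussed below; the helper
names and limit table are illustrative assumptions):

.. code-block:: python

   TENANT_LIMITS = {
       "tenant1": {"cores": 200, "ram": 800, "instances": 10},
   }


   def tenant_usage(nodes, tenant):
       used = {"cores": 0, "ram": 0, "instances": 0}
       for node in nodes:
           if node.get("tenant") != tenant:
               continue
           for key in used:
               used[key] += node.get("resources", {}).get(key, 0)
       return used


   def exceeds_tenant_quota(nodes, tenant, requested):
       limits = TENANT_LIMITS.get(tenant)
       if not tenant or not limits:
           # No tenant on the request, or no limit configured: skip the check.
           return False
       used = tenant_usage(nodes, tenant)
       return any(used[k] + requested.get(k, 0) > limits[k] for k in limits)


   nodes = [{"tenant": "tenant1",
             "resources": {"cores": 190, "ram": 600, "instances": 8}}]
   print(exceeds_tenant_quota(nodes, "tenant1",
                              {"cores": 16, "ram": 64, "instances": 1}))
   # True: the request would be deferred and left in the REQUESTED state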
Implementation Caveats
----------------------
This implementation ought to be driver agnostic and therefore should not be
implemented separately for each Nodepool driver. For the Kubernetes, OpenShift,
and Static drivers, however, it is not easily possible to find the current
allocated resources. The proposed change therefore does not currently apply to
these. The Kubernetes and OpenShift(Pods) drivers would need to enforce
resource request attributes on their labels which are optional at the moment
(cf. `Kubernetes Driver Doc`_). Another option would be to enforce resource
limits on a per Kubernetes namespace level. How such limits can be implemented
in this case needs to be addressed separately. Similarly, the AWS, Azure, and
GCE drivers do not fully implement quota information for their nodes. E.g. the
AWS driver only considers the number of servers, not the number of cores or
RAM. Therefore, nodes from these providers also cannot be fully taken into
account when calculating a global resource limit beyond the number of servers.
Implementing full quota support in those drivers is not within the scope of
this change. However, following this spec, implementing quota support there to
support a per-tenant limit would be straightforward. It just requires them to
set the corresponding ``zk.Node.resources`` attributes. As for now, only the
OpenStack driver exports resource information about its nodes to ZooKeeper, but
as other drivers get enhanced with this feature, they will inherently be
considered for such global limits as well.
In the `QuotaSupport`_ mixin class, we already query ZooKeeper for the used and
planned resources. Ideally, we can extend this method to also return the
resources currently allocated by each tenant without additional costs and
account for this additional quota information as we already do for provider and
pool quotas (cf. `SimpleTaskManagerHandler`_). However, calculation of
currently consumed resources by a provider is done only for nodes of the same
provider. This does not easily work for global limits as intended for tenant
quotas. Therefore, this information (``cores``, ``ram``, ``instances``) will be
stored in a generic way on ``zk.Node.resources`` objects for any provider to
evaluate these quotas upon an incoming node request.
.. _`Kubernetes Driver Doc`: https://zuul-ci.org/docs/nodepool/kubernetes.html#attr-providers.[kubernetes].pools.labels.cpu
.. _`QuotaSupport`: https://opendev.org/zuul/nodepool/src/branch/master/nodepool/driver/utils.py#L180
.. _`SimpleTaskManagerHandler`: https://opendev.org/zuul/nodepool/src/branch/master/nodepool/driver/simple.py#L218
===========================
Tenant-scoped admin web API
===========================
https://storyboard.openstack.org/#!/story/2001771
The aim of this spec is to extend the existing web API of Zuul to
privileged actions, and to scope these actions to tenants, projects and privileged users.
Problem Description
===================
Zuul 3 introduced tenant isolation, and most privileged actions, being scoped
to a specific tenant, reflect that change. However the only way to trigger
these actions is through the Zuul CLI, which assumes either access to the
environment of a Zuul component or to Zuul's configuration itself. This is a
problem as being allowed to perform privileged actions on a tenant or for a
specific project should not entail full access to Zuul's admin capabilities.
.. Likewise, Nodepool provides actions that could be scoped to a tenant:
* Ability to trigger an image build when the definition of an image used by
that tenant has changed
* Ability to delete nodesets that have been put on autohold (this is mitigated
by the max-hold-age setting in Nodepool, if set)
These actions can only be triggered through Nodepool's CLI, with the same
problems as Zuul. Another important blocker is that Nodepool has no notion of
tenancy as defined by Zuul.
Proposed Change
===============
Zuul will expose privileged actions through its web API. In order to do so, Zuul
needs to support user authentication. A JWT (JSON Web Token) will be used to carry
user information; from now on it will be called the **Authentication Token** for the
rest of this specification.
Zuul needs also to support authorization and access control. Zuul's configuration
will be modified to include access control rules.
A Zuul operator will also be able to generate an Authentication Token manually
for a user, and communicate the Authentication Token to said user. This Authentication
Token can optionally include authorization claims that override Zuul's authorization
configuration, so that an operator can provide privileges temporarily to a user.
By querying Zuul's web API with the Authentication Token set in an
"Authorization" header, the user can perform administration tasks.
Zuul will need to provide the following minimal new features:
* JWT validation
* Access control configuration
* administration web API
The manual generation of Authentication Tokens can also be used for testing
purposes or non-production environments.
JWT Validation
--------------
Expected Format
...............
Note that JWTs can be arbitrarily extended with custom claims, giving flexibility
in their contents. This also allows the format to be extended as needed for future
features.
In its minimal form, the Authentication Token's contents will have the following
format:
.. code-block:: javascript
{
'iss': 'jwt_provider',
'aud': 'my_zuul_deployment',
'exp': 1234567890,
'iat': 1234556780,
'sub': 'alice'
}
* **iss** is the issuer of the Authentication Token. This can be logged for
auditing purposes, and it can be used to filter Identity Providers.
* **aud**, as the intended audience, is the client id for the Zuul deployment in the
issuer.
* **exp** is the Authentication Token's expiry timestamp.
* **iat** is the Authentication Token's date of issuance timestamp.
* **sub** is the default, unique identifier of the user.
These are standard JWT claims and ensure that Zuul can consume JWTs issued
by external authentication systems as Authentication Tokens, assuming the claims
are set correctly.
Authentication Tokens lacking any of these claims will be rejected.
Authentication Tokens with an ``iss`` claim not matching the white list of
accepted issuers in Zuul's configuration will be rejected.
Authentication Tokens addressing a different audience than the expected one
for the specific issuer will be rejected.
Unsigned or incorrectly signed Authentication Tokens will be rejected.
Authentication Tokens with an expired timestamp will be rejected.
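A minimal sketch of these checks with the pyJWT library (the issuer
whitelist structure is an assumption made for illustration):

.. code-block:: python

   import jwt

   # Hypothetical whitelist of accepted issuers with per-issuer settings.
   ACCEPTED_ISSUERS = {
       "my_oidc_idp_id": {
           "key": "<public key or shared secret>",
           "algorithm": "RS256",
           "audience": "my_zuul_deployment",
       },
   }


   def validate(token):
       issuer = jwt.decode(token, options={"verify_signature": False})["iss"]
       conf = ACCEPTED_ISSUERS.get(issuer)
       if conf is None:
           raise jwt.InvalidIssuerError("Unknown issuer: %s" % issuer)
       # Raises on a bad signature, wrong audience or issuer, an expired
       # token, or any missing required claim.
       return jwt.decode(
           token,
           conf["key"],
           algorithms=[conf["algorithm"]],
           audience=conf["audience"],
           issuer=issuer,
           options={"require": ["iss", "aud", "exp", "iat", "sub"]},
       )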
Extra Authentication Claims
...........................
Some JWT Providers can issue extra claims about a user, like *preferred_username*
or *email*. Zuul will allow an operator to set such an extra claim as the default,
unique user identifier in place of *sub* if it is more convenient.
If the chosen claim is missing from the Authentication Token, it will be rejected.
Authorization Claims
....................
If the Authentication Token is issued manually by a Zuul Operator, it can include
extra claims extending Zuul's authorization rules for the Authentication Token's
bearer:
.. code-block:: javascript
{
'iss': 'zuul_operator',
'aud': 'zuul.openstack.org',
'exp': 1234567890,
'iat': 1234556780,
'sub': 'alice',
'zuul': {
'admin': ['tenantA', 'tenantB']
}
}
* **zuul** is a claim reserved for zuul-specific information about the user.
It is a dictionary, the only currently supported key is **admin**.
* **zuul.admin** is a list of tenants on which the user is allowed privileged
actions.
In the previous example, user **alice** can perform privileged actions
on every project of **tenantA** and **tenantB**. This is on top of alice's
default authorizations.
These are intended to be **whitelists**: if a tenant is unlisted the user is
assumed not to be allowed to perform a privileged action (unless the
authorization rules in effect for this deployment of Zuul allow it.)
Note that **iss** is set to ``zuul_operator``. This can be used to reject Authentication
Tokens with a ``zuul`` claim if they come from other issuers.
Access Control Configuration
----------------------------
The Zuul main.yaml configuration file will accept new **admin-rule** objects
describing access rules for privileged actions.
Authorization rules define conditions on the claims
in an Authentication Token; if these conditions are met the action is authorized.
In order to allow the parsing of claims with complex structures like dictionaries,
an XPath-like format will be supported.
Here is an example of how rules can be defined:
.. code-block:: yaml
- admin-rule:
name: affiliate_or_admin
conditions:
- resources_access.account.roles: "affiliate"
iss: external_institution
- resources_access.account.roles: "admin"
- admin-rule:
name: alice_or_bob
conditions:
- zuul_uid: alice
- zuul_uid: bob
* **name** is how the authorization rule will be referred to in Zuul's tenant
configuration.
* **conditions** is the list of conditions that define a rule. An Authentication
Token must match **at least one** of the conditions for the rule to apply. A
condition is a dictionary where keys are claims. **All** the associated values must
match the claims in the user's Authentication Token.
Zuul's authorization engine will adapt matching tests depending on the nature of
the claim in the Authentication Token, eg:
* if the claim is a JSON list, check that the condition value is in the claim
* if the claim is a string, check that the condition value is equal to the claim's value
The special ``zuul_uid`` claim refers to the ``uid_claim`` setting in an
authenticator's configuration, as will be explained below. By default it refers
to the ``sub`` claim of an Authentication Token.
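A non-authoritative sketch of that matching logic, using the jsonpath-rw
library mentioned in the Dependencies section (the rule representation is
an assumption):

.. code-block:: python

   from jsonpath_rw import parse


   def claim_matches(claims, path, expected):
       values = [match.value for match in parse(path).find(claims)]
       for value in values:
           if isinstance(value, list):
               if expected in value:        # list claim: membership test
                   return True
           elif value == expected:          # scalar claim: equality test
               return True
       return False


   def condition_matches(claims, condition):
       # Every key/value pair of a condition must match the claims.
       return all(claim_matches(claims, path, expected)
                  for path, expected in condition.items())


   claims = {"iss": "external_institution",
             "resources_access": {"account": {"roles": ["affiliate", "other_role"]}}}
   condition = {"resources_access.account.roles": "affiliate",
                "iss": "external_institution"}
   print(condition_matches(claims, condition))   # True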
This configuration file is completely optional: the ``zuul.admin`` claim
can instead be set in the Authentication Token to define tenants on which
privileged actions are allowed.
Under the above example, the following Authentication Token would match rules
``affiliate_or_admin`` and ``alice_or_bob``:
.. code-block:: javascript
{
'iss': 'external_institution',
'aud': 'my_zuul_deployment',
'exp': 1234567890,
'iat': 1234556780,
'sub': 'alice',
'resources_access': {
'account': {
'roles': ['affiliate', 'other_role']
}
},
}
And this Authentication Token would only match rule ``affiliate_or_admin``:
.. code-block:: javascript
{
'iss': 'some_hellish_dimension',
'aud': 'my_zuul_deployment',
'exp': 1234567890,
'sub': 'carol',
'iat': 1234556780,
'resources_access': {
'account': {
'roles': ['admin', 'other_role']
}
},
}
Privileged actions are tenant-scoped. Therefore the access control will be set
in tenants definitions, e.g:
.. code-block:: yaml
- tenant:
name: tenantA
admin_rules:
- an_authz_rule
- another_authz_rule
source:
gerrit:
untrusted-projects:
- org/project1:
- org/project2
- ...
- tenant:
name: tenantB
admin_rules:
- yet_another_authz_rule
source:
gerrit:
untrusted-projects:
- org/project1
- org/project3
- ...
An action on the ``tenantA`` tenant will be allowed if ``an_authz_rule`` OR
``another_authz_rule`` is matched.
An action on the ``tenantB`` tenant will be authorized if ``yet_another_authz_rule``
is matched.
Administration Web API
----------------------
Unless specified, all the following endpoints require the presence of the ``Authorization``
header in the HTTP query.
Unless specified, all calls to the endpoints return with HTTP status code 201 if
successful, 401 if unauthenticated, 403 if the user is not allowed to perform the
action, and 400 with a JSON error description otherwise.
In case of a 401 code, an additional ``WWW-Authenticate`` header is emitted, for example::
WWW-Authenticate: Bearer realm="zuul.openstack.org"
error="invalid_token"
error_description="Token expired"
Zuul's web API will be extended to provide the following endpoints:
POST /api/tenant/{tenant}/project/{project}/enqueue
...................................................
This call allows a user to re-enqueue a buildset, like the *enqueue* or
*enqueue-ref* subcommands of Zuul's CLI.
To trigger the re-enqueue of a change, the following JSON body must be sent in
the query:
.. code-block:: javascript
{"trigger": <Zuul trigger>,
"change": <changeID>,
"pipeline": <pipeline>}
To trigger the re-enqueue of a ref, the following JSON body must be sent in
the query:
.. code-block:: javascript
{"trigger": <Zuul trigger>,
"ref": <ref>,
"oldrev": <oldrev>,
"newrev": <newrev>,
"pipeline": <pipeline>}
POST /api/tenant/{tenant}/project/{project}/dequeue
...................................................
This call allows a user to dequeue a buildset, like the *dequeue* subcommand of
Zuul's CLI.
To dequeue a change, the following JSON body must be sent in the query:
.. code-block:: javascript
{"change": <changeID>,
"pipeline": <pipeline>}
To dequeue a ref, the following JSON body must be sent in
the query:
.. code-block:: javascript
{"ref": <ref>,
"pipeline": <pipeline>}
POST /api/tenant/{tenant}/project/{project}/autohold
..............................................................
This call allows a user to automatically put a node set on hold in case of
a build failure on the chosen job, like the *autohold* subcommand of Zuul's
CLI.
Any of the following JSON bodies must be sent in the query:
.. code-block:: javascript
{"change": <changeID>,
"reason": <reason>,
"count": <count>,
"node_hold_expiration": <expiry>,
"job": <job>}
or
.. code-block:: javascript
{"ref": <ref>,
"reason": <reason>,
"count": <count>,
"node_hold_expiration": <expiry>,
"job": <job>}
GET /api/user/authorizations
.........................................
This call returns the list of tenants on which the authenticated user can
perform privileged actions.
This endpoint can be consumed by web clients in order to know which actions to display
according to the user's authorizations, either from Zuul's configuration or
from the valid Authentication Token's ``zuul.admin`` claim if present.
The return value is similar in form to the `zuul.admin` claim:
.. code-block:: javascript
{
'zuul': {
'admin': ['tenantA', 'tenantB']
}
}
The call needs authentication and returns with HTTP code 200, or 401 if no valid
Authentication Token is passed in the request's headers. If no rule applies to
the user, the return value is
.. code-block:: javascript
{
'zuul': {
'admin': []
}
}
Logging
.......
Zuul will log an event when a user presents an Authentication Token with a
``zuul.admin`` claim, and whether the authorization override is granted or denied:
.. code-block:: bash
Issuer %{iss}s attempt to override user %{sub}s admin rules granted|denied
At DEBUG level the log entry will also contain the ``zuul.admin`` claim.
Zuul will log an event when a user presents a valid Authentication Token to
perform a privileged action:
.. code-block:: bash
User %{sub}s authenticated from %{iss}s requesting %{action}s on %{tenant}s/%{project}s
At DEBUG level the log entry will also contain the JSON body passed to the query.
The events will be logged at zuul.web's level but a new handler focused on auditing
could also be created.
Zuul Client CLI and Admin Web API
.................................
The CLI will be modified to call the REST API instead of using a Gearman server
if the CLI's configuration file is lacking a ``[gearman]`` section but has a
``[web]`` section.
In that case the CLI will take the --auth-token argument on
the ``autohold``, ``enqueue``, ``enqueue-ref`` and ``dequeue`` commands. The
Authentication Token will be used to query the web API to execute these
commands; allowing non-privileged users to use the CLI remotely.
.. code-block:: bash
$ zuul --auth-token AaAa.... autohold --tenant openstack --project example_project --job example_job --reason "reason text" --count 1
Connecting to https://zuul.openstack.org...
<usual autohold output>
JWT Generation by Zuul
-----------------------
Client CLI
..........
A new command will be added to the Zuul Client CLI to allow an operator to generate
an Authentication Token for a third party. It will return the contents of the
``Authorization`` header as it should be set when querying the admin web API.
.. code-block:: bash
$ zuul create-auth-token --auth-config zuul-operator --user alice --tenant tenantA --expires-in 1800
bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwOi8vbWFuYWdlc2Yuc2ZyZG90ZXN0aW5zdGFuY2Uub3JnIiwienV1bC50ZW5hbnRzIjp7ImxvY2FsIjoiKiJ9LCJleHAiOjE1Mzc0MTcxOTguMzc3NTQ0fQ.DLbKx1J84wV4Vm7sv3zw9Bw9-WuIka7WkPQxGDAHz7s
The ``auth-config`` argument refers to the authenticator configuration to use
(see configuration changes below). The configuration must mention the secret
to use to sign the Token.
This way of generating Authentication Tokens is meant for testing
purposes only and should not be used in production, where the use of an
external Identity Provider is preferred.
Configuration Changes
.....................
JWT creation and validation require a secret and an algorithm. While several algorithms are
supported by the pyJWT library, using ``RS256`` offers asymmetric signing,
which allows the public key to be used in untrusted contexts like javascript
code running browser-side. Therefore this should be the preferred algorithm for
issuers. Zuul will also support ``HS256`` as the most widely used algorithm.
Some identity providers use key sets (also known as **JWKS**), therefore the key to
use when verifying the Authentication Token's signatures cannot be known in advance.
Zuul must support the ``RS256`` algorithm with JWKS as well.
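As a minimal sketch, assuming a pyJWT version that provides ``PyJWKClient``,
JWKS-based validation might look like this (the URLs match the example
configuration below):

.. code-block:: python

   import jwt


   def validate_with_jwks(token):
       # Fetch the signing key matching the token's "kid" header from the
       # provider's published key set.
       jwks_client = jwt.PyJWKClient("https://www.googleapis.com/oauth2/v3/certs")
       signing_key = jwks_client.get_signing_key_from_jwt(token)
       return jwt.decode(
           token,
           signing_key.key,
           algorithms=["RS256"],
           audience="XXX.apps.googleusercontent.com",
           issuer="https://accounts.google.com",
       )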
Here is an example defining the three supported types of authenticators:
.. code-block:: ini
[web]
listen_address=127.0.0.1
port=9000
static_cache_expiry=0
status_url=https://zuul.example.com/status
# symmetrical encryption
[auth "zuul_operator"]
driver=HS256
# symmetrical encryption only needs a shared secret
secret=exampleSecret
# accept "zuul.actions" claim in Authentication Token
allow_authz_override=true
# what the "aud" claim must be in Authentication Token
client_id=zuul.openstack.org
# what the "iss" claim must be in Authentication Token
issuer_id=zuul_operator
# the claim to use as the unique user identifier, defaults to "sub"
uid_claim=sub
# Auth realm, used in 401 error messages
realm=openstack
# (optional) Ensure a Token cannot be valid for longer than this amount of time, in seconds
max_validity_time = 1800000
# (optional) Account for skew between clocks, in seconds
skew = 3
# asymmetrical encryption
[auth "my_oidc_idp"]
driver=RS256
public_key=/path/to/key.pub
# optional, needed only if Authentication Token must be generated manually as well
private_key=/path/to/key
# if not explicitly set, allow_authz_override defaults to False
# what the "aud" claim must be in Authentication Token
client_id=my_zuul_deployment_id
# what the "iss" claim must be in Authentication Token
issuer_id=my_oidc_idp_id
# Auth realm, used in 401 error messages
realm=openstack
# (optional) Ensure a Token cannot be valid for longer than this amount of time, in seconds
max_validity_time = 1800000
# (optional) Account for skew between clocks, in seconds
skew = 3
# asymmetrical encryption using JWKS for validation
# The signing secret being known to the Identity Provider only, this
# authenticator cannot be used to manually issue Tokens with the CLI
[auth google_oauth_playground]
driver=RS256withJWKS
# URL of the JWKS; usually found in the .well-known config of the Identity Provider
keys_url=https://www.googleapis.com/oauth2/v3/certs
# what the "aud" claim must be in Authentication Token
client_id=XXX.apps.googleusercontent.com
# what the "iss" claim must be in Authentication Token
issuer_id=https://accounts.google.com
uid_claim=name
# Auth realm, used in 401 error messages
realm=openstack
# (optional) Account for skew between clocks, in seconds
skew = 3
Implementation
==============
Assignee(s)
-----------
Primary assignee:
mhu
.. feel free to add yourself as an assignee, the more eyes/help the better
Gerrit Topic
------------
Use Gerrit topic "zuul_admin_web" for all patches related to this spec.
.. code-block:: bash
git-review -t zuul_admin_web
Work Items
----------
Due to its complexity the spec should be implemented in smaller "chunks":
* https://review.openstack.org/576907 - Add admin endpoints, support for JWT
providers declaration in the configuration, JWT validation mechanism
* https://review.openstack.org/636197 - Allow Auth Token generation from
Zuul's CLI
* https://review.openstack.org/636315 - Allow users to use the REST API from
the CLI (instead of Gearman), with a bearer token
* https://review.openstack.org/#/c/639855 - Authorization configuration objects declaration and validation
* https://review.openstack.org/640884 - Authorization engine
* https://review.openstack.org/641099 - REST API: add /api/user/authorizations route
Documentation
-------------
* The changes in the configuration will need to be documented:
* configuring authenticators in zuul.conf, supported algorithms and their
specific configuration options
* creating authorization rules
* The additions to the web API need to be documented.
* The additions to the Zuul Client CLI need to be documented.
* The potential impacts of exposing administration tasks in terms of build results
or resources management need to be clearly documented for operators (see below).
Security
--------
Anybody with a valid Authentication Token can perform the administration tasks exposed
through the Web API. Revoking JWTs is not trivial, and is not in the scope of this spec.

As a mitigation, Authentication Tokens should be generated with a short time to
live, such as 30 minutes or less. This is especially important if the Authentication
Token overrides predefined authorizations with a ``zuul.admin`` claim. A short
time to live could be the default when generating Tokens with the CLI; for Tokens
issued by other, external issuers it will depend on their configuration. If using
the ``zuul.admin`` claim, the Authentication Token should also be generated with
as narrow a scope as possible (one tenant only) to reduce the attack surface should
the Authentication Token be compromised.
Exposing administration tasks can impact build results (dequeuing buildsets),
and can pose resource problems with Nodepool if the ``autohold`` feature
is abused, leading to a significant number of nodes remaining in "hold" state for
extended periods of time. Such power should be handed over responsibly.
These security considerations concern operators and the way they handle this
feature, and do not impact development. They do, however, need to be clearly documented,
as operators need to be aware of the potential side effects of delegating privileges
to other users.
Testing
-------
* Unit testing of the new web endpoints will be needed.
* Validation of the new configuration parameters will be needed.
Follow-up work
--------------
The following items fall outside of the scope of this spec but are logical features
to implement once the tenant-scoped admin REST API gets finalized:
* Web UI: log-in, log-out and token refresh support with an external Identity Provider
* Web UI: dequeue button near a job's status on the status page, if the authenticated
user has sufficient authorization
* autohold button near a job's build result on the builds page, if the authenticated
user has sufficient authorization
* reenqueue button near a buildset on a buildsets page, if the authenticated user
has sufficient authorization
Dependencies
============
* This implementation will use an existing dependency to **pyJWT** in Zuul.
* A new dependency to **jsonpath-rw** will be added to support XPath-like parsing
of complex claims.
Tracing
=======
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
It can be difficult for a user to understand what steps were involved
between a trigger event (such as a patchset upload or recheck comment)
and a buildset report. If it took an unusually long time it can be
difficult to determine why. At present, an operator would need to
examine logs to determine what steps were involved and the sources of
any potential delays. Even experienced operators and developers can
take quite some time to first collect and then analyze logs to answer
these questions.
Sometimes these answers may point to routine system operation (such as
a delay caused by many gate resets, or preparing a large number of
repositories). Other times they may point to deficiencies in the
system (insufficient mergers) or bugs in the code.
Being able to visualize the activities of a Zuul system can help
operators (and potentially users) triage and diagnose issues more
quickly and accurately. Even if examining logs is ultimately required
in order to fully diagnose an issue, being able to narrow down the
scope using analysis tools can greatly simplify the process.
Proposed Solution
-----------------
Implementing distributed tracing in Zuul can help improve the
observability of the system and aid operators and potentially users in
understanding the sequence of events.
By exporting information about the processing Zuul performs using the
OpenTelemetry API, information about Zuul operations can be collected
in any of several tools for analysis.
OpenTelemetry is an Open Source protocol for exchanging observability
data, an SDK implementing that protocol, as well as an implementation
of a collector for distributing information to multiple backends.
It supports three kinds of observability data: `traces`, `metrics`,
and `logs`. Since Zuul already has support for metrics and logs, this
specification proposes that we use only the support in OpenTelemetry
for `traces`.
Usage Scenarios
~~~~~~~~~~~~~~~
Usage of OpenTelemetry should be entirely optional and supplementary
for any Zuul deployment. Log messages alone should continue to be
sufficient to analyze any potential problem.
Should a deployer wish to use OpenTelemetry tracing data, a very
simple deployment for smaller sites may be constructed by running only
Jaeger. Jaeger is a service that can receive, store, and display
tracing information. The project distributes an all-in-one container
image which can store data in local filesystem storage.
https://www.jaegertracing.io/
Larger sites may wish to run multiple collectors and feed data to
larger, distributed storage backends (such as Cassandra,
Elasticsearch, etc).
Suitability to Zuul
~~~~~~~~~~~~~~~~~~~
OpenTelemetry tracing, at a high level, is designed to record
information about events, their timing, and their relation to other
events. At first this seems like a natural fit for Zuul, which reacts
to events, processes events, and generates more events. However,
OpenTelemetry's bias toward small and simple web applications is
evident throughout its documentation and the SDK implementation.
Traces give us the big picture of what happens when a request is
made by user or an application.
Zuul is not driven by user or application requests, and a system
designed to record several millisecond-long events which make up the
internal response to a user request of a web app is not necessarily
the obvious right choice for recording sequences and combinations of
events which frequently take hours (and sometimes days) to play out
across multiple systems.
Fortunately, the concepts and protocol implementation of OpenTelemetry
are sufficiently well-designed for the general case to be able to
accommodate a system like Zuul, even if the SDK makes incompatible
assumptions that make integration difficult. There are some
challenges to implementation, but because the concepts appear to be
well matched, we should proceed with using the OpenTelemetry protocol
and SDK.
Spans
~~~~~
The key tracing concepts in OpenTelemetry are `traces` and `spans`.
From a data model perspective, the unit of data storage is a `span`.
A trace itself is really just a unique ID that is common to multiple
spans.
Spans can relate to other spans as either children or links. A trace
is generally considered to have a single 'root' span, and within the
time period represented by that span, it may have any number of child
spans (which may further have their own child spans).
OpenTelemetry anticipates that a span on one system may spawn a child
span on another system and includes facilities for transferring enough
information about the parent span to a child system that the child
system alone can emit traces for its span and any children that it
spawns in turn.
For a concrete example in Zuul, we might have a Zuul scheduler start a
span for a buildset, and then a merger might emit a child span for
performing the initial merge, and an executor might emit a child span
for executing a build.
Spans can relate to other spans (including spans in other traces), so
sequences of events can be chained together without necessitating that
they all be part of the same span or trace.
Because Zuul processes series of events which may stretch for long
periods of time, we should specify what events and actions should
correspond to spans and traces.  Spans can have arbitrary metadata
associated with them, so we will be able to search by event or job
IDs.
The following sections describe traces and their child spans.
Event Ingestion
+++++++++++++++
A trace will begin when Zuul receives an event and end when that event
has been enqueued into scheduler queues (or discarded). A driver
completing processing of an event is a definitive point in time so it
is easy to know when to close the root span for that event's trace
(whereas if we kept the trace open to include scheduler processing, we
would need to know when the last trigger event spawned by the
connection event was complete).
This may include processing in internal queues by a given driver, and
these processing steps/queues should appear as their own child spans.
The spans should include event IDs (and potentially other information
about the event such as change or pull request numbers) as metadata.
Tenant Event Processing
+++++++++++++++++++++++
A trace will begin when a scheduler begins processing a tenant event
and end when it has forwarded the event to all pipelines within a
tenant. It will link to the event ingestion trace as a follow-on
span.
Queue Item
++++++++++
A trace will begin when an item is enqueued and end when it is
dequeued. This will be quite a long trace (hours or days). It is
expected to be the primary benefit of this telemetry effort as it will
show the entire lifetime of a queue item. It will link to the tenant
event processing trace as a follow-on span.
Within the root span, there will be a span for each buildset (so that
if a gate reset happens and a new buildset is created, users will see
a series of buildset spans). Within a buildset, there will be spans
for all of the major processing steps, such as merge operations,
layout calculation, freezing the job graph, and freezing jobs.  Each
build will also merit a span (retried builds will get their own spans
as well), and within a job span, there will be child spans for git
repo prep, job setup, individual playbooks, and cleanup.
SDK Challenges
~~~~~~~~~~~~~~
As a high-level concept, the idea of spans for each of these
operations makes sense. In practice, the SDK makes implementation
challenging.
The OpenTelemetry SDK makes no provision for beginning a span on one
system and ending it on another, so the fact that one Zuul scheduler
might start a buildset span while another ends it is problematic.
Fortunately, the OpenTelemetry API only reports spans when they end,
not when they start. This means that we don't need to coordinate a
"start" API call on one scheduler with an "end" API call on another.
We can simply emit the trace with its root span at the end. However,
any child spans emitted during that time need to know the trace ID
they should use, which means that we at least need to store a trace ID
and start timestamp on our starting scheduler for use by any child
spans as well as the "end span" API call.
The SDK does not support creating a span with a specific trace ID or
start timestamp (most timestamps are automatic), but it has
well-defined interfaces for spans and we can subclass the
implementation to allow us to specify trace IDs and timestamps. With
this approach, we can "virtually" start a span on one host, store its
information in ZooKeeper with whatever long-lived object it is
associated with (such as a QueueItem) and then make it concrete on
another host when we end it.
Alternatives
++++++++++++
This section describes some alternative ideas for dealing with the
SDK's mismatch with Zuul concepts as well as why they weren't
selected.
* Multiple root spans with the same trace ID
Jaeger handles this relatively well, and the timeline view appears
as expected (multiple events with whitespace between them). The
graph view in Jaeger may have some trouble displaying this.
It is not clear that OpenTelemetry anticipates having multiple
"root" spans, so it may be best to avoid this in order to avoid
potential problems with other tools.
* Child spans without a parent
If we emit spans that specify a parent which does not exist, Jaeger
will display these traces but show a warning that the parent is
invalid. This may occur naturally while the system is operating
(builds complete while a buildset is running), but should be
eventually corrected once an item is dequeued. In case of a serious
error, we may never close a parent span, which would cause this to
persist. We should accept that this may happen, but try to avoid it
happening intentionally.
Links
~~~~~
Links between spans are fairly primitive in Jaeger. While the
OpenTelemetry API includes attributes for links (so that when we link
a queue item to an event, we could specify that it was a forwarded
event), Jaeger does not store or render them. Instead, we are only
left with a reference to a ``< span in another trace >`` with a
reference type of ``FOLLOWS_FROM``. Clicking on that link will
immediately navigate to the other trace where metadata about the trace
will be visible, but before clicking on it, users will have little
idea of what awaits on the other side.
For this reason, we should use span links sparingly so that when they
are encountered, users are likely to intuit what they are for and are
not overwhelmed by multiple indistinguishable links.
Events and Exceptions
~~~~~~~~~~~~~~~~~~~~~
OpenTelemetry allows events to be added to spans. Events have their
own timestamp and attributes. These can be used to add additional
context to spans (representing single points in time rather than
events with duration that should be child spans). Examples might
include receiving a request to cancel a job or dequeue an item.
Events should not be used as an alternative to logs, nor should all
log messages be copied as events. Events should be used sparingly to
avoid overwhelming the tracing storage with data and the user with
information.
Exceptions may also be included in spans. This happens automatically
and by default when using the context managers supplied by the SDK.
Because many spans in Zuul will be unable to use the SDK context
managers and any exception information would need to be explicitly
handled and stored in ZooKeeper, we will disable inclusion of
exception information in spans. This will provide a more consistent
experience (so that users don't see the absence of an exception in
tracing information to indicate the absence of an error in logs) and
reduce the cost of supporting traces (extra storage in ZooKeeper and
in the telemetry storage).
If we decide that exception information is worth including in the
future, this decision will be easy to revisit and reverse.
Sensitive Information
~~~~~~~~~~~~~~~~~~~~~
No sensitive information (secrets, passwords, job variables, etc)
should be included in tracing output. All output should be suitable
for an audience of Zuul users (that is, if someone has access to the
Zuul dashboard, then tracing data should not have any more sensitive
information than they already have access to). For public-facing Zuul
systems (such as OpenDev), the information should be suitable for
public use.
Protobuf and gRPC
~~~~~~~~~~~~~~~~~
The most efficient and straightforward method of transmitting data
from Zuul to a collector (including Jaeger) is using OTLP with gRPC
(OpenTelemetry Protocol + gRPC Remote Procedure Calls). Because
Protobuf applications include automatically generated code, we may
encounter the occasional version inconsistency. We may need to
navigate package requirements more than normal due to this (especially
if we have multiple packages that depend on protobuf).
For a contemporary example, the OpenTelemetry project is in the
process of pinning to an older version of protobuf:
https://github.com/open-telemetry/opentelemetry-python/issues/2717
There is an HTTP+JSON exporter as well, so in the case that something
goes very wrong with protobuf+gRPC, that may be available as a fallback.
Work Items
----------
* Add OpenTelemetry SDK and support for configuring an exporter to
zuul.conf
* Implement SDK subclasses to support opening and closing spans on
different hosts
* Instrument event processing in each driver
* Instrument event processing in scheduler
* Instrument queue items and related spans
* Document a simple Jaeger setup as a quickstart add-on (similar to
authz)
* Optional: work with OpenDev to run a public Jaeger server for
OpenDev
The last item is not required for this specification (and not our
choice as Zuul developers to make) but it would be nice if there were
one available so that all Zuul users and developers have a reference
implementation available for community collaboration.
Zuul Runner
===========
.. warning:: This is not authoritative documentation. These features
are not currently available in Zuul. They may change significantly
before final implementation, or may never be fully completed.
While Zuul can be deployed to reproduce a job locally, it
is a complex system to set up. Since Zuul jobs are written in
Ansible, we shouldn't have to set up ZooKeeper, Nodepool and a Zuul
service just to run a job locally.

To that end, the Zuul Project should create a command line utility
to run a job locally using direct ansible-playbook command execution.
The scope includes two use cases:
* Running a local build of a job that has already run, for example to
  recreate a build that failed in the gate, by using either a
  `zuul-info/inventory.yaml` file or the `--change-url` command line
  argument.
* Being able to run any job from any Zuul instance, tenant, project
  or pipeline, regardless of whether it has run before or not.
Zuul Job Execution Context
--------------------------
One of the key parts of making the Zuul Runner command line utility
is to reproduce the Zuul service environment as closely as possible.
A Zuul job requires:
- Test resources
- Copies of the required projects
- Ansible configuration
- Decrypted copies of the secrets
Test Resources
~~~~~~~~~~~~~~
The Zuul Runner shall require the user to provide test resources
as an Ansible inventory, similar to what Nodepool provides to the
Zuul Executor. The Runner would enrich the inventory with the Zuul
job variables.
For example, if a job needs two nodes, then the user provides
a resource file like this:
.. code-block:: yaml
all:
hosts:
controller:
ansible_host: ip-node-1
ansible_user: user-node-1
worker:
ansible_host: ip-node-2
ansible_user: user-node-2
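
The spec does not fix the exact enrichment format; as a rough sketch,
assuming the standard Zuul job variables are injected under a ``zuul``
key in the inventory ``vars`` (the values shown here are purely
illustrative):

.. code-block:: yaml

   all:
     hosts:
       controller:
         ansible_host: ip-node-1
         ansible_user: user-node-1
       worker:
         ansible_host: ip-node-2
         ansible_user: user-node-2
     vars:
       zuul:
         tenant: example-tenant
         job: tempest-full-py3
         project:
           name: openstack/nova
         branch: master
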
Required Projects
~~~~~~~~~~~~~~~~~
The Zuul Runner shall query an existing Zuul API to get the list
of projects required to run a job. This is implemented as part of
the `topic:freeze_job` changes to expose the executor gearman parameters.
The CLI would then perform the executor service task of cloning and
merging the required projects locally.
Ansible Configuration
~~~~~~~~~~~~~~~~~~~~~
The CLI would also perform the executor service tasks to setup the
execution context.
Playbooks
~~~~~~~~~
In some cases, running all of the job's playbooks is not desirable;
in this situation the CLI provides a way to select and filter out
unneeded playbooks. For example, the user would run
``zuul-runner --list-playbooks`` and it would print out:
.. code-block:: console
0: opendev.org/base-jobs/playbooks/pre.yaml
...
10: opendev.org/base-jobs/playbooks/post.yaml
To avoid running playbook 10, the user would use one of:
* "--no-playbook 10"
* "--no-playbook -1"
* "--playbook 1..9"
Alternatively, a matcher may be implemented to express:
* "--skip 'opendev.org/base-jobs/playbooks/post.yaml'"
Secrets
~~~~~~~
The Zuul Runner shall require the user to provide copies of
any secrets required by the job.
Implementation
--------------
The process of exposing the gearman parameters and refactoring the
executor code to support local/direct usage has already started here:
https://review.opendev.org/#/q/topic:freeze_job+(status:open+OR+status:merged)
Zuul Runner CLI
---------------
Here is the proposed usage for the CLI:
.. code-block:: console
usage: zuul-runner [-h] [-c CONFIG] [--version] [-v] [-e FILE] [-a API]
[-t TENANT] [-j JOB] [-P PIPELINE] [-p PROJECT] [-b BRANCH]
[-g GIT_DIR] [-D DEPENDS_ON]
{prepare-workspace,execute} ...
A helper script for running zuul jobs locally.
optional arguments:
-h, --help show this help message and exit
-c CONFIG specify the config file
--version show zuul version
-v, --verbose verbose output
-e FILE, --extra-vars FILE
global extra vars file
-a API, --api API the zuul server api to query against
-t TENANT, --tenant TENANT
the zuul tenant name
-j JOB, --job JOB the zuul job name
-P PIPELINE, --pipeline PIPELINE
the zuul pipeline name
-p PROJECT, --project PROJECT
the zuul project name
-b BRANCH, --branch BRANCH
the zuul project's branch name
-g GIT_DIR, --git-dir GIT_DIR
the git merger dir
-C CHANGE_URL, --change-url CHANGE_URL
reproduce job with speculative change content
commands:
valid commands
{prepare-workspace,execute}
prepare-workspace checks out all of the required playbooks and roles
into a given workspace and returns the order of
execution
execute prepare and execute a zuul jobs
And here is an example execution:
.. code-block:: console
$ pip install --user zuul
$ zuul-runner --api https://zuul.openstack.org --project openstack/nova --job tempest-full-py3 execute
[...]
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b'PLAY RECAP *********************************************************************'
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b'instance-ip : ok=9 changed=5 unreachable=0 failed=0'
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b'localhost : ok=12 changed=9 unreachable=0 failed=0'
2019-05-07 06:08:01,040 DEBUG zuul.Runner - Ansible output: b''
2019-05-07 06:08:01,218 DEBUG zuul.Runner - Ansible output terminated
2019-05-07 06:08:01,219 DEBUG zuul.Runner - Ansible cpu times: user=0.00, system=0.00, children_user=0.00, children_system=0.00
2019-05-07 06:08:01,219 DEBUG zuul.Runner - Ansible exit code: 0
2019-05-07 06:08:01,219 DEBUG zuul.Runner - Stopped disk job killer
2019-05-07 06:08:01,220 DEBUG zuul.Runner - Ansible complete, result RESULT_NORMAL code 0
2019-05-07 06:08:01,220 DEBUG zuul.ExecutorServer - Sent SIGTERM to SSH Agent, {'SSH_AUTH_SOCK': '/tmp/ssh-SYKgxg36XMBa/agent.18274', 'SSH_AGENT_PID': '18275'}
SUCCESS
.. _quick-start:
Quick-Start Installation and Tutorial
=====================================
Zuul is not like other CI or CD systems. It is a project gating
system designed to assist developers in taking a change from proposal
through deployment. Zuul can support any number of workflow processes
and systems, but to help you get started with Zuul, this tutorial will
walk through setting up a basic gating configuration which protects
projects from merging broken code.
This tutorial is entirely self-contained and may safely be run on a
workstation. The only requirements are a network connection, the
ability to run containers, and at least 2GiB of RAM.
This tutorial supplies a working Gerrit for code review, though the
concepts you will learn apply equally to GitHub.
.. note:: Even if you don't ultimately intend to use Gerrit, you are
encouraged to follow this tutorial to learn how to set up
and use Zuul.
At the end of the tutorial, you will find further information about
how to configure your Zuul to interact with GitHub.
Start Zuul Containers
---------------------
Before you start, ensure that some needed packages are installed.
.. code-block:: shell
# Red Hat / CentOS:
sudo yum install podman git python3
sudo python3 -m pip install git-review podman-compose
# Fedora:
sudo dnf install podman git python3
sudo python3 -m pip install git-review podman-compose
# OpenSuse:
sudo zypper install podman git python3
sudo python3 -m pip install git-review podman-compose
# Ubuntu / Debian:
sudo apt-get update
sudo apt-get install podman git python3-pip
sudo python3 -m pip install git-review podman-compose
Clone the Zuul repository:
.. code-block:: shell
git clone https://opendev.org/zuul/zuul
Then cd into the directory containing this document, and run
podman-compose in order to start Zuul, Nodepool and Gerrit.
.. code-block:: shell
cd zuul/doc/source/examples
podman-compose -p zuul-tutorial up
For reference, the files in that directory are also `browsable on the web
<https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples>`_.
All of the services will be started with debug-level logging sent to
the standard output of the terminal where podman-compose is running.
You will see a considerable amount of information scroll by, including
some errors. Zuul will immediately attempt to connect to Gerrit and
begin processing, even before Gerrit has fully initialized. The
podman composition includes scripts to configure Gerrit and create an
account for Zuul. Once this has all completed, the system should
automatically connect, stabilize and become idle. When this is
complete, you will have the following services running:
* Zookeeper
* Gerrit
* Nodepool Launcher
* Zuul Scheduler
* Zuul Web Server
* Zuul Executor
* Apache HTTPD
And a long-running static test node used by Nodepool and Zuul upon
which to run tests.
The Zuul scheduler is configured to connect to Gerrit via a connection
named ``gerrit``. Zuul can interact with as many systems as
necessary; each such connection is assigned a name for use in the Zuul
configuration.
Zuul is a multi-tenant application, so that differing needs of
independent work-groups can be supported from one system. This
example configures a single tenant named ``example-tenant``. Assigned
to this tenant are three projects: ``zuul-config``, ``test1`` and
``test2``. These have already been created in Gerrit and are ready
for us to begin using.
Add Your Gerrit Account
-----------------------
Before you can interact with Gerrit, you will need to create an
account. The initialization script has already created an account for
Zuul, but has left the task of creating your own account to you so
that you can provide your own SSH key. You may safely use any
existing SSH key on your workstation, or you may create a new one by
running ``ssh-keygen``.
Gerrit is configured in a development mode where passwords are not
required in the web interface and you may become any user in the
system at any time.
To create your Gerrit account, visit http://localhost:8080 in your
browser and click `Sign in` in the top right corner.
.. image:: /images/sign-in.png
:align: center
Then click `New Account` under `Register`.
.. image:: /images/register.png
:align: center
Don't bother to enter anything into the confirmation dialog that pops
up, instead, click the `settings` link at the bottom.
.. image:: /images/confirm.png
:align: center
In the `Profile` section at the top, enter the username you use to log
into your workstation in the `Username` field and your full name in
the `Full name` field, then click `Save Changes`.
.. image:: /images/profile.png
:align: center
Scroll down to the `Email Addresses` section and enter your email
address into the `New email address` field, then click `Send
Verification`. Since Gerrit is in developer mode, it will not
actually send any email, and the address will be automatically
confirmed. This step is useful since several parts of the Gerrit user
interface expect to be able to display email addresses.
.. image:: /images/email.png
:align: center
Scroll down to the `SSH keys` section and copy and paste the contents
of ``~/.ssh/id_rsa.pub`` into the `New SSH key` field and click `Add
New SSH Key`.
.. image:: /images/sshkey.png
:align: center
.. We ask them to click reload so that the page refreshes and their
avatar appears in the top right. Otherwise it's difficult to see
that there's anything there to click.
Click the `Reload` button in your browser to reload the page with the
new settings in effect. At this point you have created and logged
into your personal account in Gerrit and are ready to begin
configuring Zuul.
Configure Zuul Pipelines
------------------------
Zuul recognizes two types of projects: :term:`config
projects<config-project>` and :term:`untrusted
projects<untrusted-project>`. An *untrusted project* is a normal
project from Zuul's point of view. In a gating system, it contains
the software under development and/or most of the job content that
Zuul will run. A *config project* is a special project that contains
Zuul's configuration.  Because it has access to normally
restricted features in Zuul, changes to this repository are not
dynamically evaluated by Zuul. The security and functionality of the
rest of the system depends on this repository, so it is best to limit
what is contained within it to the minimum, and ensure thorough code
review practices when changes are made.
Zuul has no built-in workflow definitions, so in order for it to do
anything, you will need to begin by making changes to a *config
project*. The initialization script has already created a project
named ``zuul-config`` which you should now clone onto your workstation:
.. code-block:: shell
git clone http://localhost:8080/zuul-config
You will find that this repository is empty. Zuul reads its
configuration from either a single file or a directory. In a *Config
Project* with substantial Zuul configuration, you may find it easiest
to use the ``zuul.d`` directory for Zuul configuration. Later, in
*Untrusted Projects* you will use a single file for in-repo
configuration. Make the directory:
.. code-block:: shell
cd zuul-config
mkdir zuul.d
The first type of configuration items we need to add are the Pipelines
we intend to use. In Zuul, a Pipeline represents a workflow action.
It is triggered by some action on a connection. Projects are able to
attach jobs to run in that pipeline, and when they complete, the
results are reported along with actions which may trigger further
Pipelines. In a gating system two pipelines are required:
:term:`check` and :term:`gate`. In our system, ``check`` will be
triggered when a patch is uploaded to Gerrit, so that we are able to
immediately run tests and report whether the change works and is
therefore able to merge. The ``gate`` pipeline is triggered when a code
reviewer approves the change in Gerrit. It will run test jobs again
(in case other changes have merged since the change in question was
uploaded) and if these final tests pass, will automatically merge the
change. To configure these pipelines, copy the following file into
``zuul.d/pipelines.yaml``:
.. literalinclude:: /examples/zuul-config/zuul.d/pipelines.yaml
:language: yaml
Once we have bootstrapped our initial Zuul configuration, we will want
to use the gating process on this repository too, so we need to attach
the ``zuul-config`` repository to the ``check`` and ``gate`` pipelines
we are about to create. There are no jobs defined yet, so we must use
the internally defined ``noop`` job, which always returns success.
Later on we will be configuring some other projects, and while we will
be able to dynamically add jobs to their pipelines, those projects
must first be attached to the pipelines in order for that to work. In
our system, we want all of the projects in Gerrit to participate in
the check and gate pipelines, so we can use a regular expression to
apply this to all projects. To configure the ``check`` and ``gate``
pipelines for ``zuul-config`` to run the ``noop`` job, and add all
projects to those pipelines (with no jobs), copy the following file
into ``zuul.d/projects.yaml``:
.. literalinclude:: /examples/zuul-config/zuul.d/projects.yaml
:language: yaml
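
If you don't have the example files in front of you, the included
``projects.yaml`` is roughly of the following form (a sketch only; the
file in the examples directory is canonical):

.. code-block:: yaml

   - project:
       name: zuul-config
       check:
         jobs:
           - noop
       gate:
         jobs:
           - noop

   - project:
       name: ^.*$
       check:
         jobs: []
       gate:
         jobs: []
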
Every real job (i.e., all jobs other than ``noop``) must inherit from a
:term:`base job`, and base jobs may only be defined in a
:term:`config-project`. Let's go ahead and add a simple base job that
we can build on later. Copy the following into ``zuul.d/jobs.yaml``:
.. literalinclude:: /examples/zuul-config/zuul.d/jobs.yaml
:language: yaml
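
For reference, a minimal base job of this kind looks roughly like the
following (a sketch; only base jobs defined in a config project may set
``parent: null``):

.. code-block:: yaml

   - job:
       name: base
       parent: null
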
Commit the changes and push them up for review:
.. code-block:: shell
git add zuul.d
git commit -m "Add initial Zuul configuration"
git review
Because Zuul is currently running with no configuration whatsoever, it
will ignore this change. For this initial change which bootstraps the
entire system, we will need to bypass code review (hopefully for the
last time). To do this, you need to switch to the Administrator
account in Gerrit. Visit http://localhost:8080 in your browser and
then:
Click the avatar image in the top right corner then click `Sign out`.
.. image:: /images/sign-out-user.png
:align: center
Then click the `Sign in` link again.
.. image:: /images/sign-in.png
:align: center
Click `admin` to log in as the `admin` user.
.. image:: /images/become-select.png
:align: center
You will then see a list of open changes; click on the change you
uploaded.
.. image:: /images/open-changes.png
:align: center
Click `Reply...` at the top center of the change screen. This will
open a dialog where you can leave a review message and vote on the
change. As the administrator, you have access to vote in all of the
review categories, even `Verified` which is normally reserved for
Zuul. Vote Code-Review: +2, Verified: +2, Workflow: +1, and then
click `Send` to leave your approval votes.
.. image:: /images/review-1001.png
:align: center
Once the required votes have been set, the `Submit` button will appear
in the top right; click it. This will cause the change to be merged
immediately. This is normally handled by Zuul, but as the
administrator you can bypass Zuul to forcibly merge a change.
.. image:: /images/submit-1001.png
:align: center
Now that the initial configuration has been bootstrapped, you should
not need to bypass testing and code review again, so switch back to
the account you created for yourself. Click on the avatar image in
the top right corner then click `Sign out`.
.. image:: /images/sign-out-admin.png
:align: center
Then click the `Sign in` link again.
.. image:: /images/sign-in.png
:align: center
And click your username to log into your account.
.. image:: /images/become-select.png
:align: center
Test Zuul Pipelines
-------------------
Zuul is now running with a basic :term:`check` and :term:`gate`
configuration. Now is a good time to take a look at Zuul's web
interface. Visit http://localhost:9000/t/example-tenant/status to see
the current status of the system. It should be idle, but if you leave
this page open during the following steps, you will see it update
automatically.
We can now begin adding Zuul configuration to one of our
:term:`untrusted projects<untrusted-project>`. Start by cloning the
`test1` project which was created by the setup script.
.. code-block:: shell
cd ..
git clone http://localhost:8080/test1
Every Zuul job that runs needs a playbook, so let's create a
sub-directory in the project to hold playbooks:
.. code-block:: shell
cd test1
mkdir playbooks
Start with a simple playbook which just outputs a debug message. Copy
the following to ``playbooks/testjob.yaml``:
.. literalinclude:: /examples/test1/playbooks/testjob.yaml
:language: yaml
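
If you are following along without the example files, a playbook of
this kind is simply an Ansible play with a single debug task that
prints the "Hello, world!" message referenced later in this tutorial;
a minimal sketch:

.. code-block:: yaml

   - hosts: all
     tasks:
       - name: Output a debug message
         debug:
           msg: "Hello, world!"
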
Now define a Zuul job which runs that playbook. Zuul will read its
configuration from any of ``zuul.d/`` or ``.zuul.d/`` directories, or
the files ``zuul.yaml`` or ``.zuul.yaml``. Generally in an *untrusted
project* which isn't dedicated entirely to Zuul, it's best to put
Zuul's configuration in a hidden file. Copy the following to
``.zuul.yaml`` in the root of the project:
.. literalinclude:: /examples/test1/zuul.yaml
:language: yaml
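
The in-repo configuration defines a job that runs the playbook above
and attaches it to the pipelines. It is roughly of this form (a sketch
only; the job name ``testjob`` and exact layout here are illustrative,
and the included ``zuul.yaml`` is canonical):

.. code-block:: yaml

   - job:
       name: testjob
       run: playbooks/testjob.yaml

   - project:
       check:
         jobs:
           - testjob
       gate:
         jobs:
           - testjob
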
Commit the changes and push them up to Gerrit for review:
.. code-block:: shell
git add .zuul.yaml playbooks
git commit -m "Add test Zuul job"
git review
Zuul will dynamically evaluate proposed changes to its configuration
in *untrusted projects* immediately, so shortly after your change is
uploaded, Zuul will run the new job and report back on the change.
Visit http://localhost:8080/dashboard/self and open the change you
just uploaded. If the build is complete, Zuul should have left a
Verified: +1 vote on the change, along with a comment at the bottom.
Expand the comments and you should see that the job succeeded, and a
link to the build result in Zuul is provided. You can follow that
link to see some information about the build, but you won't find any
logs since Zuul hasn't been told where to save them yet.
.. image:: /images/check1-1002.png
:align: center
This means everything is working so far, but we need to configure a
bit more before we have a useful job.
Configure a Base Job
--------------------
Every Zuul tenant needs at least one base job. Zuul administrators
can use a base job to customize Zuul to the local environment. This
may include tasks which run both before jobs, such as setting up
package mirrors or networking configuration, or after jobs, such as
artifact and log storage.
Zuul doesn't take anything for granted, and even tasks such as copying
the git repos for the project being tested onto the remote node must
be explicitly added to a base job (and can therefore be customized as
needed). The Zuul in this tutorial is pre-configured to use the `zuul
jobs`_ repository which is the "standard library" of Zuul jobs and
roles. We will make use of it to quickly create a base job which
performs the necessary set up actions and stores build logs.
.. _zuul jobs: https://zuul-ci.org/docs/zuul-jobs/
Return to the ``zuul-config`` repo that you were working in earlier.
We're going to add some playbooks to the empty base job we created
earlier. Start by creating a directory to store those playbooks:
.. code-block:: shell
cd ..
cd zuul-config
mkdir -p playbooks/base
Zuul supports running any number of playbooks before a job (called
*pre-run* playbooks) or after a job (called *post-run* playbooks).
We're going to add a single *pre-run* playbook now. Copy the
following to ``playbooks/base/pre.yaml``:
.. literalinclude:: /examples/zuul-config/playbooks/base/pre.yaml
:language: yaml
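
The included playbook builds on roles from the `zuul jobs`_
repository. As a rough sketch, assuming the standard
``add-build-sshkey`` and ``prepare-workspace`` roles are used (the
included file is canonical):

.. code-block:: yaml

   - hosts: all
     roles:
       - add-build-sshkey
       - prepare-workspace
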
This playbook does two things: first, it creates a new SSH key, adds
it to all of the hosts in the inventory, and removes the private key
that Zuul normally uses to log into nodes from the running SSH agent.
This is just an extra bit of protection which ensures that if Zuul's
SSH key has access to any important systems, normal Zuul jobs can't
use it. The second thing the playbook does is copy the git
repositories that Zuul has prepared (which may have one or more
changes being tested) to all of the nodes used in the job.
Next, add a *post-run* playbook to remove the per-build SSH key. Copy
the following to ``playbooks/base/post-ssh.yaml``:
.. literalinclude:: /examples/zuul-config/playbooks/base/post-ssh.yaml
:language: yaml
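
A post-run playbook of this kind can be as small as a single play
applying the ``remove-build-sshkey`` role from zuul-jobs; a minimal
sketch (the included file is canonical):

.. code-block:: yaml

   - hosts: all
     roles:
       - remove-build-sshkey
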
This is the complement of the `add-build-sshkey` role in the pre-run
playbook -- it simply removes the per-build ssh key from any remote
systems. Zuul always tries to run all of the post-run playbooks
regardless of whether any previous playbooks have failed. Because we
always want log collection to run and we want it to run last, we
create a second post-run playbook for it. Copy the following to
``playbooks/base/post-logs.yaml``:
.. literalinclude:: /examples/zuul-config/playbooks/base/post-logs.yaml
:language: yaml
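
As with the other base playbooks, this one is built from zuul-jobs
roles. A rough sketch, assuming the ``generate-zuul-manifest`` and
``upload-logs`` roles are used, and with a purely illustrative log URL
(the included file is canonical):

.. code-block:: yaml

   - hosts: localhost
     roles:
       - generate-zuul-manifest
       - role: upload-logs
         vars:
           zuul_log_url: "http://localhost:8000"
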
The first role in this playbook generates some metadata about the logs
which are about to be uploaded. Zuul uses this metadata in its web
interface to nicely render the logs and other information about the
build.
This tutorial is running an Apache webserver in a container which will
serve build logs from a volume that is shared with the Zuul executor.
That volume is mounted at `/srv/static/logs`, which is the default
location in the `upload-logs`_ role. The role also supports copying
files to a remote server via SCP; see the role documentation for how
to configure it. For this simple case, the only option we need to
provide is the URL where the logs can ultimately be found.
.. note:: Zuul-jobs also contains `roles
<https://zuul-ci.org/docs/zuul-jobs/log-roles.html>`_ to
          upload logs to an OpenStack Object Storage (swift) or Google
          Cloud Storage container.  If you create a role to upload
logs to another system, please feel free to contribute it to
the zuul-jobs repository for others to use.
.. _upload-logs: https://zuul-ci.org/docs/zuul-jobs/roles.html#role-upload-logs
Now that the new playbooks are in place, update the ``base`` job
definition to include them. Overwrite ``zuul.d/jobs.yaml`` with the
following:
.. literalinclude:: /examples/zuul-config/zuul.d/jobs2.yaml
:language: yaml
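
The updated definition simply points the existing ``base`` job at the
new playbooks. It is roughly of this form (a sketch; the included
``jobs2.yaml`` is canonical):

.. code-block:: yaml

   - job:
       name: base
       parent: null
       pre-run: playbooks/base/pre.yaml
       post-run:
         - playbooks/base/post-ssh.yaml
         - playbooks/base/post-logs.yaml
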
Then commit the change and upload it to Gerrit for review:
.. code-block:: shell
git add playbooks zuul.d/jobs.yaml
git commit -m "Update Zuul base job"
git review
Visit http://localhost:8080/dashboard/self and open the
``zuul-config`` change you just uploaded.
You should see a Verified +1 vote from Zuul. Click `Reply` then vote
Code-Review: +2 and Workflow: +1 then click `Send`.
.. image:: /images/review-1003.png
:align: center
Wait a few moments for Zuul to process the event, and then reload the
page. The change should have been merged.
Visit http://localhost:8080/dashboard/self and return to the
``test1`` change you uploaded earlier. Click `Reply` then type
`recheck` into the text field and click `Send`.
.. image:: /images/recheck-1002.png
:align: center
This will cause Zuul to re-run the test job we created earlier. This
time it will run with the updated base job configuration, and when
complete, it will report the published log location as a comment on
the change:
.. image:: /images/check2-1002.png
:align: center
Follow the link and you will be directed to the build result page. If
you click on the `Logs` tab, you'll be able to browse the console log
for the job. In the middle of the log, you should see the "Hello,
world!" output from the job's playbook.
Also try the `Console` tab for a more structured view of the log.
Click on the `OK` button in the middle of the page to see the output
of just the task we're interested in.
Further Steps
-------------
You now have a Zuul system up and running, congratulations!
The Zuul community would love to hear about how you plan to use Zuul.
Please take a few moments to fill out the `Zuul User Survey
<https://www.surveymonkey.com/r/K2B2MWL>`_ to provide feedback and
information around your deployment. All information is confidential
to the OpenStack Foundation unless you designate that it can be
public.
If you would like to make further changes to Zuul, its configuration
files are located in the ``zuul/doc/source/examples`` directory
and are bind-mounted into the running containers. You may edit them
and restart the Zuul containers to make changes.
If you would like to connect your Zuul to GitHub, see
:ref:`github_driver`.
.. TODO: write an extension to this tutorial to connect to github
Jaeger Tracing Tutorial
=======================
Zuul includes support for `distributed tracing`_ as described by the
OpenTelemetry project. This allows operators (and potentially users)
to visualize the progress of events and queue items through the
various Zuul components as an aid to debugging.
Zuul supports the OpenTelemetry Protocol (OTLP) for exporting traces.
Many observability systems support receiving traces via OTLP. One of
these is Jaeger. Because it can be run as a standalone service with
local storage, this tutorial describes how to set up a Jaeger server
and configure Zuul to export data to it.
For more information about tracing in Zuul, see :ref:`tracing`.
To get started, first run the :ref:`quick-start` and then follow the
steps in this tutorial to add a Jaeger server.
Restart Zuul Containers
-----------------------
After completing the initial tutorial, stop the Zuul containers so
that we can update Zuul's configuration to enable tracing.
.. code-block:: shell
cd zuul/doc/source/examples
sudo -E podman-compose -p zuul-tutorial stop
Restart the containers with a new Zuul configuration.
.. code-block:: shell
cd zuul/doc/source/examples
ZUUL_TUTORIAL_CONFIG="./tracing/etc_zuul/" sudo -E podman-compose -p zuul-tutorial up -d
This tells podman-compose to use these Zuul `config files
<https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples/tracing>`_.
The only change compared to the quick-start is to add a
:attr:`tracing` section to ``zuul.conf``:
.. code-block:: ini
[tracing]
enabled=true
endpoint=jaeger:4317
insecure=true
This instructs Zuul to send tracing information to the Jaeger server
we will start below.
Start Jaeger
------------
A separate docker-compose file is provided to run Jaeger. Start it
with this command:
.. code-block:: shell
cd zuul/doc/source/examples/tracing
sudo -E podman-compose -p zuul-tutorial-tracing up -d
You can visit http://localhost:16686/search to verify it is running.
Recheck a change
----------------
Visit Gerrit at http://localhost:8080/dashboard/self and return to the
``test1`` change you uploaded earlier. Click `Reply` then type
`recheck` into the text field and click `Send`. This will tell Zuul
to run the test job once again. When the job is complete, you should
have a trace available in Jaeger.
To see the trace, visit http://localhost:16686/search and select the
`zuul` service (reload the page if it doesn't show up at first).
Press `Find Traces` and you should see the trace for your build
appear.
.. _`distributed tracing`: https://opentelemetry.io/docs/concepts/observability-primer/#distributed-traces
Keycloak Tutorial
=================
Zuul supports an authenticated API accessible via its web app which
can be used to perform some administrative actions. To see this in
action, first run the :ref:`quick-start` and then follow the steps in
this tutorial to add a Keycloak server.
Zuul supports any identity provider that can supply a JWT using OpenID
Connect. Keycloak is used here because it is entirely self-contained.
Google authentication is one additional option described elsewhere in
the documentation.
Gerrit can be updated to use the same authentication system as Zuul,
but this tutorial does not address that.
Update /etc/hosts
-----------------
The Zuul containers will use the internal container network to connect to
keycloak, but you will use a mapped port to access it in your web
browser. There is no way to have Zuul use the internal hostname when
it validates the token yet redirect your browser to `localhost` to
obtain the token, therefore you will need to add a matching host entry
to `/etc/hosts`. Make sure you have a line that looks like this:
.. code-block::
127.0.0.1 localhost keycloak
If you are using podman, you need to add the following option in $HOME/.config/containers/containers.conf:
.. code-block::
[containers]
no_hosts=true
This way your /etc/hosts settings will not interfere with podman's networking.
Restart Zuul Containers
-----------------------
After completing the initial tutorial, stop the Zuul containers so
that we can update Zuul's configuration to add authentication.
.. code-block:: shell
cd zuul/doc/source/examples
sudo -E podman-compose -p zuul-tutorial stop
Restart the containers with a new Zuul configuration.
.. code-block:: shell
cd zuul/doc/source/examples
ZUUL_TUTORIAL_CONFIG="./keycloak/etc_zuul/" sudo -E podman-compose -p zuul-tutorial up -d
This tells podman-compose to use these Zuul `config files
<https://opendev.org/zuul/zuul/src/branch/master/doc/source/examples/keycloak>`_.
Start Keycloak
--------------
A separate docker-compose file is supplied to run Keycloak. Start it
with this command:
.. code-block:: shell
cd zuul/doc/source/examples/keycloak
sudo -E podman-compose -p zuul-tutorial-keycloak up -d
Once Keycloak is running, you can visit the web interface at
http://localhost:8082/
The Keycloak administrative user is `admin` with a password of
`kcadmin`.
Log Into Zuul
-------------
Visit http://localhost:9000/t/example-tenant/autoholds and click the
login icon on the top right. You will be directed to Keycloak, where
you can log into the Zuul realm with the user `admin` and password
`admin`.
Once you return to Zuul, you should see the option to create an
autohold -- an admin-only option.
# Copyright 2022 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from concurrent import futures
import fixtures
import grpc
from opentelemetry import trace
from opentelemetry.proto.collector.trace.v1.trace_service_pb2_grpc import (
TraceServiceServicer,
add_TraceServiceServicer_to_server
)
from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
ExportTraceServiceResponse,
)
class TraceServer(TraceServiceServicer):
def __init__(self, fixture):
super().__init__()
self.fixture = fixture
def Export(self, request, context):
self.fixture.requests.append(request)
return ExportTraceServiceResponse()
class OTLPFixture(fixtures.Fixture):
def __init__(self):
super().__init__()
self.requests = []
self.executor = futures.ThreadPoolExecutor(
thread_name_prefix='OTLPFixture',
max_workers=10)
self.server = grpc.server(self.executor)
add_TraceServiceServicer_to_server(TraceServer(self), self.server)
self.port = self.server.add_insecure_port('[::]:0')
# Reset global tracer provider
trace._TRACER_PROVIDER_SET_ONCE = trace.Once()
trace._TRACER_PROVIDER = None
def _setUp(self):
self.server.start()
def _cleanup(self):
self.server.stop()
self.server.wait_for_termination()
self.executor.shutdown()
# Copyright 2016 Red Hat, Inc.
# Copyright 2021 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from collections import defaultdict
import http.server
import json
import logging
import re
import socketserver
import threading
import urllib.parse
import time
from git.util import IterableList
class GitlabWebServer(object):
def __init__(self, merge_requests):
super(GitlabWebServer, self).__init__()
self.merge_requests = merge_requests
self.fake_repos = defaultdict(lambda: IterableList('name'))
# A dictionary so we can mutate it
self.options = dict(
community_edition=False,
delayed_complete_mr=0,
uncomplete_mr=False)
self.stats = {"get_mr": 0}
def start(self):
merge_requests = self.merge_requests
fake_repos = self.fake_repos
options = self.options
stats = self.stats
class Server(http.server.SimpleHTTPRequestHandler):
log = logging.getLogger("zuul.test.GitlabWebServer")
branches_re = re.compile(r'.+/projects/(?P<project>.+)/'
r'repository/branches\\?.*$')
branch_re = re.compile(r'.+/projects/(?P<project>.+)/'
r'repository/branches/(?P<branch>.+)$')
mr_re = re.compile(r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)$')
mr_approvals_re = re.compile(
r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)/approvals$')
mr_notes_re = re.compile(
r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)/notes$')
mr_approve_re = re.compile(
r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)/approve$')
mr_unapprove_re = re.compile(
r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)/unapprove$')
mr_merge_re = re.compile(r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)/merge$')
mr_update_re = re.compile(r'.+/projects/(?P<project>.+)/'
r'merge_requests/(?P<mr>\d+)$')
def _get_mr(self, project, number):
project = urllib.parse.unquote(project)
mr = merge_requests.get(project, {}).get(number)
if not mr:
# Find out what gitlab does in this case
raise NotImplementedError()
return mr
def do_GET(self):
path = self.path
self.log.debug("Got GET %s", path)
m = self.mr_re.match(path)
if m:
return self.get_mr(**m.groupdict())
m = self.mr_approvals_re.match(path)
if m:
return self.get_mr_approvals(**m.groupdict())
m = self.branch_re.match(path)
if m:
return self.get_branch(**m.groupdict())
m = self.branches_re.match(path)
if m:
return self.get_branches(path, **m.groupdict())
self.send_response(500)
self.end_headers()
def do_POST(self):
path = self.path
self.log.debug("Got POST %s", path)
data = self.rfile.read(int(self.headers['Content-Length']))
if (self.headers['Content-Type'] ==
'application/x-www-form-urlencoded'):
data = urllib.parse.parse_qs(data.decode('utf-8'))
self.log.debug("Got data %s", data)
m = self.mr_notes_re.match(path)
if m:
return self.post_mr_notes(data, **m.groupdict())
m = self.mr_approve_re.match(path)
if m:
return self.post_mr_approve(data, **m.groupdict())
m = self.mr_unapprove_re.match(path)
if m:
return self.post_mr_unapprove(data, **m.groupdict())
self.send_response(500)
self.end_headers()
def do_PUT(self):
path = self.path
self.log.debug("Got PUT %s", path)
data = self.rfile.read(int(self.headers['Content-Length']))
if (self.headers['Content-Type'] ==
'application/x-www-form-urlencoded'):
data = urllib.parse.parse_qs(data.decode('utf-8'))
self.log.debug("Got data %s", data)
m = self.mr_merge_re.match(path)
if m:
return self.put_mr_merge(data, **m.groupdict())
m = self.mr_update_re.match(path)
if m:
return self.put_mr_update(data, **m.groupdict())
self.send_response(500)
self.end_headers()
def send_data(self, data, code=200):
data = json.dumps(data).encode('utf-8')
self.send_response(code)
self.send_header('Content-Type', 'application/json')
self.send_header('Content-Length', len(data))
self.end_headers()
self.wfile.write(data)
def get_mr(self, project, mr):
stats["get_mr"] += 1
mr = self._get_mr(project, mr)
data = {
'target_branch': mr.branch,
'title': mr.subject,
'state': mr.state,
'description': mr.description,
'author': {
'name': 'Administrator',
'username': 'admin'
},
'updated_at':
mr.updated_at.strftime('%Y-%m-%dT%H:%M:%S.%fZ'),
'sha': mr.sha,
'labels': mr.labels,
'merged_at': mr.merged_at.strftime('%Y-%m-%dT%H:%M:%S.%fZ')
if mr.merged_at else mr.merged_at,
'merge_status': mr.merge_status,
}
if options['delayed_complete_mr'] and \
time.monotonic() < options['delayed_complete_mr']:
diff_refs = None
elif options['uncomplete_mr']:
diff_refs = None
else:
diff_refs = {
'base_sha': mr.base_sha,
'head_sha': mr.sha,
'start_sha': 'c380d3acebd181f13629a25d2e2acca46ffe1e00'
}
data['diff_refs'] = diff_refs
self.send_data(data)
def get_mr_approvals(self, project, mr):
mr = self._get_mr(project, mr)
if not options['community_edition']:
self.send_data({
'approvals_left': 0 if mr.approved else 1,
})
else:
self.send_data({
'approved': mr.approved,
})
def get_branch(self, project, branch):
project = urllib.parse.unquote(project)
branch = urllib.parse.unquote(branch)
owner, name = project.split('/')
if branch in fake_repos[(owner, name)]:
protected = fake_repos[(owner, name)][branch].protected
self.send_data({'protected': protected})
else:
return self.send_data({}, code=404)
def get_branches(self, url, project):
project = urllib.parse.unquote(project).split('/')
req = urllib.parse.urlparse(url)
query = urllib.parse.parse_qs(req.query)
per_page = int(query["per_page"][0])
page = int(query["page"][0])
repo = fake_repos[tuple(project)]
first_entry = (page - 1) * per_page
last_entry = min(len(repo), (page) * per_page)
if first_entry >= len(repo):
branches = []
else:
branches = [{'name': repo[i].name,
'protected': repo[i].protected}
for i in range(first_entry, last_entry)]
self.send_data(branches)
def post_mr_notes(self, data, project, mr):
mr = self._get_mr(project, mr)
mr.addNote(data['body'][0])
self.send_data({})
def post_mr_approve(self, data, project, mr):
assert 'sha' in data
mr = self._get_mr(project, mr)
if data['sha'][0] != mr.sha:
return self.send_data(
{'message': 'SHA does not match HEAD of source '
'branch: <new_sha>'}, code=409)
mr.approved = True
self.send_data({})
def post_mr_unapprove(self, data, project, mr):
mr = self._get_mr(project, mr)
mr.approved = False
self.send_data({})
def put_mr_merge(self, data, project, mr):
mr = self._get_mr(project, mr)
squash = None
if data and isinstance(data, dict):
squash = data.get('squash')
mr.mergeMergeRequest(squash)
self.send_data({'state': 'merged'})
def put_mr_update(self, data, project, mr):
mr = self._get_mr(project, mr)
labels = set(mr.labels)
add_labels = data.get('add_labels', [''])[0].split(',')
remove_labels = data.get('remove_labels', [''])[0].split(',')
labels = labels - set(remove_labels)
labels = labels | set(add_labels)
mr.labels = list(labels)
self.send_data({})
def log_message(self, fmt, *args):
self.log.debug(fmt, *args)
self.httpd = socketserver.ThreadingTCPServer(('', 0), Server)
self.port = self.httpd.socket.getsockname()[1]
self.thread = threading.Thread(name='GitlabWebServer',
target=self.httpd.serve_forever)
self.thread.daemon = True
self.thread.start()
def stop(self):
self.httpd.shutdown()
self.thread.join()
self.httpd.server_close()
# --- end of zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fakegitlab.py; the next file from the zuul 9.1.0 test suite follows ---
# Copyright 2012 Hewlett-Packard Development Company, L.P.
# Copyright 2016 Red Hat, Inc.
# Copyright 2021-2022 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import configparser
from collections import OrderedDict
from configparser import ConfigParser
from contextlib import contextmanager
import copy
import datetime
import errno
import gc
import hashlib
from io import StringIO
import itertools
import json
import logging
import os
import random
import re
from collections import defaultdict, namedtuple
from queue import Queue
from typing import Callable, Optional, Generator, List, Dict
from unittest.case import skipIf
import zlib
import prometheus_client
import requests
import select
import shutil
import socket
import string
import subprocess
import sys
import tempfile
import threading
import traceback
import time
import uuid
import socketserver
import http.server
import urllib.parse
import git
import fixtures
import kazoo.client
import kazoo.exceptions
import pymysql
import psycopg2
import psycopg2.extensions
import testtools
import testtools.content
import testtools.content_type
from git.exc import NoSuchPathError
import yaml
import paramiko
import sqlalchemy
import requests_mock
from kazoo.exceptions import NoNodeError
from zuul import model
from zuul.model import (
BuildRequest, Change, MergeRequest, WebInfo, HoldRequest
)
from zuul.driver.zuul import ZuulDriver
from zuul.driver.git import GitDriver
from zuul.driver.smtp import SMTPDriver
from zuul.driver.github import GithubDriver
from zuul.driver.timer import TimerDriver
from zuul.driver.sql import SQLDriver
from zuul.driver.bubblewrap import BubblewrapDriver
from zuul.driver.nullwrap import NullwrapDriver
from zuul.driver.mqtt import MQTTDriver
from zuul.driver.pagure import PagureDriver
from zuul.driver.gitlab import GitlabDriver
from zuul.driver.gerrit import GerritDriver
from zuul.driver.github.githubconnection import GithubClientManager
from zuul.driver.elasticsearch import ElasticsearchDriver
from zuul.lib.collections import DefaultKeyDict
from zuul.lib.connections import ConnectionRegistry
from zuul.zk import zkobject, ZooKeeperClient
from zuul.zk.components import SchedulerComponent, COMPONENT_REGISTRY
from zuul.zk.event_queues import ConnectionEventQueue
from zuul.zk.executor import ExecutorApi
from zuul.zk.locks import tenant_read_lock, pipeline_lock, SessionAwareLock
from zuul.zk.merger import MergerApi
from psutil import Popen
import zuul.driver.gerrit.gerritsource as gerritsource
import zuul.driver.gerrit.gerritconnection as gerritconnection
import zuul.driver.git.gitwatcher as gitwatcher
import zuul.driver.github.githubconnection as githubconnection
import zuul.driver.pagure.pagureconnection as pagureconnection
import zuul.driver.gitlab.gitlabconnection as gitlabconnection
import zuul.driver.github
import zuul.driver.elasticsearch.connection as elconnection
import zuul.driver.sql
import zuul.scheduler
import zuul.executor.server
import zuul.executor.client
import zuul.lib.ansible
import zuul.lib.connections
import zuul.lib.auth
import zuul.lib.keystorage
import zuul.merger.client
import zuul.merger.merger
import zuul.merger.server
import zuul.nodepool
import zuul.configloader
from zuul.lib.logutil import get_annotated_logger
import tests.fakegithub
import tests.fakegitlab
from tests.otlp_fixture import OTLPFixture
import opentelemetry.sdk.trace.export
FIXTURE_DIR = os.path.join(os.path.dirname(__file__), 'fixtures')
KEEP_TEMPDIRS = bool(os.environ.get('KEEP_TEMPDIRS', False))
SCHEDULER_COUNT = int(os.environ.get('ZUUL_SCHEDULER_COUNT', 1))
def skipIfMultiScheduler(reason=None):
if not reason:
reason = "Test is failing with multiple schedulers"
return skipIf(SCHEDULER_COUNT > 1, reason)
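# A hedged usage sketch: applied as a decorator factory, skipIfMultiScheduler
# skips the test whenever ZUUL_SCHEDULER_COUNT configures more than one
# scheduler. The test method name below is assumed for illustration only.
#
#     @skipIfMultiScheduler()
#     def test_assumes_single_scheduler(self):
#         ...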
def repack_repo(path):
cmd = ['git', '--git-dir=%s/.git' % path, 'repack', '-afd']
output = subprocess.Popen(cmd, close_fds=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out = output.communicate()
if output.returncode:
raise Exception("git repack returned %d" % output.returncode)
return out
def random_sha1():
return hashlib.sha1(str(random.random()).encode('ascii')).hexdigest()
def iterate_timeout(max_seconds, purpose):
start = time.time()
count = 0
while (time.time() < start + max_seconds):
count += 1
yield count
time.sleep(0.01)
raise Exception("Timeout waiting for %s" % purpose)
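# A hedged usage sketch for iterate_timeout: callers poll a condition inside
# the loop and break once it holds; if max_seconds elapse first, the generator
# raises an Exception naming the supplied purpose. The 'build' object below is
# assumed for illustration only.
#
#     for _ in iterate_timeout(30, 'the build to complete'):
#         if build.result is not None:
#             break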
def simple_layout(path, driver='gerrit'):
"""Specify a layout file for use by a test method.
:arg str path: The path to the layout file.
:arg str driver: The source driver to use, defaults to gerrit.
Some tests require only a very simple configuration. For those,
    establishing a complete config directory hierarchy is too much
work. In those cases, you can add a simple zuul.yaml file to the
test fixtures directory (in fixtures/layouts/foo.yaml) and use
this decorator to indicate the test method should use that rather
than the tenant config file specified by the test class.
The decorator will cause that layout file to be added to a
config-project called "common-config" and each "project" instance
referenced in the layout file will have a git repo automatically
initialized.
"""
def decorator(test):
test.__simple_layout__ = (path, driver)
return test
return decorator
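# A hedged usage sketch for the simple_layout decorator; the layout path is
# relative to the fixtures directory, and 'layouts/basic.yaml' plus the test
# method name are assumed names used purely for illustration.
#
#     @simple_layout('layouts/basic.yaml')
#     def test_basic(self):
#         ...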
def never_capture():
"""Never capture logs/output
Due to high volume, log files are normally captured and attached
to the subunit stream only on error. This can make diagnosing
    some problems difficult. Use this decorator on a test to
indicate that logs and output should not be captured.
"""
def decorator(test):
test.__never_capture__ = True
return test
return decorator
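# A hedged usage sketch: stacking never_capture on a test method disables the
# usual log/output capture for that test only; the method name is illustrative.
#
#     @never_capture()
#     def test_very_verbose(self):
#         ...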
def registerProjects(source_name, client, config):
path = config.get('scheduler', 'tenant_config')
with open(os.path.join(FIXTURE_DIR, path)) as f:
tenant_config = yaml.safe_load(f.read())
for tenant in tenant_config:
sources = tenant['tenant']['source']
conf = sources.get(source_name)
if not conf:
return
projects = conf.get('config-projects', [])
projects.extend(conf.get('untrusted-projects', []))
for project in projects:
if isinstance(project, dict):
# This can be a dict with the project as the only key
client.addProjectByName(
list(project.keys())[0])
else:
client.addProjectByName(project)
class StatException(Exception):
# Used by assertReportedStat
pass
class GerritDriverMock(GerritDriver):
def __init__(self, registry, changes: Dict[str, Dict[str, Change]],
upstream_root: str, additional_event_queues, poller_events,
add_cleanup: Callable[[Callable[[], None]], None]):
super(GerritDriverMock, self).__init__()
self.registry = registry
self.changes = changes
self.upstream_root = upstream_root
self.additional_event_queues = additional_event_queues
self.poller_events = poller_events
self.add_cleanup = add_cleanup
def getConnection(self, name, config):
db = self.changes.setdefault(config['server'], {})
poll_event = self.poller_events.setdefault(name, threading.Event())
ref_event = self.poller_events.setdefault(name + '-ref',
threading.Event())
connection = FakeGerritConnection(
self, name, config,
changes_db=db,
upstream_root=self.upstream_root,
poller_event=poll_event,
ref_watcher_event=ref_event)
if connection.web_server:
self.add_cleanup(connection.web_server.stop)
setattr(self.registry, 'fake_' + name, connection)
return connection
class GithubDriverMock(GithubDriver):
def __init__(self, registry, changes: Dict[str, Dict[str, Change]],
config: ConfigParser, upstream_root: str,
additional_event_queues,
git_url_with_auth: bool):
super(GithubDriverMock, self).__init__()
self.registry = registry
self.changes = changes
self.config = config
self.upstream_root = upstream_root
self.additional_event_queues = additional_event_queues
self.git_url_with_auth = git_url_with_auth
def getConnection(self, name, config):
server = config.get('server', 'github.com')
db = self.changes.setdefault(server, {})
connection = FakeGithubConnection(
self, name, config,
changes_db=db,
upstream_root=self.upstream_root,
git_url_with_auth=self.git_url_with_auth)
setattr(self.registry, 'fake_' + name, connection)
client = connection.getGithubClient(None)
registerProjects(connection.source.name, client, self.config)
return connection
class PagureDriverMock(PagureDriver):
def __init__(self, registry, changes: Dict[str, Dict[str, Change]],
upstream_root: str, additional_event_queues):
super(PagureDriverMock, self).__init__()
self.registry = registry
self.changes = changes
self.upstream_root = upstream_root
self.additional_event_queues = additional_event_queues
def getConnection(self, name, config):
server = config.get('server', 'pagure.io')
db = self.changes.setdefault(server, {})
connection = FakePagureConnection(
self, name, config,
changes_db=db,
upstream_root=self.upstream_root)
setattr(self.registry, 'fake_' + name, connection)
return connection
class GitlabDriverMock(GitlabDriver):
def __init__(self, registry, changes: Dict[str, Dict[str, Change]],
config: ConfigParser, upstream_root: str,
additional_event_queues):
super(GitlabDriverMock, self).__init__()
self.registry = registry
self.changes = changes
self.config = config
self.upstream_root = upstream_root
self.additional_event_queues = additional_event_queues
def getConnection(self, name, config):
server = config.get('server', 'gitlab.com')
db = self.changes.setdefault(server, {})
connection = FakeGitlabConnection(
self, name, config,
changes_db=db,
upstream_root=self.upstream_root)
setattr(self.registry, 'fake_' + name, connection)
registerProjects(connection.source.name, connection,
self.config)
return connection
class TestConnectionRegistry(ConnectionRegistry):
def __init__(self, changes, config, additional_event_queues,
upstream_root, poller_events, git_url_with_auth,
add_cleanup):
self.connections = OrderedDict()
self.drivers = {}
self.registerDriver(ZuulDriver())
self.registerDriver(GerritDriverMock(
self, changes, upstream_root, additional_event_queues,
poller_events, add_cleanup))
self.registerDriver(GitDriver())
self.registerDriver(GithubDriverMock(
self, changes, config, upstream_root, additional_event_queues,
git_url_with_auth))
self.registerDriver(SMTPDriver())
self.registerDriver(TimerDriver())
self.registerDriver(SQLDriver())
self.registerDriver(BubblewrapDriver(check_bwrap=True))
self.registerDriver(NullwrapDriver())
self.registerDriver(MQTTDriver())
self.registerDriver(PagureDriverMock(
self, changes, upstream_root, additional_event_queues))
self.registerDriver(GitlabDriverMock(
self, changes, config, upstream_root, additional_event_queues))
self.registerDriver(ElasticsearchDriver())
class FakeAnsibleManager(zuul.lib.ansible.AnsibleManager):
def validate(self):
return True
def copyAnsibleFiles(self):
pass
class GerritChangeReference(git.Reference):
_common_path_default = "refs/changes"
_points_to_commits_only = True
class FakeGerritChange(object):
categories = {'Approved': ('Approved', -1, 1),
'Code-Review': ('Code-Review', -2, 2),
'Verified': ('Verified', -2, 2)}
def __init__(self, gerrit, number, project, branch, subject,
status='NEW', upstream_root=None, files={},
parent=None, merge_parents=None, merge_files=None,
topic=None, empty=False):
self.gerrit = gerrit
self.source = gerrit
self.reported = 0
self.queried = 0
self.patchsets = []
self.number = number
self.project = project
self.branch = branch
self.subject = subject
self.latest_patchset = 0
self.depends_on_change = None
self.depends_on_patchset = None
self.needed_by_changes = []
self.fail_merge = False
self.messages = []
self.comments = []
self.checks = {}
self.checks_history = []
self.submit_requirements = []
self.data = {
'branch': branch,
'comments': self.comments,
'commitMessage': subject,
'createdOn': time.time(),
'id': 'I' + random_sha1(),
'lastUpdated': time.time(),
'number': str(number),
'open': status == 'NEW',
'owner': {'email': '[email protected]',
'name': 'User Name',
'username': 'username'},
'patchSets': self.patchsets,
'project': project,
'status': status,
'subject': subject,
'submitRecords': [],
'url': '%s/%s' % (self.gerrit.baseurl.rstrip('/'), number)}
if topic:
self.data['topic'] = topic
self.upstream_root = upstream_root
if merge_parents:
self.addMergePatchset(parents=merge_parents,
merge_files=merge_files)
else:
self.addPatchset(files=files, parent=parent, empty=empty)
if merge_parents:
self.data['parents'] = merge_parents
elif parent:
self.data['parents'] = [parent]
self.data['submitRecords'] = self.getSubmitRecords()
self.open = status == 'NEW'
def addFakeChangeToRepo(self, msg, files, large, parent):
path = os.path.join(self.upstream_root, self.project)
repo = git.Repo(path)
if parent is None:
parent = 'refs/tags/init'
ref = GerritChangeReference.create(
repo, '%s/%s/%s' % (str(self.number).zfill(2)[-2:],
self.number,
self.latest_patchset),
parent)
repo.head.reference = ref
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
path = os.path.join(self.upstream_root, self.project)
if not large:
for fn, content in files.items():
fn = os.path.join(path, fn)
if content is None:
os.unlink(fn)
repo.index.remove([fn])
else:
d = os.path.dirname(fn)
if not os.path.exists(d):
os.makedirs(d)
with open(fn, 'w') as f:
f.write(content)
repo.index.add([fn])
else:
for fni in range(100):
fn = os.path.join(path, str(fni))
f = open(fn, 'w')
for ci in range(4096):
f.write(random.choice(string.printable))
f.close()
repo.index.add([fn])
r = repo.index.commit(msg)
repo.head.reference = 'master'
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
repo.heads['master'].checkout()
return r
def addFakeMergeCommitChangeToRepo(self, msg, parents):
path = os.path.join(self.upstream_root, self.project)
repo = git.Repo(path)
ref = GerritChangeReference.create(
repo, '%s/%s/%s' % (str(self.number).zfill(2)[-2:],
self.number,
self.latest_patchset),
parents[0])
repo.head.reference = ref
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
repo.index.merge_tree(parents[1])
parent_commits = [repo.commit(p) for p in parents]
r = repo.index.commit(msg, parent_commits=parent_commits)
repo.head.reference = 'master'
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
repo.heads['master'].checkout()
return r
def addPatchset(self, files=None, large=False, parent=None, empty=False):
self.latest_patchset += 1
if empty:
files = {}
elif not files:
fn = '%s-%s' % (self.branch.replace('/', '_'), self.number)
data = ("test %s %s %s\n" %
(self.branch, self.number, self.latest_patchset))
files = {fn: data}
msg = self.subject + '-' + str(self.latest_patchset)
c = self.addFakeChangeToRepo(msg, files, large, parent)
ps_files = [{'file': '/COMMIT_MSG',
'type': 'ADDED'},
{'file': 'README',
'type': 'MODIFIED'}]
for f in files:
ps_files.append({'file': f, 'type': 'ADDED'})
d = {'approvals': [],
'createdOn': time.time(),
'files': ps_files,
'number': str(self.latest_patchset),
'ref': 'refs/changes/%s/%s/%s' % (str(self.number).zfill(2)[-2:],
self.number,
self.latest_patchset),
'revision': c.hexsha,
'uploader': {'email': '[email protected]',
'name': 'User name',
'username': 'user'}}
self.data['currentPatchSet'] = d
self.patchsets.append(d)
self.data['submitRecords'] = self.getSubmitRecords()
def addMergePatchset(self, parents, merge_files=None):
self.latest_patchset += 1
if not merge_files:
merge_files = []
msg = self.subject + '-' + str(self.latest_patchset)
c = self.addFakeMergeCommitChangeToRepo(msg, parents)
ps_files = [{'file': '/COMMIT_MSG',
'type': 'ADDED'},
{'file': '/MERGE_LIST',
'type': 'ADDED'}]
for f in merge_files:
ps_files.append({'file': f, 'type': 'ADDED'})
d = {'approvals': [],
'createdOn': time.time(),
'files': ps_files,
'number': str(self.latest_patchset),
'ref': 'refs/changes/%s/%s/%s' % (str(self.number).zfill(2)[-2:],
self.number,
self.latest_patchset),
'revision': c.hexsha,
'uploader': {'email': '[email protected]',
'name': 'User name',
'username': 'user'}}
self.data['currentPatchSet'] = d
self.patchsets.append(d)
self.data['submitRecords'] = self.getSubmitRecords()
def setCheck(self, checker, reset=False, **kw):
if reset:
self.checks[checker] = {'state': 'NOT_STARTED',
'created': str(datetime.datetime.now())}
chk = self.checks.setdefault(checker, {})
chk['updated'] = str(datetime.datetime.now())
for (key, default) in [
('state', None),
('repository', self.project),
('change_number', self.number),
('patch_set_id', self.latest_patchset),
('checker_uuid', checker),
('message', None),
('url', None),
('started', None),
('finished', None),
]:
val = kw.get(key, chk.get(key, default))
if val is not None:
chk[key] = val
elif key in chk:
del chk[key]
self.checks_history.append(copy.deepcopy(self.checks))
def addComment(self, filename, line, message, name, email, username,
comment_range=None):
comment = {
'file': filename,
'line': int(line),
'reviewer': {
'name': name,
'email': email,
'username': username,
},
'message': message,
}
if comment_range:
comment['range'] = comment_range
self.comments.append(comment)
def getPatchsetCreatedEvent(self, patchset):
event = {"type": "patchset-created",
"change": {"project": self.project,
"branch": self.branch,
"id": "I5459869c07352a31bfb1e7a8cac379cabfcb25af",
"number": str(self.number),
"subject": self.subject,
"owner": {"name": "User Name"},
"url": "https://hostname/3"},
"patchSet": self.patchsets[patchset - 1],
"uploader": {"name": "User Name"}}
return event
def getChangeRestoredEvent(self):
event = {"type": "change-restored",
"change": {"project": self.project,
"branch": self.branch,
"id": "I5459869c07352a31bfb1e7a8cac379cabfcb25af",
"number": str(self.number),
"subject": self.subject,
"owner": {"name": "User Name"},
"url": "https://hostname/3"},
"restorer": {"name": "User Name"},
"patchSet": self.patchsets[-1],
"reason": ""}
return event
def getChangeAbandonedEvent(self):
event = {"type": "change-abandoned",
"change": {"project": self.project,
"branch": self.branch,
"id": "I5459869c07352a31bfb1e7a8cac379cabfcb25af",
"number": str(self.number),
"subject": self.subject,
"owner": {"name": "User Name"},
"url": "https://hostname/3"},
"abandoner": {"name": "User Name"},
"patchSet": self.patchsets[-1],
"reason": ""}
return event
def getChangeCommentEvent(self, patchset, comment=None,
patchsetcomment=None):
if comment is None and patchsetcomment is None:
comment = "Patch Set %d:\n\nThis is a comment" % patchset
elif comment:
comment = "Patch Set %d:\n\n%s" % (patchset, comment)
else: # patchsetcomment is not None
comment = "Patch Set %d:\n\n(1 comment)" % patchset
commentevent = {"comment": comment}
if patchsetcomment:
commentevent.update(
{'patchSetComments':
{"/PATCHSET_LEVEL": [{"message": patchsetcomment}]}
}
)
event = {"type": "comment-added",
"change": {"project": self.project,
"branch": self.branch,
"id": "I5459869c07352a31bfb1e7a8cac379cabfcb25af",
"number": str(self.number),
"subject": self.subject,
"owner": {"name": "User Name"},
"url": "https://hostname/3"},
"patchSet": self.patchsets[patchset - 1],
"author": {"name": "User Name"},
"approvals": [{"type": "Code-Review",
"description": "Code-Review",
"value": "0"}]}
event.update(commentevent)
return event
def getChangeMergedEvent(self):
event = {"submitter": {"name": "Jenkins",
"username": "jenkins"},
"newRev": "29ed3b5f8f750a225c5be70235230e3a6ccb04d9",
"patchSet": self.patchsets[-1],
"change": self.data,
"type": "change-merged",
"eventCreatedOn": 1487613810}
return event
def getRefUpdatedEvent(self):
path = os.path.join(self.upstream_root, self.project)
repo = git.Repo(path)
oldrev = repo.heads[self.branch].commit.hexsha
event = {
"type": "ref-updated",
"submitter": {
"name": "User Name",
},
"refUpdate": {
"oldRev": oldrev,
"newRev": self.patchsets[-1]['revision'],
"refName": self.branch,
"project": self.project,
}
}
return event
def addApproval(self, category, value, username='reviewer_john',
granted_on=None, message='', tag=None):
if not granted_on:
granted_on = time.time()
approval = {
'description': self.categories[category][0],
'type': category,
'value': str(value),
'by': {
'username': username,
'email': username + '@example.com',
},
'grantedOn': int(granted_on),
'__tag': tag, # Not available in ssh api
}
for i, x in enumerate(self.patchsets[-1]['approvals'][:]):
if x['by']['username'] == username and x['type'] == category:
del self.patchsets[-1]['approvals'][i]
self.patchsets[-1]['approvals'].append(approval)
event = {'approvals': [approval],
'author': {'email': '[email protected]',
'name': 'Patchset Author',
'username': 'author_phil'},
'change': {'branch': self.branch,
'id': 'Iaa69c46accf97d0598111724a38250ae76a22c87',
'number': str(self.number),
'owner': {'email': '[email protected]',
'name': 'Change Owner',
'username': 'owner_jane'},
'project': self.project,
'subject': self.subject,
'url': 'https://hostname/459'},
'comment': message,
'patchSet': self.patchsets[-1],
'type': 'comment-added'}
if 'topic' in self.data:
event['change']['topic'] = self.data['topic']
self.data['submitRecords'] = self.getSubmitRecords()
return json.loads(json.dumps(event))
def setWorkInProgress(self, wip):
# Gerrit only includes 'wip' in the data returned via ssh if
# the value is true.
if wip:
self.data['wip'] = True
elif 'wip' in self.data:
del self.data['wip']
def getSubmitRecords(self):
status = {}
for cat in self.categories:
status[cat] = 0
for a in self.patchsets[-1]['approvals']:
cur = status[a['type']]
cat_min, cat_max = self.categories[a['type']][1:]
new = int(a['value'])
if new == cat_min:
cur = new
elif abs(new) > abs(cur):
cur = new
status[a['type']] = cur
labels = []
ok = True
for typ, cat in self.categories.items():
cur = status[typ]
cat_min, cat_max = cat[1:]
if cur == cat_min:
value = 'REJECT'
ok = False
elif cur == cat_max:
value = 'OK'
else:
value = 'NEED'
ok = False
labels.append({'label': cat[0], 'status': value})
if ok:
return [{'status': 'OK'}]
return [{'status': 'NOT_READY',
'labels': labels}]
def getSubmitRequirements(self):
return self.submit_requirements
def setSubmitRequirements(self, reqs):
self.submit_requirements = reqs
def setDependsOn(self, other, patchset):
self.depends_on_change = other
self.depends_on_patchset = patchset
d = {'id': other.data['id'],
'number': other.data['number'],
'ref': other.patchsets[patchset - 1]['ref']
}
self.data['dependsOn'] = [d]
other.needed_by_changes.append((self, len(self.patchsets)))
needed = other.data.get('neededBy', [])
d = {'id': self.data['id'],
'number': self.data['number'],
'ref': self.patchsets[-1]['ref'],
'revision': self.patchsets[-1]['revision']
}
needed.append(d)
other.data['neededBy'] = needed
def query(self):
self.queried += 1
d = self.data.get('dependsOn')
if d:
d = d[0]
if (self.depends_on_change.patchsets[-1]['ref'] == d['ref']):
d['isCurrentPatchSet'] = True
else:
d['isCurrentPatchSet'] = False
return json.loads(json.dumps(self.data))
def queryHTTP(self, internal=False):
if not internal:
self.queried += 1
labels = {}
for cat in self.categories:
labels[cat] = {}
for app in self.patchsets[-1]['approvals']:
label = labels[app['type']]
_, label_min, label_max = self.categories[app['type']]
val = int(app['value'])
label_all = label.setdefault('all', [])
approval = {
"value": val,
"username": app['by']['username'],
"email": app['by']['email'],
"date": str(datetime.datetime.fromtimestamp(app['grantedOn'])),
}
if app.get('__tag') is not None:
approval['tag'] = app['__tag']
label_all.append(approval)
if val == label_min:
label['blocking'] = True
if 'rejected' not in label:
label['rejected'] = app['by']
if val == label_max:
if 'approved' not in label:
label['approved'] = app['by']
revisions = {}
rev = self.patchsets[-1]
num = len(self.patchsets)
files = {}
for f in rev['files']:
if f['file'] == '/COMMIT_MSG':
continue
files[f['file']] = {"status": f['type'][0]} # ADDED -> A
parent = '0000000000000000000000000000000000000000'
if self.depends_on_change:
parent = self.depends_on_change.patchsets[
self.depends_on_patchset - 1]['revision']
revisions[rev['revision']] = {
"kind": "REWORK",
"_number": num,
"created": rev['createdOn'],
"uploader": rev['uploader'],
"ref": rev['ref'],
"commit": {
"subject": self.subject,
"message": self.data['commitMessage'],
"parents": [{
"commit": parent,
}]
},
"files": files
}
data = {
"id": self.project + '~' + self.branch + '~' + self.data['id'],
"project": self.project,
"branch": self.branch,
"hashtags": [],
"change_id": self.data['id'],
"subject": self.subject,
"status": self.data['status'],
"created": self.data['createdOn'],
"updated": self.data['lastUpdated'],
"_number": self.number,
"owner": self.data['owner'],
"labels": labels,
"current_revision": self.patchsets[-1]['revision'],
"revisions": revisions,
"requirements": [],
"work_in_progresss": self.data.get('wip', False)
}
if 'parents' in self.data:
data['parents'] = self.data['parents']
if 'topic' in self.data:
data['topic'] = self.data['topic']
data['submit_requirements'] = self.getSubmitRequirements()
return json.loads(json.dumps(data))
def queryRevisionHTTP(self, revision):
for ps in self.patchsets:
if ps['revision'] == revision:
break
else:
return None
changes = []
if self.depends_on_change:
changes.append({
"commit": {
"commit": self.depends_on_change.patchsets[
self.depends_on_patchset - 1]['revision'],
},
"_change_number": self.depends_on_change.number,
"_revision_number": self.depends_on_patchset
})
for (needed_by_change, needed_by_patchset) in self.needed_by_changes:
changes.append({
"commit": {
"commit": needed_by_change.patchsets[
needed_by_patchset - 1]['revision'],
},
"_change_number": needed_by_change.number,
"_revision_number": needed_by_patchset,
})
return {"changes": changes}
def queryFilesHTTP(self, revision):
for rev in self.patchsets:
if rev['revision'] == revision:
break
else:
return None
files = {}
for f in rev['files']:
files[f['file']] = {"status": f['type'][0]} # ADDED -> A
return files
def setMerged(self):
if (self.depends_on_change and
self.depends_on_change.data['status'] != 'MERGED'):
return
if self.fail_merge:
return
self.data['status'] = 'MERGED'
self.data['open'] = False
self.open = False
path = os.path.join(self.upstream_root, self.project)
repo = git.Repo(path)
repo.head.reference = self.branch
repo.head.reset(working_tree=True)
repo.git.merge('-s', 'resolve', self.patchsets[-1]['ref'])
repo.heads[self.branch].commit = repo.head.commit
def setReported(self):
self.reported += 1
class GerritWebServer(object):
def __init__(self, fake_gerrit):
super(GerritWebServer, self).__init__()
self.fake_gerrit = fake_gerrit
def start(self):
fake_gerrit = self.fake_gerrit
class Server(http.server.SimpleHTTPRequestHandler):
log = logging.getLogger("zuul.test.FakeGerritConnection")
review_re = re.compile('/a/changes/(.*?)/revisions/(.*?)/review')
together_re = re.compile('/a/changes/(.*?)/submitted_together')
submit_re = re.compile('/a/changes/(.*?)/submit')
pending_checks_re = re.compile(
r'/a/plugins/checks/checks\.pending/\?'
r'query=checker:(.*?)\+\(state:(.*?)\)')
update_checks_re = re.compile(
r'/a/changes/(.*)/revisions/(.*?)/checks/(.*)')
list_checkers_re = re.compile('/a/plugins/checks/checkers/')
change_re = re.compile(r'/a/changes/(.*)\?o=.*')
related_re = re.compile(r'/a/changes/(.*)/revisions/(.*)/related')
files_re = re.compile(r'/a/changes/(.*)/revisions/(.*)/files'
r'\?parent=1')
change_search_re = re.compile(r'/a/changes/\?n=500.*&q=(.*)')
version_re = re.compile(r'/a/config/server/version')
def do_POST(self):
path = self.path
self.log.debug("Got POST %s", path)
data = self.rfile.read(int(self.headers['Content-Length']))
data = json.loads(data.decode('utf-8'))
self.log.debug("Got data %s", data)
m = self.review_re.match(path)
if m:
return self.review(m.group(1), m.group(2), data)
m = self.submit_re.match(path)
if m:
return self.submit(m.group(1), data)
m = self.update_checks_re.match(path)
if m:
return self.update_checks(
m.group(1), m.group(2), m.group(3), data)
self.send_response(500)
self.end_headers()
def do_GET(self):
path = self.path
self.log.debug("Got GET %s", path)
m = self.change_re.match(path)
if m:
return self.get_change(m.group(1))
m = self.related_re.match(path)
if m:
return self.get_related(m.group(1), m.group(2))
m = self.files_re.match(path)
if m:
return self.get_files(m.group(1), m.group(2))
m = self.together_re.match(path)
if m:
return self.get_submitted_together(m.group(1))
m = self.change_search_re.match(path)
if m:
return self.get_changes(m.group(1))
m = self.pending_checks_re.match(path)
if m:
return self.get_pending_checks(m.group(1), m.group(2))
m = self.list_checkers_re.match(path)
if m:
return self.list_checkers()
m = self.version_re.match(path)
if m:
return self.version()
self.send_response(500)
self.end_headers()
def _403(self, msg):
self.send_response(403)
self.end_headers()
self.wfile.write(msg.encode('utf8'))
def _404(self):
self.send_response(404)
self.end_headers()
def _409(self):
self.send_response(409)
self.end_headers()
def _get_change(self, change_id):
change_id = urllib.parse.unquote(change_id)
project, branch, change = change_id.split('~')
for c in fake_gerrit.changes.values():
if (c.data['id'] == change and
c.data['branch'] == branch and
c.data['project'] == project):
return c
def review(self, change_id, revision, data):
change = self._get_change(change_id)
if not change:
return self._404()
message = data['message']
b_len = len(message.encode('utf-8'))
if b_len > gerritconnection.GERRIT_HUMAN_MESSAGE_LIMIT:
self.send_response(400, message='Message length exceeded')
self.end_headers()
return
labels = data.get('labels', {})
comments = data.get('robot_comments', data.get('comments', {}))
tag = data.get('tag', None)
fake_gerrit._test_handle_review(
int(change.data['number']), message, False, labels,
True, False, comments, tag=tag)
self.send_response(200)
self.end_headers()
def submit(self, change_id, data):
change = self._get_change(change_id)
if not change:
return self._404()
if not fake_gerrit._fake_submit_permission:
return self._403('submit not permitted')
candidate = self._get_change(change_id)
sr = candidate.getSubmitRecords()
if sr[0]['status'] != 'OK':
# One of the changes in this topic isn't
# ready to merge
return self._409()
                changes_to_merge = {change.data['number']}
if fake_gerrit._fake_submit_whole_topic:
results = fake_gerrit._test_get_submitted_together(change)
for record in results:
candidate = self._get_change(record['id'])
sr = candidate.getSubmitRecords()
if sr[0]['status'] != 'OK':
# One of the changes in this topic isn't
# ready to merge
return self._409()
changes_to_merge.add(candidate.data['number'])
message = None
labels = {}
for change_number in changes_to_merge:
fake_gerrit._test_handle_review(
int(change_number), message, True, labels,
False, True)
self.send_response(200)
self.end_headers()
def update_checks(self, change_id, revision, checker, data):
self.log.debug("Update checks %s %s %s",
change_id, revision, checker)
change = self._get_change(change_id)
if not change:
return self._404()
change.setCheck(checker, **data)
self.send_response(200)
                # TODO: return the real data structure; zuul currently
                # ignores it.
self.end_headers()
def get_pending_checks(self, checker, state):
self.log.debug("Get pending checks %s %s", checker, state)
ret = []
for c in fake_gerrit.changes.values():
if checker not in c.checks:
continue
patchset_pending_checks = {}
if c.checks[checker]['state'] == state:
patchset_pending_checks[checker] = {
'state': c.checks[checker]['state'],
}
if patchset_pending_checks:
ret.append({
'patch_set': {
'repository': c.project,
'change_number': c.number,
'patch_set_id': c.latest_patchset,
},
'pending_checks': patchset_pending_checks,
})
self.send_data(ret)
def list_checkers(self):
self.log.debug("Get checkers")
self.send_data(fake_gerrit.fake_checkers)
def get_change(self, number):
change = fake_gerrit.changes.get(int(number))
if not change:
return self._404()
self.send_data(change.queryHTTP())
self.end_headers()
def get_related(self, number, revision):
change = fake_gerrit.changes.get(int(number))
if not change:
return self._404()
data = change.queryRevisionHTTP(revision)
if data is None:
return self._404()
self.send_data(data)
self.end_headers()
def get_files(self, number, revision):
change = fake_gerrit.changes.get(int(number))
if not change:
return self._404()
data = change.queryFilesHTTP(revision)
if data is None:
return self._404()
self.send_data(data)
self.end_headers()
def get_submitted_together(self, number):
change = fake_gerrit.changes.get(int(number))
if not change:
return self._404()
results = fake_gerrit._test_get_submitted_together(change)
self.send_data(results)
self.end_headers()
def get_changes(self, query):
self.log.debug("simpleQueryHTTP: %s", query)
query = urllib.parse.unquote(query)
fake_gerrit.queries.append(query)
results = []
if query.startswith('(') and 'OR' in query:
query = query[1:-1]
for q in query.split(' OR '):
for r in fake_gerrit._simpleQuery(q, http=True):
if r not in results:
results.append(r)
else:
results = fake_gerrit._simpleQuery(query, http=True)
self.send_data(results)
self.end_headers()
def version(self):
self.send_data('3.0.0-some-stuff')
self.end_headers()
def send_data(self, data):
data = json.dumps(data).encode('utf-8')
data = b")]}'\n" + data
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.send_header('Content-Length', len(data))
self.end_headers()
self.wfile.write(data)
def log_message(self, fmt, *args):
self.log.debug(fmt, *args)
self.httpd = socketserver.ThreadingTCPServer(('', 0), Server)
self.port = self.httpd.socket.getsockname()[1]
self.thread = threading.Thread(name='GerritWebServer',
target=self.httpd.serve_forever)
self.thread.daemon = True
self.thread.start()
def stop(self):
self.httpd.shutdown()
self.thread.join()
self.httpd.server_close()
class FakeGerritPoller(gerritconnection.GerritChecksPoller):
"""A Fake Gerrit poller for use in tests.
This subclasses
:py:class:`~zuul.connection.gerrit.GerritPoller`.
"""
poll_interval = 1
def _poll(self, *args, **kw):
r = super(FakeGerritPoller, self)._poll(*args, **kw)
# Set the event so tests can confirm that the poller has run
# after they changed something.
self.connection._poller_event.set()
return r
class FakeGerritRefWatcher(gitwatcher.GitWatcher):
"""A Fake Gerrit ref watcher.
This subclasses
:py:class:`~zuul.connection.git.GitWatcher`.
"""
def __init__(self, *args, **kw):
super(FakeGerritRefWatcher, self).__init__(*args, **kw)
self.baseurl = self.connection.upstream_root
self.poll_delay = 1
def _poll(self, *args, **kw):
r = super(FakeGerritRefWatcher, self)._poll(*args, **kw)
# Set the event so tests can confirm that the watcher has run
# after they changed something.
self.connection._ref_watcher_event.set()
return r
class FakeElasticsearchConnection(elconnection.ElasticsearchConnection):
log = logging.getLogger("zuul.test.FakeElasticsearchConnection")
def __init__(self, driver, connection_name, connection_config):
self.driver = driver
self.connection_name = connection_name
self.source_it = None
def add_docs(self, source_it, index):
self.source_it = source_it
self.index = index
class FakeGerritConnection(gerritconnection.GerritConnection):
"""A Fake Gerrit connection for use in tests.
This subclasses
:py:class:`~zuul.connection.gerrit.GerritConnection` to add the
ability for tests to add changes to the fake Gerrit it represents.
"""
log = logging.getLogger("zuul.test.FakeGerritConnection")
_poller_class = FakeGerritPoller
_ref_watcher_class = FakeGerritRefWatcher
def __init__(self, driver, connection_name, connection_config,
changes_db=None, upstream_root=None, poller_event=None,
ref_watcher_event=None):
if connection_config.get('password'):
self.web_server = GerritWebServer(self)
self.web_server.start()
url = 'http://localhost:%s' % self.web_server.port
connection_config['baseurl'] = url
else:
self.web_server = None
super(FakeGerritConnection, self).__init__(driver, connection_name,
connection_config)
self.fixture_dir = os.path.join(FIXTURE_DIR, 'gerrit')
self.change_number = 0
self.changes = changes_db
self.queries = []
self.upstream_root = upstream_root
self.fake_checkers = []
self._poller_event = poller_event
self._ref_watcher_event = ref_watcher_event
self._fake_submit_whole_topic = False
self._fake_submit_permission = True
self.submit_retry_backoff = 0
def onStop(self):
super().onStop()
if self.web_server:
self.web_server.stop()
def addFakeChecker(self, **kw):
self.fake_checkers.append(kw)
def addFakeChange(self, project, branch, subject, status='NEW',
files=None, parent=None, merge_parents=None,
merge_files=None, topic=None, empty=False):
"""Add a change to the fake Gerrit."""
self.change_number += 1
c = FakeGerritChange(self, self.change_number, project, branch,
subject, upstream_root=self.upstream_root,
status=status, files=files, parent=parent,
merge_parents=merge_parents,
merge_files=merge_files,
topic=topic, empty=empty)
self.changes[self.change_number] = c
return c
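    # A hedged usage sketch: tests typically create a change on the fake
    # connection and then enqueue one of its fake events. The 'fake_gerrit'
    # attribute name and the addEvent()/waitUntilSettled() helpers are
    # assumed to be provided by the surrounding test framework.
    #
    #     A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
    #     self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
    #     self.waitUntilSettled()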
def addFakeTag(self, project, branch, tag):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
commit = repo.heads[branch].commit
newrev = commit.hexsha
ref = 'refs/tags/' + tag
git.Tag.create(repo, tag, commit)
event = {
"type": "ref-updated",
"submitter": {
"name": "User Name",
},
"refUpdate": {
"oldRev": 40 * '0',
"newRev": newrev,
"refName": ref,
"project": project,
}
}
return event
def getFakeBranchCreatedEvent(self, project, branch):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
oldrev = 40 * '0'
event = {
"type": "ref-updated",
"submitter": {
"name": "User Name",
},
"refUpdate": {
"oldRev": oldrev,
"newRev": repo.heads[branch].commit.hexsha,
"refName": 'refs/heads/' + branch,
"project": project,
}
}
return event
def getFakeBranchDeletedEvent(self, project, branch):
oldrev = '4abd38457c2da2a72d4d030219ab180ecdb04bf0'
newrev = 40 * '0'
event = {
"type": "ref-updated",
"submitter": {
"name": "User Name",
},
"refUpdate": {
"oldRev": oldrev,
"newRev": newrev,
"refName": 'refs/heads/' + branch,
"project": project,
}
}
return event
def review(self, item, message, submit, labels, checks_api, file_comments,
phase1, phase2, zuul_event_id=None):
if self.web_server:
return super(FakeGerritConnection, self).review(
item, message, submit, labels, checks_api, file_comments,
phase1, phase2, zuul_event_id)
self._test_handle_review(int(item.change.number), message, submit,
labels, phase1, phase2)
def _test_get_submitted_together(self, change):
topic = change.data.get('topic')
if not self._fake_submit_whole_topic:
topic = None
if topic:
results = self._simpleQuery(f'topic:{topic}', http=True)
else:
results = [change.queryHTTP(internal=True)]
for dep in change.data.get('dependsOn', []):
dep_change = self.changes.get(int(dep['number']))
r = dep_change.queryHTTP(internal=True)
if r not in results:
results.append(r)
if len(results) == 1:
return []
return results
def _test_handle_review(self, change_number, message, submit, labels,
phase1, phase2, file_comments=None, tag=None):
# Handle a review action from a test
change = self.changes[change_number]
# Add the approval back onto the change (ie simulate what gerrit would
# do).
# Usually when zuul leaves a review it'll create a feedback loop where
# zuul's review enters another gerrit event (which is then picked up by
# zuul). However, we can't mimic this behaviour (by adding this
        # approval event into the queue) because it would stop tests from
        # checking what happens before this event is triggered. If a test
        # needs to see what happens, it can add its own verified event into
        # the queue. Nevertheless, we can update the change with the new
        # review in gerrit.
if phase1:
for cat in labels:
change.addApproval(cat, labels[cat], username=self.user,
tag=tag)
if message:
change.messages.append(message)
if file_comments:
for filename, commentlist in file_comments.items():
for comment in commentlist:
change.addComment(filename, comment['line'],
comment['message'], 'Zuul',
'[email protected]', self.user,
comment.get('range'))
if message:
change.setReported()
if submit and phase2:
change.setMerged()
def queryChangeSSH(self, number, event=None):
self.log.debug("Query change SSH: %s", number)
change = self.changes.get(int(number))
if change:
return change.query()
return {}
def _simpleQuery(self, query, http=False):
if http:
def queryMethod(change):
return change.queryHTTP()
else:
def queryMethod(change):
return change.query()
        # the query can be in parentheses, so strip them if needed
if query.startswith('('):
query = query[1:-1]
if query.startswith('change:'):
# Query a specific changeid
changeid = query[len('change:'):]
l = [queryMethod(change) for change in self.changes.values()
if (change.data['id'] == changeid or
change.data['number'] == changeid)]
elif query.startswith('message:'):
# Query the content of a commit message
msg = query[len('message:'):].strip()
# Remove quoting if it is there
if msg.startswith('{') and msg.endswith('}'):
msg = msg[1:-1]
l = [queryMethod(change) for change in self.changes.values()
if msg in change.data['commitMessage']]
else:
cut_off_time = 0
l = list(self.changes.values())
parts = query.split(" ")
for part in parts:
if part.startswith("-age"):
_, _, age = part.partition(":")
cut_off_time = (
datetime.datetime.now().timestamp() - float(age[:-1])
)
l = [
change for change in l
if change.data["lastUpdated"] >= cut_off_time
]
if part.startswith('topic:'):
topic = part[len('topic:'):].strip()
l = [
change for change in l
if 'topic' in change.data
and topic in change.data['topic']
]
l = [queryMethod(change) for change in l]
return l
def simpleQuerySSH(self, query, event=None):
log = get_annotated_logger(self.log, event)
log.debug("simpleQuerySSH: %s", query)
self.queries.append(query)
results = []
if query.startswith('(') and 'OR' in query:
query = query[1:-1]
for q in query.split(' OR '):
for r in self._simpleQuery(q):
if r not in results:
results.append(r)
else:
results = self._simpleQuery(query)
return results
def startSSHListener(self, *args, **kw):
pass
def _uploadPack(self, project):
ret = ('00a31270149696713ba7e06f1beb760f20d359c4abed HEAD\x00'
'multi_ack thin-pack side-band side-band-64k ofs-delta '
'shallow no-progress include-tag multi_ack_detailed no-done\n')
path = os.path.join(self.upstream_root, project.name)
repo = git.Repo(path)
for ref in repo.refs:
if ref.path.endswith('.lock'):
# don't treat lockfiles as ref
continue
r = ref.object.hexsha + ' ' + ref.path + '\n'
ret += '%04x%s' % (len(r) + 4, r)
ret += '0000'
return ret
def getGitUrl(self, project):
return 'file://' + os.path.join(self.upstream_root, project.name)
class PagureChangeReference(git.Reference):
_common_path_default = "refs/pull"
_points_to_commits_only = True
class FakePagurePullRequest(object):
log = logging.getLogger("zuul.test.FakePagurePullRequest")
def __init__(self, pagure, number, project, branch,
subject, upstream_root, files={}, number_of_commits=1,
initial_comment=None):
self.pagure = pagure
self.source = pagure
self.number = number
self.project = project
self.branch = branch
self.subject = subject
self.upstream_root = upstream_root
self.number_of_commits = 0
self.status = 'Open'
self.initial_comment = initial_comment
self.uuid = uuid.uuid4().hex
self.comments = []
self.flags = []
self.files = {}
self.tags = []
        self.cached_merge_status = 'MERGE'
        self.threshold_reached = False
        self.commit_stop = None
        self.commit_start = None
self.url = "https://%s/%s/pull-request/%s" % (
self.pagure.server, self.project, self.number)
self.is_merged = False
self.pr_ref = self._createPRRef()
self._addCommitInPR(files=files)
self._updateTimeStamp()
def _getPullRequestEvent(self, action, pull_data_field='pullrequest'):
name = 'pg_pull_request'
data = {
'msg': {
pull_data_field: {
'branch': self.branch,
'comments': self.comments,
'commit_start': self.commit_start,
'commit_stop': self.commit_stop,
'date_created': '0',
'tags': self.tags,
'initial_comment': self.initial_comment,
'id': self.number,
'project': {
'fullname': self.project,
},
'status': self.status,
'subject': self.subject,
'uid': self.uuid,
}
},
'msg_id': str(uuid.uuid4()),
'timestamp': 1427459070,
'topic': action
}
if action == 'pull-request.flag.added':
data['msg']['flag'] = self.flags[0]
if action == 'pull-request.tag.added':
data['msg']['tags'] = self.tags
return (name, data)
def getPullRequestOpenedEvent(self):
return self._getPullRequestEvent('pull-request.new')
def getPullRequestClosedEvent(self, merged=True):
if merged:
self.is_merged = True
self.status = 'Merged'
else:
self.is_merged = False
self.status = 'Closed'
return self._getPullRequestEvent('pull-request.closed')
def getPullRequestUpdatedEvent(self):
self._addCommitInPR()
self.addComment(
"**1 new commit added**\n\n * ``Bump``\n",
True)
return self._getPullRequestEvent('pull-request.comment.added')
def getPullRequestCommentedEvent(self, message):
self.addComment(message)
return self._getPullRequestEvent('pull-request.comment.added')
def getPullRequestInitialCommentEvent(self, message):
self.initial_comment = message
self._updateTimeStamp()
return self._getPullRequestEvent('pull-request.initial_comment.edited')
def getPullRequestTagAddedEvent(self, tags, reset=True):
if reset:
self.tags = []
_tags = set(self.tags)
_tags.update(set(tags))
self.tags = list(_tags)
self.addComment(
"**Metadata Update from @pingou**:\n- " +
"Pull-request tagged with: %s" % ', '.join(tags),
True)
self._updateTimeStamp()
return self._getPullRequestEvent(
'pull-request.tag.added', pull_data_field='pull_request')
def getPullRequestStatusSetEvent(self, status, username="zuul"):
self.addFlag(
status, "https://url", "Build %s" % status, username)
return self._getPullRequestEvent('pull-request.flag.added')
def insertFlag(self, flag):
to_pop = None
for i, _flag in enumerate(self.flags):
if _flag['uid'] == flag['uid']:
to_pop = i
if to_pop is not None:
self.flags.pop(to_pop)
self.flags.insert(0, flag)
def addFlag(self, status, url, comment, username="zuul"):
flag_uid = "%s-%s-%s" % (username, self.number, self.project)
flag = {
"username": "Zuul CI",
"user": {
"name": username
},
"uid": flag_uid[:32],
"comment": comment,
"status": status,
"url": url
}
self.insertFlag(flag)
self._updateTimeStamp()
def editInitialComment(self, initial_comment):
self.initial_comment = initial_comment
self._updateTimeStamp()
def addComment(self, message, notification=False, fullname=None):
self.comments.append({
'comment': message,
'notification': notification,
'date_created': str(int(time.time())),
'user': {
'fullname': fullname or 'Pingou'
}}
)
self._updateTimeStamp()
def getPRReference(self):
return '%s/head' % self.number
def _getRepo(self):
repo_path = os.path.join(self.upstream_root, self.project)
return git.Repo(repo_path)
def _createPRRef(self):
repo = self._getRepo()
return PagureChangeReference.create(
repo, self.getPRReference(), 'refs/tags/init')
def addCommit(self, files={}, delete_files=None):
"""Adds a commit on top of the actual PR head."""
self._addCommitInPR(files=files, delete_files=delete_files)
self._updateTimeStamp()
def forcePush(self, files={}):
"""Clears actual commits and add a commit on top of the base."""
self._addCommitInPR(files=files, reset=True)
self._updateTimeStamp()
def _addCommitInPR(self, files={}, delete_files=None, reset=False):
repo = self._getRepo()
ref = repo.references[self.getPRReference()]
if reset:
self.number_of_commits = 0
ref.set_object('refs/tags/init')
self.number_of_commits += 1
repo.head.reference = ref
repo.git.clean('-x', '-f', '-d')
if files:
self.files = files
elif not delete_files:
fn = '%s-%s' % (self.branch.replace('/', '_'), self.number)
self.files = {fn: "test %s %s\n" % (self.branch, self.number)}
msg = self.subject + '-' + str(self.number_of_commits)
for fn, content in self.files.items():
fn = os.path.join(repo.working_dir, fn)
with open(fn, 'w') as f:
f.write(content)
repo.index.add([fn])
if delete_files:
for fn in delete_files:
if fn in self.files:
del self.files[fn]
fn = os.path.join(repo.working_dir, fn)
repo.index.remove([fn])
self.commit_stop = repo.index.commit(msg).hexsha
if not self.commit_start:
self.commit_start = self.commit_stop
repo.create_head(self.getPRReference(), self.commit_stop, force=True)
self.pr_ref.set_commit(self.commit_stop)
repo.head.reference = 'master'
repo.git.clean('-x', '-f', '-d')
repo.heads['master'].checkout()
def _updateTimeStamp(self):
self.last_updated = str(int(time.time()))
class FakePagureAPIClient(pagureconnection.PagureAPIClient):
log = logging.getLogger("zuul.test.FakePagureAPIClient")
def __init__(self, baseurl, api_token, project,
pull_requests_db={}):
super(FakePagureAPIClient, self).__init__(
baseurl, api_token, project)
self.session = None
self.pull_requests = pull_requests_db
self.return_post_error = None
def gen_error(self, verb, custom_only=False):
if verb == 'POST' and self.return_post_error:
return {
'error': self.return_post_error['error'],
'error_code': self.return_post_error['error_code']
}, 401, "", 'POST'
self.return_post_error = None
if not custom_only:
return {
'error': 'some error',
'error_code': 'some error code'
}, 503, "", verb
def _get_pr(self, match):
project, number = match.groups()
pr = self.pull_requests.get(project, {}).get(number)
if not pr:
return self.gen_error("GET")
return pr
def get(self, url):
self.log.debug("Getting resource %s ..." % url)
match = re.match(r'.+/api/0/(.+)/pull-request/(\d+)$', url)
if match:
pr = self._get_pr(match)
return {
'branch': pr.branch,
'subject': pr.subject,
'status': pr.status,
'initial_comment': pr.initial_comment,
'last_updated': pr.last_updated,
'comments': pr.comments,
'commit_stop': pr.commit_stop,
'threshold_reached': pr.threshold_reached,
'cached_merge_status': pr.cached_merge_status,
'tags': pr.tags,
}, 200, "", "GET"
match = re.match(r'.+/api/0/(.+)/pull-request/(\d+)/flag$', url)
if match:
pr = self._get_pr(match)
return {'flags': pr.flags}, 200, "", "GET"
match = re.match('.+/api/0/(.+)/git/branches$', url)
if match:
# project = match.groups()[0]
return {'branches': ['master']}, 200, "", "GET"
match = re.match(r'.+/api/0/(.+)/pull-request/(\d+)/diffstats$', url)
if match:
pr = self._get_pr(match)
return pr.files, 200, "", "GET"
def post(self, url, params=None):
self.log.info(
"Posting on resource %s, params (%s) ..." % (url, params))
# Will only match if return_post_error is set
err = self.gen_error("POST", custom_only=True)
if err:
return err
match = re.match(r'.+/api/0/(.+)/pull-request/(\d+)/merge$', url)
if match:
pr = self._get_pr(match)
pr.status = 'Merged'
pr.is_merged = True
return {}, 200, "", "POST"
match = re.match(r'.+/api/0/-/whoami$', url)
if match:
return {"username": "zuul"}, 200, "", "POST"
if not params:
return self.gen_error("POST")
match = re.match(r'.+/api/0/(.+)/pull-request/(\d+)/flag$', url)
if match:
pr = self._get_pr(match)
params['user'] = {"name": "zuul"}
pr.insertFlag(params)
match = re.match(r'.+/api/0/(.+)/pull-request/(\d+)/comment$', url)
if match:
pr = self._get_pr(match)
pr.addComment(params['comment'])
return {}, 200, "", "POST"
class FakePagureConnection(pagureconnection.PagureConnection):
log = logging.getLogger("zuul.test.FakePagureConnection")
def __init__(self, driver, connection_name, connection_config,
changes_db=None, upstream_root=None):
super(FakePagureConnection, self).__init__(driver, connection_name,
connection_config)
self.connection_name = connection_name
self.pr_number = 0
self.pull_requests = changes_db
self.statuses = {}
self.upstream_root = upstream_root
self.reports = []
self.cloneurl = self.upstream_root
def get_project_api_client(self, project):
client = FakePagureAPIClient(
self.baseurl, None, project,
pull_requests_db=self.pull_requests)
if not self.username:
self.set_my_username(client)
return client
def get_project_webhook_token(self, project):
return 'fake_webhook_token-%s' % project
def emitEvent(self, event, use_zuulweb=False, project=None,
wrong_token=False):
name, payload = event
if use_zuulweb:
if not wrong_token:
secret = 'fake_webhook_token-%s' % project
else:
secret = ''
payload = json.dumps(payload).encode('utf-8')
signature, _ = pagureconnection._sign_request(payload, secret)
headers = {'x-pagure-signature': signature,
'x-pagure-project': project}
return requests.post(
'http://127.0.0.1:%s/api/connection/%s/payload'
% (self.zuul_web_port, self.connection_name),
data=payload, headers=headers)
else:
data = {'payload': payload}
self.event_queue.put(data)
return data
def openFakePullRequest(self, project, branch, subject, files=[],
initial_comment=None):
self.pr_number += 1
pull_request = FakePagurePullRequest(
self, self.pr_number, project, branch, subject, self.upstream_root,
files=files, initial_comment=initial_comment)
self.pull_requests.setdefault(
project, {})[str(self.pr_number)] = pull_request
return pull_request
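    # A hedged usage sketch: a test opens a fake pull-request and then feeds
    # one of its events back through emitEvent(); the 'fake_pagure' attribute
    # name is assumed for illustration.
    #
    #     pr = self.fake_pagure.openFakePullRequest('org/project', 'master', 'A')
    #     self.fake_pagure.emitEvent(pr.getPullRequestOpenedEvent())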
def getGitReceiveEvent(self, project):
name = 'pg_push'
repo_path = os.path.join(self.upstream_root, project)
repo = git.Repo(repo_path)
headsha = repo.head.commit.hexsha
data = {
'msg': {
'project_fullname': project,
'branch': 'master',
'end_commit': headsha,
'old_commit': '1' * 40,
},
'msg_id': str(uuid.uuid4()),
'timestamp': 1427459070,
'topic': 'git.receive',
}
return (name, data)
def getGitTagCreatedEvent(self, project, tag, rev):
name = 'pg_push'
data = {
'msg': {
'project_fullname': project,
'tag': tag,
'rev': rev
},
'msg_id': str(uuid.uuid4()),
'timestamp': 1427459070,
'topic': 'git.tag.creation',
}
return (name, data)
def getGitBranchEvent(self, project, branch, type, rev):
name = 'pg_push'
data = {
'msg': {
'project_fullname': project,
'branch': branch,
'rev': rev,
},
'msg_id': str(uuid.uuid4()),
'timestamp': 1427459070,
'topic': 'git.branch.%s' % type,
}
return (name, data)
def setZuulWebPort(self, port):
self.zuul_web_port = port
FakeGitlabBranch = namedtuple('Branch', ('name', 'protected'))
class FakeGitlabConnection(gitlabconnection.GitlabConnection):
log = logging.getLogger("zuul.test.FakeGitlabConnection")
def __init__(self, driver, connection_name, connection_config,
changes_db=None, upstream_root=None):
self.merge_requests = changes_db
self.upstream_root = upstream_root
self.mr_number = 0
self._test_web_server = tests.fakegitlab.GitlabWebServer(changes_db)
self._test_web_server.start()
self._test_baseurl = 'http://localhost:%s' % self._test_web_server.port
connection_config['baseurl'] = self._test_baseurl
super(FakeGitlabConnection, self).__init__(driver, connection_name,
connection_config)
def onStop(self):
super().onStop()
self._test_web_server.stop()
def addProject(self, project):
super(FakeGitlabConnection, self).addProject(project)
self.addProjectByName(project.name)
def addProjectByName(self, project_name):
owner, proj = project_name.split('/')
repo = self._test_web_server.fake_repos[(owner, proj)]
branch = FakeGitlabBranch('master', False)
if 'master' not in repo:
repo.append(branch)
def protectBranch(self, owner, project, branch, protected=True):
if branch in self._test_web_server.fake_repos[(owner, project)]:
del self._test_web_server.fake_repos[(owner, project)][branch]
fake_branch = FakeGitlabBranch(branch, protected=protected)
self._test_web_server.fake_repos[(owner, project)].append(fake_branch)
def deleteBranch(self, owner, project, branch):
if branch in self._test_web_server.fake_repos[(owner, project)]:
del self._test_web_server.fake_repos[(owner, project)][branch]
def getGitUrl(self, project):
return 'file://' + os.path.join(self.upstream_root, project.name)
def real_getGitUrl(self, project):
return super(FakeGitlabConnection, self).getGitUrl(project)
def openFakeMergeRequest(self, project,
branch, title, description='', files=[],
base_sha=None):
self.mr_number += 1
merge_request = FakeGitlabMergeRequest(
self, self.mr_number, project, branch, title, self.upstream_root,
files=files, description=description, base_sha=base_sha)
self.merge_requests.setdefault(
project, {})[str(self.mr_number)] = merge_request
return merge_request
def emitEvent(self, event, use_zuulweb=False, project=None):
name, payload = event
if use_zuulweb:
payload = json.dumps(payload).encode('utf-8')
headers = {'x-gitlab-token': self.webhook_token}
return requests.post(
'http://127.0.0.1:%s/api/connection/%s/payload'
% (self.zuul_web_port, self.connection_name),
data=payload, headers=headers)
else:
data = {'payload': payload}
self.event_queue.put(data)
return data
def setZuulWebPort(self, port):
self.zuul_web_port = port
def getPushEvent(
self, project, before=None, after=None,
branch='refs/heads/master',
added_files=None, removed_files=None,
modified_files=None):
if added_files is None:
added_files = []
if removed_files is None:
removed_files = []
if modified_files is None:
modified_files = []
name = 'gl_push'
if not after:
repo_path = os.path.join(self.upstream_root, project)
repo = git.Repo(repo_path)
after = repo.head.commit.hexsha
data = {
'object_kind': 'push',
'before': before or '1' * 40,
'after': after,
'ref': branch,
'project': {
'path_with_namespace': project
},
'commits': [
{
'added': added_files,
'removed': removed_files,
'modified': modified_files
}
],
'total_commits_count': 1,
}
return (name, data)
def getGitTagEvent(self, project, tag, sha):
name = 'gl_push'
data = {
'object_kind': 'tag_push',
'before': '0' * 40,
'after': sha,
'ref': 'refs/tags/%s' % tag,
'project': {
'path_with_namespace': project
},
}
return (name, data)
@contextmanager
def enable_community_edition(self):
self._test_web_server.options['community_edition'] = True
yield
self._test_web_server.options['community_edition'] = False
@contextmanager
def enable_delayed_complete_mr(self, complete_at):
self._test_web_server.options['delayed_complete_mr'] = complete_at
yield
self._test_web_server.options['delayed_complete_mr'] = 0
@contextmanager
def enable_uncomplete_mr(self):
self._test_web_server.options['uncomplete_mr'] = True
orig = self.gl_client.get_mr_wait_factor
self.gl_client.get_mr_wait_factor = 0.1
yield
self.gl_client.get_mr_wait_factor = orig
self._test_web_server.options['uncomplete_mr'] = False
class GitlabChangeReference(git.Reference):
_common_path_default = "refs/merge-requests"
_points_to_commits_only = True
class FakeGitlabMergeRequest(object):
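    """A fake Gitlab merge request.

    The merge request is backed by a real git ref
    (``refs/merge-requests/<number>/head``) in the test's upstream
    repository, and the get*Event helpers build the corresponding
    webhook payloads.
    """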
log = logging.getLogger("zuul.test.FakeGitlabMergeRequest")
def __init__(self, gitlab, number, project, branch,
subject, upstream_root, files=[], description='',
base_sha=None):
self.gitlab = gitlab
self.source = gitlab
self.number = number
self.project = project
self.branch = branch
self.subject = subject
self.description = description
self.upstream_root = upstream_root
self.number_of_commits = 0
self.created_at = datetime.datetime.now(datetime.timezone.utc)
self.updated_at = self.created_at
self.merged_at = None
self.sha = None
self.state = 'opened'
self.is_merged = False
self.merge_status = 'can_be_merged'
self.squash_merge = None
self.labels = []
self.notes = []
self.url = "https://%s/%s/merge_requests/%s" % (
self.gitlab.server, self.project, self.number)
self.base_sha = base_sha
self.approved = False
self.mr_ref = self._createMRRef(base_sha=base_sha)
self._addCommitInMR(files=files)
def _getRepo(self):
repo_path = os.path.join(self.upstream_root, self.project)
return git.Repo(repo_path)
def _createMRRef(self, base_sha=None):
base_sha = base_sha or 'refs/tags/init'
repo = self._getRepo()
return GitlabChangeReference.create(
repo, self.getMRReference(), base_sha)
def getMRReference(self):
return '%s/head' % self.number
def addNote(self, body):
self.notes.append(
{
"body": body,
"created_at": datetime.datetime.now(datetime.timezone.utc),
}
)
def addCommit(self, files=[], delete_files=None):
self._addCommitInMR(files=files, delete_files=delete_files)
self._updateTimeStamp()
def closeMergeRequest(self):
self.state = 'closed'
self._updateTimeStamp()
def mergeMergeRequest(self, squash=None):
self.state = 'merged'
self.is_merged = True
self.squash_merge = squash
self._updateTimeStamp()
self.merged_at = self.updated_at
def reopenMergeRequest(self):
self.state = 'opened'
self._updateTimeStamp()
self.merged_at = None
def _addCommitInMR(self, files=[], delete_files=None, reset=False):
repo = self._getRepo()
ref = repo.references[self.getMRReference()]
if reset:
self.number_of_commits = 0
ref.set_object('refs/tags/init')
self.number_of_commits += 1
repo.head.reference = ref
repo.git.clean('-x', '-f', '-d')
if files:
self.files = files
elif not delete_files:
fn = '%s-%s' % (self.branch.replace('/', '_'), self.number)
self.files = {fn: "test %s %s\n" % (self.branch, self.number)}
msg = self.subject + '-' + str(self.number_of_commits)
for fn, content in self.files.items():
fn = os.path.join(repo.working_dir, fn)
with open(fn, 'w') as f:
f.write(content)
repo.index.add([fn])
if delete_files:
for fn in delete_files:
if fn in self.files:
del self.files[fn]
fn = os.path.join(repo.working_dir, fn)
repo.index.remove([fn])
self.sha = repo.index.commit(msg).hexsha
repo.create_head(self.getMRReference(), self.sha, force=True)
self.mr_ref.set_commit(self.sha)
repo.head.reference = 'master'
repo.git.clean('-x', '-f', '-d')
repo.heads['master'].checkout()
def _updateTimeStamp(self):
self.updated_at = datetime.datetime.now(datetime.timezone.utc)
def getMergeRequestEvent(self, action, previous_labels=None):
name = 'gl_merge_request'
data = {
'object_kind': 'merge_request',
'project': {
'path_with_namespace': self.project
},
'object_attributes': {
'title': self.subject,
'created_at': self.created_at.strftime(
'%Y-%m-%d %H:%M:%S.%f%z'),
'updated_at': self.updated_at.strftime(
'%Y-%m-%d %H:%M:%S UTC'),
'iid': self.number,
'target_branch': self.branch,
'last_commit': {'id': self.sha},
'action': action
},
}
data['labels'] = [{'title': label} for label in self.labels]
data['changes'] = {}
if previous_labels is not None:
data['changes']['labels'] = {
'previous': [{'title': label} for label in previous_labels],
'current': data['labels']
}
return (name, data)
def getMergeRequestOpenedEvent(self):
return self.getMergeRequestEvent(action='open')
def getMergeRequestUpdatedEvent(self):
self.addCommit()
return self.getMergeRequestEvent(action='update')
def getMergeRequestMergedEvent(self):
self.mergeMergeRequest()
return self.getMergeRequestEvent(action='merge')
def getMergeRequestMergedPushEvent(self, added_files=None,
removed_files=None,
modified_files=None):
return self.gitlab.getPushEvent(
project=self.project,
branch='refs/heads/%s' % self.branch,
before=random_sha1(),
after=self.sha,
added_files=added_files,
removed_files=removed_files,
modified_files=modified_files)
def getMergeRequestApprovedEvent(self):
self.approved = True
return self.getMergeRequestEvent(action='approved')
def getMergeRequestUnapprovedEvent(self):
self.approved = False
return self.getMergeRequestEvent(action='unapproved')
def getMergeRequestLabeledEvent(self, add_labels=[], remove_labels=[]):
previous_labels = self.labels
labels = set(previous_labels)
labels = labels - set(remove_labels)
labels = labels | set(add_labels)
self.labels = list(labels)
return self.getMergeRequestEvent(action='update',
previous_labels=previous_labels)
def getMergeRequestCommentedEvent(self, note):
self.addNote(note)
note_date = self.notes[-1]['created_at'].strftime(
'%Y-%m-%d %H:%M:%S UTC')
name = 'gl_merge_request'
data = {
'object_kind': 'note',
'project': {
'path_with_namespace': self.project
},
'merge_request': {
'title': self.subject,
'iid': self.number,
'target_branch': self.branch,
'last_commit': {'id': self.sha}
},
'object_attributes': {
'created_at': note_date,
'updated_at': note_date,
'note': self.notes[-1]['body'],
},
}
return (name, data)
class GithubChangeReference(git.Reference):
_common_path_default = "refs/pull"
_points_to_commits_only = True
class FakeGithubPullRequest(object):
def __init__(self, github, number, project, branch,
subject, upstream_root, files=None, number_of_commits=1,
writers=[], body=None, body_text=None, draft=False,
mergeable=True, base_sha=None):
"""Creates a new PR with several commits.
        Sends an event about the opened PR.
If the `files` argument is provided it must be a dictionary of
file names OR FakeFile instances -> content.
"""
self.github = github
self.source = github
self.number = number
self.project = project
self.branch = branch
self.subject = subject
self.body = body
self.body_text = body_text
self.draft = draft
self.mergeable = mergeable
self.number_of_commits = 0
self.upstream_root = upstream_root
# Dictionary of FakeFile -> content
self.files = {}
self.comments = []
self.labels = []
self.statuses = {}
self.reviews = []
self.writers = []
self.admins = []
self.updated_at = None
self.head_sha = None
self.is_merged = False
self.merge_message = None
self.state = 'open'
self.url = 'https://%s/%s/pull/%s' % (github.server, project, number)
self.base_sha = base_sha
self.pr_ref = self._createPRRef(base_sha=base_sha)
self._addCommitToRepo(files=files)
self._updateTimeStamp()
def addCommit(self, files=None, delete_files=None):
"""Adds a commit on top of the actual PR head."""
self._addCommitToRepo(files=files, delete_files=delete_files)
self._updateTimeStamp()
def forcePush(self, files=None):
"""Clears actual commits and add a commit on top of the base."""
self._addCommitToRepo(files=files, reset=True)
self._updateTimeStamp()
def getPullRequestOpenedEvent(self):
return self._getPullRequestEvent('opened')
def getPullRequestSynchronizeEvent(self):
return self._getPullRequestEvent('synchronize')
def getPullRequestReopenedEvent(self):
return self._getPullRequestEvent('reopened')
def getPullRequestClosedEvent(self):
return self._getPullRequestEvent('closed')
def getPullRequestEditedEvent(self, old_body=None):
return self._getPullRequestEvent('edited', old_body)
def addComment(self, message):
self.comments.append(message)
self._updateTimeStamp()
def getIssueCommentAddedEvent(self, text):
name = 'issue_comment'
data = {
'action': 'created',
'issue': {
'number': self.number
},
'comment': {
'body': text
},
'repository': {
'full_name': self.project
},
'sender': {
'login': 'ghuser'
}
}
return (name, data)
def getCommentAddedEvent(self, text):
name, data = self.getIssueCommentAddedEvent(text)
# A PR comment has an additional 'pull_request' key in the issue data
data['issue']['pull_request'] = {
'url': 'http://%s/api/v3/repos/%s/pull/%s' % (
self.github.server, self.project, self.number)
}
return (name, data)
def getReviewAddedEvent(self, review):
name = 'pull_request_review'
data = {
'action': 'submitted',
'pull_request': {
'number': self.number,
'title': self.subject,
'updated_at': self.updated_at,
'base': {
'ref': self.branch,
'repo': {
'full_name': self.project
}
},
'head': {
'sha': self.head_sha
}
},
'review': {
'state': review
},
'repository': {
'full_name': self.project
},
'sender': {
'login': 'ghuser'
}
}
return (name, data)
def addLabel(self, name):
if name not in self.labels:
self.labels.append(name)
self._updateTimeStamp()
return self._getLabelEvent(name)
def removeLabel(self, name):
if name in self.labels:
self.labels.remove(name)
self._updateTimeStamp()
return self._getUnlabelEvent(name)
def _getLabelEvent(self, label):
name = 'pull_request'
data = {
'action': 'labeled',
'pull_request': {
'number': self.number,
'updated_at': self.updated_at,
'base': {
'ref': self.branch,
'repo': {
'full_name': self.project
}
},
'head': {
'sha': self.head_sha
}
},
'label': {
'name': label
},
'sender': {
'login': 'ghuser'
}
}
return (name, data)
def _getUnlabelEvent(self, label):
name = 'pull_request'
data = {
'action': 'unlabeled',
'pull_request': {
'number': self.number,
'title': self.subject,
'updated_at': self.updated_at,
'base': {
'ref': self.branch,
'repo': {
'full_name': self.project
}
},
'head': {
'sha': self.head_sha,
'repo': {
'full_name': self.project
}
}
},
'label': {
'name': label
},
'sender': {
'login': 'ghuser'
}
}
return (name, data)
def editBody(self, body):
old_body = self.body
self.body = body
self._updateTimeStamp()
return self.getPullRequestEditedEvent(old_body=old_body)
def _getRepo(self):
repo_path = os.path.join(self.upstream_root, self.project)
return git.Repo(repo_path)
def _createPRRef(self, base_sha=None):
base_sha = base_sha or 'refs/tags/init'
repo = self._getRepo()
return GithubChangeReference.create(
repo, self.getPRReference(), base_sha)
def _addCommitToRepo(self, files=None, delete_files=None, reset=False):
repo = self._getRepo()
ref = repo.references[self.getPRReference()]
if reset:
self.number_of_commits = 0
ref.set_object('refs/tags/init')
self.number_of_commits += 1
repo.head.reference = ref
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
if files:
# Normalize the dictionary of 'Union[str,FakeFile] -> content'
# to 'FakeFile -> content'.
normalized_files = {}
for fn, content in files.items():
if isinstance(fn, tests.fakegithub.FakeFile):
normalized_files[fn] = content
else:
normalized_files[tests.fakegithub.FakeFile(fn)] = content
self.files.update(normalized_files)
elif not delete_files:
fn = '%s-%s' % (self.branch.replace('/', '_'), self.number)
content = f"test {self.branch} {self.number}\n"
self.files.update({tests.fakegithub.FakeFile(fn): content})
msg = self.subject + '-' + str(self.number_of_commits)
for fake_file, content in self.files.items():
fn = os.path.join(repo.working_dir, fake_file.filename)
with open(fn, 'w') as f:
f.write(content)
repo.index.add([fn])
if delete_files:
for fn in delete_files:
if fn in self.files:
del self.files[fn]
fn = os.path.join(repo.working_dir, fn)
repo.index.remove([fn])
self.head_sha = repo.index.commit(msg).hexsha
repo.create_head(self.getPRReference(), self.head_sha, force=True)
self.pr_ref.set_commit(self.head_sha)
        # Create an empty list of statuses for the given sha;
        # each sha on a PR may have statuses set on it
self.statuses[self.head_sha] = []
repo.head.reference = 'master'
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
repo.heads['master'].checkout()
def _updateTimeStamp(self):
self.updated_at = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.localtime())
def getPRHeadSha(self):
repo = self._getRepo()
return repo.references[self.getPRReference()].commit.hexsha
def addReview(self, user, state, granted_on=None):
gh_time_format = '%Y-%m-%dT%H:%M:%SZ'
# convert the timestamp to a str format that would be returned
# from github as 'submitted_at' in the API response
if granted_on:
granted_on = datetime.datetime.utcfromtimestamp(granted_on)
submitted_at = time.strftime(
gh_time_format, granted_on.timetuple())
else:
            # GitHub timestamps only have one-second resolution, so we need
            # to make sure reviews that tests add appear to be added over a
            # period of time in the past and not all at once.
if not self.reviews:
# the first review happens 10 mins ago
offset = 600
else:
# subsequent reviews happen 1 minute closer to now
offset = 600 - (len(self.reviews) * 60)
granted_on = datetime.datetime.utcfromtimestamp(
time.time() - offset)
submitted_at = time.strftime(
gh_time_format, granted_on.timetuple())
self.reviews.append(tests.fakegithub.FakeGHReview({
'state': state,
'user': {
'login': user,
'email': user + "@example.com",
},
'submitted_at': submitted_at,
}))
def getPRReference(self):
return '%s/head' % self.number
def _getPullRequestEvent(self, action, old_body=None):
name = 'pull_request'
data = {
'action': action,
'number': self.number,
'pull_request': {
'number': self.number,
'title': self.subject,
'updated_at': self.updated_at,
'base': {
'ref': self.branch,
'repo': {
'full_name': self.project
}
},
'head': {
'sha': self.head_sha,
'repo': {
'full_name': self.project
}
},
'body': self.body
},
'sender': {
'login': 'ghuser'
},
'repository': {
'full_name': self.project,
},
'installation': {
'id': 123,
},
'changes': {},
'labels': [{'name': l} for l in self.labels]
}
if old_body:
data['changes']['body'] = {'from': old_body}
return (name, data)
def getCommitStatusEvent(self, context, state='success', user='zuul'):
name = 'status'
data = {
'state': state,
'sha': self.head_sha,
'name': self.project,
'description': 'Test results for %s: %s' % (self.head_sha, state),
'target_url': 'http://zuul/%s' % self.head_sha,
'branches': [],
'context': context,
'sender': {
'login': user
}
}
return (name, data)
def getCheckRunRequestedEvent(self, cr_name, app="zuul"):
name = "check_run"
data = {
"action": "rerequested",
"check_run": {
"head_sha": self.head_sha,
"name": cr_name,
"app": {
"slug": app,
},
},
"repository": {
"full_name": self.project,
},
}
return (name, data)
def getCheckRunAbortEvent(self, check_run):
# A check run aborted event can only be created from a FakeCheckRun as
# we need some information like external_id which is "calculated"
# during the creation of the check run.
name = "check_run"
data = {
"action": "requested_action",
"requested_action": {
"identifier": "abort",
},
"check_run": {
"head_sha": self.head_sha,
"name": check_run["name"],
"app": {
"slug": check_run["app"]
},
"external_id": check_run["external_id"],
},
"repository": {
"full_name": self.project,
},
}
return (name, data)
def setMerged(self, commit_message):
self.is_merged = True
self.merge_message = commit_message
repo = self._getRepo()
repo.heads[self.branch].commit = repo.commit(self.head_sha)
class FakeGithubClientManager(GithubClientManager):
github_class = tests.fakegithub.FakeGithubClient
github_enterprise_class = tests.fakegithub.FakeGithubEnterpriseClient
log = logging.getLogger("zuul.test.FakeGithubClientManager")
def __init__(self, connection_config):
super().__init__(connection_config)
self.record_clients = False
self.recorded_clients = []
self.github_data = None
def getGithubClient(self,
project_name=None,
zuul_event_id=None):
client = super().getGithubClient(
project_name=project_name,
zuul_event_id=zuul_event_id)
        # Some tests expect the installation id as part of the client
if self.app_id:
inst_id = self.installation_map.get(project_name)
client.setInstId(inst_id)
# The super method creates a fake github client with empty data so
# add it here.
client.setData(self.github_data)
if self.record_clients:
self.recorded_clients.append(client)
return client
def _prime_installation_map(self):
# Only valid if installed as a github app
if not self.app_id:
return
# github_data.repos is a hash like
# { ('org', 'project1'): <dataobj>
# ('org', 'project2'): <dataobj>,
# ('org2', 'project1'): <dataobj>, ... }
#
# we don't care about the value. index by org, e.g.
#
# {
# 'org': ('project1', 'project2')
# 'org2': ('project1', 'project2')
# }
orgs = defaultdict(list)
project_id = 1
for org, project in self.github_data.repos:
# Each entry is in the format for "repositories" response
# of GET /installation/repositories
orgs[org].append({
'id': project_id,
'name': project,
'full_name': '%s/%s' % (org, project)
# note, lots of other stuff that's not relevant
})
project_id += 1
self.log.debug("GitHub installation mapped to: %s" % orgs)
# Mock response to GET /app/installations
app_json = []
app_projects = []
app_id = 1
# Ensure that we ignore suspended apps
app_json.append(
{
'id': app_id,
'suspended_at': '2021-09-23T01:43:44Z',
'suspended_by': {
'login': 'ianw',
'type': 'User',
'id': 12345
}
})
app_projects.append([])
app_id += 1
for org, projects in orgs.items():
# We respond as if each org is a different app instance
#
# Below we will be sent the app_id in a token to query
# what projects this app exports. Keep the projects in a
# sequential list so we can just look up "projects for app
# X" == app_projects[X]
app_projects.append(projects)
app_json.append(
{
'id': app_id,
                    # Actually none of this matters, and there's lots
# more in a real response. Padded out just for
# example sake.
'account': {
'login': org,
'id': 1234,
'type': 'User',
},
'permissions': {
'checks': 'write',
'metadata': 'read',
'contents': 'read'
},
'events': ['push',
'pull_request'
],
'suspended_at': None,
'suspended_by': None,
}
)
app_id += 1
# TODO(ianw) : we could exercise the pagination paths ...
with requests_mock.Mocker() as m:
m.get('%s/app/installations' % self.base_url, json=app_json)
def repositories_callback(request, context):
# FakeGithubSession gives us an auth token "token
# token-X" where "X" corresponds to the app id we want
# the projects for. apps start at id "1", so the projects
# to return for this call are app_projects[token-1]
token = int(request.headers['Authorization'][12:])
projects = app_projects[token - 1]
return {
'total_count': len(projects),
'repositories': projects
}
m.get('%s/installation/repositories?per_page=100' % self.base_url,
json=repositories_callback)
# everything mocked now, call real implementation
super()._prime_installation_map()
class FakeGithubConnection(githubconnection.GithubConnection):
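    """A Github connection backed by the in-memory fake Github client and
    data store used by the tests, so no real Github API is needed.
    """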
log = logging.getLogger("zuul.test.FakeGithubConnection")
client_manager_class = FakeGithubClientManager
def __init__(self, driver, connection_name, connection_config,
changes_db=None, upstream_root=None, git_url_with_auth=False):
super(FakeGithubConnection, self).__init__(driver, connection_name,
connection_config)
self.connection_name = connection_name
self.pr_number = 0
self.pull_requests = changes_db
self.statuses = {}
self.upstream_root = upstream_root
self.merge_failure = False
self.merge_not_allowed_count = 0
self.github_data = tests.fakegithub.FakeGithubData(changes_db)
self._github_client_manager.github_data = self.github_data
self.git_url_with_auth = git_url_with_auth
def setZuulWebPort(self, port):
self.zuul_web_port = port
def openFakePullRequest(self, project, branch, subject, files=[],
body=None, body_text=None, draft=False,
mergeable=True, base_sha=None):
self.pr_number += 1
pull_request = FakeGithubPullRequest(
self, self.pr_number, project, branch, subject, self.upstream_root,
files=files, body=body, body_text=body_text, draft=draft,
mergeable=mergeable, base_sha=base_sha)
self.pull_requests[self.pr_number] = pull_request
return pull_request
def getPushEvent(self, project, ref, old_rev=None, new_rev=None,
added_files=None, removed_files=None,
modified_files=None):
if added_files is None:
added_files = []
if removed_files is None:
removed_files = []
if modified_files is None:
modified_files = []
if not old_rev:
old_rev = '0' * 40
if not new_rev:
new_rev = random_sha1()
name = 'push'
data = {
'ref': ref,
'before': old_rev,
'after': new_rev,
'repository': {
'full_name': project
},
'commits': [
{
'added': added_files,
'removed': removed_files,
'modified': modified_files
}
]
}
return (name, data)
def getBranchProtectionRuleEvent(self, project, action):
name = 'branch_protection_rule'
data = {
'action': action,
'rule': {},
'repository': {
'full_name': project,
}
}
return (name, data)
def emitEvent(self, event, use_zuulweb=False):
"""Emulates sending the GitHub webhook event to the connection."""
name, data = event
payload = json.dumps(data).encode('utf8')
secret = self.connection_config['webhook_token']
signature = githubconnection._sign_request(payload, secret)
headers = {'x-github-event': name,
'x-hub-signature': signature,
'x-github-delivery': str(uuid.uuid4())}
if use_zuulweb:
return requests.post(
'http://127.0.0.1:%s/api/connection/%s/payload'
% (self.zuul_web_port, self.connection_name),
json=data, headers=headers)
else:
data = {'headers': headers, 'body': data}
self.event_queue.put(data)
return data
def addProject(self, project):
# use the original method here and additionally register it in the
# fake github
super(FakeGithubConnection, self).addProject(project)
self.getGithubClient(project.name).addProject(project)
def getGitUrl(self, project):
if self.git_url_with_auth:
auth_token = ''.join(
random.choice(string.ascii_lowercase) for x in range(8))
prefix = 'file://x-access-token:%s@' % auth_token
else:
prefix = ''
if self.repo_cache:
return prefix + os.path.join(self.repo_cache, str(project))
return prefix + os.path.join(self.upstream_root, str(project))
def real_getGitUrl(self, project):
return super(FakeGithubConnection, self).getGitUrl(project)
def setCommitStatus(self, project, sha, state, url='', description='',
context='default', user='zuul', zuul_event_id=None):
# record that this got reported and call original method
self.github_data.reports.append(
(project, sha, 'status', (user, context, state)))
super(FakeGithubConnection, self).setCommitStatus(
project, sha, state,
url=url, description=description, context=context)
def labelPull(self, project, pr_number, label, zuul_event_id=None):
# record that this got reported
self.github_data.reports.append((project, pr_number, 'label', label))
pull_request = self.pull_requests[int(pr_number)]
pull_request.addLabel(label)
def unlabelPull(self, project, pr_number, label, zuul_event_id=None):
# record that this got reported
self.github_data.reports.append((project, pr_number, 'unlabel', label))
pull_request = self.pull_requests[pr_number]
pull_request.removeLabel(label)
class BuildHistory(object):
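    """A record of a single completed build.

    Attributes are whatever keyword arguments were supplied when the
    record was created (result, name, uuid, changes, ref, ...).
    """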
def __init__(self, **kw):
self.__dict__.update(kw)
def __repr__(self):
return ("<Completed build, result: %s name: %s uuid: %s "
"changes: %s ref: %s>" %
(self.result, self.name, self.uuid,
self.changes, self.ref))
class FakeStatsd(threading.Thread):
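    """A fake statsd server which records every datagram it receives in
    ``self.stats``.
    """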
log = logging.getLogger("zuul.test.FakeStatsd")
def __init__(self):
threading.Thread.__init__(self)
self.daemon = True
self.sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
self.sock.bind(('', 0))
self.port = self.sock.getsockname()[1]
self.wake_read, self.wake_write = os.pipe()
self.stats = []
def clear(self):
self.stats = []
def run(self):
while True:
poll = select.poll()
poll.register(self.sock, select.POLLIN)
poll.register(self.wake_read, select.POLLIN)
ret = poll.poll()
for (fd, event) in ret:
if fd == self.sock.fileno():
data = self.sock.recvfrom(1024)
if not data:
return
# self.log.debug("Appending: %s" % data[0])
self.stats.append(data[0])
if fd == self.wake_read:
return
def stop(self):
os.write(self.wake_write, b'1\n')
self.join()
self.sock.close()
class FakeBuild(object):
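    """Represents a single build run by the RecordingExecutorServer.

    The build pauses in :py:meth:`run` while the executor server is
    holding jobs and reports a canned result based on the executor's
    failJob/retryJob settings.
    """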
log = logging.getLogger("zuul.test")
def __init__(self, executor_server, build_request, params):
self.daemon = True
self.executor_server = executor_server
self.build_request = build_request
self.jobdir = None
self.uuid = build_request.uuid
self.parameters = params
self.job = model.FrozenJob.fromZK(executor_server.zk_context,
params["job_ref"])
self.parameters["zuul"].update(
zuul.executor.server.zuul_params_from_job(self.job))
# TODOv3(jeblair): self.node is really "the label of the node
# assigned". We should rename it (self.node_label?) if we
# keep using it like this, or we may end up exposing more of
# the complexity around multi-node jobs here
# (self.nodes[0].label?)
self.node = None
if len(self.job.nodeset.nodes) == 1:
self.node = next(iter(self.job.nodeset.nodes.values())).label
self.unique = self.parameters['zuul']['build']
self.pipeline = self.parameters['zuul']['pipeline']
self.project = self.parameters['zuul']['project']['name']
self.name = self.job.name
self.wait_condition = threading.Condition()
self.waiting = False
self.paused = False
self.aborted = False
self.requeue = False
self.should_fail = False
self.should_retry = False
self.created = time.time()
self.changes = None
items = self.parameters['zuul']['items']
self.changes = ' '.join(['%s,%s' % (x['change'], x['patchset'])
for x in items if 'change' in x])
if 'change' in items[-1]:
self.change = ' '.join((items[-1]['change'],
items[-1]['patchset']))
else:
self.change = None
def __repr__(self):
waiting = ''
if self.waiting:
waiting = ' [waiting]'
return '<FakeBuild %s:%s %s%s>' % (self.pipeline, self.name,
self.changes, waiting)
def release(self):
"""Release this build."""
self.wait_condition.acquire()
self.wait_condition.notify()
self.waiting = False
self.log.debug("Build %s released" % self.unique)
self.wait_condition.release()
def isWaiting(self):
"""Return whether this build is being held.
:returns: Whether the build is being held.
:rtype: bool
"""
self.wait_condition.acquire()
if self.waiting:
ret = True
else:
ret = False
self.wait_condition.release()
return ret
def _wait(self):
self.wait_condition.acquire()
self.waiting = True
self.log.debug("Build %s waiting" % self.unique)
self.wait_condition.wait()
self.wait_condition.release()
def run(self):
self.log.debug('Running build %s' % self.unique)
if self.executor_server.hold_jobs_in_build:
self.log.debug('Holding build %s' % self.unique)
self._wait()
self.log.debug("Build %s continuing" % self.unique)
self.writeReturnData()
result = (RecordingAnsibleJob.RESULT_NORMAL, 0) # Success
if self.shouldFail():
result = (RecordingAnsibleJob.RESULT_NORMAL, 1) # Failure
if self.shouldRetry():
result = (RecordingAnsibleJob.RESULT_NORMAL, None)
if self.aborted:
result = (RecordingAnsibleJob.RESULT_ABORTED, None)
if self.requeue:
result = (RecordingAnsibleJob.RESULT_UNREACHABLE, None)
return result
def shouldFail(self):
if self.should_fail:
return True
changes = self.executor_server.fail_tests.get(self.name, [])
for change in changes:
if self.hasChanges(change):
return True
return False
def shouldRetry(self):
if self.should_retry:
return True
entries = self.executor_server.retry_tests.get(self.name, [])
for entry in entries:
if self.hasChanges(entry['change']):
if entry['retries'] is None:
return True
if entry['retries']:
entry['retries'] = entry['retries'] - 1
return True
return False
def writeReturnData(self):
changes = self.executor_server.return_data.get(self.name, {})
data = changes.get(self.change)
if data is None:
return
with open(self.jobdir.result_data_file, 'w') as f:
f.write(json.dumps({'data': data}))
def hasChanges(self, *changes):
"""Return whether this build has certain changes in its git repos.
:arg FakeChange changes: One or more changes (varargs) that
are expected to be present (in order) in the git repository of
the active project.
:returns: Whether the build has the indicated changes.
:rtype: bool
"""
for change in changes:
hostname = change.source.canonical_hostname
path = os.path.join(self.jobdir.src_root, hostname, change.project)
try:
repo = git.Repo(path)
except NoSuchPathError as e:
self.log.debug('%s' % e)
return False
repo_messages = [c.message.strip() for c in repo.iter_commits()]
commit_message = '%s-1' % change.subject
self.log.debug("Checking if build %s has changes; commit_message "
"%s; repo_messages %s" % (self, commit_message,
repo_messages))
if commit_message not in repo_messages:
self.log.debug(" messages do not match")
return False
self.log.debug(" OK")
return True
def getWorkspaceRepos(self, projects):
"""Return workspace git repo objects for the listed projects
:arg list projects: A list of strings, each the canonical name
of a project.
:returns: A dictionary of {name: repo} for every listed
project.
:rtype: dict
"""
repos = {}
for project in projects:
path = os.path.join(self.jobdir.src_root, project)
repo = git.Repo(path)
repos[project] = repo
return repos
class RecordingAnsibleJob(zuul.executor.server.AnsibleJob):
result = None
semaphore_sleep_time = 5
def _execute(self):
for _ in iterate_timeout(60, 'wait for merge'):
if not self.executor_server.hold_jobs_in_start:
break
time.sleep(1)
super()._execute()
def doMergeChanges(self, *args, **kw):
# Get a merger in order to update the repos involved in this job.
commit = super(RecordingAnsibleJob, self).doMergeChanges(
*args, **kw)
if not commit:
self.recordResult('MERGE_CONFLICT')
return commit
def recordResult(self, result):
self.executor_server.lock.acquire()
build = self.executor_server.job_builds.get(self.build_request.uuid)
if not build:
self.executor_server.lock.release()
# Already recorded
return
self.executor_server.build_history.append(
BuildHistory(name=build.name, result=result, changes=build.changes,
node=build.node, uuid=build.unique, job=build.job,
ref=build.parameters['zuul']['ref'],
newrev=build.parameters['zuul'].get('newrev'),
parameters=build.parameters, jobdir=build.jobdir,
pipeline=build.parameters['zuul']['pipeline'],
build_request_ref=build.build_request.path)
)
self.executor_server.running_builds.remove(build)
del self.executor_server.job_builds[self.build_request.uuid]
self.executor_server.lock.release()
def runPlaybooks(self, args):
build = self.executor_server.job_builds[self.build_request.uuid]
build.jobdir = self.jobdir
self.result, error_detail = super(
RecordingAnsibleJob, self).runPlaybooks(args)
if self.result is None:
# Record result now because cleanup won't be performed
self.recordResult(None)
return self.result, error_detail
def runCleanupPlaybooks(self, success):
super(RecordingAnsibleJob, self).runCleanupPlaybooks(success)
if self.result is not None:
self.recordResult(self.result)
def runAnsible(self, cmd, timeout, playbook, ansible_version,
allow_pre_fail, wrapped=True, cleanup=False):
build = self.executor_server.job_builds[self.build_request.uuid]
if self.executor_server._run_ansible:
            # Call run on the fake build, ignoring its result, so we can
            # also hold real ansible jobs.
if playbook not in [self.jobdir.setup_playbook,
self.jobdir.freeze_playbook]:
build.run()
result = super(RecordingAnsibleJob, self).runAnsible(
cmd, timeout, playbook, ansible_version, allow_pre_fail,
wrapped, cleanup)
else:
if playbook not in [self.jobdir.setup_playbook,
self.jobdir.freeze_playbook]:
result = build.run()
else:
result = (self.RESULT_NORMAL, 0)
return result
def getHostList(self, args, nodes):
self.log.debug("hostlist %s", nodes)
hosts = super(RecordingAnsibleJob, self).getHostList(args, nodes)
for host in hosts:
if not host['host_vars'].get('ansible_connection'):
host['host_vars']['ansible_connection'] = 'local'
return hosts
def pause(self):
build = self.executor_server.job_builds[self.build_request.uuid]
build.paused = True
super().pause()
def resume(self):
build = self.executor_server.job_builds.get(self.build_request.uuid)
if build:
build.paused = False
super().resume()
def _send_aborted(self):
self.recordResult('ABORTED')
super()._send_aborted()
FakeMergeRequest = namedtuple(
"FakeMergeRequest", ("uuid", "job_type", "payload")
)
class HoldableMergerApi(MergerApi):
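    """A MergerApi which can hold newly submitted merge requests.

    When ``hold_in_queue`` is set, submitted requests start in the HOLD
    state until released; all submissions are also recorded in
    ``self.history``.
    """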
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.hold_in_queue = False
self.history = {}
def submit(self, request, params, needs_result=False):
self.log.debug("Appending merge job to history: %s", request.uuid)
self.history.setdefault(request.job_type, [])
self.history[request.job_type].append(
FakeMergeRequest(request.uuid, request.job_type, params)
)
return super().submit(request, params, needs_result)
@property
def initial_state(self):
if self.hold_in_queue:
return MergeRequest.HOLD
return MergeRequest.REQUESTED
class TestingMergerApi(HoldableMergerApi):
log = logging.getLogger("zuul.test.TestingMergerApi")
def _test_getMergeJobsInState(self, *states):
# As this method is used for assertions in the tests, it should look up
# the merge requests directly from ZooKeeper and not from a cache
# layer.
all_merge_requests = []
for merge_uuid in self._getAllRequestIds():
merge_request = self.get("/".join(
[self.REQUEST_ROOT, merge_uuid]))
if merge_request and (not states or merge_request.state in states):
all_merge_requests.append(merge_request)
return sorted(all_merge_requests)
def release(self, merge_request=None):
"""
Releases a merge request which was previously put on hold for testing.
        If no merge_request is provided, all merge requests that are
        currently in state HOLD will be released.
"""
# Either release all jobs in HOLD state or the one specified.
if merge_request is not None:
merge_request.state = MergeRequest.REQUESTED
self.update(merge_request)
return
for merge_request in self._test_getMergeJobsInState(MergeRequest.HOLD):
merge_request.state = MergeRequest.REQUESTED
self.update(merge_request)
def queued(self):
return self._test_getMergeJobsInState(
MergeRequest.REQUESTED, MergeRequest.HOLD
)
def all(self):
return self._test_getMergeJobsInState()
class HoldableMergeClient(zuul.merger.client.MergeClient):
_merger_api_class = HoldableMergerApi
class HoldableExecutorApi(ExecutorApi):
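    """An ExecutorApi which can hold newly submitted build requests in
    the HOLD state until released (see ``hold_in_queue``).
    """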
def __init__(self, *args, **kwargs):
self.hold_in_queue = False
super().__init__(*args, **kwargs)
def _getInitialState(self):
if self.hold_in_queue:
return BuildRequest.HOLD
return BuildRequest.REQUESTED
class TestingExecutorApi(HoldableExecutorApi):
log = logging.getLogger("zuul.test.TestingExecutorApi")
def _test_getBuildsInState(self, *states):
# As this method is used for assertions in the tests, it
# should look up the build requests directly from ZooKeeper
# and not from a cache layer.
all_builds = []
for zone in self._getAllZones():
queue = self.zone_queues[zone]
for build_uuid in queue._getAllRequestIds():
build = queue.get(f'{queue.REQUEST_ROOT}/{build_uuid}')
if build and (not states or build.state in states):
all_builds.append(build)
all_builds.sort()
return all_builds
def _getJobForBuildRequest(self, build_request):
# The parameters for the build request are removed immediately
# after the job starts in order to reduce impact to ZK, so if
# we want to inspect them in the tests, we need to save them.
# This adds them to a private internal cache for that purpose.
if not hasattr(self, '_test_build_request_job_map'):
self._test_build_request_job_map = {}
if build_request.uuid in self._test_build_request_job_map:
return self._test_build_request_job_map[build_request.uuid]
d = self.getParams(build_request)
if d:
data = d.get('job_ref', '').split('/')[-1]
else:
data = ''
self._test_build_request_job_map[build_request.uuid] = data
return data
def release(self, what=None):
"""
Releases a build request which was previously put on hold for testing.
The what parameter specifies what to release. This can be a concrete
build request or a regular expression matching a job name.
"""
self.log.debug("Releasing builds matching %s", what)
if isinstance(what, BuildRequest):
self.log.debug("Releasing build %s", what)
what.state = BuildRequest.REQUESTED
self.update(what)
return
for build_request in self._test_getBuildsInState(
BuildRequest.HOLD):
# Either release all build requests in HOLD state or the ones
# matching the given job name pattern.
if what is None or (
re.match(what,
self._getJobForBuildRequest(build_request))):
self.log.debug("Releasing build %s", build_request)
build_request.state = BuildRequest.REQUESTED
self.update(build_request)
def queued(self):
return self._test_getBuildsInState(
BuildRequest.REQUESTED, BuildRequest.HOLD
)
def all(self):
return self._test_getBuildsInState()
class HoldableExecutorClient(zuul.executor.client.ExecutorClient):
_executor_api_class = HoldableExecutorApi
class RecordingExecutorServer(zuul.executor.server.ExecutorServer):
"""An Ansible executor to be used in tests.
:ivar bool hold_jobs_in_build: If true, when jobs are executed
they will report that they have started but then pause until
released before reporting completion. This attribute may be
changed at any time and will take effect for subsequently
executed builds, but previously held builds will still need to
be explicitly released.
"""
_job_class = RecordingAnsibleJob
def __init__(self, *args, **kw):
self._run_ansible = kw.pop('_run_ansible', False)
self._test_root = kw.pop('_test_root', False)
if self._run_ansible:
self._ansible_manager_class = zuul.lib.ansible.AnsibleManager
else:
self._ansible_manager_class = FakeAnsibleManager
super(RecordingExecutorServer, self).__init__(*args, **kw)
self.hold_jobs_in_build = False
self.hold_jobs_in_start = False
self.lock = threading.Lock()
self.running_builds = []
self.build_history = []
self.fail_tests = {}
self.retry_tests = {}
self.return_data = {}
self.job_builds = {}
def failJob(self, name, change):
"""Instruct the executor to report matching builds as failures.
:arg str name: The name of the job to fail.
:arg Change change: The :py:class:`~tests.base.FakeChange`
instance which should cause the job to fail. This job
will also fail for changes depending on this change.
"""
l = self.fail_tests.get(name, [])
l.append(change)
self.fail_tests[name] = l
def retryJob(self, name, change, retries=None):
"""Instruct the executor to report matching builds as retries.
        :arg str name: The name of the job to retry.
        :arg Change change: The :py:class:`~tests.base.FakeChange`
            instance which should cause the job to be retried.  This job
            will also be retried for changes depending on this change.
"""
self.retry_tests.setdefault(name, []).append(
dict(change=change,
retries=retries))
def returnData(self, name, change, data):
"""Instruct the executor to return data for this build.
        :arg str name: The name of the job that should return data.
:arg Change change: The :py:class:`~tests.base.FakeChange`
instance which should cause the job to return data.
:arg dict data: The data to return
"""
# TODO(clarkb) We are incredibly change focused here and in FakeBuild
# above. This makes it very difficult to test non change items with
# return data. We currently rely on the hack that None is used as a
# key for the changes dict, but we should improve that to look up
# refnames or similar.
changes = self.return_data.setdefault(name, {})
if hasattr(change, 'number'):
cid = ' '.join((str(change.number), str(change.latest_patchset)))
else:
# Not actually a change, but a ref update event for tags/etc
# In this case a key of None is used by writeReturnData
cid = None
changes[cid] = data
def release(self, regex=None, change=None):
"""Release a held build.
:arg str regex: A regular expression which, if supplied, will
cause only builds with matching names to be released. If
            not supplied, all builds will be released.
        :arg str change: A change identifier (e.g. "1,1") which, if
            supplied, will cause only builds for that change to be
            released.
        """
builds = self.running_builds[:]
if len(builds) == 0:
self.log.debug('No running builds to release')
return
self.log.debug("Releasing build %s %s (%s)" % (
regex, change, len(builds)))
for build in builds:
if ((not regex or re.match(regex, build.name)) and
(not change or build.change == change)):
self.log.debug("Releasing build %s" %
(build.parameters['zuul']['build']))
build.release()
else:
self.log.debug("Not releasing build %s" %
(build.parameters['zuul']['build']))
self.log.debug("Done releasing builds %s (%s)" %
(regex, len(builds)))
def executeJob(self, build_request, params):
build = FakeBuild(self, build_request, params)
self.running_builds.append(build)
self.job_builds[build_request.uuid] = build
params['zuul']['_test'] = dict(test_root=self._test_root)
super(RecordingExecutorServer, self).executeJob(build_request, params)
def stopJob(self, build_request: BuildRequest):
self.log.debug("handle stop")
uuid = build_request.uuid
for build in self.running_builds:
if build.unique == uuid:
build.aborted = True
build.release()
super(RecordingExecutorServer, self).stopJob(build_request)
def stop(self):
for build in self.running_builds:
build.release()
super(RecordingExecutorServer, self).stop()
class TestScheduler(zuul.scheduler.Scheduler):
_merger_client_class = HoldableMergeClient
_executor_client_class = HoldableExecutorClient
class FakeSMTP(object):
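    """A fake SMTP client which records outgoing mail in the supplied
    ``messages`` list instead of sending anything.
    """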
log = logging.getLogger('zuul.FakeSMTP')
def __init__(self, messages, server, port):
self.server = server
self.port = port
self.messages = messages
def sendmail(self, from_email, to_email, msg):
self.log.info("Sending email from %s, to %s, with msg %s" % (
from_email, to_email, msg))
headers = msg.split('\n\n', 1)[0]
body = msg.split('\n\n', 1)[1]
self.messages.append(dict(
from_email=from_email,
to_email=to_email,
msg=msg,
headers=headers,
body=body,
))
return True
def quit(self):
return True
class FakeNodepool(object):
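    """A fake Nodepool.

    It watches ZooKeeper for node requests and fulfills them with fake
    'ready' nodes; individual requests can be made to fail via
    :py:meth:`addFailRequest` and processing can be paused.
    """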
REQUEST_ROOT = '/nodepool/requests'
NODE_ROOT = '/nodepool/nodes'
COMPONENT_ROOT = '/nodepool/components'
log = logging.getLogger("zuul.test.FakeNodepool")
def __init__(self, zk_chroot_fixture):
self.complete_event = threading.Event()
self.host_keys = None
self.client = kazoo.client.KazooClient(
hosts='%s:%s%s' % (
zk_chroot_fixture.zookeeper_host,
zk_chroot_fixture.zookeeper_port,
zk_chroot_fixture.zookeeper_chroot),
use_ssl=True,
keyfile=zk_chroot_fixture.zookeeper_key,
certfile=zk_chroot_fixture.zookeeper_cert,
ca=zk_chroot_fixture.zookeeper_ca,
)
self.client.start()
self.registerLauncher()
self._running = True
self.paused = False
self.thread = threading.Thread(target=self.run)
self.thread.daemon = True
self.thread.start()
self.fail_requests = set()
self.remote_ansible = False
self.attributes = None
self.resources = None
self.python_path = 'auto'
self.shell_type = None
self.connection_port = None
self.history = []
def stop(self):
self._running = False
self.thread.join()
self.client.stop()
self.client.close()
def pause(self):
self.complete_event.wait()
self.paused = True
def unpause(self):
self.paused = False
def run(self):
while self._running:
self.complete_event.clear()
try:
self._run()
except Exception:
self.log.exception("Error in fake nodepool:")
self.complete_event.set()
time.sleep(0.1)
def _run(self):
if self.paused:
return
for req in self.getNodeRequests():
self.fulfillRequest(req)
def registerLauncher(self, labels=["label1"], id="FakeLauncher"):
path = os.path.join(self.COMPONENT_ROOT, 'pool', id)
data = {'id': id, 'supported_labels': labels}
self.client.create(
path, json.dumps(data).encode('utf8'),
ephemeral=True, makepath=True, sequence=True)
def getNodeRequests(self):
try:
reqids = self.client.get_children(self.REQUEST_ROOT)
except kazoo.exceptions.NoNodeError:
return []
reqs = []
for oid in reqids:
path = self.REQUEST_ROOT + '/' + oid
try:
data, stat = self.client.get(path)
data = json.loads(data.decode('utf8'))
data['_oid'] = oid
reqs.append(data)
except kazoo.exceptions.NoNodeError:
pass
reqs.sort(key=lambda r: (r['_oid'].split('-')[0],
r['relative_priority'],
r['_oid'].split('-')[1]))
return reqs
def getNodes(self):
try:
nodeids = self.client.get_children(self.NODE_ROOT)
except kazoo.exceptions.NoNodeError:
return []
nodes = []
for oid in sorted(nodeids):
path = self.NODE_ROOT + '/' + oid
data, stat = self.client.get(path)
data = json.loads(data.decode('utf8'))
data['_oid'] = oid
try:
lockfiles = self.client.get_children(path + '/lock')
except kazoo.exceptions.NoNodeError:
lockfiles = []
if lockfiles:
data['_lock'] = True
else:
data['_lock'] = False
nodes.append(data)
return nodes
def makeNode(self, request_id, node_type, request):
now = time.time()
path = '/nodepool/nodes/'
remote_ip = os.environ.get('ZUUL_REMOTE_IPV4', '127.0.0.1')
if self.remote_ansible and not self.host_keys:
self.host_keys = self.keyscan(remote_ip)
if self.host_keys is None:
host_keys = ["fake-key1", "fake-key2"]
else:
host_keys = self.host_keys
data = dict(type=node_type,
cloud='test-cloud',
provider='test-provider',
region='test-region',
az='test-az',
attributes=self.attributes,
host_id='test-host-id',
interface_ip=remote_ip,
public_ipv4=remote_ip,
private_ipv4=None,
public_ipv6=None,
private_ipv6=None,
python_path=self.python_path,
shell_type=self.shell_type,
allocated_to=request_id,
state='ready',
state_time=now,
created_time=now,
updated_time=now,
image_id=None,
host_keys=host_keys,
executor='fake-nodepool',
hold_expiration=None)
if self.resources:
data['resources'] = self.resources
if self.remote_ansible:
data['connection_type'] = 'ssh'
if 'fakeuser' in node_type:
data['username'] = 'fakeuser'
if 'windows' in node_type:
data['connection_type'] = 'winrm'
if 'network' in node_type:
data['connection_type'] = 'network_cli'
if self.connection_port:
data['connection_port'] = self.connection_port
if 'kubernetes-namespace' in node_type or 'fedora-pod' in node_type:
data['connection_type'] = 'namespace'
data['connection_port'] = {
'name': 'zuul-ci',
'namespace': 'zuul-ci-abcdefg',
'host': 'localhost',
'skiptls': True,
'token': 'FakeToken',
'ca_crt': 'FakeCA',
'user': 'zuul-worker',
}
if 'fedora-pod' in node_type:
data['connection_type'] = 'kubectl'
data['connection_port']['pod'] = 'fedora-abcdefg'
data['tenant_name'] = request['tenant_name']
data['requestor'] = request['requestor']
data = json.dumps(data).encode('utf8')
path = self.client.create(path, data,
makepath=True,
sequence=True)
nodeid = path.split("/")[-1]
return nodeid
def removeNode(self, node):
path = self.NODE_ROOT + '/' + node["_oid"]
self.client.delete(path, recursive=True)
def addFailRequest(self, request):
self.fail_requests.add(request['_oid'])
def fulfillRequest(self, request):
if request['state'] != 'requested':
return
request = request.copy()
self.history.append(request)
oid = request['_oid']
del request['_oid']
if oid in self.fail_requests:
request['state'] = 'failed'
else:
request['state'] = 'fulfilled'
nodes = request.get('nodes', [])
for node in request['node_types']:
nodeid = self.makeNode(oid, node, request)
nodes.append(nodeid)
request['nodes'] = nodes
request['state_time'] = time.time()
path = self.REQUEST_ROOT + '/' + oid
data = json.dumps(request).encode('utf8')
self.log.debug("Fulfilling node request: %s %s" % (oid, data))
try:
self.client.set(path, data)
except kazoo.exceptions.NoNodeError:
self.log.debug("Node request %s %s disappeared" % (oid, data))
def keyscan(self, ip, port=22, timeout=60):
'''
Scan the IP address for public SSH keys.
Keys are returned formatted as: "<type> <base64_string>"
'''
addrinfo = socket.getaddrinfo(ip, port)[0]
family = addrinfo[0]
sockaddr = addrinfo[4]
keys = []
key = None
for count in iterate_timeout(timeout, "ssh access"):
sock = None
t = None
try:
sock = socket.socket(family, socket.SOCK_STREAM)
sock.settimeout(timeout)
sock.connect(sockaddr)
t = paramiko.transport.Transport(sock)
t.start_client(timeout=timeout)
key = t.get_remote_server_key()
break
except socket.error as e:
if e.errno not in [
errno.ECONNREFUSED, errno.EHOSTUNREACH, None]:
self.log.exception(
'Exception with ssh access to %s:' % ip)
except Exception as e:
self.log.exception("ssh-keyscan failure: %s", e)
finally:
try:
if t:
t.close()
except Exception as e:
self.log.exception('Exception closing paramiko: %s', e)
try:
if sock:
sock.close()
except Exception as e:
self.log.exception('Exception closing socket: %s', e)
# Paramiko, at this time, seems to return only the ssh-rsa key, so
# only the single key is placed into the list.
if key:
keys.append("%s %s" % (key.get_name(), key.get_base64()))
return keys
class ChrootedKazooFixture(fixtures.Fixture):
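    """A fixture which creates a unique ZooKeeper chroot path for a test
    and removes it again on cleanup.
    """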
def __init__(self, test_id):
super(ChrootedKazooFixture, self).__init__()
if 'ZOOKEEPER_2181_TCP' in os.environ:
            # prevent any nasty hobbits^H^H^H surprises
if 'NODEPOOL_ZK_HOST' in os.environ:
raise Exception(
'Looks like tox-docker is being used but you have also '
'configured NODEPOOL_ZK_HOST. Either avoid using the '
'docker environment or unset NODEPOOL_ZK_HOST.')
zk_host = 'localhost:' + os.environ['ZOOKEEPER_2181_TCP']
elif 'NODEPOOL_ZK_HOST' in os.environ:
zk_host = os.environ['NODEPOOL_ZK_HOST']
else:
zk_host = 'localhost'
if ':' in zk_host:
host, port = zk_host.split(':')
else:
host = zk_host
port = None
zk_ca = os.environ.get('ZUUL_ZK_CA', None)
if not zk_ca:
zk_ca = os.path.join(os.path.dirname(__file__),
'../tools/ca/certs/cacert.pem')
self.zookeeper_ca = zk_ca
zk_cert = os.environ.get('ZUUL_ZK_CERT', None)
if not zk_cert:
zk_cert = os.path.join(os.path.dirname(__file__),
'../tools/ca/certs/client.pem')
self.zookeeper_cert = zk_cert
zk_key = os.environ.get('ZUUL_ZK_KEY', None)
if not zk_key:
zk_key = os.path.join(os.path.dirname(__file__),
'../tools/ca/keys/clientkey.pem')
self.zookeeper_key = zk_key
self.zookeeper_host = host
if not port:
self.zookeeper_port = 2281
else:
self.zookeeper_port = int(port)
self.test_id = test_id
def _setUp(self):
# Make sure the test chroot paths do not conflict
random_bits = ''.join(random.choice(string.ascii_lowercase +
string.ascii_uppercase)
for x in range(8))
rand_test_path = '%s_%s_%s' % (random_bits, os.getpid(), self.test_id)
self.zookeeper_chroot = f"/test/{rand_test_path}"
self.zk_hosts = '%s:%s%s' % (
self.zookeeper_host,
self.zookeeper_port,
self.zookeeper_chroot)
self.addCleanup(self._cleanup)
# Ensure the chroot path exists and clean up any pre-existing znodes.
_tmp_client = kazoo.client.KazooClient(
hosts=f'{self.zookeeper_host}:{self.zookeeper_port}', timeout=10,
use_ssl=True,
keyfile=self.zookeeper_key,
certfile=self.zookeeper_cert,
ca=self.zookeeper_ca,
)
_tmp_client.start()
if _tmp_client.exists(self.zookeeper_chroot):
_tmp_client.delete(self.zookeeper_chroot, recursive=True)
_tmp_client.ensure_path(self.zookeeper_chroot)
_tmp_client.stop()
_tmp_client.close()
def _cleanup(self):
'''Remove the chroot path.'''
# Need a non-chroot'ed client to remove the chroot path
_tmp_client = kazoo.client.KazooClient(
hosts='%s:%s' % (self.zookeeper_host, self.zookeeper_port),
use_ssl=True,
keyfile=self.zookeeper_key,
certfile=self.zookeeper_cert,
ca=self.zookeeper_ca,
)
_tmp_client.start()
_tmp_client.delete(self.zookeeper_chroot, recursive=True)
_tmp_client.stop()
_tmp_client.close()
class WebProxyFixture(fixtures.Fixture):
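    """A fixture which runs a small HTTP proxy that rewrites request
    paths according to the supplied (pattern, replacement) rules before
    fetching them.
    """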
def __init__(self, rules):
super(WebProxyFixture, self).__init__()
self.rules = rules
def _setUp(self):
rules = self.rules
class Proxy(http.server.SimpleHTTPRequestHandler):
log = logging.getLogger('zuul.WebProxyFixture.Proxy')
def do_GET(self):
path = self.path
for (pattern, replace) in rules:
path = re.sub(pattern, replace, path)
resp = requests.get(path)
self.send_response(resp.status_code)
if resp.status_code >= 300:
self.end_headers()
return
for key, val in resp.headers.items():
self.send_header(key, val)
self.end_headers()
self.wfile.write(resp.content)
def log_message(self, fmt, *args):
self.log.debug(fmt, *args)
self.httpd = socketserver.ThreadingTCPServer(('', 0), Proxy)
self.port = self.httpd.socket.getsockname()[1]
self.thread = threading.Thread(target=self.httpd.serve_forever)
self.thread.start()
self.addCleanup(self._cleanup)
def _cleanup(self):
self.httpd.shutdown()
self.thread.join()
self.httpd.server_close()
class ZuulWebFixture(fixtures.Fixture):
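    """A fixture which starts a ZuulWeb server, with its own connection
    registry, for use in tests.
    """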
def __init__(self,
changes: Dict[str, Dict[str, Change]], config: ConfigParser,
additional_event_queues, upstream_root: str,
poller_events, git_url_with_auth: bool,
add_cleanup: Callable[[Callable[[], None]], None],
test_root: str, info: Optional[WebInfo] = None):
super(ZuulWebFixture, self).__init__()
self.config = config
self.connections = TestConnectionRegistry(
changes, config, additional_event_queues, upstream_root,
poller_events, git_url_with_auth, add_cleanup)
self.connections.configure(config)
self.authenticators = zuul.lib.auth.AuthenticatorRegistry()
self.authenticators.configure(config)
if info is None:
self.info = WebInfo.fromConfig(config)
else:
self.info = info
self.test_root = test_root
def _setUp(self):
# Start the web server
self.web = zuul.web.ZuulWeb(
config=self.config,
info=self.info,
connections=self.connections,
authenticators=self.authenticators)
self.connections.load(self.web.zk_client, self.web.component_registry)
self.web.start()
self.addCleanup(self.stop)
self.host = 'localhost'
# Wait until web server is started
while True:
self.port = self.web.port
try:
with socket.create_connection((self.host, self.port)):
break
except ConnectionRefusedError:
pass
def stop(self):
self.web.stop()
self.connections.stop()
class MySQLSchemaFixture(fixtures.Fixture):
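    """A fixture which creates a throw-away MySQL database and user for
    a single test and drops them on cleanup.
    """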
def setUp(self):
super(MySQLSchemaFixture, self).setUp()
random_bits = ''.join(random.choice(string.ascii_lowercase +
string.ascii_uppercase)
for x in range(8))
self.name = '%s_%s' % (random_bits, os.getpid())
self.passwd = uuid.uuid4().hex
self.host = os.environ.get('ZUUL_MYSQL_HOST', '127.0.0.1')
self.port = int(os.environ.get('ZUUL_MYSQL_PORT', 3306))
db = pymysql.connect(host=self.host,
port=self.port,
user="openstack_citest",
passwd="openstack_citest",
db="openstack_citest")
try:
with db.cursor() as cur:
cur.execute("create database %s" % self.name)
cur.execute(
"create user '{user}'@'' identified by '{passwd}'".format(
user=self.name, passwd=self.passwd))
cur.execute("grant all on {name}.* to '{name}'@''".format(
name=self.name))
cur.execute("flush privileges")
finally:
db.close()
self.dburi = 'mysql+pymysql://{name}:{passwd}@{host}:{port}/{name}'\
.format(
name=self.name,
passwd=self.passwd,
host=self.host,
port=self.port
)
self.addDetail('dburi', testtools.content.text_content(self.dburi))
self.addCleanup(self.cleanup)
def cleanup(self):
db = pymysql.connect(host=self.host,
port=self.port,
user="openstack_citest",
passwd="openstack_citest",
db="openstack_citest",
read_timeout=90)
try:
with db.cursor() as cur:
cur.execute("drop database %s" % self.name)
cur.execute("drop user '%s'@''" % self.name)
cur.execute("flush privileges")
finally:
db.close()
class PostgresqlSchemaFixture(fixtures.Fixture):
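    """A fixture which creates a throw-away PostgreSQL database and role
    for a single test and drops them on cleanup.
    """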
def setUp(self):
super(PostgresqlSchemaFixture, self).setUp()
# Postgres lowercases user and table names during creation but not
# during authentication. Thus only use lowercase chars.
random_bits = ''.join(random.choice(string.ascii_lowercase)
for x in range(8))
self.name = '%s_%s' % (random_bits, os.getpid())
self.passwd = uuid.uuid4().hex
self.host = os.environ.get('ZUUL_POSTGRES_HOST', '127.0.0.1')
db = psycopg2.connect(host=self.host,
user="openstack_citest",
password="openstack_citest",
database="openstack_citest")
db.autocommit = True
cur = db.cursor()
cur.execute("create role %s with login password '%s';" % (
self.name, self.passwd))
cur.execute("create database %s OWNER %s TEMPLATE template0 "
"ENCODING 'UTF8';" % (self.name, self.name))
self.dburi = 'postgresql://{name}:{passwd}@{host}/{name}'.format(
name=self.name, passwd=self.passwd, host=self.host)
self.addDetail('dburi', testtools.content.text_content(self.dburi))
self.addCleanup(self.cleanup)
def cleanup(self):
db = psycopg2.connect(host=self.host,
user="openstack_citest",
password="openstack_citest",
database="openstack_citest")
db.autocommit = True
cur = db.cursor()
cur.execute("drop database %s" % self.name)
cur.execute("drop user %s" % self.name)
class PrometheusFixture(fixtures.Fixture):
def _setUp(self):
# Save a list of collectors which exist at the start of the
# test (ie, the standard prometheus_client collectors)
self.collectors = list(
prometheus_client.registry.REGISTRY._collector_to_names.keys())
self.addCleanup(self._cleanup)
def _cleanup(self):
# Avoid the "Duplicated timeseries in CollectorRegistry" error
# by removing any collectors added during the test.
collectors = list(
prometheus_client.registry.REGISTRY._collector_to_names.keys())
for collector in collectors:
if collector not in self.collectors:
prometheus_client.registry.REGISTRY.unregister(collector)
class GlobalRegistryFixture(fixtures.Fixture):
def _setUp(self):
self.addCleanup(self._cleanup)
def _cleanup(self):
# Remove our component registry from the global
COMPONENT_REGISTRY.clearRegistry()
class FakeCPUTimes:
def __init__(self):
self.user = 0
self.system = 0
self.children_user = 0
self.children_system = 0
def cpu_times(self):
return FakeCPUTimes()
class BaseTestCase(testtools.TestCase):
log = logging.getLogger("zuul.test")
wait_timeout = 90
def attachLogs(self, *args):
def reader():
self._log_stream.seek(0)
while True:
x = self._log_stream.read(4096)
if not x:
break
yield x.encode('utf8')
content = testtools.content.content_from_reader(
reader,
testtools.content_type.UTF8_TEXT,
False)
self.addDetail('logging', content)
def shouldNeverCapture(self):
test_name = self.id().split('.')[-1]
test = getattr(self, test_name)
if hasattr(test, '__never_capture__'):
return getattr(test, '__never_capture__')
return False
def setUp(self):
super(BaseTestCase, self).setUp()
self.useFixture(PrometheusFixture())
self.useFixture(GlobalRegistryFixture())
test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0)
try:
test_timeout = int(test_timeout)
except ValueError:
# If timeout value is invalid do not set a timeout.
test_timeout = 0
if test_timeout > 0:
# Try a gentle timeout first and as a safety net a hard timeout
# later.
self.useFixture(fixtures.Timeout(test_timeout, gentle=True))
self.useFixture(fixtures.Timeout(test_timeout + 20, gentle=False))
if not self.shouldNeverCapture():
if (os.environ.get('OS_STDOUT_CAPTURE') == 'True' or
os.environ.get('OS_STDOUT_CAPTURE') == '1'):
stdout = self.useFixture(
fixtures.StringStream('stdout')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
if (os.environ.get('OS_STDERR_CAPTURE') == 'True' or
os.environ.get('OS_STDERR_CAPTURE') == '1'):
stderr = self.useFixture(
fixtures.StringStream('stderr')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
if (os.environ.get('OS_LOG_CAPTURE') == 'True' or
os.environ.get('OS_LOG_CAPTURE') == '1'):
self._log_stream = StringIO()
self.addOnException(self.attachLogs)
else:
self._log_stream = sys.stdout
else:
self._log_stream = sys.stdout
handler = logging.StreamHandler(self._log_stream)
formatter = logging.Formatter('%(asctime)s %(name)-32s '
'%(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger = logging.getLogger()
# It is possible that a stderr log handler is inserted before our
# addHandler below. If that happens we will emit all logs to stderr
# even when we don't want to. Error here to make it clear there is
# a problem as early as possible as it is easy to overlook.
self.assertEqual(logger.handlers, [])
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
# Make sure we don't carry old handlers around in process state
# which slows down test runs
self.addCleanup(logger.removeHandler, handler)
# NOTE(notmorgan): Extract logging overrides for specific
# libraries from the OS_LOG_DEFAULTS env and create loggers
# for each. This is used to limit the output during test runs
# from libraries that zuul depends on.
log_defaults_from_env = os.environ.get(
'OS_LOG_DEFAULTS',
'git.cmd=INFO,'
'kazoo.client=WARNING,kazoo.recipe=WARNING')
if log_defaults_from_env:
for default in log_defaults_from_env.split(','):
try:
name, level_str = default.split('=', 1)
level = getattr(logging, level_str, logging.DEBUG)
logger = logging.getLogger(name)
logger.setLevel(level)
logger.addHandler(handler)
self.addCleanup(logger.removeHandler, handler)
logger.propagate = False
except ValueError:
# NOTE(notmorgan): Invalid format of the log default,
# skip and don't try and apply a logger for the
# specified module
pass
self.addCleanup(handler.close)
self.addCleanup(handler.flush)
if sys.platform == 'darwin':
# Popen.cpu_times() is broken on darwin so patch it with a fake.
Popen.cpu_times = cpu_times
def setupZK(self):
self.zk_chroot_fixture = self.useFixture(
ChrootedKazooFixture(self.id())
)
def getZKWatches(self):
# TODO: The client.command method used here returns only the
# first 8k of data. That means this method can return {} when
# there actually are watches (and this happens in practice in
# heavily loaded test environments). We should replace that
# method with something more robust.
chroot = self.zk_chroot_fixture.zookeeper_chroot
data = self.zk_client.client.command(b'wchp')
ret = {}
sessions = None
for line in data.split('\n'):
if line.startswith('\t'):
if sessions is not None:
sessions.append(line.strip())
else:
line = line.strip()
if not line:
continue
if line.startswith(chroot):
line = line[len(chroot):]
sessions = []
ret[line] = sessions
else:
sessions = None
return ret
def getZKTree(self, path, ret=None):
"""Return the contents of a ZK tree as a dictionary"""
if ret is None:
ret = {}
for key in self.zk_client.client.get_children(path):
subpath = os.path.join(path, key)
ret[subpath] = self.zk_client.client.get(
os.path.join(path, key))[0]
self.getZKTree(subpath, ret)
return ret
def getZKPaths(self, path):
return list(self.getZKTree(path).keys())
def getZKObject(self, path):
compressed_data, zstat = self.zk_client.client.get(path)
try:
data = zlib.decompress(compressed_data)
except zlib.error:
# Fallback for old, uncompressed data
data = compressed_data
return data
class SymLink(object):
def __init__(self, target):
self.target = target
class SchedulerTestApp:
def __init__(self, log, config, changes, additional_event_queues,
upstream_root, poller_events,
git_url_with_auth, add_cleanup, validate_tenants,
wait_for_init, instance_id):
self.log = log
self.config = config
self.changes = changes
self.validate_tenants = validate_tenants
self.wait_for_init = wait_for_init
# Register connections from the config using fakes
self.connections = TestConnectionRegistry(
self.changes,
self.config,
additional_event_queues,
upstream_root,
poller_events,
git_url_with_auth,
add_cleanup,
)
self.connections.configure(self.config)
self.sched = TestScheduler(self.config, self.connections, self,
wait_for_init)
self.sched.log = logging.getLogger(f"zuul.Scheduler-{instance_id}")
self.sched._stats_interval = 1
if validate_tenants is None:
self.connections.registerScheduler(self.sched)
self.connections.load(self.sched.zk_client,
self.sched.component_registry)
# TODO (swestphahl): Can be removed when we no longer use global
# management events.
self.event_queues = [
self.sched.reconfigure_event_queue,
]
def start(self, validate_tenants=None):
self.sched.start()
if validate_tenants is None:
self.sched.prime(self.config)
else:
self.sched.validateTenants(self.config, validate_tenants)
def fullReconfigure(self, command_socket=False):
try:
if command_socket:
command_socket = self.sched.config.get(
'scheduler', 'command_socket')
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
s.connect(command_socket)
s.sendall('full-reconfigure\n'.encode('utf8'))
else:
self.sched.reconfigure(self.config)
except Exception:
self.log.exception("Reconfiguration failed:")
def smartReconfigure(self, command_socket=False):
try:
if command_socket:
command_socket = self.sched.config.get(
'scheduler', 'command_socket')
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
s.connect(command_socket)
s.sendall('smart-reconfigure\n'.encode('utf8'))
else:
self.sched.reconfigure(self.config, smart=True)
except Exception:
self.log.exception("Reconfiguration failed:")
def tenantReconfigure(self, tenants, command_socket=False):
try:
if command_socket:
command_socket = self.sched.config.get(
'scheduler', 'command_socket')
args = json.dumps(tenants)
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
s.connect(command_socket)
s.sendall(f'tenant-reconfigure {args}\n'.
encode('utf8'))
else:
self.sched.reconfigure(
self.config, smart=False, tenants=tenants)
except Exception:
self.log.exception("Reconfiguration failed:")
class SchedulerTestManager:
def __init__(self, validate_tenants, wait_for_init):
self.instances = []
def create(self, log, config, changes, additional_event_queues,
upstream_root, poller_events, git_url_with_auth,
add_cleanup, validate_tenants, wait_for_init):
# Since the config contains a regex we cannot use copy.deepcopy()
# as this will raise an exception with Python <3.7
config_data = StringIO()
config.write(config_data)
config_data.seek(0)
scheduler_config = ConfigParser()
scheduler_config.read_file(config_data)
instance_id = len(self.instances)
# Ensure a unique command socket per scheduler instance
command_socket = os.path.join(
os.path.dirname(config.get("scheduler", "command_socket")),
f"scheduler-{instance_id}.socket"
)
scheduler_config.set("scheduler", "command_socket", command_socket)
app = SchedulerTestApp(log, scheduler_config, changes,
additional_event_queues, upstream_root,
poller_events,
git_url_with_auth, add_cleanup,
validate_tenants, wait_for_init, instance_id)
self.instances.append(app)
return app
def __len__(self):
return len(self.instances)
def __getitem__(self, item):
return self.instances[item]
def __setitem__(self, key, value):
raise Exception("Not implemented, use create method!")
def __delitem__(self, key):
del self.instances[key]
def __iter__(self):
return iter(self.instances)
@property
def first(self):
if len(self.instances) == 0:
raise Exception("No scheduler!")
return self.instances[0]
def filter(self, matcher=None):
thefcn = None
if type(matcher) is list:
def fcn(_, app):
return app in matcher
thefcn = fcn
elif type(matcher).__name__ == 'function':
thefcn = matcher
return [e[1] for e in enumerate(self.instances)
if thefcn is None or thefcn(e[0], e[1])]
def execute(self, function, matcher=None):
for instance in self.filter(matcher):
function(instance)
class ZuulTestCase(BaseTestCase):
"""A test case with a functioning Zuul.
The following class variables are used during test setup and can
be overridden by subclasses but are effectively read-only once a
test method starts running:
:cvar str config_file: This points to the main zuul config file
within the fixtures directory. Subclasses may override this
to obtain a different behavior.
:cvar str tenant_config_file: This is the tenant config file
(which specifies from what git repos the configuration should
be loaded). It defaults to the value specified in
`config_file` but can be overridden by subclasses to obtain a
different tenant/project layout while using the standard main
configuration. See also the :py:func:`simple_layout`
decorator.
:cvar str tenant_config_script_file: This is the tenant config script
file. This attribute has the same meaning as tenant_config_file
except that the tenant configuration is loaded from a script.
When this attribute is set then tenant_config_file is ignored
by the scheduler.
:cvar bool create_project_keys: Indicates whether Zuul should
auto-generate keys for each project, or whether the test
infrastructure should insert dummy keys to save time during
startup. Defaults to False.
:cvar int log_console_port: The zuul_stream/zuul_console port.
The following are instance variables that are useful within test
methods:
:ivar FakeGerritConnection fake_<connection>:
A :py:class:`~tests.base.FakeGerritConnection` will be
instantiated for each connection present in the config file
and stored here. For instance, `fake_gerrit` will hold the
FakeGerritConnection object for a connection named `gerrit`.
:ivar RecordingExecutorServer executor_server: An instance of
:py:class:`~tests.base.RecordingExecutorServer` which is the
Ansible execute server used to run jobs for this test.
:ivar list builds: A list of :py:class:`~tests.base.FakeBuild` objects
representing currently running builds. They are appended to
the list in the order they are executed, and removed from this
list upon completion.
:ivar list history: A list of :py:class:`~tests.base.BuildHistory`
objects representing completed builds. They are appended to
the list in the order they complete.
"""
config_file: str = 'zuul.conf'
run_ansible: bool = False
create_project_keys: bool = False
use_ssl: bool = False
git_url_with_auth: bool = False
log_console_port: int = 19885
validate_tenants = None
wait_for_init = None
scheduler_count = SCHEDULER_COUNT
def __getattr__(self, name):
"""Allows to access fake connections the old way, e.g., using
`fake_gerrit` for FakeGerritConnection.
This will access the connection of the first (default) scheduler
(`self.scheds.first`). To access connections of a different
scheduler use `self.scheds[{X}].connections.fake_{NAME}`.
"""
if name.startswith('fake_') and\
hasattr(self.scheds.first.connections, name):
return getattr(self.scheds.first.connections, name)
raise AttributeError("'ZuulTestCase' object has no attribute '%s'"
% name)
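# Illustrative sketch of the attribute access described above (assumes a
# connection named 'gerrit' in the config, so the fake is 'fake_gerrit'):
#   change = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')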
def _startMerger(self):
self.merge_server = zuul.merger.server.MergeServer(
self.config, self.scheds.first.connections
)
self.merge_server.start()
def _setupModelPin(self):
# Add a fake scheduler to the system that is on the old model
# version.
test_name = self.id().split('.')[-1]
test = getattr(self, test_name)
if hasattr(test, '__model_version__'):
version = getattr(test, '__model_version__')
self.model_test_component_info = SchedulerComponent(
self.zk_client, 'test_component')
self.model_test_component_info.register(version)
def setUp(self):
super(ZuulTestCase, self).setUp()
self.setupZK()
self.fake_nodepool = FakeNodepool(self.zk_chroot_fixture)
if not KEEP_TEMPDIRS:
tmp_root = self.useFixture(fixtures.TempDir(
rootdir=os.environ.get("ZUUL_TEST_ROOT"))
).path
else:
tmp_root = tempfile.mkdtemp(
dir=os.environ.get("ZUUL_TEST_ROOT", None))
self.test_root = os.path.join(tmp_root, "zuul-test")
self.upstream_root = os.path.join(self.test_root, "upstream")
self.merger_src_root = os.path.join(self.test_root, "merger-git")
self.executor_src_root = os.path.join(self.test_root, "executor-git")
self.state_root = os.path.join(self.test_root, "lib")
self.merger_state_root = os.path.join(self.test_root, "merger-lib")
self.executor_state_root = os.path.join(self.test_root, "executor-lib")
self.jobdir_root = os.path.join(self.test_root, "builds")
if os.path.exists(self.test_root):
shutil.rmtree(self.test_root)
os.makedirs(self.test_root)
os.makedirs(self.upstream_root)
os.makedirs(self.state_root)
os.makedirs(self.merger_state_root)
os.makedirs(self.executor_state_root)
os.makedirs(self.jobdir_root)
# Make per test copy of Configuration.
self.config = self.setup_config(self.config_file)
self.private_key_file = os.path.join(self.test_root, 'test_id_rsa')
if not os.path.exists(self.private_key_file):
src_private_key_file = os.environ.get(
'ZUUL_SSH_KEY',
os.path.join(FIXTURE_DIR, 'test_id_rsa'))
shutil.copy(src_private_key_file, self.private_key_file)
shutil.copy('{}.pub'.format(src_private_key_file),
'{}.pub'.format(self.private_key_file))
os.chmod(self.private_key_file, 0o0600)
for cfg_attr in ('tenant_config', 'tenant_config_script'):
if self.config.has_option('scheduler', cfg_attr):
cfg_value = self.config.get('scheduler', cfg_attr)
self.config.set(
'scheduler', cfg_attr,
os.path.join(FIXTURE_DIR, cfg_value))
self.config.set('scheduler', 'state_dir', self.state_root)
self.config.set(
'scheduler', 'command_socket',
os.path.join(self.test_root, 'scheduler.socket'))
if not self.config.has_option("keystore", "password"):
self.config.set("keystore", "password", 'keystorepassword')
self.config.set('merger', 'git_dir', self.merger_src_root)
self.config.set('executor', 'git_dir', self.executor_src_root)
self.config.set('executor', 'private_key_file', self.private_key_file)
self.config.set('executor', 'state_dir', self.executor_state_root)
self.config.set(
'executor', 'command_socket',
os.path.join(self.test_root, 'executor.socket'))
self.config.set(
'merger', 'command_socket',
os.path.join(self.test_root, 'merger.socket'))
self.config.set('web', 'listen_address', '::')
self.config.set('web', 'port', '0')
self.config.set(
'web', 'command_socket',
os.path.join(self.test_root, 'web.socket'))
self.statsd = FakeStatsd()
if self.config.has_section('statsd'):
self.config.set('statsd', 'port', str(self.statsd.port))
self.statsd.start()
self.config.set('zookeeper', 'hosts', self.zk_chroot_fixture.zk_hosts)
self.config.set('zookeeper', 'session_timeout', '30')
self.config.set('zookeeper', 'tls_cert',
self.zk_chroot_fixture.zookeeper_cert)
self.config.set('zookeeper', 'tls_key',
self.zk_chroot_fixture.zookeeper_key)
self.config.set('zookeeper', 'tls_ca',
self.zk_chroot_fixture.zookeeper_ca)
gerritsource.GerritSource.replication_timeout = 1.5
gerritsource.GerritSource.replication_retry_interval = 0.5
gerritconnection.GerritEventConnector.delay = 0.0
self.changes: Dict[str, Dict[str, Change]] = {}
self.additional_event_queues = []
self.zk_client = ZooKeeperClient.fromConfig(self.config)
self.zk_client.connect()
self._setupModelPin()
self._context_lock = SessionAwareLock(
self.zk_client.client, f"/test/{uuid.uuid4().hex}")
self.connection_event_queues = DefaultKeyDict(
lambda cn: ConnectionEventQueue(self.zk_client, cn)
)
# requires zk client
self.setupAllProjectKeys(self.config)
self.poller_events = {}
self._configureSmtp()
self._configureMqtt()
self._configureElasticsearch()
executor_connections = TestConnectionRegistry(
self.changes, self.config, self.additional_event_queues,
self.upstream_root, self.poller_events,
self.git_url_with_auth, self.addCleanup)
executor_connections.configure(self.config,
source_only=True)
self.executor_api = TestingExecutorApi(self.zk_client)
self.merger_api = TestingMergerApi(self.zk_client)
self.executor_server = RecordingExecutorServer(
self.config,
executor_connections,
jobdir_root=self.jobdir_root,
_run_ansible=self.run_ansible,
_test_root=self.test_root,
keep_jobdir=KEEP_TEMPDIRS,
log_console_port=self.log_console_port)
self.executor_server.start()
self.history = self.executor_server.build_history
self.builds = self.executor_server.running_builds
self.scheds = SchedulerTestManager(self.validate_tenants,
self.wait_for_init)
for _ in range(self.scheduler_count):
self.createScheduler()
self.merge_server = None
# Cleanups are run in reverse order
self.addCleanup(self.assertCleanShutdown)
self.addCleanup(self.shutdown)
self.addCleanup(self.assertFinalState)
self.scheds.execute(
lambda app: app.start(self.validate_tenants))
def createScheduler(self):
return self.scheds.create(
self.log, self.config, self.changes,
self.additional_event_queues, self.upstream_root,
self.poller_events, self.git_url_with_auth,
self.addCleanup, self.validate_tenants, self.wait_for_init)
def createZKContext(self, lock=None):
if lock is None:
# Just make sure the lock is acquired
self._context_lock.acquire(blocking=False)
lock = self._context_lock
return zkobject.ZKContext(self.zk_client, lock,
None, self.log)
def __event_queues(self, matcher) -> List[Queue]:
# TODO (swestphahl): Can be removed when we no longer use global
# management events.
sched_queues = map(lambda app: app.event_queues,
self.scheds.filter(matcher))
return [item for sublist in sched_queues for item in sublist] + \
self.additional_event_queues
def _configureSmtp(self):
# Set up smtp related fakes
# TODO(jhesketh): This should come from lib.connections for better
# coverage
# Register connections from the config
self.smtp_messages = []
def FakeSMTPFactory(*args, **kw):
args = [self.smtp_messages] + list(args)
return FakeSMTP(*args, **kw)
self.useFixture(fixtures.MonkeyPatch('smtplib.SMTP', FakeSMTPFactory))
def _configureMqtt(self):
# Set up mqtt related fakes
self.mqtt_messages = []
def fakeMQTTPublish(_, topic, msg, qos, zuul_event_id):
log = logging.getLogger('zuul.FakeMQTTPublish')
log.info('Publishing message via mqtt')
self.mqtt_messages.append({'topic': topic, 'msg': msg, 'qos': qos})
self.useFixture(fixtures.MonkeyPatch(
'zuul.driver.mqtt.mqttconnection.MQTTConnection.publish',
fakeMQTTPublish))
def _configureElasticsearch(self):
# Set up Elasticsearch related fakes
def getElasticsearchConnection(driver, name, config):
con = FakeElasticsearchConnection(
driver, name, config)
return con
self.useFixture(fixtures.MonkeyPatch(
'zuul.driver.elasticsearch.ElasticsearchDriver.getConnection',
getElasticsearchConnection))
def setup_config(self, config_file: str):
# This creates the per-test configuration object. It can be
# overridden by subclasses, but should not need to be since it
# obeys the config_file and tenant_config_file attributes.
config = configparser.ConfigParser()
config.read(os.path.join(FIXTURE_DIR, config_file))
sections = [
'zuul', 'scheduler', 'executor', 'merger', 'web', 'zookeeper',
'keystore', 'database',
]
for section in sections:
if not config.has_section(section):
config.add_section(section)
def _setup_fixture(config, section_name):
if (config.get(section_name, 'dburi') ==
'$MYSQL_FIXTURE_DBURI$'):
f = MySQLSchemaFixture()
self.useFixture(f)
config.set(section_name, 'dburi', f.dburi)
elif (config.get(section_name, 'dburi') ==
'$POSTGRESQL_FIXTURE_DBURI$'):
f = PostgresqlSchemaFixture()
self.useFixture(f)
config.set(section_name, 'dburi', f.dburi)
for section_name in config.sections():
con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$',
section_name, re.I)
if not con_match:
continue
if config.get(section_name, 'driver') == 'sql':
_setup_fixture(config, section_name)
if 'database' in config.sections():
_setup_fixture(config, 'database')
if 'tracing' in config.sections():
self.otlp = OTLPFixture()
self.useFixture(self.otlp)
self.useFixture(fixtures.MonkeyPatch(
'zuul.lib.tracing.Tracing.processor_class',
opentelemetry.sdk.trace.export.SimpleSpanProcessor))
config.set('tracing', 'endpoint',
f'http://localhost:{self.otlp.port}')
if not self.setupSimpleLayout(config):
tenant_config = None
for cfg_attr in ('tenant_config', 'tenant_config_script'):
if hasattr(self, cfg_attr + '_file'):
if getattr(self, cfg_attr + '_file'):
value = getattr(self, cfg_attr + '_file')
config.set('scheduler', cfg_attr, value)
tenant_config = value
else:
config.remove_option('scheduler', cfg_attr)
if tenant_config:
git_path = os.path.join(
os.path.dirname(
os.path.join(FIXTURE_DIR, tenant_config)),
'git')
if os.path.exists(git_path):
for reponame in os.listdir(git_path):
project = reponame.replace('_', '/')
self.copyDirToRepo(project,
os.path.join(git_path, reponame))
# Make test_root persist after ansible run for .flag test
config.set('executor', 'trusted_rw_paths', self.test_root)
return config
def setupSimpleLayout(self, config: ConfigParser):
# If the test method has been decorated with a simple_layout,
# use that instead of the class tenant_config_file. Set up a
# single config-project with the specified layout, and
# initialize repos for all of the 'project' entries which
# appear in the layout.
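# For illustration, a test method might be decorated like
#   @simple_layout('layouts/basic.yaml')
# (the layout path above is hypothetical).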
test_name = self.id().split('.')[-1]
test = getattr(self, test_name)
if hasattr(test, '__simple_layout__'):
path, driver = getattr(test, '__simple_layout__')
else:
return False
files = {}
path = os.path.join(FIXTURE_DIR, path)
with open(path) as f:
data = f.read()
layout = yaml.safe_load(data)
files['zuul.yaml'] = data
untrusted_projects = []
for item in layout:
if 'project' in item:
name = item['project']['name']
if name.startswith('^'):
continue
untrusted_projects.append(name)
self.init_repo(name)
self.addCommitToRepo(name, 'initial commit',
files={'README': ''},
branch='master', tag='init')
if 'job' in item:
if 'run' in item['job']:
files['%s' % item['job']['run']] = ''
for fn in zuul.configloader.as_list(
item['job'].get('pre-run', [])):
files['%s' % fn] = ''
for fn in zuul.configloader.as_list(
item['job'].get('post-run', [])):
files['%s' % fn] = ''
root = os.path.join(self.test_root, "config")
if not os.path.exists(root):
os.makedirs(root)
f = tempfile.NamedTemporaryFile(dir=root, delete=False)
temp_config = [{
'tenant': {
'name': 'tenant-one',
'source': {
driver: {
'config-projects': ['org/common-config'],
'untrusted-projects': untrusted_projects}}}}]
f.write(yaml.dump(temp_config).encode('utf8'))
f.close()
config.set('scheduler', 'tenant_config',
os.path.join(FIXTURE_DIR, f.name))
self.init_repo('org/common-config')
self.addCommitToRepo('org/common-config', 'add content from fixture',
files, branch='master', tag='init')
return True
def setupAllProjectKeys(self, config: ConfigParser):
if self.create_project_keys:
return
path = config.get('scheduler', 'tenant_config')
with open(os.path.join(FIXTURE_DIR, path)) as f:
tenant_config = yaml.safe_load(f.read())
for tenant in tenant_config:
if 'tenant' not in tenant.keys():
continue
sources = tenant['tenant']['source']
for source, conf in sources.items():
for project in conf.get('config-projects', []):
self.setupProjectKeys(source, project)
for project in conf.get('untrusted-projects', []):
self.setupProjectKeys(source, project)
def setupProjectKeys(self, source, project):
# Make sure we set up an RSA key for the project so that we
# don't spend time generating one:
if isinstance(project, dict):
project = list(project.keys())[0]
password = self.config.get("keystore", "password")
keystore = zuul.lib.keystorage.KeyStorage(
self.zk_client, password=password)
import_keys = {}
import_data = {'keys': import_keys}
path = keystore.getProjectSecretsKeysPath(source, project)
with open(os.path.join(FIXTURE_DIR, 'secrets.json'), 'rb') as i:
import_keys[path] = json.load(i)
# ssh key
path = keystore.getSSHKeysPath(source, project)
with open(os.path.join(FIXTURE_DIR, 'ssh.json'), 'rb') as i:
import_keys[path] = json.load(i)
keystore.importKeys(import_data, False)
def copyDirToRepo(self, project, source_path):
self.init_repo(project)
files = {}
for (dirpath, dirnames, filenames) in os.walk(source_path):
for filename in filenames:
test_tree_filepath = os.path.join(dirpath, filename)
common_path = os.path.commonprefix([test_tree_filepath,
source_path])
relative_filepath = test_tree_filepath[len(common_path) + 1:]
with open(test_tree_filepath, 'rb') as f:
content = f.read()
# dynamically create symlinks if the content is of the form
# symlink: <target>
match = re.match(rb'symlink: ([^\s]+)', content)
if match:
content = SymLink(match.group(1))
files[relative_filepath] = content
self.addCommitToRepo(project, 'add content from fixture',
files, branch='master', tag='init')
def assertNodepoolState(self):
# Make sure that there are no pending requests
requests = None
for x in iterate_timeout(30, "zk getNodeRequests"):
try:
requests = self.fake_nodepool.getNodeRequests()
break
except kazoo.exceptions.ConnectionLoss:
# NOTE(pabelanger): We lost access to zookeeper, iterate again
pass
self.assertEqual(len(requests), 0)
nodes = None
for x in iterate_timeout(30, "zk getNodes"):
try:
nodes = self.fake_nodepool.getNodes()
break
except kazoo.exceptions.ConnectionLoss:
# NOTE(pabelanger): We lost access to zookeeper, iterate again
pass
for node in nodes:
self.assertFalse(node['_lock'], "Node %s is locked" %
(node['_oid'],))
def assertNoGeneratedKeys(self):
# Make sure that Zuul did not generate any project keys
# (unless it was supposed to).
if self.create_project_keys:
return
test_keys = []
key_fns = ['private.pem', 'ssh.pem']
for fn in key_fns:
with open(os.path.join(FIXTURE_DIR, fn)) as i:
test_keys.append(i.read())
key_root = os.path.join(self.state_root, 'keys')
for root, dirname, files in os.walk(key_root):
for fn in files:
if fn == '.version':
continue
with open(os.path.join(root, fn)) as f:
self.assertTrue(f.read() in test_keys)
def assertSQLState(self):
reporter = self.scheds.first.connections.getSqlReporter(None)
with self.scheds.first.connections.getSqlConnection().\
engine.connect() as conn:
try:
result = conn.execute(
sqlalchemy.sql.select(
reporter.connection.zuul_buildset_table)
)
except sqlalchemy.exc.ProgrammingError:
# Table doesn't exist
return
for buildset in result.fetchall():
self.assertIsNotNone(buildset.result)
result = conn.execute(
sqlalchemy.sql.select(reporter.connection.zuul_build_table)
)
for build in result.fetchall():
self.assertIsNotNone(build.result)
self.assertIsNotNone(build.start_time)
self.assertIsNotNone(build.end_time)
def assertNoPipelineExceptions(self):
for tenant in self.scheds.first.sched.abide.tenants.values():
for pipeline in tenant.layout.pipelines.values():
self.assertEqual(0, pipeline._exception_count)
def assertFinalState(self):
self.log.debug("Assert final state")
# Make sure no jobs are running
self.assertEqual({}, self.executor_server.job_workers)
# Make sure that git.Repo objects have been garbage collected.
gc.disable()
try:
gc.collect()
for obj in gc.get_objects():
if isinstance(obj, git.Repo):
self.log.debug("Leaked git repo object: 0x%x %s" %
(id(obj), repr(obj)))
finally:
gc.enable()
if len(self.scheds) > 1:
self.refreshPipelines(self.scheds.first.sched)
self.assertEmptyQueues()
self.assertNodepoolState()
self.assertNoGeneratedKeys()
self.assertSQLState()
self.assertCleanZooKeeper()
ipm = zuul.manager.independent.IndependentPipelineManager
for tenant in self.scheds.first.sched.abide.tenants.values():
for pipeline in tenant.layout.pipelines.values():
if isinstance(pipeline.manager, ipm):
self.assertEqual(len(pipeline.queues), 0)
self.assertNoPipelineExceptions()
def shutdown(self):
self.log.debug("Shutting down after tests")
self.executor_server.hold_jobs_in_build = False
self.executor_server.release()
self.scheds.execute(lambda app: app.sched.executor.stop())
if self.merge_server:
self.merge_server.stop()
self.merge_server.join()
self.executor_server.stop()
self.executor_server.join()
self.scheds.execute(lambda app: app.sched.stop())
self.scheds.execute(lambda app: app.sched.join())
self.statsd.stop()
self.statsd.join()
self.fake_nodepool.stop()
self.zk_client.disconnect()
self.printHistory()
# We whitelist watchdog threads as they have relatively long delays
# before noticing they should exit, but they should exit on their own.
whitelist = ['watchdog',
'socketserver_Thread',
'cleanup start',
]
# Ignore threads that start with
# * Thread- : Kazoo TreeCache
# * Dummy- : Seen during debugging in VS Code
# * pydevd : Debug helper threads of pydevd (used by many IDEs)
# * ptvsd : Debug helper threads used by VS Code
threads = [t for t in threading.enumerate()
if t.name not in whitelist
and not t.name.startswith("Thread-")
and not t.name.startswith('Dummy-')
and not t.name.startswith('pydevd.')
and not t.name.startswith('ptvsd.')
and not t.name.startswith('OTLPFixture_')
]
if len(threads) > 1:
thread_map = dict(map(lambda x: (x.ident, x.name),
threading.enumerate()))
log_str = ""
for thread_id, stack_frame in sys._current_frames().items():
log_str += "Thread id: %s, name: %s\n" % (
thread_id, thread_map.get(thread_id, 'UNKNOWN'))
log_str += "".join(traceback.format_stack(stack_frame))
self.log.debug(log_str)
raise Exception("More than one thread is running: %s" % threads)
def assertCleanShutdown(self):
pass
def init_repo(self, project, tag=None):
parts = project.split('/')
path = os.path.join(self.upstream_root, *parts[:-1])
if not os.path.exists(path):
os.makedirs(path)
path = os.path.join(self.upstream_root, project)
repo = git.Repo.init(path)
with repo.config_writer() as config_writer:
config_writer.set_value('user', 'email', '[email protected]')
config_writer.set_value('user', 'name', 'User Name')
repo.index.commit('initial commit')
master = repo.create_head('master')
if tag:
repo.create_tag(tag)
repo.head.reference = master
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
def create_branch(self, project, branch, commit_filename='README'):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
fn = os.path.join(path, commit_filename)
branch_head = repo.create_head(branch)
repo.head.reference = branch_head
f = open(fn, 'a')
f.write("test %s\n" % branch)
f.close()
repo.index.add([fn])
repo.index.commit('%s commit' % branch)
repo.head.reference = repo.heads['master']
repo.head.reset(working_tree=True)
repo.git.clean('-x', '-f', '-d')
def delete_branch(self, project, branch):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
repo.head.reference = repo.heads['master']
repo.head.reset(working_tree=True)
repo.delete_head(repo.heads[branch], force=True)
def create_commit(self, project, files=None, delete_files=None,
head='master', message='Creating a fake commit',
**kwargs):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
repo.head.reference = repo.heads[head]
repo.head.reset(index=True, working_tree=True)
files = files or {"README": "creating fake commit\n"}
for name, content in files.items():
file_name = os.path.join(path, name)
with open(file_name, 'a') as f:
f.write(content)
repo.index.add([file_name])
delete_files = delete_files or []
for name in delete_files:
file_name = os.path.join(path, name)
repo.index.remove([file_name])
commit = repo.index.commit(message, **kwargs)
return commit.hexsha
def orderedRelease(self, count=None):
# Run one build at a time to ensure non-race order:
i = 0
while len(self.builds):
self.release(self.builds[0])
self.waitUntilSettled()
i += 1
if count is not None and i >= count:
break
def getSortedBuilds(self):
"Return the list of currently running builds sorted by name"
return sorted(self.builds, key=lambda x: x.name)
def getCurrentBuilds(self):
for tenant in self.scheds.first.sched.abide.tenants.values():
for pipeline in tenant.layout.pipelines.values():
for item in pipeline.getAllItems():
for build in item.current_build_set.builds.values():
yield build
def release(self, job):
job.release()
@property
def sched_zk_nodepool(self):
return self.scheds.first.sched.nodepool.zk_nodepool
@property
def hold_jobs_in_queue(self):
return self.executor_api.hold_in_queue
@hold_jobs_in_queue.setter
def hold_jobs_in_queue(self, hold_in_queue):
"""Helper method to set hold_in_queue on all involved Executor APIs"""
self.executor_api.hold_in_queue = hold_in_queue
for app in self.scheds:
app.sched.executor.executor_api.hold_in_queue = hold_in_queue
@property
def hold_merge_jobs_in_queue(self):
return self.merger_api.hold_in_queue
@hold_merge_jobs_in_queue.setter
def hold_merge_jobs_in_queue(self, hold_in_queue):
"""Helper method to set hold_in_queue on all involved Merger APIs"""
self.merger_api.hold_in_queue = hold_in_queue
for app in self.scheds:
app.sched.merger.merger_api.hold_in_queue = hold_in_queue
@property
def merge_job_history(self):
history = defaultdict(list)
for app in self.scheds:
for job_type, jobs in app.sched.merger.merger_api.history.items():
history[job_type].extend(jobs)
return history
@merge_job_history.deleter
def merge_job_history(self):
for app in self.scheds:
app.sched.merger.merger_api.history.clear()
def waitUntilNodeCacheSync(self, zk_nodepool):
"""Wait until the node cache on the zk_nodepool object is in sync"""
for _ in iterate_timeout(60, 'wait for node cache sync'):
cache_state = {}
zk_state = {}
for n in self.fake_nodepool.getNodes():
zk_state[n['_oid']] = n['state']
for nid in zk_nodepool.getNodes(cached=True):
n = zk_nodepool.getNode(nid)
cache_state[n.id] = n.state
if cache_state == zk_state:
return
def __haveAllBuildsReported(self):
# The build requests will be deleted from ZooKeeper once the
# scheduler processed their result event. Thus, as long as
# there are build requests left in ZooKeeper, the system is
# not stable.
for build in self.history:
try:
self.zk_client.client.get(build.build_request_ref)
except NoNodeError:
# It has already been reported
continue
# It hasn't been reported yet.
return False
return True
def __areAllBuildsWaiting(self):
# Look up the queued build requests directly from ZooKeeper
queued_build_requests = list(self.executor_api.all())
seen_builds = set()
# Always ignore builds which are on hold
for build_request in queued_build_requests:
seen_builds.add(build_request.uuid)
if build_request.state in (BuildRequest.HOLD,):
continue
# Check if the build is currently processed by the
# RecordingExecutorServer.
worker_build = self.executor_server.job_builds.get(
build_request.uuid)
if worker_build:
if worker_build.paused:
# Avoid a race between setting the resume flag and
# the job actually resuming. If the build is
# paused, make sure that there is no resume flag
# and if that's true, that the build is still
# paused. If there's no resume flag between two
# checks of the paused attr, it should still be
# paused.
if not self.zk_client.client.exists(
build_request.path + '/resume'):
if worker_build.paused:
continue
if worker_build.isWaiting():
continue
self.log.debug("%s is running", worker_build)
return False
else:
self.log.debug("%s is unassigned", build_request)
return False
# Wait until all running builds have finished on the executor
# and all job workers are cleaned up. Otherwise there
# could be a short window in which the build is finished
# (and reported), but the job cleanup is not yet finished on
# the executor. During this time the test could settle, but
# assertFinalState() will fail because there are still
# job_workers present on the executor.
for build_uuid in self.executor_server.job_workers.keys():
if build_uuid not in seen_builds:
log = get_annotated_logger(
self.log, event=None, build=build_uuid
)
log.debug("Build is not finalized")
return False
return True
def __areAllNodeRequestsComplete(self, matcher=None):
if self.fake_nodepool.paused:
return True
# Check ZK and the scheduler cache and make sure they are
# in sync.
for app in self.scheds.filter(matcher):
sched = app.sched
nodepool = app.sched.nodepool
with nodepool.zk_nodepool._callback_lock:
for req in self.fake_nodepool.getNodeRequests():
if req['state'] != model.STATE_FULFILLED:
return False
r2 = nodepool.zk_nodepool._node_request_cache.get(
req['_oid'])
if r2 and r2.state != req['state']:
return False
if req and not r2:
return False
tenant_name = r2.tenant_name
pipeline_name = r2.pipeline_name
if sched.pipeline_result_events[tenant_name][
pipeline_name
].hasEvents():
return False
return True
def __areAllMergeJobsWaiting(self):
# Look up the queued merge jobs directly from ZooKeeper
queued_merge_jobs = list(self.merger_api.all())
# Always ignore merge jobs which are on hold
for job in queued_merge_jobs:
if job.state != MergeRequest.HOLD:
return False
return True
def __eventQueuesEmpty(self, matcher=None) -> Generator[bool, None, None]:
for event_queue in self.__event_queues(matcher):
yield not event_queue.unfinished_tasks
def __eventQueuesJoin(self, matcher) -> None:
for app in self.scheds.filter(matcher):
for event_queue in app.event_queues:
event_queue.join()
for event_queue in self.additional_event_queues:
event_queue.join()
def __areZooKeeperEventQueuesEmpty(self, matcher=None, debug=False):
for sched in map(lambda app: app.sched, self.scheds.filter(matcher)):
for connection_name in sched.connections.connections:
if self.connection_event_queues[connection_name].hasEvents():
if debug:
self.log.debug(
f"Connection queue {connection_name} not empty")
return False
for tenant in sched.abide.tenants.values():
if sched.management_events[tenant.name].hasEvents():
if debug:
self.log.debug(
f"Tenant management queue {tenant.name} not empty")
return False
if sched.trigger_events[tenant.name].hasEvents():
if debug:
self.log.debug(
f"Tenant trigger queue {tenant.name} not empty")
return False
for pipeline_name in tenant.layout.pipelines:
if sched.pipeline_management_events[tenant.name][
pipeline_name
].hasEvents():
if debug:
self.log.debug(
"Pipeline management queue "
f"{tenant.name} {pipeline_name} not empty")
return False
if sched.pipeline_trigger_events[tenant.name][
pipeline_name
].hasEvents():
if debug:
self.log.debug(
"Pipeline trigger queue "
f"{tenant.name} {pipeline_name} not empty")
return False
if sched.pipeline_result_events[tenant.name][
pipeline_name
].hasEvents():
if debug:
self.log.debug(
"Pipeline result queue "
f"{tenant.name} {pipeline_name} not empty")
return False
return True
def __areAllSchedulersPrimed(self, matcher=None):
for app in self.scheds.filter(matcher):
if app.sched.last_reconfigured is None:
return False
return True
def waitUntilSettled(self, msg="", matcher=None) -> None:
self.log.debug("Waiting until settled... (%s)", msg)
start = time.time()
i = 0
while True:
i = i + 1
if time.time() - start > self.wait_timeout:
self.log.error("Timeout waiting for Zuul to settle")
self.log.debug("All schedulers primed: %s",
self.__areAllSchedulersPrimed(matcher))
self._logQueueStatus(
self.log.error, matcher,
self.__areZooKeeperEventQueuesEmpty(debug=True),
self.__areAllMergeJobsWaiting(),
self.__haveAllBuildsReported(),
self.__areAllBuildsWaiting(),
self.__areAllNodeRequestsComplete(),
all(self.__eventQueuesEmpty(matcher))
)
raise Exception("Timeout waiting for Zuul to settle")
# Make sure no new events show up while we're checking
self.executor_server.lock.acquire()
# have all build states propagated to zuul?
if self.__haveAllBuildsReported():
# Join ensures that the queue is empty _and_ events have been
# processed
self.__eventQueuesJoin(matcher)
for sched in map(lambda app: app.sched,
self.scheds.filter(matcher)):
sched.run_handler_lock.acquire()
if (self.__areAllSchedulersPrimed(matcher) and
self.__areAllMergeJobsWaiting() and
self.__haveAllBuildsReported() and
self.__areAllBuildsWaiting() and
self.__areAllNodeRequestsComplete() and
self.__areZooKeeperEventQueuesEmpty() and
all(self.__eventQueuesEmpty(matcher))):
# The queue empty check is placed at the end to
# ensure that if a component adds an event between
# when we locked the run handler and checked that the
# components were stable, we don't erroneously
# report that we are settled.
for sched in map(lambda app: app.sched,
self.scheds.filter(matcher)):
if len(self.scheds) > 1:
self.refreshPipelines(sched)
sched.run_handler_lock.release()
self.executor_server.lock.release()
self.log.debug("...settled after %.3f ms / %s loops (%s)",
time.time() - start, i, msg)
self.logState()
return
for sched in map(lambda app: app.sched,
self.scheds.filter(matcher)):
sched.run_handler_lock.release()
self.executor_server.lock.release()
for sched in map(lambda app: app.sched,
self.scheds.filter(matcher)):
sched.wake_event.wait(0.1)
# Let other threads work
time.sleep(0.1)
def refreshPipelines(self, sched):
ctx = None
for tenant in sched.abide.tenants.values():
with tenant_read_lock(self.zk_client, tenant.name):
for pipeline in tenant.layout.pipelines.values():
with (pipeline_lock(self.zk_client, tenant.name,
pipeline.name) as lock,
self.createZKContext(lock) as ctx):
with pipeline.manager.currentContext(ctx):
pipeline.state.refresh(ctx)
# return the context in case the caller wants to examine iops
return ctx
def _logQueueStatus(self, logger, matcher, all_zk_queues_empty,
all_merge_jobs_waiting, all_builds_reported,
all_builds_waiting, all_node_requests_completed,
all_event_queues_empty):
logger("Queue status:")
for event_queue in self.__event_queues(matcher):
is_empty = not event_queue.unfinished_tasks
self.log.debug(" %s: %s", event_queue, is_empty)
logger("All ZK event queues empty: %s", all_zk_queues_empty)
logger("All merge jobs waiting: %s", all_merge_jobs_waiting)
logger("All builds reported: %s", all_builds_reported)
logger("All builds waiting: %s", all_builds_waiting)
logger("All requests completed: %s", all_node_requests_completed)
logger("All event queues empty: %s", all_event_queues_empty)
def waitForPoll(self, poller, timeout=30):
self.log.debug("Wait for poll on %s", poller)
self.poller_events[poller].clear()
self.log.debug("Waiting for poll 1 on %s", poller)
self.poller_events[poller].wait(timeout)
self.poller_events[poller].clear()
self.log.debug("Waiting for poll 2 on %s", poller)
self.poller_events[poller].wait(timeout)
self.log.debug("Done waiting for poll on %s", poller)
def logState(self):
""" Log the current state of the system """
self.log.info("Begin state dump --------------------")
for build in self.history:
self.log.info("Completed build: %s" % build)
for build in self.builds:
self.log.info("Running build: %s" % build)
for tenant in self.scheds.first.sched.abide.tenants.values():
for pipeline in tenant.layout.pipelines.values():
for pipeline_queue in pipeline.queues:
if len(pipeline_queue.queue) != 0:
status = ''
for item in pipeline_queue.queue:
status += item.formatStatus()
self.log.info(
'Tenant %s pipeline %s queue %s contents:' % (
tenant.name, pipeline.name,
pipeline_queue.name))
for l in status.split('\n'):
if l.strip():
self.log.info(l)
self.log.info("End state dump --------------------")
def countJobResults(self, jobs, result):
jobs = filter(lambda x: x.result == result, jobs)
return len(list(jobs))
def getBuildByName(self, name):
for build in self.builds:
if build.name == name:
return build
raise Exception("Unable to find build %s" % name)
def assertJobNotInHistory(self, name, project=None):
for job in self.history:
if (project is None or
job.parameters['zuul']['project']['name'] == project):
self.assertNotEqual(job.name, name,
'Job %s found in history' % name)
def getJobFromHistory(self, name, project=None, result=None, branch=None):
for job in self.history:
if (job.name == name and
(project is None or
job.parameters['zuul']['project']['name'] == project) and
(result is None or job.result == result) and
(branch is None or
job.parameters['zuul']['branch'] == branch)):
return job
raise Exception("Unable to find job %s in history" % name)
def assertEmptyQueues(self):
# Make sure there are no orphaned jobs
for tenant in self.scheds.first.sched.abide.tenants.values():
for pipeline in tenant.layout.pipelines.values():
for pipeline_queue in pipeline.queues:
if len(pipeline_queue.queue) != 0:
print('pipeline %s queue %s contents %s' % (
pipeline.name, pipeline_queue.name,
pipeline_queue.queue))
self.assertEqual(len(pipeline_queue.queue), 0,
"Pipelines queues should be empty")
def assertCleanZooKeeper(self):
# Make sure there are no extraneous ZK nodes
client = self.merger_api
self.assertEqual(self.getZKPaths(client.REQUEST_ROOT), [])
self.assertEqual(self.getZKPaths(client.PARAM_ROOT), [])
self.assertEqual(self.getZKPaths(client.RESULT_ROOT), [])
self.assertEqual(self.getZKPaths(client.RESULT_DATA_ROOT), [])
self.assertEqual(self.getZKPaths(client.WAITER_ROOT), [])
self.assertEqual(self.getZKPaths(client.LOCK_ROOT), [])
def assertReportedStat(self, key, value=None, kind=None, timeout=5):
"""Check statsd output
Check statsd return values. A ``value`` should specify a
``kind``, however a ``kind`` may be specified without a
``value`` for a generic match. Leave both empty to just check
for key presence.
:arg str key: The statsd key
:arg str value: The expected value of the metric ``key``
:arg str kind: The expected type of the metric ``key`` For example
- ``c`` counter
- ``g`` gauge
- ``ms`` timing
- ``s`` set
:arg int timeout: How long to wait for the stat to appear
:returns: The value
"""
if value:
self.assertNotEqual(kind, None)
start = time.time()
while time.time() <= (start + timeout):
# Note our fake statsd just queues up results in a queue.
# We just keep going through them until we find one that
# matches, or fail out. If statsd pipelines are used,
# large single packets are sent with stats separated by
# newlines; thus we first flatten the stats out into
# single entries.
stats = list(itertools.chain.from_iterable(
[s.decode('utf-8').split('\n') for s in self.statsd.stats]))
# Check that we don't already have a counter value
# that we then try to extend a sub-key under; this doesn't
# work on the server. e.g.
# zuul.new.stat is already a counter
# zuul.new.stat.sub.value will silently not work
#
# note only valid for gauges and counters; timers are
# slightly different because statsd flushes them out but
# actually writes a bunch of different keys like "mean,
# std, count", so the "key" isn't so much a key, but a
# path to the folder where the actual values will be kept.
# Thus you can extend timer keys OK.
already_set_keys = set()
for stat in stats:
k, v = stat.split(':')
s_value, s_kind = v.split('|')
if s_kind == 'c' or s_kind == 'g':
already_set_keys.update([k])
for k in already_set_keys:
if key != k and key.startswith(k):
raise StatException(
"Key %s is a gauge/counter and "
"we are trying to set subkey %s" % (k, key))
for stat in stats:
k, v = stat.split(':')
s_value, s_kind = v.split('|')
if key == k:
if kind is None:
# key with no qualifiers is found
return s_value
# if no kind match, look for other keys
if kind != s_kind:
continue
if value:
# special-case value|ms because statsd can turn
# timing results into float of indeterminate
# length, hence foiling string matching.
if kind == 'ms':
if float(value) == float(s_value):
return s_value
if value == s_value:
return s_value
# otherwise keep looking for other matches
continue
# this key matches
return s_value
time.sleep(0.1)
stats = list(itertools.chain.from_iterable(
[s.decode('utf-8').split('\n') for s in self.statsd.stats]))
for stat in stats:
self.log.debug("Stat: %s", stat)
raise StatException("Key %s not found in reported stats" % key)
def assertUnReportedStat(self, key, value=None, kind=None):
try:
value = self.assertReportedStat(key, value=value,
kind=kind, timeout=0)
except StatException:
return
raise StatException("Key %s found in reported stats: %s" %
(key, value))
def assertRegexInList(self, regex, items):
r = re.compile(regex)
for x in items:
if r.search(x):
return
raise Exception("Regex '%s' not in %s" % (regex, items))
def assertRegexNotInList(self, regex, items):
r = re.compile(regex)
for x in items:
if r.search(x):
raise Exception("Regex '%s' in %s" % (regex, items))
def assertBuilds(self, builds):
"""Assert that the running builds are as described.
The list of running builds is examined and must match exactly
the list of builds described by the input.
:arg list builds: A list of dictionaries. Each item in the
list must match the corresponding build in the build
history, and each element of the dictionary must match the
corresponding attribute of the build.
"""
try:
self.assertEqual(len(self.builds), len(builds))
for i, d in enumerate(builds):
for k, v in d.items():
self.assertEqual(
getattr(self.builds[i], k), v,
"Element %i in builds does not match" % (i,))
except Exception:
if not self.builds:
self.log.error("No running builds")
for build in self.builds:
self.log.error("Running build: %s" % build)
raise
def assertHistory(self, history, ordered=True):
"""Assert that the completed builds are as described.
The list of completed builds is examined and must match
exactly the list of builds described by the input.
:arg list history: A list of dictionaries. Each item in the
list must match the corresponding build in the build
history, and each element of the dictionary must match the
corresponding attribute of the build.
:arg bool ordered: If true, the history must match the order
supplied, if false, the builds are permitted to have
arrived in any order.
"""
def matches(history_item, item):
for k, v in item.items():
if getattr(history_item, k) != v:
return False
return True
try:
self.assertEqual(len(self.history), len(history))
if ordered:
for i, d in enumerate(history):
if not matches(self.history[i], d):
raise Exception(
"Element %i in history does not match %s" %
(i, self.history[i]))
else:
unseen = self.history[:]
for i, d in enumerate(history):
found = False
for unseen_item in unseen:
if matches(unseen_item, d):
found = True
unseen.remove(unseen_item)
break
if not found:
raise Exception("No match found for element %i %s "
"in history" % (i, d))
if unseen:
raise Exception("Unexpected items in history")
except Exception:
for build in self.history:
self.log.error("Completed build: %s" % build)
if not self.history:
self.log.error("No completed builds")
raise
def printHistory(self):
"""Log the build history.
This can be useful during tests to summarize what jobs have
completed.
"""
if not self.history:
self.log.debug("Build history: no builds ran")
return
self.log.debug("Build history:")
for build in self.history:
self.log.debug(build)
def addTagToRepo(self, project, name, sha):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
repo.git.tag(name, sha)
def delTagFromRepo(self, project, name):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
repo.git.tag('-d', name)
def addCommitToRepo(self, project, message, files,
branch='master', tag=None):
path = os.path.join(self.upstream_root, project)
repo = git.Repo(path)
repo.head.reference = branch
repo.head.reset(working_tree=True)
for fn, content in files.items():
fn = os.path.join(path, fn)
try:
os.makedirs(os.path.dirname(fn))
except OSError:
pass
if isinstance(content, SymLink):
os.symlink(content.target, fn)
else:
mode = 'w'
if isinstance(content, bytes):
# the file fixtures are loaded as bytes such that
# we also support binary files
mode = 'wb'
with open(fn, mode) as f:
f.write(content)
repo.index.add([fn])
commit = repo.index.commit(message)
before = repo.heads[branch].commit
repo.heads[branch].commit = commit
repo.head.reference = branch
repo.git.clean('-x', '-f', '-d')
repo.heads[branch].checkout()
if tag:
repo.create_tag(tag)
return before
def commitConfigUpdate(self, project_name, source_name):
"""Commit an update to zuul.yaml
This overwrites the zuul.yaml in the specified project with
the contents specified.
:arg str project_name: The name of the project containing
zuul.yaml (e.g., common-config)
:arg str source_name: The path to the file (underneath the
test fixture directory) whose contents should be used to
replace zuul.yaml.
"""
source_path = os.path.join(FIXTURE_DIR, source_name)
files = {}
with open(source_path, 'r') as f:
data = f.read()
layout = yaml.safe_load(data)
files['zuul.yaml'] = data
for item in layout:
if 'job' in item:
jobname = item['job']['name']
files['playbooks/%s.yaml' % jobname] = ''
before = self.addCommitToRepo(
project_name, 'Pulling content from %s' % source_name,
files)
return before
def newTenantConfig(self, source_name):
""" Use this to update the tenant config file in tests
This will update self.tenant_config_file to point to a temporary file
for the duration of this particular test. The content of that file will
be taken from FIXTURE_DIR/source_name
After the test the original value of self.tenant_config_file will be
restored.
:arg str source_name: The path of the file under
FIXTURE_DIR that will be used to populate the new tenant
config file.
"""
source_path = os.path.join(FIXTURE_DIR, source_name)
orig_tenant_config_file = self.tenant_config_file
with tempfile.NamedTemporaryFile(
delete=False, mode='wb') as new_tenant_config:
self.tenant_config_file = new_tenant_config.name
with open(source_path, mode='rb') as source_tenant_config:
new_tenant_config.write(source_tenant_config.read())
for app in self.scheds.instances:
app.config['scheduler']['tenant_config'] = self.tenant_config_file
self.config['scheduler']['tenant_config'] = self.tenant_config_file
self.setupAllProjectKeys(self.config)
self.log.debug(
'tenant_config_file = {}'.format(self.tenant_config_file))
def _restoreTenantConfig():
self.log.debug(
'restoring tenant_config_file = {}'.format(
orig_tenant_config_file))
os.unlink(self.tenant_config_file)
self.tenant_config_file = orig_tenant_config_file
self.config['scheduler']['tenant_config'] = orig_tenant_config_file
self.addCleanup(_restoreTenantConfig)
def addEvent(self, connection, event):
"""Inject a Fake (Gerrit) event.
This method accepts a JSON-encoded event and simulates Zuul
having received it from Gerrit. It could (and should)
eventually apply to any connection type, but is currently only
used with Gerrit connections. The name of the connection is
used to look up the corresponding server, and the event is
simulated as having been received by all Zuul connections
attached to that server. So if two Gerrit connections in Zuul
are connected to the same Gerrit server, and you invoke this
method specifying the name of one of them, the event will be
received by both.
.. note::
"self.fake_gerrit.addEvent" calls should be migrated to
this method.
:arg str connection: The name of the connection corresponding
to the gerrit server.
:arg str event: The JSON-encoded event.
"""
specified_conn = self.scheds.first.connections.connections[connection]
for conn in self.scheds.first.connections.connections.values():
if (isinstance(conn, specified_conn.__class__) and
specified_conn.server == conn.server):
conn.addEvent(event)
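    # Illustrative usage sketch (connection name and change helpers are
    # assumptions, not defined in this method): inject a Gerrit event by
    # connection name so every connection to the same server receives it.
    #
    #   change = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
    #   self.addEvent('gerrit', change.getPatchsetCreatedEvent(1))
    #   self.waitUntilSettled()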
def getUpstreamRepos(self, projects):
"""Return upstream git repo objects for the listed projects
:arg list projects: A list of strings, each the canonical name
of a project.
:returns: A dictionary of {name: repo} for every listed
project.
:rtype: dict
"""
repos = {}
for project in projects:
# FIXME(jeblair): the upstream root does not yet have a
# hostname component; that needs to be added, and this
# line removed:
tmp_project_name = '/'.join(project.split('/')[1:])
path = os.path.join(self.upstream_root, tmp_project_name)
repo = git.Repo(path)
repos[project] = repo
return repos
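    # Illustrative usage sketch (the canonical project name and expected sha
    # are assumptions): check the upstream repo state after a test run.
    #
    #   repos = self.getUpstreamRepos(['review.example.com/org/project'])
    #   repo = repos['review.example.com/org/project']
    #   self.assertEqual(expected_sha, repo.heads['master'].commit.hexsha)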
def addAutohold(self, tenant_name, project_name, job_name,
ref_filter, reason, count, node_hold_expiration):
request = HoldRequest()
request.tenant = tenant_name
request.project = project_name
request.job = job_name
request.ref_filter = ref_filter
request.reason = reason
request.max_count = count
request.node_expiration = node_hold_expiration
self.sched_zk_nodepool.storeHoldRequest(request)
class AnsibleZuulTestCase(ZuulTestCase):
"""ZuulTestCase but with an actual ansible executor running"""
run_ansible = True
@contextmanager
def jobLog(self, build):
"""Print job logs on assertion errors
This method is a context manager which, if it encounters an
        exception, adds the build log to the debug output.
:arg Build build: The build that's being asserted.
"""
try:
yield
except Exception:
path = os.path.join(self.jobdir_root, build.uuid,
'work', 'logs', 'job-output.txt')
with open(path) as f:
self.log.debug(f.read())
path = os.path.join(self.jobdir_root, build.uuid,
'work', 'logs', 'job-output.json')
with open(path) as f:
self.log.debug(f.read())
raise
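    # Illustrative usage sketch (job name and build lookup are assumptions):
    # wrap assertions about a build so its job logs are dumped on failure.
    #
    #   build = self.getJobFromHistory('some-job')
    #   with self.jobLog(build):
    #       self.assertEqual('SUCCESS', build.result)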
class SSLZuulTestCase(ZuulTestCase):
"""ZuulTestCase but using SSL when possible"""
use_ssl = True
class ZuulGithubAppTestCase(ZuulTestCase):
def setup_config(self, config_file: str):
config = super(ZuulGithubAppTestCase, self).setup_config(config_file)
for section_name in config.sections():
con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$',
section_name, re.I)
if not con_match:
continue
if config.get(section_name, 'driver') == 'github':
if (config.get(section_name, 'app_key',
fallback=None) ==
'$APP_KEY_FIXTURE$'):
config.set(section_name, 'app_key',
os.path.join(FIXTURE_DIR, 'app_key'))
return config
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/base.py
|
base.py
|
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import os
import sys
FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
'fixtures')
CONFIG_DIR = os.path.join(FIXTURE_DIR, 'config')
def print_file(title, path):
print('')
print(title)
print('-' * 78)
with open(path) as f:
print(f.read())
print('-' * 78)
def main():
parser = argparse.ArgumentParser(description='Print test layout.')
parser.add_argument(dest='config', nargs='?',
help='the test configuration name')
args = parser.parse_args()
if not args.config:
print('Available test configurations:')
for d in os.listdir(CONFIG_DIR):
print(' ' + d)
sys.exit(1)
configdir = os.path.join(CONFIG_DIR, args.config)
title = ' Configuration: %s ' % args.config
print('=' * len(title))
print(title)
print('=' * len(title))
print_file('Main Configuration',
os.path.join(configdir, 'main.yaml'))
gitroot = os.path.join(configdir, 'git')
for gitrepo in os.listdir(gitroot):
reporoot = os.path.join(gitroot, gitrepo)
print('')
print('=== Git repo: %s ===' % gitrepo)
filenames = os.listdir(reporoot)
for fn in filenames:
if fn in ['zuul.yaml', '.zuul.yaml']:
print_file('File: ' + os.path.join(gitrepo, fn),
os.path.join(reporoot, fn))
for subdir in ['.zuul.d', 'zuul.d']:
zuuld = os.path.join(reporoot, subdir)
if not os.path.exists(zuuld):
continue
filenames = os.listdir(zuuld)
for fn in filenames:
print_file('File: ' + os.path.join(gitrepo, subdir, fn),
os.path.join(zuuld, fn))
if __name__ == '__main__':
main()
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/print_layout.py
|
print_layout.py
|
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import sys
import os
from zuul.lib import encryption
FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
'fixtures')
def main():
private_key_file = os.path.join(FIXTURE_DIR, 'private.pem')
with open(private_key_file, "rb") as f:
private_key, public_key = \
encryption.deserialize_rsa_keypair(f.read())
plaintext = sys.argv[1].encode('utf-8')
ciphertext = encryption.encrypt_pkcs1_oaep(plaintext, public_key)
print(base64.b64encode(ciphertext).decode('utf-8'))
if __name__ == '__main__':
main()
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/encrypt_secret.py
|
encrypt_secret.py
|
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from zuul.lib import yamlutil as yaml
FIXTURE_DIR = os.path.join(os.path.dirname(__file__),
'fixtures')
CONFIG_DIR = os.path.join(FIXTURE_DIR, 'config')
def make_playbook(path):
d = os.path.dirname(path)
try:
os.makedirs(d)
except OSError:
pass
with open(path, 'w') as f:
f.write('- hosts: all\n')
f.write(' tasks: []\n')
def handle_repo(path):
print('Repo: %s' % path)
config_path = None
for fn in ['zuul.yaml', '.zuul.yaml']:
if os.path.exists(os.path.join(path, fn)):
config_path = os.path.join(path, fn)
break
try:
with open(config_path) as f:
config = yaml.safe_load(f)
except Exception:
print(" Has yaml errors")
return
for block in config:
if 'job' not in block:
continue
job = block['job']['name']
playbook = os.path.join(path, 'playbooks', job + '.yaml')
if not os.path.exists(playbook):
print(' Creating: %s' % job)
make_playbook(playbook)
def main():
repo_dirs = []
for root, dirs, files in os.walk(CONFIG_DIR):
if 'zuul.yaml' in files or '.zuul.yaml' in files:
repo_dirs.append(root)
for path in repo_dirs:
handle_repo(path)
if __name__ == '__main__':
main()
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/make_playbooks.py
|
make_playbooks.py
|
# Copyright 2019 BMW Group
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from graphene import Boolean, Field, Int, List, ObjectType, String
class FakePageInfo(ObjectType):
end_cursor = String()
has_next_page = Boolean()
def resolve_end_cursor(parent, info):
return 'testcursor'
def resolve_has_next_page(parent, info):
return False
class FakeMatchingRef(ObjectType):
name = String()
def resolve_name(parent, info):
return parent
class FakeMatchingRefs(ObjectType):
nodes = List(FakeMatchingRef)
def resolve_nodes(parent, info):
# To simplify tests just return the pattern and a bogus ref that should
# not disturb zuul.
return [parent.pattern, 'bogus-ref']
class FakeBranchProtectionRule(ObjectType):
pattern = String()
requiredStatusCheckContexts = List(String)
requiresApprovingReviews = Boolean()
requiresCodeOwnerReviews = Boolean()
matchingRefs = Field(FakeMatchingRefs, first=Int())
def resolve_pattern(parent, info):
return parent.pattern
def resolve_requiredStatusCheckContexts(parent, info):
return parent.required_contexts
def resolve_requiresApprovingReviews(parent, info):
return parent.require_reviews
def resolve_requiresCodeOwnerReviews(parent, info):
return parent.require_codeowners_review
def resolve_matchingRefs(parent, info, first=None):
return parent
class FakeBranchProtectionRules(ObjectType):
nodes = List(FakeBranchProtectionRule)
def resolve_nodes(parent, info):
return parent.values()
class FakeActor(ObjectType):
login = String()
class FakeStatusContext(ObjectType):
state = String()
context = String()
creator = Field(FakeActor)
def resolve_state(parent, info):
state = parent.state.upper()
return state
def resolve_context(parent, info):
return parent.context
def resolve_creator(parent, info):
return parent.creator
class FakeStatus(ObjectType):
contexts = List(FakeStatusContext)
def resolve_contexts(parent, info):
return parent
class FakeCheckRun(ObjectType):
name = String()
conclusion = String()
def resolve_name(parent, info):
return parent.name
def resolve_conclusion(parent, info):
if parent.conclusion:
return parent.conclusion.upper()
return None
class FakeCheckRuns(ObjectType):
nodes = List(FakeCheckRun)
def resolve_nodes(parent, info):
return parent
class FakeApp(ObjectType):
slug = String()
name = String()
class FakeCheckSuite(ObjectType):
app = Field(FakeApp)
checkRuns = Field(FakeCheckRuns, first=Int())
def resolve_app(parent, info):
if not parent:
return None
return parent[0].app
def resolve_checkRuns(parent, info, first=None):
# We only want to return the latest result for a check run per app.
# Since the check runs are ordered from latest to oldest result we
# need to traverse the list in reverse order.
check_runs_by_name = {
"{}:{}".format(cr.app, cr.name): cr for cr in reversed(parent)
}
return check_runs_by_name.values()
class FakeCheckSuites(ObjectType):
nodes = List(FakeCheckSuite)
def resolve_nodes(parent, info):
# Note: we only use a single check suite in the tests so return a
# single item to keep it simple.
return [parent]
class FakeCommit(ObjectType):
class Meta:
        # The GraphQL type name defaults to the class name, but we require
        # 'Commit'.
name = 'Commit'
status = Field(FakeStatus)
checkSuites = Field(FakeCheckSuites, first=Int())
def resolve_status(parent, info):
seen = set()
result = []
for status in parent._statuses:
if status.context not in seen:
seen.add(status.context)
result.append(status)
# Github returns None if there are no results
return result or None
def resolve_checkSuites(parent, info, first=None):
# Tests only utilize one check suite so return all runs for that.
return parent._check_runs
class FakePullRequest(ObjectType):
isDraft = Boolean()
reviewDecision = String()
mergeable = String()
def resolve_isDraft(parent, info):
return parent.draft
def resolve_mergeable(parent, info):
return "MERGEABLE" if parent.mergeable else "CONFLICTING"
def resolve_reviewDecision(parent, info):
if hasattr(info.context, 'version') and info.context.version:
if info.context.version < (2, 21, 0):
raise Exception('Field unsupported')
# Check branch protection rules if reviews are required
org, project = parent.project.split('/')
repo = info.context._data.repos[(org, project)]
rule = repo._branch_protection_rules.get(parent.branch)
if not rule or not rule.require_reviews:
# Github returns None if there is no review required
return None
approvals = [r for r in parent.reviews
if r.data['state'] == 'APPROVED']
if approvals:
return 'APPROVED'
return 'REVIEW_REQUIRED'
class FakeRepository(ObjectType):
name = String()
branchProtectionRules = Field(FakeBranchProtectionRules, first=Int())
pullRequest = Field(FakePullRequest, number=Int(required=True))
object = Field(FakeCommit, expression=String(required=True))
def resolve_name(parent, info):
org, name = parent.name.split('/')
return name
def resolve_branchProtectionRules(parent, info, first):
return parent._branch_protection_rules
def resolve_pullRequest(parent, info, number):
return parent.data.pull_requests.get(number)
def resolve_object(parent, info, expression):
return parent._commits.get(expression)
class FakeGithubQuery(ObjectType):
repository = Field(FakeRepository, owner=String(required=True),
name=String(required=True))
def resolve_repository(root, info, owner, name):
return info.context._data.repos.get((owner, name))
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fake_graphql.py
|
fake_graphql.py
|
#!/usr/bin/env python
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import urllib
from collections import defaultdict
import datetime
import github3.exceptions
import re
import time
import graphene
from requests import HTTPError
from requests.structures import CaseInsensitiveDict
from tests.fake_graphql import FakeGithubQuery
from zuul.driver.github.githubconnection import utc
FAKE_BASE_URL = 'https://example.com/api/v3/'
class FakeUser(object):
def __init__(self, login):
self.login = login
self.name = login
        self.email = '%s@example.com' % login
self.html_url = 'https://example.com/%s' % login
class FakeBranch(object):
def __init__(self, fake_repo, branch='master', protected=False):
self.name = branch
self._fake_repo = fake_repo
@property
def protected(self):
return self.name in self._fake_repo._branch_protection_rules
def as_dict(self):
return {
'name': self.name,
'protected': self.protected
}
class FakeCreator:
def __init__(self, login):
self.login = login
class FakeStatus(object):
def __init__(self, state, url, description, context, user):
self.state = state
self.context = context
self.creator = FakeCreator(user)
self._url = url
self._description = description
def as_dict(self):
return {
'state': self.state,
'url': self._url,
'description': self._description,
'context': self.context,
'creator': {
'login': self.creator.login
}
}
class FakeApp:
def __init__(self, name, slug):
self.name = name
self.slug = slug
class FakeCheckRun(object):
def __init__(self, id, name, details_url, output, status, conclusion,
completed_at, external_id, actions, app):
if actions is None:
actions = []
self.id = id
self.name = name
self.details_url = details_url
self.output = output
self.conclusion = conclusion
self.completed_at = completed_at
self.external_id = external_id
self.actions = actions
self.app = FakeApp(name=app, slug=app)
# Github automatically sets the status to "completed" if a conclusion
# is provided.
if conclusion is not None:
self.status = "completed"
else:
self.status = status
def as_dict(self):
return {
'id': self.id,
"name": self.name,
"status": self.status,
"output": self.output,
"details_url": self.details_url,
"conclusion": self.conclusion,
"completed_at": self.completed_at,
"external_id": self.external_id,
"actions": self.actions,
"app": {
"slug": self.app.slug,
"name": self.app.name,
},
}
def update(self, conclusion, completed_at, output, details_url,
external_id, actions):
self.conclusion = conclusion
self.completed_at = completed_at
self.output = output
self.details_url = details_url
self.external_id = external_id
self.actions = actions
# As we are only calling the update method when a build is completed,
# we can always set the status to "completed".
self.status = "completed"
class FakeGHReview(object):
def __init__(self, data):
self.data = data
def as_dict(self):
return self.data
class FakeCombinedStatus(object):
def __init__(self, sha, statuses):
self.sha = sha
self.statuses = statuses
class FakeCommit(object):
def __init__(self, sha):
self._statuses = []
self.sha = sha
self._check_runs = []
def set_status(self, state, url, description, context, user):
status = FakeStatus(
state, url, description, context, user)
# always insert a status to the front of the list, to represent
# the last status provided for a commit.
self._statuses.insert(0, status)
def set_check_run(self, id, name, details_url, output, status, conclusion,
completed_at, external_id, actions, app):
check_run = FakeCheckRun(
id,
name,
details_url,
output,
status,
conclusion,
completed_at,
external_id,
actions,
app,
)
# Always insert a check_run to the front of the list to represent the
# last check_run provided for a commit.
self._check_runs.insert(0, check_run)
return check_run
def get_url(self, path, params=None):
if path == 'statuses':
statuses = [s.as_dict() for s in self._statuses]
return FakeResponse(statuses)
if path == "check-runs":
check_runs = [c.as_dict() for c in self._check_runs]
resp = {"total_count": len(check_runs), "check_runs": check_runs}
return FakeResponse(resp)
def statuses(self):
return self._statuses
def check_runs(self):
return self._check_runs
def status(self):
'''
        Returns the combined status which only contains the latest statuses of
the commit together with some other information that we don't need
here.
'''
latest_statuses_by_context = {}
for status in self._statuses:
if status.context not in latest_statuses_by_context:
latest_statuses_by_context[status.context] = status
combined_statuses = latest_statuses_by_context.values()
return FakeCombinedStatus(self.sha, combined_statuses)
class FakeRepository(object):
def __init__(self, name, data):
self._api = FAKE_BASE_URL
self._branches = [FakeBranch(self)]
self._commits = {}
self.data = data
self.name = name
self.check_run_counter = 0
# Simple dictionary to store permission values per feature (e.g.
# checks, Repository contents, Pull requests, Commit statuses, ...).
        # Could be used to just enable/disable a permission (True, False) or
# provide more specific values like "read" or "read&write". The mocked
# functionality in the FakeRepository class should then check for this
# value and raise an appropriate exception like a production Github
# would do in case the permission is not sufficient or missing at all.
self._permissions = {}
# List of branch protection rules
self._branch_protection_rules = defaultdict(FakeBranchProtectionRule)
self._repodata = {
'allow_merge_commit': True,
'allow_squash_merge': True,
'allow_rebase_merge': True,
}
        # Number of subsequent commit requests to fail with a 404.
self.fail_not_found = 0
def branches(self, protected=False):
if protected:
            # return only the protected branches
return [b for b in self._branches if b.protected]
return self._branches
def _set_branch_protection(self, branch_name, protected=True,
contexts=None, require_review=False):
if not protected:
if branch_name in self._branch_protection_rules:
del self._branch_protection_rules[branch_name]
return
rule = self._branch_protection_rules[branch_name]
rule.pattern = branch_name
rule.required_contexts = contexts or []
rule.require_reviews = require_review
def _set_permission(self, key, value):
# NOTE (felix): Currently, this is only used to mock a repo with
# missing checks API permissions. But we could also use it to test
# arbitrary permission values like missing write, but only read
# permissions for a specific functionality.
self._permissions[key] = value
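    # Illustrative usage sketch (project name is hypothetical): a test can
    # drop the checks API permission on a repo to exercise the 403 paths in
    # create_check_run and the check-runs session endpoints.
    #
    #   repo = fake_github_client.repo_from_project('org/project')
    #   repo._set_permission('checks', False)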
def _build_url(self, *args, **kwargs):
path_args = ['repos', self.name]
path_args.extend(args)
fakepath = '/'.join(path_args)
return FAKE_BASE_URL + fakepath
def _get(self, url, headers=None):
client = FakeGithubClient(data=self.data)
return client.session.get(url, headers)
def _create_branch(self, branch):
self._branches.append((FakeBranch(self, branch=branch)))
def _delete_branch(self, branch_name):
self._branches = [b for b in self._branches if b.name != branch_name]
def create_status(self, sha, state, url, description, context,
user='zuul'):
# Since we're bypassing github API, which would require a user, we
        # default the user to 'zuul' here.
commit = self._commits.get(sha, None)
if commit is None:
commit = FakeCommit(sha)
self._commits[sha] = commit
commit.set_status(state, url, description, context, user)
def create_check_run(self, head_sha, name, details_url=None, output=None,
status=None, conclusion=None, completed_at=None,
external_id=None, actions=None, app="zuul"):
# Raise the appropriate github3 exception in case we don't have
# permission to access the checks API
if self._permissions.get("checks") is False:
# To create a proper github3 exception, we need to mock a response
# object
raise github3.exceptions.ForbiddenError(
FakeResponse("Resource not accessible by integration", 403)
)
commit = self._commits.get(head_sha, None)
if commit is None:
commit = FakeCommit(head_sha)
self._commits[head_sha] = commit
self.check_run_counter += 1
commit.set_check_run(
str(self.check_run_counter),
name,
details_url,
output,
status,
conclusion,
completed_at,
external_id,
actions,
app,
)
def commit(self, sha):
if self.fail_not_found > 0:
self.fail_not_found -= 1
            resp = FakeResponse('Not found', 404)
raise github3.exceptions.NotFoundError(resp)
commit = self._commits.get(sha, None)
if commit is None:
commit = FakeCommit(sha)
self._commits[sha] = commit
return commit
def get_url(self, path, params=None):
if '/' in path:
entity, request = path.split('/', 1)
else:
entity = path
request = None
if entity == 'branches':
return self.get_url_branches(request, params=params)
if entity == 'collaborators':
return self.get_url_collaborators(request)
if entity == 'commits':
return self.get_url_commits(request, params=params)
if entity == '':
return self.get_url_repo()
else:
return None
def get_url_branches(self, path, params=None):
if path is None:
# request wants a branch list
return self.get_url_branch_list(params)
elements = path.split('/')
entity = elements[-1]
if entity == 'protection':
branch = '/'.join(elements[0:-1])
return self.get_url_protection(branch)
else:
# fall back to treat all elements as branch
branch = '/'.join(elements)
return self.get_url_branch(branch)
def get_url_commits(self, path, params=None):
if '/' in path:
sha, request = path.split('/', 1)
else:
sha = path
request = None
commit = self._commits.get(sha)
        # Commits are created lazily, so check if there is a PR with the correct
# head sha.
if commit is None:
pull_requests = [pr for pr in self.data.pull_requests.values()
if pr.head_sha == sha]
if pull_requests:
commit = FakeCommit(sha)
self._commits[sha] = commit
if not commit:
return FakeResponse({}, 404)
return commit.get_url(request, params=params)
def get_url_branch_list(self, params):
if params.get('protected') == 1:
exclude_unprotected = True
else:
exclude_unprotected = False
branches = [x.as_dict() for x in self.branches(exclude_unprotected)]
return FakeResponse(branches, 200)
def get_url_branch(self, branch_name):
for branch in self._branches:
if branch.name == branch_name:
return FakeResponse(branch.as_dict())
return FakeResponse(None, 404)
def get_url_collaborators(self, path):
login, entity = path.split('/')
if entity == 'permission':
owner, proj = self.name.split('/')
permission = None
for pr in self.data.pull_requests.values():
pr_owner, pr_project = pr.project.split('/')
if (pr_owner == owner and proj == pr_project):
if login in pr.admins:
permission = 'admin'
break
elif login in pr.writers:
permission = 'write'
break
else:
permission = 'read'
data = {
'permission': permission,
}
return FakeResponse(data)
else:
return None
def get_url_protection(self, branch):
rule = self._branch_protection_rules.get(branch)
if not rule:
# Note that GitHub returns 404 if branch protection is off so do
# the same here as well
return FakeResponse({}, 404)
data = {
'required_status_checks': {
'contexts': rule.required_contexts
}
}
return FakeResponse(data)
def get_url_repo(self):
return FakeResponse(self._repodata)
def pull_requests(self, state=None, sort=None, direction=None):
# sort and direction are unused currently, but present to match
# real world call signatures.
pulls = []
for pull in self.data.pull_requests.values():
if pull.project != self.name:
continue
if state and pull.state != state:
continue
pulls.append(FakePull(pull))
return pulls
class FakeIssue(object):
def __init__(self, fake_pull_request):
self._fake_pull_request = fake_pull_request
def pull_request(self):
return FakePull(self._fake_pull_request)
@property
def number(self):
return self._fake_pull_request.number
@functools.total_ordering
class FakeFile(object):
def __init__(self, filename, previous_filename=None):
self.filename = filename
if previous_filename is not None:
self.previous_filename = previous_filename
def __eq__(self, other):
return self.filename == other.filename
def __lt__(self, other):
return self.filename < other.filename
__hash__ = object.__hash__
class FakePull(object):
def __init__(self, fake_pull_request):
self._fake_pull_request = fake_pull_request
def issue(self):
return FakeIssue(self._fake_pull_request)
def files(self):
# Github lists max. 300 files of a PR in alphabetical order
return sorted(self._fake_pull_request.files)[:300]
def reviews(self):
return self._fake_pull_request.reviews
def create_review(self, body, commit_id, event):
review = FakeGHReview({
'state': event,
'user': {
'login': 'fakezuul',
'email': '[email protected]',
},
'submitted_at': time.gmtime(),
})
self._fake_pull_request.reviews.append(review)
return review
@property
def head(self):
client = FakeGithubClient(
data=self._fake_pull_request.github.github_data)
repo = client.repo_from_project(self._fake_pull_request.project)
return repo.commit(self._fake_pull_request.head_sha)
def commits(self):
        # Since we don't know all commits of a PR, just return a list with
        # the head commit as the only entry.
return [self.head]
def as_dict(self):
pr = self._fake_pull_request
connection = pr.github
data = {
'number': pr.number,
'title': pr.subject,
'url': 'https://%s/api/v3/%s/pulls/%s' % (
connection.server, pr.project, pr.number
),
'html_url': 'https://%s/%s/pull/%s' % (
connection.server, pr.project, pr.number
),
'updated_at': pr.updated_at,
'base': {
'repo': {
'full_name': pr.project
},
'ref': pr.branch,
'sha': pr.base_sha,
},
'user': {
'login': 'octocat'
},
'draft': pr.draft,
'mergeable': pr.mergeable,
'state': pr.state,
'head': {
'sha': pr.head_sha,
'ref': pr.getPRReference(),
'repo': {
'full_name': pr.project
}
},
'merged': pr.is_merged,
'body': pr.body,
'body_text': pr.body_text,
'changed_files': len(pr.files),
'labels': [{'name': l} for l in pr.labels]
}
return data
class FakeIssueSearchResult(object):
def __init__(self, issue):
self.issue = issue
class FakeResponse(object):
def __init__(self, data, status_code=200, status_message='OK'):
self.status_code = status_code
self.status_message = status_message
self.data = data
self.links = {}
@property
def content(self):
# Building github3 exceptions requires a Response object with the
# content attribute set.
return self.data
def json(self):
return self.data
def raise_for_status(self):
if 400 <= self.status_code < 600:
if isinstance(self.data, str):
text = '{} {}'.format(self.status_code, self.data)
else:
text = '{} {}'.format(self.status_code, self.status_message)
raise HTTPError(text, response=self)
class FakeGithubSession(object):
def __init__(self, client):
self.client = client
self.headers = CaseInsensitiveDict()
self._base_url = None
self.schema = graphene.Schema(query=FakeGithubQuery)
# Imitate hooks dict. This will be unused and ignored in the tests.
self.hooks = {
'response': []
}
def build_url(self, *args):
fakepath = '/'.join(args)
return FAKE_BASE_URL + fakepath
def get(self, url, headers=None, params=None):
request = url
if request.startswith(FAKE_BASE_URL):
request = request[len(FAKE_BASE_URL):]
entity, request = request.split('/', 1)
if entity == 'repos':
return self.get_repo(request, params=params)
else:
# unknown entity to process
return None
def post(self, url, data=None, headers=None, params=None, json=None):
# Handle graphql
if json and json.get('query'):
query = json.get('query')
variables = json.get('variables')
result = self.schema.execute(
query, variables=variables, context=self.client)
if result.errors:
# Note that github really returns 200 and an errors field in
# case of an error.
return FakeResponse({'errors': result.errors}, 200)
return FakeResponse({'data': result.data}, 200)
# Handle creating comments
match = re.match(r'.+/repos/(.+)/issues/(\d+)/comments$', url)
if match:
project, pr_number = match.groups()
project = urllib.parse.unquote(project)
self.client._data.reports.append((project, pr_number, 'comment'))
pull_request = self.client._data.pull_requests[int(pr_number)]
pull_request.addComment(json['body'])
return FakeResponse(None, 200)
# Handle access token creation
if re.match(r'.*/app/installations/.*/access_tokens', url):
expiry = (datetime.datetime.now(utc) + datetime.timedelta(
minutes=60)).replace(microsecond=0).isoformat()
install_id = url.split('/')[-2]
data = {
'token': 'token-%s' % install_id,
'expires_at': expiry,
}
return FakeResponse(data, 201)
# Handle check run creation
match = re.match(r'.*/repos/(.*)/check-runs$', url)
if match:
if self.client._data.fail_check_run_creation:
return FakeResponse('Internal server error', 500)
org, reponame = match.groups()[0].split('/', 1)
repo = self.client._data.repos.get((org, reponame))
if repo._permissions.get("checks") is False:
# To create a proper github3 exception, we need to mock a
# response object
return FakeResponse(
"Resource not accessible by integration", 403)
head_sha = json.get('head_sha')
commit = repo._commits.get(head_sha, None)
if commit is None:
commit = FakeCommit(head_sha)
repo._commits[head_sha] = commit
repo.check_run_counter += 1
check_run = commit.set_check_run(
str(repo.check_run_counter),
json['name'],
json['details_url'],
json['output'],
json.get('status'),
json.get('conclusion'),
json.get('completed_at'),
json['external_id'],
json['actions'],
json.get('app', 'zuul'),
)
return FakeResponse(check_run.as_dict(), 201)
return FakeResponse(None, 404)
def put(self, url, data=None, headers=None, params=None, json=None):
# Handle pull request merge
match = re.match(r'.+/repos/(.+)/pulls/(\d+)/merge$', url)
if match:
project, pr_number = match.groups()
project = urllib.parse.unquote(project)
pr = self.client._data.pull_requests[int(pr_number)]
conn = pr.github
# record that this got reported
self.client._data.reports.append(
(pr.project, pr.number, 'merge', json["merge_method"]))
if conn.merge_failure:
raise Exception('Unknown merge failure')
if conn.merge_not_allowed_count > 0:
conn.merge_not_allowed_count -= 1
# GitHub returns 405 Method not allowed with more details in
# the body of the response.
data = {
'message': 'Merge not allowed because of fake reason',
}
return FakeResponse(data, 405, 'Method not allowed')
pr.setMerged(json.get("commit_message", ""))
return FakeResponse({"merged": True}, 200)
return FakeResponse(None, 404)
def patch(self, url, data=None, headers=None, params=None, json=None):
# Handle check run update
match = re.match(r'.*/repos/(.*)/check-runs/(.*)$', url)
if match:
org, reponame = match.groups()[0].split('/', 1)
check_run_id = match.groups()[1]
repo = self.client._data.repos.get((org, reponame))
# Find the specified check run
check_runs = [
check_run
for commit in repo._commits.values()
for check_run in commit._check_runs
if check_run.id == check_run_id
]
check_run = check_runs[0]
check_run.update(json['conclusion'],
json['completed_at'],
json['output'],
json['details_url'],
json['external_id'],
json['actions'])
return FakeResponse(check_run.as_dict(), 200)
def get_repo(self, request, params=None):
parts = request.split('/', 2)
if len(parts) == 2:
org, project = parts
request = ''
else:
org, project, request = parts
project_name = '{}/{}'.format(org, project)
repo = self.client.repo_from_project(project_name)
return repo.get_url(request, params=params)
def mount(self, prefix, adapter):
# Don't care in tests
pass
class FakeBranchProtectionRule:
def __init__(self):
self.pattern = None
self.required_contexts = []
self.require_reviews = False
self.require_codeowners_review = False
class FakeGithubData(object):
def __init__(self, pull_requests):
self.pull_requests = pull_requests
self.repos = {}
self.reports = []
self.fail_check_run_creation = False
def __repr__(self):
return ("pull_requests:%s repos:%s reports:%s "
"fail_check_run_creation:%s" % (
self.pull_requests, self.repos, self.reports,
self.fail_check_run_creation))
class FakeGithubClient(object):
def __init__(self, session=None, data=None):
self._data = data
self._inst_id = None
self.session = FakeGithubSession(self)
def setData(self, data):
self._data = data
def setInstId(self, inst_id):
self._inst_id = inst_id
def user(self, login):
return FakeUser(login)
def repository(self, owner, proj):
return self._data.repos.get((owner, proj), None)
def repo_from_project(self, project):
# This is a convenience method for the tests.
owner, proj = project.split('/')
return self.repository(owner, proj)
def addProject(self, project):
owner, proj = project.name.split('/')
self._data.repos[(owner, proj)] = FakeRepository(
project.name, self._data)
def addProjectByName(self, project_name):
owner, proj = project_name.split('/')
self._data.repos[(owner, proj)] = FakeRepository(
project_name, self._data)
def pull_request(self, owner, project, number):
fake_pr = self._data.pull_requests[int(number)]
repo = self.repository(owner, project)
# Ensure a commit for the head_sha exists so this can be resolved in
# graphql queries.
repo._commits.setdefault(
fake_pr.head_sha,
FakeCommit(fake_pr.head_sha)
)
return FakePull(fake_pr)
def search_issues(self, query):
def tokenize(s):
# Tokenize with handling for quoted substrings.
# Bit hacky and needs PDA, but our current inputs are
# constrained enough that this should work.
s = s[:-len(" type:pr is:open in:body")]
OR_split = [x.strip() for x in s.split('OR')]
tokens = [x.strip('"') for x in OR_split]
return tokens
def query_is_sha(s):
return re.match(r'[a-z0-9]{40}', s)
if query_is_sha(query):
            # Github returns all PRs that contain the sha in their history
result = []
for pr in self._data.pull_requests.values():
# Quick check if head sha matches
if pr.head_sha == query:
result.append(FakeIssueSearchResult(FakeIssue(pr)))
continue
# If head sha doesn't match it still could be in the pr history
repo = pr._getRepo()
commits = repo.iter_commits(
'%s...%s' % (pr.branch, pr.head_sha))
for commit in commits:
if commit.hexsha == query:
result.append(FakeIssueSearchResult(FakeIssue(pr)))
continue
return result
# Non-SHA queries are of the form:
#
# '"Depends-On: <url>" OR "Depends-On: <url>"
# OR ... type:pr is:open in:body'
#
        # For the tests it is currently enough to simply check for the
# existence of the Depends-On strings in the PR body.
tokens = tokenize(query)
terms = set(tokens)
results = []
for pr in self._data.pull_requests.values():
if not pr.body:
body = ""
else:
body = pr.body
for term in terms:
if term in body:
issue = FakeIssue(pr)
results.append(FakeIssueSearchResult(issue))
break
return iter(results)
class FakeGithubEnterpriseClient(FakeGithubClient):
version = '2.21.0'
def __init__(self, url, session=None, verify=True):
super().__init__(session=session)
def meta(self):
data = {
'installed_version': self.version,
}
return data
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fakegithub.py
|
fakegithub.py
|
#!/bin/sh
echo $*
case "$1" in
fetch)
if [ -f ./stamp1 ]; then
touch ./stamp2
exit 0
fi
touch ./stamp1
exit 1
;;
version)
echo "git version 1.0.0"
exit 0
;;
esac
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/git_fetch_error.sh
|
git_fetch_error.sh
|
#!/bin/sh
echo $*
case "$1" in
clone)
dest=$3
mkdir -p $dest/.git
;;
version)
echo "git version 1.0.0"
exit 0
;;
esac
sleep 30
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/fake_git.sh
|
fake_git.sh
|
#!/bin/sh
echo "Forwarding from 127.0.0.1:1234 -> 19885"
while true; do
sleep 5
done
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/fake_kubectl.sh
|
fake_kubectl.sh
|
#!/bin/sh
echo $*
case "$1" in
fetch)
echo "Fake git error"
exit 1
;;
version)
echo "git version 1.0.0"
exit 0
;;
esac
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/git_fail.sh
|
git_fail.sh
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/inventory/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/inventory/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/inventory/git/org_project3/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/push-reqs/git/org_project1/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/push-reqs/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/tenant-implied-branch-matchers/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/openstack/git/openstack_keystone/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/openstack/git/openstack_nova/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/broken/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/broken/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/two-tenant/git/org_project1/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/two-tenant/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/in-repo-join/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/dependency-graph/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/multi-github/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/branch-negative/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/older-than/git/org_project1/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/older-than/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/trusted-check/git/org_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/trusted-check/git/gh_project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/newer-than/git/org_project1/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/newer-than/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/state/git/wip-project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/state/git/current-project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/state/git/open-project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/state/git/status-project/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/vote1/git/org_project1/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/vote1/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/reject-username/git/org_project1/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/reject-username/git/org_project2/README
|
README
|
test
|
zuul
|
/zuul-9.1.0.tar.gz/zuul-9.1.0/tests/fixtures/config/requirements/email/git/org_project1/README
|
README
|