A project is a collection of resources necessary to define and run a cluster within CycleCloud. A project consists of three parts:
- Cluster Template
- cluster-init
- Custom Chef
Create a New Project
To create a new project, use the CLI command cyclecloud project init myproject, where myproject is the name of the project you wish to create. A directory tree will be created with skeleton files you will amend to include your own information.
Directory Structure
The following directories will be created by the project command:
The templates directory will hold your cluster templates, while specs will contain the specifications defining your project. Each spec has two subdirectories: cluster-init and custom chef. cluster-init contains directories which have special meaning, such as the scripts directory (scripts that are executed in lexicographical order on the node), files (raw data files that will be put on the node), and tests (tests to be run when a cluster is started in testing mode).
The custom chef subdirectory has three directories: site-cookbooks (for cookbook definitions), data_bags (databag definitions), and roles (chef role definition files).
Project Setup
Specs
When creating a new project, a single default spec is defined. You can add additional specs to your project via the cyclecloud project add_spec command.
Uploading Files
You can upload the contents of your project to any locker defined in your CycleCloud install via the command cyclecloud project upload <locker>, where <locker> is the name of a cloud storage locker in your CycleCloud install. This locker will be set as the default target. Alternatively, you can see what lockers are available to you with the command cyclecloud locker list. Details about a specific locker can be viewed with cyclecloud locker show <locker>.
If you add more than one locker, you can set your default with cyclecloud project default_target <locker>, then simply run cyclecloud project upload. You can also set a global default locker that can be shared by projects with the command cyclecloud project default locker <locker> -global.
Note: Default lockers will be stored in the cyclecloud config file (usually located in ~/.cycle/config.ini), not in the project.ini. This is done to allow project.ini to be version controlled.
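Taken together, the commands above form a short workflow. A minimal sketch (the project, spec, and locker names are placeholders):
# create the skeleton project, then add a second spec
cyclecloud project init myproject
cd myproject
cyclecloud project add_spec otherspec
# list available lockers, upload to one, and make it the default target
cyclecloud locker list
cyclecloud project upload my-locker
cyclecloud project default_target my-locker
cyclecloud project upload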
Uploading your project contents will zip the chef directories and sync both chef and cluster init to your target locker. These will be stored at:
<locker>/projects/<project>/<version>/<spec_name>/cluster-init
<locker>/projects/<project>/<version>/<spec_name>/chef
Versioning
By default, all projects have a version of 1.0.0. You can set a custom version as you develop and deploy projects by setting version=x.y.z in the project.ini file.
For example, if “locker_url” was s3://com.cyclecomputing.O66.projects, project was named “Order66”, version was “1.6.9”, and the spec is “default”, your url would be:
s3://com.cyclecomputing.O66.projects/projects/Order66/1.6.9/default/cluster-init
s3://com.cyclecomputing.O66.projects/projects/Order66/1.6.9/default/chef
Specify Project within a Cluster Template
Project syntax allows you to specify multiple specs on your nodes. To define a project, use the following:
[[[cluster-init myspec]]]
Project = myproject # inferred from name
Version = x.y.z
Spec = myspec # (optional, inferred from section definition)
Locker = default # (optional, will use default locker for node)
Note: The name specified after ‘spec’ can be anything, but can and should be used as a shortcut to define some common properties.
You can also apply multiple specs to a given node as follows:
[[node master]]
[[[cluster-init myspec]]]
Project = myproject
Version = x.y.z
Spec = myspec # (optional, inferred from section definition)
Locker = default # (optional, will use default locker for node)
[[[cluster-init otherspec]]]
Project = otherproject
Version = a.b.c
Spec = otherspec # (optional, inferred from section definition)
By separating the project name, spec name, and version with colons, CycleCloud can parse those values into the appropriate Project/Version/Spec settings automatically:
[[node master]]
[[[cluster-init myproject:myspec:x.y.z]]]
[[[cluster-init otherproject:otherspec:a.b.c]]]
Specs can also be inherited between nodes. For example, you can share a common spec between all nodes, then run a custom spec on the master node:
[[node defaults]]
[[[cluster-init my-project:common:1.0.0]]]
Order = 2 # optional
[[node master]]
[[[cluster-init my-project:master:1.0.0]]]
Order = 1 # optional
[[nodearray execute]]
[[[cluster-init my-project:execute:1.0.0]]]
Order = 1 # optional
This would apply both the common and master specs to the master node, while only applying the common and execute specs to the execute nodearray.
By default, the specs will be run in the order they are shown in the template, running inherited specs first. Order is an optional integer set to a default of 1000, and can be used to define the order of the specs.
If only one name is specified in the [[[cluster-init]]] definition, it will be assumed to be the spec name. For example:
[[[cluster-init myspec]]]
Project = myproject
Version = 1.0.0
is a valid spec setup in which Spec=myspec is implied by the name.
Warning: If you are using Projects, you cannot use the pre-v6.5.4 ClusterInit mechanism. They are mutually exclusive.
File Locations
The zipped chef files will be downloaded during the bootstrapping phase of node startup. They are downloaded to $JETPACK_HOME/system/chef/tarballs and unzipped to $JETPACK_HOME/system/chef/chef-repo/, and used when converging the node.
Note: To run custom cookbooks, you MUST specify them in the run_list for the node.
The cluster-init files will be downloaded to /mnt/cluster-init/<project>/<spec>/. For ‘my-project’ and ‘my-spec’, you will see your scripts, files, and tests located in /mnt/cluster-init/my-project/my-spec.
Log Files
Log files generated when running cluster-init are located in $JETPACK_HOME/logs/cluster-init/<project>/<spec>.
Run Files
When a cluster-init script is run successfully, a file is placed in /mnt/cluster-init/.run/<project>/<spec> to ensure it isn’t run again on a subsequent converge. If you want to run the script again, delete the appropriate file in this directory.
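For example, assuming a spec named my-spec in my-project and a hypothetical script called 010_install.sh (the run-file name normally mirrors the script name, but check the .run directory to confirm), you could force it to run again like this:
# remove the run marker so the script is no longer treated as already run
rm /mnt/cluster-init/.run/my-project/my-spec/010_install.sh
# then re-run the cluster-init scripts
jetpack converge --cluster-init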
Script Directories
When CycleCloud executes scripts in the scripts directory, it sets environment variables containing the name and path of the project and spec directories:
CYCLECLOUD_PROJECT_NAME
CYCLECLOUD_PROJECT_PATH
CYCLECLOUD_SPEC_NAME
CYCLECLOUD_SPEC_PATH
On Linux, a project named "test-project" with a spec of "default" would have paths as follows:
CYCLECLOUD_PROJECT_NAME = test-project
CYCLECLOUD_PROJECT_PATH = /mnt/cluster-init/test-project
CYCLECLOUD_SPEC_NAME = default
CYCLECLOUD_SPEC_PATH = /mnt/cluster-init/test-project/default
Run Scripts Only
To run ONLY the cluster-init scripts:
jetpack converge --cluster-init
Output from the command goes both to STDOUT and to jetpack.log. Each script will also have its output logged to:
$JETPACK_HOME/logs/cluster-init/<project>/<spec>/scripts/<script.sh>.out
Custom chef and Composable Specs
Each spec has a chef directory in it. Before a converge, each spec will be untarred and extracted into the local chef-repo, replacing any existing cookbooks, roles, and data bags with the same name(s). This is done in the order the specs are defined, so in the case of a naming collision, the last defined spec will always win.
Excel web service
You can export a list to an Excel file using an HTTP GET request.
Excel web service request URLs: Use one of several URL parameters to make a request to the Excel web service.
Excel web service parameters: Use additional URL parameters to customize and filter the response.
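As a rough illustration (the instance name and table are hypothetical, and the exact parameters depend on your instance's configuration), a GET request like the following exports the incident list to Excel:
https://<your-instance>.service-now.com/incident_list.do?EXCEL&sysparm_query=active=true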
The Suspect Queries tab displays information for all logged queries that are designated as suspect. Suspect queries are those whose values surpass thresholds defined for the Query Log data collector in the Monitored Systems portlet. Thresholds can be set for the following metrics:
- CPU Skew
- I/O Skew
- Product Join Indicator
- Unnecessary I/O
Any queries that result in more than one AMP CPU second consumed and that exceed the defined thresholds are listed in the Suspect Queries tab, and the value that exceeds the threshold is displayed in red.
Change history¶
This document contains change notes for bugfix releases in the 4.x series, please see What’s new in Celery 4.3 (rhubarb) for an overview of what’s new in Celery 4.3.
4.3.0¶
Added support for broadcasting using a regular expression pattern or a glob pattern to multiple Pidboxes.
This allows you to inspect or ping multiple workers at once.
Contributed by Dmitry Malinovsky & Jason Held
Added support for PEP 420 namespace packages.
This allows you to load tasks from namespace packages.
Contributed by Colin Watson
Added
acks_on_failure_or_timeoutas a setting instead of a task only option.
This was missing from the original PR but now added for completeness.
Contributed by Omer Katz
Added the
task_receivedsignal.
Contributed by Omer Katz
Fixed a crash of our CLI that occurred for everyone using Python < 3.6.
The crash was introduced in acd6025 by using the
ModuleNotFoundErrorexception which was introduced in Python 3.6.
Contributed by Omer Katz
Fixed a crash that occurred when using the Redis result backend while the
result_expiresis set to None.
Contributed by Toni Ruža & Omer Katz
Added support the DNS seedlist connection format for the MongoDB result backend.
This requires the dnspython package which will be installed by default when installing the dependencies for the MongoDB result backend.
Contributed by George Psarakis
Bump the minimum eventlet version to 0.24.1.
Contributed by George Psarakis
Replace the msgpack-python package with msgpack.
We’re no longer using the deprecated package. See our important notes for this release for further details on how to upgrade.
Contributed by Daniel Hahler
Allow scheduling error handlers which are not registered tasks in the current worker.
These kind of error handlers are now possible:
from celery import Signature

Signature(
    'bar',
    args=['foo'],
    link_error=Signature('msg.err', queue='msg')
).apply_async()
Additional fixes and enhancements to the SSL support of the Redis broker and result backend.
Contributed by Jeremy Cohen
Code Cleanups, Test Coverage & CI Improvements by:
- Omer Katz
- Florian Chardin
Documentation Fixes by:
- Omer Katz
- Samuel Huang
- Amir Hossein Saeid Mehr
- Dmytro Litvinov
4.3.0 RC2¶
Filesystem Backend: Added meaningful error messages for filesystem backend.
Contributed by Lars Rinn
New Result Backend: Added the ArangoDB backend.
Contributed by Dilip Vamsi Moturi
Django: Prepend current working directory instead of appending so that the project directory will have precedence over system modules as expected.
Contributed by Antonin Delpeuch
Bump minimum py-redis version to 3.2.0.
Due to multiple bugs in earlier versions of py-redis that were causing issues for Celery, we were forced to bump the minimum required version to 3.2.0.
Contributed by Omer Katz
Dependencies: Bump minimum required version of Kombu to 4.4
Contributed by Omer Katz
4.3.0 RC1¶
Canvas:
celery.chain.apply()does not ignore keyword arguments anymore when applying the chain.
Contributed by Korijn van Golen
Result Set: Don’t attempt to cache results in a
celery.result.ResultSet.
During a join, the results cache was populated using
celery.result.ResultSet.get(), if one of the results contains an exception, joining unexpectedly failed.
The results cache is now removed.
Contributed by Derek Harland
Application:
celery.Celery.autodiscover_tasks()now attempts to import the package itself when the related_name keyword argument is None.
Contributed by Alex Ioannidis
Windows Support: On Windows 10, stale PID files prevented celery beat to run. We now remove them when a
SystemExitis raised.
Contributed by :github_user:`na387`
Task: Added the new
task_acks_on_failure_or_timeoutsetting.
Acknowledging SQS messages on failure or timing out makes it impossible to use dead letter queues.
We introduce the new option acks_on_failure_or_timeout to ensure we can fall back entirely on the native SQS message lifecycle, using redeliveries for retries (in case of slow processing or failure) and transitioning to the dead letter queue after a defined number of attempts.
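As a hedged sketch of how this might be wired up for an SQS-backed app that relies on a dead-letter queue (the app name and broker URL below are placeholders):
from celery import Celery

app = Celery('tasks', broker='sqs://')  # placeholder broker URL
app.conf.task_acks_late = True
# leave failed or timed-out messages unacknowledged so SQS redelivery
# and dead-letter queue policies take over
app.conf.task_acks_on_failure_or_timeout = False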
Contributed by Mario Kostelac
RabbitMQ Broker: Adjust HA headers to work on RabbitMQ 3.x.
This change also means we’re ending official support for RabbitMQ 2.x.
Contributed by Asif Saif Uddin
Command Line: Improve celery update error handling.
Contributed by Federico Bond
Canvas: Support chords with
task_always_eagerset to True.
Contributed by Axel Haustant
Result Backend: Optionally store task properties in result backend.
Setting the
result_extendedconfiguration option to True enables storing additional task properties in the result backend.
Contributed by John Arnold
Couchbase Result Backend: Allow the Couchbase result backend to automatically detect the serialization format.
Contributed by Douglas Rohde
New Result Backend: Added the Azure Block Blob Storage result backend.
The backend is implemented on top of the azure-storage library which uses Azure Blob Storage for a scalable low-cost PaaS backend.
The backend was load tested via a simple nginx/gunicorn/sanic app hosted on a DS4 virtual machine (4 vCores, 16 GB RAM) and was able to handle 600+ concurrent users at ~170 RPS.
The commit also contains a live end-to-end test to facilitate verification of the backend functionality. The test is activated by setting the AZUREBLOCKBLOB_URL environment variable to azureblockblob://{ConnectionString} where the value for ConnectionString can be found in the Access Keys pane of a Storage Account resources in the Azure Portal.
Contributed by Clemens Wolff
Task:
celery.app.task.update_state()now accepts keyword arguments.
This allows passing extra fields to the result backend. These fields are unused by default but custom result backends can use them to determine how to store results.
Contributed by Christopher Dignam
Gracefully handle consumer
kombu.exceptions.DecodeError.
When using the v2 protocol the worker no longer crashes when the consumer encounters an error while decoding a message.
Contributed by Steven Sklar
Deployment: Fix init.d service stop.
Contributed by Marcus McHale
Django: Drop support for Django < 1.11.
Contributed by Asif Saif Uddin
Django: Remove old djcelery loader.
Contributed by Asif Saif Uddin
Result Backend:
celery.worker.request.Requestnow passes
celery.app.task.Contextto the backend’s store_result functions.
Since the class currently passes self to these functions, revoking a task resulted in corrupted task result data when django-celery-results was used.
Contributed by Kiyohiro Yamaguchi
Worker: Retry if the heartbeat connection dies.
Previously, we keep trying to write to the broken connection. This results in a memory leak because the event dispatcher will keep appending the message to the outbound buffer.
Contributed by Raf Geens
Celery Beat: Handle microseconds when scheduling.
Contributed by K Davis
Asynpool: Fixed deadlock when closing socket.
Upon attempting to close a socket,
celery.concurrency.asynpool.AsynPoolonly removed the queue writer from the hub but did not remove the reader. This led to a deadlock on the file descriptor and eventually the worker stopped accepting new tasks.
We now close both the reader and the writer file descriptors in a single loop iteration which prevents the deadlock.
Contributed by Joshua Engelman
Celery Beat: Correctly consider timezone when calculating timestamp.
Contributed by :github_user:`yywing`
Celery Beat:
celery.beat.Scheduler.schedules_equal()can now handle either arguments being a None value.
Contributed by :github_user:`ratson`
Documentation/Sphinx: Fixed Sphinx support for shared_task decorated functions.
Contributed by Jon Banafato
New Result Backend: Added the CosmosDB result backend.
This change adds a new results backend. The backend is implemented on top of the pydocumentdb library which uses Azure CosmosDB for a scalable, globally replicated, high-performance, low-latency and high-throughput PaaS backend.
Contributed by Clemens Wolff
Application: Added configuration options to allow separate multiple apps to run on a single RabbitMQ vhost.
The newly added
event_exchangeand
control_exchangeconfiguration options allow users to use separate Pidbox exchange and a separate events exchange.
This allows different Celery applications to run separately on the same vhost.
Contributed by Artem Vasilyev
Result Backend: Forget parent result metadata when forgetting a result.
Contributed by :github_user:`tothegump`
Task Store task arguments inside
celery.exceptions.MaxRetriesExceededError.
Contributed by Anthony Ruhier
Result Backend: Added the
result_accept_contentsetting.
This setting allows configuring which content types the result backend will accept, separately from the broker-facing accept_content setting.
Contributed by Benjamin Pereto
Canvas: Fixed error callback processing for class based tasks.
Contributed by Victor Mireyev
New Result Backend: Added the S3 result backend.
Contributed by Florian Chardin
Task: Added support for Cythonized Celery tasks.
Contributed by Andrey Skabelin
Riak Result Backend: Warn Riak backend users for possible Python 3.7 incompatibilities.
Contributed by George Psarakis
Python Runtime: Added Python 3.7 support.
Contributed by Omer Katz & Asif Saif Uddin
Auth Serializer: Revamped the auth serializer.
The auth serializer received a complete overhaul. It was previously horribly broken.
We now depend on cryptography instead of pyOpenSSL for this serializer.
Contributed by Benjamin Pereto
Command Line: celery report now reports kernel version along with other platform details.
Contributed by Omer Katz
Canvas: Fixed chords with chains which include sub chords in a group.
Celery now correctly executes the last task in these types of canvases:
c = chord(
    group([
        chain(
            dummy.si(),
            chord(
                group([dummy.si(), dummy.si()]),
                dummy.si(),
            ),
        ),
        chain(
            dummy.si(),
            chord(
                group([dummy.si(), dummy.si()]),
                dummy.si(),
            ),
        ),
    ]),
    dummy.si()
)
c.delay().get()
Contributed by Maximilien Cuony
Canvas: Complex canvases with error callbacks no longer raises an
AttributeError.
Very complex canvases such as this no longer raise an
AttributeErrorwhich prevents constructing them.
We do not know why this bug occurs yet.
Contributed by Manuel Vázquez Acosta
Command Line: Added proper error messages in cases where app cannot be loaded.
Previously, celery crashed with an exception.
We now print a proper error message.
Contributed by Omer Katz
Task: Added the
task_default_prioritysetting.
You can now set the default priority of a task using the
task_default_prioritysetting. The setting’s value will be used if no priority is provided for a specific task.
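For illustration only (the app name, broker URL, and priority value are made up), the setting is applied app-wide like this:
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')  # placeholder broker URL
# used whenever a task or apply_async() call does not set its own priority
app.conf.task_default_priority = 5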
Contributed by :github_user:`madprogrammer`
Dependencies: Bump minimum required version of Kombu to 4.3 and Billiard to 3.6.
Contributed by Asif Saif Uddin
Result Backend: Fix memory leak.
We reintroduced weak references to bound methods for AsyncResult callback promises, after adding full weakref support for Python 2 in vine. More details can be found in celery/celery#4839.
Contributed by George Psarakis and :github_user:`monsterxx03`.
Task Execution: Fixed roundtrip serialization for eager tasks.
When doing the roundtrip serialization for eager tasks, the task serializer will always be JSON unless the serializer argument is present in the call to
celery.app.task.Task.apply_async(). If the serializer argument is present but is ‘pickle’, an exception will be raised as pickle-serialized objects cannot be deserialized without specifying to serialization.loads what content types should be accepted. The Producer’s serializer seems to be set to None, causing the default to JSON serialization.
We now continue to use (in order) the serializer argument to
celery.app.task.Task.apply_async(), if present, or the Producer’s serializer if not None. If the Producer’s serializer is None, it will use the Celery app’s task_serializer configuration entry as the serializer.
Contributed by Brett Jackson
Redis Result Backend: The
celery.backends.redis.ResultConsumerclass no longer assumes
celery.backends.redis.ResultConsumer.start()to be called before
celery.backends.redis.ResultConsumer.drain_events().
This fixes a race condition when using the Gevent workers pool.
Contributed by Noam Kush
Task: Added the
task_inherit_parent_prioritysetting.
Setting the task_inherit_parent_priority configuration option to True makes a task inherit the priority of its parent task (for example, tasks called within a chain).
Contributed by :github_user:`madprogrammer`
Canvas: Added the
result_chord_join_timeoutsetting.
Previously,
celery.result.GroupResult.join()had a fixed timeout of 3 seconds.
The
result_chord_join_timeoutsetting now allows you to change it.
Contributed by :github_user:`srafehi`
Code Cleanups, Test Coverage & CI Improvements by:
- Jon Dufresne
- Asif Saif Uddin
- Omer Katz
- Brett Jackson
- Bruno Alla
- :github_user:`tothegump`
- Bojan Jovanovic
- Florian Chardin
- :github_user:`walterqian`
- Fabian Becker
- Lars Rinn
- :github_user:`madprogrammer`
- Ciaran Courtney
Documentation Fixes by:
- Lewis M. Kabui
- Dash Winterson
- Shanavas M
- Brett Randall
- Przemysław Suliga
- Joshua Schmid
- Asif Saif Uddin
- Xiaodong
- Vikas Prasad
- Jamie Alessio
- Lars Kruse
- Guilherme Caminha
- Andrea Rabbaglietti
- Itay Bittan
- Noah Hall
- Peng Weikang
- Mariatta Wijaya
- Ed Morley
- Paweł Adamczak
- :github_user:`CoffeeExpress`
- :github_user:`aviadatsnyk`
- Brian Schrader
- Josue Balandrano Coronel
- Tom Clancy
- Sebastian Wojciechowski
- Meysam Azad
- Willem Thiart
- Charles Chan
- Omer Katz
- Milind Shakya | http://docs.celeryproject.org/en/master/changelog.html | 2019-05-19T09:27:12 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.celeryproject.org |
Exporting/Importing visualizations¶
You might be interested in exporting and importing visualizations for several reasons:
- Backup purposes.
- Moving visualizations between different hosts.
- Moving visualizations between users.
- Using the same visualization with different data.
With
cartodb:vizs:export_user_visualization_json task you can export a visualization to JSON, and with
cartodb:vizs:import_user_visualization_json you can import it. The first writes to stdout and the second reads from stdin.
This example exports
c54710aa-ad8f-11e5-8046-080027880ca6 visualization.
$ bundle exec rake cartodb:vizs:export_user_visualization_json['c54710aa-ad8f-11e5-8046-080027880ca6'] > c54710aa-ad8f-11e5-8046-080027880ca6.json
and this imports it into
6950b745-5524-4d8d-9478-98a8a04d84ba user, who is in another server.
$ cat c54710aa-ad8f-11e5-8046-080027880ca6.json | bundle exec rake cartodb:vizs:import_user_visualization_json['6950b745-5524-4d8d-9478-98a8a04d84ba']
Please keep in mind the following:
- Exporting has backup purposes, so it keeps ids. If you want to use this to replicate a visualization in the same server you can edit the JSON and change the ids. Any valid, distinct UUID will work.
- It exports neither the tables nor their data. The destination user should have tables with the same names as the original ones for the visualization to work. You can change the table names in the JSON file if the names are different.
Exporting/Importing full visualizations¶
Disclaimer: this feature is still in beta
You can export a complete visualization (data, metadata and map) with this command:
bundle exec rake cartodb:vizs:export_full_visualization['5478433b-b791-419c-91d9-d934c56f2053']
That will generate a .carto file that you can import in any CartoDB installation just dropping the file as usual. | https://cartodb.readthedocs.io/en/v4.11.10/operations/exporting_importing_visualizations.html | 2019-05-19T09:51:44 | CC-MAIN-2019-22 | 1558232254731.5 | [] | cartodb.readthedocs.io |
Description
Executes the provided JavaScript snippet. Runs asynchronously when Timeout is set. Returns a string.
Usage
Pass the script you want to execute as a parameter for the action. You have to set a WebElement as the return value, otherwise the action will fail. The optional arguments are stored in an array and can be used in your script. For example, type "arguments[0]" to get the value of the Argument 0 parameter.
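A minimal snippet along these lines, assuming (purely for illustration) that Argument 0 holds a CSS selector:
// arguments[0] is the value passed in the Argument 0 parameter
// returning an element satisfies the action's WebElement return requirement
return document.querySelector(arguments[0]);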
Deletes the import relationship of the specified LUN or the specified foreign disk
You cannot use this command if an import is in-progress between the foreign disk and the LUN unless you use the force option. The import has to either successfully completed or be stopped before deleting the import relationship.
You can use the lun import stop command to stop the data import, and then you delete the import relationship.
cluster1::> lun import delete -vserver vs1 -path /vol/vol2/lun2
Deletes the import relationship of lun2 at the path /vol/vol2/lun2.
cluster1::> lun import delete -vserver vs0 -foreign-disk 6000B5D0006A0000006A020E00040000 | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-cmpr-950/lun__import__delete.html | 2019-05-19T08:39:41 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.netapp.com |
Getting Started¶
MathJax allows you to include mathematics in your web pages, either using LaTeX, MathML, or AsciiMath notation, and the mathematics will be processed using JavaScript to produce HTML, SVG or MathML equations for viewing in any modern browser. There are two ways to use MathJax: the easiest is to link to the copy served by a distributed network service (CDN), but you can also download MathJax and install it on your own server or use it locally on your hard disk (with no need for network access). Both approaches are described below, starting with the CDN.
The easiest way to use MathJax is to link directly to the public installation available through the MathJax Content Distribution Network (CDN). When you use the MathJax CDN, there is no need to install MathJax yourself, and you can begin using MathJax right away.
The CDN will automatically arrange for your readers to download MathJax files from a fast, nearby server. And since bug fixes and patches are deployed to the CDN as soon as they become available, your pages will always be up to date with the latest browser and devices.
To use MathJax from our server, you need to do two things:
- Link to MathJax in the web pages that are to include mathematics.
- Put mathematics into your web pages so that MathJax can display it.
To jump start, you accomplish the first step by putting
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
into the
<head> block of your document. (It can also go in the
<body> if necessary, but the head is to be preferred.) This will
load the latest version of MathJax from the distributed server, and
configure it to recognize mathematics in both TeX, MathML, and AsciiMath notation,
and ask it to generate its output using HTML with CSS to display the
mathematics.
Warning
The
TeX-MML-AM_CHTML configuration is one of the most general (and thus largest) combined configuration files. We list it here because it will quickly get you started using MathJax. It is probably not the most efficient configuration for your purposes and other combined configuration files are available. You can also provide additional configuration parameters to tailor one of the combined configurations to your needs or use our development tools to generate your own combined configuration file.
More details about the configuration process can be found in the Loading and Configuring MathJax instructions.
The use of
cdn.mathjax.org is governed by its terms of service, so be
sure to read that before linking to the MathJax CDN server.
Note
To see how to enter mathematics in your web pages, see Putting mathematics in a web page below.
Secure Access to the CDN¶
If the MathJax CDN is accessed via a plain http:// address (note the missing s after http), the script is downloaded over a regular,
insecure HTTP connection. This poses a security risk as a malicious third
party can intercept the MathJax script and replace it. This is known as a
man-in-the-middle attack.
To prevent such attacks, one should access the MathJax CDN over a secure HTTPS
connection, as demonstrated in the first example earlier.
If the user wishes to use insecure HTTP to download the MathJax script if and only if the page itself is downloaded over insecure HTTP, then a protocol-relative address can be used to automatically switch between HTTP and HTTPS depending on what the current page uses:
<script type="text/javascript" async
  src="//cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
Note that this trick will not work if the page is accessed locally via file://, as the browser would then try to load MathJax from a file:// address that does not exist.
Putting mathematics in a web page¶
The configuration file used in the examples above tells MathJax to look for TeX, AsciiMath, and MathML notation within your pages. Other configuration files tell MathJax to use only one of these input options or one of the other output options.
TeX and LaTeX input¶
For mathematics written in TeX or LaTeX notation, see the TeX and LaTeX support page for details on how to mark up your mathematics, and in particular how to deal with single dollar signs in your text when you have enabled single dollar-sign delimiters. TeX input is enabled by the configuration files which include TeX in their name (e.g., TeX-AMS_CHTML).
Note
See TeX and LaTeX support for details on the other TeX extensions that are available.
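The TeX sample page itself is not reproduced above, but a minimal page in the same spirit (the CDN URL is the one assumed throughout this guide) would look roughly like this:
<!DOCTYPE html>
<html>
<head>
<title>MathJax TeX Test Page</title>
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
</head>
<body>
<p>When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are</p>
<p style="text-align:center">
  $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
</p>
</body>
</html>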
MathML input¶
For mathematics written in MathML notation, you mark your
mathematics using standard
<math> tags, where
<math
display="block"> represents displayed mathematics and
<math
display="inline"> or just
<math> represents in-line mathematics.
Note that even on old browsers this will work in HTML files, not just XHTML files (MathJax works with both), and that the web page need not be served with any special MIME-type. However, note that in HTML (as opposed to XHTML) you should not use a namespace prefix such as m: on your math tags.
Although it is not required, it is recommended that you include the
xmlns="http://www.w3.org/1998/Math/MathML" attribute on all
<math> tags in your document (and this is preferred to the use of
a namespace prefix like
m: above, since those are deprecated in
HTML5) in order to make your MathML work in the widest range of
situations.
Here is a minimal sample page containing MathML mathematics (a fuller example is available in the test/sample-mml.html file):
<!DOCTYPE html>
<html>
<head>
<title>MathJax MathML Test Page</title>
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=MML_CHTML">
</script>
</head>
<body>
<p>When
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><mo>&#x2260;</mo><mn>0</mn>
</math>,
the equation
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>a</mi><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><mi>b</mi><mi>x</mi><mo>+</mo><mi>c</mi><mo>=</mo><mn>0</mn>
</math>
has two solutions.</p>
</body>
</html>
The component of MathJax that recognizes MathML notation within the
page is called the mml2jax extension, and it has only a few
configuration options; see the
config/default.js file or the
mml2jax configuration options page for more
details.
AsciiMath input¶
MathJax v2.0 introduced a new input format: AsciiMath notation by incorporating ASCIIMathML.
By default, you mark mathematical
expressions written in this form by surrounding them in “back-ticks”, i.e.,
`...`.
Here is a complete sample page containing AsciiMath notation (also available in the test/sample-asciimath.html file):
<!DOCTYPE html>
<html>
<head>
<title>MathJax AsciiMath Test Page</title>
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=AM_CHTML">
</script>
</head>
<body>
<p>When `a != 0`, there are two solutions to `ax^2 + bx + c = 0` and they are</p>
<p style="text-align:center">
  `x = (-b +- sqrt(b^2-4ac))/(2a) .`
</p>
</body>
</html>
The component of MathJax that recognizes asciimath notation within the
page is called the asciimath2jax extension, and it has only a few
configuration options; see the
config/default.js file or the
asciimath2jax configuration options page for more
details.
Note
See the AsciiMath support page for more on MathJax's AsciiMath support.
Installing Your Own Copy of MathJax¶
We recommend using the CDN service if you can, but you can also install MathJax on your own server or use it locally on your hard disk (with no need for network access). To do that you will need to:
- Download and install a copy of MathJax on your server or hard disk.
- Configure MathJax to suit the needs of your site.
- Link MathJax into the web pages that are to include mathematics.
- Put mathematics into your web pages so that MathJax can display it.
Downloading and Installing MathJax¶
The MathJax source code is hosted on
GitHub.
To install MathJax on your own server, download the latest distribution, unpack it, and place the resulting MathJax directory on your web server or hard disk.
Once you have MathJax set up on your server, you can test it using the
files in the
MathJax/test directory. If you are putting MathJax
on a server, load them in your browser using their web addresses.
Note
For more details (such as version control access) see the installation instructions.
Configuring your copy of MathJax¶
When you include MathJax into your web pages as described below, it
will load the file
config/TeX-MML-AM_CHTML.js (i.e., the file
named
TeX-MML-AM_CHTML.js in the
config folder of the
main
MathJax folder). This file preloads all the most
commonly-used components of MathJax, allowing it to process
mathematics that is in the TeX or LaTeX format, AsciiMath format, or in MathML notation.
It will produce output in HTML (with CSS) to render the
mathematics.
There are a number of other prebuilt configuration files that you can
choose from as well, or you could use the
config/default.js file and
customize the settings yourself.
Note
The combined configuration files are described more fully in Common Configurations, and the configuration options are described in Configuration Options.
Linking your copy of MathJax into a web page¶
You can include MathJax in your web page by putting
<script type="text/javascript" async
  src="path-to-MathJax/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
to load MathJax in your page. For example, your page could look like
<html>
<head>
...
<script type="text/javascript" async
  src="path-to-MathJax/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
</head>
<body>
...
</body>
</html>
Version: 1.1.111.dev0
Getting Started¶
This page gives an introduction to libtaxii and how to use it. Please note that this page is being actively worked on and feedback is welcome.
Modules¶
The libtaxii library contains the following modules:
- libtaxii - Contains version info and some methods for getting TAXII Messages from HTTP responses. (Implemented in
libtaxii/__init__.py)
- libtaxii.clients. - TAXII HTTP and HTTPS clients. (Implemented in
libtaxii/clients.py)
- libtaxii.common - Contains functions and classes useful for all versions of TAXII
- libtaxii.constants - Contains constants for TAXII
- libtaxii.messages_10 - Creating, handling, and parsing TAXII 1.0 messages. (Implemented in
libtaxii/messages_10.py)
- libtaxii.messages_11 - Creating, handling, and parsing TAXII 1.1 messages. (Implemented in
libtaxii/messages_11.py)
- libtaxii.taxii_default_query - Creating, handling and parsing TAXII Default Queries. (Implemented in
libtaxii/taxii_default_query.py) New in libtaxii 1.1.100.
- libtaxii.validation - Common data validation functions used across libtaxii. (Implemented in
libtaxii/validation.py)
TAXII Messages Module Structure¶
In the TAXII message modules (
libtaxii.messages_10 and
libtaxii.messages_11), there is a class corresponding to each type of
TAXII message. For example, there is a
DiscoveryRequest class for the
Discovery Request message:
import libtaxii.messages_11 as tm11

discovery_request = tm11.DiscoveryRequest( ... )
For types that can be used across multiple messages (e.g., a Content Block
can exist in both Poll Response and Inbox Message), the corresponding class
(
ContentBlock) is (and always has been) defined at the module level.
content_block = tm11.ContentBlock( ... )
Other types that are used exclusively within a particular TAXII message type
were previously defined as nested classes on the corresponding message class;
however, they are now defined at the top level of the module. For example, a
Service Instance is only used in a Discovery Response message, so the class
representing a Service Instance, now just
ServiceInstance, was previously
DiscoveryResponse.ServiceInstance. The latter name still works for backward
compatibility reasons, but is deprecated and may be removed in the future.
service_instance = tm11.ServiceInstance( ... )
service_instance = tm11.DiscoveryRequest.ServiceInstance( ... )
See the API Documentation for proper constructor arguments for each type above.
TAXII Message Serialization and Deserialization¶
Each class in the message modules has serialization and deserialization methods
for XML Strings, Python dictionaries, and LXML ElementTrees. All serialization
methods (
to_*()) are instance methods called on specific objects (e.g.,
discovery_request.to_xml()). Deserialization methods (
from_*()) are
class methods and should be called on the class itself (e.g.,
tm11.DiscoveryRequest.from_xml(xml_string)).
Each class in messages.py defines the following:
from_xml(xml_string)- Creates an instance of the class from an XML String.
to_xml()- Creates the XML representation of an instance of a class.
from_dict(dictionary)- Creates an instance of the class from a Python dictionary.
to_dict()- Creates the Python dictionary representation of an instance of a class.
from_etree(lxml_etree)- Creates an instance of the class from an LXML Etree.
to_etree()- Creates the LXML Etree representation of an instance of a class.
To create a TAXII Message from XML:
xml_string = '<taxii:Discovery_Response ... />'  # Note: Invalid XML
discovery_response = tm11.DiscoveryResponse.from_xml(xml_string)
To create an XML string from a TAXII Message:
new_xml_string = discovery_response.to_xml()
The same approach can be used for Python dictionaries:
msg_dict = { ... }  # Note: Invalid dictionary syntax
discovery_response = tm11.DiscoveryResponse.from_dict(msg_dict)
new_dict = discovery_response.to_dict()
and for LXML ElementTrees:
msg_etree = etree.Element( ... )  # Note: Invalid Element constructor
discovery_response = tm11.DiscoveryResponse.from_etree(msg_etree)
new_etree = discovery_response.to_etree()
Schema Validating TAXII Messages¶
You can use libtaxii to Schema Validate XML, etree, and file representations of TAXII Messages. XML Schema validation cannot be performed on a TAXII Message Python object, since XML Schema validation can only be performed on XML.
A full code example of XML Schema validation can be found in API Documentation
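A hedged sketch of what that looks like (the SchemaValidator class and constant below are from libtaxii.validation; double-check the exact attribute names against the API documentation):
from libtaxii.validation import SchemaValidator

# validate a TAXII 1.1 message held in xml_string
validator = SchemaValidator(SchemaValidator.TAXII_11_SCHEMA)
result = validator.validate_string(xml_string)
if not result.valid:
    for error in result.error_log:
        print(error)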
TAXII Clients¶
The libtaxii.clients module defines a single class
HttpClient capable
of invoking TAXII services over both HTTP and HTTPS. The client is a fairly
straightforward wrapper around Python's builtin
httplib and supports the use
of both HTTP Basic and TLS Certificate authentication.
Example usage of clients:
import libtaxii as t
import libtaxii.clients as tc
import libtaxii.messages_11 as tm11
from libtaxii.constants import *

client = tc.HttpClient()
client.set_auth_type(tc.HttpClient.AUTH_BASIC)
client.set_use_https(True)
client.set_auth_credentials({'username': 'MyUsername', 'password': 'MyPassword'})

discovery_request = tm11.DiscoveryRequest(tm11.generate_message_id())
discovery_xml = discovery_request.to_xml()

http_resp = client.call_taxii_service2('example.com', '/pollservice/', VID_TAXII_XML_11, discovery_xml)
taxii_message = t.get_message_from_http_response(http_resp, discovery_request.message_id)
print taxii_message.to_xml()
Create a new file in the <PRODUCT_HOME> directory. The file should be named according to your OS as explained below.
- For Linux: The file name should be password-tmp.
- For Windows: The file name should be password-tmp.txt.
Add "wso2carbon" (the primary keystore password) to the new file and save it; this value will be read when you start the server (see step 3 below). By default, the password provider assumes that both private key and keystore passwords are the same. If not, the private key password must be entered in the second line of the file.
Now, start the server as a background process by running the product start-up script from the <PRODUCT_HOME>/bin directory:
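The start-up script name depends on the product; for a WSO2 Carbon-based server it is typically wso2server.sh (wso2server.bat on Windows), so the command would look like this:
cd <PRODUCT_HOME>/bin
sh wso2server.sh start    # runs the server as a background process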
Users Groups Edit
From Joomla! Documentation
Description
User groups play a central role in what a user can do and see on the site. Creating user groups is normally the first step in setting up the security system for your site. If a user is not a member of any group assigned to an item's Access Level, that user cannot view that object. If a new group will have similar permissions to an existing group, you can save work by making the new group a child of the existing group. That way, you only need to change the permissions that are different for the new group.
Install from Web
From Joomla! Documentation
The Install from Web feature lets you browse the Joomla! Extension Directory (JED) and install extensions directly from your site's Extension Manager.
Video Demo
You can see a video walk through of the Install from web feature here:
Information for Extension Developers
If you already have an extension on JED then you need to make a few small changes to get your extension working on the Joomla Extension Finder. Click Here for more information.
Plugin versions
- 1.0.2 Initial released version with Joomla 3.2.0 Stable
- 1.0.3 Minor fix for translations escapings (server-side fix for "Sort by rating" to correspond to JED)
- 1.0.4 Minor addition for support of display of commercial/non-commercial "$" icon and for number of reviews and votes in category views (server-side: Same additions and fix of sorting in leaf-category view by rating to reflect JED sorting)
- 1.0.5 Mandatory upgrade for Joomla 3.2.1 to adapt to a change in Javascript introduced by Joomla 3.2.1 | https://docs.joomla.org/index.php?title=Install_from_Web&redirect=no | 2015-08-28T03:44:03 | CC-MAIN-2015-35 | 1440644060173.6 | [] | docs.joomla.org |
What is FixMe.IT?
FixMe.IT is a fast and easy-to-use remote desktop application designed for delivering both on-demand (attended) and unattended remote support to users located anywhere in the world.
With FixMe.IT, you can view and/or control a remote user's keyboard and mouse, communicate via text chat, transfer files between computers, reboot and automatically reconnect, video record your sessions, and much more.
FixMe.IT stands out as an easy-to-use, reliable, and reasonably priced remote support application. See what users are saying about FixMe.IT on Capterra and G2Crowd.
Download Product Overview white paper to learn more about the benefits of FixMe.IT | https://docs.fixme.it/general-questions/what-is-fixme-it | 2022-08-08T01:48:03 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.fixme.it |
Introduction
Throughout this article, we will go through creating and managing your VHC templates from within VGM. VHC templates can then be added to jobsheets and assigned to technicians to perform in VGM Technician, our companion app.
Getting started
To get started, navigate to the VHC checklist page via Config > Vehicle Checklists > VHC Templates, located in the top toolbar.
Navigating here will show you all of the checklists that you have previously created. Using the options in the top toolbar, you can create new checklists, edit existing checklists (you can also double click a template to open it), delete checklists and clone checklists.
Creating a new VHC checklist
To create a new checklist, click the new button in the top toolbar.
A new window will open up where you’ll be able to enter the following basic information:
- Name: Give the template a memorable name. This will be visible when you add the checklist to jobs and assign them to technicians.
- Description: You may wish to add a longer description, but this isn’t required.
- Estimated Hours: The length of time the checklist is estimated to take.
You can click save at any point or proceed to the template items tab, where you can build up the checklist itself.
Adding items to your VHC checklist
To give the checklist some structure, you must first create groups that can be used to group common checklist items.
Once a group is created, you can select it and click Add Item, where you’ll be able to add the following item types:
- Severity: Green, Amber and Red to signify pass, advisory or immediate attention required.
- Tread depth: Record tyre tread depths.
- Whole number: Any whole number.
- Decimal number: Any decimal number.
Used alongside our companion app, VGM Technician, these checklists will render specific inputs depending on the checklist item type to ensure you record the best data possible.
Your technicians will also be able to add notes against each item, and for media package subscribers, they’ll also be able to add photos.
| https://docs.motasoft.co.uk/configuring-vhc-checklist-templates/ | 2022-08-08T01:30:33 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-9-1024x45.png',
None], dtype=object)
array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/Screenshot-2021-12-06-at-22.54.44-1024x566.png',
None], dtype=object)
array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/Screenshot-2021-12-06-at-22.57.08-1024x567.png',
None], dtype=object)
array(['https://www.motasoft.co.uk/wp-content/uploads/2021/06/Simulator-Screen-Shot-iPhone-12-Pro-Max-2021-06-22-at-10.53.50-473x1024.png',
None], dtype=object) ] | docs.motasoft.co.uk |
If you are running GDB to debug the host program at the same time as performing hardware debug on the kernels, you can also pause the host program as needed by inserting a breakpoint at the appropriate line of code. Instead of making changes to the host program to pause the application as needed, you can set a breakpoint prior to the kernel execution in the host code. When the breakpoint is reached, you can set up the debug ILA triggers in Vivado hardware manager, arm the trigger, and then resume the host program in GDB. | https://docs.xilinx.com/r/en-US/ug1393-vitis-application-acceleration/Pausing-the-Host-Application-Using-GDB | 2022-08-08T01:33:53 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.xilinx.com |
Setting up the company account
Our back end is accessible through APIs as well as a configuration interface called the DataHub.
Introduction
The Damoov platform supports a wide range of different apps. To use our API and SDK, you first need to set up your company workspace and configure some key settings.
Important: Avoid duplicate company accounts
Setting up a Company or Application has to be done ONCE. If someone else on your team has already created the company, ask them to send you an invite to the project instead of setting up a duplicate company.
Create the account
Visit our Datahub, enter your contact information and some details about your company. (You will be able to add other admins later.)
For verification purposes, you will get an email with a code. After acknowledging that, your company account is created with you as an admin.
Note that initially, you will see a message that the hub is "Awaiting for telematics data" because no trip has been recorded at this point yet.
Set up your application
Open the DataHub. Click on Management on the bottom left to enter the Management screen. Go through "Company Settings" and "Application Settings", completing the necessary fields. You can choose "UAT" (User Acceptance Testing) if your app is not yet in the app store(s).
Move to Production
Create a new Prod application and Replace UAT Credentials (InstanceID and Instance KEY) with production InstanceID and InstanceKEY
Info: More than one app under the same company
A company can have more than one app. For example, if you're an insurance company, you may have one app for business customers with vehicle fleets, and you may have another app for young drivers focused on safe driving. Both can be managed under the same company account with the same admin team and exist as separate applications.
That said, if you have an application with both iOS and Android versions, set it up as a single app. (Read more about High-level architecture concept.)
You can skip "Group Settings" for now, meaning that all test users will be in the "Common" user group.
Configure global settings
Click on your username in the top right corner and select "Global Settings".
Chose whether you use km or miles, and whether to use HERE Maps (highly recommended to avoid unpredictable UI bugs in DataHub) or Google Maps
This applies to the DataHub interface only, not to your application.
Updated 6 months ago | https://docs.damoov.com/docs/setting-up-the-company-account | 2022-08-08T01:02:19 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.damoov.com |
How to repair your Outlook personal folder file (.pst)
Original KB number: 272227
Follow these steps to repair your Outlook personal folder file (.pst) by using Inbox Repair tool.
Step 1 - Exit Outlook and start the Inbox Repair tool
Automatically start the Inbox Repair tool
Start the Inbox Repair tool (Scanpst.exe). Then select Open or Run in the File Download dialog box, and follow the steps in the easy fix wizard.
Once the Inbox Repair tool is started, continue with Step 2. (Can't start the Inbox Repair tool?)
Manually start the Inbox Repair tool
To start the Inbox Repair tool manually, locate one of the folders by using Microsoft Windows Explorer, and then double-click the Scanpst.exe file.
Note
The file or folders may be hidden. For instructions about how to unhide files and folders, see your operating system documentation.
(Can't find the Inbox Repair tool?)
Step 2 - Repair the .pst file
In the Inbox Repair tool, type the path and the file name of your personal folders (.pst) file or select Browse to locate the file by using the Windows file system, and then select Start.
Note
If you do not know where the .pst file is located, follow the steps in How to locate, move, or back up your .pst file.
Note
-.
Outlook 2010 and later
- Select the File tab on the ribbon, and then select the Info tab on the menu.
- Select the Account Settings button, and then select Account Settings again.
- Select the Data Files tab.
- Select Add to open the Create or Open Outlook Data File dialog box.
- Enter a file name for your new Outlook Data (.pst) file, and then select OK.
- You should have a new Outlook Data (.pst) file in your profile.
Outlook 2007
- On the File menu, select Data File Management.
- Select Add to open the New Outlook Data File dialog box.
- In the Types of storage dialog box, select Office Outlook Personal Folders File (.pst), and then select OK.
- In the Create or Open Outlook Data File dialog box, select the location and a file name for your new Personal Folders (.pst) file, and then select OK.
- Select OK.
- You should have a new Personal Folders (.pst) file in your profile.
Outlook 2003
- On the File menu, point to New, and then select Outlook Data File.
- Select OK to open the Create or Open Outlook Data File dialog box.
- Enter a file name for your new Personal Folders (.pst) file, and then select OK to open the Create Microsoft Personal Folders dialog box.
- Enter a file name for your new Personal Folders (.pst) file, and then select OK.
- You should have a new Personal Folders (.pst) file in your profile.
Outlook 2002
- On the File menu, point to New, and then select Personal Folders File (.pst).
- Select Create to open the Create Microsoft Personal Folders dialog box.
- Enter a file name for your new Personal Folders (.pst) file, and then select OK.
- You should have a new Personal Folders (.pst) file in your profile. the Recover repaired items from the backup file (Optional) section.
What is the Inbox Repair tool
If you can't start the Inbox Repair tool automatically or manually, you may try to repair your Office application..
Import the New name.pst file that you created in the previous step by using the Import and Export Wizard in Outlook. To do this, follow these steps:
- On the File menu, select Import and Export.
Note.
- Under Options, select Do not import duplicates, and then select Next.
- Under Select the folder to import from, select the Personal Folders (.pst) file, and then select Include subfolders.
- Select Import folders into the same folder in, and then select your new Personal Folders (.pst).
- Select Finish.
Note.
How the Inbox Repair tool validates and corrects errors
Scan.
Messages.
No validation is explicitly done on body-related properties or on subject-related properties, except the implicit low-level validation that this article discusses earlier. The recipient display properties are changed to be consistent with the recovered recipient table. As soon as this operation is complete, other algorithms are run to collect all the orphaned messages and to put them in an Orphans folder.
For more information about binary trees (btrees), see An Extensive Examination of Data Structures. | https://docs.microsoft.com/en-US/outlook/troubleshoot/data-files/how-to-repair-personal-folder-file | 2022-08-08T02:57:52 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['../client/data-files/media/how-to-repair-personal-folder-file/inbox-repair-tool.png',
'Screenshot shows steps to repair the .pst file in the Inbox Repair tool.'],
dtype=object) ] | docs.microsoft.com |
Development¶
Installing from PyPi¶
We currently support python3.6 and python3.7 and you can install it via pip.
pip install evol
Developing Locally with Makefile¶
You can also fork/clone the repository on Github to work on it locally. We've added a Makefile to the project that makes it easy to install everything ready for development.
make develop
There are some other helpful commands in there. For example, testing can be done via:
make test
This will run pytest and possibly, in the future, also the docstring tests.
Generating Documentation¶
The easiest way to generate documentation is by running:
make docs
This will populate the /docs folder locally. Note that we ignore the contents of this folder via .gitignore, because building the documentation is something that we outsource to the Read the Docs service.
'https://i.imgur.com/7MHcIq1.png'], dtype=object)] | evol.readthedocs.io |
View CodeDeploy deployment details
You can use the CodeDeploy console, the Amazon CLI, or the CodeDeploy APIs to view details about deployments associated with your Amazon account.
You can view EC2/On-Premises deployment logs on your instances in the following locations:
Amazon Linux, RHEL, and Ubuntu Server:
/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log
Windows Server: C:\ProgramData\Amazon\CodeDeploy<DEPLOYMENT-GROUP-ID><DEPLOYMENT-ID>\logs\scripts.log
For more information, see Analyzing log files to investigate deployment failures on instances.
View deployment details (console)
To use the CodeDeploy console to view deployment details:.
To see more details for a single deployment, in Deployment history, choose the deployment ID or choose the button next to the deployment ID, and then choose View.
View deployment details (CLI)
To use the Amazon CLI to view deployment details, call the
get-deployment
command or the
batch-get-deployments command. You can call the
list-deployments command to get a list of unique deployment IDs to use
as inputs to the
get-deployment command and the
batch-get-deployments command.
To view details about a single deployment, call the get-deployment command, specifying the unique deployment identifier. To get the deployment ID, call the list-deployments command.
To view details about multiple deployments, call the batch-get-deployments command, specifying multiple unique deployment identifiers. To get the deployment IDs, call the list-deployments command.
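As a minimal sketch, those calls might look like the following; the application name, deployment group name, and deployment IDs are placeholders rather than values from this page:
aws deploy list-deployments --application-name MyApp --deployment-group-name MyDeploymentGroup
aws deploy get-deployment --deployment-id d-EXAMPLE1111
aws deploy batch-get-deployments --deployment-ids d-EXAMPLE1111 d-EXAMPLE2222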
To view a list of deployment IDs, call the list-deployments command, specifying:
The name of the application associated with the deployment. To view a list of application names, call the list-applications command.
The name of the deployment group associated with the deployment. To view a list of deployment group names, call the list-deployment-groups command.
Optionally, whether to include details about deployments by their deployment status. (If not specified, all matching deployments will be listed, regardless of their deployment status.)
Optionally, whether to include details about deployments by their deployment creation start times or end times, or both. (If not specified, all matching deployments will be listed, regardless of their creation times.) | https://docs.amazonaws.cn/en_us/codedeploy/latest/userguide/deployments-view-details.html | 2022-08-08T01:57:11 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.amazonaws.cn |
Adm
Credential Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Specifies the Amazon Device Messaging (ADM) credentials.
[System.Runtime.Serialization.DataContract(Name="AdmCredential", Namespace="")] public class AdmCredential : Microsoft.Azure.NotificationHubs.PnsCredential
type AdmCredential = class inherit PnsCredential
Public Class AdmCredential Inherits PnsCredential
- Inheritance
- System.Object → PnsCredential → AdmCredential
- Attributes
- System.Runtime.Serialization.DataContractAttribute | https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.notificationhubs.admcredential?view=azure-dotnet | 2022-08-08T00:38:06 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.azure.cn |
%SYSTEM.WorkMgr
class %SYSTEM.WorkMgr extends %Library.SystemBase
Each queued unit of work returns a %Status value so it can indicate errors; these are displayed and returned by the WaitForComplete() method.
A typical calling sequence is:
Set queue=$system.WorkMgr.Initialize("/multicompile=1",.sc)
If $$$ISERR(sc) ; Report Error
For i=1:1:100 {
    Set sc=queue.Queue("##class(MyClass).ClassMethod",i)
    If $$$ISERR(sc) ; Report Error
}
Set sc=queue.WaitForComplete()
If $$$ISERR(sc) ; Report Error
Then you call Queue() to queue a unit of work to be completed, this takes either a class method call, or a '$$func^rtn' reference and then any arguments you need to pass to this function. As soon as the first Queue() is called a worker will start processing this item of work. It is important to make sure that all the units of work are totally independent and they do not rely on other work units. You can not rely on the order in which the units of work are processed. If the units may be changing a common global you will need to add locking to ensure one worker can not change a global while another worker is in the middle of reading this global. When a unit of work is queued the current security context is stored so when the work is run it will use the current security context. Note that the worker jobs are started by the super server and so will run as the operating system user that the super server process is setup to use, this may be different to your current logged in operating system user.
Finally, you call WaitForComplete() to wait for all the units of work to be complete, display any output each unit produced, and report any errors reported from the work units. WaitForComplete() will use the qualifiers provided in Initialize().
Property Inventory
Method Inventory
- Clear()
- DefaultNumWorkers()
- Flush()
- Free()
- Initialize()
- IsWorkerJob()
- NumActiveWorkersGet()
- NumberWorkers()
- Pause()
- Queue()
- QueueCallback()
- Resume()
- Setup()
- TearDown()
- Wait()
- WaitForComplete() | https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&LIBRARY=%25SYS&CLASSNAME=%25SYSTEM.WorkMgr | 2022-08-08T00:34:54 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.intersystems.com |
The Vitis™ tool flow, as described in Integrating the Application Using the Vitis Tools Flow, is also available in the Vitis IDE. The different steps involved in building the system project, with an AI Engine graph, PL kernels, and PS application, are described in the following sections.
Before using the Vitis IDE, you must first set up the development environment, as described in Setting Up the Vitis Tool Environment. | https://docs.xilinx.com/r/en-US/ug1076-ai-engine-environment/Using-the-Vitis-IDE | 2022-08-08T02:03:03 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.xilinx.com |
Modern Solution for Modern BizTalk Server Challenges!
Understand how BizTalk360 helps BizTalk Administrators being more effective.
The standard quick installation or upgrade of BizTalk360 application is a breeze.
Understand how to apply your Free Trial or Enterprise license in BizTalk360.
All the consoles you need in one portal. See what all is at your fingertips.
Be notified when something goes wrong with the rich monitoring capabilities.
Retrieve statistical information and reports of your BizTalk data using BizTalk360.
This book has been crafted by a renowned BizTalk Server expert panel to make sure your migration to BizTalk Server 2020 can be done in the simplest way.
Yes, if you are currently working with BizTalk360 v8 and above. If you are engaged with lower versions, please consult with our support team via [email protected].
All versions from BizTalk Server 2009 and newer (including BizTalk Server 2020) are supported by BizTalk360.
No, BizTalk360 is a third-party product from Kovai.co for BizTalk Server Monitoring, Operations, and Analytics. We are also proud to add that Microsoft is one of our biggest customers.
We support all the latest Server products like Window Server 2019, SQL Server 2019, and Visual Studio 2019.
BizTalk360 has a subscription model that is based on different license tiers to choose from. To know more, please have a discussion with our expert via the above form or check our pricing page. | https://docs.biztalk360.com/v10-0/es | 2022-08-08T00:38:24 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.biztalk360.com |
Frame.Disposed Event
Namespace: DevExpress.ExpressApp
Assembly: DevExpress.ExpressApp.v22.1.dll
Declaration
Event Data
Remarks
This event is raised as a result of calling the Frame.Dispose method. This method releases all resources allocated by the current Frame and speeds up system performance. Handle this event to release custom resources after the Frame has been disposed of.
See Also | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Frame.Disposed | 2022-08-08T00:21:25 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.devexpress.com |
- 14 May 2021
- 2 Minutes to read
- DarkLight
ASP.NET MVC & WebForms
- Updated on 14 May 2021
- 2 Minutes to read
- DarkLight
Building an ASP.NET MVC or WebForms Application
ASP.NET MVC is the most popular web framework built for .NET that allows developers to build maintainable, scalable, cross-platform web applications. By separating concerns (i.e. not coupling the HTML views with the database backend and vice-versa), teams of developers with a variety of skillsets are able to focus on their areas of expertise without having to understand the intimate details of the underlying framework like its predecessor ASP.NET WebForms.
Building ASP.NET Applications
MSBuild (Microsoft Build Engine) is a tool for building all types of .NET applications, and it's used internally by Visual Studio to build projects and solutions. CI servers would perform similar build tasks or operations by invoking MSBuild operations.
MSBuild doesn't require Visual Studio to be installed, but you will need to install and configure the following:
- Visual Studio Build Tools are installed
- Ensure that ".NET desktop build tools" and "Web development build tools" options are chosen during installation/modification
The simplest way to build a web application is by running MSBuild directly:
msbuild.exe MyWebProject.csproj "/p:Configuration=Release" "/p:OutDir=C:\tmp\website\\"
The difference between web applications from other types of applications (e.g. console applications, WinForms, or WPF) is that when an output directory is specified, the build output is generated in a special subdirectory in the format:
<outputDir>\_PublishedWebsites\<projectName>
The contents of this output directory are the files that would be deployed to the IIS home directory.
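For a quick manual check outside of BuildMaster, you could copy that subdirectory to an IIS home directory yourself; the paths below are placeholders, and BuildMaster's own IIS deployment (covered later) is the recommended route:
robocopy C:\tmp\website\_PublishedWebsites\MyWebProject C:\inetpub\wwwroot\MyWebProject /E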
Building ASP.NET Applications with BuildMaster
Because building ASP.NET applications simply involves running MSBuild, building ASP.NET applications is simple with BuildMaster. Behind the scenes, BuildMaster uses the MSBuild::Build-Project operation to run MSBuild.
The general process for building an ASP.NET application is as follows:
- Get source code from the source control repository
- Compile project with MSBuild
- Capture artifact for deployment
A rough example plan of this would be:
Git::Get-Source ( RepositoryUrl: , Branch: master );
MSBuild::Build-Project ProfitCalc.Web\ProfitCalc.Web.csproj ( To: ~\Output );
Create-Artifact ( From: ~\Output\_PublishedWebsites\ProfitCalc.Web );
The MSBuild::Build-Project operation in this example effectively runs the following MSBuild command:
msbuild.exe ProfitCalc.Web\ProfitCalc.Web.csproj "/p:Configuration=Release" "/p:OutDir=C:\...<buildmaster-temp>...\Output\\"
Restoring NuGet Packages
By default, MSBuild does not restore NuGet packages during a build, which is often the cause of "are you missing an assembly reference" errors.
To install NuGet packages before running MSBuild, use the NuGet::Restore-Packages() operation as follows:
NuGet::Restore-Packages ( Target: ~\Src\<project-name>, Source: );
This will essentially call nuget.exe install, and instruct it to look for a packages.config file in the SourceDirectory, and packages in the target Source.
Unit Tests
Unit tests for ASP.NET applications are handled by VSTest. An example operation to execute and capture unit test results is as follows:
WindowsSdk::Execute-VSTest ( TestContainer: ~\Output\ProfitCalc.Tests.dll );
Deploying ASP.NET Applications to IIS
While ASP.NET applications can be hosted in a variety of web servers, Microsoft's Internet Information Services (IIS) is the most common. See Deploying an IIS Website to learn how to accomplish this with BuildMaster. | https://docs.inedo.com/docs/buildmaster-platforms-dot-net-asp-net | 2022-08-08T02:05:02 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.inedo.com |
Foamgen: generate foam morphology¶
Foamgen is a package that can be used to create spatially three-dimensional virtual representations of foam morphology with desired foam density, cell size distribution and strut content.
Here are some features of Foamgen:
Generation of closed-cell and open-cell morphology
Input parameters are based on physical aspects that can be experimentally determined
Cells are created using weighted tessellation so that the desired size distribution is achieved
Mesh generated foam either using structured equidistant grid or unstructured tetrahedral mesh
Modular - easy to run only parts of the generation process
Open-source package with MIT license
Structured mesh workflow consists of several steps (for more details see [Fer18]):
Dense sphere packing
Laguerre tessellation
Geometric morphology creation
Meshing
References¶
- Fer18
Pavel Ferkl. Mathematical Modelling of Properties and Evolution of Polymer Foams. PhD thesis, University of Chemistry and Technology Prague, 2018.
Contents:
- Installation
- Tutorials
- Source Code Documentation
- Developer Documentation
- About Foamgen | https://foamgen.readthedocs.io/en/latest/ | 2022-08-08T00:17:48 | CC-MAIN-2022-33 | 1659882570741.21 | [] | foamgen.readthedocs.io |
Amazon S3 Connector Example¶
The AmazonS3 Connector allows you to access the Amazon Simple Storage Service (Amazon S3) via the AWS SDK.
What you'll build¶
This example depicts how to use AmazonS3 connector to:
- Create a S3 bucket (a location for storing your data) in Amazon cloud.
- Upload a message into the created bucket as a text file.
- Retrieve created text file back and convert into a message in the integration runtime.
All three operations are exposed via an API. The API with the context /s3connector has three resources:
/createbucket - Once invoked, it will create a bucket in Amazon with the specified name
/addobject - The incoming message will be stored into the specified bucket with the specified name
/info - Once invoked, it will read the specified file from the specified bucket and respond with the content of the file
Following diagram shows the overall solution. The user creates a bucket, stores some message into the bucket, and then receives it back.
To invoke each operation, the user uses the same API.
If you do not want to configure this yourself, you can simply get the project and run it.
Setting up the environment¶
Please follow the steps mentioned at Setting up Amazon S3 document in order to create a Amazon S3 account and obtain credentials you need to access the Amazon APIs. Keep them saved to be used in the next steps.
Configure the connector in WSO2 Integration Studio¶
Follow these steps to set up the Integration Project and import AmazonS3 connector into it.
Specify the API name as S3ConnectorTestAPI and API context as /s3connector.
First we will create the /createbucket resource. This API resource will retrieve the bucket name from the incoming HTTP PUT request and create a bucket in Amazon S3. Right click on the API Resource and go to Properties view. We use a URL template called /createbucket as we have three API resources inside a single API. The method will be PUT.
Next drag and drop the 'createBucket' operation of the S3 Connector to the Design View as shown below. Here, you will receive the following inputs from the user.
- bucketName - Name of the bucket
Create a connection from the properties window by clicking on the '+' icon as shown below.
In the popup window, the following parameters must be provided.
- Connection Name - Unique name to identify the connection by.
- Connection Type - Type of the connection that specifies the protocol to be used.
- AWS Access Key ID - Access key associated with your Amazon user account.
- AWS Secret Access Key - Secret Access key associated with your Amazon user account.
- Region - Region that is used to select a regional endpoint to make requests.
Note
- You can either define the credentials or allow the AWS SDK to manage the credentials. The SDK will look for AWS credentials in system/user environment variables or use the IAM role for authentication if the application is running in an EC2 instance.
- The IAM role for authentication is available only with Amazon S3 connector v2.0.2 and above.
After the connection is successfully created, select the created connection as 'Connection' from the drop down menu in the properties window.
Next, configure the following parameters in the properties window,
- Bucket Name - json-eval($.bucketName)
- Bucket Region - Select a region from the drop-down menu. Here we are using us-east-2.
Drag and drop the Respond Mediator to send back the response from creating the bucket as shown below.
Create the next API resource, which is /addobject, by dragging and dropping another API resource to the design view. This API resource will retrieve information about the object from the incoming HTTP POST request such as the bucketName, objectKey and the file content and upload it to Amazon S3.
Drag and drop the ‘putObject’ operation of the S3 Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties,
- Bucket Name - json-eval($.bucketName)
- Object Key - json-eval($.objectKey)
- File Content - json-eval($.message)
Drag and drop the Respond Mediator to send back the response from uploading the object.
Create the next API resource, which is /info, by dragging and dropping another API resource to the design view. This API resource will retrieve information from the incoming HTTP POST request such as the bucketName, objectKey and get the object from Amazon S3.
Next drag and drop the ‘getObject’ operation of the S3 Connector to the Design View. In the properties view, select the already created connection as 'Connection' from the drop down menu and provide the following expressions to the below properties,
- Bucket Name - json-eval($.bucketName)
- Object Key - json-eval($.objectKey)
Finally, drag and drop the Respond Mediator to send back the response from the getObject operation.
You can find the complete API XML configuration below. You can go to the source view and copy paste the following config.
<?xml version="1.0" encoding="UTF-8"?>
<api context="/s3connector" name="S3ConnectorTestAPI" xmlns="">
    <resource methods="PUT" uri-
        <inSequence>
            <amazons3.createBucket>
                <bucketName>{json-eval($.bucketName)}</bucketName>
                <bucketRegion>us-east-2</bucketRegion>
            </amazons3.createBucket>
            <respond/>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
    <resource methods="POST" uri-
        <inSequence>
            <amazons3.putObject>
                <bucketName>{json-eval($.bucketName)}</bucketName>
                <objectKey>{json-eval($.objectKey)}</objectKey>
                <fileContent>{json-eval($.message)}</fileContent>
            </amazons3.putObject>
            <respond/>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
    <resource methods="POST" uri-
        <inSequence>
            <amazons3.getObject>
                <bucketName>{json-eval($.bucketName)}</bucketName>
                <objectKey>{json-eval($.objectKey)}</objectKey>
            </amazons3.getObject>
            <respond/>
        </inSequence>
        <outSequence/>
        <faultSequence/>
    </resource>
</api>
Note:
- As awsAccessKeyId, use the access key obtained from the Amazon S3 setup and update the above API configuration.
- As awsSecretAccessKey, use the secret key obtained from the Amazon S3 setup and update the above API configuration.
- Note that region, connectionName and credentials are hard coded. Please change them as per the requirement.
- For more information, please refer to the reference guide for the Amazon S3 connector.
Now we can export the imported connector and the API into a single CAR application. The CAR application is the one we are going to deploy to the server runtime.
Now the exported CApp can be deployed in the integration runtime so that we can run it and test.
We can use Curl or Postman to try the API. The testing steps are provided for curl. Steps for Postman should be straightforward and can be derived from the curl requests.
Creating a bucket in Amazon S3¶
- Create a file called data.json with the following content. Note that the bucket region is us-east-2. If you need to create the bucket in a different region, modify the hard coded region of the API configuration accordingly.
{ "bucketName":"wso2engineers" }
Invoke the API as shown below using the curl command. Curl Application can be downloaded from here.
curl -H "Content-Type: application/json" --request PUT --data @data.json
Expected Response:
You will receive a response like below containing the details of the bucket created.
{ "createBucketResult": { "success": true, "Response": { "Status": "200:Optional[OK]", "Location": "" } } }
Please navigate to the Amazon AWS S3 console and see if a bucket called wso2engineers is created. If you tried to create a bucket with a name that already exists, it will reply back with a message indicating the conflict.
- Create a file called data.json with the following content.
{ "bucketName":"wso2engineers", "objectKey":"Julian.txt", "message":"Julian Garfield, Software Engineer, Integration Group" }
Invoke the API as shown below using the curl command. Curl Application can be downloaded from here.
curl -H "Content-Type: application/json" --request POST --data @data.json
Expected Response: You will receive a response like below containing the details of the object created.
{ "putObjectResult": { "success": true, "PutObjectResponse": { "ETag": "\"359a77e8b4a63a637df3e63d16fd0e34\"" } } }
Navigate to the AWS S3 console and click on the bucket wso2engineers. You will note that a file has been created with the name Julian.txt.
Read objects from Amazon S3 bucket¶
Now let us read the information on wso2engineers that we stored in the Amazon S3 bucket.
Create a file called data.json with the following content. It specifies which bucket to read from and what the filename is. This example assumes that the object is stored at root level inside the bucket. You can also read an object stored in a folder inside the bucket.
{ "bucketName":"wso2engineers", "objectKey":"Julian.txt" }
2. Invoke the API as shown below using the curl command.
curl -H "Content-Type: application/json" --request POST --data @data.json
Expected Response: You receive a response similar to the following. The Content element contains the contents of the file requested.
Note: The Content element is available only with Amazon S3 connector v2.0.1 and above.
{ "getObjectResult": { "success": true, "GetObjectResponse": { "AcceptRanges": "bytes", "Content": "Julian Garfield, Software Engineer, Integration Group", "ContentLength": 45, "ContentType": "text/plain; charset=UTF-8", "DeleteMarker": false, "ETag": "\"359a77e8b4a63a637df3e63d16fd0e34\"", "LastModified": null, "metadata": null, "MissingMeta": 0, "PartsCount": 0, "TagCount": 0 } } }
In this example, the Amazon S3 connector is used to perform operations with Amazon S3 storage. You can receive details of the errors that occur when invoking S3 operations using the S3 responses themselves. Please read the Amazon S3 connector reference guide to learn more about the operations you can perform with the Amazon S3 connector. | https://apim.docs.wso2.com/en/latest/reference/connectors/amazons3-connector/amazons3-connector-example/ | 2022-08-08T01:46:07 | CC-MAIN-2022-33 | 1659882570741.21 | apim.docs.wso2.com
OverviewOverview
Affiliate Product Rates allow you to set commission rates specific to an Affiliate-Product pairing. This commission rate will take priority over all commission rates except for global recurring rates if you are selling subscription products.
Simply select a product and Affiliate to pair and add a commission rate and type to create a new Affiliate Product Rate that will lock in referrals for this product to that Affiliate at the commission rate, overriding the commission rates you may have set for this Affiliate. You can only create a single Affiliate Product Rate per Affiliate-Product pairing, but you can make as many as you like for a single product using different Affiliates. You can view a list of all of your rates on the list page.
Auto Referrals
Products Auto-Referrals allow you to link a Product and an Affiliate without requiring the use of Affiliate Links. This is useful when you want to reward an Affiliate for promoting and selling a specific product regardless of how the customers were referred to your site, such as a revenue split agreement. To create an auto-referral, check the Enable Auto Referral checkbox when creating a new Affiliate Product Rate.
When viewing your rates on the list page, Auto Referral enabled rates will be marked with the Enabled status. You can learn more about how this functionality works and how to test it by viewing our Testing the Auto-Referral Feature documentation.
| https://docs.solidaffiliate.com/affiliate-product-rates/ | 2022-08-08T02:04:54 | CC-MAIN-2022-33 | 1659882570741.21 | docs.solidaffiliate.com
Licensing for the Splunk Data Stream Processor
After you purchase the Splunk Data Stream Processor, you receive a license that you must upload to use the Data Stream Processor. You must have a valid DSP license in order to activate any pipelines.
Types of licenses
There are two types of Splunk DSP licenses: a Splunk license and a Universal license.
- Splunk license: Send data from a supported data source into a Splunk index or to Splunk Observability.
- Universal license: Send data from a supported data source into any supported data destination.
Add a license
Follow these steps to add a license to the Data Stream Processor.
After you add a license, you can return to this page again to view your license.
Upgrade your license
Contact Splunk Sales if you want to upgrade your license.
License expiration
Contact Splunk Sales to renew your license.
If your license expires, you will not be able to activate any new pipelines or re-activate any existing pipelines in the Data Stream Processor. You will still be able to create pipelines, edit existing pipelines, and perform other pipeline-related tasks.
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.3.0, 1.3.1
| https://docs.splunk.com/Documentation/DSP/1.3.0/Admin/License | 2022-08-08T00:41:13 | CC-MAIN-2022-33 | 1659882570741.21 | docs.splunk.com
The Tiles table shows all the tiles that have mapped kernels and buffers in the ADF graph. For example, in this design there are three tiles used, where two of them contain kernels (Tile [24,0], and Tile [25,0]), and two of them have buffers mapped (Tile[24,0], and Tile[24,1]).
Figure 1. Tiles Table | https://docs.xilinx.com/r/en-US/ug1076-ai-engine-environment/Tiles | 2022-08-08T00:43:13 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.xilinx.com |
Desktop 3D Controller¶
Intro¶
The Desktop 3D controller is used as the default pawn (camera) for all three different operations provided with MAGES Unreal. It is a useful development tool for accelerating iteration time when creating VR content; as immersive as VR can be, sometimes you need a quick way to check that some interaction works.
Additionally, the Desktop 3D controller allows you to test networking logic within a single instance of the editor.
Warning
To re-iterate: the Desktop 3D controller is not intended to be the pawn that you will ship the operation with, but only as a tool for development and testing purposes.
We’ll be going through an explanation of the main controls and concepts of the Desktop 3D controller by looking into the reasoning behind them.
Vision & Intention¶
It is impossible to represent all possible movements of a VR user’s head and hands using the mouse & keyboard, without resorting to an interface that makes you feel more like a pilot than anything else. Thus it was obvious from the start that the user will have to choose to control one aspect of a VR character at a time:
The whole body (Avatar), or
One of the hands
Note
You can use the number row on the keyboard to quickly change modes:
1 – Avatar Mode
2 – Left Hand
3 – Right Hand
Additionally, for the hands, sub-modes needed to be implemented since our SDK allows for very fine-grained controller motion requirements, which cannot be emulated with keyboard keys. So, the translation and rotation of the hand needed to be mapped to the user’s mouse movement. In this manner: Through the radial menu (activated using the Spacebar), you can choose to switch to controlling one of the hands in one of two sub-modes:
Position (Translation)
Orientation (Rotation)
Note
The hotkey for switching between translation and rotation without going through the radial menu is the “Tab” key
Interaction¶
Interaction is easy to map: the left mouse button corresponds to the trigger button on a VR controller, and the right mouse button to the grip button, accordingly. Switching to a different mode while having grabbed an object will keep the hand in the same state, so you can hold multiple objects simultaneously.
Note
If you switch back to controlling one of the hands that has grabbed an object, you do not need to hold any of the mouse buttons; the item will stay grabbed. You can press the corresponding mouse button to let go of the item.
Swapping Axes¶
But there is still a problem here: The mouse can only input 2D coordinate movements: horizontal and vertical, which severely limits the user’s options in both cases:
By holding down “Left Ctrl”, you can temporarily change the axis of the translation or rotation
Throw Hand (or Quick Grab)¶
This takes care of a partial mapping of fine-grained movement to the keyboard & mouse. But what about mapping one of the most common aspects of any VR simulation? What about Grabbing?
The “Throw Hand” command does exactly this: It moves the left or right hand to the object at the user’s center of the screen, and tries to grab anything once it’s there.
Note
You can execute the “Throw Hand” command either through the radial menu, or by using Ctrl + Click:
Ctrl + Left Mouse Button Click — Throw right hand
Ctrl + Right Mouse Button Click — Throw left hand
Basic Controls¶
By default, the controller will start in the “Avatar” mode. This is the closest mode to any typical 3D application that uses a first-person perspective. You can:
Look around with the mouse
Move with the W,A,S and D keys
Interact with UIs using the left mouse button, or ‘F’
Open the radial menu with the Spacebar
Note
Movement with WASD is enabled on all different modes, so even when you're controlling one of the hands you can still move the whole avatar around. | https://docs.oramavr.com/en/4.0.2/unreal/manual/2dof/index.html | 2022-08-08T01:25:56 | CC-MAIN-2022-33 | 1659882570741.21 | docs.oramavr.com
You can migrate StorageGRID Webscale nodes from one Linux host to another to perform host maintenance (such as OS patching and reboot) without impacting the functionality or availability of your grid.
You migrate one or more nodes from one Linux host (the "source host") to another Linux host (the "target host"). The target host must have previously been prepared for StorageGRID Webscale use.
To migrate a grid node to a new host, both of the following conditions must be true:
For more information, see "Node migration requirements" in the StorageGRID Webscale installation instructions for your Linux operating system. | https://docs.netapp.com/sgws-111/topic/com.netapp.doc.sg-maint/GUID-AF3CE830-0D84-458C-BC03-74A9B403F6CE.html?lang=en | 2021-02-25T05:54:16 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.netapp.com |
Deploy and configure the new OneDrive sync client for Mac
This article is for IT administrators managing OneDrive for Business settings in work or school environments. If you're not an IT administrator, read Get started with the new OneDrive sync client on Mac OS X.
Manage OneDrive settings on macOS using property list (Plist) files
Use the following keys to preconfigure or change settings for your users. The keys are the same whether you run the store edition or the standalone edition of the sync client, but the property list file name and domain name will be different. When you apply the settings, make sure to target the appropriate domain depending on the edition of the sync client.
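For example, a single setting can be pushed from the command line with the defaults tool; this is only a sketch that assumes the standalone edition's com.microsoft.OneDrive preference domain and the DisablePersonalSync key:
defaults write com.microsoft.OneDrive DisablePersonalSync -bool True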
Deploy the sync client settings
The following table lists all the settings that are currently exposed for the OneDrive sync client. You need to configure the parameters in parentheses.
You can also configure the OneDrive Standalone sync client to receive delayed updates. | https://docs.microsoft.com/en-us/onedrive/deploy-and-configure-on-macos?redirectSourcePath=%252fcs-cz%252farticle%252fnasazen%2525C3%2525AD-a-konfigurace-nov%2525C3%2525A9ho-synchroniza%2525C4%25258Dn%2525C3%2525ADho-klienta-onedrivu-na-macu-eadddc4e-edc0-4982-9f50-2aef5038c307 | 2018-08-14T18:40:26 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.microsoft.com |
Change the assignment policy on a mailbox
Summary: Learn how to change the role assignment policy that's assigned to a mailbox.
What do you need to know before you begin?
Use the Exchange Management Shell to change the assignment policy on a mailbox
To change the assignment policy that's assigned to a mailbox, use the following syntax.
Set-Mailbox <mailbox alias or name> -RoleAssignmentPolicy <assignment policy>
This example sets the assignment policy to Unified Messaging Users on the mailbox Brian.
Set-Mailbox Brian -RoleAssignmentPolicy "Unified Messaging Users"
Use the Exchange Management Shell to change the assignment policy on a group of mailboxes assigned a specific assignment policy
Note
You can't use the EAC to change the assignment policy on a group of mailboxes all at once.
Get-Mailbox | Where { $_.RoleAssignmentPolicy -Eq "<assignment policy to find>" } | Set-Mailbox -RoleAssignmentPolicy <assignment policy to set>
This example finds all the mailboxes assigned to the Redmond Users - No Voicemail assignment policy and changes the assignment policy to Redmond Users - Voicemail Enabled.
Get-Mailbox | Where { $_.RoleAssignmentPolicy -Eq "Redmond Users - No Voicemail" } | Set-Mailbox -RoleAssignmentPolicy "Redmond Users - Voicemail Enabled"
This example includes the WhatIf parameter so that you can see all the mailboxes that would be changed without committing any changes.
Get-Mailbox | Where { $_.RoleAssignmentPolicy -Eq "Redmond Users - No Voicemail" } | Set-Mailbox -RoleAssignmentPolicy "Redmond Users - Voicemail Enabled" -WhatIf
For detailed syntax and parameter information, see Get-Mailbox or Set-Mailbox. | https://docs.microsoft.com/en-us/Exchange/permissions/policy-assignments-for-mailboxes | 2018-08-14T17:08:39 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.microsoft.com |
All API calls are directed through the cloud gateways in the traditional model of API management. This approach has inherent limitations such as the following.
- There may be inefficiencies if only the API backend or only the API consumers are not in the cloud.
- Security concerns associated with API calls going through external gateways.
WSO2 API Cloud offers users the hybrid gateway deployment option for deploying API Gateway(s) on-premise. This enables them to get the best of both the API Cloud and API Manager.
- Rapid deployment model and low total cost of ownership: Most of the API management infrastructure (including management user interfaces, the developer portal, and analytics) is in the cloud, which makes it always accessible to you and your subscribers. This cloud-based infrastructure does not require maintenance.
- High performance, security, and compliance: You can put the API gateway anywhere including your own network. This cuts down the network overhead, ensures security and compliance, and removes the need for VPN or another network connectivity solution.
The following are the options that allow you to test the On-Prem gateway with WSO2 API Cloud.
Option 1 - Run the On-Prem gateway using the downloaded binary file
Follow the steps below to configure and test an On-Prem gateway.
Step 1 - Download your On-Prem Gateway
Log in to WSO2 API Cloud () as an Admin User
In the API Publisher, click On-Prem Gateways.
- Click Download On-Prem Gateway to start the download.
- You will receive a notification as shown below after the download starts.
Step 2 - Configure your On-Prem Gateway
- Configure the Gateway by going to <ON-PREM_GATEWAY_HOME>/bin, and executing the configuration script:
On Windows: cloud-init.bat --run
On Linux/Mac OS: sh cloud-init.sh
Your Gateway will be configured with the required settings to integrate with the API Cloud.
- Provide your email address, organization key, and password.
Your organization key will be displayed as shown below.
- The status of the On-Prem Gateway will be displayed after completion.
Step 3 - Run the on-premise gateway
- Start the API Gateway by going to <ON-PREM_GATEWAY_HOME>/bin, and executing the startup script:
On Windows: wso2server.bat --run
On Linux/Mac OS: sh wso2server.sh
- The status of the On-Prem Gateway will be updated after you start the gateway
Step 4 - Test your on-premise gateway
- Log in to WSO2 API Cloud and create an API.
- Invoke the API using cURL.
The cURL command to invoke the GET method of the API should be similar to the following:
curl -k -X GET --header 'Accept: text/xml' --header 'Authorization: Bearer dXNlckBvcmcuY29tQHRlc3RPcmcxMjM6UGFzc3dvcmQ=' ''
Replace the gateway URL in the above cURL command with your on-premise gateway URL as indicated below, and run it. The response to this cURL should be identical to that received in the previous step.
curl -k -X GET --header 'Accept: text/xml' --header 'Authorization: Bearer dXNlckBvcmcuY29tQHRlc3RPcmcxMjM6UGFzc3dvcmQ=' ''
Note that you can also use the HTTP port for API invocations. The HTTP port number would be 8280 by default. An example is given below.
curl -X GET --header 'Accept: text/xml' --header 'Authorization: Bearer dXNlckBvcmcuY29tQHRlc3RPcmcxMjM6UGFzc3dvcmQ=' ''
When you run multiple On-Premise gateways on the same server or virtual machine (VM), you must change the default port of each Gateway with an offset value to avoid port conflicts. An offset defines the number by which all ports in the runtime (e.g., HTTP/S ports) will be increased. For example, if the default HTTPS port is 8243 and the offset is 1, the effective HTTPS port will change to 8244. For each additional On-Premise Gateway instance that you run in the same server or virtual machine, you have to set the port offset to a unique value. The offset of the default port is considered to be 0.
There are two ways to set an offset to a port:
- Pass the port offset to the server during startup. The following command starts the server with the default port incremented by 1.
./wso2server.sh -DportOffset=1
- Set the port offset in the Ports section in the <ON-PREM_GATEWAY_HOME>/repository/conf/carbon.xml file as shown below.
<Offset>1</Offset>
If your request is successful, your response will be similar to the following.
<?xml version="1.0" encoding="utf-8"?> <PhoneReturn xmlns: <Company>Toll Free</Company> <Valid>true</Valid> <Use>Assigned to a code holder for normal use.</Use> <State>TF</State> <RC /> <OCN /> <OriginalNumber>18006785432</OriginalNumber> <CleanNumber>8006785432</CleanNumber> <SwitchName /> <SwitchType /> <Country>United States</Country> <CLLI /> <PrefixType>Landline</PrefixType> <LATA /> <sms>Landline</sms> <Email /> <AssignDate>Unknown</AssignDate> <TelecomCity /> <TelecomCounty /> <TelecomState>TF</TelecomState> <TelecomZip /> <TimeZone /> <Lat /> <Long /> <Wireless>false</Wireless> <LRN /> </PhoneReturn>
Option 2 - Run the On-Prem gateway as a docker container
Log in to docker.cloud.wso2.com with your username and password.
docker login docker.cloud.wso2.com
Username: [email protected]
Password: ******
Login Succeeded
Pull the docker image. A sample command is given below.
docker pull docker.cloud.wso2.com/onprem-gateway:2.2.0
Run the docker container.
docker run -p127.0.0.1:8243:8243 -p127.0.0.1:8280:8280 -e "WSO2_CLOUD_ORG_KEY=your_organization_key" -e "[email protected]" -e "WSO2_CLOUD_PASSWORD=your_cloud_password" docker.cloud.wso2.com/onprem-gateway:2.2.0
You can enable gateway debug logs by passing the following environment variable when running the docker container:
-e "LOG4J_PROPERTIES=log4j.logger.org.apache.synapse.transport.http.headers=DEBUG,log4j.logger.org.apache.synapse.transport.http.wire=DEBUG"
- Test your On-Prem gateway.
- If you create/update an API and publish/re-publish it from API Cloud Publisher, it could take up to a maximum of 10 minutes before your changes become effective in the on-premise gateway.
- If you create/update a throttling tier from API Cloud, it could take up to a maximum of 15 minutes before your changes become effective in the on-premise gateway.
- Statistics of API usage in your on-premise gateway are published to API Cloud every 6 - 6.5 hours.
Overriding the default gateway configurations
Go to <API-M_HOME>/repository/conf/
Replace the default configurations in the on-premise-gateway.properties file with the required custom configurations.
To customize the synchronization time interval of the API updates:
Replace the api.update.task.cron expression with a custom cron expression.
To customize the synchronization time interval of the Throttling tiers:
Replace the throttling.synchronization.cron expression with a custom cron expression.
To customize the time period it takes for API statistics to get published to API cloud:
Replace the file.data.upload.task.cron expression with a custom cron expression. | https://docs.wso2.com/display/APICloud/Working+with+Hybrid+API+Management | 2018-08-14T17:38:21 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.wso2.com
Understanding Managed Apps
When an app is managed by Jamf Pro, you have more control over distribution and removal of the app, as well as the backup of app data and options for updating the app. The following table provides more detail:
Managed App Requirements
There are two factors that determine whether an app can be managed by Jamf Pro:
Whether users have to pay for the app
The app must be free or paid for by the organization using Apple's Volume Purchase Program (VPP). For more information on VPP, visit one of the following websites:
Apple Deployment Programs Help
Volume Purchase Program for Business
Apple School Manager Help
The mobile devices to which you distribute the app
Mobile devices must have iOS 5 or later, or tvOS 10.2 or later and an MDM profile that supports managed apps.
Mobile devices that have iOS 5 or later when they are enrolled with Jamf Pro automatically obtain an MDM profile that supports managed apps. For instructions on distributing an updated MDM profile that supports managed apps, see the Distributing Updated MDM Profiles Knowledge Base article.
If you try to make an app managed but these requirements are not met, the app behaves as unmanaged. | http://docs.jamf.com/10.6.0/jamf-pro/administrator-guide/Understanding_Managed_Apps.html | 2018-08-14T17:59:06 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.jamf.com |
Caching¶
Elex uses a simple file-based caching system based using CacheControl.
Each request to the AP Election API is cached. Each subsequent API request sends the etag. If the API returns a 304 not modified response, the cached version of the request is used.
Exit codes¶
If the underlying API call is returned from the cache, Elex exits with exit code 64.
For example, the first time you run an Elex results command, the exit code will be 0.
elex results '02-01-2016'
echo $?
0
The next time you run the command, the exit code will be 64.
elex results '02-01-2016'
echo $?
64
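A deployment script can branch on that exit code; here is a minimal sketch (the output file name is just an illustration):
elex results '02-01-2016' > results.csv
if [ $? -eq 64 ]; then
    echo "Results were served from the cache"
fi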
Clearing the cache¶
To clear the cache, run:
elex clear-cache
If the cache is empty, the command will return with exit code
65. This is unlikely to be helpful to end users, but helps with automated testing. | http://elex.readthedocs.io/en/stable/caching.html | 2018-08-14T17:24:31 | CC-MAIN-2018-34 | 1534221209216.31 | [] | elex.readthedocs.io |
Apache Tomcat is an open source application server which supports Java Servlets, JavaServer Pages and Java WebSockets.
How to configure the Apache Tomcat server? How is the Apache server connected with Tomcat?
How to create.
You should now be able to access the application at.
How to create an SSL certificate for Apache Tomcat?
A detailed guide is available in the official Apache Tomcat documentation.
How to increase the upload size limit in Tomcat?
How to publish a Web page?
To serve Web pages with Apache Tomcat, simply copy your files to the default document root directory at /opt/bitnami/apache-tomcat/webapps/ROOT. | https://docs.bitnami.com/aws/components/tomcat/ | 2018-08-14T18:06:32 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.bitnami.com |
WebDriver.
Here's how to get started with WebDriver for Microsoft Edge.
The Microsoft Edge implementation of WebDriver supports both the W3C WebDriver specification and the JSON Wire Protocol for backwards compatibility with existing tests.
Getting started with WebDriver for Microsoft Edge
- Install Windows 10.
- Download the appropriate Microsoft WebDriver server for your build of Windows.
- Download the WebDriver language binding of your choice. All Selenium language bindings support Microsoft Edge. | https://docs.microsoft.com/en-us/microsoft-edge/webdriver | 2018-08-14T17:13:37 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.microsoft.com
How to: Correct the Data-tier Name Configuration
If.
To verify the connection to the Team Foundation database and that SQL Server services are running
Log on to the data-tier server on which the Team Foundation database is defined.
Note.
To determine the server name that is stored in the tbl_database table of the TfsIntegration database
Log on to the data-tier server.
Open the Start menu, point to All Programs, point to Microsoft SQL Server 2005 or Microsoft SQL Server 2008, and then click SQL Server Management Studio.
In the Connect to Server dialog box, click Database Engine in Server type, type the name of the server to which you want to connect, and then click Connect.
Note.
To change the data source name defined in the Services Web.config file:
Under the appSettings node, locate the ConnectionString key.
Change the value that is assigned to the Data Source to match the server name that is defined in the tbl_database table of the TfsIntegration database.
Save the file, and close the editor.
See Also
Tasks
How to: Rename a Data-Tier Server
Resolving Problems Connecting to the Data-tier Server
Concepts
Team Foundation Server Permissions
Other Resources
Correcting Connection and Configuration Procedures | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/bb909757(v=vs.90) | 2018-08-14T18:36:54 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.microsoft.com |
Export Settings
JustOn can export invoice and booking detail data
- to CSV files to be imported in accounting or ERP systems, and
- to SEPA XML files for triggering payment operations with banks.
The custom setting Export Settings controls the export of invoice and booking detail data.
Info
This document covers general information about the custom setting Export Settings. For details about specific export configurations, see
Invoice & Booking Details CSV
SEPA Direct Debit XML
SEPA Credit XML
Cloud Storage
Export Settings Information
The custom setting Export Settings includes the following information:
Defining Export Settings
Depending on your organization's requirements, you must define an export configuration.
- In Setup, open Custom Settings.
In Salesforce Lightning, navigate to Custom Code > Custom Settings.
In Salesforce Classic, navigate to Develop > Custom Settings.
- Click Manage in the row of Export Settings.
- Click New.
- Specify the details as necessary.
For information about specific export configurations, see
- Click Save. | https://docs.juston.com/en/jo_admin_config_appset_export/ | 2018-08-14T17:24:10 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.juston.com |
#include <wx/ipc.h>
See the interprocess communication overview for an example of how to do this.
Constructs a server object.
Registers the server using the given service name.
Amazon Aurora MySQL Database Engine Updates: 2018-03-13
Version: 1.14.4
Amazon Aurora MySQL v1.14.4 is generally available. If you wish to create new DB clusters in Aurora v1.14.4, you can do so using the AWS CLI or the Amazon RDS API and specifying the engine version. You have the option, but are not required, to upgrade existing 1.14.x DB clusters to Aurora v1.14.4.
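For example, with the AWS CLI the cluster creation might look like the following sketch; the identifier, credentials, and the exact engine version string are placeholders, not values from this note:
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora \
    --engine-version <1.14.4 engine version string> \
    --master-username admin \
    --master-user-password <password>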
With version 1.14.4 RDS DB Instance.
Should you have any questions or concerns, the AWS Support Team is available on the community forums and through AWS Premium Support at. For more information, see Maintaining an Amazon RDS DB Instance.
Zero-Downtime Patching
The zero-downtime patching (ZDP) attempts, on a best-effort basis, to preserve client connections through an engine patch. For more information about ZDP, see Zero-Downtime Patching.
New Features
Aurora MySQL now supports db.r4 instance classes.
Improvements
Fixed an issue where LOST_EVENTS were generated when writing large binlog events.
Integration of MySQL Bug Fixes
Ignorable events do not work and are not tested (Bug #74683)
NEW->OLD ASSERT FAILURE 'GTID_MODE > 0' (Bug #20436436) | https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Updates.1144.html | 2018-08-14T18:04:19 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Gets information about a unique device type.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginGetDevice and EndGetDevice.
Namespace: Amazon.DeviceFarm
Assembly: AWSSDK.DeviceFarm.dll
Version: 3.x.y.z
The device type's ARN. | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/DeviceFarm/MIDeviceFarmGetDeviceAsyncStringCancellationToken.html | 2018-08-14T17:52:57 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com
What.
Follow these steps to get started with your customized Group Booking Engine.
NOTE: If a guest wants to book a room with arrival and departure dates different than the Group's, they will need to call you to make the booking.
| https://docs.bookingcenter.com/pages/diffpages.action?pageId=7012409&originalId=7012408 | 2018-08-14T17:46:40 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.bookingcenter.com
- Configuring VLANs on a Single Subnet
- Configuring VLANs on Multiple Subnets
- Configuring Multiple Untagged VLANs across Multiple Subnets
- Configuring Multiple VLANs with 802.1q Tagging
Before configuring a VLAN on a single subnet, make sure that Layer 2 Mode is enabled.
The following figure shows a single subnet environment
Layer 2 mode must be enabled on the NetScaler for the NetScaler to have direct access to the servers.
To configure a VLAN on a single subnet, follow the procedures described in Creating or Modifying a VLAN. VLAN configuration parameters are not required, because the network interfaces are members of this VLAN. | https://docs.citrix.com/en-us/netscaler/11-1/networking/interfaces/configuring-vlans/configuring-vlans-on-a-single-subnet.html | 2018-08-14T17:52:55 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.citrix.com |
SHLoadNonloadedIconOverlayIdentifiers function
Signals the Shell that during the next operation requiring overlay information, it should load icon overlay identifiers that either failed creation or were not present for creation at startup. Identifiers that have already been loaded are not affected.
Syntax
SHSTDAPI SHLoadNonloadedIconOverlayIdentifiers( );
Parameters
This function has no parameters.
Return Value
Type: HRESULT
Always returns S_OK.
Remarks
A call to SHLoadNonloadedIconOverlayIdentifiers does not result in the immediate loading of a Shell extension, nor does it cause an icon overlay handler to be loaded. A call to SHLoadNonloadedIconOverlayIdentifiers results in a situation such that the next code to ask for icon overlay information triggers a comparison of icon overlays in the registry to those that are already loaded. If an icon overlay is newly registered and the system has not already reached its upper limit of fifteen icon overlays, the new overlay is loaded. SHLoadNonloadedIconOverlayIdentifiers alone does not load a new icon overlay; you also need to trigger an action that uses the overlay, such as a refresh of a Windows Explorer view.
For more information, see How to Implement Icon Overlay Handlers. | https://docs.microsoft.com/en-us/windows/desktop/api/shellapi/nf-shellapi-shloadnonloadediconoverlayidentifiers | 2018-08-14T18:34:22 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.microsoft.com |
Administering Dock Items
There are two ways to add or remove Dock items on computers: using a policy or using Jamf Remote.
When you add a Dock item on computers, you can choose whether to add it to the beginning or the end of the Dock.
Requirements
To add or remove a Dock item on computers, the Dock item must be added to Jamf Admin or Jamf Pro. For more information, see Managing Dock Items.
Adding or Removing a Dock Item Using a Policy
Select the Dock Items payload and click Configure.
Click Add for the Dock item you want to add or remove.
Choose "Add to Beginning of Dock", "Add to End of Dock", or "Remove from Dock" from the Action pop-up menu.
Adding or Removing a Dock Item Using Jamf Remote
Select the computers on which you want to add or remove the Dock item.
Click the Dock tab.
In the list of Dock items, select the checkbox for the Dock item you want to add or remove.
Select the Add to Beginning of Dock, Add to End of Dock, or Remove from Dock option. of a policy, and view and flush policy logs. | http://docs.jamf.com/10.6.0/jamf-pro/administrator-guide/Administering_Dock_Items.html | 2018-08-14T17:56:50 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.jamf.com |
Importing Users to the JSS from Apple School Manager
You can import users to the Jamf Software Server (JSS) from Apple School Manager. This allows you to automatically create new users in the JSS from the users in Apple School Manager or append information to existing users in the JSS.
When you import users from Apple School Manager, the following fields are populated in the Roster category of the user's inventory information:
Last Sync
Status
User Number
Full name from Roster
Middle Name
Managed Apple ID
Grade
Password Policy
An assistant in the JSS guides you through the process of importing all users or a subset of users from Apple School Manager. If you choose to import a subset of users, you need to choose the criteria and values for the users you want to import. For example, you could import the students from an "Addition & Subtraction" course or an "Algebra" course only.
You can select from the following options when importing users from Apple School Manager:
Match to an existing user in the JSS —Imported users are matched to existing users in the JSS based on the criteria selected when integrating the JSS with Apple School Manager. (For more information, see Integrating with Apple School Manager.) The JSS displays potential existing users in the JSS that match the specified criteria. When you select an existing user in the JSS to match the imported user to, information is populated in the Roster category of the user's inventory information. If this information existed prior to matching the imported user with the existing user, the information is updated.
Create a new user in the JSS —If you choose to create a new user, the imported user is automatically added to the JSS in the Users tab and inventory information is entered in the Roster category of the user's inventory information.
Note: The number of users you can import and match varies depending on your environment. Importing a large number of users at once may affect performance. You may need to perform more than one import to import all users to the JSS from Apple School Manager.
After users are imported, if an Apple School Manager Sync Time is configured for the Apple School Manager instance, user information is updated automatically based on the scheduled frequency and time. (For more information about configuring the Apple School Manager Sync Time, see Integrating with Apple School Manager.)
Requirements
To import users to the JSS from Apple School Manager, you need the following:
The JSS integrated with Apple School Manager. (For more information, see Integrating with Apple School Manager.)
A JSS user account with the "Users" privilege.
Importing Users from Apple School Manager
Log in to the JSS with a web browser.
Click Users at the top of the page.
Click Search Users.
Leave the search field blank and press the Enter key.
Click Import
.
If you choose to import a subset of users, choose the criteria, operator, and values to use to define the subset of users to import.
Note: When importing a subset of users based on multiple criteria, choose "or" from the And/Or pop-up menu(s) if the criteria are the same.
Follow the onscreen instructions to import users.
Note: If you are importing a large number of users (e.g., 10,000), a progress bar is displayed in the assistant during the import process. You can click Done and perform other management tasks while the import takes place.
User information is imported to the JSS and applied in the Users tab.
If you have site access only, users are imported to your site only.
Related Information
For related information, see the following section in this guide:
Classes
Find out how to create classes in the JSS for use with Apple's Classroom app or Casper Focus.
For related information, see the following technical paper:
Integrating with Apple School Manager to Support Apple's Education Features Using the Casper Suite
Get step-by-step instructions on how to integrate with Apple School Manager to support Apple's education features with the Casper Suite. | http://docs.jamf.com/9.99.0/casper-suite/administrator-guide/Importing_Users_to_the_JSS_from_Apple_School_Manager.html | 2018-08-14T17:57:22 | CC-MAIN-2018-34 | 1534221209216.31 | [array(['images/download/attachments/15181365/choose_users_step.png',
'images/download/attachments/15181365/choose_users_step.png'],
dtype=object)
array(['images/download/attachments/15181365/choose_instance_match_users.png',
'images/download/attachments/15181365/choose_instance_match_users.png'],
dtype=object) ] | docs.jamf.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Return the metadata at the path
Namespace: Amazon.Util
Assembly: AWSSDK.Core.dll
Version: 3.x.y.z
Path at which to query the metadata; may be relative or absolute.
.NET Standard:
Supported in: 1.3
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Util/MEC2InstanceMetadataGetDataString.html | 2018-08-14T17:52:57 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com |
Orchestration activities for VMware Your instance must have access to a MID Server configured to use VMware to run certain activities. VMware orchestration activities are for use in workflows. VMware orchestration activities include: Add Disk Change Network Change State Check VM Alive Clone Configure Linux Configure Windows Delete Snapshot Destroy Discover Customization Specifications Get VM Events Get VM Guest Info Reconfigure Revert to Snapshot Snapshot Conversion functionConverts a UUID to the proper format automatically.Determine VMware activity result valuesVMware activities communicate with vCenter through a MID Server.Managed Object BrowserThe Managed Object Browser (MOB) is a vCenter utility that allows users to view detailed information about vCenter objects, such as images and virtual machines.Managed object reference IDA managed object reference (MOR) ID uniquely identifies a VMware virtual machine.Virtual machine UUIDIf you are writing a workflow and not using an automated workflow from ServiceNow, you must provide a properly formatted UUID.Add Disk activityThe Add Disk activity creates a new disk on a virtual machine.Change Network activityThe Change Network activity changes the network that a virtual machine is configured to use.Change State activityThe Change State activity sends commands to vCenter to control the power state of a given VMware virtual machine, such as powering on and powering off the VM.Check VM Alive activityThe Check VM Alive activity uses the VMware API to determine if a newly configured virtual machine is alive.Clone activityThe Clone activity sends commands to vCenter to clone a given VMware virtual machine or virtual machine template.Configure Linux activityThe Configure Linux activity sends commands to vCenter to set the identity and network information on a given VMware virtual Linux machine.Configure Windows activityThe Configure Windows activity sends commands to vCenter to set the identity and network information on a given VMware virtual Windows machine.Delete Snapshot activityThe Delete Snapshot activity deletes a saved virtual machine snapshot from a vCenter server.Destroy activityThe Destroy activity sends a command to vCenter to destroy the named VMware virtual machine.Discover Customization Specifications activityThe Discover Customization Specifications activity talks to vCenter to retrieve guest customization specifications that can be used for VM provisioning.Get VM Events activityThe Get VM Events activity retrieves the most recent events for a virtual machine.Get VM Guest Info activityThe Get VM Guest Info activity retrieves the guest customization information for a virtual machine.Reconfigure activityThe Reconfigure activity updates the number of CPUs and the amount of memory assigned to a virtual machine.Revert to Snapshot activityThe Revert to Snapshot activity reverts a virtual machine to the state captured in a given snapshot.Snapshot activityThe Snapshot activity creates a snapshot of a virtual machine. | https://docs.servicenow.com/bundle/istanbul-it-operations-management/page/product/vmware-support/concept/c_OrchestrationVMwareActivities.html | 2018-08-14T17:15:01 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.servicenow.com |
Use the code example below to create a news item and to set its ID, Title, and Content with the Native API.
NOTE: The ID argument is assigned to the master version of the news item. For more information about the different version of a news item, see For developers: Content lifecycle.
In the example below, you perform the following:
Back To Top | https://docs.sitefinity.com/for-developers-create-a-news-item-with-the-native-api | 2018-08-14T17:08:45 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.sitefinity.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.. SetQueueAttributesAsync.
Namespace: Amazon.SQS
Assembly: AWSSDK.SQS.dll
Version: 3.x.y.z
The URL of the Amazon SQS queue whose attributes are set. Queue URLs are case-sensitive.
A map of attributes to set. The following lists the names, descriptions, and values of the special request parameters that the SetQueueAttributes action uses: DelaySeconds - The length of time, in seconds, for which the delivery of all messages in the queue is delayed. Valid values: An integer from 0 to 900 (15 minutes). action waits for a message to arrive. Valid values: an integer from 0 to 20 (seconds). The default is 0. RedrivePolicy - The string that includes the parameters for the dead-letter queue functionality of the source queue. is exceeded. maxReceiveCount - The number of times a message is delivered to the source queue before being moved to the dead-letter queue. 30. For more information about the visibility timeout, see Visibility Timeout in the Amazon Simple Queue Service Developer Guide. The following attributes apply only to server-side-encryption: KmsMasterKeyId - The ID of an AWS AWS Key Management Service API Reference. KmsDataKeyReusePeriodSeconds -). is the same as the one generated for the first MessageDeduplicationId, the two messages are treated as duplicates and only one copy of the message is delivered. Any other valid special request parameters (such as the following) are ignored: ApproximateNumberOfMessagesApproximateNumberOfMessagesDelayedApproximateNumberOfMessagesNotVisibleCreatedTimestampLastModifiedTimestampQueueArn
| https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SQS/MISQSSetQueueAttributesStringDictionary!String,%20String!.html | 2018-08-14T17:52:42 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com |
You can duplicate an existing template and save it with a different name. This is useful when you want to have similar templates and do not want to create them from scratch.
When you duplicate a template, Sitefinity CMS copies its layout, all the widgets and their configuration, the responsive design settings, the template’s permissions, and all the template’s settings. If a widget inside the template is branched, it is also branched inside the duplicated template. For more information, see Template widgets editable in pages.
Perform the following:
Back To Top | https://docs.sitefinity.com/duplicate-page-templates | 2018-08-14T17:08:21 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.sitefinity.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Deletes a conditional forwarder that has been set up for your AWS directory.
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DeleteConditionalForwarderAsync.
Namespace: Amazon.DirectoryService
Assembly: AWSSDK.DirectoryService.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the DeleteConditionalForwarder service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/DirectoryService/MIDirectoryServiceDeleteConditionalForwarderDeleteConditionalForwarderRequest.html | 2018-08-14T17:52:18 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com |
Viewing Keys
You can use the Encryption keys section of the AWS Management Console to view customer master keys (CMKs), including CMKs that you manage and CMKs that are managed by AWS. You can also use the operations in the AWS Key Management Service (AWS KMS) API, such as ListKeys, DescribeKey, and ListAliases, to view information about CMKs.
Viewing CMKs (Console)
You can see a list of your customer managed keys in the AWS Management Console.
To view your CMKs ).
The console shows all the CMKs in your AWS account in the chosen region, including customer-managed and AWS managed CMKs. The page displays the alias, key ID, status, and creation date for each CMK.
To show additional columns in the list of CMKs
Choose the settings button (
) in the upper-right corner of the page.
Select the check boxes for the additional columns to show, and then choose Close.
To show detailed information about the CMK
The details include the Amazon Resource Name (ARN), description, key policy, tags, and key rotation settings of the CMK.
Choose the alias of the CMK.
If the CMK does not have an alias, choose the empty cell in the Alias column, as shown in the following image.
To find CMKs
You can use the Filter box to find CMKs based on their aliases.
In the Filter box, type all or part of the alias name of a CMK. Only the CMKs with alias names that match the filter appear.
Viewing CMKs (API)
You can use the AWS Key Management Service (AWS KMS) API to view your CMKs. Several operations return details about existing CMKs. The following examples use the AWS Command Line Interface (AWS CLI), but you can use any supported programming language.
Topics
ListKeys: Get the ID and ARN of All CMKs
The ListKeys operation returns the ID and Amazon Resource Name (ARN) of all CMKs in the account and region. To see the aliases and key IDs of your CMKs that have aliases, use the ListAliases operation.
For example, this call to the
ListKeys operation returns the ID and ARN of
each CMK in this fictitious account.
$
aws kms list-keys
{ "Keys": [ { "KeyArn": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab" }, { "KeyArn": "arn:aws:kms:us-west-2:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321", "KeyId": "0987dcba-09fe-87dc-65ba-ab0987654321" }, { "KeyArn": "arn:aws:kms:us-east-2:111122223333:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d", "KeyId": "1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d" } }
DescribeKey: Get Detailed Information About a CMK
The DescribeKey operation returns details about the specified CMK. To identify the CMK, use its key ID, key ARN, alias name, or alias ARN.
For example, this call to
DescribeKey returns information about an existing
CMK. The fields in the response vary with the key state and the key origin.
$":" } }
You can use the
DescribeKey operation on a predefined AWS alias, that is,
an AWS alias with no key ID. When you do, AWS KMS associates the alias with an AWS managed CMK and returns its
KeyId and
Arn in the response.
GetKeyPolicy: Get the Key Policy Attached to a CMK
The GetKeyPolicy operation gets
the key policy that is attached to the CMK. To identify the CMK, use its key ID or
key ARN.
You must also specify the policy name, which is always
default. (If your output
is difficult to read, add the
--output text option to your command.)
$
aws kms get-key-policy --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --policy-name default
{ "Version" : "2012-10-17", "Id" : "key-default-1", "Statement" : [ { "Sid" : "Enable IAM User Permissions", "Effect" : "Allow", "Principal" : { "AWS" : "arn:aws:iam::111122223333:root" }, "Action" : "kms:*", "Resource" : "*" } ] }
ListAliases: View CMKs by Alias Name
The ListAliases operation returns
aliases in the account and region. The
TargetKeyId in the response displays the
key ID of the CMK that the alias refers to, if any.
By default, the ListAliases command returns all aliases in the
account and region. This includes aliases
that you created and associated with your customer-managed CMKs, and aliases that AWS created and associated with AWS managed CMKs in your account. You can recognize AWS
aliases because their names have the format
aws/, such as
<service-name>
aws/dynamodb.
The response might also include aliases that have no
TargetKeyId field,
such as the
aws/redshift alias in this example. These are predefined aliases
that AWS has created but has not yet associated with a CMK.
$
aws kms list-aliases
{ "Aliases": [ { "AliasArn": "arn:aws:kms:us-west-2:111122223333:alias/ImportedKey", "TargetKeyId": "0987dcba-09fe-87dc-65ba-ab0987654321", "AliasName": "alias/ExampleKey" }, { " }, { "AliasArn": "arn:aws:kms:us-west-2:111122223333:alias/aws/dynamodb", "TargetKeyId": "1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d", "AliasName": "alias/aws/dynamodb" }, { "AliasArn": "arn:aws:kms:us-west-2:111122223333:alias/aws/redshift", "AliasName": "alias/aws/redshift" }, { "AliasArn": "arn:aws:kms:us-west-2:111122223333:alias/aws/s3", "TargetKeyId": "0987ab65-43cd-21ef-09ab-87654321cdef", "AliasName": "alias/aws/s3" } ] }
To get the aliases that refer to a particular CMK, use the
KeyId parameter.
The parameter value can be the Amazon Resource Name (ARN) of the CMK or the CMK ID.
You
cannot specify an alias or alias ARN.
The command in the following example gets the aliases that refer to a customer managed CMK. But you can use a command like this one to find the aliases that refer to AWS managed CMKs, too.
$
aws kms list-aliases --key-id arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
{ "Aliases": [ { " }, ] }
Finding the Key ID and ARN
To identify your AWS KMS CMKs in programs, scripts, and command line interface (CLI) commands, you use the ID of the CMK or its Amazon Resource Name (ARN). Some API operations also let you use the CMK alias.
To find the CMK ID and ARN (console)
Open the Encryption Keys section of the AWS Identity and Access Management (IAM) console at.
For Region, choose the appropriate AWS region. Do not use the region selector in the navigation bar (top right corner).
The page displays the key ID and alias, along with the status and creation date of each CMK.
To find the CMK ARN (key ARN), choose the alias name. This opens a page of details that includes the key ARN.
To find the CMK ID and ARN (API) | https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html | 2018-08-14T18:03:41 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com |
This document is for Celery's development version, which can be significantly different from previous releases. Get the stable docs here: 3.1.
Configuration and defaults¶
This document describes the configuration options available.
If you’re using the default loader, you must create the
celeryconfig.py
module and make sure it is available on the Python path.
- Example configuration file
- New lowercase settings
- Configuration Directives
- General settings
- Time and date settings
- Task settings
- Task execution settings
- Task result backend settings
- Database backend settings
- RPC backend settings
- Cache backend settings
- Redis backend settings
- Cassandra backend settings
- Elasticsearch backend settings
- Riak backend settings
- IronCache backend settings
- Couchbase backend settings
- CouchDB backend settings
- AMQP backend settings
- File-system backend settings
- Consul K/V store backend settings
- Message Routing
- Broker Settings
- Worker
- Error E-Mails
- Events
- Logging
- Security
- Custom Component Classes (advanced)
- Beat Settings (celery beat). imports = ('myapp.tasks',) ## Using the database to store task state and results. result_backend = 'db+sqlite:///results.db' task_annotations = {'tasks.add': {'rate_limit': '10/s'}}
New lowercase settings¶
Version 4.0 introduced new lower case settings and setting organization.
The major difference between previous versions, apart from the lower case
names, are the renaming of some prefixes, like
celerybeat_ to
beat_,
celeryd_ to
worker_, and most of the top level
celery_ settings
have been moved into a new
task_ prefix.
Celery will still be able to read old configuration files, so there is no rush in moving to the new settings format.
Configuration Directives¶
General settings¶
accept_content¶
A white-list accept_content = ['json'] # or the actual content-type (MIME) accept_content = ['application/json']
Time and date settings¶.
timezone¶
Configure Celery to use a custom time zone. The timezone value can be any time zone supported by the pytz library.
If not set the UTC timezone is used. For backwards compatibility
there is also a
enable_utc setting, and this is set
to false the system local timezone is used instead.
Task settings¶
task_annotations¶
This setting can be used to rewrite any task attribute from the configuration. The setting can be a dict, or a list of annotation objects that filter for tasks and return a map of attributes to change.
This will change the
rate_limit attribute for the
tasks.add
task:
task_annotations = {'tasks.add': {'rate_limit': '10/s'}}
or change the same for all tasks:
task_annotations = {'*': {'rate_limit': '10/s'}}
You can change methods too, for example the
on_failure handler:
def my_on_failure(self, exc, task_id, args, kwargs, einfo): print('Oh no! Task failed: {0!r}'.format(exc)) task_annotations = {'*': {'on_failure': my_on_failure}}
If you need more flexibility then you can use objects instead of a dict to choose which tasks to annotate:
class MyAnnotate(object): def annotate(self, task): if task.name.startswith('tasks.'): return {'rate_limit': '10/s'} task_annotations = (MyAnnotate(), {other,})
task_compression¶
Default compression used for task messages.
Can be
gzip,
bzip2 (if available), or any custom
compression schemes registered in the Kombu compression registry.
The default is to send uncompressed messages.
task_protocol¶
Default task message protocol version. Supports protocols: 1 and 2 (default is 1 for backwards compatibility).
task_serializer¶
A string identifying the default serialization method to use. Can be
pickle (default), json, yaml, msgpack or any custom serialization
methods that have been registered with
kombu.serialization.registry.
See also
task_publish_retry¶
New in version 2.2.
Decides if publishing task messages will be retried in the case
of connection loss or other connection errors.
See also
task_publish_retry_policy.
Enabled by default.
task_publish_retry_policy¶
New in version 2.2.
Defines the default policy when retrying publishing a task message in the case of connection loss or other connection errors.
See Message Sending Retry for more information.
Task execution settings¶
task_always_eager¶.
task_eager_propagates¶
If this is
True, eagerly executed tasks (applied by task.apply(),
or when the
task_always_eager setting is enabled), will
propagate exceptions.
It’s the same as always running
apply() with
throw=True.
task_remote_tracebacks¶
If enabled task results will include the workers stack when re-raising task errors.
This requires the tblib library, which can be installed using pip:
$ pip install 'tblib>=1.3.0'
task_ignore_result¶
Whether to store the task return values or not (tombstones).
If you still want to store errors, just not successful return values,
you can set
task_store_errors_even_if_ignored.
task_store_errors_even_if_ignored¶
If set, the worker stores all task errors in the result store even if
Task.ignore_result is on.
task_track_started¶
If
True the task will report its status as ‘started’ when the
task is executed by a worker. The default value is
False as
the normal behavior is to not report that level of granularity. Tasks
are either pending, finished, or waiting to be retried. Having a ‘started’
state can be useful for when there are long running tasks and there is a
need to report which task is currently running.
task_time_limit¶
Task hard time limit in seconds. The worker processing the task will be killed and replaced with a new one when this is exceeded.()
task_acks_late¶
Late ack means the task messages will be acknowledged after the task has been executed, not just before, which is the default behavior.
See also
FAQ: Should I use retry or acks_late?.
task_reject_on_worker_lost¶
Even if
task_acks_late.
Task result backend settings¶.
cassandra
Use Cassandra to store the results. See Cassandra backend settings.
elasticsearch
Use Elasticsearch to store the results. See Elasticsearch backend settings.
ironcache
Use IronCache to store the results. See IronCache backend settings.
couchbase
Use Couchbase to store the results. See Couchbase backend settings.
couchdb
Use CouchDB to store the results. See CouchDB backend settings.
filesystem
Use a shared directory to store the results. See File-system backend settings.
amqp
Older AMQP backend (badly) emulating a database-based backend. See AMQP backend settings.
consul
Use the Consul K/V store to store the results See Consul K/V store backend settings.
result_serializer¶
Result serialization format. Default is
pickle. See
Serializers for information about supported
serialization formats.
result_compression¶
Optional compression method used for task results.
Supports the same options as the
task_serializer setting.
Default is no compression.
result_expires¶
Time (in seconds, or a
timedelta object) for when after
stored task tombstones will be deleted.
A built-in periodic task will delete the results after this time
(
celery.backend_cleanup), assuming that
celery beat is
enabled. The task runs daily at 4am.
A value of
None or 0 means results will never expire (depending
on backend specifications).
Default is to expire after 1 day.
Note
For the moment this only works with the AMQP, database, cache, and Redis backends.
When using the database backend, celery beat must be running for the results to be expired.
result_cache_max¶
Enables client caching of results, which can be useful for the old ‘amqp’ backend where the result is unavailable as soon as one result instance consumes it.
This is the total number of results to cache before older results are evicted.
A value of 0 or None means no limit, and a value of
-1
will disable the cache.
Disabled by default.
Database backend settings¶
Database URL Examples¶
To use the database backend you have to configure the
result_backend setting with a connection URL and the
db+
prefix:
result_backend = 'db+scheme://user:password@host:port/dbname'
Examples:
# sqlite (filename) result_backend = 'db+sqlite:///results.sqlite' # mysql result_backend = 'db+mysql://scott:tiger@localhost/foo' # postgresql result_backend = 'db+postgresql://scott:tiger@localhost/mydatabase' # oracle result_backend = 'db+oracle://scott:[email protected]:1521/sidname'
Please see Supported Databases for a table of supported databases,
and Connection String for more information about connection
strings (which is the part of the URI that comes after the
db+ prefix).
sqlalchemy_dburi¶
This setting is no longer used as it’s now possible to specify
the database URL directly in the
result_backend setting.
sqlalchemy_engine_options¶
To specify additional SQLAlchemy database engine options you can use
the
sqlalchmey_engine_options setting:
# echo enables verbose logging from SQLAlchemy. app.conf.sqlalchemy_engine_options = {'echo': True}
sqlalchemy_short_lived_sessions¶.
sqlalchemy_table_names¶
When SQLAlchemy is configured as the result backend, Celery automatically creates two tables to store result meta-data for tasks. This setting allows you to customize the table names:
# use custom table names for the database result backend. sqlalchemy_table_names = { 'task': 'myapp_taskmeta', 'group': 'myapp_groupmeta', }
RPC backend settings¶:
result_backend = 'cache+memcached://127.0.0.1:11211/'
Using multiple Memcached servers:
result_backend = """ cache+memcached://172.19.26.240:11211;172.19.26.242:11211/ """.strip()
The “memory” backend stores the cache in memory only:
result_backend = 'cache' cache_backend = 'memory'
cache_backend_options¶
You can set pylibmc options using the
cache_backend_options
setting:
cache_backend_options = { 'binary': True, 'behaviors': {'tcp_nodelay': True}, }
cache_backend¶
This setting is no longer used as it’s now possible to specify
the cache backend directly in the
result_backend setting.
Redis backend settings¶
Configuring the backend URL¶
Note
The Redis backend requires the redis library:
To install the redis package use pip or easy_install:
$ pip install redis
This backend requires the
result_backend
setting to be set to a Redis URL:
result_backend = 'redis://:password@host:port/db'
For example:
result_backend = 'redis://localhost/0'
which is the same as:
result_backend = 'redis://'
The fields of the URL are defined as follows:
Password used to connect to the database.
host
Host name or IP address of the Redis server. e.g. localhost.
port
Port to the Redis server. Default is 6379.
db
Database number to use. Default is 0. The db can include an optional leading slash.
redis_max_connections¶
Maximum number of connections available in the Redis connection pool used for sending and retrieving results.
Cassandra backend settings¶
Note
This Cassandra backend driver requires cassandra-driver.
To install, use pip or easy_install:
$ pip install cassandra-driver
This backend requires the following configuration directives to be set.
cassandra_keyspace¶
The key-space in which to store the results. e.g.:
cassandra_keyspace = 'tasks_keyspace'
cassandra_table¶
The table (column family) in which to store the results. e.g.:
cassandra_table = 'tasks'
cassandra_read_consistency¶
The read consistency used. Values can be
ONE,
TWO,
THREE,
QUORUM,
ALL,
LOCAL_QUORUM,
EACH_QUORUM,
LOCAL_ONE.
cassandra_write_consistency¶
The write consistency used. Values can be
ONE,
TWO,
THREE,
QUORUM,
ALL,
LOCAL_QUORUM,
EACH_QUORUM,
LOCAL_ONE.
cassandra_entry_ttl¶
Time-to-live for status entries. They will expire and be removed after that many seconds after adding. Default (None) means they will never expire.
cassandra_auth_provider¶
AuthProvider class within
cassandra.auth module to use. Values can be
PlainTextAuthProvider or
SaslAuthProvider.
cassandra_auth_kwargs¶
Named arguments to pass into the authentication provider. e.g.:
cassandra_auth_kwargs = { username: 'cassandra', password: 'cassandra' }
Elasticsearch backend settings¶
To use Elasticsearch as the result backend you simply need to
configure the
result_backend setting with the correct URL.
Riak backend settings¶
Note
The Riak backend requires the riak library:
To install the riak package use pip or easy_install:
$ pip install riak
This backend requires the
result_backend
setting to be set to a Riak URL:
result_backend = 'riak://host:port/bucket'
For example:
result_backend = 'riak://localhost/celery
which is the same as:
result_backend = 'riak://'
The fields of the URL are defined as follows:
host
Host name or IP address of the Riak server. e.g. ‘localhost’.
port
Port to the Riak server using the protobuf protocol. Default is 8087.
bucket
Bucket name to use. Default is celery. The bucket needs to be a string with ASCII characters only.
Alternatively, this backend can be configured with the following configuration directives.
riak_backend_settings¶
This is a dict supporting the following keys:
host
The host name of the Riak server. Defaults to
"localhost".
port
The port the Riak server is listening to. Defaults to 8087.
bucket
The bucket name to connect to. Defaults to “celery”.
protocol
The protocol to use to connect to the Riak server. This is not configurable via
result_backend
IronCache backend settings¶
Note
The IronCache backend requires the iron_celery library:
To install the iron_celery package use pip or easy_install:
$ pip install iron_celery
IronCache is configured via the URL provided in
result_backend, for example:
result_backend
set to a Couchbase URL:
result_backend = 'couchbase://username:password@host:port/bucket').
CouchDB backend settings¶
Note
The CouchDB backend requires the pycouchdb library:
To install the Couchbase package use pip, or easy_install:
$ pip install pycouchdb
This backend can be configured via the
result_backend
set to a CouchDB URL:
result_backend = 'couchdb://username:password@host:port/container'
The URL is formed out of the following parts:
username
User name to authenticate to the CouchDB server as (optional).
Password to authenticate to the CouchDB server (optional).
host
Host name of the CouchDB server. Defaults to
localhost.
port
The port the CouchDB server is listening to. Defaults to
8091.
container
The default container the CouchDB server is writing to. Defaults to
default.
AMQP backend settings¶).
Note
The AMQP backend requires RabbitMQ 1.1.0 or higher to automatically expire results. If you are running an older version of RabbitMQ you should disable result expiration like this:
result_expires = None
result_exchange_type¶
The exchange type of the result exchange. Default is to use a direct exchange.
result_persistent¶
If set to
True, result messages will be persistent. This means the
messages will not be lost after a broker restart. The default is for the
results to be transient.
File-system backend settings¶
This backend can be configured using a file URL, for example:
CELERY_RESULT_BACKEND = ''
The configured directory needs to be shared and writable by all servers using the backend.
If you are trying Celery on a single system you can simply use the backend without any further configuration. For larger clusters you could use NFS, GlusterFS, CIFS, HDFS (using FUSE) or any other file-system.
Consul K/V store backend settings¶
The Consul backend can be configured using a URL, for example:
CELERY_RESULT_BACKEND = ‘consul://localhost:8500/’
The backend will storage results in the K/V store of Consul as individual keys.
The backend supports auto expire of results using TTLs in Consul.
Message Routing¶
task_queues¶
Most users will not want to specify this setting and should rather use the automatic routing facilities.
If you really want to configure advanced routing, this setting should
be a list of
kombu.Queue objects the worker will consume from.
Note that workers can be overridden this setting via the
-Q option, or individual queues from this
list (by name) can be excluded using the
-X
option.
Also see Basics for more information.
The default is a queue/exchange/binding key of
celery, with
exchange type
direct.
See also
task_routes
task_routes¶
A list of routers, or a single router used to route tasks to queues. When deciding the final destination of a task the routers are consulted in order.
A router can be specified as either:
A router class instance.
A string which provides the path to a router class
- A dict containing router specification:
Will be converted to a
celery.routes.MapRouteinstance.
- A list of
(pattern, route)tuples:
Will be converted to a
celery.routes.MapRouteinstance.
Examples:
task_routes = { 'celery.ping': 'default', 'mytasks.add': 'cpu-bound', 'feed.tasks.*': 'feeds', # <-- glob pattern re.compile(r'(image|video)\.tasks\..*'): 'media', # <-- regex 'video.encode': { 'queue': 'video', 'exchange': 'media' 'routing_key': 'media.video.encode', }, } task_routes = ('myapp.tasks.Router', {'celery.ping': 'default})
Where
myapp.tasks.Router could be:
class Router(object): def route_for_task(self, task, args=None, kwargs=None): if task == 'celery.ping': return {'queue': 'default'}
route_for_task may return a string or a dict. A string then means
it’s a queue name in
task)
Values defined in
task_routes have precedence over values defined in
task_queues when merging the two.
With the follow settings:
task_queues = { 'cpubound': { 'exchange': 'cpubound', 'routing_key': 'cpubound', }, } task_routes = { 'tasks.add': { 'queue': 'cpubound', 'routing_key': 'tasks.add', 'serializer': 'json', }, }
The final routing options for
tasks.add will become:
{'exchange': 'cpubound', 'routing_key': 'tasks.add', 'serializer': 'json'}
See Routers for more examples.
task_queue_ha_policy¶
This will set the default HA policy for a queue, and the value
can either be a string (usually
all):
task_queue_ha_policy = 'all'
Using ‘all’ will replicate the queue to all current nodes, Or you can give it a list of nodes to replicate to:
task_queue_ha_policy = ['rabbit@host1', 'rabbit@host2']
Using a list will implicitly set
x-ha-policy to ‘nodes’ and
x-ha-policy-params to the given list of nodes.
See for more information.
task_queue_max_priority¶
See RabbitMQ Message Priorities.:
task_routes = { 'tasks.add': {'exchange': 'C.dq', 'routing_key': '[email protected]'} }
task_create_missing_queues¶
If enabled (default), any queues specified that are not defined in
task_queues will be automatically created. See
Automatic routing.
task_default_queue¶
The name of the default queue used by .apply_async if the message has no route or no custom queue has been specified.
This queue must be listed in
task_queues.
If
task_queues is not specified then it is automatically
created containing one queue entry, where this name is used as the name of
that queue.
The default is: celery.
task_default_exchange¶
Name of the default exchange to use when no custom exchange is
specified for a key in the
task_queues setting.
The default is: celery.
task_default_exchange_type¶
Default exchange type used when no custom exchange type is specified
for a key in the
task_queues setting.
The default is: direct.
task_default_routing_key¶
The default routing key used when no custom routing key
is specified for a key in the
task_queues setting.
The default is: celery.
Broker Settings¶
broker_url¶
Default broker URL. This must be.
More than one broker URL, of the same transport, can also be specified. The broker URLs can be passed in as a single string that is semicolon delimited:
broker_url = 'transport://userid:password@hostname:port//;transport://userid:password@hostname:port//'
Or as a list:
broker_url = [ 'transport://userid:password@localhost:port//', 'transport://userid:password@hostname:port//' ]
The brokers will then be used in the
broker_failover_strategy.
See URLs in the Kombu documentation for more information.
broker_read_url /
broker_write_url¶
These settings can be configured, instead of
broker_url to specify
different connection parameters for broker connections used for consuming and
producing.
Example:
broker_read_url = 'amqp://user:[email protected]:56721' broker_write_url = 'amqp://user:[email protected]:56722'
Both options can also be specified as a list for failover alternates, see
broker_url for more information./green-threads . This setting is disabled when using gevent.
Worker¶
imports¶
A sequence of modules to import when the worker starts.
This is used to specify the task modules to import, but also to import signal handlers and additional remote control commands, etc.
The modules will be imported in the original order.
include¶
Exact same semantics as
imports, but can be used as a means
to have different import categories.
The modules in this setting are imported after the modules in
imports.
worker.
worker_prefetch_multiplier¶
How many messages to prefetch at a time multiplied by the number of concurrent processes. The default is 4 (four messages for each process). The default setting is usually a good choice, however – if you have very long running tasks waiting in the queue and you have to start the workers, note that the first worker to start will receive four times the number of messages initially. Thus the tasks may not be fairly distributed to the workers.
To disable prefetching, set
worker_prefetch_multiplier to 1.
Changing that setting to 0 will allow the worker to keep consuming
as many messages as it wants.
For more on prefetching, read Prefetch Limits
Note
Tasks with ETA/countdown are not affected by prefetch limits.
worker_lost_wait¶
In some cases a worker may be killed without proper cleanup,
and the worker may have published a result before terminating.
This value specifies how long we wait for any missing results before
raising a
WorkerLostError exception.
Default is 10.0
worker_max_tasks_per_child¶
Maximum number of tasks a pool worker process can execute before it’s replaced with a new one. Default is no limit.
worker_max_memory_per_child¶
Maximum amount of resident memory that may be consumed by a worker before it will be replaced by a new worker. If a single task causes a worker to exceed this limit, the task will be completed, and the worker will be replaced afterwards. Default: no limit.
worker_state_db¶
Name of the file used to stores persistent worker state (like revoked tasks). Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).
Can also be set via the
celery worker --statedb argument.
Not enabled by default.
worker
task_send. task_send_charset = 'utf-8' # email_host_user = 'servers' # email_host_password = 's3cr3t'
Events¶
worker_send_task_events¶
Send task-related events so that tasks can be monitored using tools like
flower. Sets the default value for the workers
-E argument.
task_send_sent_event¶
New in version 2.2.
If enabled, a
task-sent event will be sent for every task so tasks can be
tracked before they are consumed by a worker.
Disabled by default..
event_queue_expires¶
Expiry time in seconds (int/float) for when after a monitor clients
event queue will be deleted (
x-expires).
Default is never, relying on the queue auto-delete setting.
event_serializer¶
Message serialization format used when sending event messages.
Default is
json. See Serializers.
Logging¶
worker_hijack_root_logger¶
New in version 2.2.
By default any previously configured handlers on the root logger will be removed. If you want to customize your own logging handlers, then you can disable this behavior by setting worker_hijack_root_logger = False.
Note
Logging can also be customized by connecting to the
celery.signals.setup_logging signal.
worker_log_color¶
Enables/disables colors in logging output by the Celery apps.
By default colors are enabled if
- the app is logging to a real terminal, and not a file.
- the app is not running on Windows.
worker_log_format¶
The format to use for log messages.
Default is:
[%(asctime)s: %(levelname)s/%(processName)s] %(message)s
See the Python
logging module for more information about log
formats.
worker_task_log_format¶
The format to use for log messages logged in tasks.
Default is:
[%(asctime)s: %(levelname)s/%(processName)s] [%(task_name)s(%(task_id)s)] %(message)s
See the Python
logging module for more information about log
formats.
worker_redirect_stdouts¶
If enabled stdout and stderr will be redirected to the current logger.
Enabled by default. Used by celery worker and celery beat.
Security¶
security_key¶
New in version 2.5.
The relative or absolute path to a file containing the private key used to sign messages when Message Signing is used.
security_certificate¶
New in version 2.5.
The relative or absolute path to an X.509 certificate file used to sign messages when Message Signing is used.
security_cert_store¶
New in version 2.5.
The directory containing X.509 certificates used for
Message Signing. Can be a glob with wild-cards,
(for example
/etc/certs/*.pem).
Custom Component Classes (advanced)¶
worker_pool¶
Name of the pool class used by the worker.
Eventlet/Gevent
Never use this option to select the eventlet or gevent pool.
You must use the
-P option to
celery worker instead, to ensure the monkey patches
are not applied too late, causing things to break in strange ways.
Default is
celery.concurrency.prefork:TaskPool.
worker_pool_restarts¶
If enabled the worker pool can be restarted using the
pool_restart remote control command.
Disabled by default.
worker_autoscaler¶
New in version 2.2.
Name of the autoscaler class to use.
Default is
celery.worker.autoscale:Autoscaler.
worker_autoreloader¶
Name of the auto-reloader class used by the worker to reload Python modules and files that have changed.
Default is:
celery.worker.autoreload:Autoreloader.
worker_consumer¶
Name of the consumer class used by the worker.
Default is
celery.worker.consumer.Consumer
Beat Settings (celery beat)¶
beat_scheduler¶
The default scheduler class. Default is
celery.beat:PersistentScheduler.
Can also be set via the
celery beat -S argument.
beat_schedule_filename¶
Name of the file used by PersistentScheduler to store the last run times of periodic tasks. Can be a relative or absolute path, but be aware that the suffix .db may be appended to the file name (depending on Python version).
Can also be set via the
celery beat --schedule argument.
beat_sync_every¶
The number of periodic tasks that can be called before another database sync is issued. Defaults to 0 (sync based on timing - default of 3 minutes as determined by scheduler.sync_every). If set to 1, beat will call sync after every task message sent.
beat. | http://docs.celeryproject.org/en/master/configuration.html | 2016-06-24T21:56:38 | CC-MAIN-2016-26 | 1466783391519.2 | [] | docs.celeryproject.org |
Changelog for package tf2_geometry_msgs
0.4.12 (2014-09-18)
0.4.11 (2014-06-04)
0.4.10 (2013-12-26)
0.4.9 (2013-11-06)
0.4.8 (2013-11-06)
0.4.7 (2013-08-28)
0.4.6 (2013-08-28)
0.4.5 (2013-07-11)
0.4.4 (2013-07-09)
making repo use CATKIN_ENABLE_TESTING correctly and switching rostest to be a test_depend with that change.
0.4.3 (2013-07-05)
0.4.2 (2013-07-05)
0.4.1 (2013-07-05)
0.4.0 (2013-06-27)
moving convert methods back into tf2 because it does not have any ros dependencies beyond ros::Time which is already a dependency of tf2
Cleaning up unnecessary dependency on roscpp
converting contents of tf2_ros to be properly namespaced in the tf2_ros namespace
Cleaning up packaging of tf2 including: removing unused nodehandle cleaning up a few dependencies and linking removing old backup of package.xml making diff minimally different from tf version of library
Restoring test packages and bullet packages. reverting 3570e8c42f9b394ecbfd9db076b920b41300ad55 to get back more of the packages previously implemented reverting 04cf29d1b58c660fdc999ab83563a5d4b76ab331 to fix
#7
0.3.6 (2013-03-03)
0.3.5 (2013-02-15 14:46)
0.3.4 -> 0.3.5
0.3.4 (2013-02-15 13:14)
0.3.3 -> 0.3.4
0.3.3 (2013-02-15 11:30)
0.3.2 -> 0.3.3
0.3.2 (2013-02-15 00:42)
0.3.1 -> 0.3.2
0.3.1 (2013-02-14)
0.3.0 -> 0.3.1
0.3.0 (2013-02-13)
switching to version 0.3.0
add setup.py
added setup.py etc to tf2_geometry_msgs
adding tf2 dependency to tf2_geometry_msgs
adding tf2_geometry_msgs to groovy-devel (unit tests disabled)
fixing groovy-devel
removing bullet and kdl related packages
disabling tf2_geometry_msgs due to missing kdl dependency
catkinizing geometry-experimental
catkinizing tf2_geometry_msgs
add twist, wrench and pose conversion to kdl, fix message to message conversion by adding specific conversion functions
merge tf2_cpp and tf2_py into tf2_ros
Got transform with types working in python
A working first version of transforming and converting between different types
Moving from camelCase to undescores to be in line with python style guides
Fixing tests now that Buffer creates a NodeHandle
add posestamped
import vector3stamped
add support for Vector3Stamped and PoseStamped
add support for PointStamped geometry_msgs
add regression tests for geometry_msgs point, vector and pose
Fixing missing export, compiling version of buffer_client test
add bullet transforms, and create tests for bullet and kdl
working transformations of messages
add support for PoseStamped message
test for pointstamped
add PointStamped message transform methods
transform for vector3stamped message | http://docs.ros.org/hydro/changelogs/tf2_geometry_msgs/changelog.html | 2019-08-17T23:53:32 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.ros.org |
You can create a project snapshot any time by going to project settings following Snapshots tab. Even more, the project snapshots are created automatically once you import new strings or perform a search/replace, so you can always revert back to previous version (enable Automatic snapshot on upload).
Users at Essential and higher plans can enable Automatic daily snapshots option to keep backups accurate.
Restoring from a snapshot is done by creating a project copy from snapshot. All the project settings, contributors, comments, screenshots and statistics are preserved when restoring project copy from the snapshot. | https://docs.lokalise.co/en/articles/1400540-project-snapshots | 2019-08-17T23:44:06 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.lokalise.co |
All Files Nodes / Layers. | https://docs.toonboom.com/help/harmony-15/premium/reference/node/node.html | 2019-08-17T23:21:01 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.toonboom.com |
3.2.0 (2017-08-30)¶
Overview of merged pull requests¶
TASK: Remove old changelogs¶
- Packages:
Neos
TASK: Final release related adjustments for 3.2¶
- Packages:
ContentRepository
Fusion
Neos
TASK: Exchange login wallpaper for version 3.2¶
Exchange the login wallpaper for version 3.2. As the image compresses very well, I chose a width of 2400 px this time.
- Packages:
Neos
SiteKickstarter
BUGFIX: Build correct paths to nodes with hidden parents¶
If a node is visible but lies beneath a hidden parent, the URI path generated for that node had “holes” and did not work. This adjusts the route part handler to return the complete URI including the URI path segments of hidden nodes in the chain up to the site node.
Preventing the display of a node that is itself hidden has to be ensured when matching a request path, not when building one.
FEATURE:.
- Packages:
ContentRepository
Fusion
BUGFIX: Allow Prototype names starting with digits¶
Prototype declarations starting with digits were previously parsed incorrectly and resulted in broken names. This change fixes it by only casting numeric strings to integers when they are used as object keys.
Fixes: #1114
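A minimal sketch of a declaration affected by this fix (vendor, prototype and template names are made up for illustration):

    prototype(Vendor.Site:404Page) < prototype(Neos.Neos:Page) {
        # Before this fix the leading digits of "404Page" were cast to an
        # integer object key, which broke the parsed prototype name.
        body.templatePath = 'resource://Vendor.Site/Private/Templates/Page/404Page.html'
    }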
BUGFIX: TypoScriptView should set response headers correctly¶
The httpVersion which could be set in the ResponseHeadImplementation was not used.
Additionally, if a header had multiple values (which can easily be done in TypoScript via RawArray) only the first header was actually transferred to the sent HTTP response.
TASK: Rework CacheSegmentParser to a recursive pattern¶
Rewrite the internals of the CacheSegmentParser to use a recursive approach which should save some memory and make the code more readable. Also refactors common logic for fetching tokens and content to methods to avoid code duplication.
Additionally adds calculation of uncached segments in the currently parsed content to make solid cache entries (full page cache) possible.
- Packages:
Fusion
FEATURE:.
- Packages:
Neos
FEATURE: Allow.
- Packages:
Media
Neos
TASK: Keep asset adjustments if asset size did not change¶
This adds a condition to compare the size of the old and new asset in order to verify the size did not change. In this case the method refit() will not be executed, to keep asset adjustments. This is very handy if you upload or replace an image with enriched meta data (e.g. IPTC or Exif).
- Packages:
ContentRepository
Media
BUGFIX: Fix nodetype thumbnail path in NodeTypeDefinition.rst¶
Corrected the path to the node type thumbnail in the NodeTypeDefinition documentation.
TASK: Don’t cache dynamic segments with disabled `entryDiscriminator`¶
With this change the caching can be disabled by setting the entryDiscriminator to false when using Content Cache mode dynamic.
Previously a cache entry was created anyway, with the entryDiscriminator cast to an (in this case empty) string.
Background: The Content Cache mode dynamic was introduced in order to allow for more flexible caching behaviors depending on the context. But one important feature did not work yet: Being able to disable the cache for certain requests. With this change performance can be improved by caching the display of an interactive element (i.e. cache Forms for GET requests)
Related: #1630
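A rough usage sketch (prototype and property names are made up, and it assumes the Eel context exposes the HTTP request as request.httpRequest); the form markup is cached for GET requests, while other requests bypass the cache because the discriminator evaluates to false:

    prototype(Vendor.Site:CommentForm) < prototype(Neos.Neos:Content) {
        @cache {
            mode = 'dynamic'
            entryIdentifier {
                node = ${node}
            }
            # A discriminator of false disables caching for this evaluation (new with this change)
            entryDiscriminator = ${request.httpRequest.method == 'GET' ? 'GET' : false}
            context {
                1 = 'node'
                2 = 'documentNode'
            }
        }
    }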
TASK: Ignore preset value if ``false``¶
Until now the ImageUri prototype would return no image if the preset was set to false (the only accepted fallback until now was null). With this change it is also possible to reset the preset with false.
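For illustration (the node property and preset names are assumptions), a preset configured on an image URI can now be dropped again in a more specific context by setting it to false:

    teaserImage = Neos.Neos:ImageUri {
        asset = ${q(node).property('image')}
        preset = 'teaser'
    }
    # In this particular view we do not want the preset, only an explicit width.
    teaserImage.preset = false
    teaserImage.maximumWidth = 400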
- Packages:
Neos
BUGFIX: Moving node back and forth in nested workspace causes SQL error¶
This change fixes a problem in the content repository which can lead to a uniqueness constraint error when a node is moved back and forth in a nested workspace.
Fixes #1639
BUGFIX: Prevent space if no css class is given¶
If no attributes.class is given, the rendered attribute always starts with a space. With this change the attribute values get trimmed.
Output before: class=" foo-bar-content"
Output after: class="foo-bar-content"
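A minimal sketch of the kind of Fusion where this showed up (prototype, property and class names are made up); when the optional part of the class list is empty, the rendered class attribute used to start with a space:

    quote = Neos.Fusion:Tag {
        tagName = 'blockquote'
        attributes = Neos.Fusion:Attributes {
            class = Neos.Fusion:RawArray {
                custom = ${q(node).property('cssClass')}
                base = 'foo-bar-content'
            }
        }
        content = ${q(node).property('text')}
    }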
BUGFIX: Valid URLs with `supportEmptySegmentForDimensions`¶
Makes sure that generated URLs observe the setting.
Fixes: #1644
BUGFIX: cut off long-form processor syntax in ContentElementWrappingImplementation¶
In the change, the default @process.contentElementWrapping on Neos.Neos:Content was changed from @process.contentElementWrapping to the long form “@process.contentElementWrapping.expression” (so it was possible to specify a position).
However, this meant the Fusion path in Frontend for such an element was calculated wrongly; It appended __meta/process/contentElementWrapping/expression<Neos.Neos:ContentElementWrapping> to the fusionPath in the DOM.
For the “old” Neos (ember-based) UI, everything works pretty much as expected (it’s quite hard to construct a scenario where this would trigger an actual bug); but the new React UI gets confused with rendering the element when the Fusion path is wrong. And as it is a core bug, let’s fix it in the core.
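For reference, the long form of the processor declaration mentioned above looks roughly like this (the prototype name is hypothetical):

    prototype(Vendor.Site:SpecialContent) < prototype(Neos.Neos:Content) {
        @process.contentElementWrapping {
            expression = Neos.Neos:ContentElementWrapping
            @position = 'end'
        }
    }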
- Packages:
Neos
BUGFIX: Trigger ContentCacheFlusher on asset update¶
The content cache does not invalidate on changes to an asset. Expected behavior would be to flush the content cache on changes to an asset (e.g. title, caption).
Neos 2.3 PR of:
BUGFIX: Hide disabled modules in submodule overviews¶
When a module has been disabled using the disabled flag, the module is hidden from the main menu and cannot be accessed, however it was still being displayed in submodule overviews.
- Packages:
Neos
FEATURE:.
Related: #964
- Packages:
Neos
FEATURE:.
- Packages:
Neos
BUGFIX: Trigger ContentCacheFlusher on asset update¶
The content cache does not invalidate on changes to an asset. Expected behavior would be to flush the content cache on changes to an asset (e.g. title, caption).
- Packages:
Neos
FEATURE: Add `async` flag to the `Neos.Neos:ImageUri` and `Neos.Neos:ImageTag` fusion-objects.
- Packages:
Neos
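A short usage sketch (the node property name is an assumption); with async enabled the URI points to a thumbnail that can be generated on first request instead of during page rendering:

    logo = Neos.Neos:ImageTag {
        asset = ${q(node).property('image')}
        maximumWidth = 600
        async = true
    }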
BUGFIX: Set default value for event uid on PostgreSQL¶
It seems that the default value set previously does not survive renaming the sequence. This leads to errors when events are to be persisted.
This change adds the expected default value (back).
- Packages:
Neos
BUGFIX: Use strict comparison to avoid nesting level error¶
Comparing objects in Fluid templates using == may lead to “nesting level too deep” errors, depending on the objects being compared.
This change replaces all non-strict comparisons with strict ones.
Fixes #1626
- Packages:
Browser
Neos
BUGFIX: Skip apply change handler if the editor is undefined¶
This change fixes a JS console error that occurred when Ember tried to call _applyChangeHandler on a property like _nodeType, because such a property is not really an editor.
BUGFIX: Correct merge of RelatedNodes template/xliff¶
This moves added labels to the Neos.Media.Browser package and adjusts the RelatedNodes.html template as needed.
Some German translations are moved as well.
Related to d5824fd4097bb658d22d0abc633ce68341c735c1 and the merge in 00f07ee986fcecf284f3548dc8b687780ccdb272.
- Packages:
Browser
Neos
BUGFIX: Publish moved nodes in nested workspaces¶
This change contains a fix and additional Behat tests which solve an issue with moving and publishing nodes in a nested workspace scenario which can lead to data corruption in the content repository.
Resolves #1608
BUGFIX: Fix missing translation for inspector file uploads¶
This changes the occurrences of Neos.Neos:Main:upload to Neos.Neos:Main:choose in Settings.yaml, as this has been the label formerly used for upload-related Inspector editors.
The label Neos.Neos:Main:upload does not seem to exist currently, so the tooltips above upload buttons in the Inspector haven’t been translated.
What I did: Fix the missing translation for the upload buttons in the image and asset inspector editors.
How I did it: Change all occurrences of Neos.Neos:Main:upload to Neos.Neos:Main:choose in Settings.yaml.
How to verify it: Hover above the upload button of the image editor. Without this change, the tooltip contains the fallback label “Upload file” in every language.
- Packages:
Neos
BUGFIX: NodeData property exists even if set to NULL¶
Even if the property is set to null AbstractNodeData::hasProperty() should return true.
FEATURE:
- Packages:
Neos
FEATURE:.
- Packages:
Neos
BUGFIX: Detect asset://so-me-uu-id links in node properties¶
To detect links to assets as “usage” in the media management, the search in the NodeDataRepository is amended as needed.
Fixes #1575
BUGFIX: Only show link to accessible nodes¶
Fixed some misleading text on the listing page and added i18n.
Related to #1578.
For the 3.x upmerge have a look at:
BUGFIX: render asset changes correctly in workspaces overview¶
This change fixes asset rendering in the workspace overview.
Fixes #1592.
TASK: Fix code example in CustomTypoScriptObjects docs¶
- Packages:
Neos
BUGFIX: Avoid orphaned content nodes when calling publishNode()¶
This change fixes an issue with using PublishingService::publishNode() which can result in an inconsistent structure in a user's workspace.
This change also changes the behavior of PublishingService::discardNode() which now will also discard content of a given document node to protect consistency.
Document nodes and their content must be published or discarded together in order to protect against inconsistencies when the document node is moved or removed in one of the base workspaces.
Fixes #1617
BUGFIX: Behat tests fail with fresh checkout¶
This change fixes an issue with failing Behat tests caused by missing isolation between tests.
When certain tests were run in a specific order, they might fail with an access denied error because no user is authenticated.
Fixes #1613
BUGFIX: Use null, not empty string in Workspace->setOwner¶
A workspace having an owner of null (plus some other factors) is considered an internal workspace. This change makes sure the owner is set to null if an empty string is passed to setOwner().
Fixes #1610
BUGFIX: Nodes are inaccessible after base workspace switch¶
This change fixes a problem with the routing cache which results in inaccessible document nodes for cases where nodes with different identifiers but the same URI path exist in separate workspaces.
Fixes #1602
TASK: Remove further TypoScript references¶
This removes even more uses of TypoScript from various places in the codebase:
- TASK: Rename typoScriptPath to fusionPath in FE/BE interaction
- TASK: Fusion RenderViewHelper adjustments
- TASK: Rename BaseTypoScript.fusion test fixture to Base.fusion
- TASK: Remove unused NoTypoScriptConfigurationException
- TASK: Remove TypoScript from internal variable/function names
- Packages:
Neos
BUGFIX: Reset broken properties to array¶
If the content of the properties property cannot be decoded from JSON correctly, it will be null. This leads to errors when any operation is done that expects it to always be an array.
This change adds a PostLoad Doctrine lifecycle method to reset properties to an empty array if it is null after reconstitution.
Fixes issue #1580
BUGFIX: Add missing namespace import in AssetService¶
This adds a missing namespace import for Uri after the upmerge of #1574.
- Packages:
Media
Neos
BUGFIX: Fix sample code¶
The sample code inside the DocBlock used the wrong view helper
BUGFIX: Fix a typo in the docs¶
- Packages:
ContentRepository
Media
Neos
BUGFIX: Create resource redirects correctly¶
The redirects for replaced resources were created using full URLs, but the redirect handler expects relative URL paths to be given.
Fixes #1573
BUGFIX: Add missing Noto Sans fonts to Media.Browser¶
- Packages:
Browser
TASK: Correct kickstarter package name in documentation¶
The kickstart package name is outdated in documentation (Creating a plugin:). I replaced it with the current and right one ().
- Packages:
Fusion
Neos
!!!TASK: Replace occurrences of ‘typoScript’ with ‘fusion’¶
- Deprecates methods with ‘TypoScript’ in name
- Replaces ‘typoScript’ with ‘fusion’ in variable names, doc blocks
- Packages:
Fusion
TASK: Add functional-test that validates the integrity of the configuration and schemas in neos-packages¶
This change adds a functional test to neos to validate that the configurations defined in the packages that are part of the flow base distribution are all valid and that the contained schema files are valid as well.
This extends a flow test case with an extended set of packages and configurations that is taken into account.
- Packages:
Neos
TASK: Remove comparison to TYPO3 CMS in documentation¶
Do we really need the comparison to TYPO3 CMS? I think this is a nice background information but not related anymore.
- Packages:
Neos
FEATURE:
- Packages:
Fusion
TASK: Update ViewHelper and Command references¶
Replaces some left over occurrences of “typo3” and updates ViewHelper and Command references accordingly.
Fixes: #1558
- Packages:
Fusion
Media
Neos
BUGFIX: Asset list should be correctly converted for UI¶
Since PR #1472 was merged the asset list was not correctly converted anymore,
this had two reasons, first the wrong converter was used for the array
itself (
ArrayTypeConverter vs.
TypedArrayConverter). This is
corrected by setting the correct converter for the respective node property
data type in the settings.
Note that user code should follow the added comment in settings on how to
configure custom types, especially array of objects. It is important to define
the
TypedArrayConverter for the array data type.
Additionally the
PropertyMapper prevent conversion of the inner objects
as with the change the targetType suddenly matched the expected type and so
the PropertyMapper just skipped those objects. That was an unexpected side
effect as the expectation was, that the configured type converter is used no
matter what. By setting the inner target type to the dummy value “string” the
PropertyMapper will proceed with the configured
TypeConverter.
Fixes: #1568 Fixes: #1565
- Packages:
Neos
FEATURE: Allow strings and arrays in ``CachingHelper::nodeTypeTag``¶.
Fixes: #871
- Packages:
Neos
BUGFIX: An empty string is not rendered as a valid node name¶
Making sure that
Utility::renderValidNodeName() actually only
result in strings with length greater zero.
Fixes: #1091
BUGFIX: Correctly require a stable version of neos/imagine¶
The
neos/image version should be a stable version. Additionally
corrects the
PHP version requirement to 7.0 an higher.
- Packages:
Media
Neos
BUGFIX: Avoid null being used in trimExplode()¶
Fixes #1552
- Packages:
ContentRepository
TASK: Corrected required php version in documentation¶
Replaced 5.5.0 with 7.0.0
_Please note that this should be also changed in 3.1 and master branch_
- Packages:
Neos
TASK: Rewrite Node Type Constraint docs for correctness and clarity¶
Completely rewrote that chapter of the docs in order to make it more explicit and understandable.
- Packages:
Neos
BUGFIX: Context variable `site` is available in Plugin prototype¶
As plugins are uncached the prototype defines which context variables
will be available inside. As
node,
documentNode and
site are
defaults that apply everywhere else, the missing
site variable
was added to the context for consistency.
Fixes: #841
BUGFIX: Avoid loading original image unless cropping occurs¶
The image inspector used to load the full image for preview. With this change a much smaller thumbnail will be loaded instead, unless the image has been cropped or is being cropped in the cropping editor. In that case we need to load the full image to give the user a crisp preview of the selected image segment.
TASK: Add content to 3.0.0 release notes¶
Fixes #1420
- Packages:
Neos
BUGFIX: Typo in User Settings doc¶
Fixed a typo in documentation in the User Settings document.
Can we verified using :
- Packages:
Neos
BUGFIX: Detect recursive prototype inheritance¶
This throws an exception if there is a direct or indirect prototype inheritance recursion.
Fix #1115
- Packages:
ContentRepository
Fusion
Neos
BUGFIX: Clarified doc block for LiveViewHelper¶
If the
LiveViewHelper doesn’t get a node as argument and neither
there is a node in the template variables it will always return true.
The adjusted doc block clarifies that you need either.
Fixes: #1416
TASK: Remove un-necessary `toString` method in FusionPathProxy¶
This was needed at some point to evaluate the proxy to a string in a Fluid template, but the way a proxy is handled was changed some time ago so that the respective methods are called to get the content of the proxy instead of just string casting, therefore it was no longer needed.
And the exception handling in the toString is not a good idea anyway (but necessary because toString cannot raise exceptions) so all in all this method is undesirable and as we don’t use it anymore it should be removed.
This is basically the result of a long debugging session at the last sprint where we implemented a short term bugfix and figured that we don’t need this method anymore and should remove it in one of the next releases.
- Packages:
Fusion
BUGFIX: Clean TypoScript of windows line-breaks¶
Multi-line EEL expressions fail if the TypoScript file had Windows linebreaks as the explode on line feed leaves the carriage return in every line which then stops the parser from detecting the end of a multi-line EEL expression. | https://neos.readthedocs.io/en/3.2/Appendixes/ChangeLogs/320.html | 2019-08-17T23:19:09 | CC-MAIN-2019-35 | 1566027313501.0 | [] | neos.readthedocs.io |
-c/--connectthe
--nologinCLI options,
-c/--connectand the connection either fails or the log in parameters are incomplete,
--nologinCLI option. Connection can be then initiated later either using the menu items described above or through execution of a Connect command from a test script.
tplanrobot.cfgconfiguration file in the user home folder to
tplanrobot.cfg.bakand restarts the application. This option may help when Robot fails to start for an invalid configuration value. | http://www.docs.t-plan.com/robot/docs/v4.2ee/gui/login.html | 2019-08-17T23:08:08 | CC-MAIN-2019-35 | 1566027313501.0 | [] | www.docs.t-plan.com |
Contents IT Service Management Previous Topic Next Topic Domain separation and Contract Management Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Domain separation and Contract Management This is an overview of domain separation and Contract Management.Use the Contract Management Overview moduleRelated conceptsContract Management useCondition check definitionsRelated referenceComponents installed with Contract ManagementRelated topicsDomain separation On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-it-service-management/page/product/contract-management/concept/domain-separation-contract-mgmt.html | 2019-08-17T23:25:55 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
_mbsnbcpy_s, _mbsnbcpy_s_l
Copies n bytes of a string to a destination string. These versions of _mbsnbcpy, _mbsnbcpy_l have security enhancements, as described in Security Features in the CRT.
Important
This API cannot be used in applications that execute in the Windows Runtime. For more information, see CRT functions not supported in Universal Windows Platform apps.
Syntax
Parameters
strDest
Destination for character string to be copied.
sizeInBytes
Destination buffer size.
strSource
Character string to be copied.
count
Number of bytes to be copied.
locale
Locale to use.
Return Value
Zero if successful; EINVAL if a bad parameter was passed in.
Remarks.
Note
Unlike the non-secure version of this function, _mbsnbcpy_s does not do any null padding and always null terminates the string..
Generic-Text Routine Mappings
Requirements
For more compatibility information, see Compatibility.
See also
Feedback | https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/mbsnbcpy-s-mbsnbcpy-s-l?view=vs-2019 | 2019-08-18T00:01:26 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.microsoft.com |
How to: Add ListObject controls to worksheets
You can add ListObject controls to a Microsoft Office Excel worksheet at design time and at runtime in document-level projects.
Applies to: The information in this topic applies to document-level projects and VSTO Add-in projects for Excel. For more information, see Features available by Office application and project type.
You can also add ListObject controls at runtime in VSTO Add-in projects.
This topic describes the following tasks:
Add ListObject controls at design time
Add ListObject controls at runtime in a document-level project
Add ListObject controls at runtime in a VSTO Add-in project
For more information about ListObject controls, see ListObject control.
Add ListObject controls at design time
There are several ways to add ListObject controls to a worksheet in a document-level project at design time: From within Excel, from the Visual Studio Toolbox, and from the Data Sources window.
Note
Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalize the IDE.
To use the Ribbon in Excel
On the Insert tab, in the Tables group, click Table.
Select the cell or cells you want to include in the list and click OK.
To use the Toolbox
From the Excel Controls tab of the Toolbox, drag a ListObject to the worksheet.
The Add ListObject Control dialog box appears.
Select the cell or cells you want to include in the list and click OK.
If you do not want to keep the default name, you can change the name in the Properties window.
To use the Data Sources window
Open the Data Sources window and create a data source for your project. For more information, see Add new connections.
Drag a table from the Data Sources window to your worksheet.
A data-bound ListObject control is added to the worksheet. For more information, see Data binding and Windows Forms.
Add ListObject controls at runtime in a document-level project
You can add the ListObject control dynamically at runtime. This enables you to create the host controls in response to events. Dynamically created list objects are not persisted in the worksheet as host controls when the worksheet is closed. For more information, see Add controls to Office documents at runtime.
To add a ListObject control to a worksheet programmatically
In the Startup event handler of
Sheet1, insert the following code to add a ListObject control to cells A1 through A4.
Microsoft.Office.Tools.Excel.ListObject employeeData; employeeData = this.Controls.AddListObject(this.get_Range("$A$1:$D$4"), "employees");
Dim employeeData As Microsoft.Office.Tools.Excel.ListObject employeeData = Me.Controls.AddListObject(Me.Range("$A$1:$D$4"), "employees")
Add ListObject controls at runtime in a VSTO Add-in project
You can add a ListObject control programmatically to any open worksheet in a VSTO Add-in project. Dynamically created list objects are not persisted in the worksheet as host controls when the worksheet is saved and then closed. For more information, see Extend Word documents and Excel workbooks in VSTO Add-ins at runtime.
To add a ListObject control to a worksheet programmatically
The following code generates a worksheet host item that is based on the open worksheet, and then adds a ListObject control to cells A1 through A4.
private void AddListObject() { Worksheet worksheet = Globals.Factory.GetVstoObject( Globals.ThisAddIn.Application.ActiveWorkbook.Worksheets[1]); Microsoft.Office.Tools.Excel.ListObject list1; Excel.Range cell = worksheet.Range["$A$1:$D$4"]; list1 = worksheet.Controls.AddListObject(cell, "list1"); }
Private Sub AddListObject() Dim NativeWorksheet As Microsoft.Office.Interop.Excel.Worksheet = Globals.ThisAddIn.Application.ActiveWorkbook.Worksheets(1) Dim worksheet As Microsoft.Office.Tools.Excel.Worksheet = Globals.Factory.GetVstoObject(NativeWorksheet) Dim list1 As Microsoft.Office.Tools.Excel.ListObject Dim cell As Excel.Range = worksheet.Range("$A$1:$D$4") list1 = worksheet.Controls.AddListObject(cell, "MyListObject") End Sub
See also
- Extend Word documents and Excel workbooks in VSTO Add-ins at runtime
- Controls on Office documents
- ListObject control
- Automate Excel by using extended objects
- Host items and host controls overview
- How to: Resize ListObject controls
- Bind data to controls in Office solutions
- Programmatic limitations of host items and host controls
Feedback | https://docs.microsoft.com/en-us/visualstudio/vsto/how-to-add-listobject-controls-to-worksheets?view=vs-2019 | 2019-08-17T22:51:12 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.microsoft.com |
If you have set up Zendesk SSO feature, user password is no longer stored in Zendesk. This may result in failed CTI integration with Virtual Contact Center. To set up the integration with Virtual Contact Center, you must enable the API token in Zendesk and configure VCC to use the token as described below:
Enter the Login URL. Append /access/login to the service URL.
For example:
If you use Zendesk SSO feature, you must enable Use Remote Login option.
Paste the API token you generated in Zendesk here.
Click the 8x8 app icon
in the Header bar to bring up the Virtual Contact Center – Agent Console.
Enter your credentials to log in to Agent Console.
From the Control Panel menu, navigate to Profile.
In the Agent Profile, under External Setup, add a valid Zendesk user name and a place holder value for password.
Save your settings. Log out and log back in. | https://docs.8x8.com/vcc-8-1-ProductDocumentation/VCC_NetSuite_IntegrationWebHelp/Content/Zendesk%20CTI%20Integration%20Configuration%20Guide%208.0v4/ZendeskSSOOption2.htm | 2019-08-17T23:44:15 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.8x8.com |
Sync.
Where: This change applies to Lightning Experience in Professional, Enterprise, Performance, and Unlimited editions.
Who: Syncing occurs for Einstein Activity Capture users who meet the following criteria.
- The email account on their user record is connected to Salesforce.
- You add them to an Einstein Activity Capture configuration that includes syncing.
How: From Setup, go to the Einstein Activity Capture settings page. Create a configuration that syncs contacts or events.
| https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_sales_productivity_einstein_activity_capture_sync.htm | 2019-08-17T22:55:22 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['release_notes/images/218_einstein_activity_capture_config.png',
'Einstein Activity Capture configuration'], dtype=object) ] | docs.releasenotes.salesforce.com |
Contents Now Platform Capabilities Previous Topic Next Topic Automated Test Framework use case examples Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Automated Test Framework use case examples Use cases can help you construct tests for common scenarios. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/auto-test-framework/concept/atf-use-cases.html | 2019-08-17T23:10:02 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
This is documentation for Orange 2.7. For the latest documentation, see Orange 3.
File¶
Reads attribute-value data from an input file.
Description¶
File widget reads the input data file (data table with data instances) and sends the data set to its output channel. It maintains a history of most recently opened files. For convenience, the history also includes a directory with the sample data sets that come pre-installed with Orange.
The widget reads data from simple tab-delimited or comma-separated files, as well as files in Weka’s arrf format.
- Browse for a data file.
- Browse through previously opened data files, or load any of the sample data files.
- Reloads currently selected data file.
- Information on loaded data set (data set size, number and types of data features).
- Opens a sub-window with advanced settings.
- Adds a report on data set info (size, features).
Advanced Options¶
- Symbol for don’t care data entry.
- Symbol for don’t know data entry.
- Settings for treatment of feature names in the feature space of Orange.
Tab-delimited data file can include user defined symbols for undefined values. The symbols for “don’t care” and “don’t know” values can be specified in the corresponding edit lines. The default values for “don’t know” and “don’t care” depend upon format. Most users will use tab-delimited files: keep the field empty or put a question mark in there and that’s it. Most algorithms do not differ between don’t know and don’t care values, so consider them both to mean undefined.
Orange will usually treat the attributes with the same name but appearing in different files as the same attribute, so a classifier which uses the attribute “petal length” from the first will use the attribute of the same name from the second. In cases when attributes from different files just accidentally bear different names, one can instruct Orange to either always construct new attribute or construct them when they differ in their domains. Use the options on dealing with new attributes with great care (if at all).
Example¶
Most Orange workflows would probably start with the File widget. In the schema below, the widget is used to read the data that is sent to both Data Table widget and to widget that displays Attribute Statistics.
| https://docs.biolab.si/2/widgets/rst/data/file.html | 2019-08-17T22:30:36 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['../../../_images/File-stamped.png',
'File widget with loaded Iris data set'], dtype=object)
array(['../../../_images/spacer.png', '../../../_images/spacer.png'],
dtype=object)
array(['../../../_images/File-Advanced-stamped.png',
'Advanced options of File widget'], dtype=object)
array(['../../../_images/spacer.png', '../../../_images/spacer.png'],
dtype=object)
array(['../../../_images/File-Workflow.png',
'Example schema with File widget'], dtype=object)] | docs.biolab.si |
This is documentation for Orange 2.7. For the latest documentation, see Orange 3.
Rank¶
Ranking of attributes in classification or regression data sets.
Description¶
Rank widget considers class-labeled data sets (classification or regression) and scores the attributes according to their correlation with the class.
- Attributes (rows) and their scores by different scoring methods (columns).
- Scoring techniques and their (optional) parameters.
- For scoring techniques that require discrete attributes this is the number of intervals to which continues attributes will be discretized to.
- Number of decimals used in reporting the score.
- Toggles the bar-based visualisation of the feature scores.
- Adds a score table to the current report.
Example: Attribute Ranking and Selection¶
Below we have used immediately after the File widget to reduce the set of data attribute and include only the most informative one:
Notice how the widget outputs a data set that includes only the best-scored attributes:
Example: Feature Subset Selection for Machine Learning¶
Following is a bit more complicated example. In the workflow below we first split the data into training and test set. In the upper branch the training data passes through the Rank widget to select the most informative attributes, while in the lower branch there is no feature selection. Both feature selected and original data sets are passed to its own Test Learners widget, which develops a Naive Bayes classifier and scores it on a test set.
For data sets with many features and naive Bayesian classifier feature selection, as shown above, would often yield a better predictive accuracy. | https://docs.biolab.si/2/widgets/rst/data/rank.html | 2019-08-17T22:42:14 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['../../../_images/Rank-stamped.png',
'../../../_images/Rank-stamped.png'], dtype=object)
array(['../../../_images/Rank-Select-Schema.png',
'../../../_images/Rank-Select-Schema.png'], dtype=object)
array(['../../../_images/Rank-Select-Widgets.png',
'../../../_images/Rank-Select-Widgets.png'], dtype=object)
array(['../../../_images/Rank-and-Test.png',
'../../../_images/Rank-and-Test.png'], dtype=object)] | docs.biolab.si |
Add the Work Queue Component to Email Application Panes
Provide your reps the same High Velocity Sales Work Queue they use in Salesforce directly in Microsoft® Outlook® and Gmail™.
Where: This change applies to Lightning Experience and Salesforce Classic in Essentials, Group, Professional, Enterprise, Performance, Unlimited, and Developer editions.
Who: Companies with High Velocity Sales and Inbox enabled can add the Work Queue component to their email application panes.
How: For the best experience, we recommend adding a tab to the Outlook integration Tab component, and placing the Work Queue component in that new tab. Include the component in each pane assigned to your inside sales reps who use the High Velocity Sales features. | https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_forcecom_lab_work_queue_in_email.htm | 2019-08-17T23:53:14 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.releasenotes.salesforce.com |
Contents Now Platform Custom Business Applications Previous Topic Next Topic Server test step: Impersonate Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Server test step: Impersonate Impersonate the specified user for the test. Impersonate specifies a user for executing subsequent steps in this test. It works for both server-side and browser-side steps and stays in effect until changed with another Impersonate step or until the test ends. The impersonation automatically ends when the test is over.Note: Do not impersonate a user with the test author role. Doing so can lead to conflicts that interfere with executing the test. Do not rely on user IDs being consistent across different instances. The system dynamically assigns users IDs so the ID for a particular user often differs from one instance to the next. When exporting and importing automated tests, keep in mind that update sets do not update the user field.. Test (Read only.) The test to which this step belongs. Step config (Read only.) The test step for this form. User The user ID for the user to impersonate. Table 2. Outputs Field Description user The user id of the user impersonated. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-application-development/page/administer/auto-test-framework/reference/atf-impersonate.html | 2019-08-17T23:12:57 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
Configuration options available for Admins in LaraPass.
List of various] Option to display the Changelog Page to end-users [Yes/No]
{warning} It is recommended not to keep changing the larapass system settings often without proper understanding of the underlying system.
You can make direct a announcement from admin menu which will be displayed on the Dashboard of all the users under
Latest Announcements
Ex: Mainteance Schedules from 5:00pm UTC to 6:00pm UTC.
We have build-in the ability to check whether there are any new updates released for LaraPass. With the next minor release, we will be adding the functionality to automatically update your app directly from the menu. | https://docs.larapass.net/1.0/admin-settings | 2019-08-17T23:55:22 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.larapass.net |
Lokalise is a better way to adapt web and mobile apps, games, IoT, software or digital content for international markets.
We offer a translation management system (TMS) that helps teams to automate, manage and translate content (text strings, documents).
What can I do with Lokalise?
With Lokalise you can:
- Translate your localization files.
- Collaborate and manage all your software localization projects in one platform.
- Implement an agile localization workflow.
- Add screenshots for automatic recognition and matching with the text strings in your projects.
- Set up automated workflows via API, use webhooks or integrate with other services.
- Preview in real-time how the translations will look like in your web or mobile app.
- Order professional translations from Lokalise translators or use machine translation.
See all Lokalise features.
Who is Lokalise for?
- Developers who want to reduce the routine manual operations by automating the translation and localization workflow.
- Project/Localization Managers who want to make the localization process faster and manage all projects and team in one place and in a more efficient way.
- Translators who want to provide the high-quality translations by leveraging screenshots, comments, machine translation, in-context editors and other tools. | https://docs.lokalise.co/en/articles/1400427-what-is-lokalise | 2019-08-17T22:57:07 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.lokalise.co |
Contents Now Platform Capabilities Previous Topic Next Topic Input variable removal Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share InputAn existing workflow already contains two input variables.Input variable removal solutionWhen editing workflows, particularly when deleting input variables, be sure to use a single update set for all variable editing and workflow publishing.Input variable removal preventionPrior to publishing a workflow version, the system validates the workflow model to assist the designer in planning for deployment. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/workflow-administration/concept/c_InputVariableRemoval.html | 2019-08-17T23:08:44 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
Add a new Web Service
Ucommerce uses WebAPI 2 to host web services and in this article you will learn to create custom web services and how to deploy them with Ucommerce.
How to create a custom web service using WebAPI 2
web services are wired up using attribute routing, you can read more about attribute routing here:
In this example we are creating a custom web service to get the customers basket from Ucommerce. All web services should start with "ucommerceapi" to avoid conflicts with the CMS's web services.
[RoutePrefix("ucommerceapi/Basket")] public class BasketController : ApiController { [Route("Get")] public IHttpActionResult Get() { if (!TransactionLibrary.HasBasket()) { return NotFound(); } //Use Ucommerce's top level APIs to get current basket for the customer. PurchaseOrder purchaseOrder = TransactionLibrary.GetBasket(false).PurchaseOrder; BasketModel basket = ConvertPurchaseOrderToBasketModel(purchaseOrder); return Ok(basket); } }
Once you have deployed this web service, it will be available on the url "/ucommerceapi/basket/get"
Attributes
"IsAuthenticated" Attribute
The "IsAuthenticated" ensures that the web services can only be request by someone who is logged in to the back office of the CMS.
[Route("Get")] [IsAuthenticated] public IHttpActionResult Get() {
"IsInRole" Attribute
The "IsInRole" ensures that the web services can only be request by someone who is logged in to the back office of the CMS and the specified permissions.
[Route("Get")] [IsInRole(typeof(SettingsRole))] public IHttpActionResult Get() {
All of the roles can be found in the "UCommerce.EntitiesV2" namespace.
Deployment
The only thing left is deploying your web services to your Ucommerce environment. This is done by adding the assembly to a suitable folder underneath Ucommerce/apps then Ucommerce will automatically pick it up when the application start. | https://docs.ucommerce.net/ucommerce/v7.16/extending-ucommerce/add-a-new-web-service.html | 2019-08-17T22:51:11 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.ucommerce.net |
Image Control
- PDF for offline use
-
- Sample Code:
-
Let us know how you feel about this
0/250
last updated: 2016-09
watchOS provides a
WKInterfaceImage control to display
images and simple animations. Some controls
can also have a background image (such as
buttons, groups, and interface controllers).
Use asset catalog images to add images to Watch Kit apps. Only @2x versions are required, since all watch devices have Retina displays.
It is good practice to ensure the images themselves are the correct size for the watch display. Avoid using incorrectly sized images (especially large ones) and scaling to display them on the watch.
You can use the Watch Kit sizes (38mm and 42mm) in an asset catalog image to specify different images for each display size.
Images on the Watch
The most efficient way to display images is to
include them in the watch app project and
display them using the
SetImage(string imageName)
method.
For example, the WatchKitCatalog sample has a number of images added to an asset catalog in the watch app project:
These can be efficiently loaded and displayed
on the watch using
SetImage with the string
name parameter:
myImageControl.SetImage("Whale"); myOtherImageControl.SetImage("Worry");
Background Images
The same logic applies for the
SetBackgroundImage (string imageName)
on the
Button,
Group, and
InterfaceController classes. Best
performance is achieved by storing the images in the watch app itself.
Images in the Watch Extension
In addition to loading images that are stored in the watch app itself, you can send images from the extension bundle to the watch app for display (or you could download images from a remote location, and display those).
To load images from the watch extension, create
UIImage instances and then call
SetImage with
the
UIImage object.
For example, the WatchKitCatalog sample has an image named Bumblebee in the watch extension project:
The following code will result in:
- the image being loaded into memory, and
- displayed on the watch.
using (var image = UIImage.FromBundle ("Bumblebee")) { myImageControl.SetImage (image); }
Animations
To animate a set of images, they should all begin with the same prefix and have a numeric suffix.
The WatchKitCatalog sample has a series of numbered images in the watch app project with the Bus prefix:
To display these images as an animation, first load the
image using
SetImage with the prefix name and
then call
StartAnimating:
animatedImage.SetImage ("Bus"); animatedImage.StartAnimating ();
Call
StopAnimating on the image control to
stop the animation looping:
animatedImage.StopAnimating ();
Appendix: Caching Images (watchOS 1)
If the application repeatedly uses an image that is stored in the extension (or has been downloaded), it is possible to cache the image in the watch's storage, to increase performance for subsequent displays.
Use the
WKInterfaceDevices
AddCachedImage method
to transfer the image to the watch, and then use
SetImage with the image name parameter as a string
to display it:
var device = WKInterfaceDevice.CurrentDevice; using (var image = UIImage.FromBundle ("Bumblebee")) { if (!device.AddCachedImage (image, "Bumblebee")) { Console.WriteLine ("Image cache full."); } else { cachedImage.SetImage ("Bumblebee"); } } }
You can query the contents of the image cache in
code using
WKInterfaceDevice.CurrentDevice.WeakCachedImages.
Managing the Cache
The cache about 20 MB in size. It is kept across app restarts,
and when it fills up it is your responsibility to clear out
files using
RemoveCachedImage or
RemoveAllCachedImages
methods on the
WKInterfaceDevice.CurrentDevice. | https://docs.mono-android.net/guides/ios/watch/controls/image/ | 2017-03-23T04:12:16 | CC-MAIN-2017-13 | 1490218186774.43 | [array(['Images/image-walkway.png', None], dtype=object)
array(['Images/image-animation.png', None], dtype=object)
array(['Images/asset-universal-sml.png', None], dtype=object)
array(['Images/asset-watch-sml.png', None], dtype=object)
array(['Images/asset-whale-sml.png', None], dtype=object)
array(['Images/asset-bumblebee-sml.png', None], dtype=object)
array(['Images/asset-bus-animation-sml.png', None], dtype=object)] | docs.mono-android.net |
- :
The following example models the tree using Parent References,
storing the reference to the parent category in the field
parent:
db.categories.insert( { _id: "MongoDB", parent: "Databases" } ) db.categories.insert( { _id: "dbm", parent: "Databases" } ) db.categories.insert( { _id: "Databases", parent: "Programming" } ) db.categories.insert( { _id: "Languages", parent: "Programming" } ) db.categories.insert( { _id: "Programming", parent: "Books" } ) db.categories.insert( { _id: "Books", parent: null } )
The query to retrieve the parent of a node is fast and straightforward:
db.categories.findOne( { _id: "MongoDB" } ).parent
You can create an index on the field
parentto enable fast search by the parent node:
db.categories.createIndex( { parent: 1 } )
You can query by the
parentfield to find its immediate children nodes:
db.categories.find( { parent: "Databases" } )
The Parent Links pattern provides a simple solution to tree storage but requires multiple queries to retrieve subtrees. | https://docs.mongodb.com/v3.2/tutorial/model-tree-structures-with-parent-references/ | 2017-03-23T04:18:16 | CC-MAIN-2017-13 | 1490218186774.43 | [array(['../../_images/data-model-tree.png',
'Tree data model for a sample hierarchy of categories.'],
dtype=object) ] | docs.mongodb.com |
The Results view shows the name of the files that contain the strings you have to retrieve (and replace), their path, their size, the number of strings found and the user id of the files. This view also provides the exact position of each match. You can also open a file by clicking with themouse button on an list entry that contains line and column position. | https://docs.kde.org/stable4/en/kdewebdev/kfilereplace/kfilereplace-the-results-view.html | 2016-09-25T03:41:10 | CC-MAIN-2016-40 | 1474738659833.43 | [array(['/stable4/common/top-kde.jpg', None], dtype=object)
array(['results_view.png', "KFileReplace's Results view"], dtype=object)] | docs.kde.org |
In Scala, patterns can be defined independently of case classes. To this end, a method named unapply is defined to yield a so-called extractor. An extractor can be thought of as a special method that reverses the effect of applying a particular object on some inputs. Its purpose is to ‘extract’ the inputs that were present before the ‘apply’ operation. For instance, the following code defines an extractor object Twice.
object Twice { def apply(x: Int): Int = x * 2 def unapply(z: Int): Option[Int] = if (z%2 == 0) Some(z/2) else None } object TwiceTest extends App { val x = Twice(21) x match { case Twice(n) => Console.println(n) } // prints 21 }
There are two syntactic conventions at work here:
The pattern
case Twice(n) will cause an invocation of
Twice.unapply, which is used to match any even number; the return value of the
unapply signals whether the argument has matched or not, and any sub-values that can be used for further matching. Here, the sub-value is
z/2
The
apply method is not necessary for pattern matching. It is only used to mimick a constructor.
val x = Twice(21) expands to
val x = Twice.apply(21).
The return type of an
unapply should be chosen as follows:
Boolean. For instance
case even()” (see section 4) by Emir, Odersky and Williams (January 2007).blog comments powered by Disqus
Contents | http://docs.scala-lang.org/tutorials/tour/extractor-objects | 2017-01-16T21:44:35 | CC-MAIN-2017-04 | 1484560279368.44 | [] | docs.scala-lang.org |
ALJ/CFT/avs DRAFT Agenda ID #10227 (Rev. 1)
3/24/2011 Item 23
BEFORE THE PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA
ORDER INSTITUTING RULEMAKING
TABLE OF CONTENTS
ORDER INSTITUTING RULEMAKING
2.1. Prior Commission Actions Regarding Assembly Bill 32
2.2. ARB Actions Regarding Assembly Bill 32
2.3. Potential Utility Revenues from Auctions of GHG Emissions Allowances
2.4. Potential Utility Revenues from Low Carbon Fuel Standard Credits
2.5. Utility Management of GHG Cost Exposure
3. Preliminary Scoping Memo
3.1. Use of Revenues from GHG Emissions Allowances Auctions and the Sale of Low Carbon Fuel Standard Credits
3.2. Management of GHG Compliance Costs Associated with Electricity Procurement
5. Category of Proceeding and Need for Hearing
6. Service of OIR, Creation of Service List, and Subscription Service
6.1. During the First 20 Days
6.2. After the First 20 Days
6.3. Updating Information
6.4. Serving and Filing Documents
6.5. Subscription Service
8. Intervenor Compensation
9. Ex parte Communications
ORDER INSTITUTING RULEMAKING
The Commission opens this rulemaking to address potential utility cost and revenue issues associated with greenhouse gas (GHG) emissions. At this time, our primary focus will be on the possible use of revenues that electric utilities may generate from the auction of allowances allocated to them by the California Air Resources Board (ARB), the use of revenues that electric utilities may receive from the sale of Low Carbon Fuel Standard credits they may receive from ARB, and the treatment of possible GHG compliance costs associated with electricity procurement. This rulemaking may also address other GHG issues, particularly those affecting utility costs and revenues related to GHG emission regulations and statutory requirements.
The Commission acknowledges that the ARB has been enjoined by the San Francisco Superior Court from implementing aspects of its GHG regulatory program. This may result in delays or changes to the ARB's regulatory program, but in order to avoid additional future delays, we are opening this rulemaking to ensure that this Commission is prepared to timely address the issues within our jurisdiction when and if the problems identified by the Superior Court are resolved. To the extent ARB changes its regulatory program, the scope and schedule of this rulemaking may also change.
Some issues related to GHG emissions are addressed more appropriately in other Commission proceedings. Specifically, utilities' authorization to buy and sell GHG allowances and offsets is being addressed in the long-term procurement planning proceeding, Rulemaking 10-05-006.
2.1. Prior Commission Actions Regarding Assembly Bill 32
The Global Warming Solutions Act of 2006 (Assembly Bill (AB) 32)1 caps California's greenhouse gas (GHG) emissions at the 1990 level by 2020. AB 32 granted the California Air Resources Board (ARB) broad authority to regulate GHG emissions to reach the goal of having GHG emissions in 2020 be no higher than the 1990 level.
Prior to AB 32's enactment, the Commission was taking steps in Phase 2 of Rulemaking (R.) 06-04-009 to implement a load-based GHG emissions allowance cap-and-trade program adopted in Decision (D.) 06-02-032 for the electric utilities, and to address GHG emissions associated with customers' direct use of natural gas. With enactment of AB 32, Phase 2 of R.06-04-009 shifted to support ARB's implementation of the new statute and was undertaken thereafter jointly with the California Energy Commission.
On March 14, 2008, D.08-03-018 in Phase 2 of R.06-04-009 recommended that ARB adopt a mix of direct mandatory/regulatory requirements for the electricity and natural gas sectors. These recommendations included that ARB designate "deliverers" of electricity to the California grid, regardless of where the electricity is generated, as the entities in the electricity sector responsible for compliance with the AB 32 requirements, and that ARB implement a multi-sector GHG emissions allowance cap-and-trade system that includes the electricity sector. That decision addressed the distribution of GHG emissions allowances and recommended that some portion of the GHG emissions allowances available to the electricity sector be auctioned. It also included preliminary recommendations regarding the use of proceeds from the auctioning of GHG emissions allowances allocated to the electricity sector.2
On October 22, 2008, the Commission issued D.08-10-037 in Phase 2 of R.06-04-009, the Final Opinion on Greenhouse Gas Regulatory Strategies. That decision provided more detailed recommendations to ARB as it proceeded with implementing AB 32. Recognizing that it is ARB's role to determine whether implementation of a cap-and-trade program in California is the appropriate policy, D.08-10-037 recommended that ARB allocate 80% of electric sector allowances in 2012 to the "deliverers" of electricity to the California transmission grid and 20% to "retail providers" of electricity (including load serving entities and publicly owned utilities), with the relative proportions changing each year until all allowances would be allocated to retail providers by 2016 and in every year thereafter. As part of this recommendation, the retail providers would be required to sell their allowances through a centralized auction undertaken by ARB or its agent.
Section 5.5 of D.08-10-037 includes discussion of the proper uses for GHG emissions allowance auction proceeds received by retail providers of electricity:
We agree with parties that all auction revenues should be used for purposes related to AB 32. ... In our view, the scope of permissible uses should be limited to direct steps aimed at reducing GHG emissions and also bill relief to the extent that the GHG program leads to increased utility costs and wholesale price increases. It is imperative, however, that any mechanism implemented to provide bill relief be designed so as not to dampen the price signal resulting from the cap-and-trade program.3
Ordering Paragraphs 15 and 16 in D.08-10-037 are particularly relevant to today's rulemaking:
15. We recommend that ARB require that all allowance auction revenues be used for purposes related to Assembly Bill (AB) 32, and that ARB require all auction revenues from allowances allocated to the electricity sector be used to finance investments in energy efficiency and renewable energy or for bill relief, especially for low income customers.
16. We recommend that ARB allow the Public Utilities Commission for load serving entities and the governing boards for publicly-owned utilities to determine the appropriate use of retail providers' auction revenues consistent with the purposes of AB 32 and the restrictions recommended in Ordering Paragraph 15.4
Following D.08-10-037, Commission staff has continued to work informally with ARB as it proceeds to develop its regulations implementing AB 32.
2.2. ARB Actions Regarding Assembly Bill 32
ARB's Climate Change Scoping Plan includes a recommendation that California adopt a portfolio of emissions reduction measures, including, if appropriate, a California GHG cap-and-trade program that can link with other programs to create a regional market system.5
On October 28, 2010, ARB staff released its "Proposed Regulation to Implement the California Cap-and-Trade Program." Part I of that document is the "Staff Report: Initial Statement of Reasons for Proposed Regulation to Implement the California Cap-and-Trade Program" (ISOR), which presents the rationale and basis for the proposed regulation. Appendix A to the ISOR contains ARB staff's Proposed Regulation Order.6
The staff-proposed ARB regulations would create a GHG emissions allowance cap-and-trade system, with compliance obligations in the electricity sector applicable to "first deliverers of electricity," generally consistent with the "deliverer" obligations that this Commission and the California Energy Commission had recommended. The proposed regulations would, however, allocate all emissions allowances in the electricity sector to "electrical distribution utilities"7 and require that the "first deliverers of electricity" purchase all of the allowances needed to meet their compliance obligations. The term "electrical distribution utilities" is generally consistent with the "retail providers" recommended by this Commission and the California Energy Commission, except that it does not include Electric Service Providers and Community Choice Aggregators.
Following the receipt of written comments and public testimony on its proposed regulations, ARB staff prepared suggested modifications to the originally proposed regulations attached to the ISOR, and submitted the proposed modifications8 to ARB on December 16, 2010.
On December 16, 2010, ARB considered its staff's recommendations and approved Resolution 10-42.9 Resolution 10-42 authorized ARB's Executive Officer to consider and make several modifications to the proposed regulation, as appropriate, and then to take final action to adopt the revised regulation or bring the revised regulation back to ARB for further consideration.10
One of the ARB staff's recommended modifications was finalization of the methodology for allocation of free GHG emissions allowances to the electrical distribution utilities. Other unaddressed issues that affect the electric industry include the treatment of combined heat and power (CHP) facilities in a cap-and-trade program, and a set-aside for voluntary renewable electricity.
Prior to the decision by the San Francisco Superior Court, ARB was expecting that its cap-and-trade regulation would be finalized in the fall of this year, to go into effect in December 2011, and ARB was planning that the first auction of GHG emissions allowances would occur on February 14, 2012, with auctions to be held quarterly thereafter. These dates are now uncertain, as it is not clear how long it will take for the problems identified by the Superior Court to be resolved. Even with that uncertainty, it is prudent for the Commission to begin considering how it might implement what appears to be ARB's preferred approach, so that this Commission will be prepared if and when ARB moves forward.
Section 95892(d) of the ARB staff-proposed regulation includes language limiting the use of auction proceeds from allowances allocated to electrical distribution utilities. Sections 95892(d)(2) and 95892(d)(3) are provided below:
(2) Proceeds obtained from the monetization of allowances directly allocated to investor owned utilities shall be subject to any limitations imposed by the California Public Utilities Commission and to the additional limitations set forth in section 95892(d)(3) below.
(3) Auction proceeds obtained by an electrical distribution utility shall be used exclusively for the benefit of retail ratepayers of each electrical distribution utility, consistent with the goals of AB 32, and may not be used for the benefit of entities or persons other than such ratepayers.
(A) Investor owned utilities shall ensure equal treatment of their own customers and customers of electricity service providers and community choice aggregators.
(B) To the extent that an electrical distribution utility uses auction proceeds to provide ratepayer rebates, it shall provide such rebates with regard to the fixed portion of ratepayers' bills or as a separate fixed credit or rebate.
(C) To the extent that an electrical distribution utility uses auction proceeds to provide ratepayer rebates, these rebates shall not be based solely on the quantity of electricity delivered to ratepayers from any period after January 1, 2012.
Regarding the use of auction revenues, the ARB resolution adopted on December 16, 2010 states that:
...the [ARB] directs the Executive Officer to work with the California Public Utilities Commission (CPUC) and the publicly owned utilities (POU) to ensure that the proposed allowance value.
... the [ARB] strongly advises the CPUC and the POU governing boards to work with local governments and non-governmental organizations to direct a portion of allowance value, if the cap-and-trade regulation is approved, into investments in local communities, especially the most disadvantaged communities, and to provide an opportunity for small businesses, schools, affordable housing associations, and other community institutions to participate in and benefit from statewide efforts to reduce greenhouse gas emissions.11
2.3. Potential Utility Revenues from Auctions of GHG Emissions Allowances
ARB staff recommends that 97.7 million metric tons (MMT) of allowances be allocated for free to electrical distribution utilities in 2012, with the recommended sector allocation declining linearly to 83 MMT in 2020. ARB staff recommends that all allowances for 2012 through 2020 be allocated to individual utilities at the start of the program, so that each utility would know its yearly allocations and could plan accordingly. ARB staff is evaluating various methods for the allocation of allowances to the individual electrical distribution utilities, and recommends that the final allocation approach take into account ratepayer cost burden, energy efficiency accomplishments, and early action as measured by investments in qualifying renewable resources.
Preliminary estimates by ARB staff provide insights into the total amount of money that may be at stake for the electric utilities we regulate if ARB implements a cap-and-trade program with GHG emissions allowance allocations similar to those under consideration by ARB staff. In the suggested modifications provided to ARB on December 16, 2010, ARB staff included a graphical depiction of its preliminary estimates of allowance allocations to individual electrical distribution utilities during the 2012 through 2020 period.12 We estimate, based on the total multi-year allocations indicated in that figure and the ARB staff-recommended 2012 allowance allocation to the electric sector of 97.7 MMT, that the electric utilities we regulate could receive allowances in the neighborhood of 65 MMT in 2012, if an allocation method similar to those illustrated in the ARB staff proposal is implemented. ARB staff recommends that an auction reserve price for 2012 auctions be set at $10 per metric ton. At that price and using the rough estimate just described, the electric utilities could receive approximately $650 million from the quarterly auctions that ARB has planned to hold during 2012. If auction prices were to exceed $10 per metric ton, the utilities' revenues could be commensurately higher.
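For illustration only, the arithmetic behind this rough estimate can be restated as a simple calculation; the 65 MMT figure is the approximation described above, and the assumption that proceeds would be spread evenly across the four planned quarterly auctions is ours, not ARB's:

\[
65\ \text{MMT} \times \$10\ \text{per metric ton} \approx \$650\ \text{million in 2012}, \qquad
\$650\ \text{million} \div 4 \approx \$162.5\ \text{million per quarterly auction}.
\]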
2.4. Potential Utility Revenues from Low Carbon Fuel Standard Credits
ARB has identified a Low Carbon Fuel Standard as a Discrete Early Action Measure consistent with AB 32. ARB has developed and adopted Low Carbon Fuel Standard regulations,13 which ARB put into effect on April 15, 2010.
The Low Carbon Fuel Standard would be applicable to providers of transportation fuels, and would require a 10% reduction in the carbon content of California's transportation fuels by 2020. The providers of transportation fuels can meet annual carbon content level requirements with any combination of fuels they supply or produce, and with Low Carbon Fuel Standard credits acquired in previous years or purchased from other parties. The standard would allow electric utilities, along with other electricity fuel providers, to receive credits for electricity that is used for transportation purposes, subject to certain electricity metering and reporting requirements.
In R.09-08-009, our rulemaking on alternative-fueled vehicle issues, the January 12, 2010 Assigned Commissioner's Scoping Memo stated that R.09-08-009 would consider addressing the disposition of any revenues that utilities may receive from the sale of Low Carbon Fuel Standard credits. However, the Proposed Decision that was published recently in that proceeding and is under consideration would defer that issue because of unresolved details of ARB's regulations. We plan to consider this issue in this rulemaking.
2.5. Utility Management of GHG Cost Exposure
With the potential for implementation of a GHG emissions allowance cap-and-trade system, the utilities may face GHG cost exposure in various ways, including the arrangements for GHG compliance responsibility in bilateral contracts as well as utilities' participation in the GHG emissions allowance and offset markets. Bilateral contract issues may arise in, but may not be limited to, the following four procurement scenarios.
First, the Commission has adopted decisions in Application (A.) 08-11-001 and R.08-06-024 with implications for utility exposure to GHG risk from CHP and Qualifying Facility (QF) resources. In D.10-12-035, the Commission adopted a "Qualifying Facility and Combined Heat and Power Program Settlement Agreement," which resolves outstanding litigation between utilities and QFs, adopts a new short-run avoided cost (SRAC) methodology that incorporates GHG allowance costs, and creates a path forward for the procurement of CHP to meet the goals of GHG emissions reductions under AB 32.
Once the approved settlement becomes effective (after final approval by the Federal Energy Regulatory Commission (FERC), among other conditions), the newly adopted SRAC is designed such that CHP generators are paid for avoided GHG costs. CHP generators will not be paid for their own GHG compliance costs; rather payment will reflect the avoided GHG compliance costs of the marginal generating unit that would have been built but for the CHP generator. This will be achieved by incorporating GHG compliance costs into the SRAC payment, which is the avoided cost of the marginal generator. Due to uncertainty regarding the extent to which GHG compliance costs will be reflected in wholesale energy prices during 2012 through 2014, a floor test will be in effect to ensure that GHG allowance costs are fully reflected in the market price of energy.14 However, the adopted SRAC will transition to a market-based energy pricing methodology after the first GHG compliance period.15
For CHP that is procured via a competitive solicitation process, sellers will be required to bid two different prices, depending on whether the seller or the purchasing utility accepts GHG compliance responsibility. The utility will weigh the costs and benefits of the different bids to determine who is best positioned to assume GHG compliance cost risks and will decide which option to select.
For "legacy" QF contracts (existing contracts that do not expire in the near term), QFs will have the option of being paid the SRAC described above or choosing from four other pricing options that reflect different GHG cost/risk balances between buyer and seller.16 QF generators with legacy contracts may choose to assume all GHG compliance risk in exchange for a higher fixed heat rate or they may choose between two options to share GHG risk with the utilities in exchange for lower fixed heat rates with GHG allowance price caps (a price above which the seller assumes the risk). The final option available to legacy QFs is to sign a tolling agreement with the purchasing utility, under which the utility would assume the GHG compliance obligation but would be allowed to manage that risk by assuming dispatch rights over the QF. Under any scenario where the utility assumes the GHG compliance risk, any free allowances held by the seller for electricity purchased by the contracting utility must be surrendered to the utility, and any GHG payments will be for costs not covered by those free allowances.17
Second, in D.09-12-042, as modified by D.10-04-055 and D.10-12-055, the Commission adopted rules and terms for a feed-in-tariff program aimed at small, highly efficient CHP. Under this program, the utility will be responsible for GHG compliance costs associated with the electricity it purchases, up to the emissions associated with operating the facility at or above the minimum efficiency level determined by the California Energy Commission.18 The CHP facility is provided the options to procure GHG allowances for electricity sold to the utility and then seek reimbursement from the utility, or to have the utility perform this allowance procurement function.19
Third, similar to the requirement in the CHP settlement that requires QFs responding to competitive solicitations to submit two different bid prices, at least one utility (Pacific Gas and Electric Company) requires all bidders of fossil fuel-based resources in its long-term solicitations to provide two alternate bids for each project - one in which the utility assumes the GHG compliance obligation and one in which the seller assumes the obligation. Under the first option, the facility owner would assume the GHG compliance cost, and therefore the risk that the compliance costs could change dramatically during the term of the power purchase agreement. Under the second option, the facility owner would pass through the costs of GHG compliance to the utility, and the utility would bear the risk of changes in allowance prices.
Fourth, another issue of concern is the treatment of contracts executed between independent generators and utilities before the passage of AB 32 which may extend into 2012 or beyond and may not allow the generator to pass through GHG compliance costs. If required to sell their output under the terms of their existing contracts, generators with such contracts may be faced with significant GHG compliance costs for which they will not be reimbursed or receive allowances. This issue may also be considered by ARB.20
As required by Rule 7.1(d)21 of the Commission's Rules of Practice and Procedure, this Order Instituting Rulemaking (OIR) includes a Preliminary Scoping Memo. In this Preliminary Scoping Memo, we describe the issues to be considered in this proceeding and the timetable for resolving the proceeding.
This new rulemaking is opened to consider potential utility cost and revenue issues associated with GHG emissions. At this time, we plan to examine two broad aspects of the effect of ARB's staff-proposed GHG mitigation programs on electric utilities. The first issue is the direction the Commission should give to the electric utilities about the uses of revenues they may receive to the extent there is auctioning of their GHG emissions allowances by ARB, and revenues they may receive if they sell Low Carbon Fuel Standard credits received from ARB. The second issue is the utilities' potential exposure to GHG compliance costs, and the guidance the Commission should provide to the utilities regarding potential GHG compliance costs associated with electricity procurement.
As this rulemaking progresses, it may be determined that additional GHG issues, particularly those affecting the utilities' potential costs and revenues associated with GHG emissions, should be addressed in this proceeding. While the issues identified in this Preliminary Scoping Memo apply only to electric utilities, it is possible that GHG-related issues affecting the natural gas utilities may be identified subsequently for consideration in this proceeding.22
The Commission recognizes that ARB's proposed regulations for a GHG emissions allowance cap-and-trade program are not final, and that implementation of the Scoping Plan has been enjoined. However, to the extent that ARB is able to proceed as scheduled, and hold the first auction of allowances allocated to the utilities in less than a year, it would be imprudent to delay our consideration of the potential implications for the utilities and their ratepayers. We will proceed with this rulemaking while recognizing that adjustments may be needed as the ARB process unfolds.
The action by the San Francisco Superior Court enjoining ARB's implementation of its Scoping Plan23 creates significant uncertainty, both as to the schedule and scope of ARB's ultimate implementation of AB 32 and its Scoping Plan, including the GHG emissions allowance cap-and-trade program. Accordingly, the assigned Commissioner and/or the assigned Administrative Law Judge (ALJ) may make procedural rulings as necessary to address the consequences of this litigation, and may also address this issue further in the Scoping Memo for this proceeding. We intend for the scope of this rulemaking to be broad, and accordingly grant the assigned Commissioner and assigned ALJ discretion to revise the scope to include other relevant GHG issues that may arise, particularly those relating to utility costs and revenues from GHG emission regulations and statutory requirements.
3.1. Use of Revenues from GHG Emissions Allowances Auctions and the Sale of Low Carbon Fuel Standard Credits
As described in Section 2 above, regulations being considered at ARB would provide some guidance on the use of revenues from the auctioning of GHG emissions allowances to be allocated to the utilities. In this proceeding, the Commission will consider additional guidelines that may be needed. As an example, the Commission could adopt percentages, or dollar amounts, of potential auction revenues to be used for specified purposes, such as customer bill relief, energy efficiency programs, programs that achieve AB 32 environmental justice goals, and research, development and demonstration of GHG emissions reducing technologies. Additionally, the Commission may consider the appropriate use of potential revenues the utilities may receive from the sale of Low Carbon Fuel Standard credits given to them by ARB.
3.2. Management of GHG Compliance Costs Associated with Electricity Procurement
This rulemaking will also address various aspects of the utilities' management of their potential GHG cost exposure, which includes the arrangements for GHG compliance responsibility in bilateral contracts as well as utilities' participation in the GHG allowance and offset markets. Bilateral contract issues may arise in, but may not be limited to, the procurement scenarios described in Section 2.5 above.
In their procurement decisions, the utilities will have to make assumptions regarding the price of potential future GHG emissions allowances in order to choose among competing bids, each having potentially different GHG compliance exposure characteristics and with differing spreads between the prices offered for different GHG exposure options. This proceeding will consider the establishment of rules or guidelines to govern the utilities' evaluations of such options to ensure that ratepayers do not over-compensate generators that take on the GHG compliance risk. Among other issues, such guidelines may address how to evaluate requests for reimbursement from generating facilities when facilities procure allowances on their own behalf but utilities are responsible for the GHG compliance costs associated with the purchased electricity, as may be the case under the CHP feed-in-tariff program. The guidelines may also address legacy contracts, as described in Section 2.5 above.
In R.10-05-006, the long-term procurement planning proceeding, the Commission is considering authorization for utilities to buy and sell GHG emissions allowances and offsets. Either R.10-05-006 or this proceeding may consider the establishment of guidelines for the utilities' possible participation in GHG emissions allowance and offset markets.
The assigned Commissioner or assigned ALJ will schedule a prehearing conference as soon as practicable. The scope, schedule, and other procedural issues will be discussed at the first prehearing conference. To facilitate these discussions, parties may file Prehearing Conference Statements addressing the scope and schedule of this proceeding, category, need for hearing, and other procedural issues no later than April 21, 2011 and Replies to Prehearing Conference Statements no later than May 5, 2011.
We leave it to the assigned Commissioner and/or assigned ALJ to establish a schedule that sequences the issues most appropriately. The assigned Commissioner or assigned ALJ may adjust the schedule and refine the scope of the proceeding as needed, consistent with the requirements of the Rules of Practice and Procedure.
Consistent with Public Utilities Code Section 1701.5, we expect this proceeding to be concluded within 18 months of the date of the scoping memo.
Rule 7.1(d) of the Commission's Rules of Practice and Procedure provides that the order instituting rulemaking "shall preliminarily determine the category and need for hearing..." This rulemaking is preliminarily determined to be ratesetting, as that term is defined in Rule 1.3(e). We anticipate that the issues in this proceeding may be resolved through a combination of workshops and filed comments, and that evidentiary hearings will not be necessary. Any person who objects to the preliminary categorization of this rulemaking as "ratesetting" or to the preliminary hearing determination, shall state the objections in their Prehearing Conference Statements. The assigned Commissioner will determine the need for hearing and will make a final category determination in the scoping memo; this final determination as to category is subject to appeal as specified in Rule 7.6(a).
We will serve this OIR on the service lists (appearances, state service list, and information-only category) in the following proceedings:
· CHP feed-in-tariff rulemaking;
· A.08-11-001, R.06-02-013, R.04-04-003, R.04-04-025, and R.99-11-022 (the QF proceedings).
Such service of the OIR does not confer party status in this proceeding upon any person or entity, and does not result in that person or entity being placed on the service list for this proceeding.
The Commission will create an official service list for this proceeding, which will be available at. We anticipate that the official service list will be posted before the first filing deadline in this proceeding. Before serving documents at any time during this proceeding, parties shall ensure they are using the most up-to-date official service list by checking the Commission's website prior to each service date.
While all electric and natural gas utilities may be bound by the outcome of this proceeding, only those who notify us that they wish to be on the service list will be accorded service by others until a final decision is issued.
If you want to participate in the Rulemaking or simply to monitor it, follow the procedures set forth below. To ensure you receive all documents, send your request within 20 days after the OIR is published. The Commission's Process Office will update the official service list on the Commission's website as necessary.
6.1. During the First 20 Days
Within 20 days of the publication of this OIR, any person may ask to be added to the official service list. Send your request to the Process Office. You may use e-mail ([email protected]) or letter (Process Office, California Public Utilities Commission, 505 Van Ness Avenue, San Francisco, CA 94102). Include the following information:
· Docket Number of this Rulemaking;
· Name (and party represented, if applicable);
· Telephone Number;
· Desired Status (Party, State Service, or Information Only).24
6.2. After the First 20 Days
If you want to become a party after the first 20 days, you may do so by filing and serving timely comments (including a Prehearing Conference Statement or Reply to Prehearing Conference Statements) in the Rulemaking (Rule 1.4(a)(2)), or by making an oral motion (Rule 1.4(a)(3)), or by filing a motion (Rule 1.4(a)(4)). If you make an oral motion or file a motion, you must also comply with Rule 1.4(b). These rules are in the Commission's Rules of Practice and Procedure, which you can read at the Commission's website.
If you want to be added to the official service list as a non-party (that is, as State Service or Information Only), follow the instructions in Section 6.1 above at any time.
6.3. Updating Information
Once you are on the official service list, you must ensure that the information you have provided is up-to-date. To change your postal address, telephone number, e-mail address, or the name of your representative, send the change to the Process Office by letter or e-mail, and send a copy to everyone on the official service list.
6.4. Serving and Filing Documents
When you serve a document, use the official service list published at the Commission's website as of the date of service. You must comply with Rules 1.9 and 1.10 when you serve a document to be filed with the Commission's Docket Office.
The Commission encourages electronic filing and e-mail service in this Rulemaking. You may find information about electronic filing at. E-mail service is governed by Rule 1.10. If you use e-mail service, you must also provide a paper copy to the assigned Commissioner and ALJ. The electronic copy should be in Microsoft Word or Excel formats to the extent possible. The paper copy should be double-sided. E-mail service of documents must occur no later than 5:00 p.m. on the date that service is scheduled to occur.
If you have questions about the Commission's filing and service procedures, contact the Docket Office.
6.5. Subscription Service
This proceeding can also be monitored by subscribing in order to receive electronic copies of documents in this proceeding that are published on the Commission's website. There is no need to be on the service list in order to use the subscription service. Instructions for enrolling in the subscription service are available on the Commission's website at.
Any person or entity interested in participating in this Rulemaking who is unfamiliar with the Commission's procedures should contact the Commission's Public Advisor in San Francisco at (415) 703-2074 or (866) 849-8390 or e-mail [email protected]; or in Los Angeles at (213) 576-7055 or (866) 849-8391, or e-mail [email protected]. The TTY number is (866) 836-7825.
Any party that expects to claim intervenor compensation for its participation in this Rulemaking shall file its notice of intent to claim intervenor compensation no later than 30 days after the first prehearing conference or pursuant to a date set forth in a later ruling which may be issued by the assigned Commissioner or assigned ALJ.
Pursuant to Rule 8.2(c), ex parte communications will be allowed in this ratesetting proceeding subject to the restrictions in Rule 8.2(c) and the reporting requirements in Rule 8.3.
Therefore, IT IS ORDERED that:
1. A rulemaking is instituted on the Commission's own motion to address utility cost and revenue issues associated with greenhouse gas (GHG) emissions. While other issues may be considered, the rulemaking will consider, in particular, the use of GHG emissions allowance auction revenues that electric utilities may receive from the California Air Resources Board (ARB), the use of revenues that electric utilities may receive from the sale of Low Carbon Fuel Standard credits the electric utilities may receive from ARB, and the treatment of potential GHG compliance costs associated with electricity procurement. This rulemaking may also address other issues affecting electric and/or natural gas utility costs and revenues related to GHG emission regulations and statutory requirements.
2. The assigned Commissioner or Administrative Law Judge shall schedule a prehearing conference in this rulemaking as soon as practicable. Parties may file Prehearing Conference Statements no later than April 21, 2011 and may file Replies to Prehearing Conference Statements no later than May 5, 2011.
3. The assigned Commissioner or assigned Administrative Law Judge may adjust the schedule and refine the scope of the proceeding as needed, consistent with the requirements of the Rules of Practice and Procedure.
4. This rulemaking is preliminarily determined to be ratesetting, as that term is defined in Rule 1.3(e). It is preliminarily determined that evidentiary hearings are not needed in this proceeding. Any persons objecting to the preliminary categorization of this rulemaking as "ratesetting" or to the preliminary determination that evidentiary hearings are not necessary shall state their objections in their Prehearing Conference Statements.
5. The Executive Director shall cause this Order Instituting Rulemaking to be served on the service lists in the following proceedings:
· The combined heat and power feed-in-tariff rulemaking;
· Application (A.) 08-11-001, R.06-02-013, R.04-04-003, R.04-04-025, and R.99-11-022 (the qualifying facility proceedings).
6. Interested persons shall follow the directions in Section 6 of this Order Instituting Rulemaking to become a party or be placed on the official service list.
7. Any party that expects to request intervenor compensation for its participation in this rulemaking shall file its notice of intent to claim intervenor compensation in accordance with Rule 17.1 of the Commission's Rules of Practice and Procedure, no later than 30 days after the first prehearing conference or pursuant to a date set forth in a later ruling which may be issued by the assigned Commissioner or assigned Administrative Law Judge.
This order is effective today.
Dated , at San Francisco, California.
1 Statutes of 2006, Chapter 488.
2 D.08-03-018 at 9. See also at 98 - 99, Finding of Fact 30 and Ordering Paragraph 9.
3 D.08-10-037, at 227.
4 D.08-10-037, at 299.
5 ARB Resolution 10-42 at 3.
6 The ARB documents cited in this paragraph are available at.
7 ARB staff's proposed regulations define "electrical distribution utilities" to include "an Investor Owned Utility as defined in the Public Utilities Code section and 218 [sic] or a local publicly owned electric utility that provides electricity to retail end users in California." (Proposed Regulations at A-14.) We note that Public Utilities Code Section 218 defines "electrical corporation," not "investor owned utility." We assume, absent clarification otherwise from ARB, that the proposed regulations use the term "Investor Owned Utility" to mean "electrical corporation." The electrical corporations that provide electricity to retail end users in California include Bear Valley Electric Service, California Pacific Electric Company, Mountain Utilities, Pacific Gas and Electric Company, PacifiCorp, San Diego Gas & Electric Company, and Southern California Edison Company.
8 Attachment B to ARB Resolution 10-42, available at.
9 The final Resolution 10-42, updated to reflect changes directed by ARB on December 16, 2010, is available at.
10 The previously proposed schedule for these activities is posted at.
11 ARB Resolution 10-42, December 16, 2010, at 13.
12 Attachment B to ARB Resolution 10-42 , Appendix 1, Figure 2, available at.
13 Available at. See also ARB's Resolution 09-31, available at, and Resolution 10-49, available at.
14 Upon commencement of a cap-and-trade program in California, the adopted QF and CHP settlement "establishes a floor test which compares an energy price developed with a market-based heat rate to an energy price developed with either a negotiated heat rate, or a heat rate from a period prior to the start of a cap-and-trade program, plus the market price of GHG allowances. The higher of the two energy prices is the one chosen as SRAC." D.10-12-035 at 20.
15 D.10-12-035 at 41.
16 See Qualifying Facility and Combined Heat and Power Program Settlement at Section 11, accessible through links in Appendix A to D.10-12-035.
17 See QF Facility and Combined Heat and Power Program Settlement at Section 10.2.3.
18 D.09-12-042 at 49. Final guidelines issued by the California Energy Commission in February 2010 require a CHP system to not exceed a GHG emission standard of 1,100 pounds of carbon dioxide equivalent emissions per megawatt-hour in order to be eligible for this program.
19 Applications for rehearing filed jointly by Pacific Gas and Electric Company and San Diego Gas & Electric Company and separately by Southern California Edison Company on January 18, 2011 seek rehearing of D.10-12-055, based partially on the treatment of GHG compliance costs. The Commission has not yet ruled on these applications for rehearing.
20 The ARB staff's October 28, 2010, ISOR states that "Some generators have reported that some existing contracts do not include provisions that would allow full pass-through of cap-and-trade costs. These contracts pre-date the mid-2000s and many may be addressed through the recently announced combined heat and power settlement at the California Public Utilities Commission. Staff is evaluating this issue to determine whether some specific contracts may require special treatment on a case-by-case basis." (ISOR at II-32, ft. 22.)
21 "Rulemakings. An order instituting rulemaking shall preliminarily determine the category and need for hearing and shall attach a preliminary scoping memo. The preliminary determination is not appealable, but shall be confirmed or changed by assigned Commissioner's ruling pursuant to Rule 7.3, and such ruling as to the category is subject to appeal under Rule 7.6."
22 Under the ARB staff-recommended cap-and-trade regulations, natural gas distribution utilities would be responsible, beginning in 2015, for the emissions associated with natural gas delivered to customers not directly covered under the proposed cap-and-trade program, including residential, commercial, and small industrial customers. (ISOR at II-35.)
23 Association of Irritated Residents et al. v. California Air Resources Board, Case No. CPF-09-509562, March 18, 2011.
24 If you want to file comments or otherwise actively participate, choose "Party" status. If you do not want to actively participate but want to follow events and filings as they occur, choose "State Service" status if you are an employee of the State of California; otherwise, choose "Information Only" status.
The Catalog area groups together most of the features related to your inventory.
Please see: URI Management > URIs
Please see:
To Add Digital Products One at a Time
Please see:
To See What Digital Products Have Been Downloaded
Please see: To Create a New Custom Field.
Please see:
Marketplaces - eBay > Listings
Please see:
To Edit a Product's eBay Listing Settings
An Overview of Marketplaces - eBay
Please see:
To Link Your Miva Merchant Store to Your Amazon Seller Account
See Marketplaces - Etsy > Listings Tab.
See To Edit a Product's Etsy Listing Settings.
Copyright © 2015 The Xubuntu documentation team. Xubuntu and Canonical are registered trademarks of Canonical Ltd.
This documentation is a reference for all Xubuntu contributors. The chapters of this documentation provides information on the development processes - both social and technical - that the Xubuntu contributors use as a guideline in their work.
There are two main appendices for this documentation:
Appendix A, Strategy Document, which is the primary guideline in Xubuntu development.
Appendix B, Common Reference, which describes many technical tasks that the Xubuntu developers need to use continuously.
Would you rather read this documentation in PDF? Select your preferred paper size: A4 or US letter.
Table of Contents
This guide contains information on administrating Uptime Infrastructure Monitor after it is installed and Auto Discovery is performed on your network.
Uptime Infrastructure Monitor administration comprises organizing discovered Elements and services into groups and Applications, as well as using them to build and define service-level agreements (SLAs). Manage alert thresholds and escalation policies for these individual or grouped Elements, Applications, and SLAs. User profiles determine which members of the organization have access to specific parts of Uptime Infrastructure Monitor. This guide also contains information on managing Uptime Infrastructure Monitor configuration settings.
- Understanding Uptime Infrastructure Monitor
- Quick Topics
- My Portal
- Managing Your Infrastructure
- Overseeing Your Infrastructure
- Using Service Monitors
- Monitoring VMware vSphere
- User Management
- Service Level Agreements
- Alerts and Actions
- Understanding Report Options
- Configuring and Managing Uptime Infrastructure Monitor
Firewalling¶
Concept¶
A firewall is a basic defense tool for hosts connected to the internet. The basic concept is that network traffic (usually in the form of IP packets, see The Internet Protocol) is allowed or disallowed based on the rules configured in the firewall. Disallowed traffic is either silently discarded (“dropped”) or an error message is returned to the sender (“rejected”). Firewalls often also have other features such as re-writing parts of packets (e.g. for Network Address Translation).
On Linux systems, firewalling is handled in the kernel and can be configured from the userspace as root via iptables or nftables and tools using either of those. Generally, firewalls are configured with very different philosophical approaches based on where they are employed. A firewall on a router may have very different requirements than a firewall on a server or end-user system.
Firewalls often mainly work on the transport layer and below. Layers above are rarely taken into account, and when they are, the technique is usually called Deep Packet Inspection.
There are two classes of firewalls: state-less and state-ful firewalls. State-less firewalls look at each packet entirely in isolation. No information from other packets is taken into account. These days, entirely state-less firewalls are rare on end-user and server systems, because they have difficulties filtering connection-oriented transports such as TCP: from a single TCP packet, it is not trivial to distinguish inbound and outbound traffic for example. The Linux packet filtering used by iptables and nftables is state-ful. An example of a stateful mechanism with iptables is conntrack, which allows to track the state of TCP connections. We will discuss an example of that later on.
IPtables¶
iptables rules are organised in chains which are organised in tables. We will only discuss the filter table in this document; it is responsible for accepting and rejecting inbound, outbound and forwarded traffic. Other tables are mangle and nat, which are used for more advanced scenarios and which I personally avoid to write rules for by hand.
The filter table has three chains, INPUT, OUTPUT and FORWARD. The INPUT chain processes traffic directed at the host itself. The OUTPUT chain processes traffic originating from the host and the FORWARD chain processes traffic which is only forwarded by the host. Traffic which goes through the FORWARD chain does not pass through INPUT or OUTPUT, since it is neither directed to the host itself nor originating from there.
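If you want to see what is currently loaded into these chains on a running system, the standard iptables tooling can dump them (run as root; shown here only as a quick illustration):

# List all rules in the filter table with packet/byte counters;
# -n skips DNS/service-name lookups so the output stays unambiguous.
iptables -L -v -n

# Dump the complete ruleset in a restorable, diff-friendly format.
iptables-save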
Practical Approach¶
As hinted on above, a firewall is only as useful as the rules programmed into it. To create these rules, there are two basic approaches: blacklisting and whitelisting. With blacklisting, all traffic is allowed by default and only unwanted traffic is filtered out. With whitelisting, all traffic is disallowed by default and only known good traffic is passed on (the extent to which the firewall decides whether traffic is “good” depends on how deep it inspects the packets; see above).
Note
Generally, I recommend the blacklisting approach for routers and the whitelisting approach for server and end-user systems. Some people argue that firewalls should not run at all on server systems; I would counter that a firewall is a good defense-in-depth measure. Of course a (not too stateful) firewall does not help against an exploit in OpenSSH or any other daemon purposefully running on the system and listening into the wide internet.
However, a firewall can very well help with additionally protecting services, e.g. by adding IP-address-based filters (which are at least partially useful with connection-oriented protocols).
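As a concrete sketch of such an address-based filter (the subnet below is the documentation range 192.0.2.0/24 and only a placeholder for your own management network), a single iptables rule is enough:

# Accept SSH only from a trusted source network; everything else
# falls through to the later rules or the chain's default policy.
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT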
Configuration¶
As mentioned, firewalls on Linux are configured (on the lowest level within userspace) with iptables or nftables. The most common tool at the time of writing is iptables; nftables is gaining traction, and I recommend reading up on it by yourself.
Maintaining iptables rules by hand is cumbersome, which is why many users resort to wrappers around iptables, such as ferm. If one is familiar with the iptables syntax, ferm will be easy to learn. It allows to factor out common parts of iptables rules, making them much easier to read and maintain. In addition, it usually comes with a service definition which takes care of applying those rules at boot time.
A simple example of a ferm ruleset is shown below:
domain (ip ip6) table filter { chain INPUT { policy DROP; # connection tracking mod state state INVALID DROP; mod state state (ESTABLISHED RELATED) ACCEPT; # allow local packet interface lo ACCEPT; # respond to ping proto icmp ACCEPT; # allow SSH connections proto tcp dport ssh ACCEPT; } chain OUTPUT { policy ACCEPT; # connection tracking #mod state state INVALID DROP; mod state state (ESTABLISHED RELATED) ACCEPT; } chain FORWARD { policy DROP; # connection tracking mod state state INVALID DROP; mod state state (ESTABLISHED RELATED) ACCEPT; } }
Let us dissect that step by step. First of all, the braces ({ and }) group rules together; those can be nested, too. So the first line essentially says "everything between the outer pair of braces applies to both IPv4 and IPv6 and to the filter table of iptables".
Then there are three blocks, one for inbound traffic (started by chain INPUT), one for outbound traffic (started by chain OUTPUT) and one for forwarded traffic (chain FORWARD). The table and chain directives of ferm directly relate to the tables and chains of iptables.
The policy statement tells iptables what to do with traffic which is not matched by any rule. In this case, the OUTPUT chain is set to accept all traffic by default, while INPUT and FORWARD chains use whitelisting, i.e. they drop all traffic by default.
Note
In my opinion, there is rarely a use-case for filtering on the OUTPUT chain. One prominent one is however to prevent a system from sending mail to any host except specific hosts.
The mod state state INVALID DROP line in the INPUT chain can be understood as follows:
- mod state: use the state iptables module (see the iptables-extensions manpage for more information on iptables modules)
- state INVALID: select packets in INVALID state according to the state module
- DROP: apply the DROP action (simply discard the packets)
The line below uses a ferm feature which I personally consider one of the most important ones: each "value" in a rule can be parenthesised to make the rule apply to multiple values without repeating it. So the line mod state state (ESTABLISHED RELATED) ACCEPT tells iptables to accept traffic which is in ESTABLISHED or RELATED state.
In general, all ferm statements are constructed by concatenating keywords (such as chain, mod, state) followed by arguments (such as INPUT, state and ESTABLISHED) to form a rule for matching packets and finished with an action (such as ACCEPT, DROP or REJECT).
With that in mind, the interface lo ACCEPT statement should be clear: all traffic arriving on the lo interface shall be accepted.
Another important rule in that piece of ferm configuration is proto tcp dport ssh ACCEPT. Two things are important about that: first, a rule like that should always be in your firewall: it allows traffic to the SSH daemon. Second: dport ssh does not mean to accept SSH traffic. It in fact means to accept traffic on the port number associated with the ssh service; note that those associations are defined by the IANA and have nothing to do with your SSH daemon's configuration.
If you change your SSH config to use port 1234, you will have to adapt the rule to dport 1234. This is a reason why I prefer numeric port numbers over the port names: it avoids the reader being misled by the name (I could also have an HTTP daemon listening on port 22 to confuse people).
The above piece of ferm configuration can “safely” be deployed on any system whose SSH daemon listens on port 22: you will still be able to connect to that after applying this piece of configuration.
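Extending that ruleset is mostly a matter of adding lines to the existing INPUT chain. The fragment below is only a sketch — the port number and the source network are placeholders, not part of the original example — showing the numeric-port variant discussed above plus an address-restricted service using ferm's saddr keyword:

# inside the existing chain INPUT block:

# SSH daemon reconfigured to listen on port 1234 (numeric port)
proto tcp dport 1234 ACCEPT;

# hypothetical monitoring agent, reachable only from a trusted subnet
saddr 192.0.2.0/24 proto tcp dport 5666 ACCEPT;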
- Debugging
- iptables-save
AutoCAD Map 3D reserves 25% of the total physical memory (RAM) on your system for inserting images with the Raster Extension. If you increase the default amount, more of the physical memory is used for images and less is available for other operations in AutoCAD Map 3D and for other applications you might be running.
If you require additional memory for your images, the Raster Extension uses a temporary swap file. For example, if you insert a 100 MB file, and the Memory Limit is 8 MB, AutoCAD Map 3D stores the remaining 92 MB in a temporary file. You can specify where the swap file is created.
You can change the following Raster Extension memory settings:
Programming Structure
Table 32 defines the structure by programming language for communicating between Teradata TPump and INMOD or notify exit routines.
In each structure, the records must be constructed so that the left‑to‑right order of the data field corresponds to the order of the field names specified in the Teradata TPump LAYOUT command and subsequent FIELD, FILLER, and TABLE commands.
Flask-AppBuilder¶
Simple and rapid application development framework, built on top of Flask. Includes detailed security, auto CRUD generation for your models, google charts and much more.
Lots of examples and a live Demo (login has guest/welcome).
Fixes, bugs and contributions¶
You’re welcome to report bugs, propose new features, or even better contribute to this project.
Issues, bugs and new features
The easiest way to add a workout is to select the whiteboard, and then the day you wish to add the workout to.
Note: Some workouts of the day have been pre-programmed into the Training database.
When adding a workout, you may first like to add a warmup and warmdown (that are only visible to staff):
The workout portion is what will be visible for your members to see, and is the aspect they are able to record a result for in InfluxApp.
When adding a workout, you have two options:
1. A Workout of the Day (WOD), or
2. A Lift.
New workout of the day (WOD)
Here you can input a name and description, and choose the scoring method for your members to record in InfluxApp:
Your workouts of the day can be scored a variety of ways:
- For time – e.g. 5km time trial, or Fran wod
- For rounds – e.g. Cindy wod, or Beep test
- For distance – e.g. Row for 10mins
- For load – e.g. 5mins to complete Max squat and Push Press for 10 reps each
- For repetitions – e.g. Number of push ups in 1 minute
- Tabata – e.g. 8 rounds of sit ups: 20 seconds per round and 10 seconds rest
- Total – e.g. Other total not included above
New Lift
Here you can add the lift and the programmed reps and sets you wish your members to complete.
Note: they can manually adjust these in InfluxApp to reflect what they actually completed.
If there is any exercise / movement you wish us to add, please contact us.
Merchant Application Form (MAF) API
The MAF includes the below details:
Company Profile:
The request must contain all the legal information about the company.
Ownership Profile :
The request must contain all the information related to shareholders, directors and authorized signatory.
Business Profile:
The request must contain all website related information.
Bank Profile:
The request must contain all transaction and bank related information.
Cardholder Profile:
The request must contain all cardholder related information.
The MAF request is sent over HTTPS to the /applicationServices/api/ApplicationManager/submitMAF resource using the POST method.
In our API Specifications you can find a full list of parameters that can be sent in the request. The number of parameters also varies depending on the acquiring banks selected, as seen in the sample request given below.
Sample Request
Sample Response
Hashing Rule
A hash of the following string needs to be sent along with the authentication parameters in each server-to-server request:
<memberId>|<secureKey>|<random>
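This excerpt does not state which hash algorithm or encoding is expected, so treat the following as an illustrative sketch only: it builds the pipe-separated string exactly as shown above and hashes it with SHA-256/hex (both assumptions — confirm the required algorithm and encoding with the API specification). The credential values are placeholders.

import { createHash } from "crypto";

// Placeholder values -- substitute the credentials issued for your account.
const memberId = "10001";
const secureKey = "your-secure-key";
const random = Date.now().toString();

// Pipe-separated string in the order given above: memberId|secureKey|random.
const toHash = `${memberId}|${secureKey}|${random}`;

// SHA-256 with hex output is an assumption, not taken from this document.
const hash = createHash("sha256").update(toHash).digest("hex");
console.log(hash);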
Sample Code
Banks
Find below the list of banks for synchronous MAF. Merchant details need to be filled in for all banks.
Use the VSP with your service
GOV.UK Verify uses SAML (Security Assertion Markup Language) to securely exchange information about identities. A Relying Party can use the Verify Service Provider (VSP) to generate SAML to send to the Verify Hub, and translate the SAML responses returned by the Verify Hub.
This tutorial explains how to integrate with the VSP in your local environment using the Compliance Tool as a placeholder for the GOV.UK Verify Hub. You will find out how to send a SAML AuthnRequest and how to handle a SAML Response.
Prerequisites
To be able to follow this tutorial you must:
- have Java 8 or higher
- set up and configure the VSP
- have initialised the GOV.UK Verify Hub placeholder
You can check if your VSP is set up properly by doing a GET request to the admin/healthcheck endpoint to confirm it is running correctly.
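For example, with curl (the host and port are placeholders for wherever your VSP instance is configured to serve its admin endpoints):

curl http://<vsp-host>:<admin-port>/admin/healthcheck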
Step 1: Generate a SAML AuthnRequest
Make a POST request to the /generate-request endpoint to generate a SAML AuthnRequest. The request body must also contain the level of assurance for your service.
Example call
> POST /generate-request HTTP/1.1
> Content-Type: application/json
>
> { "levelOfAssurance": "LEVEL_2" }
Example response
{
  "samlRequest": "...",
  "requestId": "_f43aa274-9395-45dd-aaef-25f56f49084e",
  "ssoLocation": ""
}
The parts of the response represent:
- samlRequest – your base64 encoded AuthnRequest
- requestId – a token that identifies the AuthnRequest. It is used to connect the user's browser with a specific request.
- ssoLocation – the URL to send the AuthnRequest to
Step 2: Store the requestId from the response
You will need to access the requestId later in the process to link the identities received from GOV.UK Verify with the correct user.
You must store the requestId securely and link it to the user's session. We recommend you store the requestId in a secure cookie.
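For example, the response that renders the form in the next step could set something like the following header (the cookie name is arbitrary and the flags shown are general good practice, not something mandated by the VSP):

Set-Cookie: verify-request-id=_f43aa274-9395-45dd-aaef-25f56f49084e; Secure; HttpOnly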
Step 3: Send the AuthnRequest to Compliance Tool
The AuthnRequest is sent via the user's browser in a form. We recommend you do this by rendering an HTML form and JavaScript to submit it, as per SAML HTTP Post Binding.
The HTML form should:
- escape inputs - to make sure no symbols or special characters are processed
- contain JavaScript to auto-post - to automatically send the user on to the Hub
- include page styling to display if JavaScript is disabled - to prompt users to turn on JavaScript. This should look like your service
Example HTML form from passport-verify
<form class='passport-verify-saml-form' method='post' action='${escape(ssoLocation)}'> <h1>Continue to next step</h1> <p>Because Javascript is not enabled on your browser, you must press the continue button</p> <input type='hidden' name='SAMLRequest' value='${escape(samlRequest)}'/> <input type='hidden' name='relayState' value=''/> <button class='passport-verify-button'>Continue</button> </form> <script> var form = document.forms[0] form.setAttribute('style', 'display: none;') window.setTimeout(function () { form.removeAttribute('style') }, 5000) form.submit() </script> <style type='text/css'> body { padding-top: 2em; padding-left: 2em; } .passport-verify-saml-form { font-family: Arial, sans-serif; } .passport-verify-button { background-color: #00823b; color: #fff; padding: 10px; font-size: 1em; line-height: 1.25; border: none; box-shadow: 0 2px 0 #003618; cursor: pointer; } .passport-verify-button:hover, .passport-verify-button:focus { background-color: #00692f; } </style>
The response from Compliance Tool should contain "status": "PASSED" and a responseGeneratorLocation URL which you can use to access the test scenarios.
Example response from Compliance Tool
{ "status": { "status": "PASSED", "message": null }, "responseGeneratorLocation": "" }
If the status is not PASSED, you may need to re-initialise the Compliance Tool. You should also check the Compliance Tool initialisation request matches the VSP configuration.
Go to the URL in responseGeneratorLocation using your browser. The response will contain the test scenarios for possible responses.
Example test scenarios from Compliance Tool
{ "id" : "_6817b389-4924-479c-9851-db089c4e639c", "testCases" : [ { "executeUri" : "", "id" : 10, "title" : "Verified User On Service With Non Match Setting", "description" : "Issues a successful response where the user has been successfully verified." }, { "executeUri" : "", "id" : 11, "title" : "No Authentication Context Response With Non Match Setting", "description" : "Issues a response with NoAuthnContext status. This happens when the user cancels or fails to authenticate at an appropriate level of assurance." }, { "executeUri" : "", "id" : 13, "title" : "Authentication Failed Response", "description" : "Issues an Authentication Failed response. The user was not authenticated successfully." }, { "executeUri" : "", "id" : 14, "title" : "Fraudulent match response with assertions signed by hub", "description" : "Issues a response with an assertion signed with the hub's private key. Your service should return an error to the user because your service should only trust assertions signed by your matching service adapter." } ] }
Access the URL for a particular scenario to test that your service can handle that particular response.
Step 4: Receive the SAML Response
The SAML Response will be submitted to the URL you specified when initialising the Compliance Tool, via the user's browser. For example, passport-verify-stub-relying-party uses /verify/response. The SAML Response will be URL form encoded, application/x-www-form-urlencoded.
Example form body submitted from the user’s browser
SAMLResponse=PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHNhbWwycDpSZXNwb25zZSBEZXN0aW5hdGlvbj0iaHR0cHM6Ly9wYXNzcG9ydC12ZXJpZnktc3R1Yi1yZWx5aW5nLXBhcnR5LWRldi5jbG91ZGFwcHMuZGlnaXRhbC92ZXJpZnkv...
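As a sketch of what the receiving endpoint might look like in a Node/Express service (the route name matches the stub above; the body-parsing setup and forwarding step are assumptions for illustration — if you use passport-verify, the library handles this for you):

import express from "express";

const app = express();

// The browser posts the SAML Response as application/x-www-form-urlencoded.
app.use(express.urlencoded({ extended: false }));

app.post("/verify/response", (req, res) => {
  // Base64-encoded SAML Response, exactly as submitted by the user's browser.
  const samlResponse: string = req.body.SAMLResponse;

  // Look up the requestId stored against the user's session in Step 2, then
  // POST both to the VSP's /translate-non-matching-response endpoint (Step 5).
  res.sendStatus(501); // placeholder until the translation call is wired up
});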
Step 5: Translate the SAML Response into JSON
Send a POST request to /translate-non-matching-response to translate the SAML Response into JSON.
The call must contain:
- samlResponse – the base64 encoded SAML from the Compliance Tool you got in Step 4
- requestId – the token you stored in Step 2
- levelOfAssurance – to validate that the user meets the minimum level of assurance you have requested
Example call
> POST /translate-non-matching-response HTTP/1.1
> Content-Type: application/json
>
> {
>   "samlResponse": "...",
>   "requestId": "_64c90b35-154f-4e9f-a75b-3a58a6c55e8b",
>   "levelOfAssurance": "LEVEL_2"
> }
Example successful response
> HTTP/1.1 200 OK
> Content-Type: application/json
>
> {
>   "scenario": "IDENTITY_VERIFIED",
>   "pid": "etikgj3ewowe",
>   "levelOfAssurance": "LEVEL_2",
>   "attributes": { ... }
> }
Step 6: Handle the JSON response
When you receive an HTTP 200 response, it will contain one of 4 scenarios.
In the IDENTITY_VERIFIED scenario, the response will also contain:
pid- a unique identifier for a user
levelOfAssurance- the level of assurance the user verified at
attributes- information about the user’s identity
Your service can now use the provided identity or error messages to further guide the user in their journey in your service.
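How you act on the translated JSON depends on your service, but the branching itself is small. The sketch below (TypeScript; the TranslatedResponse type is assumed here to mirror the fields shown above) only special-cases the IDENTITY_VERIFIED scenario documented in this excerpt and treats every other scenario as a non-success outcome:

interface TranslatedResponse {
  scenario: string;
  pid?: string;
  levelOfAssurance?: string;
  attributes?: Record<string, unknown>;
}

function handleTranslatedResponse(response: TranslatedResponse) {
  if (response.scenario === "IDENTITY_VERIFIED") {
    // Use the persistent identifier (pid) to find or create a local account,
    // and the attributes to pre-fill or confirm the user's details.
    return { signedIn: true, userId: response.pid, attributes: response.attributes };
  }
  // Cancellations, authentication failures and errors: show an appropriate
  // message and offer the user a way to retry or use another route.
  return { signedIn: false, reason: response.scenario };
}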
Redux Theme Options is very popular with ThemeForest WordPress users. It is a great framework, and we add some more extensions to make it more flexible, especially Preset. Now, all options of Theme Options can be stored in one preset. This means you can create various, even unlimited, styles for your site, which is difficult to achieve with the original Redux version.
+ Create a new preset:
To set a preset for a page, see below:
Note: Global preset means that if you don't set a preset for a page, the page will use the global preset by default.
Upgrade paths
Before you begin the upgrade process, it is important to understand your upgrade paths.
HDF upgrade paths
- HDF 3.2.0
- HDF 3.1.0
If you are running an earlier HDF version, upgrade to at least HDF 3.1.0, and then proceed to the HDF 3.3.0 upgrade.
HDP and Ambari versions
- Supported HDP versions – 3.0.x, 3.1.0
- Supported Ambari versions – 2.7.x
Migrating Calico data
Important: Once you begin the migration, stop using
calicoctlor otherwise modifying the etcdv2 datastore. Any changes to etcdv2 data will not be migrated to the new datastore.
To begin an interactive data migration session, use the
startcommand. While existing connectivity will continue as before, you cannot add any new endpoints until the migration and upgrade complete.
Syntax
calico-upgrade[-darwin-amd64|-windows-amd64.exe] start [--apiconfigv1 path/file] [--apiconfigv3 path/file]
Reference
Example
calico-upgrade start --apiconfigv1 etcdv2.yaml --apiconfigv3 etcdv3.yaml
Check the generated reports for details of conversions.
Errors: If the start command returns one or more errors, review the logs carefully. If it fails partway through, it will attempt to abort the process. In rare circumstances, such as due to transient connectivity issues, it may be unable to abort. In this case, it may instruct you to manually run the calico-upgrade abort command.
Failures: If the migration fails to complete, the etcdv3 datastore may contain some of your data. This will cause future attempts to run the calico-upgrade start command to fail. You must either manually remove this data from the etcdv3 datastore before trying again or include the --ignore-v3-data flag with the calico-upgrade start command.
Next steps
Once you have succeeded in migrating your data from etcdv2 to etcdv3, continue to Upgrading.
Welcome and Getting Started
Congratulations on looking at the documentation. All product documentation for up.time can be accessed from here. At any time, you can access any of the product guides from the left content page, which may or may not be able to be removed for this home page.
Page: Release Notes
Page: Installation and Quick-Start Guide
Page: Administrator's Guide
Page: Reference Guide
Page: Integration Guide
Preface¶
This documentation has its origins in a collaboration between students. I had spent quite some time with administrating my own services, ranging from simple webservices to full-blown redundant email hosts. Over the years, we (that is, I and my admin friends) had moved from centralised offers to our own servers for almost everything we use on our daily basis: email, chat, calendars, version control hosting, and more. Being nerds, we of course also self-hosted the underlying infrastructure: the domain name system, virtual servers to save some money on more powerful servers, routing and redundant private links between the systems, monitoring, you name it.
Now at some point, I met a group of students and other people who wanted to learn from me. We decided to start a shared project: with a piece of hardware, I was tasked to set up virtualisation so that each of us would get their own machine. In addition, I would write documentation on how to set the systems up and how they could run their machine efficiently and safely. This is the document you are reading right now.
Warning
Some pieces of this document will be very opinion-based. I will try to mark those pieces as such, but in these times, people consider facts as opinion, so there's that.
OpenAIRE Guidelines¶
Welcome:
Current Guidelines
The guidelines specifically provide guidance on how to specify:
- Access right
- Funding information
- Related publications, datasets, software etc.
Participate¶
You are invited to participate by commenting or editing the content. See our guide for how to get started:
How to Contribute
OpenAIRE Validator¶
The OpenAIRE Validator service is integrated in the Content Provider Dashboard.
The current limitations when using a mesh light are:
- Mesh Light ignores smoothing on poly objects.
- NURBS surfaces do not currently work with Mesh Light.
Sphere converted to Mesh light with 'Light Visible' enabled
Changing the mesh parameter "translator" to "mesh_light" is still supported, however, it is now considered as deprecated and will be removed in the long-term future.
Mesh Attributes
In Mesh
Displays the name of the shape used as a Mesh Light.
Show Original Mesh
Displays and renders the original mesh shape chosen to represent the Mesh Light.
Light Visible
Makes the light source visible to the camera.
Below is another comparison test between a mesh light (left image) and a sphere with a highly emissive Standard Surface shader; the emissive surface is noisier than a Mesh light with Diffuse samples = 2.
Installing GravityView
First, download the GravityView plugin file from the GravityView Account page.
Log in to your account, click the "Downloads" tab, then click the link to download the latest version of GravityView.
Click on the "Plugins" menu in WordPress Dashboard, then click the Add New button
Click the Upload Plugin button at the top of the Add Plugins page
Click "Choose File" to select the GravityView plugin file
Note: Depending on your computer and browser, this upload field may look different. The idea is the same, though: click the button to choose the file.
Choose the downloaded file from your computer
Click "Install Now"
Click the Activate Plugin button
You should now see the Getting Started screen
Click on Settings
Enter your GravityView license key into the License Key field
Once you enter your license key, click the Activate License button that appears
When your license has been activated, you will see your account details
Click the Update Settings button
Now GravityView has been activated! You will receive automatic updates.
If you ever have questions about the plugin, you can click on the blue circle at the bottom of each GravityView page (we call that the "Support Port").
You can search our how-to articles from there, and you can also click the Contact Support button and send us a question without leaving your site.
| https://docs.gravityview.co/article/69-installing-gravityview | 2019-01-16T10:51:01 | CC-MAIN-2019-04 | 1547583657151.48 | [array(['https://gravityview.co/wp-content/uploads/2018/04/Screen-Shot-on-2018-04-02-at-102652.png',
'GravityView account "Downloads" tab, clicking the link titled "GravityView-Version 1.19.1"'],
dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-on-the-plugins-menu-in-wordpress-dashboard-then-click-the-add-new-button.png?1479253149',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/clcik-the-upload-plugin-button-at-the-top-of-the-add-plugins-page.png?1479253150',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-choose-file-to-select-the-gravityview-plugin-file.png?1479253150',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/choose-the-downloaded-file-from-your-computer.png?1479253151',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-install-now-.png?1479253152',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-the-activate-plugin-button.png?1479253153',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/you-should-now-see-the-getting-started-screen.png?1479253154',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-on-settings.png?1479253155',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/enter-your-gravityview-license-key-into-the-license-key-field.png?1479253157',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/once-you-enter-your-license-key-click-the-activate-license-button-that-appears.png?1479253157',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/when-your-license-has-been-activated-you-will-see-your-account-details.png?1479253159',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/click-the-update-settings-button.png?1479253159',
None], dtype=object)
array(['https://gravityview.co/wp-content/uploads/2018/01/now-gravityview-has-been-activated-you-will-receive-automatic-updates.png?1479253159',
None], dtype=object) ] | docs.gravityview.co |