content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Methods
Selector methods return all resources that share a common property, using the
syntax
method:value.
The "tag" methodThe "tag" method
The
tag: method is used to select models that match a specified tag.
$ dbt run --select tag:nightly # run all models with the `nightly` tag
The "source" methodThe "source" method
The
source method is used to select models that select from a specified source. Use in conjunction with the
+ operator.
$ dbt run --select source:snowplow+ # run all models that select from Snowplow sources
The "path" methodThe "path" method
The
path method is used to select models located at or under a specific path.
While the
path prefix is not explicitly required, it may be used to make
selectors unambiguous.
# These two selectors are equivalentdbt run --select path:models/staging/githubdbt run --select models/staging/github# These two selectors are equivalentdbt run --select path:models/staging/github/stg_issues.sqldbt run --select models/staging/github/stg_issues.sql
The "package" methodThe "package" method
The
package method is used to select models defined within the root project
or an installed dbt package. While the
package: prefix is not explicitly required, it may be used to make
selectors unambiguous.
# These three selectors are equivalentdbt run --select package:snowplowdbt run --select snowplowdbt run --select snowplow.*
The "config" methodThe "config" method
The
config method is used to select models that match a specified node config.
$ dbt run --select config.materialized:incremental # run all models that are materialized incrementally$ dbt run --select config.schema:audit # run all models that are created in the `audit` schema$ dbt run --select config.cluster_by:geo_country # run all models clustered by `geo_country`
The "test_type" methodThe "test_type" method
The
test_type method is used to select tests based on their type,
singular or
generic:
$ dbt test --select test_type:singular # run all tests defined singularly$ dbt test --select test_type:generic # run all tests defined generically
The "test_name" methodThe "test_name" method
The
test_name method is used to select tests based on the name of the generic test
that defines it. For more information about how generic tests are defined, read about
tests.
$ dbt test --select test_name:unique # run all instances of the `unique` test$ dbt test --select test_name:equality # run all instances of the `dbt_utils.equality` test$ dbt test --select test_name:range_min_max # run all instances of a custom schema test defined in the local project, `range_min_max`
The "state" methodThe "state" method
N.B. State-based selection is a powerful, complex feature. Read about known caveats and limitations to state comparison.
The
state method is used to select nodes by comparing them against a previous version of the same project, which is represented by a manifest. The file path of the comparison manifest must be specified via the
--state flag or
DBT_ARTIFACT_STATE_PATH environment variable.
state:new: There is no node with the same
unique_id in the comparison manifest
state:modified: All new nodes, plus any changes to existing nodes.
$ dbt test --select state:new # run all tests on new models + and new tests on old models$ dbt run --select state:modified # run all models that have been modified$ dbt ls --select state:modified # list all modified nodes (not just models)
Because state comparison is complex, and everyone's project is different, dbt supports subselectors that include a subset of the full
modified criteria:
state:modified.body: Changes to node body (e.g. model SQL, seed values)
state:modified.configs: Changes to any node configs, excluding
database/
schema/
alias
state:modified.relation: Changes to
database/
schema/
alias(the database representation of this node), irrespective of
targetvalues or
generate_x_namemacros
state:modified.persisted_descriptions: Changes to relation- or column-level
description, if and only if
persist_docsis enabled at each level
state:modified.macros: Changes to upstream macros (whether called directly or indirectly by another macro)
Remember that
state:modified includes all of the criteria above, as well as some extra resource-specific criteria, such as changes to a source's
freshness property or an exposure's
maturity property. (View the source code for the full set of checks used when comparing sources, exposures, and executable nodes.)
The "exposure" methodThe "exposure" method
The
exposure method is used to select parent resources of a specified exposure. Use in conjunction with the
+ operator.
$ dbt run --select +exposure:weekly_kpis # run all models that feed into the weekly_kpis exposure$ dbt test --select +exposure:* # test all resources upstream of all exposures$ dbt ls --select +exposure:* --resource-type source # list all sources upstream of all exposures | https://6167222043a0b700086c2b31--docs-getdbt-com.netlify.app/reference/node-selection/methods | 2021-11-27T03:48:57 | CC-MAIN-2021-49 | 1637964358078.2 | [] | 6167222043a0b700086c2b31--docs-getdbt-com.netlify.app |
Kommander provides centralized monitoring, in a multi-cluster environment, using the monitoring stack running on any managed clusters. Centralized monitoring is provided by default in every Kommander cluster.
Managed clusters are distinguished by a monitoring ID. The monitoring ID corresponds to the kube-system namespace UID of the cluster. To find a cluster’s monitoring ID, you can go to the Clusters tab on the Kommander UI (in the relevant workspace), or go to the Clusters page in the Global workspace:
https://<CLUSTER_URL>/ops/portal/kommander/ui/clusters
Click on the
View Details link on the managed cluster card, and then click the Configuration tab, and find the monitoring ID under Monitoring ID (clusterId).
You may also search or filter by monitoring IDs on the Clusters page, linked above.
You can also clusters remotely using Thanos. You can visualize these metrics in Grafana using a set of provided dashboards.
The Thanos Query component is installed on the Kommander cluster.
Thanos Query queries the Prometheus instances / Cluster [Global]. accessible at:
https://<CLUSTER_URL>/ops/portal/kommander/monitoring/query
You can also check that the managed cluster’s Thanos sidecars are successfully added to Thanos Query by going to:
https://<CLUSTER_URL>/ops/portal/kommander/monitoring/query/stores
The preferred method to view the metrics for a specific cluster": ... # Complete json file here ... }
If you have already deployed your cluster, you will need to run this Konvoy command to deploy your custom dashboard:
konvoy deploy addons
Centralized AlertsCentralized Alerts
A centralized view of alerts, from managed clusters, is provided using an alert dashboard called Karma. Karma aggregates all alerts from the Alertmanagers running in the managed clusters, allowing you to visualize these alerts clusters by following these instructions. To use these instructions you must install the kubefedctl CLI. | https://docs.d2iq.com/dkp/kommander/1.4/centralized-monitoring/ | 2021-11-27T02:01:21 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.d2iq.com |
Getting started¶
GitLab CI registration token¶
To register a Runner with your GitLab CI instance, you need to provide
a registration token. It can be found on the
https://<host>/admin/runners
page of your GitLab installation.
The registration token is generated randomly on each GitLab startup, and
unfortunately cannot be accessed using an API. Therefore, the easiest way to
provide it to the role is to store it in an environment variable. The
debops.gitlab_runner checks the value ofthe
$GITLAB_RUNNER_TOKEN system
variable and uses the token found there.
The registration token is required to perform changes on the GitLab server itself, ie. registration and removal of Runners. It's not required for the role to manage the Runners on the host - the Runner tokens are saved in local Ansible facts and reused if necessary.
An example way to run
debops so that the role registers the Runners in
GitLab CI:
GITLAB_RUNNER_TOKEN=<random-token> debops service/gitlab_runner
To change the environment variable that holds the registration token, or save
the token in Ansible inventory, you can use the
gitlab_runner__token
variable.
In case that you don't want to expose the registration token via the Ansible
inventory directly, you can store it it in the
ansible/secret/credentials/ directory managed by the
debops.secret role in a predetermined location.
To create the path and file to store the GitLab Token execute this commands in the root of the DebOps project directory with the relevant GitLab domain:
mkdir -pv ansible/secret/credentials/code.example.org/gitlab/runner editor ansible/secret/credentials/code.example.org/gitlab/runner/token
In the editor, paste the GitLab registration token and save the file. Then add
the
gitlab_runner__token variable to your inventory.
gitlab_runner__token: '{{ lookup("password", secret + "/credentials/" + gitlab_runner__api_fqdn + "/gitlab/runner/token chars=ascii,numbers") }}'
This allows the token to be safely stored outside of the inventory but accessible at runtime.
Initial configuration¶
By default,
debops.gitlab_runner will configure a single Runner instance
which uses a shell executor. If a Docker installation is detected via Ansible
local facts, the role will disable the shell executor and configure two Docker
executors - one unprivileged, and one privileged. The executors will have a set
of tags that identify them, shell executors will have additional tags that
describe the host's architecture, OS release, etc.
If the
debops.lxc role has been used to configure LXC support on the host,
the
debops.gitlab_runner will install the
vagrant-lxc package and
configure sudo support for it. Using a shell executor you can start
and stop Vagrant Boxes using LXC containers and execute commands inside them.
If the
debops.libvirtd role has been used to configure libvirt support on
the host, the
debops.gitlab_runner will install the
vagrant-libvirt
package and configure sudo support for it. Using a shell executor
you can start and stop Vagrant Boxes using libvirt and execute commands inside
them.
The Runner instances can be configured with variables specified as the keys of the dictionary that holds the specific Runner configuration. If any required keys are not specified, the value of the global variable will be used instead.
Some of the variables will be added together (Docker volumes, for example), so that you can define a list of global values included in all of the Runner instances.
Environment variables¶
You can use
gitlab_runner__environment default variable to specify a custom
set of environment variables to configure in a GitLab Runner instance. You can
use the global variable, or set the environment at the instance level by
specifying it as
item.environment variable.
The environment variables can be specified in a different ways:
a single variable as a string:
gitlab_runner__environment: 'VARIABLE=value'
a list of environment variables:
gitlab_runner__environment: - 'VARIABLE1=value1' - 'VARIABLE2=value2'
a YAML dictionary with variable names as keys and their values as values:
gitlab_runner__environment: VARIABLE1: 'value1' VARIABLE2: 'value2'
Different specifications cannot be mixed together.
Example inventory¶
To install GitLab Runner service on a host, it needs to be added to the
[debops_service_gitlab_runner] inventory host group:
[debops_service_gitlab_runner] hostname
Example playbook¶
Here's an example playbook that can be used to enable and manage the GitLab Runner service on a set of hosts:
--- - name: Manage GitLab Runner service collections: [ 'debops.debops', 'debops.roles01', 'debops.roles02', 'debops.roles03' ] hosts: [ 'debops_service_gitlab_runner' ] become: True environment: '{{ inventory__environment | d({}) | combine(inventory__group_environment | d({})) | combine(inventory__host_environment | d({})) }}' roles: - role: keyring tags: [ 'role::keyring', 'skip::keyring', 'role::gitlab_runner' ] keyring__dependent_apt_keys: - '{{ gitlab_runner__keyring__dependent_apt_keys }}' - role: gitlab_runner tags: [ 'role::gitlab_runner', 'skip::gitlab_runner' ] | https://docs.debops.org/en/stable-2.2/ansible/roles/gitlab_runner/getting-started.html | 2021-11-27T01:45:56 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.debops.org |
Date: Tue, 2 Jan 1996 23:03:17 -0500 (EST) From: "Jonathan M. Bresler" <[email protected]> To: Stephen Couchman <[email protected]> Cc: [email protected] Subject: Re: iijpp STILL cannot talk to my modem :-(( Message-ID: <[email protected]> In-Reply-To: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Tue, 2 Jan 1996, Stephen Couchman wrote: > I have made the changes to the initial state using rc.serial as you > suggested so that only /dev/cuaia1 is affected. This had the desired affect > on /dev/cuaa1, so now when I check the state using stty -a </dev/cuaa1, > crtscts and clocal are set. cool, one step forward ;) >): <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=260074+0+/usr/local/www/mailindex/archive/1996/freebsd-questions/19960101.freebsd-questions | 2021-11-27T02:29:00 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.freebsd.org |
Check Database Integrity Task
Applies to:
SQL Server (all supported versions)
SSIS Integration Runtime in Azure Data Factory
The Check Database Integrity task checks the allocation and structural integrity of all the objects in the specified database. The task can check a single database or multiple databases, and you can choose whether to also check the database indexes.
The Check Database Integrity task encapsulates the DBCC CHECKDB statement. For more information, see DBCC CHECKDB (Transact-SQL).
Configuration of the Check Database Integrity Task: | https://docs.microsoft.com/en-us/sql/integration-services/control-flow/check-database-integrity-task?view=sql-server-ver15 | 2021-11-27T03:15:09 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.microsoft.com |
Vector arithmetic is fundamental to 3D graphics, physics and animation and it is useful to understand it in depth to get the most out of Unity. Below are descriptions of the main operations and some suggestions about the many things they can be used for.. For example, to find a point 5 units above a location on the ground, you could use the following calculation:-
var pointInAir = pointOnGround + new Vector3(0, 5, 0);
If the vectors represent forces then it is more intuitive to think of them in terms of their direction and magnitude (the magnitude indicates the size of the force). Adding two force vectors results in a new vector equivalent to the combination of the forces. This concept is often useful when applying forces with several separate components acting at once (eg, a rocket being propelled forward may also be affected by a crosswind).
Vector subtraction is most often used to get the direction and distance from one object to another. Note that the order of the two parameters does matter with subtraction:-
// The vector d has the same magnitude as c but points in the opposite direction. var c = b - a; var d = a - b;
As with numbers, adding the negative of a vector is the same as subtracting the positive.
// These both give the same result. var c = a - b; var c = a + -b;
The negative of a vector has the same magnitude as the original and points along the same line but in the exact opposite direction.
When discussing vectors, it is common to refer to an ordinary number (eg, a float value) as a scalar. The meaning of this is that a scalar only has “scale” or magnitude whereas a vector has both magnitude and direction.
Multiplying a vector by a scalar results in a vector that points in the same direction as the original. However, the new vector’s magnitude is equal to the original magnitude multiplied by the scalar value.
Likewise, scalar division divides the original vector’s magnitude by the scalar.
These operations are useful when the vector represents a movement offset or a force. They allow you to change the magnitude of the vector without affecting its direction.
When any vector is divided by its own magnitude, the result is a vector with a magnitude of 1, which is known as a normalized vector. If a normalized vector is multiplied by a scalar then the magnitude of the result will be equal to that scalar value. This is useful when the direction of a force is constant but the strength is controllable (eg, the force from a car’s wheel always pushes forwards but the power is controlled by the driver).
The dot product takes two vectors and returns a scalar. This scalar is equal to the magnitudes of the two vectors multiplied together and the result multiplied by the cosine of the angle between the vectors. When both vectors are normalized, the cosine essentially states how far the first vector extends in the second’s direction (or vice-versa - the order of the parameters doesn’t matter).
It is easy enough to think in terms of angles and then find the corresponding cosines using a calculator. However, it is useful to get an intuitive understanding of some of the main cosine values as shown in the diagram below:-
The dot product is a very simple operation that can be used in place of the Mathf.Cos function or the vector magnitude operation in some circumstances (it doesn’t do exactly the same thing but sometimes the effect is equivalent). However, calculating the dot product function takes much less CPU time and so it can be a valuable optimization.
The other operations are defined for 2D and 3D vectors and indeed vectors with any number of dimensions. The cross product, by contrast, is only meaningful for 3D vectors. It takes two vectors as input and returns another vector as its result.
The result vector is perpendicular to the two input vectors. The “left hand rule” can be used to remember the direction of the output vector from the ordering of the input vectors. If the first parameter is matched up to the thumb of the hand and the second parameter to the forefinger, then the result will point in the direction of the middle finger. If the order of the parameters is reversed then the resulting vector will point in the exact opposite direction but will have the same magnitude.
The magnitude of the result is equal to the magnitudes of the input vectors multiplied together and then that value multiplied by the sine of the angle between them. Some useful values of the sine function are shown below:-
The cross product can seem complicated since it combines several useful pieces of information in its return value. However, like the dot product, it is very efficient mathematically and can be used to optimize code that would otherwise depend on slow transcendental functions.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2019.1/Documentation/Manual/UnderstandingVectorArithmetic.html | 2021-11-27T03:31:12 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.unity3d.com |
.
Prerequisites
- Verify that each of your vCenter Server systems meets the prerequisites for installing NSX Manager.
- Perform the installation task for the NSX Manager virtual appliance described in the NSX Installation and Upgrade Guide.
Procedure
- Log in to the NSX Manager virtual appliance that you installed and confirm the settings that you specified during installation.
- Associate the NSX Manager virtual appliance that you installed with the vCenter Server system that you plan to add to vCloud Director in your planned vCloud Director installation.
What to do next. | https://docs.vmware.com/en/VMware-Cloud-Director/8.10/com.vmware.vcloud.install.doc_810/GUID-0AEE373A-5CA1-43F5-844A-06099063244F.html | 2021-11-27T03:37:08 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.vmware.com |
VRC_OscButtonIn
Deprecated
This component is deprecated. It is not available in the latest VRChat SDK, and is either non-functional, or will no longer receive updates. It may be removed at a later date.
We plan on implementing OSC support to Udon in SDK3 at a future date.
Control your world with any software compatible with OpenSoundControl.
Fires when an OSC message is received at a specific address.
OSC Port
VRChat listens for OSC messages on port 9000.
Updated about 1 month ago
Did this page help you? | https://docs.vrchat.com/docs/vrc_oscbuttonin | 2021-11-27T01:43:16 | CC-MAIN-2021-49 | 1637964358078.2 | [array(['https://files.readme.io/c05510b-explorer_2018-07-17_15-05-41.png',
'explorer_2018-07-17_15-05-41.png'], dtype=object)
array(['https://files.readme.io/c05510b-explorer_2018-07-17_15-05-41.png',
'Click to close...'], dtype=object) ] | docs.vrchat.com |
Import CopybooksImport Copybooks
The host transaction that you use to build your client operation is populated by the data items contained in a COBOL copybook.
You can import or redefine a copybook to update or replace a host transaction data structure.
When you import a new copybook and associate it with a host transaction, the previously associated copybook is over-written. The name and data variables of the host transaction and client operation mappings are replaced with the data from the new data structure. However, the original copybook, host transaction, and client operation are available until you save the project.
If you are using an existing host transaction, from the File menu, choose Import and then select Copybooks to Replace Existing Data Structure, then select the host transaction you want to associate with the imported copybook.
The order in which copybooks are placed must match the order expected by the host program.
The copybook is displayed in the Copybook Editor. If necessary you can edit the copybook in the editor. A full range of editing capabilities are available to you.
If you import more than one copybook, the copybooks are concatenated in the sequence in which they were chosen in the Import dialog box. If you imported one copybook, a copy of the selected copybook is displayed in the editor. See Edit Copybooks for more information.
Verify that the mapping between elements in your client operation and data items in your host transaction is accurate. Although the map is updated when the copybook is imported, it is important to make sure that the map is correct before you save your project.
You can re-import copybooks. When re-importing:
See Editing Copybooks with Syntax Errors
You can create a copybook by copying and pasting from a COBOL program that does not contain a simple copybook to import, by importing without selecting a specific copybook. Click Finish. | https://docs.attachmate.com/Verastream/vbi/5.0/com.attachmate.eclipse.transaction.designer.help/tasks/copybook_import_pr.html | 2021-11-27T02:54:47 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.attachmate.com |
This reference page is linked to from the following overview topics: Overview of example plug-ins, Example C++ plug-in descriptions.
Particle Instancer object access class.
Class for obtaining information about a particle instancer node.
#include <MFnInstancerInstancer.
Reimplemented from MFnDagNode.
Returns the number of particles feeding the active instancer.
Returns the DAG paths and instancer matrix for all instances generated by a specified particle.
Returns information about all instances generated by a particular particle instancer node.
Since many particles will typically instance similar sets of paths, the information is returned in a compact representation. An array of paths is returned, representing the unique set of paths that are instanced by any particle in the system. For each particle, the routine returns a set of indices into this path array to illustrate the paths instanced at that particle. The index arrays for all particles are concatenated together, so a "start index" array is used to indicate which is the first entry in the index array for each particle. For each particle, the routine also returns the transformation matrix applied to that particle's instanced paths to generate the final particle instance transformations. | https://docs.autodesk.com/MAYAUL/2013/ENU/Maya-API-Documentation/cpp_ref/class_m_fn_instancer.html | 2021-11-27T03:30:17 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.autodesk.com |
Thanks to the additional components that come with every KM Suite, you. Contact us today to learn more, and see how SharePoint knowledge management can change the way you use. | https://docs.bamboosolutions.com/document/knowledge_management/ | 2021-11-27T03:14:04 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.bamboosolutions.com |
Crate moore_vhdl
Version 0.11.0
See all moore_vhdl's items
This crate implements VHDL for the moore compiler.
pub extern crate moore_vhdl_syntax as _;
pub use moore_vhdl_syntax as syntax;
A context within which nodes can be added.
Multi-type arena allocation
Builtin libraries, packages, types, and functions.
LLHD code generation for VHDL.
A compiler pass that gathers definitions.
The High-level Intermediate Representation of a VHDL design.
This module implements constant value calculation for VHDL.
This module implements constant values for VHDL.
An implementation of lazy compiler passes.
A context within which compiler passes can be described.
Operators
Overload resolution for subprograms and enum literals.
Facilities to manage declarations and resolve names.
This module implements the scoreboard that drives the compilation of VHDL.
Expressions
This module implements VHDL types.
The VHDL type system.
This module implements the type calculation of the scoreboard.
Generate a collection of arenas for different types. | https://docs.rs/moore-vhdl/0.11.0/i686-unknown-linux-gnu/moore_vhdl/index.html | 2021-11-27T02:38:21 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.rs |
Your preferred payment method will be the account you will receive your rewards through. You may configure multiple payment methods. However, only your preferred payment method will be used to receive rewards.
Choosing a Preferred Payment Method
- Go to your Bugcrowd researcher profile and click on Account Settings
- Next, go to the Payment methods tab.
- If you have connected multiple types of payment methods, you will see a “Preferred payment method” section in the “Manage payments” box, at the top of the page.
- Use the drop-down menu to select the method you prefer to be paid via.
- When you are happy with your selection, click the save button to confirm your choice.
Changing a Preferred Payment Method.
You can easily change which method the platform will pay you through, by selecting another available method from the drop-down menu at any time.
You can change your PayPal address or Payoneer account while that method is set to preferred
- The changes do NOT require you to reset the preferred status.
- Removing a payment method will remove it as your preferred method, and the method will no longer appear in the list of available methods to select from.
- If you have only 2 methods, and you delete one from your account, the option to select a preferred method will be removed from your account page, until you add a second method again. | https://researcherdocs.bugcrowd.com/docs/preferred-payment-method | 2019-08-17T13:30:02 | CC-MAIN-2019-35 | 1566027313259.30 | [] | researcherdocs.bugcrowd.com |
One can set JD Builder’s common settings & other settings here under Global settings options. From setting the typography of your complete page builder to adding license key for upgrading to pro version all such options are there as the global options.
You can access the JD Builder global settings from either of the ways:
- System > Global Configuration > JD Builder
- JD Builder > Any JD Builder Page > Options > Global Options > Configuration
The global options comprises of 2 tabs that are the JD Builder tab & Typography tab. Each of the tab options are discussed below:-
Firstly we have the JD Builder tab that has the following options:
- Builder Pro Activation Key- Here you need to add the license key for upgrading your page builder’s free version to Pro. For more info on how to add the key read the article.
- Facebook App ID- If you are using the Facebook Page element in any of your JD Builder page the to get it working you need to add your Application ID’s here. To know more about how to create the ID read the article.
- Google Map API Key- To use the Google Map element on your site, enter the Map’s API key.
- reCAPTCHA Site & Secret Keys- Enter the reCAPTCHA keys if you want to use the reCAPTCHA in your Form element. To know more about how to get the keys, sign up for the API key pair.
| https://docs.joomdev.com/article/jd-builder-global-options/ | 2020-09-18T15:06:46 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://docs.joomdev.com/wp-content/uploads/2020/01/Pasted-into-JD-Builder-Global-Options-3.png',
None], dtype=object) ] | docs.joomdev.com |
My WebLink
|
|
About
|
Sign Out
Search
08-14-78 Council Meeting Minutes
sbend
>
Public
>
Common Council
>
Common Council Meeting Minutes
>
1978
>
08-14-78 Council Meeting Minutes
Metadata
Thumbnails
Annotations
Entry Properties
Last modified
7/5/2013 9:29:38 AM
Creation date
7/5/2013 8:28:00 AM
Metadata
Fields
Template:
City Council - City Clerk
City Council - Document Type
Council Mtg Minutes
City Counci - Date
8/14 />AUGUST 14, 1978 <br />Be it remembered that the Common Council of the City of South Bend met in the Council Chambers <br />of the County -City Building on Monday, August 14, 1978, at 7:00 p.m. Council Vice President <br />Dombrowski presiding. The meeting was called to order, and the Pledge to the Flag was given. <br />ROLL CALL PRESENT: Council Members Serge, Szymkowiak, Miller, Adams, <br />Dombrowski and Horvath <br />ABSENT: Council Members Taylor, Kopczynski and Parent <br />REPORT FROM THE SUB - COMMITTEE ON MINUTES <br />To the Common Council of the City of South Bend: <br />Your sub - committee on the inspection and supervision of the minutes would respectfully <br />report that it has inspected the minutes of the July 24, 1978 meeting of the Council and found <br />them correct. <br />The sub - committee, therefore, recommends that the same be approved. <br />/s /Mary Christine Adams <br />Council Member Miller made a motion that the minutes of the July 24, 1978, meeting be placed <br />on file, seconded by Council Member Serge. The motion carried. <br />SPECIAL BUSINESS <br />Mayor Nemeth made a presentation for a resolution to assist the people in South Bend, Texas. <br />Council Vice President Dombrowski indicated this would be handled along with the other resolu- <br />tions on the agenda <br />REPORTS FROM CITY OFFICES <br />Mr. Carl Ellison, Director of Community Development briefly reported on Community Development's <br />performance. He indicated they were required to have a performance hearing by HUD, and this <br />report would serve that purpose. Council Member Adams made a motion to resolve into the Com- <br />mittee of the Whole, seconded by Council Member Serge. The motion carried. <br />COMMITTEE OF THE WHOLE <br />Be it remembered that the Common Council of the City of South Bend met in the Committee of the <br />Whole on Monday, August 14, 1978, at 7:07 p.m., with six members present. Chairman Frank <br />Horvath presiding. <br />BILL NO. 122 -78 A BILL AUTHORIZING THE CITY OF SOUTH BEND, INDIANA TO ISSUE ITS <br />ECONOMIC DEVELOPMENT REVENUE BONDS, (THE BENDIX CORPORATION PROJECT), <br />SERIES 1978, AND APPROVING OTHER ACTIONS IN RESPECT THERETO. <br />This being the time heretofore set for public hearing on the above bill, proponents and opponent <br />were given an opportunity to be heard. Mr. Kenneth Fedder, attorney for the Economic Developmen <br />Commission, made the presentation for the bi�l. He indicated that $1,000,000 bond issue was for <br />the acquisition and installation of certain machinery and equipment at the Bendix Corporation. <br />He said 52 new hourly jobs and 11 salary jobs will be created with an estimated payroll of <br />$1,000,000. Council Member Szymkowiak made a motion to recommend this bill to the Council <br />favorable, seconded by Council Member Dombrowski. Council Member Miller congratulated the Bendi <br />Corporation. The motion carried. <br />BILL NO. 123 -78 <br />A BILL APPROVING THE FORM AND TERMS OF LEASE AND TRUST INDENTURE AND <br />INDUSTIRAL DEVELOPMENT REVENUE BONDS, AND AUTHORIZING THE EXECUTION <br />THEREOF PERTAINING TO ACRA ACRES. <br />This being the time heretofore set for public hearing on the above bill, proponents and opponent: <br />were given an opportunity to be heard. Mr. Kenneth Fedder, attorney for the Economic Development <br />Commission, made the presentation for the bill. He indicated that this $275,000 bond issue was <br />for the construction and equipping of a facility at 51563 U.S. 31 North. He indicated five new <br />jobs would"be created. Council Member Adams made a motion to recommend this to the Council <br />favorable, seconded by Council Member Dombrowski. The motion carried. <br />
The URL can be used to link to this page
Your browser does not support the video tag. | http://docs.southbendin.gov/WebLink/0/doc/41222/Page1.aspx | 2020-09-18T14:11:31 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.southbendin.gov |
In Intellicus, you can create connections to R Server and Python Data Science environments. Creating connection to any of the Data Science environments requires prerequisite file system-based connection. This connection’s file location helps as a shared location to exchange data between Intellicus and Data Science environment.
When you create connection to the Data Science environment, you need to select the file connection location created above under Dump Connection Name field.
Figure 5: Data Science Engine
Provide the following properties to create a connection to Data Science environment. The details associated to common properties for most of the connections, can be found here. | https://docs.intellicus.com/documentation/basic-configurations-must-read-19-1/working-with-database-connections-19-1/data-science-environments-19-1/ | 2020-09-18T12:53:31 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://docs.intellicus.com/wp-content/uploads/2019/13/DataScienceenvironmentconnection.png',
'Data Science environment connection'], dtype=object) ] | docs.intellicus.com |
OPNsense Azure Virtual Appliance¶.
The Virtual Appliance is available on the Microsoft Azure Marketplace (here).
Our installation manual will guide you through a simple installation scenario using 1 network interface, for more advanced network setups you best checkout the Azure documentation.
Setup : Basic settings¶
The Marketplace create button guides you to the initial virtual machine setup, choose your subscription and system preferences here and name your virtual machine.
Next make sure you create an initial administrative user, since some names are reserved (like admin and root), you
need to choose another one here. In our example we choose
adm001 here.
Note
You can enable the root user after installation, the setup user can access the system using ssh or https after installation todo so.
Setup : Disks¶
Next you can choose a disk type to use, standard SSD is fast enough for most workloads.
Setup : Network¶
For our example, we kept our settings simple using a private IP which is accessible over port 443 (https) after bootup. Most settings can be changed after deployment.
| https://docs.opnsense.org/manual/how-tos/installazure.html | 2020-09-18T12:55:40 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['../../_images/azure_offer.png', '../../_images/azure_offer.png'],
dtype=object)
array(['../../_images/azure_deploy_basics.png',
'../../_images/azure_deploy_basics.png'], dtype=object)
array(['../../_images/azure_deploy_basics_user.png',
'../../_images/azure_deploy_basics_user.png'], dtype=object)
array(['../../_images/azure_deploy_disks.png',
'../../_images/azure_deploy_disks.png'], dtype=object)
array(['../../_images/azure_deploy_network.png',
'../../_images/azure_deploy_network.png'], dtype=object)] | docs.opnsense.org |
The billing model for a specific deal is set by the dropdown menu in the upper right hand corner of the deal. When you create a new deal, this field will be populated by the default billing model set by your company administrator - At delivery in the following example.
Do note that the billing model doesn't specify when you actually do invoice. It rather specifies the earliest date the system allow the users to invoice. It helps you make sure the client isn't invoiced too early. The billing items will show up in Billing where one can sort by Earliest billing date to make sure everything that CAN be invoiced is invoiced.
At delivery
The billing model called At delivery set the billing items to reflect the fulfillment dates specified in the corresponding Channel. For example, if the print date for an issue is set to June 1, then June 1 will be the billing date for that billing item.
The following example shows how the billing items have been sorted by Earliest billing date.
Traditionally, this is how magazines bill their clients. From a cashflow perspective it's not the best model though. Note that to the right of the billing model field, there is a dropdown for Payment terms. That field can certainly be set to Due upon receipt, but even that does not set a firm date for when the invoices have to be paid. And if it's set to 30 days, which is the standard in many countries, then you have to wait 30 days past your fulfillment date to get paid.
In advance
The billing model In advance sets today's date for all billing items. This is useful in case you want to bill the client immediately for the entire deal. Obviously, this model will be the most cashflow positive model for you.
In advance, per delivery
The billing model In advance, per delivery sets the billing date to 30 days prior to the Channel's fulfillment date. This is useful in case you want to drive a positive cashflow but still connect the billing to your fulfillment process.
50/50
The billing model 50/50 splits each original billing item into two. One is set to bill 30 days in advance and the other one is set to the fulfillment date. This is useful in case you want to improve your cashflow slightly compared to just billing upon fulfillment.
By installment
The billing model By installment allow you to bill in installments. This is especially useful in case you are creating large deals that contains content from several channels.
By default, the number of installments are set to 12 and today's date for the first payment, but this is something you can edit.
In the following example, we have changed the number of installments to three.
You can also edit the billing items, both the values and the items description, which is very useful if you're working with installments.
Don't forget to click the Update link to the left of the billing item, circled in green.
The value numbers will update as you update the billing item to make sure the total value number is remains the same as shown in the example below.
| https://docs.runmags.com/en/articles/1988704-billing-model | 2020-09-18T13:55:13 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://downloads.intercomcdn.com/i/o/62361130/86f7e2ed17facbe7e17340d7/At+delivery.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62361349/dddd2884490883155348b377/At+delivery+text.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62361645/1beee5a33514628c797bc659/In+advance.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62362135/f4c507fd28444bb1446f6f9b/In+advance+per+delivery.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62362285/4bbd33cd5c1058da709635a1/50-50.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62362891/8f613d81f66e8bf7e23e00d3/By+installment+12.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62363085/a0fc084b1c3526b4d8edd656/By+installment+3.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62363155/c6fde3e7502bca91cd12f676/By+installment+3+edit+1.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/62363281/9aeba865c89a8d1f64b7f255/By+installment+3+edit+2.png',
None], dtype=object) ] | docs.runmags.com |
Install the Camunda Modeler
This page explains how to install the Camunda Modeler for modeling BPMN 2.0 diagrams, CMMN 1.1 cases and DMN 1.3 decision tables.
Requirements
Operation Systems
Officially supported on the following operating systems:
- Windows 7\.
For evaluating DMN 1.3 Decisions created using Camunda Modeler, Process Engine version 7.13.0, 7.12.4, 7.11.11, 7.10.17 and above is required.
Note that you do not need to install the Process Engine if you do not want to execute the BPMN Diagrams or evaluate DMN Decisions.. | https://docs.camunda.org/manual/latest/installation/camunda-modeler/ | 2020-09-18T14:04:34 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.camunda.org |
Searching data / Building a query / Operations reference / String group / Contains tokens (toktains)
Contains tokens (toktains)
Description
You can apply this operation either as a Filter or Create column operation: | https://docs.devo.com/confluence/ndt/searching-data/building-a-query/operations-reference/string-group/contains-tokens-toktains | 2020-09-18T13:53:41 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.devo.com |
babel-preset-expoextends the default react-native preset and adds support for all other Expo platforms. In the browser this has massive performance benefits by enabling tree-shaking of the unused
react-native-webmodules.
@expo/webpack-configA default Webpack config that's optimized for running
react-native-webapps and creating progressive web apps.
jest-expoA universal solution for testing your code against all of the platforms it runs on. Learn more about Universal Testing.
# Make sure you can successfully install the native image editing library Sharp npm install -g sharp-cli # Then in your project run: npx expo-optimize
yarn add -D webpack-bundle-analyzer
expo customize:weband select
webpack.config.js.
const createExpoWebpackConfigAsync = require('@expo/webpack-config'); const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer'); module.exports = async (env, argv) => { const config = await createExpoWebpackConfigAsync(env, argv); // Optionally you can enable the bundle size report. // It's best to do this only with production builds because it will add noticeably more time to your builds and reloads. if (env.mode === 'production') { config.plugins.push( new BundleAnalyzerPlugin({ path: 'web-report', }) ); } return config; };
EXPO_WEB_DEBUG=true expo build:web
This will make your bundle much larger, and you shouldn't publish your project in this state.
expo build:weband serving it somewhere, run Lighthouse with the URL your site is hosted at.
lighthouse <url> --view | https://docs.expo.io/guides/web-performance/ | 2020-09-18T14:42:23 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.expo.io |
Key Features
- Correlates Site24x7 alerts to help you understand and respond faster to production issues.
- Allows you to act immediately when your monitors change status.
- Uses Site24x7 actions to forward alerts to BigPanda.
How It Works
The integration uses Site24x7 actions, which send an HTTP request to BigPanda on every change in a monitor's status. BigPanda then processes and correlates the alert data from Site24x7 to create and maintain up-to-date incidents in BigPanda.
How and When Alerts are Closed
BigPanda stays in sync with the status of Site24x7 monitors. An alert opens when a status changes to TROUBLE or DOWN and closes when the status is UP. Note this exception:
Site24x7 monitors that are set to Suspend do not generate alerts in BigPanda.
Installing The Integration
Administrators can install the integration by following the on-screen instructions in BigPanda. For more information, see Installing an Integration.
Site24x7 Data Model
BigPanda normalizes alert data from Site24x7 into tags. You can use tag values to filter the incident feed and to define filter conditions for Environments. The primary and secondary properties are also used during the correlation process.
Standard Tags
Uninstalling Site24x7
You must delete the BigPanda action from Site24x7 monitors Site24x7 UI.
Procedure
- Go to Admin > Inventory > Monitors.
- For each monitor that is connected with BigPanda:
- Open the monitor in edit mode.
- Under Configuration Profiles > Actions, click the X beside BigPanda.
- Click Save.
Post-Requisites
Delete the integration in BigPanda to remove the Opsview integration from your UI.
Updated about a year ago | https://docs.bigpanda.io/docs/site24x7?utm_source=site-partners-page | 2020-09-18T13:11:54 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.bigpanda.io |
Mixed Multinomial Logit Model
The Multinomial Logit Model estimates a single set of parameters. For example, if being used to predict preferences for different phones based on the prices and features of the phones it estimates a single parameter for price. Thus, it can be interpreted as assuming that everybody in the population is identical, with any differences in preferences reflecting random error (this is the Random Utility Theory interpretation of the model).
A 'mixed' logit model is a Generalization of the Multinomial Logit Model which accounts for Heterogeneity by estimating ranges of values of the parameters in the model. In this context the term 'mixed' means that the model that is estimated can be viewed as a combination (i.e., 'mixture') of multinomial logit models. Many different ways of accounting for heterogeneity have been developed for regression in general. In the case of the multinomial logit model, the most widely used mixture models are:
- Latent class logit, which assumes that the population contains a number of segments (e.g., a segment wanting low priced phones with few features and another segment willing to pay a premium for more features) and identifies the segments automatically.
- Random parameters logit, which assumes that the distribution of the parameters in the population is described by a multivariate normal distribution. This model is sometimes referred to in market research as Hierarchical Bayes, although this is a misnomer. | https://docs.displayr.com/wiki/Mixed_Multinomial_Logit_Model | 2020-09-18T14:00:10 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.displayr.com |
Social Links element allows you to add social icons to any page for sharing the content on that page. One can add any number of social links with this element and style it as per choice.
The element has three different tabs Social Profiles, Style Options and Color Options under the General tab.
In Social Profiles we have the following options:
- Social Icons: Add any number or social profiles by clicking on the Add Item button. This will open you to the option of adding Title, Icon and Link to the profile.
In Style Options we have the following options:
- Alignment: Set the alignment of the social profiles to Left, Center or Right.
- Display: Choose how you want to display Both(icon & title), Only icon or Only title.
- Icon Position: If you choose Both in display then choose where you want to position the icon Left, Right, Top or Bottom.
- Inner Padding: Choose how much space you want between the content and the box inside the box.
- Icon Size: Set the size of icon in pixels by sliding its value.
- Text Size: Set the size of text in pixels by sliding its value. This option is only available if you choose Both for display.
- Space Between: Set the amount of space between two different social profile icons.
- Border Style: Select the border type for the social icon among None, Solid, Double, Dotted or Dashed. By default its set to None.
- Border Width: After choosing the border type set the width of the border.
- Border Radius: Border radius will give your social icons rounded corners.
- Box Shadow: Add background shadow to the profiles.
- Hover Animation: Choose the animation on hover over the social profiles.
Now you can manage the color of the Social Icons under Colors Options tab, that has these options:
- Colors: Decide whether you want to show the social icons in there brand colors or customize them.
- Brand Colors: Choosing this will display the social profiles in their brand colors. You can also invert the colors if you choose Inverted colors.
- Custom Colors: Customize the social profiles to the color of your choice.
- Background Hover Colors: Set the hover colors to the social profiles.
For details on Design & Advanced tabs read the articles. | https://docs.joomdev.com/article/social-links-element/ | 2020-09-18T13:20:46 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://docs.joomdev.com/wp-content/uploads/2019/07/Pasted-into-Social-Links-Element.png',
None], dtype=object)
array(['https://docs.joomdev.com/wp-content/uploads/2019/07/2019-07-06_1755.png',
None], dtype=object) ] | docs.joomdev.com |
Kubermatic provides live updates of your Kubernetes cluster
without disrupting your daily business. The allowed updates are defined in the file
updates.yaml. You find it in your Kubermatic installer clone directory:
$ git clone [email protected]:kubermatic/kubermatic-installer.git $ cd kubermatic-installer/ $ ls charts/kubermatic/static/master/
The file contains the supported upgrade paths for Kubernetes. The file format is YAML.
updates: # ======= 1.12 ======= # Allow to change to any patch version - from: 1.12.* to: 1.12.* automatic: false # CVE-2018-1002105 - from: <= 1.12.2, >= 1.12.0 to: 1.12.3 automatic: true # Allow to next minor release - from: 1.12.* to: 1.13.* automatic: false # ======= 1.13 ======= # Allow to change to any patch version - from: 1.13.* to: 1.13.* automatic: false # Allow to next minor release - from: 1.13.* to: 1.14.* automatic: false # ======= 1.14 ======= # Allow to change to any patch version - from: 1.14.* to: 1.14.* automatic: false # Allow to next minor release - from: 1.14.* to: 1.15.* automatic: false
As you can see it is a list containing the keys
from,
to, and
automatic. The fields
from and
to contain patterns descibing the Kubernetes version numbers. These can be absolut,
contain wildcards, or be ranges. This way Kubermatic can check which updates are allowed for
the current version.
The field
automatic determines if an update has to be initiated manually or if the system will
do it immediatelly in case of a matching version path. So in case of the example above a cluster
running in any Kubernetes version from 1.12.0 to 1.12.2 would automatically upgrade to 1.12.3.
This way known vulnerabilities can be handled directly.
Note: The automatic update only updates the control plane. Kubelets on the nodes still have to be updated manually.
After editing the list Kubermatic has to be upgraded by using
helm.
$ cd kubermatic-installer/charts/kubermatic $ vim static/master/updates.yaml $ helm upgrade kubermatic .
Afterwards the new update paths are available. | https://docs.kubermatic.io/advanced/update_updates/ | 2019-09-15T12:16:22 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.kubermatic.io |
Occurs before a popup window is displayed.
Namespace: DevExpress.ExpressApp.Web
Assembly: DevExpress.ExpressApp.Web.v19.1.dll
public event EventHandler<PopupShowingEventArgs> PopupShowing
Public Event PopupShowing As EventHandler(Of PopupShowingEventArgs)
The PopupShowing event handler receives an argument of the PopupShowingEventArgs type. The following properties provide information specific to this event.
You can handle this event to access the XafPopupWindowControl object. An example of using the PopupShowing event is provided in the How to: Adjust the Size and Style of Pop-up Dialogs (ASP.NET) topic. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Web.PopupWindowManager.PopupShowing | 2019-09-15T12:10:49 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.devexpress.com |
Represents a Window Controller.
Creates an instance of the WindowController Window is assigned to the current WindowController... | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.WindowController._members | 2019-09-15T13:04:18 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.devexpress.com |
Category can be selected on viewer by starting to type the category name. Selection is made immediately when there is only one matching category left.
To switch to category selection mode simply type slash (/). Possible options are shown after Assigning: text in curly brackets and it is immediately revised once more letters are typed. When a category is selected you can keep on typing to select the item value. Following image shows this in action and below it is explanation how this all works.
Follow the category setup below to better understand this explanation. Starting by typing "/k" the input selection will shift from "Tokens" to "Keywords" (since K is unique). After that, typing "g" will assign the "good" keyword immediately to the image. Typing "b" will show B{ad,oring} in the info box and typing "a" or "o" next will complete the match and assign the result to the image. Typing "/p" will work similarly and show the partial category match "P{eople,laces} so you can type "e" or "l" to complete "People" or "Places" respectively.
Tokens
A..Z
Keywords
Good
Bad
Boring
People
George
Fred
Places
Internet
If you want to insert a new word or have words like boa and board you need to be able to type in the exact word you want to insert or select. This can be achieved by starting the word with double quote (") and ending with comma (,). If we would select boa immediately when it is typed you could not select board and otherwise we would be waiting for more key presses and you could not select boa.
If you type two '/'s in a row it will toggle between two different modes. The default mode described above and a category selection mode. In the latter mode we go straight back to category selection after a match. That way you can continually select items within different categories. It'll still do the fastest match possible so that typing "kbo" will still match "Keywords/Boring" in the example set above.
If the current input is blank (e.g. you're not in the middle of selecting anything) and you hit a function key F1 through F12 (or up to F35 if your keyboard supports), it will assign the last matched assignment to that key. You can apply the same assignment to new images just by hitting the shortcut key. To remove use shift modifier and the shortcut (Shift+F#). This is useful for quickly assigning frequently repeating items in your current set of images. It remembers both the category and the category item. These shortcuts are remembered until KPhotoAlbum is closed, there is currently no support for replacing the assigned shortcut. | https://docs.kde.org/trunk5/en/extragear-graphics/kphotoalbum/category-selectors-in-viewer.html | 2019-09-15T13:11:49 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
Method: channels.leaveChannel
Back to methods index
Leave a channel/supergroup
Parameters:
Return type: Updates(); $Updates = $MadelineProto->channels->leaveChannel(['channel' => InputChannel, ]);
Or, if you’re into Lua:
Updates = channels.leaveChannel({channel=InputChannel, })
Errors
This site uses cookies, as described in the cookie policy. By clicking on "Accept" you consent to the use of cookies. | https://docs.madelineproto.xyz/API_docs/methods/channels_leaveChannel.html | 2019-09-15T13:00:27 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.madelineproto.xyz |
Improved.
Cloud sandbox with production data
A common scenario when working with cloud sandboxes, especially when testing or troubleshooting, is the wish to have production data available. With this release, we add the ability to create a cloud cloud sandbox use..
Visual Studio Code AL Extension enhancements
With versioning check and backward compatibility, you, you can now use the common Break on Error, as well as Break on Write. You can also go to definition in the base application code and set breakpoints there.
IntelliSense enhancements
All properties in AL, both on hover and in IntelliSense, now have Help links that redirect you to the related online documentation. Furthermore, the documentation for AL language constructs is autogenerated and used for both online reference documentation and IntelliSense, ensuring up-to-date and aligned documentation.
Suggestions for Image properties in an extension now only propose the ones that can be used in the current context, displaying a warning for images that cannot be used in the current context, and you can preview images when using IntelliSense and on-hover..
.NET Interop
When working with Business Central solutions that target on-premises deployments, you can now add .NET Interop in AL code. Note that this implies that the solution cannot be moved to the cloud later without replacing the .NET Inter.
OData-bound actions in AL
It is now possible to declare OData bound actions in AL. A new attribute and a new AL type have been introduced to achieve this.
[ServiceEnabled] procedure CreateCustomerCopy(var actionContext : WebServiceActionContext) var createdCustomerGuid : Guid; customer : Record Customer; begin actionContext.SetObjectType(ObjectType::Page); actionContext.SetObjectId(Pages::Customer); actionContext.AddEntityKey(customer.fieldNo(Id), createdCustomerGuid); actionContext.SetResultCode(WebServiceActionResultCode::Created); end;
Tell us what you think
Help us improve Dynamics 365 Business Central by discussing ideas, providing suggestions, and giving feedback. Use the Business Central forum at. | https://docs.microsoft.com/en-us/business-applications-release-notes/october18/dynamics365-business-central/visual-studio-code-improvements | 2019-09-15T13:14:23 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['media/event-tracer.png', 'Event tracer Event tracer'],
dtype=object)
array(['media/go-to-definition-f12.gif',
'F12 Go to Definition for base application code F12 Go to Definition for base application code'],
dtype=object)
array(['media/help-link-from-intellisense.gif',
'Help link from IntelliSense Help link from IntelliSense'],
dtype=object)
array(['media/intellisense-preview-images.gif',
'Select and preview images with IntelliSense Select and preview images with IntelliSense'],
dtype=object)
array(['media/permissions-al-command.png',
'Visual Studio Code AL command for generating permissions file for extension objects Visual Studio Code AL command for generating permissions file for extension objects'],
dtype=object)
array(['media/dotnet-interop.png',
'.NET Interop in on-premises AL .NET Interop in on-premises AL'],
dtype=object)
array(['media/xliff-note.png',
'XLIFF translation file note tag XLIFF translation file note tag'],
dtype=object) ] | docs.microsoft.com |
SPAttachmentCollection class
Represents the collection of attachments for a list item.
Inheritance hierarchy
System.Object
Microsoft.SharePoint.SPAttachmentCollection
Namespace: Microsoft.SharePoint
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
Syntax
'Declaration Public Class SPAttachmentCollection _ Implements ICollection, IEnumerable 'Usage Dim instance As SPAttachmentCollection
public class SPAttachmentCollection : ICollection, IEnumerable
Remarks.
Examples.
Dim oSiteCollection As SPSite = SPContext.Current.Site Dim collWebsites As SPWebCollection = oSiteCollection.AllWebs Dim oWebsite As SPWeb = collWebsites("Site_Name") Dim oFolder As SPFolder = oWebsite.Folders("Shared Documents") For Each oWebsiteNext As SPWeb In collWebsites Dim oList As SPList = oWebsiteNext.Lists("List_Name") Dim collItem As SPListItemCollection = oList.Items Dim oListItem As SPListItem = collItem(0) Dim collAttachments As SPAttachmentCollection = oListItem.Attachments Dim collFiles As SPFileCollection = oFolder.Files For Each oFile As SPFile In collFiles Dim strFileName As String = oFile.Name Dim binFile As Byte() = oFile.OpenBinary() collFiles.Add(strFileName, binFile) Next oFile oListItem.Update() oWebsiteNext.Dispose() Next oWebsiteNext oWebsite.Dispose()
SPAttachmentCollection members
Microsoft.SharePoint namespace | https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-server/ms472013(v=office.15) | 2019-09-15T13:08:05 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
NtXxx Routines
This section describes the NtXxx versions of the Windows Native System Services routines. Most native system services routines have two versions, one of which has a name begins with the prefix Nt; the other version has a name that begins with the prefix Zw. For example, calls to NtCreateFile and ZwCreateFile perform similar operations and are, in fact, serviced by the same kernel-mode system routine..
The following table summarizes the NtXxx and ZwXxx versions of the routines:
Feedback | https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/ntxxx-routines?redirectedfrom=MSDN | 2019-09-15T13:45:38 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
Table of Contents
Besides the main view showing the currently selected device in a graphical and a tree view, KDE Partition Manager uses Qt™'s “dock widgets” or panels to display some information and allow selections. See the following screen shot for an overview of KDE Partition Manager's main window.
Menubar: The menu bar presents some custom and some non-standard menus to choose actions to perform. All commands are described in detail in Chapter 3, Command Reference.
Toolbar: KDE Partition Manager's tool bar is a standard tool bar. It can be fully customized; for details see the section called “The Settings and Help Menu”.
Devices Panel: This panel lists all devices found on your computer that KDE Partition Manager can handle. Select a device in this panel to view or modify it in the graphical device view or in the tree device view.
Graphical Device View: In this view KDE Partition Manager shows a graphical representation of the currently selected device. Each of the device's partitions has its own box with device node name (“sda1” for the first partition in the screenshot above) and usage information (the dark violet area in the screenshot).
Extended partitions are visually distinct by their extra border (light green in the screenshot above) around them.
You can select a partition by clicking on it in the graphical device view. A double click opens the partition's properties dialog. A right click shows the partition context menu.
Tree Device View: The tree device view shows extended information about each partition on the selected device. The currently selected partition is highlighted. Double-clicking a partition opens the partition's properties dialog. A right click shows the partition context menu.
Information Panel: The information panel shows some details about the currently selected device or partition. It is not enabled by default.
Pending Operations Panel: This panel lists all operations that will be executed once you choose → .
In the screenshot above, one operation is pending: If the user applies the operations now, the file system on /dev/sdb3 will be checked for errors and, if required, repaired.
Statusbar: The status bar shows how many operations are currently pending.
Log Output Panel: This panel shows log information. It is only of secondary importance for non-advanced users and is not enabled by default. | https://docs.kde.org/trunk5/en/extragear-sysadmin/partitionmanager/usermanual.html | 2019-09-15T13:06:25 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)
array(['mainwindow.png', 'Main Window'], dtype=object)] | docs.kde.org |
About the Exchange Online admin role
To help you administer Office 365, you can assign users permissions to manage your organization's email and mailboxes from the Exchange admin center. You do this by assigning them to the Exchange admin role.
Tip: When you assign someone to the Exchange admin role, also assign them to the Service admin role. This way they can see important information in the Microsoft 365 admin center, such as the health of the Exchange Online service, and change and release notifications.
Here are some of the key tasks users can do when they are assigned to the Exchange admin role:
Recover deleted items in a user mailbox - Admin Help
Set up an archive and deletion policy for mailboxes in your Office 365 organization.
Set up mailbox features such as the mailbox sharing policy: how users can share calendar and contacts information with others outside of your organization.
Set up "Send As" and "Send on Behalf" delegates for someone's mailbox. For example, an executive may want their assistant to have the ability to send mail on their behalf.
Create a shared mailbox so a group of people can monitor and send email from a common email address.
Office 365 email anti-spam protection and malware filters for the organization.
Manage Office 365 Groups
Exchange Online role groups
If you have a large organization, the Exchange admin might want to assign users to Exchange role groups. When an admin adds a user to a role group, the user gets permissions to perform certain business functions only members of that group can do.
For example, the Exchange admin might assign someone to the Discovery Management role group so they can perform searches of mailboxes for data that meets certain criteria. To learn more, see Permissions in Exchange Online and Manage Role Groups.
Feedback | https://docs.microsoft.com/en-us/office365/admin/add-users/about-exchange-online-admin-role?redirectSourcePath=%252fbg-bg%252farticle%252f%2525D0%2525B7%2525D0%2525B0-%2525D1%252580%2525D0%2525BE%2525D0%2525BB%2525D1%25258F%2525D1%252582%2525D0%2525B0-%2525D0%2525BD%2525D0%2525B0-%2525D0%2525B0%2525D0%2525B4%2525D0%2525BC%2525D0%2525B8%2525D0%2525BD%2525D0%2525B8%2525D1%252581%2525D1%252582%2525D1%252580%2525D0%2525B0%2525D1%252582%2525D0%2525BE%2525D1%252580-%2525D0%2525BD%2525D0%2525B0-exchange-online-097ae285-c4af-4319-9770-e2559d66e4c8&view=o365-worldwide | 2019-09-15T13:03:56 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
Follow this procedure to install New Relic APM's PHP agent using AWS Linux 2, RedHat, or CentOS. This is not the same as the CentOS procedures for New Relic Infrastructure.
If you have an earlier version of New Relic installed, upgrade the agent instead of following the instructions on this page.
Install the agent
Even though the package name for the PHP agent refers to PHP 5, the package works for all supported PHP versions, including PHP 7 versions.
- Make sure you have your New Relic license key accessible.
Use either of the following ways to obtain the installation package:
- Tell the package manager (rpm) about the New Relic repository
For 32-bit systems, run:
sudo rpm -Uvh
For 64-bit systems, run:
sudo rpm -Uvh
- Download the rpm file from the New Relic website
For 32-bit systems, download these three files from the 32-bit packages (replacing
X.X.X.Xwith the most recent PHP agent version number):
newrelic-php5-common-X.X.X.X-1.noarch.rpm
newrelic-daemon-X.X.X.X-1.i386.rpm
newrelic-php5-X.X.X.X-1.i386.rpm
For 64-bit systems, download these three files from the 64-bit packages (replacing
X.X.X.Xwith the most recent PHP agent version number):
newrelic-php5-common-X.X.X.X-1.noarch.rpm
newrelic-daemon-X.X.X.X-1.x86_64.rpm
newrelic-php5-X.X.X.X-1.x86_64.rpm
Install the agent and daemon using your preferred package manager:
- yum
sudo yum install newrelic-php5
The first time you install New Relic for PHP, yum prompts you to accept the New Relic public key. New Relic's key ID is
548C16BF.
- 32-bit rpm
Replace
X.X.X.Xwith the most recent PHP agent version number when you run this command:
rpm -i newrelic-php5-common-X.X.X.X-1.noarch.rpm newrelic-daemon-X.X.X.X-1.i386.rpm newrelic-php5-X.X.X.X-1.i386.rpm
- 64-bit rpm
Replace
X.X.X.Xwith the most recent PHP agent version number when you run this command:
rpm -i newrelic-php5-common-X.X.X.X-1.noarch.rpm newrelic-daemon-X.X.X.X-1.x86_64.rpm newrelic-php5-X.X.X.X-1.x86_64.rpm
- tarball
If yum and rpm do not work with your host config, install from the binary tarball.
Run the
newrelic-installscript and follow the instructions.
sudo newrelic-install install
Restart your web server (Apache, NGINX, PHP-FPM, etc.).
- Generate traffic to your application, and wait a few minutes for it to send data to New Relic. Then, check your app's performance in the New Relic UI.
For more help
Additional documentation resources include:
- No data appears (troubleshooting instructions for the PHP agent)
- PHP install script (more information about installing the agent)
- New Relic for PHP configuration (information on configuring your agent)
- PHP agent installation: non-standard PHP (install on a non-standard PHP configuration)
- Uninstall (uninstall the PHP agent) | https://docs.newrelic.com/docs/agents/php-agent/installation/php-agent-installation-aws-linux-redhat-centos | 2019-09-15T12:13:21 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.newrelic.com |
>>.0's documentation on Splunkbase.
As a best practice, it is recommended that indexers be at the same or higher version level than the forwarders they're receiving data
Hi we have installed splunk forwarder version 6.1 and we are using indexer 6.2.0 so will the data will be passing through the indexers?
Hi AaronMoorcroft,
Yes.
Will a 6. + forwarder work without issue with a 5.0.2 indexer ?
Hi Eiujmoore,
Yes, 6.2.2 forwarders should work without a problem on 6.0.5 indexers.
Will a 6.2.2 forwarder work with a 6.0.5 indexer? We will be upgrading soon, but need to install a couple of new forwarders and would like to avoid the need to upgrade those later.
Hi Ammula88,
6.2.1 forwarders should absolutely work with 6.2.0 indexers.
Is there a compatibility with 6.2.1 Universal forwarder and splunk 6.2.0 Indexers?
Hi Anandhscareer,
I'm not sure what you mean by 'passing through the indexers' but a 6.1.0 forwarder can send data to a 6.2.0 indexer without a problem. Let me know if that answers your question. | https://docs.splunk.com/Documentation/Splunk/6.2.1/Forwarding/Compatibilitybetweenforwardersandindexers | 2019-09-15T13:19:38 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
CG films and animations commonly feature highly realistic reflections, which are important for giving a sense of “connectedness” among the objects in the sceneA Scene contains the environments and menus of your game. Think of each unique Scene file as a unique level. In each Scene, you place your environments, obstacles, and decorations, essentially designing and building your game in pieces. More info
See in Glossary. However, the accuracy of these reflections comes with a high cost in processor time and while this is not a problem for films, it severely limits the use of reflective objects in realtime games.
Traditionally, games have used a technique called reflection mapping to simulate reflections from objects while keeping the processing overhead to an acceptable level. This technique assumes that all reflective objects in the scene can “see” (and therefore reflect) the exact same surroundings. This works quite well for the game’s main character (a shiny car, say) if it is in open space but is unconvincing when the character passes into different surroundings; it looks strange if a car drives into a tunnel but the sky is still visibly reflected in its windows.
Unity improves on basic reflection mapping through the use of Reflection ProbesA rendering component that captures a spherical view of its surroundings in all directions, rather like a camera. The captured image is then stored as a Cubemap that can be used by objects with reflective materials. More info
See in Glossary, which allow the visual environment to be sampled at strategic points in the scene. You should generally place them at every point where the appearance of a reflective object would change noticeably (eg, tunnels, areas near buildings and places where the ground colour changes). When a reflective object passes near to a probe, the reflection sampled by the probe can be used for the object’s reflection map. Furthermore, when several probes are nearby, Unity can interpolate between them to allow for gradual changes in reflections. Thus, the use of reflection probes can create quite convincing reflections with an acceptable processing overhead.
The visual environment for a point in the scene can be represented. This is conceptually like a box with flat images of the view from six directions (up, down, left, right, forward and backward) painted on its interior surfaces.
For an object to show the reflections, its shaderA small script that contains the mathematical calculations and algorithms for calculating the Color of each pixel rendered, based on the lighting input and the Material configuration. More info
See in Glossary must have access to the images representing the cubemap. Each point of the object’s surface can “see” a small area of cubemap in the direction the surface faces (ie, the direction of the surface normal vector). The shader uses the colour of the cubemap at this point in calculating what colour the object’s surface should be; a mirror material might reflect the colour exactly while a shiny car might fade and tint it somewhat.
As mentioned above, traditional reflection mapping makes use of only a single cubemap to represent the surroundings for the whole scene. The cubemap can be painted by an artist or it can be obtained by taking six “snapshots” from a point in the scene, with one shot for each cube face. Reflection probes improve on this by allowing you to set up many predefined points in the scene where cubemap snapshots can be taken. You can therefore record the surrounding view at any point in the scene where the reflections differ noticeably.
In addition to its view point, a probe also has a zone of effect defined by an invisible box shape in the scene. A reflective object that passes within a probe’s zone has its reflection cubemap supplied temporarily by that probe. As the object moves from one zone to another, the cubemap changes accordingly.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2018.3/Documentation/Manual/ReflectionProbes.html | 2019-09-15T12:59:25 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.unity3d.com |
New Relic Insights employs rate limiting to ensure a high level of availability and reliability for all users. The amount of requests you can healthily execute depends on the type of query. A query to Insights occurs anytime an event-driven query is executed; for example:
- Event charts rendered on an existing dashboard
- Charts rendered through the NRQL query bar
- Insights Query API
Most users rarely encounter rate limiting, especially if following.
If you are using a query that is rate limited through the Insights Query API, the API will return a 503 error. Dashboard charts in the New Relic UI may also display timeout error messages.
Inspected count limits
Inspected count rate limits are imposed on a per-account basis. Each Insights account has a limit to the total number of events that can be inspected within two different time frames: rolling 30-minute time windows and a 24-hour period. These limits are as follows:
Once the limit has been reached for a given time period, limiting will be imposed and some queries may be impacted. After the time period has passed, if query volume drops below the limit, restrictions will be removed automatically.
These limits were designed to allow users to leverage the flexibility of Insights and increase query load as necessary. With these limits, users are still able to run more complex queries, dashboards or scripts against the Insights Query API.
In Insights, you can see the inspected event count at the bottom of the query window. This metadata is included in the query response JSON as well as from the Insights API. Here is an example:
Query rate limits
The current limit on Insights queries is 50 queries per second, or 3000 queries per minute. Past this, New Relic cannot guarantee query performance, and you may be rate limited.
Event type limits
The current limit for total number of event types is 250 per account, in a given 24-hour time period. If a user exceeds this limit, New Relic may filter or drop data. Event types include:
- Default events from New Relic agents
- Custom events from New Relic agents
- Custom events from Insights custom event inserter | https://docs.newrelic.com/docs/insights/use-insights-ui/manage-account-data/rate-limits-insights | 2019-09-15T12:10:57 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/sites/default/files/thumbnails/image/insights-inspected-event-count-modal_0.png',
'New Relic Insights inspected event count New Relic Insights inspected event count'],
dtype=object) ] | docs.newrelic.com |
TSHttpTxnServerReqGet¶
Synopsis¶
#include <ts/ts.h>
- TSReturnCode
TSHttpTxnServerReqGet(TSHttpTxn txnp, TSMBuffer * bufp, TSMLoc * obj)¶
Description¶
Get the request Traffic Server is sending to the upstream (server) for the transaction txnp. bufp and obj should be valid pointers to use as return values. The call site could look something like
TSMBuffer mbuffer; TSMLoc mloc; if (TS_SUCCESS == TSHttpTxnServerReqGet(&mbuffer, &mloc)) { /* Can use safely mbuffer, mloc for subsequent API calls */ } else { /* mbuffer, mloc in an undefined state */ }
This call returns
TS_SUCCESS on success, and
TS_ERROR on failure. It is the
caller’s responsibility to see that txnp is a valid transaction.
Once the request object is obtained, it can be used to access all of the elements of the request,
such as the URL, the header fields, etc. This is also the mechanism by which a plugin can change the
upstream request, if done before the request is sent (in or before
TS_HTTP_SEND_REQUEST_HDR_HOOK). Note that for earlier hooks, the request may not yet
exist, in which case an error is returned. | https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSHttpTxnServerReqGet.en.html | 2019-09-15T13:00:06 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.trafficserver.apache.org |
Sets the current modelview matrix to the one specified.
This method is equivalent to
glLoadMatrix(mat) in OpenGL. In other graphics APIs, the corresponding functionality is emulated.
Because changing the modelview matrix overrides the view parameters of the current camera, it is recommended that you save and restore the matrix using GL.PushMatrix and GL.PopMatrix. | https://docs.unity3d.com/ja/2019.1/ScriptReference/GL.MultMatrix.html | 2019-09-15T12:15:32 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.unity3d.com |
these components:
- Private virtual IP address
- Port (a port which the service is available on)
- Service name
You can assign a VIP to your application from the DC/OS GUI. The values you enter when you deploy a new service are translated into these Marathon application definition entries:
portDef then.
In the example above, clients can access the service at
my-service.marathon.l4lb.thisdcos.directory:5555.
Click REVIEW & RUN and RUN SERVICE.
You can click on the Networking tab to view networking details for your service. (e.g.. | http://docs-staging.mesosphere.com/mesosphere/dcos/1.10/networking/load-balancing-vips/virtual-ip-addresses/ | 2019-09-15T13:39:05 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/mesosphere/dcos/1.10/img/vip-service-definition-output.png',
'VIP output'], dtype=object) ] | docs-staging.mesosphere.com |
Cross-Validation
In cross-validation, Training Server follows these steps:
- It builds one model using all of the data.
- It divides the data into x partitions, where x = 3, 5, or 10.
- It builds a number of partial models: as many as there are partitions, each one using a different combination of x -1 partitions.
For example, if the data is divided into the three partitions A, B, and C, Training Server builds model X using partitions A and B, model Y using partitions A and C, and model Z using partitions B and C.
- It tests each of these partial models against the partition that it omitted when it was built.
In the example, it tests model X against partition C, model Y against partition B, and model Z against partition A.
- It aggregates the results of all these tests and presents them as the rating of the entire model.
These ideas underlie the concept of cross-validation:
- The best way to test a model is to apply it to data that was not used in building the model.
- A model built using most of the data is usefully similar to the model built using all of the data, so the results of testing (for example) all possible 90-percent models are a good indication of the quality of the 100-percent model.
Because cross-validation adds to the time required to build a model, you may not want to select cross-validation for very large training objects or for objects for which you selected training quality Regular Level 6.
This page was last modified on December 11, 2018, at 11:51.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/ESDA/8.5.3/ContAn/xVal | 2019-09-15T12:22:15 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.genesys.com |
Understand shared IP addresses in Azure DevTest Labs
Azure DevTest Labs lets lab VMs share the same public IP address to minimize the number of public IP addresses required to access your individual lab VMs. This article describes how shared IPs work and their related configuration options.
Shared IP setting
When you create a lab, it's created in a subnet of a virtual network. By default, this subnet is created with Enable shared public IP set to Yes. This configuration creates one public IP address for the entire subnet. For more information about configuring virtual networks and subnets, see Configure a virtual network in Azure DevTest Labs.
For existing labs, you can enable this option by selecting Configuration and policies > Virtual Networks. Then, select a virtual network from the list and choose ENABLE SHARED PUBLIC IP for a selected subnet. You can also disable this option in any lab if you don't want to share a public IP address across lab VMs.
Any VMs created in this lab default to using a shared IP. When creating the VM, this setting can be observed in the Advanced settings page under IP address configuration.
- Shared: All VMs created as Shared are placed into one resource group (RG). A single IP address is assigned for that RG and all VMs in the RG will use that IP address.
- Public: Every VM you create has its own IP address and is created in its own resource group.
- Private: Every VM you create uses a private IP address. You can't connect to this VM directly from the internet with Remote Desktop.
Whenever a VM with shared IP enabled is added to the subnet, DevTest Labs automatically adds the VM to a load balancer and assigns a TCP port number on the public IP address, forwarding to the RDP port on the VM.
Using the shared IP
Linux users: SSH to the VM by using the IP address or fully qualified domain name, followed by a colon, followed by the port. For example, in the image below, the RDP address to connect to the VM is
mydevtestlab597975021002.eastus.cloudapp.azure.com:50661.
Windows users: Select the Connect button on the Azure portal to download a pre-configured RDP file and access the VM.
Next steps
Feedback | https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-shared-ip | 2019-09-15T12:37:55 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['media/devtest-lab-shared-ip/lab-subnet.png', 'New lab subnet'],
dtype=object)
array(['media/devtest-lab-shared-ip/new-vm.png', 'New VM'], dtype=object)] | docs.microsoft.com |
Custom Events
The Exceptions dialog box, which opens from the Exceptions property, contains an Event button where you can define a custom event in addition to selecting a predefined event. Use this dialog box to define which exceptions or events to handle.
- In the block, click opposite Exceptions under Value.
- Click the ... button to bring up the Exceptions dialog box.
- In the Exceptions dialog box, click Event.
- In the resulting dialog box, name the event and click OK. The event name appears in the Name column.
- When through in the dialog box, click OK.
- Once a custom event is added to the list of exceptions in a block, you will see an exception port for this event (or exception) on the block, which you can now connect to another block to handle that special condition.
This page was last modified on May 11, 2017, at 16:23.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/Composer/8.1.4/Help/CustomEvents | 2019-09-15T12:00:02 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.genesys.com |
Prior to the introduction of GDPR in May 2018, Litium's platform and cloud services will provide features supporting our customers' in their efforts to become GDPR-compliant.
The supporting features will be released in the upcoming version Litium 6 and for our supported versions Litium 5 and Litium 4.8.
More detailed information will be sent shortly by email and will also be published here on KC. Do you have any questions so far please contact Christian Rosendahl at [email protected] or 076-5258373.
About Litium
Join the Litium team
Support | https://docs.litium.com/news/litium-s-support-for-gdpr-compliance | 2019-09-15T12:32:27 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.litium.com |
>> master causes the cache manager to protect buckets that contain recent data over other buckets. When eviction is necessary, the cache manager will not evict buckets until their configured retention periods have passed, unless all other buckets have already been evicted.
Similarly,.
Details of settings that control retention based on data recency
The
hotlist_recency_secs setting affects the cache retention period for warm buckets. The cache manager attempts to defer bucket eviction until the interval between the bucket's latest time and the current time exceeds this setting. This setting defaults to 86400 seconds, or 24 hours..
You configure each setting in
server.conf or
indexes.conf, depending on whether you want to configure the setting for all SmartStore indexes or for individual indexes.
Configure recency recency: 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.3.0, 7.3.1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/7.3.0/Indexer/ConfigureSmartStorecachemanager | 2019-09-15T12:47:35 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
call Android functions written in C/C++ directly from C# scriptsA piece of code that allows you to create your own Components, trigger game events, modify Component properties over time and respond to user input in any way you like. More info
See in Glossary (Java functions can be called indirectly).
To find out how to make these functions accessible from within Unity, visit the Android plug-ins page.
Unity includes support for occlusion culling, which is a valuable optimization method for mobile platforms.
Refer to the Occlusion CullingA on Android.
Refer to the Customizing an Android Splash Screen Manual page for more information.
The Android troubleshooting guide helps you discover the cause of bugs as quickly as possible. If, after consulting the guide, you suspect the problem is being caused by Unity, file a bug report following the Unity bug reporting guidelines.
See the Android bug reporting page for details about filing bug reports.
Ericsson Texture CompressionA method of storing data that reduces the amount of storage space it requires. See Texture Compression, (ETC) is the standard texture compression3D Graphics hardware requires Textures to be compressed in specialised formats which are optimised for fast Texture sampling. More info
See in Glossary format on Android.
ETC1 is supported on all current Android devices, but it does not support textures that have an alpha channel. ETC2 is supported on all Android devices that support OpenGL ES 3.0. It provides improved quality for RGB textures, and also supports textures with an alpha channel.
By default, Unity uses ETC1 for compressed RGB textures and ETC2 for compressed RGBA textures. If ETC2 is not supported by an Android device, the texture is decompressed at run time. This has an impact on memory usage, and also affects rendering are all support textures with an alpha channel. These formats also support higher compression rates and/or better image quality, but they are only supported on a subset of Android devices.
It is possible to create separate Android distribution archives (.apk) for each of these formats and let the Android Market’s filtering system select the correct archives for different devices.
We recommend you use the Video Player to play video files. This supersedes the earlier Movie Texture feature.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2018.2/Documentation/Manual/android-GettingStarted.html | 2019-09-15T12:16:44 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.unity3d.com |
Queue.
Message Category Queue.
Message Category Queue.
Message Category Queue.
Property
Category
Definition
Gets or sets the queue category.
public: property Guid Category { Guid get(); void set(Guid value); };
[System.Messaging.MessagingDescription("MQ_Category")] public Guid Category { get; set; }
member this.Category : Guid with get, set
Public Property Category As Guid
Property Value
A Guid that represents the queue category (Message Queuing type identifier), which allows an application to categorize its queues. The default is
Guid.empty.
Exceptions
The queue category was set to an invalid value.
An error occurred when accessing a Message Queuing method.
Examples
The following code example gets and sets the value of a message queue's Category property.
// Set the queue's Category property value. queue.Category = new System.Guid("00000000-0000-0000-0000-000000000001"); // Display the new value of the queue's Category property. Console.WriteLine("MessageQueue.Category: {0}", queue.Category);
null.
Setting this property modifies the Message Queuing queue. Therefore, any other MessageQueue instances are affected by the change.
The following table shows whether this property is available in various Workgroup modes. | https://docs.microsoft.com/en-us/dotnet/api/system.messaging.messagequeue.category?view=netframework-4.7.1 | 2019-09-15T13:24:05 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
Site
Map
Site Data Source. Start From Current Node Map
Site Data Source. Start From Current Node Map
Site Data Source. Start From Current Node Map
Property
Data Source. Start From Current Node
Definition
Gets or sets a value indicating whether the site map node tree is retrieved using the node that represents the current page.
public: virtual property bool StartFromCurrentNode { bool get(); void set(bool value); };
public virtual bool StartFromCurrentNode { get; set; }
member this.StartFromCurrentNode : bool with get, set
Public Overridable Property StartFromCurrentNode As Boolean
Property Value
true if the node tree is retrieved relative to the current page; otherwise,
false. The default is
false.
Remarks
The StartFromCurrentNode property is evaluated during calls to the GetView and the GetHierarchicalView methods to help determine which site map node to use as a starting point to build the node tree. The StartFromCurrentNode and StartingNodeUrl properties are mutually exclusive - if you set the StartingNodeUrl property, ensure that the StartFromCurrentNode property is
false.
The value of the StartFromCurrentNode property is stored in view state. | https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.sitemapdatasource.startfromcurrentnode?view=netframework-4.7.2 | 2019-09-15T13:04:56 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
Executed
Routed
Executed Event Args. Command Routed
Executed Event Args. Command Routed
Executed Event Args. Command Routed
Property
Event Args. Command
Definition
Gets the command that was invoked.
public: property System::Windows::Input::ICommand ^ Command { System::Windows::Input::ICommand ^ get(); };
public System.Windows.Input.ICommand Command { get; }
member this.Command : System.Windows.Input.ICommand
Public ReadOnly Property Command As ICommand
Property Value
Examples
The following example creates an ExecutedRoutedEventHandler that handles multiple commands. The handler checks the Command property on the ExecutedRoutedEventArgs to determine which method to call.
private void ExecutedDisplayCommand(object sender, ExecutedRoutedEventArgs e) { RoutedCommand command = e.Command as RoutedCommand; if(command != null) { if(command == MediaCommands.Pause) { MyPauseMethod(); } if(command == MediaCommands.Play) { MyPlayMethod(); } if(command == MediaCommands.Stop) { MyStopMethod(); } } }
Private Sub ExecutedDisplayCommand(ByVal sender As Object, ByVal e As ExecutedRoutedEventArgs) Dim command As RoutedCommand = TryCast(e.Command, RoutedCommand) If command IsNot Nothing Then If command Is MediaCommands.Pause Then MyPauseMethod() End If If command Is MediaCommands.Play Then MyPlayMethod() End If If command Is MediaCommands.Stop Then MyStopMethod() End If End If End Sub
Remarks
The command associated with the event can be cast to the specific implementation of ICommand, such as a RoutedCommand, if the type is known. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.input.executedroutedeventargs.command?view=netframework-4.8 | 2019-09-15T12:59:22 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
Image Gallery
The Image Gallery feature allows you to create a swipeable gallery of a group of image files through a custom URL scheme. The resulting URL can be set as the app's homepage, a custom link in the bottom navigation bar, or as a link in an HTML page.
Image files must be stored locally on the device. Supported formats include .jpg and .png.
Using this URL scheme, you can set the path to your images, set an overlay image that will appear on every slide (optional), and choose the opacity for the overlay image (optional).
kiosk-igallery://?gallery-path=[path-to-image-folder]&overlay-path=[path-to-overlay-image]&overlay-alpha=[0-1]
Text in the square brackets in the string shown above must be updated to reflect your content/preferences and the square brackets removed. Ampersands ('&') are used to separate parameters.
Initial Protocol
kiosk-igallery://?
The initial protocol instructs Kiosk Pro to create the image gallery. The entire string, including the '?', must be included before any of the following parameters.
Gallery Path
gallery-path=[path-to-image-folder]
The gallery path is where the images are located. If you have a folder in the Kiosk Pro documents folder called "slideshow", than this portion of the scheme would be:
gallery-path=slideshow
To show images that are located directly in Kiosk Pro's documents folder, you can use:
kiosk-igallery:// or
kiosk-igallery://?gallery-path
You can define the order in which images are shown by naming the files alphabetically, or serially using numbers (for example, ‘1.jpg’, ‘2.jpg’, ‘3.jpg’, etc.).
You can also use an .xml file stored in the same folder to define the order:
<xml> <slide_show_settings> <slides> <slide path='slide10.jpg'/> <slide path='slide38.jpg'/> <slide path='slide7.png'/> <slide path='slide14.png'/> images being shown. Additional images can be defined by adding additional
<slide path='content.ext'/> entries.
Overlay Image
overlay-path=[path-to-overlay-image]
The overlay image is an image that will be displayed in front of every image in the gallery. We recommend making this a transparent PNG file the same resolution as your iPad. If your overlay image is called "overlay.png" and is directly in Kiosk Pro's documents folder, this portion of the scheme would be:
overlay-path=overlay.png
Overlay Opacity
overlay-alpha=[0-1]
The overlay opacity lets you set how opaque the image is. With an opacity of 1, the image will be completely visible, and an opacity of 0 would make it completely invisible. The number set here can be 0, 1 or a decimal (like 0.8). If you wanted the opacity to be at 50%, this portion of the scheme would be:
overlay-alpha=0.5
If an overlay opacity is not set in the scheme, the default will be 1 (completely visible).
Example
For a folder of images in the Kiosk Pro documents folder called "slideshow", and an overlay image directly in the documents folder called "overlay.png" which you wanted to be 50% opaque, your scheme would be:
kiosk-igallery://?gallery-path=slideshow&overlay-path=overlay.png&overlay-alpha=0.5 | https://docs.kioskproapp.com/article/860-image-gallery | 2019-08-17T16:58:19 | CC-MAIN-2019-35 | 1566027313436.2 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55843a0fe4b027e1978e93c6/images/55954bf8e4b0f49cc3ffe04b/file-tpRjEsTCe3.png',
None], dtype=object) ] | docs.kioskproapp.com |
FireOS Getting Started
Learn more about installing using Gradle (or manually), initializing it in your FireOS)
Optionally, add Events to your app. In addition to a rich set of event for Mobile Commerce, Drones, Bitcoin etc. available right out-of-the-box (events api), Pyze provides the ability to send custom events using the postCustomEvent method in the PyzeEvents class. Pyze Events are avilable to all apps and do not require any instrumentation.
Events Overview is available in the api & events.
5. Enable Push Notifications, In-App Notifications and Attribution
Pyze delivers intelligence-driven marketing and growth automation, so you can build meaningful relationships with your users.
- Develop meaningful relationships with your users by enabling In-app Notifications.
- Enable Personalization for FireOS.
- Track referral sources for App Installs by enabling Pyze Attribution Tracking for FireOS.
Install
Get Pyze App Key
Get a Pyze App Key (PAK)_3<< FireOS Guide.
- In the Main Activity for your project, make the following additions:
Add the following import statement in the Main Activity:
import com.pyze.android.*;
- Add the following Pyze.initialize statement in the Main Activity’s onCreate method:
public void onCreate(Bundle savedInstanceState) { // // //... Pyze.initialize(getApplication()); }
Build and Go!
Add Events & Attribution
Pyze FireOS()); }
For example, to track ad request use:
import com.pyze.android.PyzeEvents; import java.util.HashMap; //... //... // Pyze.initializeEvents(getApplication()); //Initialize in onCreate method of main activity // // Definition of postAdRequested method in PyzeAd curated class // // public static void postAdRequested( // java.lang.String adNetwork, // java.lang.String appScreen, // java.lang.String size, // java.lang.String type, // java.util.HashMap;<String, String>; // Add required attributes and add HashMap <String, String> attributes = new HashMap<String, String>(); attributes.put("device", "Samsung Galaxy S6"); attributes.put("power user index", "7.5"); attributes.put("interests", "traveling");; //... //... // Pyze.initializeEvents(getApplication()); //Initialize in onCreate method of main activity // // FireOS"}));) | https://docs.pyze.com/fireos.html | 2019-08-17T17:12:49 | CC-MAIN-2019-35 | 1566027313436.2 | [array(['images/android/Getting-Started.png', None], dtype=object)
array(['images/android/Events-Push-InApp.png', None], dtype=object)
array(['images/product/AddAppCropped.png', None], dtype=object)
array(['images/product/AppProfile.png', None], dtype=object)
array(['images/android/eclipse-install.jpg', None], dtype=object)] | docs.pyze.com |
.
NOTE: we strongly recommend testing all PHP code using a local or staging/sandbox environment and only install on your live site once it has been thoroughly vetted.
Adding PHP Using The Code Snippets Plugin
The Code Snippets plugin is a great way to add PHP snippets to your website. You can activate and deactivate certain snippets, and even adds notes about what they do. It’s also more forgiving when handling PHP errors and avoiding the white screen of death.
Adding PHP Using Functions.php File
We do not recommend editing your parent theme’s functions.php file directly as you’ll end up losing those edits whenever you update your theme. Instead, use a child theme and add custom snippets to its functions.php file. | https://docs.ventureeventmanager.com/adding-php/ | 2019-08-17T16:59:11 | CC-MAIN-2019-35 | 1566027313436.2 | [] | docs.ventureeventmanager.com |
Integrating with OpenStack Keystone¶
It is possible to integrate the Ceph Object Gateway with Keystone, the OpenStack identity service. This sets up the gateway to accept Keystone as the users authority. A user that Keystone authorizes to access the gateway will also be automatically created on the Ceph Object Gateway (if didn’t exist beforehand). A token that Keystone validates will be considered as valid by the gateway.
The following configuration options are available for Keystone integration:
[client.radosgw.gateway] rgw keystone api version = {keystone api version} rgw keystone url = {keystone server url:keystone server admin port} rgw keystone admin token = {keystone admin token} rgw keystone admin token path = {path to keystone admin token} #preferred rgw keystone accepted roles = {accepted user roles} rgw keystone token cache size = {number of tokens to cache} rgw keystone revocation interval = {number of seconds before checking revoked tickets} rgw keystone implicit tenants = {true for private tenant for each new user} nss db path = {path to nss db}
It is also possible to configure a Keystone service tenant, user & password for
keystone (for v2.0 version of the OpenStack Identity API), similar to the way
OpenStack services tend to be configured, this avoids the need for setting the
shared secret
rgw keystone admin token in the configuration file, which is
recommended to be disabled in production environments. The service tenant
credentials should have admin privileges, for more details refer the Openstack
keystone documentation, which explains the process in detail. The requisite
configuration options for are:
rgw keystone admin user = {keystone service tenant user name} rgw keystone admin password = {keystone service tenant user password} rgw keystone admin password = {keystone service tenant user password path} # preferred rgw keystone admin tenant = {keystone service tenant name}
A Ceph Object Gateway user is mapped into a Keystone
tenant. A Keystone user
has different roles assigned to it on possibly more than a single tenant. When
the Ceph Object Gateway gets the ticket, it looks at the tenant, and the user
roles that are assigned to that ticket, and accepts/rejects the request
according to the
rgw keystone accepted roles configurable.
For a v3 version of the OpenStack Identity API you should replace
rgw keystone admin tenant with:
rgw keystone admin domain = {keystone admin domain name} rgw keystone admin project = {keystone admin project name}
Prior to Kilo¶
Keystone itself needs to be configured to point to the Ceph Object Gateway as an object-storage endpoint:
keystone service-create --name swift --type object-store keystone endpoint-create --service-id <id> --publicurl \ --internalurl --adminurl
As of Kilo¶
Keystone itself needs to be configured to point to the Ceph Object Gateway as an object-storage endpoint:
openstack service create --name=swift \ --description="Swift Service" \ object-store +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | Swift Service | | enabled | True | | id | 37c4c0e79571404cb4644201a4a6e5ee | | name | swift | | type | object-store | +-------------+----------------------------------+ openstack endpoint create --region RegionOne \ --publicurl "" \ --adminurl "" \ --internalurl "" \ swift +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | adminurl | | | | +--------------+------------------------------------------+ $ openstack endpoint show object-store +--------------+------------------------------------------+ | Field | Value | +--------------+------------------------------------------+ | adminurl | | | enabled | True | | | +--------------+------------------------------------------+
Note
If your radosgw
ceph.conf sets the configuration option
rgw swift account in url = true, your
object-store
endpoint URLs must be set to include the suffix
/v1/AUTH_%(tenant_id)s (instead of just
/v1).
The keystone URL is the Keystone admin RESTful API URL. The admin token is the token that is configured internally in Keystone for admin requests.
The Ceph Object Gateway will query Keystone periodically for a list of revoked
tokens. These requests are encoded and signed. Also, Keystone may be configured
to provide self-signed tokens, which are also encoded and signed. The gateway
needs to be able to decode and verify these signed messages, and the process
requires that the gateway be set up appropriately. Currently, the Ceph Object
Gateway will only be able to perform the procedure if it was compiled with
--with-nss. Configuring the Ceph Object Gateway to work with Keystone also
requires converting the OpenSSL certificates that Keystone uses for creating the
requests to the nss db format, for example:
mkdir /var/ceph/nss openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \ certutil -d /var/ceph/nss -A -n ca -t "TCu,Cu,Tuw" openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \ certutil -A -d /var/ceph/nss -n signing_cert -t "P,P,P"
Openstack keystone may also be terminated with a self signed ssl certificate, in
order for radosgw to interact with keystone in such a case, you could either
install keystone’s ssl certificate in the node running radosgw. Alternatively
radosgw could be made to not verify the ssl certificate at all (similar to
openstack clients with a
--insecure switch) by setting the value of the
configurable
rgw keystone verify ssl to false.
Keystone integration with the S3 API¶
It is possible to use Keystone for authentication even when using the
S3 API (with AWS-like access and secret keys), if the
rgw s3 auth
use keystone option is set. For details, see
Authentication and ACLs. | http://docs.ceph.com/docs/master/radosgw/keystone/ | 2019-02-15T22:29:04 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.ceph.com |
Customize Appearance
Search Boost uses templates to render both the input box and the search results. The templates provide full customization capabilities because they are made of:
- XSL Template that is used to transform the XML result used to generate the HTML structure;
- CSS File that applies styles to the HTML structure.
All templates are located at
WebsiteRoot/DesktopModules/DnnSharp/SearchBoost/templates. You can create your own templates or develop an understanding of the customization capabilities, which is the place to start. The two subfolders have similar templates but they are used for different purposes:
Input Folder. The Input Folder contains templates for the Search Input Box. The templates include a text box, a search button, the optional portal filter, the settings link (for skin objects) and other custom elements such as javascript effects, image, and others. Read more about Search Box templates.
Output Folder. The Output Folder contains templates for the Search Results. They display a list of entries with pagination and some text summaries. Each entry features a title, a small description and a link. This type of template is less restrictive - there are people that built templates which are photo albums or product listings. Read more about Search Results templates.
There is more to appearance than templates. There are settings to control the page size, maximum description length and allows HTML in it and so on. Read Search Box and Search Result pages for more information.
Most of the appearance settings are grouped under UI Settings tab of the Search Boost Administration Console as shown below. Note this is subject to change in future releases as we plan to better separate settings based on topic. You can access the Search Boost Administration Console through the Search Input box on your Search Boost portal. Follow the directions below to open UI Settings.
Open UI Settings
In your Search Boost portal, go to your Search Input box and select the Manage > Search Settings option to open Search Boost Administration Console;
Click on the UI Settings tab to open UI Settings. | https://docs.dnnsharp.com/search-boost/customize-appearance/custom_appearance.html | 2019-02-15T21:44:56 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.dnnsharp.com |
Edit Emails
From PeepSo Docs
Administrators can edit PeepSo’s email templates and easily change their content. Emails can be personalised with tokens like {sitename} which will display the site name and {userfirstname} to display the user’s first name in the email.
Contents
Allowed Tokens
The following tokens can be used within the content of emails:
- {date} – Current date in the format that WordPress displays dates.
- {datetime} – Current date and time in the format that WordPress displays dates with time.
- {sitename} – Name of your site from the WordPress title configuration.
- {siteurl} – URL of your site.
- {unsubscribeurl} – URL to receiving user’s Alert Configuration page.
- {year} – The current four digit year.
- {permalink} – Link to the post, comment or other item referenced; context specific.
These are referring to the user causing the alert, such as “{fromlogin} liked your post…”:
- {fromemail} – Message sender’s email address.
- {fromfullname} – Message sender’s full name.
- {fromfirstname} – Message sender’s first name.
- {fromlastname} – Message sender’s last name.
- {fromlogin} – Message sender’s username.
These are referring to the receiving user on all messages, such as “Welcome {userfirstname}…”:
- {useremail} – Message recipient’s email address.
- {userfullname} – Message recipient’s full name.
- {userfirstname} – Message recipient’s first name.
- {userlastname} – Message recipient’s last name.
- {userlogin} – Message recipient’s username. | https://docs.peepso.com/wiki/Edit_Emails | 2019-02-15T21:33:31 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.peepso.com |
Discover Media - Overview
Overview
Before using a new media, the MediaAgent must collect certain information about it through a process known as discovery. When a media has been discovered its information is entered into the CommServe database. The media information is permanently retained; media does not have to be rediscovered if it is exported from the library and re-imported.
If new media are imported through a library’s mail slot, the import operation triggers a discover operation. This is dependent on whether you have enabled or disabled the Enable Auto-Discover option for the library. (For more information on this option, see Library Properties - Media tab.)
- If the automatic discovery option is not enabled, the system will prompt you to provide the necessary details for the media.
- If the automatic discovery option is enabled, the system discovers the media during a subsequent inventory update triggered by a job from the CommCell.
If the automatic discovery option is not enabled for the library and if you have some undiscovered media from a previous import, or if you import new media by opening the library door and inserting them, you must initiate a discover operation.
Media can be discovered from both the Expert Storage Configuration window and the CommCell Browser.
Discover Cleaning Media
When you discover cleaning media, the system automatically assigns it to the Cleaning Media pool.
Related Alerts
You can generate a 'Media Inventory' alert when an inventory operation completes successfully or completes with errors or if an inventory operation fails, fails to start, or is killed by a user. Refer to Media Inventory for a list of Available Alerts.
Refer to Alerts and Notifications for comprehensive information on setting up Alerts. | http://docs.snapprotect.com/netapp/v11/article?p=features/media_operations/discover_media/discover_media.htm | 2019-02-15T21:38:43 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.snapprotect.com |
Previous/Next Fulltext Content
- Last Modified:
- 01 Feb 2019
- User Level:
- Power User
Description
With the Previous/Next Fulltext Content Navigation Object you can output "Previous" and "Next" links to enable users to navigate through the fulltext content items in a Section. This is useful in cases where a user would want to view fulltext content in a particular sequence.
The one Navigation Object can be configured to either output the "Previous" link, "Next" link or both links.
How to Create a Previous/Next Fulltext Content Object
To create this object, go to Assets > Navigation and click Create New Navigation and select Previous/Next Fulltext. | https://docs.terminalfour.com/documentation/navigation/create-a-new-navigation-object/previousnext-fulltext-content-navigation-object/ | 2019-02-15T21:43:22 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.terminalfour.com |
Why phones beat fobs when it comes to protecting sensitive patient data
From time to time, it's helpful to look at what other health systems around the world are doing with technology, as a way of studying the kinds of services that cloud technologies make possible. Here's an inspiring health story from the U.S.
When Presence Health wanted to give its clinical staff access to patient records across all of its facilities, it needed to certain those records would be secure. So it turned to the cloud.
Presence is the largest healthcare organisation in Illinois, with more than 150 locations, including 12 hospitals and 27 long-term care facilities. It needed a single system that everyone could use. The system also needed to be flexible enough to be configurable to work even in remote areas with poor phone service. And Presence didn’t want to spend a bundle on new hardware. For years, the company had relied on an old-fashion fob-based security system. But recently they opted to use a phone-based access management system using Windows Azure. Here’s why:
- It was less expensive. Relying on the cloud meant that Presence didn’t have to buy any new costly hardware.
- It’s phone-based. Employees are already used to carrying around smartphones and they tend to keep a close eye on their devices, so it’s a natural fit for a security system. It’s also a familiar piece of equipment, so little training is needed.
- It’s easy to use. Under the old system, employees had to use small physical fobs to log in. The fobs were easy to misplace and difficult to use, requiring too many steps to log in.
The new Windows Azure Multi-Facto Authentication system is simple and fast. Getting patient records at the point of care was finally easy, fast and secure.
“Given the critical nature of healthcare, it is essential that we provide constantly available access for users, and we can do that with Windows Azure Multi-Factor Authentication. This is a highly available, reliable solution that gives us the confidence we need,” says Mike Baran, System Director, Technology, Presence Health.
Learn more about the Presence Health story. | https://docs.microsoft.com/en-us/archive/blogs/microsoft_uk_health_blog/why-phones-beat-fobs-when-it-comes-to-protecting-sensitive-patient-data | 2020-07-02T20:08:16 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Configure Event De-duplication in the Alert Builder
The Alert Builder Moolet assembles alerts from incoming events, sent by the LAMs across the Message Bus. These alerts are visible through the Alert View in the User Interface (UI). The Alert Builder Moolet is also responsible for:
Updating all the necessary data structures.
Ensuring copies of the old alert state are stored in the snapshot table in MoogDb, relevant events are created and the old alert record is updated to reflect the new events arriving into Moogsoft AIOps.
Configure Alert Builder
Edit the configuration file at
$MOOGSOFT_HOME/config/moolets/alert_builder.conf.
See Alert Builder Reference for a full description of all properties. Some properties in the file are commented out by default.
Example Configuration
The following example demonstrates a simple Alert Builder configuration:
{ name : "AlertBuilder", classname : "CAlertBuilder", run_on_startup : true, moobot : "AlertBuilder.js", event_streams : [ "AppA" ], threads : 4, metric_path_moolet : true, events_analyser_config : "events_analyser.conf", priming_stream_name : null, priming_stream_from_topic : false }
Alert Builder Moobot
The Moobot,
AlertBuilder.js, is associated with the Alert Builder Moolet. It undertakes most of the activity of the Alert Builder. When the Alert Builder Moolet processes an event, it calls the JavaScript function,
newEvent:
events.onEvent ( "newEvent" , constants.eventType( "Event" )).listen();
The function
newEvent contains a call to create an alert. The newly created alert is broadcast on the Message Bus.
Learn More
See the following topics for more information:
Alert Builder Reference for all Alert Builder properties.
Moobot Modules for further information about Moobots. | https://docs.moogsoft.com/AIOps.7.3.0/configure-event-de-duplication-in-the-alert-builder.html | 2020-07-02T19:32:34 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.moogsoft.com |
Configure the Datadog Polling LAM
The Datadog Polling LAM allows you to retrieve alerts from Datadog. The Datadog Polling LAM is an HTTPS client that polls one or more Datadog servers at configurable intervals. It parses the JSON responses it receives into Moogsoft Enterprise events.
You can install a basic Datadog Polling integration in the UI. See Datadog Polling for integration steps.
Configure the Datadog Polling LAM if you want to configure custom properties, set up high availability or configure advanced options that are not available in the UI integration.
Before You Begin
The Datadog Polling LAM has been validated with Datadog v2018. Before you set up the LAM, ensure you have met the following requirements for each Datadog server:
You know your Datadog server URL.
You know your Datadog API key and Application Key.
The port for your Datadog server is open and accessible from Moogsoft Enterprise.
Your Datadog system can accept HTTPS requests.
Configure the LAM
Edit the configuration file to control the behavior of the Datadog Polling LAM. You can find the file at
$MOOGSOFT_HOME/config/datadog_client_lam.conf
See the Datadog Polling LAM Reference and LAM and Integration Reference for a full description of all properties. Some properties in the file are commented out by default. Uncomment properties to enable them.
Configure the connection properties for each Datadog target:
url: Datadog request URL including host and port.
user: Datadog account user.
password or encrypted_password: Datadog account password or encrypted password. the REST connection:
disable_certificate_validation: Whether to disable SSL certificate validation.
path_to_ssl_files: Path to the directory that contains the SSL certificates.
ssl_key_filename: SSL server key file.
ssl_cert_filename: SSL root CA file.
ssl_protocols: Sets the allowed SSL protocols.
If you want to connect Datadog LAM to retrieve events from one or more targets. The following example demonstrates a configuration that targets two Datadog sources. For a single source, comment out the
target2 section. If you have more than two sources, add a target section for each one and uncomment properties to enable them.
In the following example, the Datadog LAM is configured to poll two different Datadog instances. The LAM uses the tokens NodeID and EventID to identify duplicate events. These configurations specify use variables
$from and
$to for the query time window; the LAM specifies UNIX epoch values for these fields when it sends a poll request.
monitor: { name: "Datadog REST Client Monitor", class: "CRestClientMonitor", request_interval: 60, targets: { target1: { url: "", proxy: { host: "localhost", port: 8181, user: "user", password: "pass", #encrypted_password: "ieytOFRUdLpZx53nijEw0rOh07VEr8w9lBxdCc7229o=" }, request_interval: 60, timeout: 120 disable_certificate_validation: false, path_to_ssl_files: "config", server_cert_filename: "server.crt", client_key_filename: "client.key", client_cert_filename: "client.crt", requests_overlap: 10, enable_epoch_converter: true, results_path: "events", overlap_identity_fields: [ "NodeID", "EventID" ], request_query_params: { start: "$from", end: "$to", api_key: "1234", application_key: "1234" }, params_date_format: "%s" } target2: { url: "", user: "user2", host: "localhost", port: 8181, request_interval: 60, timeout: 120 disable_certificate_validation: false, path_to_ssl_files: "config", server_cert_filename: "server.crt", client_key_filename: "client.key", client_cert_filename: "client.crt", path_to_ssl_files: "config", requests_overlap: 10, enable_epoch_converter: true, results_path: "events", overlap_identity_fields: [ "NodeID", "EventID" ], request_query_params: { start: "$from", end: "$to", api_key: "1234", application_key: "1234" }, params_date_format: "%s", } } }, agent: { name: "Datadog Client", capture_log: "$MOOGSOFT_HOME/log/data-capture/datadog_client_lam.log" }, log_config: { configuration_file: "$MOOGSOFT_HOME/config/logging/datadog_client_lam_log.json" },
Configure for High Availability
Configure the Datadog LAM for high availability if required. See High Availability Overview for details.
Configure the LAMbot Processing
The Datadog Polling LAMbot processes and filters events before sending them to the Message Bus. You can customize or bypass this processing if required. You can also load JavaScript files into the LAMbot and execute them.
See LAMbot Configuration for more information. An example Datadog Polling LAM filter configuration is shown below.
filter: { presend: "DatadogClientLam.js", modules: [ "CommonUtils.js" ] }
Map LAM Properties
You can configure custom mappings in the Datadog Polling LAMbot. See Data Parsing information for details.
By default, the following DataDog event properties map to the following Moogsoft Enterprise Datadog Polling LAM properties:
The overflow properties are mapped to "custom info" and appear under
custom_info in Moogsoft Enterprise alerts. This mapping requires Event Monitor tag values in the correct format (
{{event.tags.example-tag}}) as described in the Datadog documentation.
Start and Stop the LAM
Restart the Datadog LAM to activate any changes you make to the configuration file or LAMbot.
The LAM service name is
datadogclientlamd.
See Control Moogsoft Enterprise Processesfor further details.
If the LAM fails to connect to one or more Datadog sources, Moogsoft Enterprise creates an alert and writes the details to the process log. Refer to the logging details for LAMs and integrations for more information. | https://docs.moogsoft.com/Enterprise.8.0.0/configure-the-datadog-polling-lam.html | 2020-07-02T18:06:42 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.moogsoft.com |
JMS
The Java Messaging Service (JMS) LAM is a link access module that communicates with application servers and message brokers, and takes its input from Java Messaging Services.
If you want to implement a more complex JMS LAM with custom settings, see Configure the JMS LAM.
Enter the following information for the JMS integration:
Unique instance name: This could be any name, it is used to identify this JMS integration. The name entered here should be unique e.g. jms_lam1.
provider_user_name and provider_password: The provider user name and password which is required for the connection to be established between the JMS server provider and the JMS LAM. If there is no password configured then leave it blank. For JBoss it is the user name and password of the user which is both a management and an application user, created in JBoss. For Active MQ the user name is admin and password is also admin. For WebLogic it is the user name and password of the Administration Console, created during its installation are entered in these fields. If there is no username and password configured for the queue or topic then leave it blank. For JBoss it is the username and password of the user which is both a management and an application user, created in JBoss. For Active MQ the username is admin and password is also admin. For WebLogic it is the username and password of the Administration Console, created during its installation.
Note
Polling will continue every 60 seconds.
After adding all the above information, click Confirm. | https://docs.moogsoft.com/Enterprise.8.0.0/jms.html | 2020-07-02T19:52:16 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.moogsoft.com |
situationDelta
A Workflow Engine function that returns
true when attributes have changed. This is based on the
previous_data metadata, which Moogsoft Enterprise sends with the situation object in a situationUpdate event.
Only use this function in conjunction with an entry filter that includes the
event_handler trigger for "Situation Updated".
This function does not check the values of the attributes, only if the attributes have changed. As standard de-duplication changes attributes, use this function carefully.
Moogsoft recommends placing
situationDelta in an engine dedicated to handling Situation Updates and other alert event handlers. This prevents updated alerts re-entering the processing chain through standard Situation Workflows. Contact your Moogsoft Enterprise administrator for more information.
This function is available for Situation workflows only.
Back to Workflow Engine Functions Reference.
Arguments
Workflow Engine function
alertDelta takes the following arguments:
Example
The following example demonstrates typical use of Workflow Engine function
situationDelta.
You want to check if the moderator of a Situation has changed before performing subsequent actions in your workflow. You could use an entry filter to check for a specific moderator, but in this instance the value of the moderator is not relevant, only that it has changed.
Using a separate Workflow Engine to prevent unwanted re-entry, you set up a workflow with an entry filter that includes the
event_handler trigger for "Situation Update" and the moderator as "Unassigned":
(event_handler = "Situation Update") AND (moderator != "anon")
Set the following:
fields: moderator
Forwarding behavior: Stop this workflow. This ensures that if the alert owner has not changed, subsequent actions in this workflow do not execute.
The UI translates your settings to the following JSON:
{"fields":["moderator"]}
If the Situation metadata shows that the “moderator” has changed, the function returns
true and the alert is forwarded to the next action in the workflow.
If function does not detect a change of ownership, the function returns
false and the forwarding behaviour prevents subsequent actions in the workflow from executing. | https://docs.moogsoft.com/Enterprise.8.0.0/situationdelta.html | 2020-07-02T20:09:09 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.moogsoft.com |
Contents
This sample uses a Metronome operator to output the current share price of all stocks every five seconds. The current prices are stored in a query table.
In StreamBase Studio, import this sample with the following steps:
From the top-level menu, click> .
Enter
sample groupto narrow the list of options.
Select Operator sample group from the Data Constructs and Operators category.
Click.
StreamBase Studio creates a single project
Metronome.sbappfile and click the
Run button. This opens the SB Test/Debug perspective and starts the module.
In the Output Streams view, select the
CurrentPricesstream. Observe a tuple showing every five seconds, with the current time, a null symbol, and a null price. This indicates that no prices are currently known. Once we record some share prices, the nulls will be replaced with actual values.
In the Manual Input view, enter
INTCfor symbol, and
23.25for price.
Click Send Data. Within five seconds, look for a tuple on the
CurrentPricesoutput stream, stating that the current price of INTC is 23.25.
Enter
MSFTfor symbol, and
25.75for price.
Click Send Data. Within five seconds, look for two tuples emitted on the
CurrentPricesoutput stream. These tuples are repeated every five seconds.. | https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/samplesinfo/Metronome.html | 2020-07-02T19:02:08 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.streambase.com |
toyplot.scenegraph module¶
Functionality for managing scene graphs.
Note that the scenegraph is an internal implementation detail for developers adding new functionality to Toyplot, casual Toyplot users will never need to work with it.
- class
toyplot.scenegraph.
AdjacencyList[source]¶
Adjaceny list representation for a directed graph.
- class
toyplot.scenegraph.
SceneGraph[source]¶
Collection of graphs representing semantic relationships among a (small number) of canvas objects.
add_edge(source, relationship, target)[source]¶
Add an edge of type relationship from source to target.
remove_edge(source, relationship, target)[source]¶
Remove an edge of type relationship from source to target, if one exists.
source(relationship, target)[source]¶
Return a single node that is connected to target via an incoming edge of type relationship.
Raises an exception if there isn’t exactly one incoming edge of type relationship.
sources(relationship, target)[source]¶
Return nodes that are connected to target via incoming edges of type relationship, if any. | https://toyplot.readthedocs.io/en/latest/toyplot.scenegraph.html | 2020-07-02T18:59:43 | CC-MAIN-2020-29 | 1593655879738.16 | [] | toyplot.readthedocs.io |
Adding new fields to the overlay
After you build the database union structure, you add any new fields to the SHR:Union_OverviewConsole overlay.
To add new fields to the overlay
- Open BMC Remedy Developer Studio in Best Practice Customization mode.
- In the Form list, locate the SHR:Union_OverviewConsole form.
- If an overlay has not yet been created for the form, right-click the form name in the list and select Create Overlay.
The form opens automatically.
- If you did not create the overlay in the preceding step, double-click the form name in the Form list to open it.
- In the view that you are customizing, select Create View Overlay from the Form menu.
- If you need to include a new custom field on the form, copy the field from the source field into the overlay.
- Position the field in the view.
Note
This is a back-end form and is not visible to end users.
- On the Properties tab, change the View Information attributes of each field to match the union field — for example, CUSTOM_CHAR_FIELD and CUSTOM_SELECTION_FIELD.
- If the field ID number is within the reserved range of field ID numbers, change the field ID number.
- If the fields that you are copying come from the base layer (that is, they are out-of-the-box fields), modify the database name to include a prefix or suffix to ensure that the name does not conflict with any out-of-the-box fields that might be added by subsequent system upgrades.
- In a localized environment, if you are implementing selection fields, ensure that the localized alias values are set as required.
- Modify the additional field attributes, permissions, and so on, as required.
- Save the form.
Where to go from here
Adding new columns to the table in the overlay
PT. 8. Remarks
You could also copy paste an existing field of the correct data type on the Form itself.
You could use add field from source form, but it'll only work for non-date fields.
As the date fields are really integers and if you use add field from source form the data type is read only so unchangeable.
Thus you should note down the Field names (following this example CUSTOM_DATE_FIELD) and manually change the View Information to that field name. Especially when doing these steps for Date Fields. | https://docs.bmc.com/docs/itsm81/adding-new-fields-to-the-overlay-259657035.html | 2020-07-02T19:16:51 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
Windows SMB driver¶
While the generic driver only supports Linux instances, you may use the Windows SMB driver when Windows VMs are preferred.
This driver extends the generic one in order to provide Windows instance support. It can integrate with Active Directory domains through the Manila security service feature, which can ease access control.
Although Samba is a great SMB share server, Windows instances may provide improved SMB 3 support.
Limitations¶
ip access rules are not supported at the moment, only user based ACLs may be used
SMB (also known as CIFS) is the only supported share protocol
although it can handle Windows VMs, Manila cannot run on Windows at the moment. The VMs on the other hand may very well run on Hyper-V, KVM or any other hypervisor supported by Nova.
Prerequisites¶
This driver requires a Windows Server image having cloudbase-init installed. Cloudbase-init is the de-facto standard tool for initializing Windows VMs running on OpenStack. The driver relies on it to do tasks such as:
configuring WinRM access using password or certificate based authentication
network configuration
setting the host name
Note
This driver was initially developed with Windows Nano Server in mind. Unfortunately, Microsoft no longer supports running Nano Servers on bare metal or virtual machines, for which reason you may want to use Windows Server Core images.
Configuring¶
Below is a config sample that enables the Windows SMB driver.
[DEFAULT] manila_service_keypair_name = manila-service enabled_share_backends = windows_smb enabled_share_protocols = CIFS [windows_smb] service_net_name_or_ip = private tenant_net_name_or_ip = private share_mount_path = C:/shares # The driver can either create share servers by itself # or use existing ones. driver_handles_share_servers = True service_instance_user = Admin service_image_name = ws2016 # nova get-password may be used to retrieve passwords generated # by cloudbase-init and encrypted with the public key. path_to_private_key = /etc/manila/ssh/id_rsa path_to_public_key = /etc/manila/ssh/id_rsa.pub winrm_cert_pem_path = /etc/manila/ssl/winrm_client_cert.pem winrm_cert_key_pem_path = /etc/manila/ssl/winrm_client_cert.key # If really needed, you can use password based authentication as well. winrm_use_cert_based_auth = True winrm_conn_timeout = 40 max_time_to_build_instance = 900 share_backend_name = windows_smb share_driver = manila.share.drivers.windows.windows_smb_driver.WindowsSMBDriver service_instance_flavor_id = 100 | https://docs.openstack.org/manila/train/configuration/shared-file-systems/drivers/windows-smb-driver.html | 2020-07-02T19:46:44 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.openstack.org |
HubSpot supports the OAuth 2.0 Authorization Code grant type, which involves several steps:
The sections listed below detail the usage of OAuth 2.0 with HubSpot:
Note: The code examples in this section are written in JavaScript (Node.js)
If this is your first time using OAuth authentication with the HubSpot API, we strongly recommend checking out the OAuth 2.0 Quickstart App, written in Node.js. We've designed the app to get you started using OAuth 2.0 as quickly as possible: It demonstrates all of the steps outlined below in Getting OAuth 2.0 tokens. For a more detailed look at each step of the OAuth 2.0 process, please check out the reference docs for each step.
Get the OAuth 2.0 Quickstart App here
Before beginning the steps for using OAuth 2.0 with HubSpot, you should first:
When sending the user to HubSpot's OAuth 2.0 server, first step is to create the authorization URL. This will identify your app, and define the resources that it's requesting access to on behalf of the user. The query parameters that you can pass as part of an authorization URL are shown below; for more detailed information on this step, check out the reference doc.
To start the OAuth 2.0 process, send the user to the authorization URL.
Example:>
HubSpot displays a consent window to the user that shows the name of your app and a short description of each of the HubSpot API services that it's requesting permission to access. The user can then grant access to your app.
Note: The user who installs the app must have access to all scopes that are being requested in their hub. If they do not have the required access, the installation will fail and they will be directed to an error page. If a user sees this permissions error page, they will need to work with a super admin user, and have a super admin install the app.
Your application doesn't do anything at this stage. When access has been granted, the HubSpot OAuth 2.0 server will send a request to the callback URI defined in the authorization URL.
When the user has completed the consent prompt from step 3, the OAuth 2.0 server sends a GET request to the redirect URI specified in your authentication URL. If there are no issues and the user approves the access request, the request to the redirect URI will have a
code query parameter attached when it's returned. If the user doesn't grant access, no request will be sent.
Example:
app.get('/oauth-callback', async (req, res) => { if (req.query.code) { // Handle the received code } });
After your app has received an authorization code from the OAuth 2.0 server, it can exchange the code for an access and refresh token by sending a URL-form encoded POST request to with the values shown below; for more detailed information on this step, check out (6 hours). For information on getting a new access token, see Refreshing OAuth 2.0 tokens
Now that you’ve completed the OAuth Authorization flow and retrieved an access token, you can make calls to HubSpot's API on behalf of the user. The next sections will go into detail on how to use the access token and how you can request a new one when it expires.
Once the authorization code flow has been completed, your app is now } );
OAuth access tokens will expire after a certain period of time. This is to make sure that if they are compromised, attackers will only have access for a short period. The token's lifespan is specified in the
expires_in field when an authorization code is exchanged for an access token, which is 6 hours by default. To get a new access token, your app can exchange the received refresh token for a new access token by sending a URL-form encoded POST request to with the following values;.
For more detail on any part of HubSpot's OAuth 2.0 process, check out the reference docs listed below: | https://legacydocs.hubspot.com/docs/methods/oauth2/oauth2-quickstart | 2020-07-02T18:27:54 | CC-MAIN-2020-29 | 1593655879738.16 | [] | legacydocs.hubspot.com |
Enabling Maintenance Mode
Use Application Maintenace Mode to put up a notification page for your users when your app is in maintenance (see example). You'll use maintenance mode whenver you need to:
- Make a code deployment that requires downtime.
- Make a change to an underlying service which is going to required downtime for an app.
Heads up!
In order to go into mainteance, you'll first need an app. See Creating Your First App to create an app that can go into maintenance.
- Introducing Maintenance Mode
- How Maintenance Mode Works
- Enabling Maintenance Mode
- Additional Options
Introducing Maintenance Mode
Often times when deploying applications, its not always possible to do a live deployment, and thus downtime may be necessary. Sometimes this happens if for example major changes need to be made to a database before the new version can be deployed, or if you are migrating to a completely new installation. Droppanel will help you easily enable and disable maintenance mode in these situations.
Heads Up! If you are following the release workflow, maintenance mode will be handled automatically for you. You'll usually only need to manually enable maintenance mode for updates that you are not using a release for, or for some kind of emergency when you want to go into maintenance.
How Maintenance Mode Works
Droppanel enables maintenance mode by dynamically routing your public IP endpoint to a specific maintenance server and bypassing your application instances. This makes it very easy to toggle maintenance mode on and off without impacting your production servers. It also allows you to easily customize your maintenance page without impacting the application.
Enabling Maintenance Mode
Login to Droppanel and go to the App that will be put into maintenance.
Heads up!
In order to go into mainteance, you'll first need an app. See Creating Your First App to create an app that can go into maintenance.
Turn on Maintenance
From the App screen, there is a section called "Maintenance" under the Application Endpoint. This section controls maintenance mode for the application. Enable maintenance mode by clicking the.
button
some description about the image
Wait for The Maintenance Page to Boot
Droppanel will spin up a new droplet to host the maintenance page before switching the public IP to this droplet. While it is booting you will see this:
What an app looks like while waiting for the maintenance server to deploy
Check that Maintenance is Enabled
Once Droppanel finishes installing the maintenance droplet, it will put the application into maintenance mode by switching the Public IP from your application instances to the maintenance server. It will look like this:
What an app looks like when it is in maintenance
Click on the Public IP to ensure that the maintenance page is properly serving.
Disable Maintenance Mode
When you are done performing maintenace on your application, click the
or
to disable maintenance mode any put the App back into production.
Additional Options
There are a few additional options available in the maintenance section to manage your maintenance activities:
Keep Alive
By default, Droppanel only creates a maintenance server when needed. This saves money at digital ocean because the maintenance server will be a droplet in your DigitalOcean account and costs money. However, it takes about 60 seconds to spin up the maintenance server. If you want to go into maintenance mode a lot, consider turning Keep Alive to on. Droppanel will keep the maintenance server active between maintenances, thereby speeding up going into maintenance mode.
Configuration
The image for the maintenance can be customized to your particular use case. Edit the configuration where you'll be able to change a few settings:
Maintenance Output
Edit the HTML that will be served by the maintenance server. Any valid markup can be allowed here. No additional files can be saved, so include all of your Javascript and CSS in this page. Embed any images at base64 encoded.
Image Size
You can change the image/droplet size used for the maintenance server. This may be necessary for very heavy traffic situations. For most applications the default size of 512mb & 1 vCPU should be enough. | https://docs.droppanel.io/getting-started/enabling-maintenance-mode | 2020-07-02T18:18:59 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.droppanel.io |
Data Source
Filtering Records
By default all records you chose to see will be displayed in the table. For example, if you’re working on the details page of a customer and chose to view all the jos connected to this customer - all records for this customer will be displayed. To filter these jobs to only show jobs that meet specific criteria (on top of the original criteria) you can do so in the Data Source tab.
You can create as many conditions groups and nested conditions as you see fit.
Regular conditions have the following options; Choose the field, operator and value.
For example: Job Status (Field) is (operator) “Open” (value)
You can add more than one condition and choose if the rule should be AND or OR by changing the “Match All” option to “Match Any”:
.png)
In some cases you might want to nest conditions as a “Child Filter Group” which can give you much filtering flexibility.
For example: suppose we want to find the following jobs in our table:
All jobs where title contains “Video” AND where status is “Open” OR status is “Pending” in most query languages this would look something like this:
`job title` CONTAINS “video” AND ( `status` = “open” OR `status` = “pending” );
This will first check for any jobs with the title “Video” and then will check if status is Open or Pending.
Between the parent and child filter will always be filered as “AND.”
To take this even a step further we can nest another filter below our second filter.
.png)
For example: suppose we want to find all jobs where title contains “Video” AND where status is “Open” OR status is “Pending” AND due date is today.
In most query languages this would look something like this:
`job title` CONTAINS “video” AND ( `status` = “open” OR `status` = “pending” AND ( `due date` “is today" ) );
The following is a comprehensive list of filter options available depending on the field type:
Filters for date fields:
There are several advanced filtering functions on top of the basic that can be added based on the user who’s currently logged in:
Show records that share a connection with the logged in user.
Show records that are connected to a user who has one of the same role as the logged in user.
For the first scenario, we’ll assume the following data structure.
We have 3 tables:
Users
Companies
Jobs
Each Job belongs to a single company and each User belongs to a company- as can be seen in this diagram:
.png)
Suppose we now want to view all the Jobs that belong to the same Company the logged in user is part of.
Since each User belongs to a specific company, we can easily add a filter to show his companies. But here there is no direct connection between the User and the Job.
This can be done by adding a “Jobs” component and choosing the filter to only show jobs where Company is connected the Company this user belongs to.
.png)
The second advanced filter is the ability to filter records that are connected to a record that is in the same role as the user logged in. That’s a mouthful so let’s break it down.
For this example we’ll assume the following database structure:
We have 3 tables and 3 user roles:
Tables:
Jobs
Users (our default table)
Each Job belongs to a specific user in our case - the one who created the record.
.png)
We also have 3 different user roles: Marketing, Human Resources, and Engineering.
Here is a list of our users currently in the app and their associated roles.
.png)
Notice how Jennifer Weiss and Alison Marcus are both assigned the Human Resources role.
Now let’s look at the jobs table.
.png)
Notice how Alison has 2 jobs and Jennifer has one job.
Our objective is to allow anyone that shares the same role as the person who created the job the ability to view the job. (For this instance it’ll be helpful to think of each role as a group.)
In this case, suppose Jennifer from Human Resources logs in the app, we want her to see all the jobs created by anyone in the Human Resources team, in this case also Alison… and vice versa.
You can accomplish this by adding the filter in the data source to show records where Created By is connected to any logged in user’s roles. As you can see in screenshot below:
.png)
Sorting records
By default you can choose which field is used for sorting records from the server. This can have an impact on which records you see especially when you combine this with limiting how many records you see.
.png)
Limit records
By default all records that match the predefined criteria are sent from the database. However, you can limit the number to a specific amount by changing the record limit.
.png) | https://docs.tadabase.io/categories/manual/article/data-source | 2020-07-02T19:08:52 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/zJ4pasted-image-0-(1',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/G2zpasted-image-0.png',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/do2pasted-image-0-(2',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/d5rpasted-image-0-(3',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/tvVpasted-image-0-(4',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/OnUpasted-image-0-(5',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/btFpasted-image-0-(6',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/8lbpasted-image-0-(7',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/9OQpasted-image-0-(8',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/eDApasted-image-0-(9',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-03/x61pasted-image-0-(10',
None], dtype=object) ] | docs.tadabase.io |
to create a service account.
Using the DC/OS Enterprise CLIUsing the DC/OS Enterprise CLI
From a terminal prompt, create a new service account (
<service-account-id>) containing the public key (
<your-public-key>.pem).
dcos security org service-accounts create -p <your-public-key>.pem -d "<description>" <service-account-id>.
Create a SecretCreate a Secret
Create a secret (
<secret-name>) with your service account (
service-account-id>) and private key specified (
<private-key>.pem).
PermissivePermissive
dcos security secrets create-sa-secret <private-key>.pem <service-account-id> <secret-name>
StrictStrict
In strict mode, the service account name (
<service-account-id>) must match the name specified in the framework
principal.
dcos security secrets create-sa-secret --strict <private-key>.pem <service-account-id> <secret-name> to view the
denylogs for your service account (
<service-account-id>).
journalctl -u "dcos-*" |grep "audit" |grep "<service-account-id>" |grep "deny"
This command will return a list of the audit logs that are generated when your service was denied access due to insufficient permissions or a bad token. The rejection messages should include the permission that was missing. You might need to repeat this process several times to determine the full list of required permissions.
Troubleshooting:
You can grant your service superuser permission to rule out any functional issues. All valid services should be able to run as superuser.
curl -x put --cacert dcos-ca.crt \ -h "authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:superuser/users/<service-account-id>/full
For more information, see the permissions reference.
Assign the PermissionsAssign the Permissions
Using the permissions reference and the log output, assign permissions to your service..
Request an Authentication Token
Generate a service login token, where the service account (
<service-account-id>) and private key (
<private-key>.pem) are specified.
dcos auth login --username=<service-account-id> --private-key=<private-key>.pem
Pass the Authentication Token in Subsequent Requests
After the service has successfully logged in, an authentication token is created. The authentication token should used in subsequent requests to DC/OS endpoints. You can reference the authentication token as a shell variable, for example:
curl -H "Authorization: token=$(dcos config show core.dcos_acs_token)"
Refresh the Authentication Token
By default, authentication tokens expire after five days. Your service will need to renew its token either before or after it expires. The token itself contains the expiration, so your service can use this information to proactively refresh the token. Alternatively, you can wait to get a
401 from DC/OS and then refresh it.
To refresh your authentication token, just repeat the process discussed in Request an authentication token. | http://docs-staging.mesosphere.com/mesosphere/dcos/1.9/security/ent/service-auth/custom-service-auth/ | 2020-07-02T19:55:03 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs-staging.mesosphere.com |
Version 6.0 Release Notes in Brief
From WHMCS Documentation
So you're wanting to get up and running with Version 6.0 as quickly as possible? Ok... here's the key points you need to know.
V6.0 Documentation Portal
We have extensive new documentation available covering the many new features, functionalities and API's available in Version 6.0. To view them, visit the Version 6.0 Documentation Portal.
We have extensive new documentation available covering the many new features, functionalities and API's available in Version 6.0. To view them, visit the Version 6.0 Documentation Portal.
- Version 6.0 introduces new system requirements. You must be running PHP 5.3.7 and later, and have the PDO extensions available to PHP. Version 6.0 is compatible with PHP 5.4, 5.5 and 5.6.
- As this is a major version update, only a full version is being made available. There is no incremental upgrade for users of the latest 5.3 release.
- The new client area template in Version 6.0 is named Six. This will be the default theme for all new installations but for users who upgrade, your configured theme in Setup > General Settings will not change.
- For help getting started working with the new Six template, please refer to Customising the Six Theme
- The previous default theme in Version 5.x releases will from now on be referred to as Five.
- The Original admin area theme has been deprecated and is no longer supported.
- Custom logo image files need to be relocated from /images/logo.png or /images/logo.jpg to /assets/img/logo.png and /assets/img/logo.jpg
- Version 6.0 updates the templating engine to Smarty 3.1. Some features of Smarty have changed. Please check our Version 6 Template Migration Guide to check if you will need to make any changes.
- One of the biggest changes to Smarty relates to the use of {php} tags within template files. This functionality is now disabled by default and must be explicitly enabled in Setup > General Settings > Security should you require it. We recommend using hooks for Templates and Custom PHP Logic.
- Custom hooks should continue to work without requiring any changes for WHMCS Version 6.0.
- Modules that are compatible with WHMCS V5.x should continue work in Version 6.0. However, if that code does something we haven't anticipated, it may require updates. In the event of Blank or Partially Rendered Pages, please begin your troubleshooting here.
- Version 6.0 does introduce a number of database schema changes. More information on these can be found @ Version 6.0 Database Changes
-.
All release notes are important, and there are many detailed release notes available in the Full Version 6.0 Release Notes. We recommend reading them.
Special Thanks
A massive thanks to everyone who took part in, and contributed to the beta program. Your dedication and attention to detail helped make WHMCS 6.0 the best release it could possibly be. Thanks from everyone here at WHMCS. | https://docs.whmcs.com/Version_6.0_Release_Notes_in_Brief | 2020-07-02T19:32:11 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.whmcs.com |
The MSMQ transport in NServiceBus is a distributed transport in which the MSMQ process runs on each machine, storing messages locally before being forwarded to other machines. In this model, each
endpoint connects to the local MSMQ process and both the queue name and the host name must be specified when addressing a different endpoint.
Scaling out
Because the MSMQ queues are not accessible from outside the machine they are hosted in, NServiceBus endpoints using the MSMQ transport are not able to use the competing consumers pattern to scale out with a single shared queue.
The distributor can be used to overcome this limitation.
Physical mapping
The host name of the machine running each endpoint can be specified directly in the message-endpoint mapping configuration section by adding a
@machine suffix.
>
When using MSMQ, if there is no
@machine suffix, NServiceBus assumes the configured endpoint runs on a local machine. | https://particular-docs.azurewebsites.net/transports/msmq/routing?version=core_5 | 2020-07-02T19:01:54 | CC-MAIN-2020-29 | 1593655879738.16 | [] | particular-docs.azurewebsites.net |
GAVO DaCHS: DirectGrammars and Boosters¶:
Replace your data element’s grammar with the direct grammar spec, which would look somewhat like this:
<directGrammar id="fits" type="fits" cBooster="res/boosterfunc.c"/>
Generate the booster:
gavo mkboost your/rd#fits > res/boosterfunc.c
Edit res/boosterfunc.c (may be optional for fits boosters)
Import your data:
gavo imp your/rd
In the directGrammar element, the path in the
cBooster attribute is
interpreted relative to the RD’s resdir. The type argument says rougly
what kind of source you’re parsing from. Values allowed here include:
col(the default) – parse from stuff conventionally handled by a
columnGrammar
bin– parse from data that has fixed-length binary records (this is stuff that a
binaryGrammarwould grok)
split– parse from files that have fields separated by some constant sequence of character (conventionally, these can be parsed by a
reGrammar)
fits– parse from FITS binary tables (that’s what a
fitsTableGrammarcan read).
The
mkboost subcommand receives a reference of to the
directGrammar element – that is, the RD id, a hash, and the XML id
of the grammar – as an argument.
Booster source code¶
Once you’ve generated the booster source, you’re free to change it in
whatever way you fancy. On schema updates, unfortunately, you’ll have
to merge in changes manually, as we’ve not found a sensible and general
way to preserve arbitrary source changes when (re-)generating a booster.
If you have a creative idea how better to separate generated and
hand-made code, we’re certainly interested. The way things are now, if
you change the schema, you can re-run
gavo mkboost but have to merge
any changes manually.
The code generated starts somewhat like this:
#include <math.h> #include <string.h> #include "boosterskel.h" #define QUERY_N_PARS 33 enum outputFields { fi_localid, /* Identifier, text */ fi_pmra, /* PM (alpha), real */ fi_pmde, /* PM (delta), real */
The definition of
QUERY_N_PARS (which is the number of columns in the
table) is essential and must remain in this form, as the function
building the booster greps it out of the source code to communicate this
value to the booster boilerplate; this, however, means that you’re free
to change the concrete number if the number of table columns changes in
the source file (you’d have to adjust the outputFields as well; this is
typcially going to be a cut-and-paste job from a repeated run of
gavo
mkboost). Again,
QUERY_N_PARS must always be equal to the number
of columns in the target table.
The code continues with an enumeration mapping symbolic names to the indices of the corresponding columns in the target table; the names are simple fi_ and the field destination lowercased. If you only use these names to access fields, cutting and pasting on later schema changes should be fairly painless and safe.
While you shouldn’t need to change any of this, you in general have to
change the
getTuple function. What it looks like strongly depends
on the sort of booster you’re generating for; this includes the
prototype.
What’s common is that getTuple needs to return a
Field array. All
boosters declare the return value like this:
static Field vals[QUERY_N_PARS];
– it needs to be static as a pointer to it is returned from the
function; don’t rely on anything in there to be stable across function
calls, though, as the serialization to COPY material might mess around
in that memory. The name
vals is expected by, e.g., the
F macro and
must therefore not be changed.
Field is defined as follows:
typedef struct Field_s { valType type; int length; /* ignored for anything but VAL_TEXT */ union { char *c_ptr; double c_double; float c_float; int32_t c_int32; int8_t c_int8; } val; } Field;
where
type is one of:
typedef enum valType_e { VAL_NULL, VAL_BOOL, VAL_CHAR, VAL_SHORT, VAL_INT, VAL_BIGINT, VAL_FLOAT, VAL_DOUBLE, VAL_TEXT, VAL_JDATE, /* a julian year ("J2000.0"); this is stored as a simple double */ VAL_DATE, /* date expressed as a time_t */ VAL_DATETIME, /* date and time expressed as a time_t */ } valType;
JDATE is a julian day number to be dumped as a date (rather than a datetime). For other ways to represent dates and datetimes, see below.
You can, and frequently will, fill the stuff by hand. There are, however, a couple of functions that take care of some standard situations:
void linearTransform(Field *field, double offset, double factor)– changes field in place to
offset+factor*oldValue. Handles NULL correctly, silently does nothing for anything non-numeric
void parseFloatWithMagicNULL(char *src, Field *field, int start, int len, char *magicVal)– parses a float from
src[start:start+len]into field, writing NULL when magicVal is found in the field.
void parseDouble(char *src, Field *field, int start, int len)– parses a double from
src[start:start+len]into field, writing NULL if it’s whitespace only.
void parseInt(char *src, Field *field, int start, int len)– parses a 32-bit int from
src[start:start+len]into field.
void parseShort(char *src, Field *field, int start, int len)– parses a 16-bit int from
src[start:start+len]into field.
void parseBlankBoolean(char *src, Field *field, int srcInd)– parses a boolean such that field becomes true when src[srcInd] is nonempty.
void parseBigint(char *src, Field *field, int start, int len)– parses a 64-bit int from
src[start:start+len]into field.
void parseString(char *src, Field *field, int start, int len, char *space)– copies len bytes starting at start from src into space (you are responsible for allocating that; usually, a static buffer should do, since the postgres input is generated before the next input line is parsed) and stuffs the whole thing into field.
void parseChar(char *src, Field *field, int srcInd)– guess.
MAKE_NULL(fi)– makes fi NULL
MAKE_DOUBLE(fi, value)– make fi a double with value
MAKE_BIGINT(fi, value)– make fi a double with value
MAKE_FLOAT(fi, value)–
MAKE_SHORT(fi, value)–
MAKE_CHAR(fi, value)–
MAKE_JDATE(fi, value)–
MAKE_TEXT(fi, value)– note that you must manage the memory of value yourself. In particular, it must not be automatic memory of getTuple, since that will not be valid when the tuple actually is built. Most commonly, you’ll be using a static buffer here.
MAKE_CHAR_NULL(fi, value, nullvalue)– makes fi a char with value unless value==nullvalue; in that case, fi becomes a NULL
double mjdToJYear(mjd)– returns a julian year for mjd
AS2DEG(field)– turns a field value in arcsecs to degrees
MAS2DEG(field)– turns a field value in milli-arcsecs to degrees
Of course, you can also manually copy or delimit data and use fieldscanf as documented in split boosters
Boosters are linked together with
boosterskel.c and must include
boosterskel.h. If you’re interested what these things do (or want
to fix bugs, or whatever), you can get the files using:
gavo admin dumpDF src/boosterskel.c # or .h
Line-based boosters¶
These are boosters that read from a text file, line by line. Currently,
the maximum line length is set to 4000 (
INPUT_LINE_MAX in
boosterskel.c). It is up to the parsing function to split and
digest this text line.
Col boosters¶
For col boosters, the getTuple function looks somewhat like this:
Field *getTuple(char *inputLine) { static Field vals[QUERY_N_PARS]; parseWhatever(inputLine, F(fi_localid), start, len); parseFloat(inputLine, F(fi_pmra), start, len); parseFloat(inputLine, F(fi_pmde), start, len); parseFloat(inputLine, F(fi_raerr), start, len);
Here, it’s your job to fill out start and len (at least; start is
zero-based).
gavo mkboost inserts parseXXX function calls according
to the table metadata, which should be what you want in general. Add
scaling or other processing as required.
Split boosters¶
When the input data comes as xSV (e.g., values separated by vertical
bars, commas, or tabs), give a
splitChar and set the
type
attribute to
split in the
directGrammar.
This then creates a source like:
char *curCont = strtok(inputLine, "\t"); fieldscanf(curCont, fi_objid, VAL_INT_64); curCont = strtok(NULL, "\t"); fieldscanf(curCont, fi_run, VAL_SHORT);
etc. Thus, the input line is parsed using strtok, and each value is
parsed using the
fieldscanf function. This function takes the string
containing the literal in the first argument, the field index in the
second, and finally the type specifier. If the data comes in the
sequence of the table columns, the generated source might just work.
Warning: C’s standard strtok function merges adjacent separators,
i.e.,
foo|bar||baz would just yield three tokens, foo, bar, and baz.
With astronomical data, this is typically not what you want. Therefore,
the generated booster function will have a line like:
#define strtok strtok_u
Delete it in case that you need the POSIX strtok behaviour. This would in particular apply if you have whitespace separated data with a variable number of blanks (which, however, would suggest that you’re really looking at material for a col booster).
Bin boosters¶
When you get binary data of fixed record length, set the
recordSize
attribute on the
DirectGrammar element:
<directGrammar type="bin" recordSize="300"...
Note that a
recordSize larger than
INPUT_LINE_MAX will cause a
buffer overflow.
You are mainly on your own in terms of segmentation, but for entering
values, you can use the
MAKE_* discussed above.
For these in particular, use the the portable type specifiers for
integral types, viz.,
int8_t,
int16_t,
int32_t, and
int64_t and these names with a
u in front.
In particular with binary boosters, it is essential you always properly cast what you read, e.g.,:
MAKE_DOUBLE(fi_dej2000, -90+*(int32_t*)(line+4)/1e6); /* SPD */
when a declination is given as mas of south polar distance.
FITS boosters¶
These read from FITS binary tables and are really a somewhat special
beast. To build one of those, DaCHS inspects the first file matched by
the parent data’s
sources element (which also means these won’t work
outside of a
data element). DaCHS expects each table column to have
a match (i.e., after lowercasing the name in the FITS table) in the FITS
table. FITS table column without a match in the database table are
ignored.
FITS binary tables are organized by columns rather than by rows, bearing witness to their FORTRAN heritage. The way the boosters are currently generated, all these columns are completely read into memory, which means you cannot ingest FITS binary tables that do not fit into your machine’s memory. Fixing this would be fairly straightforward (patches are welcome, but we’ll also fix this if you ask for it).
FITS boostes can automatically map column names for you.
<mapKeys> raj2000:RA, dej2000:DEC </mapKeys>
will map column named RA in your sourcefile to column named raj2000 in
your database table and analoguosly for DEC. If you don’t do this, only
column names from your DB table will be read and imported.
If you need to postprocess the items, we recommend you do that again in
the
getTuple function (note how that gets passed the row index) for
maintainability, rather than directly after reading the rows.
Attention: The system will not warn you if the type of a column in the table is not compatible with what you have in the database. If it is, the program will probably silently dump garbage into the db, though if you’re lucky it’ll crash. This is almost on purpose. It will let you do manual type conversions like, for example, making a 64 bit integer from a string as follows:
if (nulls[18][rowIndex]) { MAKE_NULL(fi_ppmxl); } else { parseBigint(((char**)(data[18]))[rowIndex], F(fi_ppmxl), 0, 19); } return vals;
(we could admittedly warn you if this kind of thing becomes necessary, and we’ll gladly accept patches for that).
Filling in data manually¶
The
F(index) macro lets you access the field info directly. So, you
could enter a fixed-length piece of memory into
fi_magic like this:
static char bufForMagic[8]; memcpy(bufForMagic, inputLine+20, 8); F(fi_magic)->type = VAL_TEXT; F(fi_magic)->val.c_ptr = bufForMagic; F(fi_magic)->length = 8;
Having static buffers in
getTuple is usually ok since the COPY input
is generated before
getTuple is called again.
It is quite common to have to handle null values. In the example above,
this could look like this if a NULL for magic were signified by a F in
inputLine[19]:
static char bufForMagic[8]; if (inputLine[19]=='F') { F(fi_magic)->type = VAL_NULL; } else { memcpy(bufForMagic, inputLine+20, 8); ...
Skipping a record¶
If you need to skip a record, do:
longjmp(ignoreRecord)
in
getTuple. That works independently of the booster type.
Dates and times¶
The boosters treat “normal” dates and datetimes as
struct tm``s. If you
need a larger range, use ``VAL_JDATE, which lets you store julian
dates in floats. Julian dates are serialized to dates rather than
datetimes.
To parse
VAL_DATE or
VAL_DATETIME, you will write something
like:
fieldscanf(curCont, fi_date, VAL_DATE, "%Y-%m-%d");
if parsing from date strings. If your input is something weird, figure
out a way to generate a
struct tm as defined in
time.h. Then
write:
struct tm timeParts; timeParts.tm_sec = 12; ... timeParts.tm_year = 1920; F(fi_dt)->val.time = timeParts; F(fi_dt).type = VAL_DATETIME;
(or
VAL_DATE, as the case may be).
Having said all this, long experience has taught us it’s ususally best
do have dates and such in the database as MJD or julian years. You can
format those to ISO strings (or, really, anything else you want) on
output by using display hints on
outpuField or even
column
itself.
MJDs are just so much easier to handle within ADQL queries. Support for timestamps, on the other hand, is extremely lousy.
Debugging¶
The source code generated by
gavo mkboost typically is really mean.
The preference is to make it coredump rather than give fancy errors,
under the assumption that error messages from the booster would in
general help less than the post-mortem dumps; this of course also means
that you should not use direct grammars to parse from potentially
malicious sources unless you substantially harden the generated code.
To figure out what’s wrong if things go wrong, say:
ulimit -c unlimited # bash and friends gavo imp q gdb bin/booster core where # that's for gdb
This should give you the line where things failed, and of course the full power of gdb to inspect how that happened.
As a short example, consider a gdb session where the author I forgot to use the mapKeys in a FITS directGrammar for columns which are filled from the Binary table. This resulted in a segmentation fault, which made gdb say:
gdb: Program terminated with signal 11, Segmentation fault. #0 0x0000000000406cde in getTuple (data=0x7fff592a41a0, nulls=0x7fff592a4250, rowIndex=0)
To figure out where the program crashed, say:
(gdb) where #0 0x0000000000406cde in getTuple (data=0x7fff592a41a0, nulls=0x7fff592a4250, rowIndex=0) at func.c:73 #1 0x0000000000407784 in createDumpfile (argc=2, argv=0x7fff592a53c8) at func.c:296 #2 0x0000000000406bdf in main (argc=2, argv=0x7fff592a53c8) at boosterskel.c:673
In the traceback, you can see the frame you’re interested in and go there using up (or down, if you’re too far up):
(gdb) up #1 0x0000000000407784 in createDumpfile (argc=2, argv=0x7fff592a53c8) at func.c:296 296 in func.c
Incidentally, you could instruct gdb to use your boosterfunc.c file as the source file for func.c (that’s the temporary name of that file when DaCHS built the binary in a sandbox). But it’s probably as straightforward to just check the source code in your editor and figure out what variables you’re interested in. In this case, this might be the number of the row where the crash happened (we are in the main row-reading loop of the booster):
(gdb) print i $8 = 0
Voila, we crashed on the first row already. Let’s go back into getTuple to figure out which column was bad:
(gdb) down #0 0x0000000000406cde in getTuple (data=0x7fff592a41a0, nulls=0x7fff592a4250, rowIndex=0) at func.c:73 73 in func.c
Looking up line 73, there’s (in this example) an access to
nulls[0][rowIndex]. Could this dereference a null pointer? See for
yourself:
(gdb) print nulls[0] $9 = 0x0
Right – so that’s where the trouble starts (in this case, the underlying reason was a DaCHS bug, as that array should never have been uninitialized). | https://dachs-doc.readthedocs.io/booster.html | 2020-07-02T18:09:31 | CC-MAIN-2020-29 | 1593655879738.16 | [] | dachs-doc.readthedocs.io |
References
API reference
Contains descriptions of classes and their members in Kentico API.
API examples
Provide basic hands-on examples of creating, retrieving, updating, and deleting objects.
Kentico controls
Explains how to work with the built-in controls to create your custom web parts.
Web.config settings
Lists the system configuration settings that you can adjust in the web.config file.
Web part property reference
You can find the descriptions of a web part's properties under the Documentation link in the web part properties dialog. | https://docs.kentico.com/k8/references | 2020-07-02T17:59:23 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.kentico.com |
You can initiate a build by pressing the Build button. This will ask you to confirm the build and start the process.
Please note that creating stand-alone executables may momentarily lock up Pixel Vision 8 based on your computer’s specs. While PV8 will show you which step of the build process it is in, the actual build for each platform may take anywhere from a few seconds to a full minute. Once the build is done, you will be notified of the location for the build.
It’s also important to point out that you will not be able to view the stand-alone executable zip files that were generated by the build process in the Workspace Explorer. Simply navigate to your computer’s Workspace directory and find the build folder for the game you just exported from. | https://docs.pixelvision8.com/exportinggames/buildingexecutables | 2020-07-02T19:37:16 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.pixelvision8.com |
How to create your first Cozy application¶
Prerequisite¶
Developing an application for Cozy is quite easy. All you need to know is: - how to develop a single page application in HTML5. You can use the tools or framework of your choice, or no framework - basic Docker knowledges.
The only required tool is Docker. We have been told that installing Docker on some familial flavours of Windows may be a bit difficult. If you use Windows, please check if Docker is available on your system.
Install the development environment¶
On GNU/Linux, according to the documentation: « The docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. If you don’t want to use sudo when you use the docker command, create a Unix group called docker and add users to it. Be warned that the docker group grants privileges equivalent to the root user. You should have a look at Docker’s documentation on security.
Every application running inside Cozy is a client-side HTML5 application interacting with your data through the API of the server. To develop an application, you’ll require a running Cozy server.
The easiest way is to use the Docker image for developers we provide.
Just install it:
docker pull cozy/cozy-app-dev
(We update this image on a regular basis with the latest version of the server and our library. Don’t forget to update the image by running
docker pull cozy/cozy-app-dev from time to time).
Create your first application¶
The minimal application consist of only two files:
- an HTML file,
index.html, with the markup and the code of your application
- a manifest describing the application. It’s a JSON file named
manifest.webapp with the name of the application, the permissions it requires… We’ll have a deeper look to it content later. #TODO add an inner link to the manifest description.
Your application will be able to use some shared libraries provided by the server, so you don’t have to include them into your project.
Your application requires some informations to interact with the server API, for example the URL of its entrypoint, and an auth token. This data will be dynamically injected into
index.html when it serves the page. So the
index.html file has to contain some string that will be replaced by the server. The general syntax of this variables is
{{…}}, so don’t use this syntax for other purpose in the page, for example inside comments.
You can use the following variables:
{{.Domain}}will be substituted by the URL of the API entrypoint
{{.Token}}will be replaced by a token that authenticate your application when accessing the API
{{.Locale}}: the lang f the instance
{{.AppName}}: the name of the application
{{.IconPath}}will be replaced by HTML code to display the favicon
{{.CozyClientJS}}will be replaced with HTML code to inject the Cozy client library
{{.CozyBar}}will be replaced with HTML code to inject the upper menu bar.
Use the API¶
If you added
{{.CozyClientJS}} to your page, interacting with the server will be as easy as using the Cozy Client JS library. All you have to do is to initiate the library with the server parameters (the URL of the API and the auth token of your application):
window.cozy.client.init({cozyURL: "…", token: "…"});
You can then interact with the server by using methods of the
window.cozy.client properties. For example, to get current disk usage:
cozy.client.settings.diskUsage() .then(function (usage) {console.log("Usage (promise)", usage);}); .catch(function(err){ console.log("fail", err); });
This library embeds most of the available server APIs: manipulate documents and files, manage applications and server settings… It also provides some some methods to help application keep working while being offline.
Some server APIs may not be available right now through the library. If you want to use one of this method, you’ll have to call it manually. See below. #TODO - add inner link.
Behind the magic¶
Some server APIs may not be available right now through the library. If you want to use one of this method, you’ll have to call it manually. We’ll describe here how to access the API without using the Cozy Cliznt JS library.
Connecting to the API requires three things:
- its URL, injected into the page through the
{{.Domain}}variable
- the application auth token, injected into the page through the
{{.Token}}variable. Each request sent to the server must include this token in the
Authorizationheader
- the session cookie, created when you connect to your server. This is an
HttpOnly cookie, meaning that JavaScript applications can’t read it. This prevent a malicious script to stole the cookie.
Here’s a sample code that get API informations provided by the server and query the API:
<div data-
document.addEventListener('DOMContentLoaded', () => { "use strict"; const app = document.querySelector('[data-cozy-token]'); fetch(`//${app.dataset.cozyDomain}/apps`, { method: 'GET', headers: { Authorization: `Bearer ${app.dataset.cozyToken}` // Here we use the auth token }, credentials: 'include' // don’t forget to include the session cookie }) .then(function (response) { if (response.ok) { response.json().then((result) => { console.log(result); }); } else { throw new Error('Network response was not ok.'); } }) .catch(function (error) { console.log('There has been a problem with your fetch operation: ' + error.message); }); });
The manifest¶
Each application must have a “manifest”. It’s a JSON file named
manifest.webapp stored at the root of the application directory. It describes the application, the type of documents it uses, the permissions it require…
Here’s a sample manifest:
{ "name": "My Awesome application", "permissions": { "apps": { "type": "io.cozy.apps" }, "permissions": { "type": "io.cozy.permissions" }, "settings": { "type": "io.cozy.settings" }, "sample": { "type": "io.cozy.dev.sample", "verbs": ["GET", "POST", "PUT", "PATCH", "DELETE"] }, "jobs": { "type": "io.cozy.jobs" } }, "routes": { "/": { "folder": "/", "index": "index.html", "public": false }, "/public": { "folder": "/public", "index": "index.html", "public": true } } }
Permissions¶
Applications require permissions to use most of the APIs. Permissions can be described inside the manifest, so the server can ask the user to grant them during installation. Applications can also request permissions at run time.
A permission must at type contain a target, the type of objects the application want to interact with. Can be a document type, or an action on the server. By default, all grant on this object are granted, but we can also request fine grained permissions, for example limiting to read access. We can also limit the scope to a subset of the documents.
In the manifest, each permission is an object, with a random name and some properties:
type: mandatory the document type or action name
description: a text that will be displayed to the user to explain why the application require this permission
verbs: an array of HTTP verbs. For example, to limit permissions to read access, use
["GET"]
selector: a document attribute to limit access to a subset of documents
values: array of allowed values for this attribute.
An application can request a token that grant access to a subset of its own permissions. For example if the application has full access to the files, it can obtain a token that give only read access on a file. Thus, the application can make some documents publicly available. The public page of the application will use this token as authentication token when accessing the API.
Samples¶
Application require full access to files:
{ "permissions": { "files": { "description": "…", "type": "io.cozy.files" }, } }
Application want to be able to read the contact informations of
[email protected]
{ "permissions": { "contact": { "type": "io.cozy.contacts", "verbs": ["GET"], "selector": "email", "values": ["[email protected]"] } } }
Routing¶
The application must declare all of its URLs (routes) inside the manifest. A route is an object associating an URL to an HTML file. Each route has the following properties:
folder: the base folder of the route
index: the name of the file inside this folder
public: a boolean specifying whether the route is public or private (default).
Sample:
"routes": { "/admin": { "folder": "/", "index": "admin.html", "public": false }, "/public": { "folder": "/public", "index": "index.html", "public": true }, "/assets": { "folder": "/assets", "public": true } }
cozy-client-js¶
This library embeds most of the available server APIs: manipulate documents and files, manage applications and server settings… It also provides some some methods to help application keep working while being offline.
The library expose a client API under the
window.cozy.client namespace. Before using it, you have to initiate the library with the server parameters (the URL of the API and the auth token of your application):
window.cozy.client.init({cozyURL: "…", token: "…"});
The library supports two programming paradigms: callback and Promises, so you can use your favorite one. If you prefer using callbacks rather than Promises, just add
disablePromises to the options when initializing the library:
window.cozy.client.init({cozyURL: "…", token: "…", disablePromises: true}); window.client.settings.diskUsage(function (err, res) { (…) });
Raw API documentation¶
In this tutorial, we’ll only see a few samples of how to use the library. For a complete description of all available methods, please refer to its own documentation:
- documents
- files
- authentification
- authentication with OAuth2
- settings
- inter-app communication
- jobs and triggers
- sharing
- offline
- Cozy Bar
Manipulating documents¶
Inside cozy data system, all documents are typed. To prevent applications to create document types with the same name but different description, the naming of the doctypes use the Java specification. Every document type name must be prefixed by the reverted domain name of its creator. If you don’t own a domain name, you can also use your email address. For example, doctypes created by Cozy are prefixed by
io.cozy or
io.cozy.labs. If you don’t own a domain name, and your email address is
[email protected], prefix your doctype names with
cloud.bar.foo.
Before manipulating documents, you have to request permission to access their doctype, either in the manifest or dynamically.
Every method allowing to handle document are available under the
cozy.client.data namespace. For example:
cozy.client.data.create(doctype, attributes),
cozy.client.data.update(doctype, doc, newdoc),
cozy.client.data.delete(doctype, doc)to create, update and delete documents
cozy.client.data.updateAttributes(doctype, id, changes)to only update some attributes of a document
cozy.client.data.find(doctype, id)return a document using its ident
cozy.client.data.changesFeed(doctype, options)get the latests updates of documents of a doctype.
- you can attach files to a document using
cozy.client.data.addReferencedFiles(doc, fileIds)and list attachments with
cozy.client.data.listReferencedFiles(doc).
Querying¶
To search documents inside the database, you first need to create an index on some attributes of the documents, then perform a query on this index. The library offers the following methods:
cozy.client.data.defineIndex(doctype, fields)to create the index
cozy.client.data.query(indexReference, query)to query an index. The query parameter uses the syntax of the Mango API from CouchDB 2.
For example, to search contacts by their email address, you could use:
cozy.client.data.defineIndex("io.cozy.contacts", ["email"]) .then((index) => { return cozy.data.query(index, {"selector": {email: "[email protected]"}}) }) .then( (result) => { console.log(result[0].name); });
Manipulating files¶
The metadata of the files are stored inside the server database, allowing to perform advanced queries, and the files themselves on a virtual file system.
The library offer a lot of methods under
cozy.client.files namespace to manipulate files. Most of the methods allows to manipulate a file or folder either by its id or by its full path. Here are the most commons ones, but a lot of other methods are available in the raw API documentation:
create()and
updateById()to create and update a file
createDirectory()to create a folder
updateAttributesById()et
updateAttributesByPath()allow to update some metadata
- use
destroyByIdto remove a file
- a virtual trash is available. You can put files into the trash (
trashById()) and restore them (
restoreById()). You can also list the content of the trash (
listTrash()) and purge all trashed files (
clearTrash())
statById(id)et
statByPath(path)return the metadata and, or folders, their content
Folders¶
When using
statById() or
statByPath() to get metadata of of folder, you can than call
relations() on the resulting object to access their content. For example, to list content of the root folder, use:
cozy.client.files.statByPath("/") .then((dir) => { console.log(dir.relations("contents")); })
Some special folder have a pre-defined id that will never change:
io.cozy.files.root-diris the root of the filesystem
io.cozy.files.trash-diris the trash.
The Cozy Bar¶
The Cozy Bar is a component that display the Cozy menu on the top of your application and allow inter-apps features like content sharing.
Your application interacts with this component through
cozy-bar.js, a library injected into your pages by the server when you add
{{.CozyBar}} in the header. It exposes an API behind the window.cozy.bar namespace.
Before using it, you have to initialize the library:
window.cozy.bar.init({appName: "Mon application"}).
Styling¶
If you plan to build a webapp to run on Cozy, you’ll probably want to use a simple and elegant solution to build your interfaces without the mess of dealing with complex markup and CSS. Then Cozy UI is here for you!
It relies on Stylus as preprocessor. You can add it as a library in your project to use it out-of-the-box.
Start the development server¶
Now it’s time to start the development server, to test our application.
(remember what we previously said about the permissions required to run Docker: if your user doesn’t belong to the docker group, you’ll have to use
sudo to run each of this commands.)
To run your application inside the development server, just run the following command from the folder where your
index.html and
manifest.webapp files leave:
docker run --rm -it -p 8080:8080 -p 5984:5984 -p 8025:8025 -v $(pwd):/data/cozy-app --name cozydev cozy/cozy-app-dev
Let’s have a quick look at this command, so you can adapt it to your needs:
--rmwill delete the server when you stop it. This prevent Docker from keeping a lot of unused stopped images
-itallow to attach an interactive terminal, so you’ll be able to use the command line inside the server
-p 8080:8080: the server listens on port 8080 on the virtual machine. We forward this port to the same port on your local machine. To use another local port, for example 9090, use
-p 9090:8080
-p 5984:5984: this is just a convenient way to access the CouchDB database running inside the server. Point your browser to access its administrative interface
-p 8025:8025: Cozy requires a mail server. In the development image, we don’t use a real email server, but a software that can display the sent messages. Just point your browser to display the messages sent by the server
-v $(pwd):/data/cozy-appthis mount the current folder, where your application leaves, inside the server. This is what make the application available on the server
--name cozydevname the running virtual machine
cozydev, so you can easily refer to it from other Docker commands. For example, if you want to connect to a shell inside the server, you can use
docker exec -ti /bin/bash.
With this syntax, there is no data persistance: all your test data will be lost every time you stop the server. This is a good way to prevent side effects and start on a clean base, with an empty database.
However, if you want to persist data, you have to mount two folders from the virtual server to local folders:
/usr/local/couchdb/data (database) and
/data/cozy-storage (the virtual filesystem). This can be achieved by adding to the command line
-v ~/cozy/data/db:/usr/local/couchdb/data -v ~/cozy/data/storage:/data/cozy-storage which will store the server’s data into
~/cozy/data.
Once the server started, go to, connect to the server with the default password
cozy and you should be able to start testing your application.
You can also access the following URLs: get the database administrative panel display the emails sent by the server.
Test multiple applications¶
You can install more than one application into the development server, for example to test communication between applications. In order to achieve this, you have to mount the folder where your application leaves into subfolders of
/data/cozy-apps. For example, if the code of Cozy Drive and Cozy Photos is on your local filesystem in
~/cozy/drive and
~/cozy/photos, start the development server with:
docker run --rm -it -p 8080:8080 -p 5984:5984 -p 8025:8025 -v "~/cozy/drive":/data/cozy-app/drive" -v "~/cozy/photos:/data-cozy-app/photos" --name=cozydev cozy/cozy-app-dev
You’ll access the applications by connecting to and.
TODO¶
Ce serveur de développement utilise les noms de domaine
*.cozy.tools. Nous avons paramétré ce domaine pour qu’il pointe toujours vers
127.0.0.1, l’adresse de votre machine locale.
La branche
sample du dépôt de cette documentation contient un squelette minimaliste avec les fichiers nécessaires pour créer une application. Vous pouvez les récupérer en faisant :
git clone -b sample myapp cd myapp | https://docs.cozy.io/en/dev/app/ | 2017-11-17T21:12:15 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.cozy.io |
If you enforce verification of certificate validity by selecting Accept only SSL certificates signed by a trusted Certificate Authority in the virtual appliance management interface (VAMI) of the vSphere Replication appliance, some fields of the certificate request must meet certain requirements.
vSphere Replication can only import and use certificates and private keys from a file in the PKCS#12 format. Sometimes these files have a .pfx extension.
The certificate must be issued for the same server name as the value in the VRM Host setting in the VAMI. Setting the certificate subject name accordingly is sufficient, if you put a host name in the VRM Host setting. If any of the certificate Subject Alternative Name fields of the certificate matches the VRM Host setting, this will work as well.
vSphere Replication checks the issue and expiration dates of the certificate against the current date, to ensure that the certificate has not expired.
If you use your own certificate authority, for example one that you create and manage with the OpenSSL tools, you must add the fully qualified domain name or IP address to the OpenSSL configuration file.
If the fully qualified domain name of the appliance is
VR1.example.com, add
subjectAltName = DNS: VR1.example.comto the OpenSSL configuration file.
If you use the IP address of the appliance, add
subjectAltName = IP: vr-appliance-ip-addressto the OpenSSL configuration file.
vSphere Replication requires a trust chain to a well-known root certificate authority. vSphere Replication trusts all the certificate authorities that the Java Virtual Machine trusts. Also, you can manually import additional trusted CA certificates in /opt/vmware/hms/security/hms-truststore.jks on the vSphere Replication appliance.
vSphere Replication accepts MD5 and SHA1 signatures, but VMware recommends that you use SHA256 signatures.
vSphere Replication does not accept RSA or DSA certificates with 512-bit keys. vSphere Replication requires at least 1024-bit keys. VMware recommends using 2048-bit public keys. vSphere Replication shows a warning if you use a 1024-bit key. | https://docs.vmware.com/en/vSphere-Replication/5.8/com.vmware.vsphere.replication_admin.doc/GUID-3FBE6E3F-0788-4B29-8978-959A8E0B2D82.html | 2017-11-17T21:25:20 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
Zenoss is an open-source application, server, and network management platform. OpsGenie is an alert and notification management solution that is highly complementary to Zenoss.
OpsGenie Zenoss plugin supports bi-directional integration with Zenoss. Integration leverages OpsGenie's zenoss-specific executable and marid utility to automatically create alerts and synchronize alert status between Zenoss and OpsGenie.
OpsGenie Zenoss integration plugin utilizes full capabilities of OpsGenie and provides bi-directional integration with Zenoss. Integration leverages OpsGenie's Zenoss-specific executable and marid utility to automatically create alerts and synchronizes alert status between Zenoss and OpsGenie.
The steps below describe how to integrate OpsGenie and Zenoss using OpsGenie Zenoss integration plugin. Note that you may need to slightly alter these instructions depending on your exact Linux distribution and your Zenoss configuration.
Packages provided support the following systems:
- Red Hat based linux distributions
- Debian based linux distributions
For Red Hat Based Distributions
- Download OpsGenie Zenoss (Linux RPM)
- Run the following command:
rpm -i opsgenie-zenoss- Zenoss (Linux DEB)
- Run the following command:
dpkg -i opsgenie-zenoss-<your_version>.deb
To add Zenoss integration in OpsGenie, go to OpsGenie Zenoss Integration page
Click on "Save Integration" button to save the integration. An "API Key" is generated for the integration. This key will be used by Zenoss to authenticate with OpsGenie and specify the integration that should be used to process Zenoss alerts.
The plugin uses a golang-executable file (included in the plugin as zenoss2opsgenie) to create, acknowledge and close alerts in OpsGenie. Zenoss should be configured to execute this file on events to create, acknowledge and close alerts in OpsGenie.
apiKey
Copy the API key from the Zenoss integration you've created above. zenoss2opsgenie uses this key to authenticate to OpsGenie. API key is also used to identify the right integration configuration that should be used to process alerts.
Yes
zenoss.command_url
URL to get detailed event data from Zenoss in zenoss2opsgenie.
Optional
zenoss.user
Credentials to authenticate Zenoss web server
Optional
zenoss.password
Credentials to authenticate Zenoss web server
Optional
recipients
Recipients field is used to specify who should be notified for the Zenoss alerts. This field is used to set the default recipients field value. It can be modified to route different alerts to different people in OpsGenie Zenoss Zenoss alerts. This field is used to set the default teams field value. It can be modified to route different alerts to different teams in OpsGenie Zenoss integration, Advanced Settings page.
Optional
Tags field is used to specify the tags of the alert that created in Opsgenie.
Optional
viaMaridUrl
viaMaridUrl field is used to send alerts to OpsGenie through Marid. You should enter host and port values of your working Marid.
- Useful when Zenoss server has no internet connection but Marid has internet connection.
- In order to use this feature you should be running the Marid provided within OpsGenie Zenoss Plugin
- Marid should be running with web server enabled ( http or https configurations enabled )
- Marid can run on a seperate host server, the communication between zenoss2opsgenie & Marid is done with basic http.
- Helps Zenoss server to consume less time when sending data to OpsGenie by letting Marid do the long task with an async approach.
Optional
logPath
Specifies the full path of the log file. (Default value is /var/log/opsgenie/zenoss2opsgenie.log)
Optional
zenoss2opsgenie.http.proxy.enabled
zenoss2opsgenie.http.proxy.enabled field is to enable/disable external proxy configuration. The default value is false.
Optional
zenoss2opsgenie.http.proxy.host
It is the host of the proxy.
Optional
zenoss2opsgenie.http.proxy.port
It is the port of the proxy.
Optional
zenoss2opsgenie.http.proxy.scheme
It is the proxy connection protocol. It may be http or https depending on your proxy servers. Its default value is http.
Optional
zenoss2opsgenie.http.proxy.username
It is the Proxy authentication username.
Optional
zenoss of the notification you created in Zenoss, which is described in "Configure Triggers in Zenoss" section. Use -apiKey flag for your apiKey. zenoss2opsgenie.go script. If you use this option, you need to build the script again and put the new executable to /usr/bin directory. You can find information about the location of the zenoss2opsgenie.go and how to build a go script in the "Source" section.
Before creating a notification you should:
- Select Events > Triggers from the Navigation menu
- Create a trigger named opsgenie
After creating the trigger, follow the steps below:
- Select Events > Triggers from the Navigation menu
- Select Notifications in the left panel
- Create a notification
- Choose the notification you created and click edit button
- In "Notification" tab, enable the notification, set "Send Clear" as checked and add the trigger named "opsgenie" from the trigger list and click "Add"
- In "Content" tab, put the following into "Command" and "Clear Command" fields. If you add optional -eventState=close to your Clear Command, zenoss2opsgenie executable will not try to get event details from Zenoss and will directly close the event's alert in OpsGenie.
/usr/bin/zenoss2opsgenie -evid=${evt/evid}
- In "Subscribers" tab, choose the subscribers and click "SUBMIT".
The plugin uses Marid utility (included in the plugin) to update the state of alerts in Zenoss when they get updated in OpsGenie. For example, when users acknowledge/close an alert from their mobile devices using the OpsGenie app, alert gets acknowledged/closed in Zenoss. Marid subscribes to alert actions in OpsGenie and reflects these actions on Zenoss using Zenoss JSON API.
- To start Marid, run following command:
/etc/init.d/marid start
- To stop Marid, run following command:
/etc/init.d/marid stop
Marid is a java application; therefore requires the Java Runtime version 1.6+ Both the Open JDK and Oracle JVMs can be used.
In order to use this feature "Send Alert Actions To Zenoss" checkbox should be enabled in OpsGenie Zenoss Zenoss, Marid gets the configuration parameters from /etc/opsgenie/conf/opsgenie-integration.conf file.
zenoss.command_url
URL to update Zenoss events when alerts get acknowledged, closed, etc.
zenoss.user
Credentials to authenticate on Zenoss web server.
zenoss.password
Credentials to authenticate on Zenoss web server.
If you see "JAVA_HOME not defined" error in /var/log/opsgenie/zenoss2opsgenie.log, you should define it in /etc/opsgenie/profile shell script.
For more information refer to Marid Integration Server and Callbacks docs. Please do not hesitate to get in touch with any questions, issues, etc.
Zenoss integration package does not support SSL v1.0. If your Zenoss Server has SSL v1.0, we suggest you to upgrade your SSL server.
If you're having trouble getting the integration to work, please check if your problem is mentioned below, and follow our advice:
1. Zenoss alerts are not getting created in OpsGenie:
Run the following test command from the shell. Check if the test alert is created in OpsGenie:
/usr/bin/zenoss2opsgenie -test
- If you're getting a "Trace/breakpoint trap" error: It means your zenoss2opsgenie plugin isn't compatible with your server distribution. Follow the "Source and Recompiling zenoss2opsgenie" section below and rebuild your zenoss2opsgenie.go according to your specific server environment.
- If the alert is created in OpsGenie: It means the integration is installed correctly. The problem might be that Zenoss is not notifying the OpsGenie contact for alerts. Check your Zenoss alert notifications log.
- If not: Check the logs at /var/log/opsgenie/zenoss Zenoss" above.
- If you can't make sense of the problem, set the plugin's log level to debug, try again and send the logs to us at [email protected].
- If there is no /var/log/opsgenie/zenoss2opsgenie.log file, or there are no logs in it, check the following:
- First, make sure the zenoss user has permission to write to /var/log/opsgenie directory. The installation package should automatically do this for you. If you encounter problems, execute:
chown -R zenoss:opsgenie /var/log/opsgenie
- Now check your Zenoss server logs at /opt/zenoss/log/zeneventd.log. See if there are error logs regarding zenoss2opsgenie, and contact us with them.
Setting zenoss2opsgenie plugin's log level to DEBUG:
Change the line zenoss2opsgenie.logger=warning to zenoss2opsgenie.logger=debug in /etc/opsgenie/conf/opsgenie-integration.conf file.
2. The Zenoss alert is not acknowledged when you ack the alert at OpsGenie:
- First, check your alert logs.
- If you don't see the "Posted [Acknowledge] action to Zenoss.." log, it means OpsGenie didn't send the Acknowledge action to Zenoss. Check your integration configuration, it might not have matched the alert action.
- If you're seeing "Executed [Acknowledge] action via Marid with errors." log, it means the zenossActionExecutor.groovy script in your Marid has encountered an error. Check the logs at /var/log/opsgenie/marid/script.log for error logs.
- If you only see the "Posted [Acknowledge] action to Zenoss.." zenoss2opsgenie is located under /usr/bin/ and zenoss2opsgenie.go is located under /etc/opsgenie/ and is also available at GitHub OpsGenie Integration repository. If you wish to change the behavior of the executable, you can edit zenoss2opsgenie.go and build it using:
go build zenoss2opsgenie.go
For installing go, refer to. Note that the executable in the plugin is built for linux/386 systems. | https://docs.opsgenie.com/docs/zenoss-integration | 2017-11-17T21:29:08 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.opsgenie.com |
Pokémon Encounters¶
Since the IV update of April 21st 2017 which makes IVs the same for players of level 30 and above, the encounter system has been reworked and now includes CP/IV scanning.
Steps for using the new encounter system:
Make sure initial scan has finished. Enabling encounters during initial scan is a waste of requests.
Enable encounters on your map (
-enc).
Add L30 accounts for IV/CP scanning into a CSV file (separate from your regular accounts file, e.g. ‘high-level.csv’). Warning: read the important points below if you’re scanning with only L30s! The lines should be formatted as “service,user,pass”:
ptc,randOMusername1,P4ssw0rd! ptc,randOMusername2,P4ssw0rd! ptc,randOMusername1,P4ssw0rd!
The config item or parameter to use this separate account file is:
--high-lvl-accounts high-level.csv
Create files for your IV/CP encounter whitelist and add the Pokémon IDs which you want to encounter, one per line. enc-whitelist.txt:
10 25 38 168
Enable the whitelist files in your config or cli parameters (check commandline.md for usage):
--enc-whitelist-file enc-whitelist.txt
(Optional) Set a speed limit for your high level accounts. This is separate from the usual speed limit, to allow a lower speed to keep high level accounts safer:
--hlvl-kph 25
(Optional) To reduce the number of times a high level account will log into the game via the API, the API objects are stored in memory to re-use them rather than recreating them. This is enabled by default to keep high level accounts safer but it will cause an increase in memory usage. To reduce memory usage, disable the feature with:
--no-api-store
L30 accounts are not being recycled and are not in the usual account flow. This is intentional, to allow for future reworks to handle accounts properly. This also keeps interaction with high level accounts to a minimum. We can consider handling them more automatically when the account handlers are properly fully implemented.
Some important notes:
- If you’re only scanning with high level accounts (i.e. your regular accounts file only has L30s), the
--high-lvl-accountsfile can stay empty. The encounter code will use your regular accounts to encounter Pokémon if they’re high enough level. But don’t mix low level accounts with high levels, otherwise encounters will be skipped.
- To report Unown form, Unown’s Pokémon ID must be added to the IV or CP whitelist.
- The old encounter whitelists/blacklists have been removed entirely.
- Both the IV and CP whitelists are optional.
- Captcha’d L30 accounts will be logged in console and disabled in the scheduler. Having
-venabled will show you an entry in the logs mentioning “High level account x encountered a captcha”. They will not be solved automatically.
- The encounter is a single request (1 RPM). We intentionally don’t use the account for anything else besides the encounter.
- The high level account properly uses a proxy if one is set for the scan, and properly rotates hashing keys when needed.
Relevant Logs¶
The following messages can be logged (
-v) and used to debug problems. Values denoted as
%s are variable.
Info & debug:
Encountering Pokémon ID %s with account %s at %s, %s.
Using hashing key %s for this encounter.
Encounter for Pokémon ID %s at %s, %s successful: %s/%s/%s, %s CP.
Errors & exceptions:
Exception while encountering Pokémon: %s.
Account %s encountered a captcha. Account will not be used.
Expected account of level 30 or higher, but account %s is only level %s.
No L30 accounts are available, please consider adding more. Skipping encounter. | http://rocketmap.readthedocs.io/en/latest/extras/encounters.html | 2017-11-17T21:12:44 | CC-MAIN-2017-47 | 1510934803944.17 | [] | rocketmap.readthedocs.io |
You can select an virtual machine image from a list of available images when creating blueprints for OpenStack resources.
A virtual machine image is a template that contains a software configuration, including an operating system. Virtual machine images are managed by the OpenStack provider and are imported during data collection.
If an image that is used in a blueprint is later deleted from the OpenStack provider, it is also removed from the blueprint. If all the images have been removed from a blueprint, the blueprint is disabled and cannot be used for machine requests until it is edited to add at least one image. | https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vrealize.automation.doc/GUID-CB9DF999-FC51-4EEC-BE66-22DCC2E7EA7C.html | 2017-11-17T21:36:09 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
You can change the frequency of several callback procedures, including the frequency that the vRealize Automation callback procedure is run for changed machine leases.
About this task
vRealize Automation uses a configured time interval to run different callback procedures on the Model Manager service, such as ProcessLeaseWorkflowTimerCallbackIntervalMiliSeconds which searches for machines whose leases have changed. You can change these time intervals to check more or less frequently.
When entering a time value for these variables, enter a value in milliseconds. For example, 10000 milliseconds = 10 seconds and 3600000 milliseconds = 60 minutes = 1 hour.
Prerequisites
Log in as an administrator to the server hosting the IaaS Manager Service. For distributed installations, this is the server on which the Manager Service was installed.
Procedure
- Open the ManagerService.exe.config file in an editor. The file is located in the vRealize Automation server install directory, typically %SystemDrive%\Program Files x86\VMware\vCAC\Server.
- Update the following variables, as desired.
- Save and close the file.
- Stop and then restart the vCloud Automation Center service.
- (Optional) : If vRealize Automation is running in High Availability mode, any changes made to the ManagerService.exe.config file after installation must be made on both the primary and failover servers. | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-5FBB7C73-2AAD-4106-9C0D-DE7B416A4716.html | 2017-11-17T21:46:55 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
gpfdist
Serves data files to or writes data files out from HAWQ segments.
Synopsis
gpfdist [-d <directory>] [-p <http_port>] [-l <log_file>] [-t <timeout>] [-S] [-w <time>] [-v | -V] [-s] [-m <max_length>] [--ssl <certificate_path>] gpfdist -? | --help gpfdist --version
Description
gpfdist is HAWQ parallel file distribution program. It is used by readable external tables and
hawq load to serve external table files to all HAWQ segments in parallel. It is used by writable external tables to accept output streams from HAWQ segments in parallel and write them out to a file.
In order for
gpfdist to be used by an external table, the
LOCATION clause of the external table definition must specify the external table data using the
gpfdist:// protocol (see the HAWQ command
CREATE EXTERNAL TABLE).
Note: If the
--ssl option is specified to enable SSL security, create the external table with the
gpfdists:// protocol. HAWQ.
Note: Currently, readable external tables do not support compression on Windows platforms, and writable external tables do not support compression on any platforms.
To run
gpfdist on your ETL machines, refer to Client-Based HAWQ Load Tools for more information.
Note: When using IPv6, always enclose the numeric IP address in brackets.
You can also run
gpfdist as a Windows Service. See Running gpfdist as a Windows Service for more details.
Options
gpfdistwill serve files for readable external tables or create output files for writable external tables. If not specified, defaults to the current directory.
gpfdistwill serve files. Defaults to 8080.
gpfdistprocess. Default is 5 seconds. Allowed values are 2 to 600 seconds. May need to be increased on systems with a lot of network traffic.
line too longerror message occurs). Should not be used otherwise as it increases resource allocation. Valid range is 32K to 256MB. (The upper limit is 1MB on Windows systems.)
WARNlevel and higher are written to the
gpfdistlog file.
INFOlevel messages are not written to the log file. If this option is not specified, all
gpfdistmessages are written to the log file.
You can specify this option to reduce the information written to the log file.
O_SYNCflag. Any writes to the resulting file descriptor block
gpfdistuntil the data is physically written to the underlying hardware.
For a HAWQ with multiple segments, there might be a delay between segments when writing data from different segments to the file. You can specify a time to wait before HAWQ closes the file to ensure all the data is written to the file.
gpfdist. After executing
gpfdistwith the
--ssl <certificate_path>option, the only way to load data from this file server is with the
gpfdist://protocol.
The location specified in <certificate_path> must contain the following files:
- The server certificate file,
server.crt
- The server private key file,
server.key
- The trusted certificate authorities,
root.crt
The root directory (
/) cannot be specified as <certificate_path>.
Running gpfdist as a Windows Service
HAWQ Loaders allow
gpfdist to run as a Windows Service.
Follow the instructions below to download, register and activate
gpfdist as a service:
Update your HAWQ Loaders for Windows package to the latest version. See HAWQ Loader Tools for Windows for install and configuration information.
gpfdistas a Windows service:
- Open a Windows command window
Run the following command:
sc create gpfdist binpath= "<loader_install_dir>\bin\gpfdist.exe -p 8081 -d \"<external_load_files_path>\" -l \"<log_file_path>\""
You can create multiple instances of
gpfdistby running the same command again, with a unique name and port number for each instance:
sc create gpfdistN binpath= "<loader_install_dir>\bin\gpfdist.exe -p 8082 -d \"<external_load_files_path>\" -l \"<log_file_path>\""
Activate the
gpfdistservice:
- Open the Windows Control Panel and select Administrative Tools > Services.
- Highlight then right-click on the
gpfdistservice
See Also
hawq load, CREATE EXTERNAL TABLE | http://hdb.docs.pivotal.io/212/hawq/reference/cli/admin_utilities/gpfdist.html | 2017-07-20T18:35:03 | CC-MAIN-2017-30 | 1500549423320.19 | [] | hdb.docs.pivotal.io |
Overview¶
MetAMOS represents a focused effort to create automated, reproducible, traceable assembly & analysis infused with current best practices and state-of-the-art methods. MetAMOS for input can start with next-generation sequencing reads or assemblies, and as output, produces: assembly reports, genomic scaffolds, open-reading frames, variant motifs, taxonomic or functional annotations, Krona charts and HTML report.
Citation¶
If you use MetAMOS in your research, please cite the following manuscript (in addition to the individual software component citations listed in stdout):
TJ Treangen, S Koren, DD Sommer, B Liu, I Astrovskaya, B Ondov, Irina Astrovskaya, Brian Ondov, Aaron E Darling, Adam M Phillippy, Mihai Pop MetAMOS: a modular and open source metagenomic assembly and analysis pipeline Genome Biology 14 (1), R2
Contents:
- Hardware requirements
- Installation
- Single binary
- Test suite
- Quick Start
- iMetAMOS
- Workflows
- Generic tools (or plug-in framework)
- MetAMOS directory structure
- Output
- Supported Programs
- FAQs
- Experimental: TweetAssembler v0.1b | http://metamos.readthedocs.io/en/latest/ | 2017-07-20T18:28:09 | CC-MAIN-2017-30 | 1500549423320.19 | [] | metamos.readthedocs.io |
Rajiv Gandhi Equity Savings Scheme 2012 (RGESS) – SEBI Circular admin February 8, 2013 Rajiv Gandhi Equity Savings Scheme 2012 (RGESS) – SEBI Circular2013-02-08T08:25:26+00:00 Mutual Funds 1 Comment Rajiv Gandhi Equity Savings Scheme 2012 (RGESS) – SEBI Circular Download (PDF, 39KB) Subscribe Updates, Its FREE! Email ID: +Admin admin February 8, 2013 Those who found this page were searching for:banks nominated for rajiv gandhi equity schemeTax savings in Rajive Gandhi suraksha schemesebi circular for rgessrgss reliance equity schemergss equity schemergess sebireliance rajiv gandhi equityrajiv gandhi saving scheme faqs in word formatrajiv gandhi equity scheme pamphletwhat is the requirement of for rgss mutual funds FundsCircular Rajiv Gandhi,Rajiv Gandhi,SEBIRajiv Gandhi Equity Savings Scheme 2012 (RGESS) - SEBI Circularadmin [email protected]AdministratorInvestmentKit Docs Circular Rajiv Gandhi, Rajiv Gandhi, SEBI
The Rajiv Gandhi Scheme is very attractive.SEBI should elaborate on the site as it is for a layperson who wants to save a d provide for pension.The letter of SEBI, it is observed is not understood by the masses .
How much go deposit
Lock in period
After how long will they get the amount and how much
will that be taxed then.No, but please clarify.
SEBI ought to understand that this is more for the masses
Demat account its advantages and if it is a lock in will they have to put the amount after 3 years in circulatiom
If yes,then where is the purpose of providing of pension by availing the Rajiv Gandhi Yojana come in
Is it like a Mutual Fund Scheme where the responsibility of giving the return and promising the amount will be paid by the banks nominated for the scheme. | https://docs.investmentkit.com/rajiv-gandhi-equity-savings-scheme-2012-rgess-sebi-circular/ | 2017-07-20T18:43:09 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.investmentkit.com |
New-AzureRmSqlDatabaseFailoverGroup
Syntax
New-AzureRmSqlDatabaseFailoverGroup [-ResourceGroupName] <String> [-ServerName] <String> [-AllowReadOnlyFailoverToPrimary <AllowReadOnlyFailoverToPrimary>] -FailoverGroupName <String> [-FailoverPolicy <FailoverPolicy>] [-GracePeriodWithDataLossHours <Int32>] [-PartnerResourceGroupName <String>] -PartnerServerName <String> [<CommonParameters>]
Description
Creates a new Azure SQL Database Failover Group for the specified servers.
Two Azure SQL Database TDS endpoints are created at FailoverGroupName.SqlDatabaseDnsSuffix (for example, FailoverGroupName.database.windows.net) and FailoverGroupName.secondary.SqlDatabaseDnsSuffix. These endpoints may be used to connect to the primary and secondary servers in the Failover Group, respectively. If the primary server is affected by an outage, automatic failover of the endpoints and databases will be triggered as dictated by the Failover Group's failover policy and grace period.
Newly created Failover Groups do not contain any databases. To control the set of databases in a Failover Group, use the 'Add-AzureRmSqlDatabaseToFailoverGroup' and 'Remove-AzureRmSqlDatabaseFromFailoverGroup' cmdlets.
During preview of the Failover Groups feature, only values greater than or equal to 1 hour are supported for the '-GracePeriodWithDataLossHours' parameter.
Examples
Example 1
C:\> $failoverGroup = New-AzureRMSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName primaryserver -PartnerServerName secondaryserver -FailoverGroupName fg -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
This command creates a new Failover Group with failover policy 'Automatic' for two servers in the same resource group.
Example 2
C:\> $failoverGroup = New-AzureRMSqlDatabaseFailoverGroup -ResourceGroupName rg1 -ServerName primaryserver -PartnerResourceGroupName rg2 -PartnerServerName secondaryserver1 -FailoverGroupName fg -FailoverPolicy Manual
This command creates a new Failover Group with failover policy 'Manual' for two servers in different resource groups.
Required Parameters
The name of the Azure SQL Database Failover Group to create.
The name of the secondary server of the Azure SQL Database Failover Group.
The name of the resource group.
The name of the primary Azure SQL Database Server of the Failover Group.
Optional Parameters
Whether an outage on the secondary server should trigger automatic failover of the read-only endpoint. This feature is not yet supported.
The failover policy of the Azure SQL Database Failover Group.
Interval before automatic failover is initiated if an outage occurs on the primary server and failover cannot be completed without data loss.
The name of the secondary resource group of the Azure SQL Database Failover Group.
Inputs
System.String
Outputs
System.Object | https://docs.microsoft.com/en-us/powershell/module/azurerm.sql/new-azurermsqldatabasefailovergroup?view=azurermps-4.2.0 | 2017-07-20T18:49:16 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.microsoft.com |
This topic describes the Pan and Zoom feature of RadSlideView.
When the DataSource of the RadSlideView is set to SlideViewPictureSource, the default item template contains PanAndZoomImage. This control
provides the pan and zoom behavior of the SlideView. The pan gesture is used for pan and the pinch gesture for zoom.
To include the PanAndZoomImage in the DataTemplate, the following namespace has to be declared:
The ItemTemplate can be set to a DataTemplate different from the default and the pictures from the DataSource of the SlideView can be accessed by
binding to the Source property. The default ItemTemplate for the SlideViewPictureSource is:
<telerikPrimitives:RadSlideView x:
<telerikPrimitives:RadSlideView.ItemTemplate>
<DataTemplate>
<telerikSlideView:PanAndZoomImage
</DataTemplate>
</telerikPrimitives:RadSlideView.ItemTemplate>
</telerikPrimitives:RadSlideView>
The PanAndZoomImage exposes the following properties: MaximumZoom and ZoomMode, which allow
customization of the zoom behavior.
The MaximumZoom property is of type Point and provides the option to limit the zooming to a specific state. The default value is (4d,4d)
The ZoomMode can be None, Free and FitToPhysicalSize:
None means that no zooming is applied
Free means that the user can zoom until the value of the MaximumZoom property is reached
FitToPhysicalSize - the maximum zoom scale is automatically calculated depending on the physical
dimensions of the displayed page (the default) | http://docs.telerik.com/help/windows-phone/radslideview-features-panandzoom.html | 2017-07-20T19:44:36 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.telerik.com |
News¶
Frontiers in Neuroscience commentary - 2009-10-05¶
Rolf Kötter wrote A primer of visual stimulus presentation software about the Vision Egg and PsychoPy for Frontiers in Neuroscience.
talk at SciPy 09 (video available online) - 2009-08-20¶
AndrewStraw gave a talk on the Vision Egg and another piece of his software, Motmot, at SciPy 09. See the talk at . The Vision Egg part starts at 25:18.
talk and tutorial at CNS*2009 - 2009-07-22¶
AndrewStraw is giving a talk and demo/tutorial on the Vision Egg and another piece of his software, the Motmot camera utilities at CNS*2009 at the Python for Neuroinformatics Workshop.
Vision Egg article in Frontiers in Neuroinformatics - 2008-10-08¶
An article about the Vision Egg has now been accepted for publication. The citation is:
- Straw, Andrew D. (2008) Vision Egg: An Open-Source Library for Realtime Visual Stimulus Generation. Frontiers in Neuroinformatics. doi: 10.3389/neuro.11.004.2008 link
This article covers everything from descriptions of some of the high-level Vision Egg features, to low-level nuts-and-bolts descriptions of OpenGL and graphics hardware relevant for vision scientists. Also, there is an experiment measuring complete latency (from USB mouse movement to on-screen display). In the best configuration (a 140Hz CRT with vsync off), this latency averaged about 12 milliseconds.
If you use the Vision Egg in your research, I’d appreciate citations to this article.
BCPy2000, using the Vision Egg, released - 2008-10-01¶
The Vision Egg has found another application: Brain-Computer Interfaces.
Dr. Jeremy Hill (Max Planck Institute for Biological Cybernetics) announced the release of BCPy2000, a framework for realtime biosignal analysis in python, based on the modular, popular (but previously pythonically challenged) BCI2000 project.
Vision Egg 1.1.1 released - 2008-09-18¶
This is primarily a bug fix release to Vision Egg 1.1.
Changes for 1.1.1¶
- Various small bugfixes and performance improvements:
- Removed old CVS cruft from VisionEgg/PyroClient.py VisionEgg/PyroHelpers.py
- Fix trivial documentation bugs to have the correct version number.
- Workaraound pygame/SDL issue when creating Font objects. (r1491, reported by Jeremy Hill)
- bugfix: allow 4D as well as 3D vectors to specify vertices (r1472, r1474)
- fix comments: improve description of coordinate system transforms (r1473)
- Use standard Python idiom (r1475)
- Further removal of ‘from blah import *‘ (r1476, r1501)
- Minor performance improvement (r1486)
- Remove unintended print statement (r1487 thanks to Jeremy Hill)
- properly detect String and Unicode types (r1470, reported by Dav Clark)
- update license to mention other code (r1502)
Vision Egg 1.1 released - 2008-06-07¶
This release brings the Vision Egg up to date with numpy and the new ctypes-based PyOpenGL, and includes lots of smaller bugfixes and removes old cruft that had been accumulating.
Changes for 1.1¶
- ‘package
Vision Egg 1.0 released - 2006-01-03¶
Changes for 1.0¶
- Major enhancements to the ephys server/GUI code to use normal (or slightly modified) demo scripts in this environment were one.
- Added win32_vretrace.WaitForRetrace() (but it’s not used for much, yet)
- Enhancements to EPhys Server/GUI sequencer
- Added ‘lat-long rectangle’ to available 3D masking windows
- Moved controller.CONSTANTS into FlowControl module namespace
- Numerous bugfixes
Quest.py announced - 2005-04-08¶
The popular QUEST algorithm by Denis Pelli has been ported to Python. See the Quest page for more details.
Pylink (by SR Research) - Eye tracking in Python - 2004-02-25¶.
Release 0.9.9 - 2003-09-19¶ | http://visionegg.readthedocs.io/en/latest/News.html | 2017-07-20T18:28:01 | CC-MAIN-2017-30 | 1500549423320.19 | [] | visionegg.readthedocs.io |
Client Security:
Java
This tutorial shows you how to set up a Riak Java client to authenticate itself when connecting to Riak.
If you are using trust- or PAM-based authentication, you can use the security setup described below. Certificate-based authentication is not yet supported in the Java client.
This tutorial does not cover certificate generation. It assumes that all
necessary certificates have already been created and are stored in a directory
called
/ssl_dir. This directory name is used only for example purposes.
Java Client Basics
When connecting to Riak using a Java-based client, you typically do so
by instantiating separate
RiakNode objects for each node in your
cluster, a
RiakCluster object registering those
RiakNode objects,
and finally a
RiakClient object that registers the general cluster
configuration. In this document, we will be working with only one node.
If you are using Riak security, all connecting clients should have
access to the same Certificate Authority (CA) used on the server side,
regardless of which security source you
choose. All clients should also provide a username, regardless of
security source. The example below sets up a single node object (we’ll
simply call it
node) that connects to Riak on
localhost and on port
8087 and specifies
riakuser as a username. That object will be used to
create a cluster object (we’ll call it
cluster), which will in turn be
used to create a
client object. The setup below does not specify a CA:
import com.basho.riak.client.api.RiakClient; import com.basho.riak.client.api.RiakCluster; import com.basho.riak.client.api.RiakNode; RiakNode node = new RiakNode.Builder() .withRemoteAddress("127.0.0.1") .withRemotePort(8087) // This will specify a username but no password or keystore: .withAuth("riakuser", null, null) .build(); RiakCluster cluster = new RiakCluster.Builder(node) .build(); RiakClient client = new RiakClient(cluster);
This client object is not currently set up to use any of the available security sources. This will change in the sections below.
Password-based Authentication
To enable our client to use password-based auth, we can use most of the
setup from the example above, with the exception that we will specify a
password for the client in the
withAuth method in the
node object’s
constructor rather than leaving it as
null. We will also pass a
KeyStore object into that method.
import java.io.FileInputStream; import java.io.InputStream; import java.security.KeyStore; import java.security.cert.CertificateFactory; import java.security.cert.X509Certificate; // Generate an InputStream from the CA cert InputStream inputStream = new InputStream("/ssl_dir/cacertfile.pem"); // Generate an X509Certificate from the InputStream and close the stream CertificateFactory certFactory = CertificateFactory.getInstance("X.509"); X509Certificate caCert = (X509Certificate) certFactory.generateCertificate(inputStream); inputStream.close(); // Generate a KeyStore object KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType()); ks.load(null, "password".toCharArray()); ks.setCertificateEntry("cacert", caCert); RiakNode node = new RiakNode.Builder() .withRemoteAddress("127.0.0.1") .withRemotePort(8087) .withAuth("riakuser", "rosebud", ks) .build(); // Construct the cluster and client object in the same fashion as above
PAM- and Trust-based Authentication
If you are using PAM- or trust-based authentication, the only difference from password-based authentication is that you do not need to specify a password.
Certificate-based Authentication
Certificate-based authentication is not currently supported in the official Riak Java client. | http://docs.basho.com/riak/kv/2.2.0/developing/usage/security/java/ | 2017-07-20T18:39:06 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.basho.com |
Rfam Help¶
Rfam is a collection of non-coding RNA families represented by manually curated sequence alignments, consensus secondary structures, and predicted homologues. This documentation is maintained by the Rfam team.
Contents¶
Get in touch¶
If you have any questions or feedback, feel free to submit a GitHub issue or email us at [email protected]. | http://rfam.readthedocs.io/en/latest/ | 2017-07-20T18:21:13 | CC-MAIN-2017-30 | 1500549423320.19 | [] | rfam.readthedocs.io |
Removes an account from a Database Mail profile.
Transact-SQL Syntax Conventions
Syntax
sysmail_delete_profileaccount_sp { [ @profile_id = ] profile_id | [ @profile_name = ] 'profile_name' } , { [ @account_id = ] account_id | [ @account_name = ] 'account_name' }
Arguments
[ @profile_id = ] profile_id
The profile ID of the profile to delete. profile_id is int, with a default of NULL. Either the profile_id or the profile_name may be specified.
[ @profile_name = ] 'profile_name'
The profile name of the profile to delete. profile_name is sysname, with a default of NULL. Either the profile_id or the profile_name may be specified.
[ @account_id = ] account_id
The account ID to delete. account_id is int, with a default of NULL. Either the account_id or the account_name may be specified.
[ @account_name = ] 'account_name'
The name of the account to delete. account_name is sysname, with a default of NULL. Either the account_id or the account_name may be specified.
Return Code Values
0 (success) or 1 (failure)
Result Sets
None
Remarks.
Permissions
Execute permissions for this procedure default to members of the sysadmin fixed server role.
Examples
The following example shows removing the account
Audit Account from the profile
AdventureWorks Administrator.
EXECUTE msdb.dbo.sysmail_delete_profileaccount_sp @profile_name = 'AdventureWorks Administrator', @account_name = 'Audit Account' ;
See Also
Database Mail
Create a Database Mail Account
Database Mail Configuration Objects
Database Mail Stored Procedures (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-delete-profileaccount-sp-transact-sql | 2017-07-20T20:17:39 | CC-MAIN-2017-30 | 1500549423320.19 | [array(['../../includes/media/yes.png', 'yes'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)] | docs.microsoft.com |
.
Fundamental Features¶
- Enables everyone to contribute to your content—while retaining your full control.
- Supports all four common collaboration and support patterns in several ways:
- Active contribution (create content, contribute corrections)
- Passive contribution (report issues, assign tasks, provide feedback)
- Active consumption (search and browse for content)
- Passive consumption (subscribe to notifications)
- Continually involves, helps and optimally supports your collaborators by sending out reminders, editing and submission hints to avoid losing traction or interest.
Core Components¶)
- todo/issue lists
- social login and user profiles (allauth)
- generic analytics support (analytical)
- web analytics respecting your privacy (Piwik integration)
- multiple themes (available for free at)
- multi-language support
- multiple websites support
- database support for PostgreSQL, MySQL, Oracle, and more
Roadmap¶
We are thrilled to get these awesome features out to you:
- group spaces with automatic mailing list ([email protected])
- documentation area / wiki (probably by use of CMS functionality)
- discussion forum / Q&A board (Askbot, Misago)
- content download for magazine generation
- unobtrusive, integrated, transparent document and asset management
- DropBox, ownCloud, Windows OneDrive, Google Drive, Apple iCloud, etc. integration
- fully integrated search (Haystack)
- live chat or chat integration
- surveys (django-crowdsourcing, ntusurvey)
Who is using django Organice?¶
Examples of websites running django Organice:
Download and Contributions¶
Official repositories: (kept in sync)
Getting Help¶
- Documentation is available at
- Questions? Please use StackOverflow. Tag your questions with
django-organice.
- Found a bug? Please use either the Bitbucket or GitHub issue tracker (you choose)
- Need support? You’re welcome to use our Gitter chat room. | http://docs.organice.io/en/latest/overview.html | 2017-07-20T18:27:01 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.organice.io |
VpcCidrBlockAssociation
Describes an IPv4 CIDR block associated with a VPC.
Contents
- associationId
The association ID for the IPv4 CIDR block.
Type: String
Required: No
- cidrBlock
The IPv4 CIDR block.
Type: String
Required: No
- cidrBlockState
Information about the state of the CIDR block.
Type: VpcCidrBlockState object
Required: No
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_VpcCidrBlockAssociation.html | 2017-12-11T02:38:09 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.aws.amazon.com |
Expression Data Import Service¶
Overview¶
The Differential Expression Import Service allows you to upload your own pre-processed differential expression datasets generated by microarray, RNA-Seq, or proteomic technologies to your private workspace and analyze them using annotations and analysis tools and compare them with other expression data sets in PATRIC. Currently, PATRIC only supports differential gene expression data in the form of log ratios, generated by comparing samples/conditions/time points.
The Differential Expression Import Service can be accessed from the Services Menu at the top of the PATRIC website page and via the PATRIC Command Line Interface (CLI).
Experiment Information¶
Specify Experiment Title/Description, Organism, and Pubmed ID for keeping track of uploaded data.
Data information¶
What is this?¶
At PATRIC, you can upload your own pre-processed differential expression datasets generated by microarray, RNA-Seq, or proteomic technologies to your workspace and analyze them using annotations and analysis tools. Currently, PATRIC only supports differential gene expression data in the form of log ratios, generated by comparing samples/conditions/time points. You may also compare your data with other transcriptomics datasets available at PATRIC. Data uploaded in your workspace is private and protected.
Experiment Data¶
Upload a data file containing differential gene expression values in the form of log ratios. The file should be in one of the supported formats described below. Optionally, you may also upload metadata related to sample comparisons in the prescribed format to help later in the data analysis.
Supported IDs¶
- RefSeq Locus Tag
- PATRIC Feature ID
- NCBI GI Number
- NCBI Protein ID
- SEED ID
- PATRIC Legacy ID
File Format¶
Currently, PATRIC allows you to upload your transcriptomics datasets in the form of differential gene expression measured as log ratios. Data can be uploaded in multiple file formats: comma separated values (.csv), tab delimited values (.txt), or Excel (.xls or .xlsx). Click to download Sample Data template in Gene Matrix Format. Files should contain data in one of the following formats:
Gene Matrix:
Gene List:
Data is presented in three columns: Gene ID, Sample ID, and expression value. Expression value should be in the form of log ratio (i.e. log2 (test/control)). Below is an example of transcriptomics data in Gene List format:
Experiment Type¶
This field specifies the the experiment type Transcriptomics, Proteomics, or Phenomics.
ID Type¶
In order to take full advantage of PATRIC data, gene ID’s provided in the experiment data are mapped to PATRIC. Due to differences in annotation that may exist some genes may go unmapped. Unmapped genes will be excluded from subsequent analysis.
Optional Metadata¶
PATRIC allows you to upload Metadata Template | https://docs.patricbrc.org/user_guide/differential_expression_data_and_tools/expression_data_import_service.html | 2017-12-11T02:22:11 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.patricbrc.org |
Tools & Resources
Changing the Keyboard Layout
Changing the keyboard layout affects both keystrokes and the on-screen keyboard. Once Windows boots, the keyboard layout is set by the Windows operating system. A restart is required to commit the keyboard layout changes.
- Go to Menu > Computer > Change Keyboard Layout.
The Select the keyboard language (layout) window appears.
- Select a keyboard layout.
- Click OK.
- Click OK to restart the endpoint. | http://docs.trendmicro.com/en-us/enterprise/endpoint-encryption-50-patch-4-administrator-guide/c_working_fde/c_fde_preboot/t_fde_preboot_keybrd.aspx | 2017-12-11T02:09:32 | CC-MAIN-2017-51 | 1512948512054.0 | [] | docs.trendmicro.com |
Help Center
Local Navigation
Preserving the aspect ratio of an object
Aspect ratio refers to the ratio between the height and the width of a rectangular object.
You can preserve the aspect ratio of the view box so that it is not stretched when it is displayed in a view port that has a different aspect ratio. By default, the aspect ratio of the view box is not preserved; content is stretched so that it fits view port.
The Preserve Aspect Ratio property is a combination of the following:
- scaling, which defines how the BlackBerry® device scales the view box in the view port. You can choose one of the following options:
- alignment, which defines how the BlackBerry device aligns the view box in the view port.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/pt-br/developers/deliverables/7116/Preserving_the_aspect_ratio_of_an_object_628241_11.jsp | 2015-02-27T04:14:32 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.blackberry.com |
In addtion to fitnesse which is delivered as part of the download, several additional components are required on the client side to run the acceptance tests. To leverage an existing web automation library (WATIR), tests are written in Ruby.
1. Ruby 1.8.4 or higher: available at. For Windows, there is a one-click installer at.
2. WATIR 1.4.1 or higher: available at. WATIR is the web automation testing library that drives the Internet Explorer.
3. AutoIt: available at automation library is used to control the mouse and interact with Windows Dialogs. | http://docs.codehaus.org/display/MAP/Pre-Requisitesa | 2015-02-27T04:13:01 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.codehaus.org |
Page Contents
Product Index
“We are such stuff as dreams are made on; and our little life is rounded with a sleep.” Shakespeare, The Tempest, Act IV, Scene I.
Sepulture is a complete burial ground environment for Poser and DAZ Studio with 20 separate props and night and daytime scenes.
Visit our site for further technical support questions or concerns.
Thank you and enjoy your new products! | http://docs.daz3d.com/doku.php/public/read_me/index/15083/start | 2015-02-27T03:58:36 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.daz3d.com |
This create a new Category Blog Menu Item:
To edit an existing Category Blog Menu Item, click its Title in Menu Manager: Menu Items.
Used to show articles belonging to a specific Category Blog Layout has the following Category Options, as shown below.
Blog Layout Options control the appearance of the blog layout.
The Category Blog: | https://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_Article_Category_Blog&oldid=66611 | 2015-02-27T04:51:47 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.joomla.org |
Undergraduate Catalog
ROTC (Army)
Coordinator: David A. Campion
For students seeking to serve as commissioned officers in the U.S. Army, Army Reserve, or National Guard upon graduation, Lewis & Clark maintains a partnership with the Army Reserve Officer Training Corps (ROTC) Battalion at the University of Portland. This partnership enables students to integrate their military training as cadets with a traditional liberal arts education.
Students interested in ROTC should meet with the ROTC coordinator as soon as they enroll at Lewis & Clark. The ROTC coordinator serves as a first-year advisor to these students until they declare a major. (Students may also have an advisor who is teaching one of their first-year classes.) Thereafter, the ROTC coordinator continues to meet with students regularly to review their academic performance, and to help them plan their course schedule and balance their studies with their ROTC commitments and commissioning requirements. The ROTC coordinator is Lewis & Clark's liaison to the commanding officer and professor of military science at the University of Portland Army ROTC Battalion.
Lewis & Clark students may earn up to 2 semester hours of practicum credit per semester, to a maximum of 8 credits, while they are actively enrolled as cadets in ROTC. To do so, they should enroll in ROTC 244 Practicum. Supervised by the ROTC coordinator, students in this course write about their field experiences and integrate those experiences with other parts of the Lewis & Clark education. This practicum will be graded on a credit-no credit basis and follows all of the normal Lewis & Clark rules and regulations governing internship and practicum credit.
Students may also transfer up to 4 semester hours of credit for physical education classes completed in ROTC training. A maximum of 4 semester hours of physical education credit is applicable toward graduation requirements. Students who take PE/A 101 Activities and/or PE/A 102 Varsity Athletics at Lewis & Clark, therefore, will not be able to transfer a full 4 semester hours of credit for physical education classes completed in ROTC training.
Students enrolled as cadets may satisfy the ROTC military history requirement by completing HIST 299 Independent Study. This directed study, taken for a grade, is limited to cadets and is worth 4 semester hours of credit. It may also count as an elective toward the history major or minor.
Faculty
David A. Campion. Dr. Robert B. Pamplin Jr. Associate Professor of History, chair of the Department of History, ROTC coordinator. British and South Asian history. Ph.D. 2002, M.A. 1997 University of Virginia. B.A. 1991 Georgetown University.
ROTC 244 Practicum
Faculty: Campion.
Content: Integration of ROTC field experiences with a liberal arts education. Credit-no credit. May be repeated for credit.
Prerequisites: None.
Restrictions: Sophomore standing and consent required. Open only to ROTC cadets.
Usually offered: Annually, fall and spring semester.
Semester credits: 1-2. | http://docs.lclark.edu/undergraduate/rotc/ | 2015-02-27T03:58:23 | CC-MAIN-2015-11 | 1424936460472.17 | [array(['/images/printer.png', 'Print This Course Print This Course'],
dtype=object) ] | docs.lclark.edu |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.