Related topics: EventGraph, FBX Animation Pipeline, Maya Animation Rigging Toolset, Skeletal Meshes, Animation Blueprints, Physics-Based Animation
On the left, the character is not using IK setups. In the middle, IK is used to keep the feet planted on the small colliding objects. On the right, IK is used to make the character's punch animation stop when it hits the moving block.
Inverse Kinematics (IK) provide a way to handle joint rotation from the location of an end-effector rather than via direct joint rotation. In practice, you provide an effector location and the IK solution then solves the rotation so that the final joint coincides with that location as best it can. This can be used to keep a character's feet planted on uneven ground, and in other ways to produce believable interactions with the world.
Unreal Engine uses a 2-bone IK system that is ideal for things such as arms and legs.
For examples of Hand and Foot IK, you can also reference the Animation Content Examples page under section 1.8.
If you are already familiar with what IK is and how it works, you can skip this section!
Most animated skeletons in Unreal are driven by direct rotational data fed straight into the bones of the character or Skeletal Mesh. This can be thought of as forward kinematics, or direct application of rotation to joints or bones. Below is a diagram illustrating the concept:
As its name implies, inverse kinematics works in the other direction. Instead of applying rotation to bones, we instead give the bone chain a target (also known as an end effector), providing a position that the end of the chain should try to achieve. The user or animator moves the effector and the IK solver (the algorithm that drives rotation in an IK system) rotates the bones such that the final bone in the chain ends at the location of the target. In the image below, the end effector is designated by the red cross.
In UE4, IK can be used to override and augment existing animations to make the motions of a character or Skeletal Mesh appear to be more reactive to their environment.
There are many ways to utilize IK for your characters, from keeping feet or paws planted on the ground to having a character appear to grip and hold onto moving objects. For the purposes of this documentation, we will cover the most common setup: planting feet on uneven ground or stairs.
One of the more important considerations for IK use is that it generally requires setup in a few different locations. At a high level, these are:
Some setup for handling the location of the effector. This is often done within the Pawn or Character Blueprint.
Setup in the Animation Blueprint Event Graph to take in the effector location from the Pawn or Character. This will be used for the IK solver.
Setup of the 2-Bone IK node within the character's Animation Blueprint Anim Graph.
As with all things, a little bit of planning goes a long way. Make sure you have an idea of what you need your IK setup to do. Is it for a character's feet? Their hands? What will they be doing where they will need to react? The more of these questions you can answer early, the easier the setup will be. Fortunately, with the power of UE4's Blueprint visual scripting, it will be easy enough to add functionality later.
For the first example, we will give an overview of setting up simple IK on a character to help their feet remain planted on uneven ground.
This example can be found in the Content Examples project. Just open the map named Animation.umap and look at example 1.8.
The first step will be to set up the Pawn or Character Blueprint to properly handle the necessary data. This essentially means that we need to perform some traces from the feet so that we can keep track of when there is some sort of obstacle in place that they should step on.
Be aware that in the following examples, a few variables were added just to simplify wire connections within the Blueprint, to make them a little less visually confusing for documentation. These variables will not exist in the actual Content Example project.
The Construction Script of the Character Blueprint really just sets up two critical pieces of data; see the Blueprint graph in the Content Example for the full setup.
For this setup, the Event Graph is essentially responsible for handling the trace operation, which simply casts down through the foot of the character, looking for some sort of obstacle. If it finds something, it stores the distance so that it can be used later in the Animation Blueprint to move the effector for the IK.
One of the important points about this graph is the use of a custom function called IKFootTrace. This function takes in a distance and a Socket name, using those as the basis for the trace operation. It then returns an offset value based on the results, which is later used to offset the location of the IK effector.
The IKFootTrace function itself can be seen in the Content Example's Blueprint graph.
And here is the event graph. With the help of the above function, you can see that its main job in this instance is just to perform traces for the right and left feet.
The base level of the Event Graph can be seen in the Content Example.
The result of this is that during each tick of the game, there is a downward trace taking place, looking for an impact point which would designate some uneven piece of ground to be accounted for. When found, the distance to that point is stored as an IK offset to be used later on in the Animation Blueprint.
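For readers more comfortable with code than Blueprint graphs, here is a rough C++ sketch of what a foot trace like this does. The Content Example implements it in Blueprints, so the function name, member names, and the exact offset math below are illustrative assumptions rather than the shipped implementation.

float AMyCharacter::IKFootTrace(float TraceDistance, FName SocketName) const
{
    // Start at the foot socket, but at the actor's height, and trace straight down.
    const FVector SocketLocation = GetMesh()->GetSocketLocation(SocketName);
    const FVector Start(SocketLocation.X, SocketLocation.Y, GetActorLocation().Z);
    const FVector End = Start - FVector(0.f, 0.f, TraceDistance);

    FHitResult Hit;
    FCollisionQueryParams TraceParams(FName(TEXT("IKFootTrace")), false, this);

    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, TraceParams))
    {
        // Something is under the foot: return how far above the bottom of the
        // trace the hit point is, so the effector can be raised by that amount.
        return (Hit.Location - End).Z;
    }

    return 0.f; // Nothing hit, so no IK offset is needed.
}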
In the trace illustration, the green diamond represents the location of the Socket used as the trace starting point, and the red arrow represents the trace itself.

The first part of the Animation Blueprint we will look at is the Event Graph. Generally speaking, the main purpose of the Event Graph in an Animation Blueprint is to take in data from other sources - such as the Character or Pawn Blueprint - and then translate it into variable updates that can be used in the AnimGraph.
In this case, the first thing we do is get the current Pawn and then make sure to cast that to the specific Pawn-based class in which we did all of our setup. This allows us to communicate directly with that specific instance of the Pawn Blueprint and read the data stored in its variables.
With the IK offset data that was stored in the Pawn Blueprint, we can generate location vectors to later be used by the IK effectors.
The AnimGraph culminates our setup by applying the information assembled thus far and using it to update the existing animation created for the character. For this example, the AnimGraph is very simple in that it is really only playing a single animation. In most cases, the Play JumpingJacks node would be replaced by any number of other nodes to produce the desired motion.
The important part is where we switch our current space from Local to Component. All animations that are played on the character are done in Local space. That is the fastest way to calculate it, since local transformations are relative to each bone's parent in the hierarchy. However, bone manipulations, such as applying 2-bone IK, must be done in Component Space, which is relative to the Root bone.
Because of this, we have to switch the data from Local to Component just long enough to perform our IK calculations. At the same time, the Final Animation Pose node can only take in Local Space data, so we have to convert back once the IK is applied.
For more information on coordinate spaces for animation, please see Coordinate Space Terminology.
After signing in as a seller or as a vendor, your users might encounter this notification:
Error! Your account is not enabled for selling, please contact the admin.
This is how you fix this:
- Log in with your admin account.
- Navigate to Dashboard > Users > All users and select the user with the problem.
- Find "Selling" and select the check box to enable or disable the product adding capability.
- Click Update User.
DescribeSnapshots

Supported filters include:

- description - A description of the snapshot.
- owner-alias - Value from an Amazon-maintained list (amazon | aws-marketplace | microsoft) of snapshot owners. Not to be confused with the user-configured AWS account alias, which is set from the IAM console.
- owner-id - The ID of the AWS account that owns the snapshot.

Snapshot results are returned in batches of at most 1000; if MaxResults is given a value larger than 1000, only 1000 results are returned.

Request parameters:

- Owner.N

  Returns the snapshots owned by the specified owner. Multiple owners can be specified.

  Type: Array of strings

  Required: No

- RestorableBy.N

  One or more AWS account IDs that can create volumes from the snapshot.

  Type: Array of strings

  Required: No

- SnapshotId.N

  One or more snapshot IDs.

  Default: Describes the snapshots for which you have create volume permissions.
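For reference, a roughly equivalent AWS CLI call is sketched below; the filter value and pagination size are placeholders, and the CLI handles result pagination for you.

# List snapshots owned by your own account whose description matches a pattern.
aws ec2 describe-snapshots \
  --owner-ids self \
  --filters Name=description,Values="*backup*" \
  --max-items 100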
Load Data in Sort Key Order
Load your data in sort key order to avoid needing to vacuum.
If each batch of new data follows the existing rows in your table, your data is properly stored in sort order, and you don't need to run a vacuum. You don't need to presort the rows in each load because COPY sorts each batch of incoming data as it loads.
For example, suppose that you load data every day based on the current day's activity. If your sort key is a timestamp column, your data is stored in sort order. This order occurs because the current day's data is always appended at the end of the previous day's data. For more information, see Loading Your Data in Sort Key Order.
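As a sketch of that pattern, the hypothetical table below uses its timestamp column as the sort key and is loaded once per day with COPY, so each batch lands after the existing rows; the table, bucket, and IAM role names are placeholders.

-- Table sorted on the load timestamp
CREATE TABLE events (
    event_id   BIGINT,
    event_type VARCHAR(32),
    event_ts   TIMESTAMP
)
SORTKEY (event_ts);

-- Daily load: today's rows sort after yesterday's, so no vacuum is needed
COPY events
FROM 's3://my-bucket/events/2019-01-15/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;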
The PhantomNet Manual
PhantomNet is a mobility testbed, providing researchers with a set of hardware and software resources that they can use to develop, debug, and evaluate their mobility designs. Resources available in PhantomNet include EPC/EPS software (OpenEPC), hardware access points (ip.access enodeb), PC nodes with mobile radios (HUAWEI cellular modems), and a large set of commodity bare metal nodes, virtual nodes and other resources inherited from the main Emulab site. In addition to raw resources, PhantomNet provides configuration directives and scripts to assist researchers in setting up their mobility experiments. Users specify their experiment via Emulab NS file templates augmented with PhantomNet-specific functionality. In complement to these template NS files, PhantomNet does the work of configuring the EPC software components to operate within the underlying Emulab environment.
The PhantomNet facility is built on top of Emulab and is run by the Flux Research Group, part of the School of Computing at the University of Utah.
Before you deploy the vSphere Replication appliance, you must prepare the environment.
Procedure
- Verify that you have vSphere and vSphere Web Client installations for the protected and recovery sites.
- In the vSphere Web Client, select the vCenter Server instance on which you are deploying vSphere Replication, open its advanced settings, and verify that the VirtualCenter.FQDN value is set to a fully-qualified domain name or a literal address.
What to do next
You can deploy the vSphere Replication appliance.
The Vector Browser SDK is a wrapper around browser-node that exposes an API for cross-chain integration.
Let's gear up for a 20-minute ride to get a custom integration for cross-chain swaps rolling.
npm install @connext/vector-sdk
TLDR; This method initializes the SDK.
It connects to or creates a channel unique to the user, using the loginProvider, with a given counterparty/router (Liquidity Provider) on the given chains. It also checks for pending transfers and returns the off-chain balance of the user's channel on the sender and recipient chains.
TLDR; This method gives you the estimated fee for the transfer.
You can either use the above function to make the deposit or the custom function below.
Release Notes - Fauna 2.12.0
Released 2020-05-05
The Fauna team is pleased to announce the availability of Fauna 2.12.0.
Highlights
All drivers now support the specification of a query timeout in milliseconds. When the timeout period has elapsed, the active query is terminated and an error is returned.
The Go driver now accepts the native nil in place of f.Null().
The Go driver now supports the various type check functions, including IsArray and similar.
The C# driver now supports the Documents function.
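As a sketch of the query-timeout highlight above, a driver-side timeout is passed when the client is constructed. The example below uses the Python driver; the parameter name query_timeout_ms is an assumption for this driver generation, so check your driver's reference for the exact option.

from faunadb import query as q
from faunadb.client import FaunaClient

# Terminate any query that runs longer than 2 seconds (2000 ms).
client = FaunaClient(secret="YOUR_FAUNA_SECRET", query_timeout_ms=2000)
result = client.query(q.paginate(q.collections()))
print(result)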
Issues fixed
Fixed an issue that prevented index tasks operating on deleted collections from completing.
Fixed an issue that prevented Ref("classes/users/self") from returning a reference to the current logged-in user.
Next steps
Learn more about Fauna from our product page.
Troubleshooting Time Zone Configuration
Processes use different time zones
Processes on FreeBSD (and thus pfSense® software) only pick up time changes when they are started. If the firewall has not been rebooted since the last time zone change, doing so will ensure that all running processes are using the correct time zone.
Clock does not use the expected zone offset
If the clock is several hours off, but accurate to the minute, it is most likely a time zone setting issue. If using a GMT offset time (e.g. -0500), change to a more specific geographic time zone such as America/New_York instead.
Using geographic zones is the best practice as they use an accurate offset, include local Daylight Saving Time behavior, and also consider historical changes in time zones.
The time zone database itself notes that the Etc/GMT zones intentionally use POSIX-style signs, which are the opposite of what many people expect: Etc/GMT+5, for example, is 5 hours behind (west of) UTC, and these fixed-offset zones never observe Daylight Saving Time.
This behavior is also noted on the Wikipedia page for the time zone database.
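The difference is easy to see from code. This small Python sketch (using the standard zoneinfo module, Python 3.9+) shows the reversed sign of the Etc/GMT zones and the DST handling of a geographic zone; the date chosen is just an example.

from datetime import datetime
from zoneinfo import ZoneInfo

utc_noon = datetime(2021, 7, 1, 12, 0, tzinfo=ZoneInfo("UTC"))

# "Etc/GMT+5" is a fixed UTC-5 offset despite the "+5" in its name.
print(utc_noon.astimezone(ZoneInfo("Etc/GMT+5")))

# A geographic zone applies DST: UTC-4 (EDT) in July, UTC-5 (EST) in winter.
print(utc_noon.astimezone(ZoneInfo("America/New_York")))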
Group permissions for linked accounts
After you establish Domain trust between the principal account and delegate accounts, principal account administrators can create groups and assign permissions.
Principal account administrative users can view group permissions for their account.
Manage group permissions in the principal account
Log in to the Rackspace Technology Customer Portal.
Select Account > User Management from the global navigation menu.
Select the User groups tab.
Use the Group permissions drop-down menu to navigate to linked accounts.
Select a trusting account and follow the steps to manage your user groups.
Federated Identity groups have additional product permissions. Select the Federated Identity Users tab and choose from the available options.
Toggle the switch to allow group members to create new AWS accounts.
This option is available for only the principal account.
Select a default AWS IAM policy from the drop-down list.
Edit additional product permissions.
Adding SSL (HTTPS) to Sourcegraph with a self-signed certificate
This is for external Sourcegraph instances that need a self-signed certificate because they don’t yet have a certificate from a globally trusted Certificate Authority (CA). It includes how to get the self-signed certificate trusted by your browser.
Configuring NGINX with a self-signed certificate to support SSL requires:
- Installing mkcert
- Creating the self-signed certificate
- Adding SSL support to NGINX
- Changing the Sourcegraph container to listen on port 443
- Getting the self-signed certificate to be trusted (valid) on external instances
1. Installing mkcert
While the OpenSSL CLI can generate self-signed certificates, its API is challenging unless you're well versed in SSL.
A better alternative is mkcert, an abstraction over OpenSSL written by Filippo Valsorda, a cryptographer working at Google on the Go team.
To set up mkcert on the Sourcegraph instance:
- Install mkcert
- Create the root CA by running:
sudo CAROOT=~/.sourcegraph/config mkcert -install
2. Creating the self-signed certificate
Now that the root CA has been created, mkcert can issue a self-signed certificate (sourcegraph.crt) and key (sourcegraph.key).
sudo CAROOT=~/.sourcegraph/config mkcert \
  -cert-file ~/.sourcegraph/config/sourcegraph.crt \
  -key-file ~/.sourcegraph/config/sourcegraph.key \
  $HOSTNAME_OR_IP
Run sudo ls -la ~/.sourcegraph/config and you should see the CA and SSL certificates and keys.
3. Adding SSL support to NGINX
Edit the default ~/.sourcegraph/config/nginx.conf file, adding a server block that listens for SSL connections on port 7443 and references the sourcegraph.crt and sourcegraph.key files created above. A sketch of such a block follows.
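This is only a sketch, not the exact stock configuration: keep your existing HTTP server block, and reuse whatever proxy_pass/upstream target that block already uses (shown here as a placeholder). Inside the container the config directory is mounted at /etc/sourcegraph, which is why the certificate paths below use that prefix.

server {
    listen 7443 ssl;

    ssl_certificate     /etc/sourcegraph/sourcegraph.crt;
    ssl_certificate_key /etc/sourcegraph/sourcegraph.key;

    location / {
        # Reuse the same upstream as the existing (non-SSL) server block.
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}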
4. Changing the Sourcegraph container to listen on port 443
Now that NGINX is listening on port 7443, we need to configure the Sourcegraph container to forward 443 to 7443 by adding --publish 443:7443 to the docker run command:
docker container run \
  --rm \
  --publish 7080:7080 \
  --publish 443:7443 \
  --volume ~/.sourcegraph/config:/etc/sourcegraph \
  --volume ~/.sourcegraph/data:/var/opt/sourcegraph \
  sourcegraph/server:3.32.0
Run the new Docker command, then validate by opening your browser at https://$HOSTNAME_OR_IP.
If running Sourcegraph locally, the certificate will be valid because
mkcert added the root CA to the list trusted by your OS.
5. Getting the self-signed certificate to be trusted (valid) on external instances
To have the browser trust the certificate, the root CA on the Sourcegraph instance must be installed locally by:
1. Installing mkcert locally
2. Downloading
rootCA-key.pem and
rootCA.pem from
~/.sourcegraph/config/mkcert on the Sourcegraph instance to the location of
mkcert -CAROOT on your local machine:
# Run locally: Ensure the directory the root CA files will be downloaded to exists
mkdir -p "$(mkcert -CAROOT)"

# Run on Sourcegraph host: Ensure the scp user can read (and therefore download) the root CA files
sudo chown $USER ~/.sourcegraph/config/root*

# Run locally: Download the files (change username and hostname)
scp [email protected]:~/.sourcegraph/config/root* "$(mkcert -CAROOT)"
3. Install the root CA by running:
mkcert -install
Open your browser again at https://$HOSTNAME_OR_IP and this time, your certificate should be valid.
Getting the self-signed cert trusted on other developer machines
This is largely the same as step 5, except easier. For other developer machines to trust the self-signed cert:
- Install mkcert.
- Download the rootCA-key.pem and rootCA.pem files from Slack or another internal system.
- Move the rootCA-key.pem and rootCA.pem files into the mkcert -CAROOT directory on their machine.
- Run mkcert -install on their machine.
MLflow Model Serving on Databricks
Preview
This feature is in Public Preview.
MLflow Model Serving allows you to host machine learning models from Model Registry as REST endpoints that are updated automatically based on the availability of model versions and their stages.
When you enable model serving for a given registered model, Databricks automatically creates a unique cluster for the model and deploys all non-archived versions of the model on that cluster. Databricks restarts the cluster if an error occurs and terminates the cluster when you disable model serving for the model. Model serving automatically syncs with Model Registry and deploys any new registered model versions. Deployed model versions can be queried with a standard REST API request. Databricks authenticates requests to the model using its standard authentication.
While this service is in preview, Databricks recommends its use for low throughput and non-critical applications. Target throughput is 200 qps and target availability is 99.5%, although no guarantee is made as to either. Additionally, there is a payload size limit of 16 MB per request.
Each model version is deployed using MLflow model deployment and runs in a Conda environment specified by its dependencies.
Note
- The cluster is maintained as long as serving is enabled, even if no active model version exists. To terminate the serving cluster, disable model serving for the registered model.
- The cluster is considered an all-purpose cluster, subject to all-purpose workload pricing.
- Global init scripts are not run on serving clusters.
Requirements
- MLflow Model Serving is available for Python MLflow models. You must declare all model dependencies in the conda environment.
- To enable Model Serving, you must have cluster creation permission.
Model serving from Model Registry
Model serving is available in Databricks from Model Registry.
Enable and disable model serving
You enable a model for serving from its registered model page.
Click the Serving tab. If the model is not already enabled for serving, the Enable Serving button appears.
Click Enable Serving. The Serving tab appears with Status shown as Pending. After a few minutes, Status changes to Ready.
To disable a model for serving, click Stop.
Validate model serving
From the Serving tab, you can send a request to the served model and view the response.
Model version URIs
Each deployed model version is assigned one or several unique URIs. At minimum, each model version is assigned a URI constructed as follows:
<databricks-instance>/model/<registered-model-name>/<model-version>/invocations
For example, to call version 1 of a model registered as
iris-classifier, use this URI:
https://<databricks-instance>/model/iris-classifier/1/invocations
You can also call a model version by its stage. For example, if version 1 is in the Production stage, it can also be scored using this URI:
https://<databricks-instance>/model/iris-classifier/Production/invocations
The list of available model URIs appears at the top of the Model Versions tab on the serving page.
Manage served versions
All active (non-archived) model versions are deployed, and you can query them using the URIs. Databricks automatically deploys new model versions when they are registered, and automatically removes old versions when they are archived.
Note
All deployed versions of a registered model share the same cluster.
Manage model access rights
Model access rights are inherited from the Model Registry. Enabling or disabling the serving feature requires ‘manage’ permission on the registered model. Anyone with read rights can score any of the deployed versions.
Score deployed model versions
To score a deployed model, you can use the UI or send a REST API request to the model URI.
Score via UI
This is the easiest and fastest way to test the model. You can insert the model input data in JSON format and click Send Request. If the model has been logged with an input example (as shown in the graphic above), click Load Example to load the input example.
Score via REST API request
You can send a scoring request through the REST API using standard Databricks authentication. The examples below demonstrate authentication using a personal access token.
Given a
MODEL_VERSION_URI like
https://<databricks-instance>/model/iris-classifier/Production/invocations (where
<databricks-instance> is the name of your Databricks instance) and a Databricks REST API token called
DATABRICKS_API_TOKEN, here are some example snippets of how to query a served model:
Snippet to query a model accepting dataframe inputs.
curl -X POST -u token:$DATABRICKS_API_TOKEN $MODEL_VERSION_URI \
  -H 'Content-Type: application/json' \
  -d '[
    {
      "sepal_length": 5.1,
      "sepal_width": 3.5,
      "petal_length": 1.4,
      "petal_width": 0.2
    }
  ]'
Snippet to query a model accepting tensor inputs. Tensor inputs should be formatted as described in TensorFlow Serving’s API docs.
curl -X POST -u token:$DATABRICKS_API_TOKEN $MODEL_VERSION_URI \
  -H 'Content-Type: application/json' \
  -d '{"inputs": [[5.1, 3.5, 1.4, 0.2]]}'
import numpy as np
import pandas as pd
import requests

def create_tf_serving_json(data):
    return {'inputs': {name: data[name].tolist() for name in data.keys()} if isinstance(data, dict) else data.tolist()}

def score_model(model_uri, databricks_token, data):
    headers = {
        "Authorization": f"Bearer {databricks_token}",
        "Content-Type": "application/json",
    }
    data_json = data.to_dict(orient='records') if isinstance(data, pd.DataFrame) else create_tf_serving_json(data)
    response = requests.request(method='POST', headers=headers, url=model_uri, json=data_json)
    if response.status_code != 200:
        raise Exception(f"Request failed with status {response.status_code}, {response.text}")
    return response.json()

# Scoring a model that accepts pandas DataFrames
data = pd.DataFrame([{
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2
}])
score_model(MODEL_VERSION_URI, DATABRICKS_API_TOKEN, data)

# Scoring a model that accepts tensors
data = np.asarray([[5.1, 3.5, 1.4, 0.2]])
score_model(MODEL_VERSION_URI, DATABRICKS_API_TOKEN, data)
You can score a dataset in Power BI Desktop using the following steps:
Open dataset you want to score.
Go to Transform Data.
Right-click in the left panel and select Create New Query.
Go to View > Advanced Editor.
Replace the query body with the code snippet below, after filling in an appropriate DATABRICKS_API_TOKEN and MODEL_VERSION_URI.
(dataset as table ) as table =>
let
    call_predict = (dataset as table ) as list =>
    let
        apiToken = DATABRICKS_API_TOKEN,
        modelUri = MODEL_VERSION_URI,
        responseList = Json.Document(Web.Contents(modelUri,
            [
                Headers = [
                    #"Content-Type" = "application/json",
                    #"Authorization" = Text.Format("Bearer #{0}", {apiToken})
                ],
                Content = Json.FromValue(dataset)
            ]
        ))
    in
        responseList,
    predictionList = List.Combine(List.Transform(Table.Split(dataset, 256), (x) => call_predict(x))),
    predictionsTable = Table.FromList(predictionList, (x) => {x}, {"Prediction"}),
    datasetWithPrediction = Table.Join(
        Table.AddIndexColumn(predictionsTable, "index"), "index",
        Table.AddIndexColumn(dataset, "index"), "index")
in
    datasetWithPrediction
Name the query with your desired model name.
Open the advanced query editor for your dataset and apply the model function.
For more information about input data formats accepted by the server (for example, pandas split-oriented format), see the MLflow documentation.
Monitor served models
The serving page displays status indicators for the serving cluster as well as individual model versions.
- To inspect the state of the serving cluster, use the Model Events tab, which displays a list of all serving events for this model.
- To inspect the state of a single model version, click the Model Versions tab and scroll to view the Logs or Version Events tabs.
Customize the serving cluster
To customize the serving cluster, use the Cluster Settings tab on the Serving tab .
- To add or edit tags, use the icons in the Actions column of the Tags table.
Known errors
ResolvePackageNotFound: pyspark=3.1.0
This error can occur if a model depends on
pyspark and is logged using Databricks Runtime 8.x.
If you see this error, specify the
pyspark version explicitly when logging the model, using
the `conda_env` parameter.
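A sketch of what that looks like when logging a pyfunc model is below; the pinned pyspark version, environment contents, and MyModel class are placeholders to adapt to your own model.

import mlflow.pyfunc

class MyModel(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input):
        return model_input  # placeholder logic

conda_env = {
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8",
        "pip",
        {"pip": ["mlflow", "pyspark==3.1.1"]},  # pin pyspark explicitly
    ],
    "name": "serving_env",
}

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=MyModel(),
        conda_env=conda_env,
    )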
All of the Pidgin related projects use Review Board for handling contributions at reviews.imfreedom.org.
There are a few things you'll need to set up to be able to submit a code review to these projects. This includes installing RBTools as well as some additional Mercurial configuration.
The recommended way to install RBTools is via pip and can be done with the following command.
pip3 install -U "RBTools>=1.0.3"
Once RBTools is installed you need to make sure that
rbt
is available on your
$PATH. To do this, you may need to
add
$HOME/.local/bin to your
$PATH. The exact
procedure to do this is dependent on your setup and outside of the
scope of this document.
This configuration for Mercurial is to make your life as a contributor
easier. There a few different ways to configure Mercurial, but these
instructions will update your user specific configuration in
$HOME/.hgrc.
The first thing we need to do is to install the evolve extension. This
extension makes rewriting history safe and we use it extensively in our
repositories. You can install it with a simple
pip3 install -U
hg-evolve. We will enable it below with some other bundled
extensions, but you can find more information about it
here.
When working with Mercurial repositories it is very important to make
sure that your username is set properly as it is added to every commit
you make. To set your username you must add it to the
[ui]
section in your
$HOME/.hgrc like the following example.
[ui]
username = Full Name <[email protected]>
Next we need to make sure that the evolve
and rebase extensions are loaded. To do so add the
lines in the following example. You do not need to put anything after
the
= as this will tell Mercurial to look for them in the
default places for extensions.
[extensions]
evolve =
rebase =
Next we're going to create a revsetalias. This will be used to make it easier to look at your history and submit your review request.
[revsetalias]
wip = only(.,default)
This alias will show us just the commits that are on our working branch
and not on the default branch. The default branch is where all
accepted code contributions go. Optionally, you can add the
wip command alias below which will show you the revision
history of what you are working on.
[alias]
wip = log --graph --rev wip
There are quite a few other useful configuration changes you can make, and a few examples can be found below.
[ui]
# update a large number of settings for a better user experience, highly
# recommended!!
tweakdefaults = true

[alias]
# make hg log show the graph as well as commit phase
lg = log --graph --template phases
Below is all of the above configuration settings to make it easier to copy/paste.
[ui]
username = Full Name <[email protected]>

# update a large number of settings for a better user experience, highly
# recommended!!
tweakdefaults = true

[extensions]
evolve =
rebase =

[alias]
# make hg log show the graph as well as commit phase
lg = log --graph --template phases

# show everything between the upstream and your wip
wip = log --graph --rev wip

[revsetalias]
wip = only(.,default)
To be able to submit a review request you need to have an account on our JetBrains Hub instance at hub.imfreedom.org. You can create an account here in a number of ways and even turn on two factor authentication. But please note that if you turn on two factor authentication you will need to create an application password to be able to login to Review Board.
Once you have that account you can use it to login our Review Board instance at reviews.imfreedom.org. Please note, you will have to login via the web interface before being able to use RBTools.
Once you have an account and have logged into our Review Board site, you
can begin using RBTools. In your shell, navigate to a Mercurial clone of
one of the Pidgin or purple-related projects, then run the
rbt login command. You should only need to do this once,
unless you change your password or have run the
rbt logout
command.
Before starting a new review request, you should make sure that your
local copy of the repository is up to date. To do so, make sure you are
on the default branch via
hg update default. Once you are on the
default branch, you can update your copy with
hg pull --update. Now that you're starting with the most
recent code, you can proceed with your contributions.
While it's not mandatory, it is highly recommended that you work on your contributions via a branch. If you don't go this path, you will have issues after your review request is merged. This branch name can be whatever you like as it will not end up in the main repositories, and you can delete it from your local repository after it is merged. See cleanup for more information.
You can create the branch with the following command:
hg branch my-new-branch-name
Now that you have a branch started, you can go ahead and work like you
normally would, committing your code at logical times, etc. Once you
have some work committed and you are ready to create a new review
request, you can type
rbt post wip and you should be good to
go. This will create a new review request using all of the committed work
in your repository and will output something like below.
Review request #403 posted.
At this point, your review request has been posted, but it is not yet
published. This means no one can review it yet. To do that, you need to
go to the URL that was output from your
rbt post command
and verify that everything looks correct. If this review request fixes
any bugs, please make sure to enter their numbers in the bugs field on
the right. Also, be sure to review the actual diff yourself to make sure
it includes what you intended it to and nothing extra.
Once you are happy with the review request, you can hit the publish
button which will make the review request public and alert the reviewers
of its creation. Optionally you can pass
rbt post in the future to automatically open the draft
review in your web browser.
rbt post has a ton of options, so be sure to check them out
with
rbt post --help. There are even options to
automatically fill out the bugs fixed fields among other things.
Typically with a code review, you're going to need to make some updates. However there's also a good chance that your original branching point has changed as other contributions are accepted. To deal with this you'll need to rebase your branch on top of the new changes.
Rebasing, as the name suggests is the act of replaying your previous
commits on top of a new base revision. Mercurial makes this pretty easy.
First, make sure you are on your branch with
hg up my-branch-name. Now you can preview the rebase with
hg rebase -d default --keepbranches --dry-run. We prefer
doing a dry-run just to make sure there aren't any major surprises. You
may run into some conflicts, but those will have to be fixed regardless.
If everything looks good, you can run the actual rebase with
hg rebase -d default --keepbranches. Again if you run into
any conflicts, you will have to resolve them and they will cause the
dry-run to fail. Once you have fixed the merge conflicts, you'll then
need to mark the files as resolved with
hg resolve --mark filename. When you have resolved all of
the conflicted files you can continue the rebase with
hg rebase --continue. You may run into multiple conflicts,
so just repeat until you're done.
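Putting the commands from this section together, a typical rebase pass looks like the following; my-branch-name is whatever branch name you chose earlier.

hg up my-branch-name
hg rebase -d default --keepbranches --dry-run   # preview; surfaces conflicts early
hg rebase -d default --keepbranches             # run the real rebase
# only needed if you hit conflicts:
hg resolve --mark path/to/fixed-file
hg rebase --continue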
After rebasing you can start addressing the comments in your review and
commit them. Once they are committed, you can update your existing
review request with
rbt post --update. If for some reason
rbt can not figure out the proper review request to
update, you can pass the number in via
rbt post --review-request-id #. Note that when using
--review-request-id you no longer need to specify
--update.
Just like an initial
rbt post, the updated version will be
in a draft state until you publish it. So again, you'll need to visit the
URL that was output, verify everything, and click the publish button.
This will typically only be done by the Pidgin developers with push access. If you want to test a patch from a review request, please see the patch section below.
It is HIGHLY recommended that you use a separate
clone of the repository in question when you want to land review requests.
This makes it much easier to avoid accidentally pushing development work
to the canonical repository which makes everyone's life easier. Also, the
mainline repositories now auto publish, so if you do not selectively push
commits, all of your draft commits will be published. You can name this
additional clone whatever you like, but using something like
pidgin-clean is a fairly common practice. This makes it easy
for you to know that this clone is only meant for landing review requests,
and other admistrative work like updating the ChangeLog and COPYRIGHT
files.
When you are ready to land a review request you need to make sure you are
on the proper branch. In most cases this will be the branch named
default and can be verified by running the command
hg branch. Next you need to make sure that your local copy
is up to date. You can do this by running
hg pull --update.
Please note, if you run
hg pull and then immediately run
hg pull --update you will not update to
the most recent commit as this new invocation of
hg pull has
not actually pulled in any new commits. To properly update, you'll need
to run
hg update instead.
Once your local copy is up to date you can land the review request with
rbt land --no-push --review-request-id # where
#
is the number of the review request you are landing. The
--no-push argument is to disable pushing this commit
immediately. Most of our configuration already enables this flag for you,
but if you're in doubt, please use the
--no-push argument.
Once the review request has been landed, make sure to verify that the revision history looks correct, run a test build as well as the unit tests, and if everything looks good, you can continue with the housekeeping before we finally push the new commits.
The housekeeping we need to do entails a few things. If this is a big new feature or bug fix, we should be documenting this in the ChangeLog file for the repository. Please follow the existing convention of mentioning the contributor as well as the issues addressed and the review request number. Likewise, if this is someone's first contribution you will need to add them to the COPYRIGHT file in the repository as well. If you had to update either of these files, review your changes and commit them directly.
Now that any updates to ChangeLog and COPYRIGHT are completed, we can
actually start pushing the changes back to the canonical repository.
Currently not all of the canonical repositories are publishing
repositories so we'll need to manually mark the commits as public. This
is easily accomplished with
hg phase --public.
Note, if you are not using a separate clone of the
canonical repository you will need to specify a revision to avoid
publishing every commit in your repository. If you run into issues or
have more questions about phases see the
official documentation.
Now that the changes have been made public, we can finally push to the
canonical repository with
hg push. Once that is done, you'll
also need to go and mark the review request as
Submitted in the Review Board web interface.
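For reference, the landing workflow described above condenses to roughly the following commands, run from your landing clone; 403 stands in for the real review request number.

hg update default
hg pull --update
rbt land --no-push --review-request-id 403
# build, run the unit tests, and update ChangeLog/COPYRIGHT if needed, then:
hg phase --public      # add -r <rev> if this clone also contains draft work of your own
hg push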
If you want to test a patch locally for any reason, you first need to
make sure that you are on the target branch for the review request which
is listed on the review request page. In most cases this will be the
default branch. Regardless you'll need to run
hg up branch-name before applying the patch.
Now that you are on the correct branch, you can apply the patch with
rbt patch # where
# is the id of the review
request you want to test. This will apply the patch from the review
request to your working copy without committing it.
Once you're done with your testing you can remove the changes with
hg revert --no-backup --all. This will return your
repository to exactly what it was before the patch was applied. The
--no-backup argument says to not save the changes that you
are reverting and the
--all argument tells Mercurial to
revert all files.
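In short, a patch test session looks something like this; again, 403 is a placeholder review request id.

hg up default                 # or the review request's target branch
rbt patch 403                 # apply the review request to the working copy
# ... build and test ...
hg revert --no-backup --all   # drop the applied changes when finished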
Whether or not your pull request has been accepted, you probably want to clean it up from your local repository. To do so, you need to update to a branch other than the branch you built it on. In the following example, we're going to remove the branch named my-new-branch-name that we used to create a review request.
hg up default
hg prune -r 'branch(my-new-branch-name)'
Now, all commits that were on the my-new-branch-name branch will have their contents removed but internally Mercurial keeps track that these revisions have been deleted.
You can repeat this for any other branches you need to clean up, and you're done!
Tempest Field Guide to Scenario tests
What are these tests?
Scenario tests are “through path” tests of OpenStack function. Complicated setups where one part might depend on completion of a previous part. They ideally involve the integration between multiple OpenStack services to exercise the touch points between them.
Any scenario test should have a real-life use case. An example would be:
“As operator I want to start with a blank environment”:
upload a glance image
deploy a vm from it
ssh to the guest
create a snapshot of the vm
Why are these tests in Tempest?
This is one of Tempest’s core purposes, testing the integration between projects.
Scope of these tests
Scenario tests should always use the Tempest implementation of the OpenStack API, as we want to ensure that bugs aren’t hidden by the official clients.
Tests should be tagged with which services they exercise, as determined by which client libraries are used directly by the test.
Example of a good test
While we are looking for interaction of 2 or more services, be specific in your interactions. A giant “this is my data center” smoke test is hard to debug when it goes wrong.
A flow of interactions between Glance and Nova, like in the introduction, is a good example. Especially if it involves a repeated interaction when a resource is setup, modified, detached, and then reused later again.
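To make the shape of such a test concrete, here is a heavily simplified Python sketch in the style of Tempest scenario tests. The helper names are assumptions modeled on tempest.scenario.manager and may not match the current code exactly; a real test would use the helpers that module actually provides.

from tempest.scenario import manager


class TestImageBootAndSnapshot(manager.ScenarioTest):
    """Upload an image, boot a server from it, then snapshot the server."""

    def test_boot_from_uploaded_image(self):
        # 1. upload a glance image (helper name is an assumption)
        image_id = self.image_create()
        # 2. deploy a vm from it
        server = self.create_server(image_id=image_id, wait_until='ACTIVE')
        # 3. ssh to the guest would go here, via a keypair and floating IP
        # 4. create a snapshot of the vm (helper name is an assumption)
        self.create_server_snapshot(server)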
Sauce Connect Proxy Changelog
For best performance and to take advantage of our latest security enhancements, upgrade to the latest version.
#v4.6.4 and above
Changelogs for Sauce Connect v4.6.4 and above are hosted here.
#v4.6.3
Release Date: December 9, 2020.
#New Features
sc client configuration now stored in DB: sc client may be configured via command-line arguments, config file or any combination of both. Configuration will be stored in DB regardless of the origin to enable better support and debugging.
sc client now checks for server messages during startup sequence: Info messages will include newly released version info, deprecation warnings, and client platform support information. This feature allows the SC team to communicate updates directly to sc client users.
More sc client startup log: The log now shows explicit messages about sc client failure to connect to SC server at tunnel startup time.
Basic authentication for multiple upstream proxies in a PAC file is now supported: Use the
--pac-auth <username:password@host:port> command-line option. The option can be used multiple times for each authenticated host in the PAC file.
#Internal Tooling and Improvements
- Moved sauceproxy-rest library to GitLab, while keeping it mirrored to GitHub for backwards compatibility.
- Changed sauceproxy-rest library status from open-source to private to avoid exposing internal API that is subject to change.
- Added logging of what certificate is returned from REST connection, useful for identifying proxy or firewall that does https inspection.
- Added logging when loading certificates from keychain on OS X.
- Added dev flags to debug SC over SSL+SNI connection without DNS resolution of KGP servername.
#Bug Fixes
You can now use OpenSSL library function flags to control TLS protocol versions used by the connection between KGP client and server.
You can now start tunnels using the --cainfo and --capath command-line options at tunnel startup. These options were previously only used by Doctor.
- If you have your certificates in a non-default system location, you no longer need to use the --no-http-cert-verify workaround.
- If you're using MITM, you no longer need to enable SSL Bumping for connections to Sauce Labs REST.
We've removed Doctor attempts to resolve non-existent maki hosts.
- Attempts to run domain name resolution check for hard-coded defunct maki hosts created confusing errors; removed these checks.
#Known Issues
- When attempting to run two or more instances of sc client on the same host in High Availability mode, second and subsequent instances will fail to start with error due to conflicting SC metrics port assignment.
- As a workaround, use the --metrics-address :0 command-line option in your sc client.
#v4.6.2
Release Date: May 31, 2020.
#Bug Fixes
- Sauce Connect now correctly handles server responses when parsing HEAD requests that use a Transfer-Encoding: chunked header.
#v4.6.1
Release Date: May 18, 2020.
#New Features
We are changing how we manage SSL certificates to improve assurance and compatibility with SSL-inspecting web proxies.
Public Certificates are now supported: We've enabled support for public certificates and deprecated support for private certificates. You'll need to ensure that the operating system on which you're running Sauce Connect has its certificate store set up correctly:
- Linux
- OpenSSL stores CA certificates, which are accessed by the sc client.
- The default OpenSSL certificates directory can be found using openssl version -d.
- Set the SSL_CERT_DIR environment variable to this folder or another containing certificates in PEM format.
- Optional: Set the SSL_CERT_FILE environment variable to a file of certificates in PEM format.
- Certificates will be automatically updated; a manual certificate update can be achieved via the command line update-ca-certificates. (A consolidated example follows this list.)
- Windows
- The sc client loads certificates directly from the CA and ROOT Windows Store
- Windows Update will keep certificates up to date. See the following Microsoft docs for more information:
- macOS and OS X
- Certificates will be read from the macOS Keychain Access automatically.
- Alternatively, if the Homebrew OpenSSL package is installed, you can use the default cert.pem file: --tunnel-cainfo /usr/local/etc/openssl/cert.pem.
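Pulling the Linux items above into one place, a typical setup looks like the following shell session; the directory and bundle paths are examples for a Debian-style system and will differ elsewhere.

openssl version -d                                      # prints OPENSSLDIR, the default certificate location
export SSL_CERT_DIR=/etc/ssl/certs                      # directory of PEM certificates for sc to use
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt # optional: a single PEM bundle
sudo update-ca-certificates                             # refresh the store manually if needed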
OCSP tunnel certificate validation: This feature lets the sc client validate that the tunnel endpoint's public certificate has not been revoked. OCSP relies on Public Key Infrastructure and needs to make additional HTTP requests to OCSP servers associated with the tunnel endpoint’s certificate chain. This is configurable via our new OCSP-specific command-line options and existing flags compatible with OCSP.
Selenium Relay is no longer enabled by default: You can still enable this feature on a specified port using the --se-port option.
App Notarization - macOS Catalina support: Effective with this release, all Sauce Connect Proxy executables will be Apple-notarized to support the more stringent security standards introduced by macOS Catalina.
#Bug Fixes
- Fixed the compatibility of the --pac, --proxy, and --proxy-userpwd flags. You can now use them in the same command line.
- Characters used in tunnel identifier names must now be only ASCII, so that they're captured correctly in the Sauce Labs UI.
- Removed ANSI color codes from the Sauce Connect log to improve readability.
- Fixed WebSockets handling functionality on HTTP/2 servers.
#v4.6.0 and below
v4.6.0 and below, which were supporting Private Certificates, reached end of life and are no longer available for download.
To align with security best practices, Sauce Connect Proxy began supporting certificates signed by Public Certificate Authorities effective with v4.6.1.
To request historical information, please contact our Support Team.
The Security Agent shortcuts appear on the Windows Start menu on the endpoint.
Trend Micro Worry-Free Business Security Agent is listed in the Add/Remove Programs list of the Control Panel on the endpoint.
The Security Agent appears on the Devices screen on the web console and is grouped in the Servers (default) or Desktops (default) group, depending on the operating system type of the endpoint.
If you do not see the Security Agent, run a connection verification task from Administration > Global Settings > System (tab) > Agent Connection Verification.
The following Security Agent services display on Microsoft Management Console:
Trend Micro Security Agent Listener (tmlisten.exe)
Trend Micro Security Agent RealTime Scan (ntrtscan.exe)
Trend Micro Security Agent NT Proxy Service (TmProxy.exe)
This service is only available on Windows 2008.
Trend Micro Security Agent Firewall (TmPfw.exe) if the firewall was enabled during installation
Trend Micro Unauthorized Change Prevention Service (TMBMSRV.exe) if Behavior Monitoring or Device Control was enabled during installation
Trend Micro Common Client Solution Framework (TmCCSF.exe)
If the next screen shows -2, this means the agent can communicate with the server. This also indicates that the problem may be in the server database; it may not have a record of the agent.
Verify that client-server communication exists by using ping and telnet.
If you have limited bandwidth, check if it causes connection timeout between the server and the client.
Check if the \PCCSRV folder on the server has shared privileges and if all users have been granted full control privileges
Verify that the Trend Micro Security Server proxy settings are correct.
The European Institute for Computer Antivirus Research (EICAR) has developed a test virus you can use to test your installation and configuration. This file is an inert text file whose binary pattern is included in the virus pattern file from most antivirus vendors. It is not a virus and does not contain any program code.
You can download the EICAR test virus from the following URL:
Alternatively, you can create your own EICAR test virus by typing the following into a text file, and then naming the file eicar.com:
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
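If you prefer the command line, the same file can be created from the string above; single quotes keep the shell from interpreting the special characters. Scanning or copying the resulting eicar.com file should trigger a detection if the agent is working.

printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.com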
Flush the cache in the cache server and local browser before testing.
Document credits
After registering as a customer in our webshop and purchasing credit, you will receive instructions on how to use our OCR Service
Install the OCRDocs app within your environment.
Business Central Online (SaaS): Search the Extension Marketplace for OCRDocs and install the extension.
Business Central OnPrem: ask your Dynamics partner for help or send an email to [email protected]
Go to the OCRDocs setup page and get started
Our OCRDocs software can be used free of charge within your environment only if you use our OCR Service
The Dutch ICT terms and conditions can be requested by sending an email to [email protected] with the request to send the terms and conditions.
Settings
The plugin’s settings apply to all events. Before you start selling etickets, make sure that all necessary settings have been entered.
By clicking on the tabs on the left you can change various parts of the settings. Then press Save to save the settings.
Payment provider account
Fast Events is integrated with Mollie as payment provider, providing a variety of payment options. As such the plugin is only available for associations/companies residing in a SEPA country. With Mollie there are no fixed recurring costs, you only pay for successful transactions. The prices are very competitive. The transactions of, for example, iDEAL (the Netherlands) cost only € 0.29 excluding VAT. Press the button below to create your free Mollie account.
After you have created your free account, you will receive an email from Mollie with your login details. Confirm the email with the button in the email.
Log in to the Mollie dashboard and go through the wizard to enter all data (Your personal info, chamber of commerce id, bankaccount, VAT number (if applicable), your identification (passport), website where you want use payments and finally the payment methods. During the process you will be asked to transfer 1 cent from your company’s bank account to prove that you are the owner.
Live API-key and Test API-key
Copy the keys from the Mollie Dashboard -> Developers into the
Live API-key and
Test API-key fields.
Refund costs
If you do refunds in the Orders menu. This amount is deducted from the reimbursement.
Error page
The Error page is the page where users end up if:
The customer cancels the payment or if they left the payment-screen open for a long time and it times out. Timeout periods can differ per payment method.
The customer did pay, but the Payment provider has not yet informed the plugin that the payment succeeded. You can provide a shortcode in the page, so the user can check again if the payment came through. See example below.
If you choose
Host email then it is sufficient to fill in the Sender name and Sender email. This setting is the default after installation of the plugin.
But choosing the right Email-server type depends to a large extent on how many emails can be sent per day. Check with you hosting provider how many emails you can send per day (or any other period) and compare this with how many orders (= 1 email) you expect per day. If the expected amount is more than you can send per day you have to go back to your hosting provider to check if you can upgrade your hosting-package with more emails?
Or you can use professional companies that can send your email, such as Amazon SES, Mailgun, Sendgrid, Postmark App, … and many more. If you go down this path, you can choose for the other Email-server type options.
SMTP is always possible for all email providers, but we have a number of native implementation as well, which are the faster counterpart of SMTP as this is a rather ‘chatty’ protocol.
Sender name and email
The name and emailaddress you recipients will see in the received tickets email.
Fast Events can be configured to keep retrying to send new order emails. Checking this option is only wise if you are using SMTP or one of the native APIs. The
Host email solution uses the MTA on the host itself and, if everything is configured correctly, will never return an error. With ‘Host email‘ possible hard-bounces (for example: emailaddress doesn’t exists) or soft-bounces (for example: mailbox full) will be send back to the sender (Send email).
With SMTP or the native API’s there can be errors. For example the host may be (temporary) unreachable, too many request per time-period, … Consult you API provider for other possible errors. In case of errors you have 2 options:
Use the fast_events_email_api_result webhook to inform the WordPress Admin (or another user) that something went wrong
Check the checkbox Email retries and Fast Events will retry sending the email to the SMTP or API-provider again. It will use the
Retry schemeto schedule the next retry.
Retry scheme
The default value is
2,4,8,16,32,64,128, which means the first retry is scheduled after 2 minutes, and then 4 minutes, and so on.
You can define your own scheme.
Consult you SMTP or API provider how it handles hard-bounces and soft-bounces. Usually they provide webhooks to process these bounces.
SMTP settings
- Host email
Check this box if you want use your hosting platform the send emails
The name of the server. Check with your email-provider.
- User
Most of the time this takes the form of an emailadress. Check with your email-provider.
The password of the account. Check with your email-provider.
- Verify peer
Disabling it and you’ll be vulnerable to a Man-in-the-Middle Attack. Incidentally you may disable it if you are fi. testing with an internal SMTP host with a self-signed certificate.
- Port number
Most of the time port
465or
587is used. Check with your email-provider.
- Security protocol
Use
sslor
tls. Check with your email-provider.
Amazon SES API settings
The settings can be found in the Amazon console dashboard. If you still need to create a SES account, make sure you create it in the
EU region as the plugin is only supported in the European SEPA countries if online payments are used.
You can find/create in the Amazon IAM (Identity and Access Management) menu the Access key and Secret key. Make sure the secret key has the right permissions to send email.
Mailgun API settings
The settings can be found in the Mailgun dashboard. If for example your domain is
somedomain.com. The server URL would be:
If you create a new sending domain, make sure you create it in the
EU space of Mailgun as this plugin can only be used by the European SEPA countries. If you don’t host your domain in the European union (USA flag in dashboard), you have to strip the
eu part from the URL. This of course will also works, but it adds some latency to the API request. The ‘mg‘ part depends on your DNS settings.
Mailjet API settings
The settings can be found in the Mailjet dashboard. The URL for the server is:
The Mailjet API key is the combination of the user identifier and API key, separated by a colon. For example
7a8e12:1234a1
Postmark API settings
The settings can be found in the Postmark dashboard. The URL for the server is:
Sendgrid API settings
The settings can be found in the Sendgrid dashboard. The URL for the server is:
Sendinblue API settings
The settings can be found in the Sendinblue dashboard. The URL for the server is:
Sparkpost API settings
The settings can be found in the Sparkpost dashboard. The URL for the server is:
If you create a new sending domain, make sure you create it in the
EU space of Sparkpost as this plugin can only be used by the European SEPA countries. If you don’t host your domain in the European union, you have to strip the
eu part from the URL. This of course will also works, but it adds some latency to the API request.
ReCAPTCHA settings
At RSVP events it can of course occur that sick minds spam you with all kind of different real or bogus emailaddresses, even if you have confirmations enabled. Worse, they may give you a bad reputation, and receiving domains can flag you as spammer. For these cases you can use Google reCAPTCHA. Sign in and setup up your domain; Fast Events only supports v2 at the moment. Once setup, copy the keys to the Site key and Secret key. Switch on the ReCaptcha flag in the Basics tab and the booking screen will have a ReCaptcha.
Settings for instant payments
These settings work together with the Payment app. The app generates a qrcode which the customer can scan with the camera or a banking app (Netherlands and Belgium) to make a payment. The ‘Payment app’ shows immediately if a payment succeeded or not.
Event-id
This is the id of a special event you have to define. The event is just used for reporting purposes. Set the following fields:
- Basic tab
Name“Online payments”. You can of course translate this.
Available start/end datemake the window large enough
Stock0
Redirect after bookingSet a valid URL to thank the user for the payment
Don’t use the other settings
- Type tab
Event typeNo date
Group typeNo group
‘Input tab’: add 2 text-fields
Accountand
Description. Do not translate these fields
Minimum amount
The minimum amount to use for a payment with a qrcode. If you enter a lower value in the app, an error will be returned an no qrcode is generated.
API key
The secret key the Payment app has to use to secure the communication. You can use the button to generate a new secure token. Copy the qrcode and send it as an attachment in an email to the users of the Payment App. Users can than “Share” the qrcode with the Payment App to configure it.
Or they can scan the qrcode to configure the Payment app.
REST API settings
These settings work together with the FE Admin App and the Public API. The App can be used on your mobile (for now only on Android) to view the basic information of events and orders. But you can also resend orders, refund, configure the scan app or payment app, and much more …
API key
The secret key the FE Admin App has to use to secure the communication. You can use the button to generate a new secure token. Copy the qrcode and send it as an attachment in an email to the users of the FE Admin App. Users can than “Share” the qrcode with the FE Admin App to configure it. But if printed or shown, users can also scan it with the camera to configure the app.
Or they can scan the qrcode to configure the FE Admin App.
Action scheduler
Fast Events uses the Action scheduler for delivering webhook information, retries to send emails and timed RSVP events.
Do not make any changes to these parameters until you have a good understanding of how the Action scheduler works and the consequences of the changes. You can find here more information for a detailed explanation. In case you do fully understand it, make the changes and test!
Bear in mind that the Action scheduler can be used by multiple plugins. Make sure to know how these plugins interact with the Action scheduler.
The defaults will do fine for small events, but if you have an event with thousands of orders in a short time frame or scanning requests and webhook consumers for these events, you may consider different settings.
- Purge days
After 30 days completed actions will be removed from the logs. With the Fast Events plugin you could bring this value down to a lower level. Check for the longest retry schedule you use in sending your email, in webhooks or timed RSVP events. But also check other plugins using the Action scheduler, if any.
- Time limit
Most shared hosting environments allow a maximum of 30 seconds execution time for a job. If this is different in your situation you can change this. But don’t forget: long running actions also tie up resources for a long time!
- Batch size
By default if a queue starts running it processes 25 actions. This means with the previous parameter
Time limit, that the system has 30 seconds to process the 25 actions. But the actions issued by Fast Events should finish in a fraction of a second. If you hook up new webhook consumers tell them to return a HTTP 200 response as soon as possible and not do first all kinds of processing and then return a HTTP 200. If you switch on logging for a webhook, you can find the full analysis of the webhook including the
duration. If this is close to 1 second or even bigger, then there is a serious issue.
- Concurrent batches
The default is 1. You could increase this, but before you do make sure your webhook consumers can coop with multiple simultaneous connections. This parameter works together with the next one.
- Additional runners
Because the Action scheduler is only triggered at most once every minute by WP Cron, it rarely happens that multiple concurrent batches are running at the same time. With this parameter you can force Action scheduler to start additional queues at the same time.
Miscellaneous settings
- Custom order statuses
A list of custom statutes separated by a comma. The length of a single status should be 32 characters or less. You can use the custom status fields in the contextmenu of the order-table. Fi. use it as reminder for calling back a customer after an earlier call. For example, the field could be filled with
callback,call finished. You can then easily find the actions by sorting on this field in the order table.
But you can also use it if you occasionally want to sell a book or whatever. Then use, for example, the statuses
processing, shipped. You can then send the customer an email update with the custom filter fast_events_custom_status if the status has changed. A simple solution if you do this occasionally, but if it is more structural then a solution like WooCommerce is recommended.
- Use own domain in Deeplink
In case of a sporting event and if the FE Tracking App is used for passing checkpoints, a link can be clicked in the ‘Thank-you’ page directly after the order, to load the ticket into the App. This link can be added with a shortcode. If this link is clicked on an Android or Apple phone, the FE Tracking App will open and the ticket will be added. If the App is not installed, you will first be asked to install it.
If the link is clicked on a desktop PC, the default display is. This page indicates that the link can only be clicked on a phone.
If this parameter is checked, it is possible to create a page on your own domain with its own content in the local language. For example.
Note
Make sure the page slug is always
add-ticket. | https://docs.fast-events.eu/en/latest/getting-started/settings.html | 2021-10-16T03:43:21 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../_images/Mollie.png', 'Mollie'], dtype=object)] | docs.fast-events.eu |
You can view the ONTAP-defined quality of service (QoS) policy settings that have been applied to a volume or LUN in the Performance Explorer and Workload Analysis IOPS, IOPS/TB, and MB/s charts. The information displayed in the charts is different depending on the type of QoS policy that has been applied to the workload.
A throughput maximum (or
peak) setting defines the maximum throughput that the workload can consume, and thereby limits the impact on competing workloads for system resources. A throughput minimum (or
expected) setting defines the minimum throughput that must be available to the workload so that a critical workload meets minimum throughput targets regardless of demand by competing workloads.
Shared and non-shared QoS policies for IOPS and MB/s use the terms
minimum and
maximum to define the floor and ceiling. Adaptive QoS policies for IOPS/TB, which were introduced in ONTAP 9.3, use the terms
expected and
peak to define the floor and ceiling.
While ONTAP enables you to create these two types of QoS policies, depending on how they are applied to workloads there are three ways that the QoS policy will be displayed in the performance charts.
The following figure shows an example of how the three options are shown in the counter charts.
When a normal QoS policy that has been defined in IOPS appears in the IOPS/TB chart for a workload, ONTAP converts the IOPS value to an IOPS/TB value and Unified Manager displays that policy in the IOPS/TB chart along with the text
QoS, defined in IOPS.
When an adaptive QoS policy that has been defined in IOPS/TB appears in the IOPS chart for a workload, ONTAP converts the IOPS/TB value to an IOPS value and Unified Manager displays that policy in the IOPS chart along with the text
QoS Adaptive - Used, defined in IOPS/TB or
QoS Adaptive - Allocated, defined in IOPS/TB depending on how the peak IOPS allocation setting is configured. When the allocation setting is set to
allocated-space, the peak IOPS is calculated based on the size of the volume. When the allocation setting is set to
used-space, the peak IOPS is calculated based on the amount of data stored in the volume, taking into account storage efficiencies. | https://docs.netapp.com/ocum-99/topic/com.netapp.doc.onc-um-perf-ag/GUID-E4E26D82-3298-4BB0-BC33-6C626D81CDB1.html | 2021-10-16T03:15:12 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netapp.com |
Because the autoshrink functionality shrinks the size of a FlexVol volume, it can also affect when volume Snapshot copies are automatically deleted.
The autoshrink functionality interacts with automatic volume Snapshot copy deletion in the following ways:
This is because the Snapshot reserve is based on a percentage of the volume size (5 percent by default), and that percentage is now based on a smaller volume size. This can cause Snapshot copies to spill out of the reserve and be deleted automatically. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-vsmg/GUID-7D79563B-ACD2-49B4-8A33-262160C73FB9.html | 2021-10-16T03:35:46 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netapp.com |
Wordpress
About
To integrate Pelcro on Wordpress. Log in to your Wordpress account and follow the following steps:
- Click on Appearance >> Theme Editor
- Find the header file of your theme which contains everything between tags
- Copy and paste the script in your integration page between those two tags.
<script>var Pelcro = window.Pelcro || (window.Pelcro = {}); Pelcro.</script>
Updated over 1 year ago
Did this page help you? | https://docs.pelcro.com/docs/wordpress | 2021-10-16T03:31:26 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.pelcro.com |
Connecting NetSuite
Step 1: In NetSuite, Enable NetSuite Web Services
To enable the NetSuite Web Services feature
Log in to NetSuite as an administrator.
Select Setup > Company > Enable Features.
Select the SuiteCloud tab.
Ensure that the SuiteTalk (Web Services) check box is selected.
Select Save.
Step 2: In NetSuite, Perform the Token-Based Authentication Setup Tasks
Enable token-based authentication.
Create a role that permits logging in with token-based authentication.
Assign the DataDock connection user to the token-based authentication role:
For instructions, see Assign users to a token-based authentication role.
Create a NetSuite application to enable DataDock to use token-based authentication with NetSuite.
Recommended: Name the application "DataDock".
Recommended: Set the user name to something like "DataDock admin".
Take note of the Consumer Key, Consumer Secret, Token ID, and Token Secret. They are required to enable the DataDock NetSuite connection to use token-based authentication.
Warning: The Token ID and Token Secret are displayed only once. After you leave the NetSuite page that displays them, they can never be retrieved from NetSuite. Store the values in a very safe place, and treat them as securely as passwords.
Step 3: In NetSuite, Obtain the Account ID
To find the Account ID, log in to NetSuite as an administrator, and select Setup > Integration > Web Services Preferences > Account ID.
Step 4: In DataDock, Enter connection information
You will need to enter the following fields which you should have values for from performing the previous steps:
User Name
Account ID
Consumer Key
Consumer Secret
Token ID
Token Secret | https://docs.bossinsights.com/data-platform/Connecting-NetSuite.961904642.html | 2021-10-16T03:31:28 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.bossinsights.com |
Overview of Apache Hive Installation and Upgrade in CDH
Installing:
Hive comes along with the base CDH installation and does not need to be installed manually. Use Cloudera Manager to enable or disable the Hive service. If you disable the Hive service, the component always remains present on the cluster. For details on installing CDH with Cloudera Manager, which installs Hive, see Installation Using Cloudera Manager Parcels or Packages.
Upgrading:
Use Cloudera Manager to upgrade CDH and all of its components, including Hive. For details, see Upgrading the CDH Cluster. | https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/hive_install_upgrade_intro.html | 2021-10-16T03:23:12 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.cloudera.com |
Introduction
Delta Lake is an open source project that enables building a Lakehouse architecture on top of data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unifies streaming and batch data processing on top of existing data lakes..
Delta Engine optimizations make Delta Lake operations highly performant, supporting a variety of workloads ranging from large-scale ETL processing to ad-hoc, interactive queries. For information on Delta Engine, see Delta Engine.
Quickstart
The Delta Lake quickstart provides an overview of the basics of working with Delta Lake. The quickstart shows how to build pipeline that reads JSON data into a Delta table, modify the table, read the table, display table history, and optimize the table.
For Databricks notebooks that demonstrate these features, see Introductory notebooks.
To try out Delta Lake, see Sign up for Databricks.
Resources
- For answers to frequently asked questions, see Frequently asked questions (FAQ).
- For reference information on Delta Lake SQL commands, see Delta Lake statements.
- For further resources, including blog posts, talks, and examples, see Delta Lake resources. | https://docs.databricks.com/delta/delta-intro.html | 2021-10-16T02:08:03 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.databricks.com |
SSL/TLS decryption
Encrypting sensitive data is a critical part of protecting your network assets, however encryption also decreases visibility into the network for cybersecurity and forensics. Because encrypted traffic is an increasingly common vector for malicious activity, we recommend that you configure the ExtraHop system to decrypt your critical SSL/TLS traffic to enable detections that can identify suspicious behaviors and potential attacks.
- Your SSL/TLS server traffic must be encrypted with a supported cipher suite.
- You can only decrypt traffic for the services that you provide and control on your network.
Encryption types
When a client initiates a connection to a server over SSL/TLS, a series of handshake exchanges identify the cipher suite that includes the set of algorithms that encrypts the data and authenticates the data integrity.
You can configure the ExtraHop system to decrypt SSL/TLS traffic based on the type of supported cipher suite that the network connection is secured with.
Session key forwarding
When session key forwarding is enabled on the ExtraHop system, a light-weight agent can be installed on the server to forward session keys to the system and the system is able to decrypt the related SSL/TLS traffic.
Perfect Forward Secrecy (PFS) cipher suites mutually derive a session key through a series of exchanges between the client and server—only the client and server know the session key, which is never sent over the wire network. Even if the long-term server key is compromised, the ephemeral session key remains secure.
Certificates and keys
When a certificate and private key for supported cipher suites are uploaded to an ExtraHop system, the system is able to decrypt the related SSL/TLS traffic.
Cipher suites for RSA can be decrypted with a server certificate and private key. When a client connects to a server over SSL/TLS, the server responds with a certificate that validates its identity and shares the public key. The client generates and encrypts a session key and sends the encrypted session key to the server. The client validates that the certificate is signed by a trusted certificate authority and that the server matches the requested domain.
Because the encrypted session key is sent over the wire network during the handshake and the private key is held long term on the server, anyone with access to the traffic, the server certificate, and the private key can derive the session key and decrypt the data. Teams that are responsible for encrypting their traffic might be hesitant to share private keys with other devices on the network to minimize risk.
Best practices
Here are some best practices you should consider when implementing SSL/TLS encryption.
- Turn off SSLv2 to reduce security issues at the protocol level.
- Turn off SSLv3, unless it is required for compatibility with older clients.
- Turn off SSL compression to avoid the CRIME security vulnerability.
- Turn off session tickets unless you are familiar with the risks that might weaken Perfect Forward Secrecy.
- Configure the server to select the cipher suite in order of the server preference.
- Note that session key forwarding is the only option for traffic encrypted with TLS 1.3.
Which traffic to decrypt
The traffic you want to inspect is likely to contain sensitive data, so the ExtraHop system does not write decrypted payload data to disk. The ExtraHop system analyzes the traffic in real-time and then discards the session key unless a Trace appliance is deployed for continuous packet capture. Optionally, the system can be configured to store the session key with the packets, which is a safer approach than sharing the long-term private key with analysts.
Here are some examples of the type of data you should consider decrypting with the ExtraHop system:
- Traffic that is valuable to inspect for security use cases, such as HTTP and database traffic. Decrypting HTTP can surface web application attacks such as SQL injection and Cross-site Scripting, which are among the top OWASP list of common attacks and critical web CVE exploits, such as F5 BIG-IP CVE and Citrix ADC CVE. Decrypted database traffic surfaces suspicious behaviors such as enumeration and unusual table access.
- Traffic where you might need forensic auditing to meet compliance regulations or to investigate incidents on critical systems—such as your customer databases, systems that house valuable intellectual property, or servers that provide critical network services.
You can also identify the type of encrypted traffic for a specific device discovered by the ExtraHop system. Find the device in the system and navigate to the device detail page.
In the left pane, click SSL in the Server Activity section. In the center pane, scroll to the Top Cipher Suites chart.
How to decrypt your SSL traffic
How you decrypt SSL traffic depends on the cipher suite and your server implementation.
If your SSL traffic is encrypted with PFS cipher suites, you can install the ExtraHop session key forwarder software on each server that has the SSL traffic that you want to decrypt. The session key is forwarded to the ExtraHop system and the traffic can be decrypted. Note that your servers must support the session key forwarder software.
- Install the ExtraHop session key forwarder on a Windows server
- Install the ExtraHop session key forwarder on a Linux server
If you have an F5 load balancer, you can share session keys through the balancer and avoid installing the session key forwarding software on each server.
If your SSL traffic is encrypted with RSA cipher suites, you can still install session key forwarder software on your servers (recommended). Alternatively, you can upload the certificate and private key to the ExtraHop system
We recommend that you only decrypt the traffic that you need. You can configure the ExtraHop system to decrypt only specific protocols and map protocol traffic to non-standard ports.
Decrypting packets for forensic audits
If you have a Trace appliance or other packetstore configured, you can store session keys on the Trace appliance and you can download session keys with packet captures so that you can decrypt the packets in a packet analysis tool such as Wireshark. These options enable you to securely decrypt traffic without sharing long-term private keys with analysts.
The system only stores session keys for packets on disk—as packets are overwritten, the related stored session keys are deleted. Only session keys for decrypted traffic are sent to the Trace appliance for storage. The ExtraHop system sends the session key with the associated flow information to the Trace appliance. If a user has packets and session key privileges, the session key is provided when there is a matching flow in the queried time range. Extraneous session keys are not stored, and there is no limit to the number of session keys that the ExtraHop system can receive.
We recommend that you exercise caution when granting privileges to ExtraHop system users. You can specify the privileges that enable users to view and download packets or to view and download packets and stored session keys. Stored session keys should only be available to users who should have access to sensitive decrypted traffic. While the ExtraHop system does not write decrypted payload data to disk, access to session keys enables decryption of the related traffic. To ensure end to end security, the session keys are encrypted when moving between appliances as well as when the keys are stored on disk.
Thank you for your feedback. Can we contact you to ask follow up questions? | https://docs.extrahop.com/8.2/ssl-decryption-concepts/ | 2021-10-16T02:48:25 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['/images/8.2/ssl-pfs.png', None], dtype=object)
array(['/images/8.2/ssl-rsa.png', None], dtype=object)
array(['/images/8.2/top-ssl-cipher-suites-dark.png', None], dtype=object)] | docs.extrahop.com |
What is an HTA file?
The HTMLA stands for Hypertext Markup Language Application is a program that is compatible with Microsoft Windows. The source code of this program includes more than one scripting language such as HTML and JavaScript. For the user interface, an HTML Application is preferred while to fulfill the requirement of program logic any other scripting language is used.
An HTML Application is independent of the security model of the internet browser and runs as a fully trusted application. The extension used for files regarding these applications is HTA. These applications include features of HTML along with the properties of other scripting languages.
Brief History
The HTA was first introduced in 1999 by Microsoft along with the release of Internet Explorer 5. It was compatible with Internet Explorer and so could be executed on Windows operating system only. This technology was patented in 2003. The HTA files are executed as similar to any other .exe files. The HTA files are compatible with today’s updated version of Windows 11 as well.
Technichal Specification
HTAs have the same format as any other HTML page comprises of, while some attributes are used for controlling the styles of borders or icons of programs. Moreover, arguments are provided for the launch of HTA. These applications can be executed using a program named mshta.exe. It can be accessed by simply double-clicking on the file. These programs automatically run along with the Internet explorer. Besides other specifications, these are not independent of the Trident engine browser but are independent of Internet Explorer. It means that these can be executed without using Internet Explorer.
The tags are used for the sake of customization of the appearance of these applications. The conversion from Microsoft HTML application to HTA format is easier i.e you only need to change the extension. As we know that these applications are fully trusted so these comprise more features and advantages as compared to simple HTML files. Text editors can be used to create HTA. These editors can be acquired by Microsoft or any other trusted source.
HTA File Format Example
<HTML> <HEAD> <HTA:APPLICATION <TITLE>HTA - Hello World</TITLE> </HEAD> <BODY> <H2>HTA - Hello World</H2> </BODY> </HTML> | https://docs.fileformat.com/programming/hta/ | 2021-10-16T03:35:40 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.fileformat.com |
Conductor Groups¶
Overview¶
Large scale operators tend to have needs that involve creating well defined and delinated resources. In some cases, these systems may reside close by or in far away locations. Reasoning may be simple or complex, and yet is only known to the deployer and operator of the infrastructure.
A common case is the need for delineated high availability domains where it would be much more efficient to manage a datacenter in Antarctica with a conductor in Antarctica, as opposed to a conductor in New York City.
How it works¶
Starting in ironic 11.1, each node has a
conductor_group field which
influences how the ironic conductor calculates (and thus allocates)
baremetal nodes under ironic’s management. This calculation is performed
independently by each operating conductor and as such if a conductor has
a
[conductor]conductor_group configuration option defined in its
ironic.conf configuration file, the conductor will then be limited to
only managing nodes with a matching
conductor_group string.
Note
Any conductor without a
[conductor]conductor_group setting will
only manage baremetal nodes without a
conductor_group value set upon
node creation. If no such conductor is present when conductor groups are
configured, node creation will fail unless a
conductor_group is
specified upon node creation.
Warning
Nodes without a
conductor_group setting can only be managed when a
conductor exists that does not have a
[conductor]conductor_group
defined. If all conductors have been migrated to use a conductor group,
such nodes are effectively “orphaned”.
How to use¶
A conductor group value may be any case insensitive string up to 255
characters long which matches the
^[a-zA-Z0-9_\-\.]*$ regular
expression.
Set the
[conductor]conductor_groupoption in ironic.conf on one or more, but not all conductors:
[conductor] conductor_group = OperatorDefinedString
Restart the ironic-conductor service.
Set the conductor group on one or more nodes:
baremetal node set \ --conductor-group "OperatorDefinedString" <uuid>
As desired and as needed, remaining conductors can be updated with the first two steps. Please be mindful of the constraints covered earlier in the document related to ability to manage nodes. | https://docs.openstack.org/ironic/latest/admin/conductor-groups.html | 2021-10-16T03:43:56 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.openstack.org |
Creating Environments
Before you begin¶
- Read about how Environments work in Platformer.
- A Kubernetes cluster connected to the Platformer Console. Read more on connecting a Kubernetes cluster here.
- One of the required permissions below (Platformer Console IAM).
- Organization Admin
- Operational Admin (Project-level)
- Environment Admin (Project-level)
- Environment Creator (Project-level)
Creating a new Environment¶
- Go to Environments on the main navigation panel.
- Click the Create button in the Environments page.
Fill out the required values.
- Environment Name*
- Description*
Default Namespace*
You can control what underlying namespace for this Environment will be named by filling in Kubernetes Metadata > Namespace. Make sure this environment namespace does not collide with another Platformer managed environment namespace.
Hint
You can create any number of namespaces under this Environment later if you want your applications in the Environment to be further isolated with a namespace-per-application or namespace-per-application-group configuration.
Clusters - Select the Kubernetes Clusters you want this Environment to be associated with. (When applications are deployed to this environment, they will be synchronized across all associated Clusters).
Click Save and your environment will be created in a few seconds. | https://docs.platformer.com/user-guides/environments/02-creating-environments/ | 2021-10-16T03:13:49 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.platformer.com |
If you've run out of impressions for the calendar month, you can follow the following steps below to get more impressions and send campaigns again:
Upgrade your plan yourself via the PushOwl dashboard (for free and business plan) or speak to our support team via the in-app chat.
Speak with your account or strategy manager to upgrade your Enterprise account to the next tier
Get one-time impressions using the Impression Top-Up tool. If you aren't sure how many impressions you would need, you can speak to our support team via the in-app chat.
Your Impression credits will reset with the first day of a new calendar month (UTC time). Impressions represent successfully delivered push notifications.
What happens to my scheduled campaigns when I run out of impressions?
If you've run out of impressions, any campaign scheduled will be automatically put on pause.
Once you've added more impressions, you can click on the 3-dot icon/ kebab menu next to the campaign and click on 'Resend' to send the paused campaign.
| https://docs.pushowl.com/en/articles/4889999-my-campaign-is-paused-because-i-ve-run-out-of-impressions | 2021-10-16T01:46:13 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['https://downloads.intercomcdn.com/i/o/301118479/612eb91b47224f720ffdf7a1/Group+2.png',
None], dtype=object) ] | docs.pushowl.com |
Demo site¶
To create a new site on Wagtail we recommend the
wagtail start command in Getting started. We also have a demo site, The Wagtail Bakery, which contains example page types and models. We recommend you use the demo site for testing during development of Wagtail itself.
The source code and installation instructions can be found at | https://docs.wagtail.io/en/latest/getting_started/demo_site.html | 2021-10-16T01:49:29 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.wagtail.io |
Exporting a model for deployment¶
After you train a STT model, your model will be stored on disk as a checkpoint file. Model checkpoints are useful for resuming training at a later date, but they are not the correct format for deploying a model into production. The model format for deployment is a TFLite file.
This document explains how to export model checkpoints as a TFLite file.
How to export a model¶
You can export STT model checkpoints for deployment by using the export script and the
--export_dir flag.
$ python3 -m coqui_stt_training.export \ --checkpoint_dir path/to/existing/model/checkpoints \ --export_dir where/to/export/model | https://stt.readthedocs.io/en/latest/EXPORTING_MODELS.html | 2021-10-16T03:16:52 | CC-MAIN-2021-43 | 1634323583408.93 | [] | stt.readthedocs.io |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
Apigee is a multitenant, self-service, cloud-based platform that runs in a fully redundant (live/live) configuration across multiple datacenters in multiple regions of the globe. Apigee uses Google Cloud Platform (GCP) and Amazon Web Services (AWS) for our cloud-based platform. As part of the services we build on GCP and AWS, we use multiple data centers within each region and service live traffic for our customers across these multiple data centers. We do not have a "live" data center and a "standby" (or "secondary" or "failover") data center. We have two (or more) data centers constantly and simultaneously servicing customer traffic in each region globally.
BCP/DR plan
Apigee Business Continuity Planning and Disaster Recovery (BCP/DR) is a platform-wide plan and does not contain detailed tasks for individual customers. Rather, the platform is configured to process customer data requests regardless of disruptions and outages. The data will continue to flow even if an entire data center is offline. If an entire region were to go offline, a single-region customer could experience an outage of API processing services. For customers looking for more than "in-region" redundant services, Apigee offers a globally redundant level of redundant data centers where traffic can be serviced in multiple regions or countries so that if an entire region goes offline, the data still flows.
Single-region customer services are not automatically transferred to another region because of possible geographic restrictions on data processing and access. Apigee hosts services for customers in the region identified by the customer. Because there may be specific regulations or customer commitments to their users on geographic locations of data, Apigee will not automatically move services to an alternate region, as this could potentially compromise Apigee's commitments to its customers or Apigee customers' commitments to their customers.
Apigee does not share the full BCP/DR plan with any individual customer, as it contains Apigee internal sensitive information and references to our customers. Our privacy policy prevents sharing the platform BCP/DR plan with individual customers that could potentially expose other customer names. We offer this same level of privacy to each customer.
BCP/DR Management
Apigee Information Security team is responsible for the oversight of the Business Resiliency program while a rotating Incident Commander is responsible for management and resolution of all incidents. The Incident Commander has operational and engineering personnel on call at all times along with playbooks for all actions that may need to be taken.
BCP/DR Testing
Apigee performs operational processes that support BCP/DR testing of the platform on a more frequent cadence than our full annual BCP/DR tabletop testing. Each month Apigee performs load swings from our live/live environment while we perform updates to the systems running the service. This process involves taking down one entire data center's worth of systems while the load is handled by the peer datacenter. During this process, after any updates are performed, the first data center is brought back up and services are run live/live again to verify that no issues were introduced. Then the peer datacenter is brought down for the same updates and then brought back online again. Apigee uses tools and techniques to drain traffic and send a small percentage of traffic to recently updated services to check for any issues or errors before going back to full load processing.
This consistent operational process exceeds industry-standard bi-annual resiliency "testing" of our service by making it an operational task that occurs more frequently.
In addition to the operational processes described above, Apigee also conducts tabletop BCP/DR exercises at least once annually where engineering and operations team members are brought together with other Apigee business units to logically simulate and walk through issues, responses, and the impact of decisions made in a mock disaster scenario. This provides additional training and experience for our personnel on our larger BCP/DR plans for the enterprise as a whole in addition to the service itself.
The BCP/DR testing done by Apigee does not use "failover exercises" or "secondary locations" because all of that is built into the running system.
Apigee does maintain Playbooks for use by all operational and engineering teams. These playbooks are reviewed and updated at least annually and used in all of our BCP/DR testing and training exercises.
Apigee does not share BCP/DR test reports with individual customers, because these tests are done at a platform level, not a customer level. We share the results of our operational tasks and annual tabletop exercise test reports with our third-party auditors, and these form the basis for the auditor's review of our compliance with PCI, HIPAA, contractual, and other requirements.
Customer BCP/DR tests
Customers are encouraged to have their own DR plans incorporate Apigee Edge services. Customer can and should consider how Apigee can redirect traffic as needed for customers to maintain end-user services even during a customer data center outage or other disaster event. However, this level of testing is outside the scope of the Apigee DR plan. We encourage customers to perform BCP/DR testing on their own applications and include Apigee Edge in the test.
RTO/RPO
Apigee does not have recovery point and recovery time objectives (RPO/RTO) for our customers or in our contracts related to BCP/DR activities. Our SLAs are the cloud equivalent of the RTO/RPO data points. Because Apigee is a redundant cloud based service with both management and runtime services being architected with redundant live services, RTO and RPO can both be seen as ‘real-time’. Single region customers receive a minimum of redundant services in different datacenters with the same region. Customers desiring higher levels of redundancy can opt for multi-region services.
Pandemic plan
Apigee includes a pandemic plan as part of our overall BCP/DR plan and processes. Because Apigee is a cloud hosted service, there is no requirement for individuals to manage the data center. For business operations such as support, Apigee operates a 24x7 global support team across multiple offices and remote locations. If a pandemic in one area of the globe impacts one of our support locations, personnel in other offices will be alerted and cover the shifts normally handled by the impacted office. For other business services such as sales, the workforce is globally distributed. All teams at Apigee are equipped to work remotely if needed. Tools used within Apigee are cloud-based and lend themselves naturally to a pandemic response plan.
Updates
Apigee reviews and updates our BCP/DR plan at least annually. Information gathered from incidents, product changes, industry standards, risk analysis activities, and BCP/DB testing are used to update the plan.
Business Impact Analysis and Risk Assessments
Google conducts a business impact analysis and a Risk assessment annually. Results of the BIA and the RA are prioritized and documented in the issue tracking system. | https://docs.apigee.com/api-platform/faq/business-continuity-planning-disaster-recovery | 2021-10-16T01:56:46 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.apigee.com |
Boss Data and Boss Embedded - Frequently Asked Questions
In this page, we answer frequently asked questions from our prospects, current customers, and brokers about our platform.
This FAQ page is for Boss Data and Boss Embedded.
For FAQs for Customers, go to Frequently Asked Questions
For FAQs on Boss Cares (PPP), go to Origination FAQ
How does Boss Insights use customer data?
We use your customer’s data to provide insights to both you and your customer. We do not and will not provide insights to any third party.
Boss Insights may use anonymized data to train our data models. These data models will be used only to provide additional or new insights in the future.
Boss Insights does not and will not approach your customers and will not provide any lending/funding services.
Under Canadian federal regulations, we are not allowed to provide support direct to your customers - one of your officers or team members should serve as liaison in case of issues on your customer’s side.
Learn more about our data privacy policy at Privacy | Boss Insights.
Are there any restrictions to the data provided by customers?
We are beholden to whatever data your customer provides.
You will have access to your customer’s data both via our portals (Boss Decision and Boss Monitor) and/or via API (Boss Data).
You will have access only to data they provide. If they have not connected their accounts, it means you will not have access to their data. You can ask your customer to provide more data via Secure Mail (included in Boss Decision, Boss Monitor, and Boss CARES-PPP).
Are data coming from different providers easy to read?
Yes. We’ve built our integrations in a way that the data output is already normalized and easy to read by both humans and external systems connected to our API.
While different providers have different category definitions, we have provided a normalization layer where categories are funneled into one common data model so that the information is easy to understand regardless of the provider. This solves problems including when one provider identifies costs as “expense” and another identifies it as “Expenses” (plural, with an “s”), or “Costs”.
We understand that you may also want to data to be normalized in a different way - and that is why Boss Decision and Boss Monitor has a Custom Chart of Accounts Mapping feature.
Do you store customer data at rest in your server?
Boss Insights stores data at rest on our servers in Canada or United States (depending on your data residency).
We store data at rest on our servers to make it easier for you and your customers to access their insights.
In case an API connection gets broken, you and your customer will still be able to access data. The date the data is last synchronized can be seen as the “Last Reconciliation Point” in all insights that needs time-based data.
How long is customer data stored in your server?
Data is stored in Boss Insights' server until you erase it or at the end of your relationship with Boss Insights.
How long does it take to synchronize data?
Each data type is synchronized separately for each customer.
Once a customer connects their cloud accounts, you should see their data in real-time.
For customers that upload data from non-API-supported applications (using our OCR, such as Payroll and Tax information from IRS), you will instantly see their data after upload, but will not be able to monitor in real-time since their data provider is not API-supported.
What happens when API disconnects?
If there is a configuration issue, or the customer disconnects their cloud connections, you and the customer will still have access to the data that was synchronized before the disconnection.
How will our customer provide access to the account in Boss Embed?
Boss insights will provide a snippet of JavaScript that you can insert into your portal. It will provide you with an interface to present the list so your client can provide authorization for connectivity. This info will be provided during your Boss Data (API) Onboarding.
What is the format of Boss Data API?
Boss Data API comes in JSON and HTML depending on your preference. You may also access and download the raw data via web.
What is the process for getting the API integrated?
Boss Insights will send a Service Order and Master Services Agreement. Once signed, there will be onboarding which will include a sandbox account and production account.
While Boss Insights provides technical documentation to get started right away at, total onboarding time may vary between 1-3 weeks, depending on the brevity and complexity of your current system, including migration of your customers from another LOS or CRM to Boss Data.
Is the data real time or is there a manual task required to get the most current information?
Boss Insights APIs provides the most current information and there is no manual task or pull required to get the real time information.
What support does Boss Insights provide for integrating Boss Data?
Boss Insights will schedule an onboarding session with you and will provide support documentation for reference. You may also find our Documentation at. You may also submit tickets to.
How do you gather tax information?
Since the Internal Revenue Service (IRS, U.S.) and the Canada Revenue Agency (CRA) does not allow connections via API, users have the option to install our browser plugin to get their tax information. Our extension users Optical Character Recognition (OCR) to scan the webpage and identify related information. You may check our help article on How to Import Data to see how our extension works.
Unlike Payroll Information, tax information can only be gathered using our extension. Webpage uploads (Option 3) will not work. For more information, visit How to Import Data.
Can some information be excluded from the extension?
Yes. We can exclude information such as SSN from the data output. Please reach out to us while onboarding or file a ticket request at.
How do you gather payroll information?
Like tax information, most payroll providers do not allow connections via API - so users have the option to install our browser plugin or to save the webpage as a file and upload it to our portal to get their payroll information. You may check our help article on How to Import Data to see how our extension and payroll data upload works.
What do end users see when they connect their cloud applications?
What shows up on your customer’s screen when they connect to cloud applications depends on the cloud application they are connecting to:
Some applications will show Boss Insights and Boss Insights' logo.
Some applications may be able to show your organisation’s brand name.
If you want to ensure that your customers will see only your logo, and that they will not see our (Boss Insights) logo, you will need to setup an “app” for every data provider.
Remember: We have 1,000+ app connections on our portal.
Do I need to setup my own API Keys for cloud applications?
Some third-party applications may require you to setup your own API Keys and “apps”, whether you are using Boss Data and Boss Embedded to create your own applications or even just for Boss Decision and Boss Monitor. This requirement depends on the third-party application’s terms and conditions.
For applications that require your own API Keys and “apps”, you are not required to create your own API Keys on your sandbox. You will only be required when using your production environment.
If you need help in acquiring your own API Keys, please refer to or submit a support ticket at. | https://docs.bossinsights.com/data-platform/Boss-Data-and-Boss-Embedded---Frequently-Asked-Questions.984809473.html | 2021-10-16T03:35:44 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.bossinsights.com |
)
4. Select the Bill From Date to the date from which the first invoice should be created
5. Click on Add Addresses to start adding the address
6. Click on Add Metadata to start adding relevant metadata
- Select the metadata type (String, Numeric or Date)
- Add a name and value
7. Click on Add Email address to start adding a email address
- Fill in the relevant Key
- Fill in the actual email address
- Fill in the Display Name
8. Click Save & Return to save the customer | https://docs.cloudbilling.nl/docs/step-by-step-guides/adding-a-customer.html | 2021-10-16T02:20:56 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.cloudbilling.nl |
Couchbase Web Console
The Couchbase Web Console is the main tool for managing the Couchbase environment.
The interface for Couchbase Server’s Web Console offers a new, modern look on usability in a browser-based application. Re-designed for intelligence, comfort, and speed, you will see a clean new look and experience a streamlined interface to Couchbase’s.
When you start the Couchbase Web Console, by default the introductory Dashboard page is displayed.
The Dashboard screen contains three sections: Services, Data Services, and Buckets.
Data Services
The Data Services section provides information on the memory and disk usage information of your cluster.
Buckets
The Buckets section provides the following two graphs:
For more details, see Bucket setup. | https://docs.couchbase.com/server/5.5/admin/ui-intro.html | 2021-10-16T02:47:27 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['_images/web-console.png', 'web console'], dtype=object)
array(['_images/ui-cluster.png', 'ui cluster'], dtype=object)
array(['_images/web-console-cluster-overview-buckets.png',
'web console cluster overview buckets'], dtype=object)] | docs.couchbase.com |
Date: Wed, 23 Nov 1994 02:15:05 -0800 From: "Jordan K. Hubbard" <jkh> To: CVS-commiters, cvs-ports Subject: cvs commit: ports/x11 Makefile Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
jkh 94/11/23 02:15:02 Modified: x11 Makefile Log: Remove ghostview from the Makefile - it's broken with some mysterious missing header file (ps.h) and I haven't time to try and figure out why.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=78734+0+/usr/local/www/mailindex/archive/1994/cvs-ports/19941120.cvs-ports | 2021-10-16T04:01:38 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.freebsd.org |
Getting Started
Building and launching your apps with Appollo
Create a project
A project is the group of apps you plan to launch (e.g Shopify/WooCommerce/Wix) together.
Within the Appollo dashboard, the first thing you will do is create a project:
Once you click the "Create project" button, a modal appears that asks for your App's name, your Webhook Url and your Redirect Url. Fill this in and click "Create project" to start your first project.
Complete app details
As soon as you click "Create project" you should see a long form. This form contains all the details we'll need to make sure your app gets published everywhere.
Publish your app
Once your form is complete, you can publish your app by clicking the "Publish" button at the bottom of the form.
Your app will now be in review by our internal team and you should hear back within 48 hours if there are any updates your should make before successfully submitting to all app stores.
Updated 4 months ago | https://docs.tryappollo.com/docs/getting-started | 2021-10-16T03:44:10 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['https://files.readme.io/a4085c8-Homepage.png', 'Homepage.png'],
dtype=object)
array(['https://files.readme.io/a4085c8-Homepage.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/51766f4-Create_project_modal.png',
'Create project modal.png'], dtype=object)
array(['https://files.readme.io/51766f4-Create_project_modal.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/89751d5-Opened_project.png',
'Opened project.png'], dtype=object)
array(['https://files.readme.io/89751d5-Opened_project.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/12a76ef-Published_project.png',
'Published project.png'], dtype=object)
array(['https://files.readme.io/12a76ef-Published_project.png',
'Click to close...'], dtype=object) ] | docs.tryappollo.com |
Full Text Search: Fundamentals
Full Text Search (FTS) lets you create, manage, and query specially purposed indexes, defined on JSON documents within a Couchbase bucket.
Features of Full Text Search
Full Text Search provides extensive capabilities for natural-language querying. These.
Full Text Search is powered by Bleve, an open source search and indexing library written in Go. Full Text Search uses Bleve for the indexing of documents, and also makes available Bleve’s extensive range of query types. These include:
Match, Match Phrase, Doc ID, and Prefix queries
Conjunction, Disjunction, and Boolean field queries
Numeric Range and Date Range queries
Geospatial queries
Query String queries, which employ a special syntax to express the details of each query (see Query String Query for information)
Full Text Search includes pre-built text analyzers for the following languages: Arabic, CJK characters (Chinese, Japanese, and Korean), English, French, Hindi, Italian, Kurdish, Persian, and Portuguese. Additional languages have been added to Couchbase Server 5.5.
Authorization for Full Text Search. | https://docs.couchbase.com/server/5.5/fts/full-text-intro.html | 2021-10-16T01:51:15 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.couchbase.com |
What is a DB3 File?
The DB3 file is a database file created by the SQLite software which is a lightweight, self-contained database program that creates databases using plain files; contains the database anatomy as well as data records; commonly used for retrieving or storing structured data using SQL. These files can be used in smart devices or where the record keeping or other data management required but on a low space environment.
DB3 File Format
The DB3 file format is associated with the SQLite relational database management system (RDBMS), a popular choice for embedded databases. The SQLite 3 specification does not define any specific file extension; databases may use extensions such as .db, .db3, or .sqlite.
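Because a DB3 file is an ordinary SQLite 3 database, it can be opened with any SQLite client or library. A minimal Python sketch follows; the file name and its contents are assumptions for illustration:

import sqlite3

# "example.db3" is a placeholder path; any SQLite 3 database file works.
conn = sqlite3.connect("example.db3")
cur = conn.cursor()

# The page size is a power of two between 512 and 65536 (see below).
cur.execute("PRAGMA page_size")
print("page size:", cur.fetchone()[0])

# List the tables stored in the file.
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
print("tables:", [row[0] for row in cur.fetchall()])

conn.close()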
The Database Specification
The DB3 file format is commonly understood as the SQLite version 3 database file format, which has been the publicly documented native format of the SQLite database engine since June 2004. The format is cross-platform: files are transferable between big-endian and little-endian architectures and between 32-bit and 64-bit systems. These features make DB3 a popular choice as an application file format. The main SQLite 3 database (DB3) file consists of one or more pages. All pages in the same database are the same size, and the page size in bytes is a power of two between 512 and 65536 inclusive.
Source: https://docs.fileformat.com/database/db3/
Auto Territory Creation
The user can create territories automatically, based on a shape/Excel file and the count of records plotted on the map, to get the desired number of balanced territories. To do this, the user can go to Territory Management and use the 'By File' option to upload a shape file or an Excel file and plot it on the map. The user can also choose the 'By Overlay' option to plot a saved shape/Excel file on the map.
The user can go to the 'Plot Records Card' and select the required 'DataSource' and the respective 'View' to plot the records on the map. These records will be distributed among the territories that are to be created.
Using the 'Select' or 'Multi-select' options from the 'Alignment Tool', the user can choose the required regions and right-click to select 'Create Territory', which opens the window. Here, the user can select the 'Create Multiple Territories' option and enter the number of territories they want to create out of the selected regions on the map. For instance, suppose we enter 4.
Note: Only those records which lie within the regions selected will be considered while creating multiple balanced territories.
This will calculate the number of records and automatically divide the selected regions to get 4 balanced territories. By default, the territories will be named Territory_0, Territory_1, Territory_2, and so on. The user can change the names as required.
The count of records in the automatically created territories might vary slightly.
The user can also assign the required user as a manager for the processed territories by clicking the 'Select manager' icon.
Users who have already been assigned as a manager for another territory will not be visible in the list of users to be selected as a manager of the territory.
The user can also set the colors for the territories using the 'Fill color' option.
The user can further choose to take an action on the territories processed on the map by using 'Actions'. There are three Actions available:
Create:
If the user is satisfied with the processed territories, they can choose this option to create them within the CRM.
Save:
If the user needs to work further on the processed territories, to make changes or to get a colleague's opinion on them, they can use this option to save the processed territories as Draft Territories within the CRM.
Realign:
The user can rework the processed territories using this option, which allows them to revise the number of territories processed.
Source: https://docs.maplytics.com/features/territory-management/auto-territory-creation
Improvements
[FISH-31] - HTTP/2 Support for JDK Native ALPN APIs
[FISH-148] - Support multirelease JARs in WARs
[FISH-151] - Implement MicroProfile JWT-Auth 1.1.1
[FISH-171] - Support for multi HTTPAuthenticationMechanism
[FISH-185] - Add set-network-listener-security-configuration Command
[FISH-186] - Admin Console Integration for Certificate Management
[FISH-187] - Make domain_name Parameter domainname in Cert Management Commands
[FISH-189] - Add Warning when Adding to Certificate to the Keystore
[FISH-191] - Add Additional Help Text to Cert Management Commands
[FISH-192] - Add --reload Parameter to Certificate Management Commands
[FISH-205] - Allow dynamic reconfiguration of log levels for Payara Micro instance
[FISH-208] - Improvements in stop-domain process
[FISH-219] - Indicate missing default value when using custom template for create-domain
Bug Fixes
[FISH-188] - Fix Adding PEMs with Add-to-keystore and Add-to-truststore Commands
[FISH-190] - Missing Help Text for Certificate Management Commands
[FISH-195] - Missing --nodedir and --node Options on Certificate Management Commands
[FISH-197] - JDBCRealm requires the Message Digest field although a default value should be used
[FISH-200] - generate-self-signed-certificate places a PrivateKeyEntry in the Truststore
[FISH-207] - Disabling applications via their deployment group targets not working
[FISH-211] - PayaraMicro APIs not initializable when run via RootLauncher
[FISH-216] - Add-to-keystore and add-to-truststore Commands don’t add CA signed certs correctly
[FISH-236] - GitHub #4688 Typo in docker file - removal of /tmp/tmpfile
[FISH-260] - Missing invocation on top of invocation stack
[FISH-263] - Community Contribution: NPE when enabling versioned application with Microprofile Config
Source: https://docs.payara.fish/enterprise/docs/5.23.0/release-notes/release-notes-21-0.html
Verify that you have added intelligence successfully to Splunk Enterprise Security
After you add new intelligence sources or configure included intelligence sources, verify that the intelligence is being parsed successfully and that threat indicators are being added to the threat intelligence KV Store collections. The modular input responsible for parsing intelligence runs every 12 hours.
Verify that the intelligence source is being downloaded
This verification procedure is relevant only for URL-based sources and TAXII feeds.
- From the Enterprise Security menu bar, select Audit > Threat Intelligence Audit.
- Find the intelligence source and confirm that the download_status column states threat list downloaded.
For TAXII feeds, the UI states Retrieved document from TAXII feed.
- Review the Intelligence Audit Events to see if there are errors associated with the lookup name.
If the download fails, attempt the download directly from the terminal of the Splunk server using a curl or wget utility. If the intelligence source can be successfully downloaded using one of these utilities, but is not being downloaded successfully in Splunk Enterprise Security, ask your system administrator whether you need to specify a custom user-agent string to bypass network security controls in your environment. See step 12 in Add a URL-based threat source.
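For example, a minimal Python check along the following lines can confirm whether the source is reachable from the Splunk server with a custom user-agent string. The URL and user-agent value below are placeholders, not values from this procedure:

import urllib.request

# Placeholder values for illustration; substitute your threat source URL
# and whatever user-agent string your network security controls expect.
url = "https://example.com/threatlist.csv"
headers = {"User-Agent": "Mozilla/5.0 (compatible; threatlist-check)"}

req = urllib.request.Request(url, headers=headers)
with urllib.request.urlopen(req, timeout=30) as resp:
    body = resp.read()
    print("HTTP status:", resp.status)
    print("bytes downloaded:", len(body))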
Verify that threat indicators exist in the threat collections
For threat intelligence sources, verify that the threat intelligence was successfully parsed and threat indicators exist in the threat collections.
- Select Security Intelligence > Threat Intelligence > Threat Artifacts.
- Search for the threat source name in the Intel Source ID field.
- Confirm that threat indicators exist for the threat source.
Troubleshoot parsing errors
Review the relevant log files to troubleshoot errors that can occur when parsing intelligence sources and adding them to Enterprise Security.
Troubleshoot FSISAC threat sources
If your FSISAC threat source appears to be stuck and you're seeing the following in your traceback log:
2020-06-03 18:36:12,461+0000 INFO pid=6580 tid=MainThread file=threatlist.py:download_taxii:361 | status="TAXII feed polling starting" stanza="FS_TEST"
2020-06-03 18:36:12,516+0000 INFO pid=6580 tid=MainThread file=__init__.py:_poll_taxii_11:49 | Certificate information incomplete - falling back to AUTH_BASIC.
2020-06-03 18:36:12,516+0000 INFO pid=6580 tid=MainThread file=__init__.py:_poll_taxii_11:68 | Auth Type: AUTH_BASIC
It could be due to a bug in libtaxii that requires version 1.1.113 or higher to support the vendor's requirement of including Server Name Indication (SNI). Libtaxii 1.1.113.x is only available in versions of Enterprise Security 6.x and higher.
Source: https://docs.splunk.com/Documentation/ES/6.0.0/Admin/Verifythreatintel
The FeedHQ API
FeedHQ implements the Google Reader API. This document, while being FeedHQ’s API documentation, aims to be a reference for developers seeking information about the details of this API.
The Google Reader API was never publicly released or documented but developers have reverse-engineered it and built an ecosystem of applications that use this API implementation for syncing mobile or desktop apps with Google Reader accounts.
A handful of resources are available in various places on the internet, but it's tedious for developers to get an accurate and extensive idea of how the API works. This is FeedHQ's attempt to fix that.
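As a concrete taste of the API before diving into the reference below, here is a minimal Python sketch of the conventional ClientLogin flow followed by a user-info request. The host, credentials, and exact paths are assumptions based on the Google Reader conventions; the reference sections below are the authoritative source:

import json
import urllib.parse
import urllib.request

# Assumed values for illustration only.
BASE = "https://feedhq.org"
EMAIL = "user@example.com"
PASSWORD = "secret"

# 1. ClientLogin: exchange credentials for an Auth token.
data = urllib.parse.urlencode({"Email": EMAIL, "Passwd": PASSWORD}).encode()
with urllib.request.urlopen(f"{BASE}/accounts/ClientLogin", data) as resp:
    lines = resp.read().decode().splitlines()
auth = dict(line.split("=", 1) for line in lines if "=" in line)["Auth"]

# 2. Call an endpoint (here: user-info) with the token, asking for JSON output.
req = urllib.request.Request(
    f"{BASE}/reader/api/0/user-info?output=json",
    headers={"Authorization": f"GoogleLogin auth={auth}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))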
- Terminology
- API Reference
- user-info
- unread-count
- disable-tag
- rename-tag
- subscription/list
- subscription/edit
- subscription/quickadd
- subscription/export
- subscription/import
- subscribed
- stream/contents
- stream/items/ids
- stream/items/count
- stream/items/contents
- tag/list
- edit-tag
- mark-all-as-read
- preference/list
- preference/stream/list
- friend/list
- Undocumented / not implemented
Source: https://feedhq.readthedocs.io/en/latest/api/index.html
Information about Cupolas And How They Are Used
Cupolas are one of the more distinctive features you may notice on a building, and they are worth understanding. They appear on many structures, old and new, and are especially common on church buildings, where their characteristic shape makes them stand out. Cupolas are available in many different sizes and styles. They are very good for lighting purposes and are often considered better than sunroofs; they also provide ventilation, help balance the temperature inside the building, and keep it comfortable throughout the year. If you are interested in installing a cupola, make sure it is installed by the right company, and take the time to learn about its other benefits.
Source: http://docs-prints.com/2021/03/21/the-10-laws-of-and-how-learn-more-7/
Direction Modifier
Summary
This modifier changes the direction of particles passing through it.
Note that the modifier will draw an arrow to show the direction the particles will take (except in Ring and Spray modes). You can turn this off if desired by unchecking the 'Visible in Editor' switch.
Visible in Editor
If this switch is checked, the direction for the particles will be shown by an arrow drawn on screen.
This will not move if you rotate the modifier, as this does not affect the direction, unless 'Operation' is set to 'Use Modifier Rotation' (see below). Uncheck this switch to hide the arrow.
Operation
This setting has seven options:
Relative
The new direction set by the 'Heading' and 'Pitch' parameters will be relative to the particle's current direction, rather than an absolute direction as found with the 'Absolute' option.
Note that you may encounter gimbal lock with this option when the Y-axis direction exceeds 90 degrees (either positive or negative). This is currently a limitation.
Absolute [default setting]
The Absolute setting will cause the particle to move along the heading chosen in 3D world space. For example, if the particle is currently moving along the positive X-axis, setting the heading to -90 degrees will cause it to move along the negative X-axis, which is actually 180 degrees from the initial direction.
Important: this is a direction in global space. The new direction will always be the same, regardless of the current direction of the particles or the rotation of the emitter.
Spray
This option will cause the particles to spray out from their current direction of travel. The width of the spray is governed by the 'Spray Strength' setting (see below). The best results with this option can be seen with a very narrow particle stream with all particles initially travelling in the same direction.
Circular
This will cause the particle to change its direction relative to its current direction. Each frame, the modifier alters the particle's direction relative to its current one; for example, if the 'Heading' is set to +5 degrees, the heading will increase by 5 degrees each frame. The result is that the particle will move in a circle unless the modifier stops acting on it. Exactly how long it takes to complete a circle depends on the Heading, the 'Acuteness of Turn', and the modifier's falloff. You can then use the 'Y-axis Kick' to move the particle along the Y-axis (note: that's the emitter object Y axis, not the world Y axis). This is an easy way to make the particle move in a spiral.
You can rotate the emitter to alter the plane in world space on which the particle moves.
Jitter
This mode is identical to the Circular mode but instead of a consistent change in direction each frame, the direction will change randomly. So if the 'Heading' setting is set to 45 degrees, the particle heading will change randomly each frame between +45 and -45 degrees.
Ring
In this mode you can make the particles adopt a flat ring or disc shape. The particles will deviate from their original path by a maximum value set in the 'Angle Limit' parameter. The speed with which they deviate is controlled by the 'Step Per Frame' parameter.
To form a ring, you should turn off 'Sub-Frame Emission' in the emitter. This is because you are trying to generate a specific pattern and sub-frame emission is there precisely to avoid such things! If you leave it on, you will see an irregular disc instead of a ring. The best results with this option can be seen with a very narrow particle stream with all particles initially travelling in the same direction.
Use Modifier Rotation
With this mode, the direction is the direction in which the modifier points. This lets you simply rotate the modifier to point in the desired direction, without having to enter a heading and pitch value. The 'Variation' parameters can still be used to vary the direction from the actual pointed direction.
Random Seed
The seed for the random-number generator.
Heading (and Variation)
This is the particle heading (corresponding to the ‘H’ angle in an object rotation). The variation parameter adds some random variation into the actual heading produced.
Pitch (and Variation)
This is the particle pitch (corresponding to the ‘P’ angle in an object rotation). The variation parameter adds some random variation into the actual pitch produced.
You may be wondering why there is no setting for Bank. This is because it is not needed. To make a particle move in any direction only requires the heading and pitch. Bank would make the particle spin on its own axis, which is not useful in this case.
Use Acuteness of Turn
This switch is only available if 'Operation' is set to 'Absolute'. If it is checked, the modifier will use the 'Acuteness of Turn' setting to determine how sharply the particle changes direction. Turning it off will cause an immediate direction change with no curve.
Acuteness of Turn
This setting determines the sharpness of the turn the particle makes when changing direction. A value of 100% will cause the particle to turn immediately to its new direction. A value of 0% will mean that it will not turn at all!
The setting required depends on what you want to do and on factors such as the particle speed. Generally, values of 15-20% produce nice, smooth curves.
Y-Axis Kick (and Variation)
This setting is only used with the Circular mode. When applied to a particle it causes the particle to move up or down on the Y-axis. Combined with circular movement this will cause the particle to move in a spiral.
Spray Strength
This parameter lets you control the strength of the Spray mode.
Step Per Frame
This is only used in 'Ring' mode. It controls how fast the particles deviate from their current path to the limit given in the 'Angle Limit' setting. A large number will cause a very rapid change in direction, small numbers will cause a more gradual change.
Angle Limit
This is only used in 'Ring' mode. It gives the maximum angle by which the particle will deviate from its current path. For example, if a particle is currently travelling along the Z-axis with no X or Y axis movement, an angle limit of 45 degrees will result in a maximum 45-degree deviation away from the Z-axis (this could be in any direction along X or Y). The speed with which the limit is reached is governed by the 'Step Per Frame' setting.
You can set this parameter to a maximum of 90 degrees. Note that at or near this setting, you may start to see some distortion begin to appear in the particle stream.
Source: http://docs.x-particles.net/html/directmod.php
What is an FKB file?
FKB is an eBook file extension developed by Flipkart.com for the Flipkart software application; files of this type are usually found on Windows 10 systems. An FKB file can range in size from a couple of hundred kilobytes to numerous megabytes, depending on the length of the book and the number of images included. To download a book from the Flipkart eBooks Android application, first buy the eBook, then click on it in the Library and the book will start downloading. If you can't see the purchased document, refresh the application.
Supported Operating Systems
The following operating systems support FKB files:
- Windows 7
- Windows 8
- Windows 10
- Windows Server 2003/2008/2012/2016
- Mac OS X
- FreeBSD
- iOS
- Linux
- Android
- NetBSD
- OpenBSD
Problems opening an FKB file
- Improper installation of supporting programs. Make sure the program is installed properly and updated to the latest version
- Insufficient storage space
- Corrupt or infected files. This problem can easily be solved with an antivirus program installed on your system
- Broken links
Source: https://docs.fileformat.com/ebook/fkb/
cPanel & WHM versions 11.48 and later include functionality to validate that you download all cPanel & WHM-delivered files in an uncorrupted state. This avoids any possibility of corruption due to a compromise of the next.cpanel.net mirror system or the server's connection to cPanel, L.L.C. systems.
The signature verification logic requires that all assets you download from the httpupdate mirror system meet either of the following criteria:
The system validates assets that you download from other cPanel, L.L.C. systems, such as the public portion of our GPG keys, via SSL connections.
cPanel & WHM uses two primary GPG keys to sign assets delivered through our httpupdate mirror system. The system uses release keys to sign all assets intended for the normal mirror system. The system uses development keys to sign internal development builds and builds destined for the next.cpanel.net mirror system.
cPanel & WHM systems that track release tiers or Long Term Support tiers only need access to the "release" keys. To track experimental development builds on the next.cpanel.net mirror system, you must enable the development keys.
The Security section of WHM's Tweak Settings interface (WHM >> Home >> Server Configuration >> Tweak Settings) contains the Signature validation on assets downloaded from cPanel & WHM mirrors setting. This setting controls the types of signatures that cPanel & WHM accepts and defaults to Release Key Only.
cPanel & WHM also provides support for custom third-party cPAddons Site Software installations. By default, cPanel & WHM doesn't validate the security of third-party cPAddons in the same way it does for cPanel & WHM-delivered cPAddons. If you know that all third-party cPAddons residing on the system are correctly signed, you can enable signature verification.
If files that you download from the next.cpanel.net mirror system become corrupt in transit, an error message that indicates what type of failure occurred will appear. Most cPanel & WHM subsystems will automatically switch to a different mirror to download a valid version of the requested file.
Source: https://hou-1.docs.confluence.prod.cpanel.net/exportword?pageId=2433985
Transmutator
Transmutator is a general purpose migration framework. It focuses on automating actions you perform to upgrade (or downgrade) a product.
Warning
This project is experimental. At this stage, it just describes concepts. Perhaps the concepts are implemented by some existing tools.
A typical migration for a web service could include:
- ask admin for confirmation
- enable maintenance page
- stop frontends
- backup data
- update configuration
- provision machines (upgrade software)
- migrate databases
- restart frontends
- run smoketests
- disable maintenance page.
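Transmutator's own API is not shown in this overview, so the sketch below is purely illustrative: a hand-rolled Python runner that executes an ordered list of upgrade actions like the ones above and stops at the first failure. None of the step commands are real migration actions; they are placeholders.

import subprocess

# Purely illustrative: an ordered list of (title, command) upgrade steps.
# The commands are placeholders, not real migration actions.
STEPS = [
    ("enable maintenance page", ["touch", "/tmp/maintenance.flag"]),
    ("backup data", ["echo", "backing up data..."]),
    ("migrate databases", ["echo", "migrating databases..."]),
    ("run smoketests", ["echo", "running smoketests..."]),
    ("disable maintenance page", ["rm", "-f", "/tmp/maintenance.flag"]),
]

for title, command in STEPS:
    print(f"==> {title}")
    subprocess.run(command, check=True)  # abort the upgrade on the first failure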
Resources
- Documentation:
- PyPI:
- Code repository:
- Bugtracker:
- Roadmap:
- Continuous integration:
Source: https://transmutator.readthedocs.io/en/stable/
The Select Background Image dialog allows you to choose a file or sequence of files for a viewport background.
Cityscape model with a sky image used as the viewport background
You can also convert a set of sequentially numbered files to an Image File List (IFL). This is the same process used by the IFL Manager Utility.
To select a background image for a viewport:
To select a set of still images as a viewport background:
The files must be sequentially numbered (for example, image01.bmp, image02.bmp, image03.bmp).
The Image File List (IFL) file is saved to the target directory.
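An IFL file is essentially a plain text list of image file names, one per line, so a numbered sequence like the one above can also be assembled into an IFL outside of 3ds Max. The following Python sketch does that; the directory, file pattern, and output name are assumptions for illustration:

import glob
import os

# Assumed location and naming pattern for the numbered still images.
frame_dir = r"C:\maps"
frames = sorted(glob.glob(os.path.join(frame_dir, "image*.bmp")))

# An IFL file lists one image file name per line.
ifl_path = os.path.join(frame_dir, "sequence.ifl")
with open(ifl_path, "w") as ifl:
    for frame in frames:
        ifl.write(os.path.basename(frame) + "\n")

print(f"Wrote {len(frames)} frames to {ifl_path}")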
Displays the Image File List Control dialog to create an IFL file. Available only when Sequence is on and there are sequentially numbered files in the displayed directory.
Selects the type of gamma to be used for the selected file. Available only when Enable Gamma Selection is turned on in the Gamma panel.
Ignores the image’s own gamma and uses the system default gamma instead, as set in the Gamma panel.
[Sequence and Preview group]
Source: http://docs.autodesk.com/3DSMAX/15/ENU/3ds-Max-Help/files/GUID-066B9B88-3B04-4D7C-B67C-80D851F19405.htm
The main features of GeoTools are presented in the feature list.
GeoTools is used by a number of projects including Web Feature Servers, Web Map Servers, and desktop applications. For more information, check out some fascinating screenshots and applications powered by GeoTools!
Open Source
If your organization is making use of GeoTools, we invite you to help set the release schedule. Please contact us on the developers list; we would love to know what your team is up to.
- one of the longest running Java projects - check out our History of open development since 1996
For more information please visit our About page.
News and Events
From wiki blog posts:
Blog Posts
- Blog: GeoTools Release Week (Oct 13, 2009)
Edit this Website
Due to vandalism the following procedure is more complicated than we would like. Please be patient and go through all the steps.
Source: https://docs.codehaus.org/pages/viewpage.action?pageId=127926471
As you can see from this message, the <trackingNumbers> element contains the tracking numbers, but in a format that is not easily consumed.
The above input message needs to be transformed into the following output message:
Source: https://docs.codehaus.org/pages/diffpages.action?originalId=228171018&pageId=233050485
In the Help25:Menus_Menu_Item_Article_Category_List#List_Layouts section, there is an indication that it is possible to define the # of hits to show in a list. This functionality is not available in my Current installation (j2.5.6). Should it be removed from these docs? Crispness 10:59, 7 August 2012 (CDT)
The help screens should exactly match the version of Joomla to which they refer, so yes, please make the change. Thanks. Chris Davenport 15:22, 7 August 2012 (CDT)
OK. Here's my screenshot from my Menu Manager - Edit Menu Item - List Contacts in a Category - List Layouts. Are we talking about the same thing? Because it's not there on mine. (although I wish it was) Crispness 01:35, 8 August 2012 (CDT)
Source: http://docs.joomla.org/index.php?title=Help25_talk:Menus_Menu_Item_Article_Category_List&oldid=70771
Getting MMTk and Jikes RVM and Eclipse working.
- Download:
dist/BaseBaseNoGC_ia32-linux/rvm HelloWorld.
Creating The Base Tutorial Collector
Source: https://docs.codehaus.org/pages/viewpage.action?pageId=114788258
Description
The current implementation of GridCoverageReader normally associates a single reader with a single coverage (e.g., TIFF, GTOPO30, and so on). The intention of this proposal is to break the 1-1 association and allow GridCoverageReader to expose multiple coverages in an efficient way, in order to properly deal with sources storing multiple coverages with different structures and features (e.g., not mosaics, which can deal with multiple coverages having some degree of uniformity).
The current GridCoverageReader interface already allows exposing multiple coverages, but in an inefficient and thread-unsafe way, as it is intended to read GridCoverages from the input stream in sequential order. In order to access different coverages, once the reader is open, one needs to check whether more coverages are available by calling hasMoreGridCoverages, then skip to the desired coverage, and get the name of the current coverage through the getCurrentSubname method. Thus, there are two issues:
- Need to open a new reader for each different coverage read, which is rather inefficient as normally the opening of a reader requires parsing the raster data header, or gathering network or database connections, something that is better done only once instead
- Need to linearly scan the contents of the reader to reach the desired coverage, with no form of random access, hampering the usage of the current API in storages that contain a large amount of coverages
Additional notes on base implementation changes:
The base class implementing GridCoverageReader is AbstractGridCoverage2DReader. Its implementation will be changed so that implementors of the current API won't be affected by the change, while allowing new multi-coverage readers to get the full benefit of the new API.
An outlook into the future
This work is meant as a stepping stone to allow a smooth transition between old style coverage APIs (based on current GridCoverageReader which doesn't have proper logic to deal with time/elevation domains) and new coverage-api currently living on unsupported/coverage-experiment, and loosely based on the DataAccess paradigm, as well as the WCS protocol, which allows for
- proper geospatial dimensions management (time domain, vertical domain, custom domains)
Source: https://docs.codehaus.org/pages/viewpage.action?pageId=230400081
Introduction
InterSystems IRIS supports parts of the WS-Security, WS-Policy, WS-SecureConversation, and WS-ReliableMessaging specifications, which describe how to add security to web services and web clients. This topic summarizes the tools and lists the supported standards.
If your InterSystems IRIS web client uses a web service that requires authentication, and if you do not want to use the features described in this book, you can use the older WS-Security login feature. See “Using the WS-Security Login Feature,” in the book Creating Web Services and Web Clients.
Tools in InterSystems IRIS Relevant to SOAP Security
InterSystems IRIS provides the following tools that are relevant to security for web services and web clients:
Ability to provide trusted certificates for InterSystems IRIS to use to validate certificates and signatures received in inbound messages.
Ability to represent X.509 certificates. You can store, in the IRISSYS database, certificates that you own and certificates of entities you will communicate with. For certificates that you own, you can also store the corresponding private keys, if you need to sign outbound messages.
In the IRISSYS database, an X.509 certificate is contained within an InterSystems IRIS credential set, specifically within an instance of %SYS.X509Credentials. You use the methods of this class to load a certificate (and optionally, the associated private key file, if applicable) into the database. You can execute the methods directly or you can use the Management Portal.
You can specify who owns the credential set and who can use it.
The %SYS.X509Credentials class also provides methods to access certificates by alias, by thumbprint, by subject key identifier, and so on. For reasons of security, the %SYS.X509Credentials class cannot be accessed using the normal object and SQL techniques.
Support for SSL (Secure Sockets Layer) and TLS (Transport Layer Security). You use the Management Portal to define InterSystems IRIS SSL/TLS configurations, which you can then use to secure the communications to and from an InterSystems IRIS web service or client, by means of X.509 certificates.
SSL/TLS configurations are discussed in the Security Administration Guide.
WS-Policy support. InterSystems IRIS provides the ability to attach WS-Policy information to an InterSystems IRIS web service or web client. A policy can specify items like the following:
Use of WS-SecureConversation.
Use of SSL/TLS.
WS-Security features to use or to expect.
WS-Addressing headers to use or expect. WS-Addressing headers are described in Creating Web Services and Web Clients, which also describes how to add these headers manually.
Use of MTOM (Message Transmission Optimization Mechanism) packaging. MTOM is described in Creating Web Services and Web Clients, which also describes how to manually use MTOM packaging.
The policy is created in a separate configuration class. In the class, you use an XData block to contain the policy (which is an XML document) and to specify the parts of the service or client to which it is attached. You can attach the policy to the entire service or client or to specific methods (or even to specific request or response messages).
You can use a Studio wizard to create this configuration class. The wizard provides a set of predefined policies with a rich set of options. Wherever a policy requires an X.509 certificate, the wizard enables you to choose among the certificates that are stored in IRISSYS. Similarly, wherever a policy requires SSL/TLS, you can also choose an existing SSL/TLS configuration, if applicable.
You can make direct edits to the policy later if wanted. The policy is in effect when you compile the configuration class.
Support for creating and working with WS-Security elements directly. InterSystems IRIS provides a set of XML-enabled classes to represent WS-Security header elements such as <UsernameToken> and <Signature>. These specialized classes provide methods that you use to create and modify these elements, as well as references between them.
If you use WS-Policy support, InterSystems IRIS uses these classes automatically. If you use WS-Security support directly, you write code to create instances of these classes and insert them into the security header.
In all cases, when an InterSystems IRIS web service or client receives a SOAP message with WS-Security elements, the system creates instances of these classes to represent these elements. It also creates instances of %SYS.X509Credentials to contain any certificates received in the inbound messages.
Support for creating and working with WS-SecureConversation elements directly. InterSystems IRIS provides a set of XML-enabled classes to represent these elements. You define a callback method in the web service to control how the web service responds to a request for a secure conversation.
You can either use WS-Policy or you can use WS-Security and WS-SecureConversation directly. If you use WS-Policy, the system automatically uses the WS-Security tools as needed. If you use WS-Security or WS-SecureConversation directly, more coding is necessary.
A Brief Look at the WS-Security Header
A SOAP message carries security elements within the WS-Security header element — the <Security> subelement of the SOAP <Header> element. The following example shows some of the possible components:
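The snippet below sketches a skeleton of such a header, reduced to the component elements described next, with namespaces, attributes, and element content omitted. It is an illustrative outline only, not a complete or valid message; the Python code simply checks that the outline is well-formed XML:

import xml.etree.ElementTree as ET

# Illustrative outline of a WS-Security header: element names only, with
# namespaces, attributes, and element content omitted for brevity.
HEADER_OUTLINE = """
<Header>
  <Security>
    <Timestamp>
      <Created/>
      <Expires/>
    </Timestamp>
    <BinarySecurityToken/>
    <UsernameToken/>
    <Assertion>
      <SubjectConfirmation/>
    </Assertion>
    <EncryptedKey/>
    <Signature>
      <Reference/>
    </Signature>
  </Security>
</Header>
"""

root = ET.fromstring(HEADER_OUTLINE.strip())
print([child.tag for child in root.find("Security")])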
These elements are as follows:
The timestamp token (<Timestamp>) includes the <Created> and <Expires> elements, which specify the range of time during which this message is valid. A timestamp is not, strictly speaking, a security element. If the timestamp is signed, however, you can use it to avoid replay attacks.
The binary security tokens (<BinarySecurityToken>) are binary-encoded tokens that include information that enable the recipient to verify a signature or decrypt an encrypted element. You can use these with the signature element, encryption element, and assertion element.
The username token (<UsernameToken>) enables a web client to log in to the web service. It contains the username and password required by the web service; these are included in clear text by default. There are several options for securing the password.
The assertion element (<Assertion>) includes the SAML assertion that you create. The assertion can be signed or unsigned.
The assertion element can include a subject confirmation element (<SubjectConfirmation>). This element can use the Holder-of-key method or the Sender-vouches method. In the former case, the assertion carries key material that can be used for other purposes.
The encrypted key element (<EncryptedKey>) includes the key, specifies the encryption method, and includes other such information. This element describes how the message has been encrypted. See “Overview of Encryption,” later in this book.
The signature element (<Signature>) signs parts of the message. The informal phrase “signs parts of the message” means that the signature element applies to those parts of the message, as described in “Overview of Digital Signatures,” later in this book.
The figure does not show this, but the signature element contains <Reference> elements that point to the signed parts of the message.
As shown here, an encrypted key element commonly includes a reference to a binary security token included earlier in the same message, and that token contains information that the recipient can use to decrypt the encrypted key. However, it is possible for <EncryptedKey> to contain the information needed for decryption, rather than having a reference to a token elsewhere in the message. InterSystems IRIS supports multiple options for this.
Similarly, a digital signature commonly consists of two parts: a binary security token that uses an X.509 certificate and a signature element that has a direct reference to that binary security token. (Rather than a binary security token, an alternative is to use a signed SAML assertion with the Holder-of-key method.) It is also possible for the signature to consist solely of the <Signature> element; in this case, the element contains information that enables the recipient to validate the signature. InterSystems IRIS supports multiple options for this as well.
Standards Supported in InterSystems IRIS
This section lists the support details for WS-Security, WS-Policy, WS-SecureConversation, and WS-ReliableMessaging for InterSystems IRIS web services and web clients.
WS-Security Support in InterSystems IRIS
InterSystems IRIS supports the following parts of WS-Security 1.1 created by OASIS:
WS-Security headers
X.509 Token Profile 1.1
XML Encryption, with the following choice of algorithms:
Block encryption (data encryption): AES-128 (default), AES-192, or AES-256
Key transport (key encryption): RSA-OAEP (default) or RSA-v1.5
XML Signature with Exclusive XML Canonicalization, with the following choice of algorithms:
Digest method: SHA1 (default), SHA256, SHA384, or SHA512
Signature algorithm: RSA-SHA1, RSA-SHA256 (default), RSA-SHA384, RSA-SHA512, HMACSHA256, HMACSHA384, or HMACSHA512
Note that you can modify the default signature algorithm. To do so, access the Management Portal, click System Administration, then Security, then System Security, and then System-wide Security Parameters. The option to specify the default signature algorithm is labeled Default signature hash.
For encryption or signing, if the binary security token contains an X.509 certificate, InterSystems IRIS follows the X.509 Certificate Token Profile with X509v3 Token Type. If the key material uses a SAML assertion, InterSystems IRIS follows the WS-Security SAML Token Profile specification.
You can specify the message parts to which the digital signature applies.
UsernameToken Profile 1.1
Most of WS-Security SAML Token Profile 1.1, based on SAML version 2.0. The exception is that InterSystems IRIS SOAP support does not include features that refer to SAML 1.0 or 1.1.
For outbound SOAP messages, InterSystems IRIS web services and web clients can sign the SAML assertion token. However, it is the responsibility of your application to define the actual SAML assertion.
For inbound SOAP messages, InterSystems IRIS web services and web clients can process the SAML assertion token and validate its signature. Your application must validate the details of the SAML assertion.
Full SAML support is not implemented. “SAML support in InterSystems IRIS” refers only to the details listed here.
WS-Policy Support in InterSystems IRIS
Both the WS-Policy 1.2 and the WS-Policy 1.5 frameworks are supported along with the associated specific policy types:
WS-SecurityPolicy 1.1
WS-SecurityPolicy 1.2
Web Services Addressing 1.0 - Metadata
Web Services Addressing 1.0 - WSDL Binding
WS-MTOMPolicy
Note that <PolicyReference> is supported only in two locations: in place of a <Policy> element within a configuration element or as the only child of a <Policy> element.
WS-SecurityPolicy 1.2 is supported as follows. Equivalent parts of WS-SecurityPolicy 1.1 are also supported.
4.1.1 SignedParts supported with exceptions:
Body supported
Header supported
Attachments not supported
4.1.2 SignedElements not supported
4.2.1 EncryptedParts supported with exceptions:
Body supported
Header not supported
Attachments not supported
4.2.2 EncryptedElements not supported
4.3.1 RequiredElements not supported
4.2.1 RequiredParts supported:
Header supported
5.1 sp:IncludeToken supported
5.2 Token Issuer and Required Claims not supported
5.3 Derived Key properties supported only for X509Token and SamlToken
5.4.1 UsernameToken supported
5.4.2 IssuedToken not supported
5.4.3 X509Token supported
5.4.4 KerberosToken not supported
5.4.5 SpnegoContextToken not supported
5.4.6 SecurityContextToken not supported
5.4.7 SecureConversationToken supported
5.4.8 SamlToken supported
5.4.9 RelToken not supported
5.4.10 HttpsToken supported only for TransportBinding Assertion
5.4.11 KeyValueToken supported
6.1 [Algorithm Suite] partially supported:
Basic256, Basic192, Basic128 supported
Basic256Rsa15, Basic192Rsa15, Basic128Rsa15 supported
Basic256Sha256, Basic192Sha256, Basic128Sha256 supported
Basic256Sha256Rsa15, Basic192Sha256Rsa15, Basic128Sha256Rsa15 supported
TripleDes, TripleDesRsa15, TripleDesSha256, TripleDesSha256Rsa15 not supported
InclusiveC14N, SOAPNormalization10, STRTransform10 not supported
XPath10, XPathFilter20, AbsXPath not supported
6.2 [Timestamp] supported
6.3 [Protection Order] supported
6.4 [Signature Protection] supported
6.5 [Token Protection] supported
6.6 [Entire Header and Body Signatures] supported
6.7 [Security Header Layout] supported
7.1 AlgorithmSuite Assertion per 6.1
7.2 Layout Assertion per 6.7
7.3 TransportBinding supported only with HttpsToken
7.4 SymmetricBinding supported
7.5 AsymmetricBinding supported:
Only for tokens supported in section 5.4
Only for properties in section 6
8.1 SupportingTokens Assertion supported
8.2 SignedSupportingTokens Assertion supported
8.3 EndorsingSupportingTokens Assertion supported
8.4 SignedEndorsingSupportingTokens Assertion supported
8.5 Encrypted SupportingTokens Assertion supported
8.6 SignedEncrypted SupportingTokens Assertion supported
8.7 EndorsingEncrypted SupportingTokens Assertion supported
8.8 SignedEndorsingEncrypted SupportingTokens Assertion supported
9.1 Wss10 Assertion supported with exceptions:
sp:MustSupportRefKeyIdentifier supported
sp:MustSupportRefIssuerSerial supported
sp:MustSupportRefExternalURI not supported
sp:MustSupportRefEmbeddedToken not supported
9.2 Wss11 Assertion supported with exceptions:
sp:MustSupportRefKeyIdentifier supported
sp:MustSupportRefIssuerSerial supported
sp:MustSupportRefExternalURI not supported
sp:MustSupportRefEmbeddedToken not supported
sp:MustSupportRefKeyThumbprint supported
sp:MustSupportRefKeyEncryptedKey supported
sp:RequireSignatureConfirmation supported
10.1 Trust13 Assertion supported with exceptions:
sp:MustSupportClientChallenge not supported
sp:MustSupportServerChallenge not supported
sp:RequireClientEntropy supported
sp:RequireServerEntropy supported
sp:MustSupportIssuedTokens not supported -- ignored for now
sp:RequireRequestSecurityTokenCollection not supported
sp:RequireAppliesTo not supported
Trust10 Assertion
Note:
The Trust10 Assertion is supported only in a trivial way; InterSystems IRIS converts it to a Trust13 Assertion to avoid throwing an error.
WS-SecureConversation Support in InterSystems IRIS
InterSystems IRIS supports parts of WS-SecureConversation 1.3, as follows:
It supports the SCT Binding (for issuing SecureConversationTokens based on the Issuance Binding of WS-Trust) and the WS-Trust Cancel binding (see "Canceling Contexts" in the WS-SecureConversation specification).
It supports the case when the service being used acts as its own Security Token Service.
It supports only the simple request for a token and simple response.
InterSystems IRIS also supports the necessary supporting parts of WS-Trust 1.3. Support for WS-Trust is limited to the bindings required by WS-SecureConversation and is not a general implementation.
WS-ReliableMessaging Support in InterSystems IRIS
InterSystems IRIS supports WS-ReliableMessaging 1.1 and 1.2 for synchronous messages over HTTP. Only anonymous acknowledgments in the response message are supported. Because only synchronous messages are supported, no queueing is performed.
Source: https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSOAPSEC_INTRO
The ability to add a game vendor becomes available after installing and activating the ACES Gambling plugin.
1) Go to “Games” – “Vendors.”
2) Add a Name, Slug, Description, and Logo, and press the “Add New Taxonomy” button.
The minimum recommended size for the vendor logo is 40px in height; the width can be anything.
Source: https://docs.mercury.is/article/how-to-add-a-game-vendor/
Recommended sizes:
For a site logo – 173×40 px (or 346×80 px for retina), but the width is not fixed and can be different.
For a site icon – 512×512 px.
1) Go to “Appearance” – “Customize” – “Site Identity.”
2) Click to the “Select logo” button. Upload and select a logo image.
3) Click to the “Select site icon” button. Upload and select a site icon (favicon) image.
4) Publish the changes.
Source: https://docs.mercury.is/article/how-to-add-change-a-logo-and-site-icon-favicon/
Evaluate Microsoft Defender for Office 365
Important
The improved Microsoft 365 security center is now available. This new experience brings Defender for Endpoint, Defender for Office 365, Microsoft 365 Defender, and more into the Microsoft 365 security center. Learn what's new.
Important
Microsoft Defender for Office 365 evaluation is in public preview. This preview version is provided without a service level agreement. Certain features might not be supported or might have constrained capabilities.
Conducting a thorough security product evaluation can help give you informed decisions on upgrades and purchases. It helps to try out the security product's capabilities to assess how it can help your security operations team in their daily tasks.
The Microsoft Defender for Office 365 evaluation experience is designed to eliminate the complexities of device and environment configuration so that you can focus on evaluating the capabilities of Microsoft Defender for Office 365. With evaluation mode, all messages sent to Exchange Online mailboxes can be evaluated without pointing MX records to Microsoft. The feature only applies to email protection and not to Office Clients like Word, SharePoint, or Teams.
If you don't already have a license that supports Microsoft Defender for Office 365, you can start a free 30-day evaluation and test the capabilities in the Office 365 Security & Compliance center. You'll enjoy the quick set-up and you can easily turn it off if necessary.
Note
If you're in the unified Microsoft 365 security portal (security.microsoft.com) you can start a Defender for Office 365 evaluation here: Email & Collaboration > Policies & Rules > Threat Policies > Additional Policies.
How the evaluation works
Defender for Office 365 in evaluation mode creates Defender for Office 365 email policies that log verdicts, such as malware, but don't act on messages. You are not required to change your MX record configuration.
With evaluation mode, Safe Attachments, Safe Links, and mailbox intelligence based impersonation policies are set up on your behalf. All Defender for Office 365 policies are created in non-enforcement mode in the background and are not visible to you.
As part of the setup, evaluation mode also configures Enhanced Filtering for Connectors. It improves filtering accuracy by preserving IP address and sender information, which are otherwise lost when mail passes through an email security gateway (ESG) in front of Defender for Office 365. Enhanced Filtering for Connectors also improves the filtering accuracy for your existing Exchange Online Protection (EOP) anti-spam and anti-phishing policies.
Enabling Enhanced Filtering for Connectors improves filtering accuracy, but it may alter deliverability for certain messages if you have an ESG in front of Defender for Office 365 and do not currently bypass EOP filtering. The impact is limited to EOP policies; MDO policies set up as part of the evaluation are created in non-enforcement mode. To minimize potential production impact, you can bypass all EOP filtering by creating a transport rule that sets the Spam Confidence Level (SCL) to -1. See Use the EAC to create a mail flow rule that sets the SCL of a message for details.
When the evaluation mode is set up, you will have a report updated daily with up to 90 days of data quantifying the messages that would have been blocked if the policies were implemented (for example, delete, send to junk, quarantine). Reports are generated for all Defender for Office 365 and EOP detections. They are aggregated per detection technology (for example, impersonation) and can be filtered by time range. Additionally, message reports can be created on-demand to create custom pivots or to deep dive messages using Threat Explorer.
With the simplified set-up experience, you can focus on:
- Running the evaluation
- Getting a detailed report
- Analyzing the report for action
- Presenting the evaluation outcome
Before you begin
Licensing
To access the evaluation, you'll need to meet the licensing requirements. Any of the following licenses will work:
- Microsoft Defender for Office 365 Plan 1
- Microsoft Defender for Office 365 Plan 2
- Microsoft 365 E5, Microsoft 365 E5 Security
- Office 365 E5
If you don't have one of those licenses, then you'll need to obtain a trial license.
Trial
To obtain a trial license for Microsoft Defender for Office 365, you need to have the Billing admin role or Global admin role. Request permission from someone who has the Global admin role. Learn about subscriptions and licenses.
Once you have the proper role, the recommended path is to obtain a trial license for Microsoft Defender for Office 365 (Plan 2) in the Microsoft 365 admin center by going to Billing > Purchase services. The trial includes a 30-day free trial for 25 licenses. Get a trial for Microsoft Defender for Office 365 (Plan 2).
You'll have a 30-day window with the evaluation to monitor and report on advanced threats. You'll also have the option to buy a paid subscription if you want the full Defender for Office 365 capabilities.
Roles
Exchange Online roles are required to set up Defender for Office 365 in evaluation mode. Assigning a Microsoft 365 compliance or security admin role won't work.
The following roles are needed:
Enhanced filtering
Your Exchange Online Protection policies, such as bulk and spam protection, will remain the same. However, the evaluation turns on enhanced filtering for connectors, which may impact your mail flow and Exchange Online Protection policies unless bypassed.
Enhanced filtering for connectors allows tenants to use anti-spoofing protection. Anti-spoofing is not supported if you're using an email security gateway (ESG) without having turned on Enhanced filtering for connectors.
URLs
URLs will be detonated during mail flow. If you don't want specific URLs detonated, manage your list of allowed URLs appropriately. See Manage the Tenant Allow/Block List for details.
URL links in the email message bodies won't wrap, to lessen customer impact.
Prepare the corresponding details that you will need to set up how your email is currently routed, including the name of the inbound connector that routes your mail. If you are just using Exchange Online Protection, you won't have a connector. Learn about mail flow and email routing
Supported email routing scenarios include:
- Third-party partner and/or on-premises service provider: The inbound connector that you want to evaluate uses a third-party provider and/or you're using a solution for email security on-premises.
- Microsoft Exchange Online Protection only: The tenant that you want to evaluate uses Office 365 for email security and the Mail Exchange (MX) record points to Microsoft.
If you're using a third-party email security gateway (ESG), you'll need to know the provider's name. If you're using an ESG on-premises or non-supported vendors, you'll need to know the public IP address(es) for the devices.
Supported third-party partners include:
- Barracuda
- IronPort
- Mimecast
- Proofpoint
- Sophos
- Symantec
- Trend Micro
Scoping
You will be able to scope the evaluation to an inbound connector. If there's no connector configured, then the evaluation scope will allow admins to gather data from any user in your tenant to evaluate Defender for Office 365.
Find the Microsoft Defender for Office 365 evaluation set-up card in the Office 365 Security & Compliance center from three access points:
- Threat management > Dashboard
- Threat management > Policy
- Reports > Dashboard
Setting up the evaluation
Once you start the set-up flow for your evaluation, you'll be given two routing options. Depending on your organization's mail routing setup and evaluation needs, you can select whether you are using a third-party and/or on-premises service provider or only Microsoft Exchange Online.
If you are using a third-party partner and/or on-premises service provider, you'll need to select the name of the vendor from the drop-down menu. Provide the other connector-related details.
Select Microsoft Exchange Online if the MX record points to Microsoft and you have an Exchange Online mailbox.
Review your settings and edit them if necessary. Then, select Create evaluation. You should get a confirmation message to indicate that your set-up is complete.
Your Microsoft Defender for Office 365 evaluation report is generated once per day. It may take up to 24 hours for the data to populate.
Exchange rules (optional)
If you have an existing gateway, enabling evaluation mode will activate enhanced filtering for connectors. This improves filtering accuracy by altering the incoming sender IP address. This may change the filter verdicts and if you are not bypassing Exchange Online Protection this may alter deliverability for certain messages. In this case you might want to temporarily bypass filtering to analyze impact. To bypass, navigate to the Exchange admin center and create a policy of SCL -1 (if you don't already have one). For details on the rule components and how they work, see Mail flow rules (transport rules) in Exchange Online.
Evaluate capabilities
After the evaluation report has been generated, see how many advanced threat links, advanced threat attachments, and potential impersonations were identified in the emails and collaboration workspaces in your organization.
Once the trial has expired, you can continue to access the report for 90 days. However, it won't collect any more information. If you want to continue using Microsoft Defender for Office 365 after your trial has expired, make sure you buy a paid subscription for Microsoft Defender for Office 365 (Plan 2).
You can go to Settings to update your routing or turn off your evaluation at any time. However, you need to go through the same set-up process again should you decide to continue your evaluation after having turned it off.
Provide feedback
Your feedback helps us get better at protecting your environment from advanced attacks. Share your experience and impressions of product capabilities and evaluation results.
Select Give feedback to let us know what you think. | https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/office-365-evaluation?view=o365-worldwide&viewFallbackFrom=o365-bc-worldwide | 2021-06-12T21:53:11 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.microsoft.com |
Use the raspberrypi_3b ID for the board option in “platformio.ini” (Project Configuration File):
[env:raspberrypi_3b]
platform = linux_arm
board = raspberrypi_3b
You can override default Raspberry Pi 3 Model B settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest raspberrypi_3b.json. For example, board_build.mcu, board_build.f_cpu, etc.
[env:raspberrypi_3b] platform = linux_arm board = raspberrypi_3b ; change microcontroller board_build.mcu = bcm2837 ; change MCU frequency board_build.f_cpu = 1200000000L | https://docs.platformio.org/en/stable/boards/linux_arm/raspberrypi_3b.html | 2021-06-12T21:30:49 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.platformio.org |
What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
Going back in time, applications moved from physical servers to virtual machines and then to containers. Containers have become popular because they decouple applications from the underlying infrastructure: the same containerized workload can run on-premises, on major public clouds, and anywhere else.
Why you need Kubernetes and what it can do
Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
What Kubernetes is not
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Rather, it comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
What's next
- Take a look at the Kubernetes Components
- Ready to Get Started? | https://v1-19.docs.kubernetes.io/docs/concepts/overview/what-is-kubernetes/ | 2021-06-12T20:02:33 | CC-MAIN-2021-25 | 1623487586390.4 | [] | v1-19.docs.kubernetes.io |
This page contains documentation on the specific parameters required by each supported bidder. These docs only apply to Prebid.js bidders. For Prebid Server, AMP, or Prebid Mobile, see the Prebid Server Bidders page.
For each bidder listed below, you’ll find the following information:
You can also download the full CSV data file. | https://docs.prebid.org/dev-docs/bidders.html | 2021-06-12T21:19:14 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.prebid.org |
Compatibility¶
- What are the backend Prolog compiler requirements to run Logtalk?
- Can I use constraint-based packages with Logtalk?
- Can I use Logtalk objects and Prolog modules at the same time?
What are the backend Prolog compiler requirements to run Logtalk?¶
See the backend Prolog compiler requirements guide.
Can I use constraint-based packages with Logtalk?¶
Yes, as long as the constraint-based package is available as a Prolog module or library. From within objects and categories, you can call its predicates using either explicit module qualification or encapsulate the call using the {}/1 control construct (thus bypassing the Logtalk compiler).
Can I use Logtalk objects and Prolog modules at the same time?¶
Yes. In order to call a module predicate from within an object (or category) you may use an use_module/2 directive or use explicit module qualification (possibly wrapping the call using the Logtalk control construct {}/1 that allows bypassing of the Logtalk compiler when compiling a predicate call). Logtalk also allows modules to be compiled as objects (see the Prolog integration and migration for details). | https://logtalk3.readthedocs.io/en/latest/faq/compatibility.html | 2021-06-12T20:11:32 | CC-MAIN-2021-25 | 1623487586390.4 | [] | logtalk3.readthedocs.io |
For details on securing connection credentials, see the Securing Connections section in the How-to Guides section of the documentation.
What’s the deal with
start_date?¶
Previously, it was recommended to round your start_date to match your schedule_interval (for example, starting a @monthly job on the first of the month). This is no longer required. Airflow will now auto align the start_date and the schedule_interval.
How can I create DAGs dynamically?¶
Airflow looks in your DAGS_FOLDER for modules that contain DAG objects in their global namespace. You can therefore generate DAGs in a loop and assign each one to a variable in globals() so the scheduler discovers them.
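A minimal sketch of this pattern (the DAG ids, schedule, and task are illustrative and not from the original FAQ):

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

def create_dag(dag_id):
    # Build one DAG object; schedule and task are placeholder examples.
    with DAG(dag_id=dag_id, start_date=datetime(2021, 1, 1),
             schedule_interval="@daily", catchup=False) as dag:
        BashOperator(task_id="hello", bash_command="echo hello")
    return dag

# Each generated DAG must end up in the module's global namespace,
# which is why the loop writes into globals().
for customer in ["acme", "globex"]:  # hypothetical list
    dag_id = f"per_customer_{customer}"
    globals()[dag_id] = create_dag(dag_id)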
What are all the airflow tasks run commands in my process list?¶
There are many layers of airflow tasks run commands, meaning it can call itself.
- Basic airflow tasks run: fires up an executor, and tells it to run an airflow tasks run --local command. If using Celery, this means it puts a command in the queue for it to run remotely on the worker. If using LocalExecutor, that translates into running it in a subprocess pool.
- Local airflow tasks run --local: starts an airflow tasks run --raw command (described below) as a subprocess and is in charge of emitting heartbeats, listening for external kill signals and ensures some cleanup takes place if the subprocess fails.
- Raw airflow tasks run --raw runs the actual operator's execute method and performs the actual work.
How can my airflow dag run faster?¶
There are a few variables we could control to improve airflow dag performance:
parallelism: This variable controls the number of task instances that runs simultaneously across the whole Airflow cluster. | https://airflow-apache.readthedocs.io/en/latest/faq.html | 2021-06-12T20:28:19 | CC-MAIN-2021-25 | 1623487586390.4 | [] | airflow-apache.readthedocs.io |
Note: Vendors are required prior to capturing expenses.
Vendor Index
To view the index of vendors, click on the Vendors button in the navigation bar. If you have created vendors, you will see them listed in this window, along with amounts outstanding to the vendor; these are tracked through your expenses. The totals for outstanding and paid amounts are also displayed. In the top right of the window, you can search for a vendor in your list.
Creating a Vendor
To create a vendor, click on the '+ Vendor' button, you will be brought to a window that has a single field for the vendor's name.
By clicking on the '+ Create Vendor' button, the vendor will be saved.
Editing a Vendor
In the vendor index window, if you click on a vendor's name, you will go to the edit vendor window.
In this window, you will be able to change the vendor's name and view any payments made to that vendor.
| https://docs.clica.co/vendors | 2021-06-12T20:16:25 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/img/article-img/mil4luwCFa.png', None], dtype=object)
array(['/img/article-img/LKULOlPEDV.png', None], dtype=object)
array(['/img/article-img/RUE1luIbZo.png', None], dtype=object)] | docs.clica.co |
Optimize network performance
Network performance can have a dramatic impact on a user's experience. In complex architectures with many different services, minimizing the latency at each network hop can affect the overall performance. In this unit, you'll learn about the importance of network latency and how to reduce it within your architecture. We'll also discuss strategies to minimize network latency between Azure resources and between users and Azure.
The importance of network latency
Latency is a measure of delay. Network latency is the time that it takes for data to travel between a source to a destination across a network. The time that it takes for data to travel from the source to a destination and for the destination to respond is commonly known as a round-trip delay.
In a traditional datacenter environment, latency might be minimal because resources often share the same location and a common set of infrastructure. The time taken to get from source to destination is typically lower when resources are physically close together.
In comparison, a cloud environment is built for scale. Cloud-hosted resources might not be in the same rack, datacenter, or region. This distributed approach can have an impact on the round-trip time of your network communications. While all Azure regions are interconnected by a high-speed fiber backbone, the speed of light is still a physical limitation. Calls between services in different physical locations will still have network latency directly correlated to the distance between them.
In addition, depending on the communication needs of an application, more round trips might be required. Each round trip comes with a latency tax, and each round trip adds to the overall latency. The following illustration shows how the latency perceived by the user is the combination of the round trips required to service the request.
Let's look at how to improve performance between Azure resources, and also from your users to your Azure resources.
Latency between Azure resources
Imagine that you work for a healthcare organization that's pilot testing a new patient booking system. This system runs on several web servers and a database. All of the resources are located in the West Europe Azure region. The scope of your pilot test is available only for users in Western Europe. This architecture minimizes your network latency, because all of your resources are colocated inside a single Azure region.
Suppose that your pilot testing of the booking system was successful. As a result, the scope of your pilot test has expanded to include users in Australia. When the users in Australia view your website, they'll incur the additional round-trip time that's necessary to access all of the resources that are located in West Europe. Their experience will be diminished because of the additional network latency.
To address your network latency issues, your IT team decides to host another front-end instance in the Australia East region. This design helps reduce the time for your web servers to return content to users in Australia. But their experience is still diminished because there's significant latency for data that's being transferred between the front-end web servers in Australia East and the database in West Europe.
There are a few ways you could reduce the remaining latency:
- Create a read-replica of the database in Australia East. A read replica allows reads to perform well, but writes still incur latency. Azure SQL Database geo-replication allows for read-replicas.
- Sync your data between regions with Azure SQL Data Sync.
- Use a globally distributed database such as Azure Cosmos DB. This database allows both reads and writes to occur regardless of location, but it might require changes to the way your application stores and references data.
How you'll improve your network latency depends on your application and data architecture. Azure provides mechanisms to resolve latency issues through several services.
Latency between users and Azure resources
You've looked at the latency between your Azure resources, but you should also consider the latency between your users and your cloud application. You want to optimize delivery of the front-end user interface to your users. Let's look at some ways to improve the network performance between your users and your application.
Use a DNS load balancer for endpoint path optimization
In our example scenario, your IT team created an additional web front-end node in Australia East. But users have to explicitly specify which front-end endpoint they want to use. As the designer of a solution, you want to make the experience as smooth as possible for users.
Azure Traffic Manager could help. Traffic Manager is a DNS-based load balancer that you can use to distribute traffic within and across Azure regions. Rather than having the user browse to a specific instance of your web front end, Traffic Manager can route users based on a set of characteristics:
- Priority: You specify an ordered list of front-end instances. If the one with the highest priority is unavailable, Traffic Manager routes the user to the next available instance.
- Weighted: You set a weight against each front-end instance. Traffic Manager then distributes traffic according to those defined ratios.
- Performance: Traffic Manager routes users to the closest front-end instance based on network latency.
- Geographic: You set up geographical regions for front-end deployments and route your users based on data sovereignty mandates or localization of content.
Traffic Manager profiles can also be nested. For example, you could initially route your users across different geographies (such as Europe and Australia) by using geographic routing. Then you can route them to local front-end deployments by using the performance routing method.
Recall that the organization in our example scenario deployed a web front end in West Europe and another front end in Australia. Let's assume that they deployed Azure SQL Database with their primary deployment in West Europe and a read replica in Australia East. Let's also assume the application can connect to the local SQL instance for read queries.
Your team deploys a Traffic Manager instance in performance mode and adds the two front-end instances as Traffic Manager profiles. As a user, you navigate to a custom domain name (for example, contoso.com) which routes to Traffic Manager. Traffic Manager then returns the DNS name of the West Europe or Australia East front end based on the best network latency performance.
It's important to note that this load balancing is only handled via DNS. No inline load balancing or caching is happening here. Traffic Manager simply returns the DNS name of the closest front end to the user.
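As a rough illustration only (the resource and DNS names are invented for this sketch), a performance-routed Traffic Manager profile and an endpoint could be created with the Azure CLI:

az network traffic-manager profile create \
    --resource-group booking-rg \
    --name booking-tm \
    --routing-method Performance \
    --unique-dns-name contoso-booking

az network traffic-manager endpoint create \
    --resource-group booking-rg \
    --profile-name booking-tm \
    --name australiaeast-frontend \
    --type azureEndpoints \
    --target-resource-id <resource ID of the Australia East front end>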
Use a CDN to cache content close to users
Your website likely uses some form of static content, either whole pages or assets such as images and videos. This static content can be delivered to users faster by using a content delivery network (CDN), such as Azure Content Delivery Network.
With content deployed to Azure Content Delivery Network, those items are copied to multiple servers around the globe. Let's say one of those items is a video served from blob storage:
HowToCompleteYourBillingForms.MP4. The team then configures the website so that each user's link to the video references the CDN edge server nearest them, rather than referencing blob storage. This approach puts content closer to the destination, which reduces latency and improves the user experience. The following illustration shows how using Azure Content Delivery Network puts content closer to the destination, which reduces latency and improves the user experience.
Content delivery networks can also be used to host cached dynamic content. Extra consideration is required though, because cached content might be out of date compared with the source. Context expiration can be controlled by setting a time to live (TTL). If the TTL is too high, out-of-date content might be displayed and the cache would need to be purged.
One way to handle cached content is with a feature called dynamic site acceleration, which can increase performance of webpages with dynamic content. Dynamic site acceleration can also provide a low-latency path to additional services in your solution. An example is an API endpoint.
Use ExpressRoute for connectivity from on-premises to Azure
Optimizing network connectivity from your on-premises environment to Azure is also important. For users who connect to applications, whether they're hosted on virtual machines or on platform as a service (PaaS) offerings like Azure App Service, you'll want to ensure they have the best connection to your applications.
You can always use the public internet to connect users to your services, but internet performance can vary and might be affected by outside issues. Also, you might not want to expose all of your services over the internet. You might want a private connection to your Azure resources. Site-to-site VPN over the internet is also an option. VPN overhead and internet variability can have a noticeable impact on network latency for high-throughput architectures.
Azure ExpressRoute can help. ExpressRoute is a private, dedicated connection between your network and Azure. It gives you guaranteed performance and ensures that your users have the best path to all of your Azure resources. The following illustration shows how an ExpressRoute circuit provides connectivity between on-premises applications and Azure resources.
If we consider our example scenario once again, your team decides to further improve user experience for users who are in their facilities by provisioning an ExpressRoute circuit in both Australia East and West Europe. This option gives users a direct connection to their booking system. It also ensures the lowest latency possible for their application.
Considering the impact of network latency on your architecture is important to ensure the best possible performance for your users. In this unit, we've looked at some options to lower network latency between users and Azure and between Azure resources.
Check your knowledge
Need help? See our troubleshooting guide or provide specific feedback by reporting an issue. | https://docs.microsoft.com/en-us/learn/modules/azure-well-architected-performance-efficiency/3-optimize-network-performance | 2021-06-12T21:52:19 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['media/3-network-latency.png',
'An illustration showing network latency among resources placed at different geographical locations in the cloud.'],
dtype=object)
array(['media/3-cdn.png',
'An illustration showing usage of Azure Content Delivery Network to reduce latency.'],
dtype=object)
array(['media/3-expressroute-connection-overview.png',
'An architectural diagram showing an ExpressRoute circuit connecting the customer network with Azure resources.'],
dtype=object) ] | docs.microsoft.com |
Security for Elasticsearch
Content
Security Overview
Search Guard is an Enterprise Security and Alerting suite for Elasticsearch and the entire Elastic Stack. It provides TLS encryption, Role Based Access Control (RBAC) to Elasticsearch indices, Document- and Field-level security controls and Audit Logging and Alerting capabilities.
Search Guard provides all the tools you need to encrypt, secure and monitor access to data stored in Elasticsearch, including Kibana, Logstash and Beats.
Search Guard comes in three flavours:
- Free Community Edition
- Enterprise Edition
- Compliance Edition for covering regulations like PCI, GDPR, HIPAA or SOX
TLS enryption for Elasticsearch
Search Guard encrypts all data flows in your Elasticsearch cluster, both on REST and on Transport layer. This ensures that:
- No one can sniff any data
- No once can tamper with your data
- Only trusted nodes can join your Elasticsearch cluster
Search Guard supports all modern cipher suites including Elliptic Curve Cryptography (ECC) and let’s you choose the allowed TLS versions.
Search Guard provides hostname validation and DNS lookups to ensure the validity and integrity of your certificates. Certificates can be changed by hot-reloading them on a running cluster without any downtimes. Search Guard supports both CRL and OCSP for revoking compromised certificates.
Elasticsearch authentication and authorization
Search Guard supports all major industry standards for authentication and authorization like:
- LDAP and Active Directory
- JSON Web token
- TLS client certificates
- Proxy authentication
- Kerberos
- OpenID Connect
- SAML
If you do not want or need any external authentication tool, you can also use the built-in user database.
Elasticsearch security controls
Search Guard adds Role-Based Access Control (RBAC) to your Elasticsearch cluster and indices. Search Guard roles define and control what actions a user is allowed to perform on any given Elasticsearch index. Roles can be defined and assigned to users on-the-fly without the need for any node or cluster restart.
You can use pre-defined action groups like READ, WRITE or DELETE to define access permissions. You can also mix action groups with single permissions to implement very fine-grained access controls if required. For example, grant a user the right to read data in an Elasticsearch index, but deny the permission to create an index alias.
Index names are dynamic and support wildcards, regular expressions, date/math expressions and variable substitution for dynamic role definitions.
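As a hedged sketch only (the role name, index pattern, and exact action-group names are assumptions and depend on your Search Guard version), a read-only role definition in sg_roles.yml might look like:

sg_logs_reader:
  cluster_permissions:
    - SGS_CLUSTER_COMPOSITE_OPS_RO   # assumed action group name
  index_permissions:
    - index_patterns:
        - "logs-*"                   # dynamic index pattern using a wildcard
      allowed_actions:
        - SGS_READ                   # read access, no write or delete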
Document- and Field-level access to Elasticsearch data
Search Guard also controls access to documents and fields in your Elasticsearch indices. You can use Search Guard roles to define:
- To what documents in an index a user has access to
- What fields a user can access and what other fields should be removed
- What fields should be anonymized
This can be defined at runtime. You do not need to decide upfront at ingest time but can apply all security controls to already existing indices and data.
Audit Logging
Search Guard tracks and monitors all data flows inside your Elasticsearch cluster and can produce audit trails on several levels. This includes
- Security related events like failed login attempts, missing privileges, spoofed headers or invalid TLS certificates
- Successfully executed queries
- Read-access to documents
- Write-access to documents including the changes in JSON patch format
The Audit Logging capabilities of Search Guard especially help to keep your Elasticsearch cluster compliant with regualations like GDPR, PCI, SOX or HIPAA.
Alerting - Anomaly detection for your Elasticsearch data
Search Guard comes with an Alerting module that periodically scans the data in your Elasticsearch cluster for anomalies and send out notifications on various channels like Email, PagerDuty, Slack, JIRA or any endpoint that provides a Webhook.
The Elasticsearch Alerting module provides a fully fledged escalation model so you can choose to send notifications on different channels based on the severity of the incident. Search Guard will also notify you once an anomaly is not detected anymore and everything is back to normal.
Additional resources | https://docs.search-guard.com/latest/security-for-elasticsearch | 2021-06-12T20:52:11 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.search-guard.com |
# Kubernetes Integration
# Background
Kubernetes supports a variety (opens new window) of ways to perform authentication against the API Server. While there is tremendous flexibility in the core product, operators can encounter various practical challenges:
- Cloud providers typically support only their native IAM implementation, which may not integrate with your IdP
- OIDC providers may not provide group claims, requiring manual mappings to RBAC roles
- Your IdP may not be reachable by the kubernetes control plane
- Access is managed per cluster without central control
- Dynamic privilege escalation during incidents are slow or cumbersome RBAC changes
- VPN based protection may not be possible or desirable
Similarly, Kubernetes supports native audit logging (opens new window) capabilities, but can also run into practical challenges:
- Cloud provider deployments may be ecosystem locked with limited tooling, if any
- Cross-cluster and cross-service audit trails must be stitched together by the operator
# Solution
Pomerium can be leveraged as a proxy for user requests to the API Server.
- Any supported IdP can be supported for authentication, in any environment
- Group membership is supported consistently
- Centralized, dynamic, course grained access policy
- Global, cross resource access and audit trail
- API server protection without the operational challenges of VPN
- Can be hosted inside your kubernetes cluster!
# How it works
Building on top of a standard Kubernetes and Pomerium deployment:
- Pomerium is given access to a Kubernetes service account with impersonation (opens new window) permissions
- A policy route is created for the API server and configured to use the service account token
- Kubernetes RoleBindings operate against IdP Users and Group subjects
- Users access the protected cluster through their standard tools, using pomerium-cli as an auth provider in ~/.kube/config
- Pomerium authorizes requests and passes the user identity to the API server for fine grained RBAC
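For example, a standard Kubernetes ClusterRoleBinding can grant a role to an IdP group that Pomerium forwards via impersonation; the group name and binding name below are placeholders that must match your IdP claims:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oncall-cluster-admin          # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: [email protected]           # must match the IdP group claim forwarded by Pomerium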
# Kubeconfig Setup
After installing the pomerium-cli, you must configure your kubeconfig for authentication. Substitute mycluster.pomerium.io with your own API Server's from in Pomerium's policy:
# Add Cluster
kubectl config set-cluster via-pomerium --server=https://mycluster.pomerium.io

# Add Context
kubectl config set-context via-pomerium --user=via-pomerium --cluster=via-pomerium

# Add credentials command
kubectl config set-credentials via-pomerium --exec-command=pomerium-cli \
    --exec-arg=k8s,exec-credential,https://mycluster.pomerium.io
# More info
See the complete walkthrough for a working end-to-end example. | https://0-13-0.docs.pomerium.io/docs/topics/kubernetes-integration | 2021-06-12T21:32:49 | CC-MAIN-2021-25 | 1623487586390.4 | [] | 0-13-0.docs.pomerium.io |
Benchmark Model Performance
The Benchmark utility analyzes your model and layer construction to estimate mobile performance.
note
Supported Platforms
Currently, benchmarking is only supported for Keras models. It provides a compatibility report and runtime prediction for Core ML.
Benchmark using the Fritz CLI
Using the Fritz CLI, you can easily benchmark a Keras Model.
If you have not done so, please follow the Python library setup guide before continuing.
The report will summarize all model layers and give an estimated runtime:
To get an existing grade report, specify the version uid of a previously uploaded model:
Benchmark inside Python code
You can easily benchmark models inside of your Python code.
If you have not done so, please follow the Python library setup guide before continuing.
1. Load the Keras Model and create a KerasFile object
First you must load the Keras model into memory and create a KerasFile object. The KerasFile is a subclass of FrameworkFileBase, which provides a standard interface for serializing and deserializing models from various frameworks.
2. Create a new ModelVersion
Next, you will upload the Keras model you wish to benchmark to Fritz. This will trigger the benchmarking process.
3. Run Benchmark
Finally, run model benchmarking and view the summary. This will print out the report. | https://docs.fritz.ai/training/python-sdk/benchmark/ | 2021-06-12T20:34:39 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/img/training/grade_report.png',
'Example Model Grade Report (a few layers not pictured)'],
dtype=object) ] | docs.fritz.ai |
RowIndicatorCustomDrawEventArgs Class
Provides data for the GridView.CustomDrawRowIndicator event.
Namespace: DevExpress.XtraGrid.Views.Grid
Assembly: DevExpress.XtraGrid.v18.2.dll
Declaration
public class RowIndicatorCustomDrawEventArgs : RowObjectCustomDrawEventArgs
Public Class RowIndicatorCustomDrawEventArgs Inherits RowObjectCustomDrawEventArgs
Remarks
The GridView.CustomDrawRowIndicator event fires before an indicator cell is painted. It enables you to paint indicator cells manually or modify their appearance settings before they are painted using the default mechanism. RowIndicatorCustomDrawEventArgs class properties allow you to identify the row whose indicator cell is being painted and provide settings common to all custom painting events. Refer to the Custom Painting Basics help topic for details.
The RowIndicatorCustomDrawEventArgs class differs from its ancestor only in the RowIndicatorCustomDrawEventArgs.Info property value. It doesn’t introduce any other functionality. | https://docs.devexpress.com/WindowsForms/DevExpress.XtraGrid.Views.Grid.RowIndicatorCustomDrawEventArgs?v=18.2 | 2021-06-12T21:15:35 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.devexpress.com |
Deploying a Production
You can deploy a production using either the Management Portal or Atelier. The Management Portal automates some steps that you need to perform manually using Atelier. To test a deployment before updating a live system, the test system should be a separate InterSystems IRIS® installation or namespace. If you deploy the package using the Interoperability > Manage > Deploy Changes > Deploy page, these steps are performed for you; if you deploy it by loading the XML file through Atelier or importing the classes from the Management Portal System Explorer, then you have to perform these steps manually.
In order to export and deploy a production, you must have the appropriate permissions, for example:
%Ens_Deploy:USE to access to the Interoperability > Manage > Deployment Changes page and deployment actions
%Ens_DeploymentPkg:USE to export the XML to the server
%Ens_DeploymentPkgClient:WRITE to export the XML locally using the web browser
%Ens_DeploymentPkgClient:USE to deploy the XML using the web browser
By default, these resources are granted automatically only to users with the role %EnsRole_Administrator. For more information, see Ensemble Resources to Protect Activities.
Exporting a Production
To export the XML for a production using the Management Portal, open the production, click Production Settings and the Actions tab and then click the Export button. InterSystems IRIS selects all business services, business processes, business operations, and some related classes, and then displays the following form to allow you to add export notes and additional components.
Record maps—the defined and generated classes are included.
Complex record maps—the defined and generated classes are included.
Lookup tables
User classes referenced in code
System default settings or schedule specifications that are set as deployable. You can save the export file to the server or locally via the browser’s downloading capability. If you export it to the server, you can specify the file location. If you export it via the web browser, you can specify the file name.
The deployment package contains the following information about how it was created:
Name of the system running InterSystems IRIS.
InterSystems IRIS.
If a production uses XSD schemas for XML documents or uses an old format schema for X12 documents, the schemas are not included in the XML deployment file and have to be deployed through another mechanism; see the InterSystems IRIS Using Globals guide for details on exporting globals.
Deploying a Production on a Target System
The Management Portal automates the process of deploying a production from a development system to a live system. This section describes what InterSystems IRIS does when you are loading a new version of a production on a live system.
Once you have the deployment package XML file, you can load it on a target system. In the Management Portal, select the correct namespace and click Interoperability, Manage, Deployment Changes, Deploy, and then click the Open Deployment or Open Local Deployment button, depending on whether the XML deployment package is located on the server or on the local machine. The Open Local Deployment button is not active if you are on the server machine. After you select the XML deployment package file,, InterSystems IRIS, InterSystems IRIS, use the Open Deployment button to select the rollback file, then click the Deploy button. | https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_DEPLOYING | 2021-06-12T20:23:59 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['images/egdv_production_export.png',
'Export from Production dialog box with various components of the production selected for export'],
dtype=object) ] | docs.intersystems.com |
manila-manage¶
control and manage shared filesystems¶
- Author
[email protected]
- Date
2014-06-11
OpenStack LLC
- Version
2014.2
- Manual section
1
- Manual group
shared filesystems
DESCRIPTION¶
manila-manage controls the shared filesystems service. More information about OpenStack Manila is available in the OpenStack documentation.
OPTIONS¶
The standard pattern for executing a manila-manage command is:
manila-manage <category> <command> [<args>]
For example, to obtain a list of all hosts:
manila-manage host list
Run without arguments to see a list of available command categories:
manila-manage
Categories are shell, logs, service, db, host, version and config. Detailed descriptions are below.
These sections describe the available categories and arguments for manila-manage.
Manila Db¶
manila-manage db version
Print the current database version.
manila-manage db sync
Sync the database up to the most recent version.
Manila Logs¶
manila-manage logs errors
Displays manila errors from log files.
manila-manage logs syslog <number>
Displays manila alerts from syslog.
Manila Shell¶
Manila Config¶
manila-manage config list
Returns list of currently set config options and its values.
BUGS¶
Manila is sourced in Launchpad so you can view current bugs at OpenStack Manila | https://docs.openstack.org/manila/victoria/cli/manila-manage.html | 2021-06-12T20:57:02 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.openstack.org |
Back to Publisher API Reference
pbjs.aliasBidder(adapterName, aliasedName, options)
To define an alias for a bidder adapter, call this method at runtime:
pbjs.aliasBidder('appnexus', 'newAlias', options: { gvlid: 111111} );
Defining an alias can help avoid user confusion since it's possible to send parameters to the same adapter but in different contexts (e.g., the publisher uses "appnexus" for demand and also uses "newAlias", which is an SSP partner that uses the "appnexus" adapter to serve their own unique demand).
If you define an alias and are using pbjs.sendAllBids, you must also set up additional line items in the ad server with keyword targeting that matches the name of the alias. For example:
hb_pb_newalias
hb_adid_newalias
hb_size_newalias
hb_deal_newalias
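Putting it together, a hedged usage sketch (the ad unit code and placement IDs are invented for illustration):

pbjs.que.push(function () {
  pbjs.aliasBidder('appnexus', 'newAlias');
  pbjs.addAdUnits([{
    code: 'div-banner-1',                                       // hypothetical ad slot
    mediaTypes: { banner: { sizes: [[300, 250]] } },
    bids: [
      { bidder: 'appnexus', params: { placementId: 1111111 } }, // hypothetical placement IDs
      { bidder: 'newAlias', params: { placementId: 2222222 } }
    ]
  }]);
});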
The options object supports these parameters:
Creating an alias for a Prebid Server adapter is done differently. See the 'extPrebid' config in the s2sConfig object.
Back to Publisher API Reference | https://docs.prebid.org/dev-docs/publisher-api-reference/aliasBidder.html | 2021-06-12T20:56:18 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.prebid.org |
Package org.codehaus.groovy.ast.tools
Class WideningCategories
- java.lang.Object
- org.codehaus.groovy.ast.tools.WideningCategories
public class WideningCategories extends Object
This class provides helper methods to determine the type resulting from a widening operation, for example for a plus operation.
Method Detail
isInt
public static boolean isInt(ClassNode type)
Used to check if a type is an int or Integer.
- Parameters:
type - the type to check
isDouble
public static boolean isDouble(ClassNode type)
Used to check if a type is a double or Double.
- Parameters:
type - the type to check
isFloat
public static boolean isFloat(ClassNode type)
Used to check if a type is a float or Float.
- Parameters:
type - the type to check
isIntCategory
public static boolean isIntCategory(ClassNode type)
It is of an int category, if the provided type is a byte, char, short, int.
isLongCategory
public static boolean isLongCategory(ClassNode type)
It is of a long category, if the provided type is a long, its wrapper or if it is a long category.
isBigIntCategory
public static boolean isBigIntCategory(ClassNode type)
It is of a BigInteger category, if the provided type is a long category or a BigInteger.
isBigDecCategory
public static boolean isBigDecCategory(ClassNode type)
It is of a BigDecimal category, if the provided type is a BigInteger category or a BigDecimal.
isDoubleCategory
public static boolean isDoubleCategory(ClassNode type)
It is of a double category, if the provided type is a BigDecimal, a float, double. C(type)=double
isFloatingCategory
public static boolean isFloatingCategory(ClassNode type)
It is of a floating category, if the provided type is a float, double. C(type)=float
lowestUpperBound
public static ClassNode lowestUpperBound(List<ClassNode> nodes)
Given a list of class nodes, returns the first common supertype. For example, Double and Float would return Number, while Set and String would return Object.
- Parameters:
nodes - the list of nodes for which to find the first common super type.
- Returns:
- first common supertype
lowestUpperBound
public static ClassNode lowestUpperBound(ClassNode a, ClassNode b)
Given two class nodes, returns the first common supertype, or the class itself if they are equal. For example, Double and Float would return Number, while Set and String would return Object. This method is not guaranteed to return a class node which corresponds to a real type. For example, if two types have more than one interface in common and are not in the same hierarchy branch, then the returned type will be a virtual type implementing all those interfaces. Calls to this method are supposed to be made with resolved generics. This means that you can have wildcards, but no placeholder.
- Parameters:
a - first class node
b - second class node
- Returns:
- first common supertype
implementsInterfaceOrSubclassOf
public static boolean implementsInterfaceOrSubclassOf(ClassNode source, ClassNode targetType)
Determines if the source class implements an interface or subclasses the target type. This method takes the lowest upper bound class node type into account, allowing unnecessary casts to be removed.
- Parameters:
source - the type of interest
targetType- the target type of interest | https://docs.groovy-lang.org/docs/latest/html/api/org/codehaus/groovy/ast/tools/WideningCategories.html | 2021-06-12T21:18:54 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.groovy-lang.org |
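A small Groovy sketch of the two-argument lowestUpperBound, using the Double/Float example from the description above (having the Groovy jar on the classpath is assumed):

import org.codehaus.groovy.ast.ClassHelper
import org.codehaus.groovy.ast.tools.WideningCategories

// Double and Float share Number as their first common supertype.
def lub = WideningCategories.lowestUpperBound(ClassHelper.Double_TYPE, ClassHelper.Float_TYPE)
println lub.name   // expected to print java.lang.Number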
Package org.codehaus.groovy.jsr223
Class ScriptStaticExtensions
- java.lang.Object
- org.codehaus.groovy.jsr223.ScriptStaticExtensions
Method Detail
$static_propertyMissing
public static ScriptEngine $static_propertyMissing(ScriptEngineManager self, String languageShortName)
Provides a convenient shorthand for accessing a Scripting Engine with name languageShortName using a newly created ScriptEngineManager instance.
- Parameters:
self - Placeholder variable used by Groovy categories; ignored for default static methods
languageShortName - The short name of the scripting engine of interest
- Returns:
- the ScriptEngine corresponding to the supplied short name or null if no engine was found
- Since:
- 1.8.0 | https://docs.groovy-lang.org/docs/latest/html/api/org/codehaus/groovy/jsr223/ScriptStaticExtensions.html | 2021-06-12T20:56:22 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.groovy-lang.org |
About This Book
This book is a guide to managing I/O and devices with ObjectScript on InterSystems IRIS® data platform database.
Its chapters are:
Introduction to InterSystems IRIS I/O
Local Interprocess Communication
TCP Client/Server Communication
UDP Client/Server Communication
There is also a detailed Table of Contents.
The following documents provide information about related concepts: | https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GIOD_PREFACE | 2021-06-12T20:38:50 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.intersystems.com |
How to List your Project/dApp¶
Disclaimer: The content in this section is entirely managed by the projects themselves. Moonbeam is a permissionless network. Any project can deploy its contracts to Moonbeam.
Minimum Content¶
In general, to be considered and added to this list, your project/dApp must meet the following requirements in terms of content:
- Introduction (see template file)
- Show working contracts and/or front-ends deployed or connected to the Moonbase Alpha TestNet
- Explain to users how they can test or integrate your project/dApp
- Link the GitHub repos of the code
- Link to communication channels
Getting Started¶
This guide will help you get started on listing your project/dApp in the Moonbeam docs site.
Forking/Cloning the Repo¶
The main idea is to fork a repository, modify it with your changes, and then submit a PR.
So, as mentioned before, first fork this repository.
Choose Category and Copy Template¶
Next, choose the category that relates to your project the most. There is a folder per category. If you think we are missing a category, contact us via our Discord channel, we are happy to add it to the list.
You can use the template.md file (which you can find here) as reference.
For example, let's say your project is named "Rocket Project" and related to DeFi. Then, you would need to copy this file inside the following folder:
moonbeam-project-directory
|--apis
|--assets
|--bridges
|--defi
|--|--rocket-project.md
|--explorers
...
Changing Title - Description - First Heading¶
In the copied template, update the page title, description, and first heading so they match your project's name.
Images related to your documentation can be saved inside the images folder, located in the repo's main directory. Please create a folder in which to save your images. For our previous example, this would be in:
moonbeam-project-directory
|--apis
...
|--defi
|--explorers
|--images
|--|--rocket-project
|--|--|--image1.png
|--|--|--image2.svg
|--marketplaces
...
Submitting PR¶
Once you are done with your documentation, you can submit your pull-request from your forked repo.
Our team will check this PR to make sure it complies with the minimum requirements to be listed. | https://docs.moonbeam.network/dapps-list/ | 2021-06-12T20:07:48 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['images/list-dapps-banner.png', 'Template banner image'],
dtype=object) ] | docs.moonbeam.network |
A Project can have multiple Sprite Atlases for different purposes (for example, Variant Atlases with lower-resolution Textures for hardware with different limitations). If you enable all available Sprite Atlases, you might encounter conflicts (refer to Resolving different Sprite Atlas scenarios for more information).
To prevent these issues, properly prepare Sprite Atlases for distribution with the following steps:
Unity includes Sprite Atlases in a Project’s build by default, and automatically loads them at run time. Clear the Include in Build setting of the selected Sprite Atlas to disable this behavior.
If ‘Include in Build’ is disabled, Unity still packs the Sprite Atlas into a *.spriteatlas file in the Project’s Assets folder. However, Sprites which reference Textures in an disabled Sprite Atlas appear invisible as the reference Texture is not available or loaded. Unity does not include the disabled Sprite Atlas in the Project’s published build, and does not automatically load it at run time. To do so, a script is required to load the Sprite Atlas via Late Binding. | https://docs.unity3d.com/cn/2018.3/Manual/SpriteAtlasDistribution.html | 2021-06-12T22:04:15 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.unity3d.com |
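A minimal late-binding sketch in C# for the Sprite Atlas case described above (loading from a Resources folder named after the atlas tag is an assumption; adapt the loading code to AssetBundles or Addressables as needed):

using UnityEngine;
using UnityEngine.U2D;

public class AtlasLateBinder : MonoBehaviour
{
    void OnEnable()  { SpriteAtlasManager.atlasRequested += OnAtlasRequested; }
    void OnDisable() { SpriteAtlasManager.atlasRequested -= OnAtlasRequested; }

    // Invoked when a Sprite references an atlas whose Include in Build option is disabled.
    void OnAtlasRequested(string tag, System.Action<SpriteAtlas> callback)
    {
        // Assumes the atlas asset lives in a Resources folder and is named after its tag.
        var atlas = Resources.Load<SpriteAtlas>(tag);
        callback(atlas);
    }
}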
ProVision 5.3.2 is a minor release with bug fixes.
PHP Compatibility
Please note that ProVision version 5.3.0+ will require php version 5.6.
5.3.0 Peering DB Changes
ProVision version 5.3.0+ uses the Peering DB 2.0 API. As of PeeringDB 2.0, SQL dump files are no longer provided. If you are using ProVision 5.3.0 or higher, you must follow the new install process detailed at Local Installations: Peering Setup. If you are a ProVision Cloud customer and are hosted out of 6connect's environment, this has already been set up and requires no further action on your part.
Contact 6connect at [email protected] to schedule a demo or get more information.
Additional Features
IPAM Rules for DHCP Pools
IPAM Rules may now be applied to DHCP Pool ranges when using Smart Assign.
For Smart Assign DHCP Pool creation, existing IPAM Rules may be applied to reserve additional addresses out of the pool range. To create an IPAM Rule, see IPAM Rule.
DHCP Pools and IP Rules
For DHCP Pools, ProVision automatically reserves the first and last address of the pool for Gateway and Broadcast addresses, respectively.
If an additional IPAM Rule is applied, the rule will begin with the second address in the block.
For example: if a DHCP Pool is created using 10.0.0.64/29 with an IPAM Rule of "Reserve First Three", the resulting pool range would be 10.0.0.68 through 10.0.0.70, as the first four as well as the final address would be reserved.
To apply an IPAM Rule to a DHCP Pool, select "Apply an IPAM Rule" to view a list of existing rules.
Select a rule, as well as any other criteria, and click "Add Pool".
The resulting Pool will be created with the adjusted range.
Bug Fixes/Improvements
IM-XXX: General CPNR connector enhancements.
IM-2434: Resolved an issue that prevented updates to Contact Info from saving.
IM-2438: Resolved an issue that prevented updates to Contacts from saving.
IM-2343: Resolve an issue where smart assign was unable to assign to /32 and /128 blocks under subassignable parents. Blocks of /32s and /128s are now subassignable as a result.
IM-2360: Smart Browse now only shows blocks not assigned to the current resource.
IM-2367: Subassignable blocks in IPAM Manage may now be merged, so long as it, its parent, and its sibling block are all assigned the same resource.
IM-2389: Users lists in the User Permissions Chart and Check User Permissions areas no longer include deactivated users.
IM-2446: Restored pagination on the Resource Entries lists.
IM-2449: Replaced "No User Groups" page with a "Permissions Error" page that displays for users attempting to log in with no Groups or Resource permissions.
IM-2450: The "Group Information" hover display now lists the resources the current user's group(s) affects. | https://docs.6connect.com/display/DOC/ProVision+5.3.2 | 2021-06-12T20:58:47 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.6connect.com |
In the top bar, you will see an icon with an arrow; hovering over it will bring up another menu for your user account.
Edit Details
In this window, you can change details like:
First & Last Name
Subscription
Here, you will see your current subscription, when it will end, and if it is active or not.
Support
In this window, you will see the list of current tickets if you have submitted any, if you want to send a new one, click on the New Ticket button, and you will be directed to the page where you can send a ticket through to our team.
| https://docs.clica.co/account-support | 2021-06-12T20:51:54 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/img/article-img/5Asv1McHbu.png', None], dtype=object)
array(['/img/article-img/ObsAONSqy1.png', None], dtype=object)
array(['/img/article-img/nK8m6fZqsJ.png', None], dtype=object)
array(['/img/article-img/c0ATFLrZ04.png', None], dtype=object)
array(['/img/article-img/cL1t1Un6m9.png', None], dtype=object)
array(['/img/article-img/EYs7NWffaT.png', None], dtype=object)] | docs.clica.co |
Games & Teams¶
Note
These commands are toggled, if you want to remove something from the list, run the command again.
CouchBot can alert you when members of a specific stream team have gone live. For developers, we even have the game announcmenet feature allowing anyone playing a particular game to be announced.
If you have a game or want to announce a game, then use the following settings.
If you have a team or want to announce a team, then use the following settings. An example is the sutv channel on Twitch. | https://docs.couch.bot/gameteam.html | 2021-06-12T19:52:09 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.couch.bot |
This guide covers the steps for a basic configuration of sahara. It will help you to configure the service in the most simple manner.
Sahara is packaged with a basic sample configuration file: sahara.conf.sample-basic. This file contains all the essential parameters that are required for sahara. We recommend creating your configuration file based on this basic example.
If a more thorough configuration is needed we recommend using the tox tool to create a full configuration file by executing the following command:
$ tox -e genconfig
Running this command will create a file named sahara.conf.sample in the etc/sahara directory of the project.
After creating a configuration file by either copying the basic example or generating one, edit the connection parameter in the [database] section. The URL provided here should point to an empty database. For example, the connection string for a MySQL database will be:
connection=mysql://username:password@host:port/database
Next you will configure the Identity service parameters in the [keystone_authtoken] section. The auth_uri parameter should point to the public Identity API endpoint. The identity_uri should point to the admin Identity API endpoint. For example:
auth_uri=<public Identity API endpoint>
identity_uri=<admin Identity API endpoint>
Specify the username, password and project_name. These parameters must specify an Identity user who has the admin role in the given project. These credentials allow sahara to authenticate and authorize its users.
Next you will configure the default Networking service. If using neutron for networking, the following parameter should be set in the [DEFAULT] section:
[DEFAULT]
use_neutron=true
With these parameters set, sahara is ready to run.
By default sahara's log level is set to INFO. If you wish to increase the logging levels for troubleshooting, set debug to true in the [DEFAULT] section of the configuration file.
By default sahara is configured to use neutron. Additionally, if the cluster supports network namespaces, the use_namespaces property can be used to enable their usage.
[DEFAULT]
use_namespaces=True
Note
If a user other than root will be running the Sahara server instance and namespaces are used, some additional configuration is required; please see Non-root users for more information.
During cluster setup sahara must access instances through a secure shell (SSH). To establish this connection it may use either the fixed or floating IP address of an instance. By default sahara is configured to use floating IP addresses for access. This is controlled by the use_floating_ips configuration parameter. With this setup the user has two options for ensuring that the instances in the node group templates that require floating IPs gain a floating IP address:
From Newton changes were made to allow the coexistence of clusters using floating IPs and clusters using fixed IPs. If use_floating_ips is True it means that the floating IPs can be used by Sahara to spawn clusters. But, differently from previous versions, this does not mean that all instances in the cluster must have floating IPs and that all clusters must use floating IPs. It is possible in a single Sahara deploy to have clusters set up using fixed IPs, clusters using floating IPs and clusters that use both.
If not using floating IP addresses (use_floating_ips=False) sahara will use fixed IP addresses for instance management. When using neutron for the Networking service the user will be able to choose the fixed IP network for all instances in a cluster.
Sahara can be configured to send notifications to the OpenStack Telemetry module. To enable this functionality the following parameter (enable) should be set in the [oslo_messaging_notifications] section of the configuration file:
[oslo_messaging_notifications]
enable = true
And the following parameter (driver) should be set in the [oslo_messaging_notifications] section of the configuration file:
[oslo_messaging_notifications]
driver = messaging
By default sahara is configured to use RabbitMQ as its message broker. If you are using RabbitMQ as the message broker, then you should set the following parameter in the [DEFAULT] section:
rpc_backend = rabbit
You may also need to specify the connection parameters for your RabbitMQ installation. The following example shows the default values in the [oslo_messaging_rabbit] section which may need adjustment:
rabbit_host=localhost
rabbit_port=5672
rabbit_hosts=$rabbit_host:$rabbit_port
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
By default sahara is configured to use the heat engine for instance creation. The heat engine uses the OpenStack Orchestration service to provision instances. This engine makes calls directly to the services required for instance provisioning.
Sahara's public API calls may be restricted to certain sets of users by using a policy configuration file. The location of the policy file(s) is controlled by the policy_file and policy_dirs parameters in the [oslo_policy] section. By default sahara will search for a policy.json file in the same directory as the sahara.conf configuration file.
Example 1. Allow all method to all users (default policy).
{ "default": "" }
Example 2. Disallow image registry manipulations to non-admin users.
{ "default": "", "data-processing:images:register": "role:admin", "data-processing:images:unregister": "role:admin", "data-processing:images:add_tags": "role:admin", "data-processing:images:remove_tags": "role:admin" }
Sahara uses the api-paste.ini file to configure the data processing API service. For middleware injection sahara uses the pastedeploy library. The location of the api-paste file is controlled by the api_paste_config parameter in the [default] section. By default sahara will search for an api-paste.ini file in the same directory as the configuration file.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/sahara/queens/admin/configuration-guide.html | 2021-06-12T21:19:40 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.openstack.org |
Resource metrics pipeline
Resource usage metrics, such as container CPU and memory usage, are available in Kubernetes through the Metrics API. These metrics can be accessed either directly by the user with the kubectl top command, or by a controller in the cluster, for example Horizontal Pod Autoscaler, to make decisions.
The Metrics API:
- it is discoverable through the same endpoint as the other Kubernetes APIs under the path:
/apis/metrics.k8s.io/
- it offers the same security, scalability, and reliability guarantees
The API is defined in k8s.io/metrics repository. You can find more information about the API there.
Note: The API requires the metrics server to be deployed in the cluster. Otherwise it will be not available.
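For example (jq is optional and only used here for pretty-printing):

kubectl top node
kubectl top pod --namespace kube-system
# Query the Metrics API directly through the API server aggregation layer
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .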
Measuring Resource Usage
CPU
CPU is reported as the average usage, in CPU cores, over a period of time. This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The kubelet chooses the window for the rate calculation.
Memory
Memory is reported as the working set, in bytes, at the instant the metric was collected. In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate. It includes all anonymous (non-file-backed) memory since Kubernetes does not support swap. The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
Metrics Server
The metrics server is a cluster-wide aggregator of resource usage data.
Metrics Server collects metrics from the Summary API, exposed by Kubelet on each node, and is registered with the main API server via Kubernetes aggregator.
Learn more about the metrics server in the design doc. | https://v1-20.docs.kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/ | 2021-06-12T20:29:16 | CC-MAIN-2021-25 | 1623487586390.4 | [] | v1-20.docs.kubernetes.io |
Logs
Survey of Frequently Used Logs
The following table shows frequently used logs and what they record. A full list can be found in the WhatsUp Gold Logs Library.
This logger:
Records the following:
Action Log
Triggered and scheduled tasks referred to in WhatsUp Gold as actions.
Actions Applied
State that triggered the action and the device the action was applied to.
Activity Log
System-level events including startup and initialization, shutdown, and restart.
Blackout Summary Log
Records actions not triggered during a scheduled blackout period.
Discovery Scan Log
History of network discovery scans and including controls to relaunch.
Error Log - General
Error notifications, failure notifications, and unhandled exception messages re-directed from standard error.
Error Log - Logger Health Messages
Reveals both failures and routine health checkpoint messages for the WhatsUp Gold polling engine.
Error Log - Passive Monitor
Error and failure messages gathered from passive monitors.
Error Log - Performance Monitor
Error and failure messages gathered from performance monitors.
Policy Audit
Configuration record changes based on deployed patterns and policies.
Recurring Action Log
Records scheduled tasks that are recurring.
SNMP Trap Log (SNMP Trap Log, Syslog)
History of SNMP trap notifications.
Scheduled Report Log
Records recurring and scheduled report events.
Syslog
Syslog events. (Appropriate syslog listener needs to be configured.)
Web User Activity Log
Records user interaction with the WhatsUp Gold web UI.
Windows Event Log
Records Windows Event Log entries. (Appropriate WMI listener needs to be configured.)
Task Log (Configuration Management)
Aggregates messages generated by Configuration Management.
Start vs. Run (Configuration Management)
Startup vs. running configuration change events tracked by Configuration Management.
Policy Audit (Configuration Management)
Configuration Management Policy out of compliance events.
NTA Log
Network Traffic Analysis (NTA) events.
Unclassified Traffic
Events for traffic over unexpected ports and the network interface that carried the traffic.
Hyper-V Event Log (Virtual Monitoring)
Hyper-V virtual machine events received from configured monitors.
VMware Event Log (Virtual Monitoring)
VMware virtual machine events received from configured monitors.
Wireless Log
Wireless device and monitoring events.
Alert Center
View alert center engine events.
APM Applications State Change Log
Transitions in application monitoring states
APM Resolved Items Log
Resolved actions for all instances or components in the selected application, profile, or selected component.
See Also
Logs
About Logs
Logging Quick Start
Consolidated Logs
Actions Applied Log
Actions Activity Log
Scan History
General Error Log
Hyper-V Event Log
Logger Health Messages
Passive Monitor Error Log
Performance Monitor Error Log
SNMP Trap Log
Task Log
VMware Event Log
Wireless
Alert Center Log View
Network Traffic Analyzer Logs
Applications State Change Log
APM-Resolved Items Log
Quick Help Links | https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/41456.htm | 2021-07-24T01:47:28 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.ipswitch.com |
The Minerva roadmap is an approximate outline what we think will be needed in the future. It is subject to changes depending on market conditions, community feedback and funding.
Last update: 20 July, 2021 | Change Log
The focus will be to make the Minerva Wallet fully financially viable by introducing the $MIVA SuperToken. Growing the Minerva community is the primary focus and this will be supported through Airstreams (streamed Airdrops) and other incentives via $MIVA.
Introduce $MIVA SuperTokens via a 1 million Airstreaming event (Final Tweet)
Listing on Honeyswap; CoinGecko listing
$MIVA Auction & Airstream #2 (total up to 2 million Supertokens)
Enable $MIVA farming options
Make 'Farm Position' NFTs available through the Eporio secondary market
Start new cooperations (Superfluid, Connext, Eporio)
Google Play Store listing
Launching the new Minerva Wallet website
Show fiat values for all ERC-20 tokens and improve token auto-discovery
Fiat on-ramp integration with Ramp.network (xDai, ETH, Dai and USDC)
Improve WalletConnect session management
Improve security settings options to configure the access to keys and seed
Allow the protection of transactions with PIN/fingerprint/face
Change to fiat currencies other than EUR, e.g. USD, GBP, etc.
Simplify account management for all networks
Enable Polygon (Matic) support
Extend fiat on-ramp integration with Ramp.network to Polygon (Matic)
After laying the foundation to reduce the complexity in multi-chain applications, growing the Minerva Wallet community is the major focus in Q3. This is planned to be realized through the integration of new networks and various activities around the $MIVA SuperToken.
Update CoinGecko Listing - show Circulating supply
List on Symmetric DEX
Setting up a $MIVA rewarded test program
Extend options for Minerva Streaming Farms
CoinMarketCap listing
Increase $MIVA liquidity on Honeyswap through additional farming
List $MIVA on Polygon (Quickswap or Sushiswap)
Conduct Gnosis Auctions for the Minerva community
Get $MIVA listed on a CEX
Change account name
Enable mobile WalletConnect
Automatically update token balances
Enable ERC-721 support and NFT management in accounts
Integrate Gnosis Safe account for main networks
Enable Arbitrum and Binance Smart Chain (BSC)
Integrate bridging of tokens between EVM chains via Connext
Enable backup wallet to personal data vault (on Swarm)
Integrate EIP-1559 support
Add streaming token support to start, change and stop streams
Major update and improvement of the Activity Screen
Allow import of private keys for main networks
Improved backup procedure for Secret Words
Allow import of arbitrary Secret Words
Multi-language UI
UI improvements:
Add address to contacts list
Improve UI - separate Tokens / Investments / NFTs
Improve and adapt services integration
Warn on high gas cost and allow additional actions
Notification Auto-Dismiss
This is likely the quarter where many DApps will be going towards multi-chain support, and therefore the Minerva Wallet will be able to fulfil the needs of many new users coming to the space.
Integration of WalletConnect v2.0
Integrate bridging of tokens between various EVM chains via Connext
Integrate NFC card support
Evaluate integration of BrightID
Enable adding custom networks
Enable configuration of custom RPCs - e.g. for DAppNode
Integrate push notification service
Push notifications for pending transactions, with options to cancel and speed them up
Integrate login with DIDAuth or similar
Integrate JSON-LD for Verifiable Credentials
Show pending transactions and allow users to cancel and speed them up
Minerva should be able to serve communities and use cases on xDai Chain (e.g. 1Hive, Honeyswap or ZeroAlpha) and ARTIS ∑1 (e.g. 7Energy - Energy Communities) especially well.
Add ARTIS τ1 faucet button
Enable manual token adding to accounts
Integrate WalletConnect v1.0 support for supported main networks
Automate ERC-20 token discovery and icon management
Allow export of private keys for main networks
Improving the discovery of account balances
Improving the backup of metadata needed for the wallet recovery
UI improvements:
New account layout prepared for coin, tokens and collectibles
Preparations to improve the way tokens are shown in accounts
While the development from Q1-Q3/2020 was more in a proof-of-concept mode, the Q4/2020 development is targeted towards main networks integration and UI simplification & extension.
Enable 7 test networks - ARTIS τ1, LUKSO L14, POA Sokol, Kovan, Rinkeby, Ropsten and Görli
Extend settings with legal terms, main network activation and info links
Create wallet in offline mode
Implement an experimental multiple device mode
Full support of checksum addresses
Enable main networks - Ethereum, xDai Chain, ARTIS ∑1 and POA Network
UI improvements:
Improve address field and gas estimate calculation
Link send and receive screen | https://docs.minerva.digital/roadmap | 2021-07-24T02:29:38 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.minerva.digital |
ASIN Function - Trigonometry arc functions. | https://docs.trifacta.com/display/r076/ASIN%20Function | 2021-07-24T02:36:46 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.trifacta.com
UiPath.OracleNetSuite.Activities.UpdateRecord
The Update Record activity uses the NetSuite update operation to update a specific record (internalid).
To use the activity, add the Update Record activity inside the NetSuite Application Scope activity.
- Click the Configure button inside the Update Record activity (this opens the Object Wizard).
- Select the Object that you want to update and enter, at a minimum, the internalid of the record.
- Create and enter a ResponseStatus variable for the Output property.
Configuration
To enter your Update Record property values, you must use the Object Wizard by clicking the Configure button.
To learn more, see the Wizards section in the About page.
- Internalid - The id of the NetSuite record that you want to update. To get the internalid value, | https://docs.uipath.com/activities/lang-zh_CN/docs/oracle-netsuite-update-record | 2021-07-24T01:51:35 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['https://files.readme.io/9694490-UpdateRecord_MSC.png',
'UpdateRecord_MSC.png'], dtype=object)
array(['https://files.readme.io/9694490-UpdateRecord_MSC.png',
'Click to close...'], dtype=object) ] | docs.uipath.com |
Abstract
This PEP describes a command-line driven online help facility for Python. The facility should be able to build on existing documentation facilities such as the Python documentation and docstrings. It should also be extensible for new types and modules.
Interactive use
Simply typing "help" describes the help function (through repr() overloading).
"help" can also be used as a function.
The function takes the following forms of input:
help( "string" ) -- built-in topic or global help( <ob> ) -- docstring from object or type help( "doc:filename" ) -- filename from Python documentation
If you ask for a global, it can be a fully-qualified name, such as:
help("xml.dom")
You can also use the facility from a command-line:
python --help if
In either situation, the output does paging similar to the "more" command.
Implementation
The help function is implemented in an onlinehelp module which is demand-loaded.
There should be options for fetching help information from environments other than the command line through the onlinehelp module:
onlinehelp.gethelp(object_or_string) -> string
It should also be possible to override the help display function by assigning to onlinehelp.displayhelp(object_or_string).
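For example, a tool embedding the interpreter might use the module like this (illustrative only; my_window is a hypothetical GUI object, and the onlinehelp interface is as proposed above):

import onlinehelp

# Fetch help text for a topic or object without displaying it.
text = onlinehelp.gethelp("xml.dom")

# Replace the default pager-style display with a custom one.
def display_in_window(topic_or_object):
    my_window.show(onlinehelp.gethelp(topic_or_object))  # my_window is hypothetical

onlinehelp.displayhelp = display_in_window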
The module should be able to extract module information from either the HTML or LaTeX versions of the Python documentation. Links should be accommodated in a "lynx-like" manner.
Over time, it should also be able to recognize when docstrings are in "special" syntaxes like structured text, HTML and LaTeX and decode them appropriately.
A prototype implementation is available with the Python source distribution as nondist/sandbox/doctools/onlinehelp.py.
Built-in Topics
help( "intro" ) - What is Python? Read this first!
help( "keywords" ) - What are the keywords?
help( "syntax" ) - What is the overall syntax?
help( "operators" ) - What operators are available?
help( "builtins" ) - What functions, types, etc. are built-in?
help( "modules" ) - What modules are in the standard library?
help( "copyright" ) - Who owns Python?
help( "moreinfo" ) - Where is there more information?
help( "changes" ) - What changed in Python 2.0?
help( "extensions" ) - What extensions are installed?
help( "ack" ) - Who has done work on Python lately?
Security Issues
This module will attempt to import modules with the same names as requested topics. Don't use the modules if you are not confident that everything in your PYTHONPATH is from a trusted source. | http://docs.activestate.com/activepython/2.7/peps/pep-0233.html | 2018-11-12T23:16:15 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.activestate.com |
You can deploy universal forwarders to many machines at once with a software distribution tool such as Microsoft System Center Configuration Manager or IBM BigFix.
After you install a universal forwarder, it gathers information locally and sends it to a Splunk deployment.
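As a minimal sketch of how a universal forwarder is usually pointed at a deployment (the host name is a placeholder and 9997 is only the conventional receiving port; adjust both for your environment):

# outputs.conf on the universal forwarder: where to send the data
[tcpout:primary_indexers]
server = indexer.example.com:9997

# inputs.conf on the universal forwarder: monitor the Windows Security event log
[WinEventLog://Security]
disabled = 0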
Note: Except for a few cases, you cannot use a universal forwarder to process data before it reaches the indexer. If you need to make any changes to your data before you index it, use a full Splunk Enterprise instance (such as a heavy forwarder) instead. When monitoring remote hosts, Splunk Enterprise polls less frequently over time if it cannot contact a host, and eventually stops polling it. You can send Windows data to a non-Windows Splunk deployment, but you must first use a Windows instance of Splunk Enterprise to get it. | http://docs.splunk.com/Documentation/Splunk/6.5.0/Data/ConsiderationsfordecidinghowtomonitorWindowsdata | 2018-11-12T23:19:06 | CC-MAIN-2018-47 | 1542039741151.56 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com
The basis of a LiveView project is the LiveView data table. TIBCO LiveView takes your streaming data and lets you view it as a continuously updating table. Data is streamed through LiveView data tables, which are in turn contained within LiveView projects. A single project must have a minimum of one table and may have many more.
LiveView data tables are similar to standard relational database tables in that they consist of table rows and columns. Table columns store simple, scalar data types including numbers, strings, or timestamps, as described in Data Types Supported in LiveView Tables.
A table row is a horizontal record of values that fit into each different table column. However, unlike static SQL tables, LiveView data tables are designed to show continuously updating data received from a data stream.
When you use a project to start LiveView Server, you bring streaming data into a LiveView table via a data stream. A data stream is a series of ordered tuples. LiveView data table configuration requires that the table's input stream is named DataIn.
Every time a table's DataIn stream sends a tuple, a row is published in the LiveView table.
Tables are required to have at least one field designated as a primary key, which identifies the table record. If the table's primary key contains a field or fields for which each entry is unique, the table adds a new row every time a new tuple arrives. If the table's primary key has nonunique entries, then the first arrival of a primary key value creates a new table row and successive arrivals of tuples with the same primary key value update the rows with matching key fields.
Note
When applying the input tuple, any columns in the table that are missing in the input tuple or are null valued do not overwrite the old column value.
If you need to explicitly write a null value into a column that already has a non-null value, you must delete that row and
re-write it, defining all fields that should have non-null values. If you need a value that represents removed or missing
data that you can write in one step, choose a non-null value within the allowed values for that data type (such as -10000000
for int,
empty for string, and so on). Choose a value that is invalid based on the business use of that field and column, and that will
not be generated from normal operations.
The configuration file for a LiveView data table has the following required parts:
<liveview-configuration> declares that this is an lvconf file. These tags contain the following namespace and schema attributes:
<liveview-configuration xmlns:
This top-level element is automatically populated when you create an lvconf file in StreamBase Studio.
<data-table> is the first child of <liveview-configuration> and declares that this file configures a data table. A data table receives tuples from a StreamBase data stream and publishes the tuples as rows.
<fields> is a container for one or more <field> elements. The <fields> element as a whole declares the data table's schema.
<field> is a child of <fields>; it defines each field of the data table. The required attributes are name and type.
The field name must start with an alphabetic character (uppercase or lowercase) and may contain underscores and numbers. Field names cannot use any of the LiveView Reserved Words.
Field data types must be one of the supported data types for LiveView tables.
<primary-key> lists the field or fields in the table that make rows unique.
Most LiveView data tables also have the following parts configured:
<data-sources> declares one or more data sources for a table. Data source options are: a LiveView author-time aggregation table or an EventFlow application that feeds data to the LiveView data table, possibly transforming the data in some way first.
<indexes> specifies table fields that are to be indexed. Data table indexes are very useful for improving query performance.
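Putting the required elements together, a minimal lvconf sketch could look like the following. The table and field names are invented for illustration, and details such as the id attribute on <data-table>, the exact primary-key child syntax, and the omitted namespace attributes should be checked against your version's configuration reference.

<liveview-configuration>
    <data-table id="Orders">
        <fields>
            <field name="OrderID" type="long"/>
            <field name="Symbol" type="string"/>
            <field name="Quantity" type="int"/>
        </fields>
        <!-- OrderID makes rows unique; a tuple arriving with an existing
             OrderID updates that row instead of adding a new one -->
        <primary-key>
            <field ref="OrderID"/>
        </primary-key>
    </data-table>
</liveview-configuration>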
The full list of available elements, attributes, and their descriptions is provided in the LiveView Graphical Configuration Reference. | http://docs.streambase.com/latest/topic/com.streambase.sb.ide.help/data/html/lv-intro/lv-tables.html | 2018-11-12T23:18:21 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.streambase.com |
Using theories
Theories are a powerful tool for test-driven development, allowing you to test a specific behaviour against all permutations of a set of user-defined parameters known as “data points”.
Adding theories
Adding theories is done by defining data points and a theory function:
#include <criterion/theories.h>

TheoryDataPoints(suite_name, test_name) = {
    DataPoints(Type0, val0, val1, val2, ..., valN),
    DataPoints(Type1, val0, val1, val2, ..., valN),
    ...
    DataPoints(TypeN, val0, val1, val2, ..., valN),
}

Theory((Type0 arg0, Type1 arg1, ..., TypeN argN), suite_name, test_name) {
}
suite_name and test_name are the identifiers of the test suite and the test, respectively. These identifiers must follow the language identifier format.
Type0/arg0 through TypeN/argN are the parameter types and names of the theory function and are available in the body of the function.
Datapoints are declared in the same number, type, and order as the parameters inside the TheoryDataPoints macro, with the DataPoints macro.
Beware! It is undefined behaviour to not have a matching number and type of theory parameters and datatypes.
Each DataPoints must then specify the values that will be used for the theory parameter it is linked to (val0 through valN).
Assertions and invariants
You can use any cr_assert or cr_expect macro functions inside the body of a theory function.
Theory invariants are enforced through the cr_assume(Condition) macro function: if Condition is false, then the current theory iteration aborts without making the test fail.
On top of those, more assume macro functions are available for common operations such as equality, comparison, null, string, and array checks.
Configuring theories
Theories can optionally receive configuration parameters to alter the behaviour of the underlying test; as such, those parameters are the same ones as the ones of the Test macro function (c.f. Configuration reference).
Full sample & purpose of theories
We will illustrate how useful theories are with a simple example using Criterion:
The basics of theories
Let us imagine that we want to test if the algebraic properties of integers, specifically concerning multiplication, are respected by the C language:
int my_mul(int lhs, int rhs) { return lhs * rhs; }
Now, we know that multiplication over integers is commutative, so we first test that:
#include <criterion/criterion.h>

Test(algebra, multiplication_is_commutative) {
    cr_assert_eq(my_mul(2, 3), my_mul(3, 2));
}
However, this test is imperfect, because there is not enough triangulation to ensure that my_mul is indeed commutative. One might be tempted to add more assertions on other values, but this will never be good enough: commutativity should work for any pair of integers, not just an arbitrary set, but, to be fair, you cannot just test this behaviour for every integer pair that exists.
Theories purposely bridge these two issues by introducing the concept of “data point” and by refactoring the repeating logic into a dedicated function:
#include <criterion/theories.h>

TheoryDataPoints(algebra, multiplication_is_commutative) = {
    DataPoints(int, [...]),
    DataPoints(int, [...]),
};

Theory((int lhs, int rhs), algebra, multiplication_is_commutative) {
    cr_assert_eq(my_mul(lhs, rhs), my_mul(rhs, lhs));
}
As you can see, we refactored the assertion into a theory taking two unspecified integers.
We first define some data points in the same order and type the parameters have, from left to right: the first DataPoints(int, ...) will define the set of values passed to the int lhs parameter, and the second will define the one passed to int rhs.
Choosing the values of the data point is left to you, but we might as well use "interesting" values: 0, -1, 1, -2, 2, INT_MAX, and INT_MIN:
#include <limits.h>

TheoryDataPoints(algebra, multiplication_is_commutative) = {
    DataPoints(int, 0, -1, 1, -2, 2, INT_MAX, INT_MIN),
    DataPoints(int, 0, -1, 1, -2, 2, INT_MAX, INT_MIN),
};
Using theory invariants
The second thing we can test on multiplication is that it is the inverse function of division. Then, given the division operation:
int my_div(int lhs, int rhs) { return lhs / rhs; }
The associated theory is straight-forward:
#include <criterion/theories.h>

TheoryDataPoints(algebra, multiplication_is_inverse_of_division) = {
    DataPoints(int, 0, -1, 1, -2, 2, INT_MAX, INT_MIN),
    DataPoints(int, 0, -1, 1, -2, 2, INT_MAX, INT_MIN),
};

Theory((int lhs, int rhs), algebra, multiplication_is_inverse_of_division) {
    cr_assert_eq(lhs, my_div(my_mul(lhs, rhs), rhs));
}
However, we do have a problem because you cannot have the theory function divide by 0. For this purpose, we can assume that rhs will never be 0:
Theory((int lhs, int rhs), algebra, multiplication_is_inverse_of_division) {
    cr_assume(rhs != 0);
    cr_assert_eq(lhs, my_div(my_mul(lhs, rhs), rhs));
}
cr_assume will abort the current theory iteration if the condition is not fulfilled.
Running the test at that point will raise a big problem with the current implementation of my_mul and my_div:
[----] theories.c:24: Assertion failed: (a) == (bad_div(bad_mul(a, b), b))
[----] Theory algebra::multiplication_is_inverse_of_division failed with the following parameters: (2147483647, 2)
[----] theories.c:24: Assertion failed: (a) == (bad_div(bad_mul(a, b), b))
[----] Theory algebra::multiplication_is_inverse_of_division failed with the following parameters: (-2147483648, 2)
[----] theories.c:24: Unexpected signal caught below this line!
[FAIL] algebra::multiplication_is_inverse_of_division: CRASH!
The theory shows that my_div(my_mul(INT_MAX, 2), 2) and my_div(my_mul(INT_MIN, 2), 2) do not respect the properties for multiplication: it happens that the behaviour of these two functions is undefined because the operation overflows.
Similarly, the test crashes at the end; debugging shows that the source of the crash is the division of INT_MIN by -1, which is undefined.
Fixing this is as easy as changing the prototypes of my_mul and my_div to operate on long long rather than int.
What’s the difference between theories and parameterized tests ?¶
While it may at first seem that theories and parameterized tests are the same, just because they happen to take multiple parameters does not mean that they logically behave in the same manner.
Parameterized tests are useful to test a specific logic against a fixed, finite set of examples that you need to work.
Theories are, well, just that: theories. They represent a test against an universal truth, regardless of the input data matching its predicates.
Implementation-wise, Criterion also marks the separation by the way that both are executed:
Each parameterized test iteration is run in its own test; this means that one parameterized test acts as a collection of many tests, and gets reported as such.
On the other hand, a theory acts as one single test, since the size and contents of the generated data set are not relevant. It does not make sense to say that a universal truth is "partially true", so if one of the iterations fails, then the whole test fails. | https://criterion.readthedocs.io/en/v2.2.1/theories.html | 2018-11-12T23:03:14 | CC-MAIN-2018-47 | 1542039741151.56 | [] | criterion.readthedocs.io
Network requests in Office for Mac
Office for Mac applications provide a native app experience on the macOS platform. Each app is designed to work in a variety of scenarios, including states when no network access is available. When a machine is connected to a network, the applications automatically connect to a series of web-based services to provide enhanced functionality. The following information describes which endpoints and URLs the applications try to reach, and the services provided. This information is useful when troubleshooting network configuration issues and setting policies for network proxy servers. The details in this article are intended to complement the Office 365 URL and address ranges article, which includes endpoints for computers running Microsoft Windows. Unless noted, the information in this article also applies to Office 2019 for Mac and Office 2016 for Mac, which are available as a one-time purchase from a retail store or through a volume licensing agreement.
The URL type is defined as follows:
ST: Static - The URL is hard-coded into the client application.
SS: Semi-Static - The URL is encoded as part of a web page or redirector.
CS: Config Service - The URL is returned as part of the Office Configuration Service.
Office for Mac default configuration
Installation and updates
The following network endpoints are used to download the Office for Mac installation program from the Microsoft Content Delivery Network (CDN).
First app launch
The following network endpoints are contacted when an Office app is launched for the first time. Two types of accounts can be used to sign in:
MSA: Microsoft Account - typically used for consumer and retail scenarios
OrgID: Organization Account - typically used for commercial scenarios
Note
For subscription-based and retail licenses, signing in both activates the product, and enables access to cloud resources such as OneDrive. For Volume License installations, users are still prompted to sign-in (by default), but that is only required for access to cloud resources, as the product is already activated.
Product activation
The following network endpoints apply to Office 365 Subscription and Retail License activations. Specifically, this does NOT apply to Volume License installations.
What's New content
The following network endpoints apply to Office 365 Subscription only.
Researcher
The following network endpoints apply to all Office applications for Office 365 Subscription only.
Crash reporting
The following network endpoint applies to all Office applications for both Office 365 Subscription and Retail/Volume License activations. When a process unexpectedly crashes, a report is generated and sent to the Watson service.
Options for reducing network requests and traffic
The default configuration of Office for Mac provides the best user experience, both in terms of functionality and keeping the machine up to date. In some scenarios, you may want to prevent or reduce this network traffic; the options described below require Office for Mac build 15.25 [160726] or later.
Telemetry
To prevent the Office applications from sending usage telemetry, set the following preferences:
defaults write com.microsoft.Word SendAllTelemetryEnabled -bool FALSE
defaults write com.microsoft.Excel SendAllTelemetryEnabled -bool FALSE
defaults write com.microsoft.Powerpoint SendAllTelemetryEnabled -bool FALSE
defaults write com.microsoft.Outlook SendAllTelemetryEnabled -bool FALSE
defaults write com.microsoft.onenote.mac SendAllTelemetryEnabled -bool FALSE
defaults write com.microsoft.autoupdate2 SendAllTelemetryEnabled -bool FALSE
defaults write com.microsoft.Office365ServiceV2 SendAllTelemetryEnabled -bool FALSE
Heartbeat telemetry is always sent and cannot be disabled.
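To confirm that a preference has taken effect, it can be read back with the defaults command, for example:

defaults read com.microsoft.Word SendAllTelemetryEnabled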
Crash reporting
When a fatal application error occurs, the application will unexpectedly terminate and upload a crash report to the 'Watson' service. The crash report consists of a call-stack, which is the list of steps the application was processing leading up to the crash. These steps help the engineering team identify the exact function that failed and why.
In some cases, the contents of a document will cause the application to crash. If the app identifies the document as the cause, it will ask the user if it's okay to also send the document along with the call-stack. Users can make an informed choice to this question. IT administrators may have strict requirements about the transmission of documents and make the decision on behalf of the user to never send documents. The following preference can be set to prevent documents from being sent, and to suppress the prompt to the user:
defaults write com.microsoft.errorreporting IsAttachFilesEnabled -bool FALSE
Note
If SendAllTelemetryEnabled is set to FALSE, all crash reporting for that process is disabled. To enable crash reporting without sending usage telemetry, the following preference can be set:
defaults write com.microsoft.errorreporting IsMerpEnabled -bool TRUE
Updates
Microsoft releases Office for Mac updates at regular intervals (typically once a month). We strongly encourage users and IT administrators to keep machines up to date to ensure the latest security fixes are installed. In cases where IT administrators want to closely control and manage machine updates, the following preference can be set to prevent the AutoUpdate process from automatically detecting and offering product updates:
defaults write com.microsoft.autoupdate2 HowToCheck -string 'Manual'

Note: We recommend Office for Mac builds 15.27 or later, as they include specific fixes for working with NTLM and Kerberos servers.
See also
Office 365 URLs and IP address ranges | https://docs.microsoft.com/en-us/office365/enterprise/network-requests-in-office-2016-for-mac?redirectSourcePath=%252far-sa%252farticle%252f%2525D8%2525B7%2525D9%252584%2525D8%2525A8%2525D8%2525A7%2525D8%2525AA-%2525D8%2525A7%2525D9%252584%2525D8%2525B4%2525D8%2525A8%2525D9%252583%2525D8%2525A9-%2525D9%252581%2525D9%25258A-office-2016-for-mac-afdae969-4046-44b9-9adb-f1bab216414b | 2018-11-12T23:33:36 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.microsoft.com |
Submissions from 2018
Community Connections: Relief through Athletics, Joshua Arthur
Snacking in Bed: Blending Residential and Agricultural Typologies, Taylor Chin
New England Pathway to Recovery: Drug and Alcohol Addiction Treatment Center around Nature in Rocky Hill, Connecticut, Michael P. Lombardi Jr.
BioHarvest: Energy Efficient Design for the Standardization of Biomimetic Technologies at Stanford University in Palo Alto, CA, USA, Gabriella Santostefano
Designing Efficiently for Natural Disaster Relief: Cultural Integration and Prevention Methods, Bryan Smith
Impermanence Materialized: Exploration of Temporary Architecture at the Kumbh Mela in Allahabad, India, Taylor Sutherland
Submissions from 2017
Clean Venice: Infrastructure & Place-Making in Venice, Italy, Nicholas Musilli
Open Source Architecture: Redefining Residential Architecture in Islamabad, Mariam Yaqub
Submissions from 2016
Continuation of a Life Worth Living: Empathetic Design for the Alzheimer’s Community, Leslie Hulbert
An Architecture of Movement: the New Headquarters for USA Dance, Bethany Robertson
A Floating Community: a ‘Platform’ for Future Sustainable Development, Christopher M. Rossi
Submissions from 2015
Redefining Transportation Culture: a New Union Station for Los Angeles, Pawel Honc
The Motor City – Stimulating Architecture, Jacob Levine
The Modern Urban Neighborhood: the Role of Dwelling in Neighborhood Revitalization, Zachary Nelson
From the Border: Migrant Youth shelter, Alexandra Reilly
On Track: Integrated Efficiency for Equestrian Architecture, Anthony Scerbo
Submissions from 2014
Cohousing and the Greater Community: Re-establishing Identity in Taunton’s Weir Village, Andrew Kremzier
Humility and Homelessness: a Housing Continuum, Jessica MacDonald
Awareness at a Threshold: Urban Exchange through Public Space, Matthew Spears
Nature and Architecture: a Holistic Response, Jarrod Martin
Community Reclamation: the Hybrid Building, Laura Maynard
New Urban Living: High-Rise Vertical Farming in a Mixed Use Building, Boston, MA, Zachary Silvia
Framing Emotive and Perspective Space : the Sundance Center for the Exhibition and Study of Film, Joshua Stiling
LAM: Laughing My Architecture Of, Elizabeth Straub
Union Wadding Artist Complex: Pawtucket, Rhode Island, Jennifer Turcotte
Volvo Museum of Automotive History: Boston Massachusetts, C. Patrick McCabe
Center for the Creation and Performance of the Arts, Dennis P. McGowan
Closer: Designing a Manufacturing Facility for the Zuni Pueblo Solar Energy Reinvestment Initiative, Seth Van Nostrand
Vertical Communities: an Alternative to Suburban Sprawl, Zev O’Brien-Gould
Adaptive Reuse of the Big Box Store, Mark C. Roderick
Re-conceptualizing Performance and Event in the Public Realm: a Multicultural Funeral Home, Ashley Rodrigues
Awakening Experience: Amish Youth and the Search for a Modern Identity, Nicole Secinaro
Fort Point Channel: Maglev Transit Hub and South Station Expansion Master Plan, Steven Seminelli
Social Rejuvenation: A New Community Center, Lancaster, PA, John Snavely
Newport Aquarium Oceanic Research and Discovery Center: to Further Our Knowledge of the Ocean, Steven R. Toohey
Living in the Spectrum: Autistic Children Center, Jennifer Villegas | https://docs.rwu.edu/archthese/ | 2018-11-12T22:44:20 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.rwu.edu |
Stop adding htaccess rules or fiddling around with the code to redirect your website URLs.
YITH GeoIP Language Redirect allows you to set up redirects in a few clicks: thanks to a highly user-friendly interface, you can make custom redirects using specific parameters, including a country filter.
Moreover, you will be able to choose the status code to apply to the HTTP request generated during the redirect.
Discover this and all the plugin features in this guide. | https://docs.yithemes.com/yith-geoip-language-redirect-for-woocommerce/ | 2018-11-12T22:04:49 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.yithemes.com |
Chapter 1: Through-The-Web
Getting Started with Content Types
If you don’t know what a content type is, don’t worry! Sit back, relax, and do the tutorial! I’ll save the mumbo jumbo definitions for another day. In this first part, we will make a Todo list without touching any code. It won’t be fancy, but it will give you a good idea of how things work in Plone.
The way Plone handles content is a little different than your average relational database driven framework, so if you don’t understand something right away, sit back, relax, and finish the tutorial.
Generally speaking, content-types are just that: types of content. By default, in Plone you get the News Item content-type, the Event content-type and so on. So if you add a content item that is of Event type, you are using the Event content-type. In our case, we will create a new content-type that will represent a Todo Item.
Create a New Content Type
First we need to create a new content type to represent an item on our Todo list. This will be a type with one field, that which needs to be done.
Navigate to site setup as shown below, or just enter in your browser. This is where you can configure Plone for happy fun time.
Now comes the fun part. We want to create our own type Through-The-Web aka. TTW. This type will be a Todo Item. Let’s click Dexterity Content Types (or go directly to).
Create a Todo List Item by clicking Add New Content Type.
Fill in the fields as seen below and then click Add.
Now you will see that there is a new type to play with. There are two important things we need to do here: we need to adjust some behaviors, and add some fields. Let’s look at behaviors first.
By default, all Plone content-types have Dublin Core metadata enabled (you may know it as title and description. We don’t need this for our über simple Todo list item. Uncheck Dublin Core metadata and then click Save.
Next we need to add some fields. Because this type is so simple, we will add just one field, but feel free to go CRAZY. Start by going back to the Fields tab and clicking Add new field....
Add a field called Todo, or anything else you want. But! Note that it’s very important that the Short Name field value is title. By using this key short name, we make sure that all Todo Items are searchable from smart search. Update the field as seen below and click Add.
You will see that a new field has been added to your content type. If you are feeling adventuresome, click on the settings tab next to the field to set other properties, or just see what’s available.
Trying out the Todo Item content-type
Now it’s time to reap the rewards of all of your effort. Let’s put all of our Todo Items in one particular folder so that we can have collections of items throughout the site. For this tutorial, we will be putting everything in the root of the site so it’s easy to debug.
From the root, add a new folder called TODO list.
Add a new Todo Item to the new Todo folder.
Celebrate!
You may be wondering about earlier, when we asked you to make sure that the short name for the Todo Item was called title. The time has come to let you in on a little secret. Calling the short name either title or description will automatically add that text to the livesearch menu. WHAT?!? I know! When life gives you lemonade, spike it with vodka and enjoy liberally! You can now search for your Todo Items in Live Search.
But wait a minute... This todo item is marked private, and that doesn’t really make sense. It’s a good thing Plone has an easy solution for that. In the next section, we will go over the basics of that magical, mystical word: workflow.
Getting Started with Workflows
So what is a workflow? It is a mechanism to control the flow of a content item through various states in time. Most commonly, and by default in Plone, you deal with a publication workflow. For example: A writer writes up a News Item and submits it for review. Then the in-house reviewing team goes through the text and publishes the News Item so it is public for the entire world to see.
The Todo Item we added in the last section is marked as private because by default all new Plone content items are assigned a workflow called simple_publication_workflow. I know what you are thinking: simple publication whodie whatie grble gobble??!?! Just like before, let’s bypass trying to explain what that means and just fix it. Relax, enjoy, and finish the tutorial!
Todo Items really have 2 states that we are interested in: open and complete. Let’s make that happen.
Head over to the ZMI at.
In the ZMI, open the portal_workflow tool.
On this page, we see all content-types in our portal mapped to a workflow. Our new type, Todo Item, is mapped to (Default). You can see right below that the default is Simple Publication Workflow. This is just too complex for our little Todo Item.
So let’s create a new one that suites our needs perfectly! Click the contents tab at the top of the page to get a listing of all the available workflows.
You can poke around here all you like, but the details of each one of these workflows are better left to another tutorial. When in doubt, you can always come back to these workflows to see examples of how things can be done. Onwards and upwards!
Let’s create a new workflow for our Todo Items and call it todo_item_workflow. We will make a new workflow by copying and customising one of the workflows that are already there. Duplicate the one_state_workflow.
Rename the copied workflow to todo_item_workflow.
You will be spit back out to the workflow contents page. Click the workflow to start editing.
Let’s update the name of the workflow so we don’t double take later on.
Workflow is something that takes time to get used to if you have never encountered the concept. The best analogy in our case is to a car. The car engine has two simple states: on and off. To transition from on to off and vice versa, it needs some action from the driver. The same for our TODO items. They have two states: open and completed. In order to get them from open to completed, the user needs to click something. Don’t understand yet? Relax, sit back, and finish the tutorial.
Let's start by adding our base states. We will call them open and complete. From the edit workflow screen, click on the States tab.
Delete the currently listed state.
Add two states with the ids open and completed.
Next lets add transitions. They will take the TODO item from open to completed and vice versa (in case a user wants to revert an item back to open). Click on the Transitions tab.
Add two transitions: complete and reopen. When a user completes a task, it will move into the completed state. When a user reopens a task, it will go back to the open state.
Let’s add a few details to these new transitions. Let’s start with complete. Click on complete to edit the transition.
First add a title so you remember later what this does. Description is optional but adding one will help you keep your thoughts clear and remind the future you what the today you is thinking. The destination state should be set to completed. We also want to make sure that only people with mega permissions, or the creator of the todo item itself, can change the state so we add Modify portal content to the Permissions box.
All this means nothing if we don’t give the user a chance to change the state. Next to Display in actions box, we can set the title for what will be displayed in the workflow drop down box of the item (where Pending, Reject, etc. where earlier). Let’s call it Complete. Last but not least, we need to add the URL that the action points to. I could make this tutorial 100 years long and explain why you have to do this, but accept that it has to be done, relax, and follow this formula:
URL = %(content_url)s/content_status_modify?workflow_action=X
where X is the id of the transition. So for this case, in the URL box, you will add
%(content_url)s/content_status_modify?workflow_action=complete
Double check everything and click Save.
If your brain isn’t hurting yet it will be soon. Go back to the transitions listing.
Let’s update the reopen transition and update in a similar manner. This time, the destination state is open, and following the formula above, the URL is %(content_url)s/content_status_modify?workflow_action=reopen.
Now we have 2 states and 2 transitions, but they aren’t 100% linked together ... yet. Go back to the workflow listing, click the States tab and then click on completed to edit the state.
Add a title, since this is what users see in the top right corner of the TODO items, and then check reopen as a possible transition. This means that when a TODO item is completed, it will only allow the user to reopen it (and not re-complete it, for example). In the same respect, open the open state, add a title, and mark complete as a possible transition.
When we create a new TODO item, we need to tell Plone what the first state is. Go back to the workflow states listing, and make open the initial state.
And that’s it! Almost... Last but not least, we need to assign our new workflow to our TODO item type. Go back to the main workflow screen.
Instead of mapping to the (Default) workflow, we are going to map to the id of our new workflow, todo_item_workflow, and then click Change.
If you already have TODO items in your site, you MUST click Update Security Settings to update the workflow for the items. Instead of going into gross detail about why this is the case, just sit back, relax, finish the tutorial, and remember to click this button any time you make changes (yes! you can continue to change and update your workflows!).
Could the time have arrived? Time to try it out? YES! Go to your Todo folder and add a new TODO Item. Validate that the workflow works as expected by toggling between the states.
Congrats! You have now passed Plone Workflow 101. Next we will transition from developing through the web (TTW) to developing on the filesystem. | https://tutorialtodoapp.readthedocs.io/en/latest/chapter_1.html | 2018-11-12T23:07:00 | CC-MAIN-2018-47 | 1542039741151.56 | [] | tutorialtodoapp.readthedocs.io |
Welcome to Splunk Enterprise 6.5
If you are new to Splunk Enterprise, read the Splunk Enterprise Overview. If you are familiar with Splunk Enterprise and want to explore the new features interactively, download the Splunk Enterprise 6.5 Overview app from Splunkbase.
For system requirements information, see the Installation Manual.
Before proceeding, review the Known Issues for this release.
Splunk Enterprise 6.5 was released in September 2016.
Planning to upgrade from an earlier version?
If you plan to upgrade from an earlier version of Splunk Enterprise to version 6.5, read How to upgrade Splunk Enterprise in the Installation Manual for information you need to know before you upgrade.
See About upgrading to 6.5: READ THIS FIRST for specific migration tips and information that might affect you when you upgrade.
The Deprecated features topic lists computing platforms, browsers, and features for which Splunk has deprecated or removed support in this release.
What's New in 6.5
Documentation updates
Legacy app building documentation in Developing Views and Apps for Splunk Web has been revised, updated, and moved to the Splunk developer portal. See Develop apps using the Splunk Web framework for this new content.
REST API updates
This release includes the following new and updated REST API endpoints.
- admin/Duo-MFA
- admin/Duo-MFA/{name}
- admin/ProxySSO-auth
- admin/ProxySSO-auth/{proxy_name}
- admin/ProxySSO-auth/{proxy_name}/disable
- admin/ProxySSO-auth/{proxy_name}/enable
- admin/ProxySSO-groups
- admin/ProxySSO-groups/{group_name}
- admin/ProxySSO-user-role-map
- admin/ProxySSO-user-role-map/{user_name}
- datamodel/model
- datamodel/model/{name}
- kvstore/status
- messages
- messages/{message_name}
- replication/configuration/health
- saved/searches
- saved/searches/{name}
- saved/searches/{name}/dispatch
- search/jobs
- server/info
- server/status/installed-file-integrity
- server/status/resource-usage/hostwide
- server/sysinfo
- services/collector
- services/collector/raw
- storage/passwords
- storage/passwords/{name}
The REST API Reference Manual describes the endpoints.
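For example, most of the endpoints above are served on the management port (8089 by default), while the HTTP Event Collector endpoints are served on port 8088; the credentials, token, and host below are placeholders:

# Query server information
curl -k -u admin:changeme https://localhost:8089/services/server/info

# Send an event to the HTTP Event Collector
curl -k https://localhost:8088/services/collector -H "Authorization: Splunk <your-HEC-token>" -d '{"event": "hello world"}'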
This documentation applies to the following versions of Splunk® Enterprise: 6.5.0, 6.5.1, 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9 | http://docs.splunk.com/Documentation/Splunk/6.5.2/ReleaseNotes/MeetSplunk | 2018-11-12T22:40:57 | CC-MAIN-2018-47 | 1542039741151.56 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com
Added in version 6.3 of package base.
Changed in version 6.7.0.4 of package base: Improved element-reachability guarantee for streams in for.
In the case of lazy streams, this function forces evaluation only of the sub-streams, and not the stream’s elements.
In case extracting elements from s involves a side effect, they will not be extracted until the first element is extracted from the resulting stream.
Unlike most for forms, these forms are evaluated lazily, so each body will not be evaluated until the resulting stream is forced. This allows for/stream and for*/stream to iterate over infinite sequences, unlike their finite counterparts.
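For instance (an illustrative sketch), an infinite stream of even numbers can be built with for/stream, and only the requested elements are ever computed:

(define evens
  (for/stream ([i (in-naturals)])
    (* 2 i)))

(stream-ref evens 5)  ; => 10, computed on demand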
Added in version 6.3.0.9 of package base.
To supply method implementations, the #:methods keyword should be used in a structure type definition. The following three methods should be implemented:
stream-empty? : accepts one argument
stream-first : accepts one argument
stream-rest : accepts one argument
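As an illustrative sketch (the struct name and fields are invented), a custom counting stream could implement these three methods like this:

(struct int-range (lo hi)
  #:methods gen:stream
  [(define (stream-empty? s)
     (>= (int-range-lo s) (int-range-hi s)))
   (define (stream-first s)
     (int-range-lo s))
   (define (stream-rest s)
     (int-range (add1 (int-range-lo s)) (int-range-hi s)))])

; (stream->list (int-range 0 4)) produces '(0 1 2 3)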
If the c argument is a flat contract or a chaperone contract, then the result will be a chaperone contract. Otherwise, the result will be an impersonator contract.
When a stream/c contract is applied to a stream, the result is not eq? to the input. The result will be either a chaperone or impersonator of the input depending on the type of contract.
Contracts on streams are evaluated lazily by necessity (since streams may be infinite). Contract violations will not be raised until the value in violation is retrieved from the stream. As an exception to this rule, streams that are lists are checked immediately, as if c had been used with listof.
If a contract is applied to a stream, and that stream is subsequently used as the tail of another stream (as the second parameter to stream-cons), the new elements will not be checked with the contract, but the tail’s elements will still be enforced.
Added in version 6.1.1.8 of package base. | https://docs.racket-lang.org/reference/streams.html | 2018-11-12T22:07:58 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.racket-lang.org |
Often, it is useful to rename a frequently-used field directly in the dataset, instead of using an alias in each visual.
The following steps demonstrate how to rename a field.
Notice that the Base Column name cannot be edited, but you can change the Display Name of the column.
Change it from iso_cc to ISO Country Code.
Click Apply.
Under Dataset: World Life Expectancy, click Save.
As a result of this change, all new visuals created from this dataset use the new name automatically. | http://docs.arcadiadata.com/4.1.0.0/pages/topics/data-custom-rename-fields.html | 2018-11-12T22:07:06 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.arcadiadata.com |