ECMAScript modules are the official standard format to package JavaScript code for reuse. Modules are defined using a variety of
import and
export statements.
The following example of an ES module exports a function:
// addTwo.mjs
function addTwo(num) {
  return num + 2;
}

export { addTwo };
The following example of an ES module imports the function from
addTwo.mjs:
// app.mjs
import { addTwo } from './addTwo.mjs';

// Prints: 6
console.log(addTwo(4));
Node.js fully supports ECMAScript modules as they are currently specified and provides limited interoperability between them and the existing module format, CommonJS.
Node.js contains support for ES Modules based upon the Node.js EP for ES Modules and the ECMAScript-modules implementation.
Expect major changes in the implementation including interoperability support, specifier resolution, and default behavior.
Experimental support for ECMAScript modules is enabled by default.
package.json "type" field
Files ending with .js are loaded as ES modules when the nearest parent package.json file contains a top-level "type" field with a value of "module". If no package.json is found by the time the root of the volume is reached, Node.js defers to the default, a package.json with no "type" field.
A folder containing a
package.json file, and all subfolders below that folder until the next folder containing another
package.json, are a package scope. The
"type" field defines how to treat
.js files within the package scope. Every package in a project’s
node_modules folder contains its own
package.json file, so each project’s dependencies have their own package scopes. If a
package.json file does not have a
"type" field, the default
"type" is
"commonjs".
The package scope applies not only to initial entry points (node my-app.js) but also to files referenced by import statements and import() expressions.
Within a "type": "module" package scope, Node.js can be instructed to interpret a particular file as CommonJS by naming it with a .cjs extension (since both .js and .mjs files are treated as ES modules within a "module" package scope).
Within a
"type": "commonjs" package scope, Node.js can be instructed to interpret a particular file as an ES module by naming it with an
.mjs extension (since both
.js and
.cjs files are treated as CommonJS within a
"commonjs" package scope).
--input-type flag
Strings passed in as an argument to --eval or --print (or -e or -p), or piped to node via STDIN, are treated as ES modules when the --input-type=module flag is set.
Package entry points and subpaths can be defined in the package.json "exports" field, for example:

{
  "main": "./main.js",
  "exports": {
    ".": "./main.js",
    "./": "./",
    "./feature": "./feature/index.js",
    "./feature/": "./feature/",
    "./package.json": "./package.json"
  }
}
As a last resort, package encapsulation can be disabled entirely by creating an export for the root of the package, "./": "./". This will expose every file in the package. If only this root export is defined, not only will encapsulation be lost but module consumers will be unable to import feature from 'my-mod/feature', as they will need to provide the full path import feature from 'my-mod/feature/index.js'.
To set the main entry point for a package, it is advisable to define both
"exports" and
"main" in the package’s
package.json file:
{ "main": "./main.js", "exports": "./main.js" }
The benefit of doing this is that when using the
"exports" field all subpaths of the package will no longer be available to importers under
require('pkg/subpath.js'), and instead they will get a new error,
ERR_PACKAGE_PATH_NOT_EXPORTED.
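As a sketch, assuming a hypothetical installed package named 'pkg' whose package.json contains only the "main" and "exports" fields shown above, a consumer would see:

require('pkg');            // Loads ./main.js.
require('pkg/subpath.js'); // Throws ERR_PACKAGE_PATH_NOT_EXPORTED.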
Entire folders can also be mapped with package exports:
// ./node_modules/es-module-package/package.json
{
  "exports": {
    "./features/": "./src/features/"
  }
}
With the above, all modules within the
./src/features/ folder are exposed deeply to
import and
require:
import feature from 'es-module-package/features/x.js'; // Loads ./node_modules/es-module-package/src/features/x.js
When using folder mappings, ensure that you do want to expose every module inside the subfolder. Any modules which are not public should be moved to another folder to retain the encapsulation benefits of exports.
For possible new specifier support in future, array fallbacks are supported for all invalid specifiers:
{ "exports": { "./submodule": ["not:valid", "./submodule.js"] } }
Since
"not:valid" is not a valid specifier,
"./submodule.js" is used instead as the fallback, as if it were the only"- matched when the package is loaded via
importor
import(). Can reference either an ES module or CommonJS file, as both
importand
import()can load either ES module or CommonJS sources. Always matched when the
"require"condition is not matched.
"require"- matched when the package is loaded via
require(). As
require()only supports CommonJS, the referenced file must be CommonJS. Always matched when the
"import"condition is not matched.
"node"- matched for any Node.js environment. Can be a CommonJS or ES module file. This condition should always come after
"import"or
"require".
"default"- the generic fallback that will always match., and thus ignored by Node.js. Runtimes or tools other than Node.js may use them at their discretion. Further restrictions, definitions, or guidance on condition names may or
-u.
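The following is a sketch of such a package.json; the file names are hypothetical:

// package.json
{
  "main": "./main-require.cjs",
  "exports": {
    "import": "./main-module.js",
    "require": "./main-require.cjs"
  },
  "type": "module"
}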
In addition to the
"exports" field it is possible to define internal package import maps that only apply to import specifiers from within the package itself.
Entries in the imports field must always start with
# to ensure they are clearly disambiguated from package specifiers.
For example, the imports field can be used to gain the benefits of conditional exports for internal modules:
// package.json
{
  "imports": {
    "#dep": {
      "node": "dep-node-native",
      "default": "./dep-polyfill.js"
    }
  },
  "dependencies": {
    "dep-node-native": "^1.0.0"
  }
}
where
import '#dep' gets the resolution of the external package dep-node-native in Node.js (including its exports in turn), and gets the local file ./dep-polyfill.js relative to the package in other environments.
Unlike the exports field, import maps permit mapping to external packages because this provides an important use case for conditional loading and also can be done without the risk of cycles, unlike for exports.
Apart from the above, the resolution rules for the imports field are otherwise analogous to the exports field.
A dual CommonJS/ES module package could instead be written so that any version of Node.js receives only CommonJS sources, and any separate ES module sources the package may contain are intended only for other environments such as browsers. Such a package would be usable by any version of Node.js, since import can refer to CommonJS files; but it would not provide any of the advantages of using ES module syntax.
import specifiers: Only file: and data: URLs are supported. A specifier like 'https://example.com/app.js' may be supported by browsers but it is not supported in Node.js.
Specifiers may not begin with / or //. These are reserved for potential future use. The root of the current volume may be referenced via file:///.
data: imports
data: URLs are supported for importing with the following MIME types:
text/javascript for ES modules
application/json for JSON
application/wasm for Wasm.
data: URLs only resolve Bare specifiers for builtin modules and Absolute specifiers. Resolving Relative specifiers will not work because
data: is not a special scheme. For example, attempting to load
./foo from
data:text/javascript,import "./foo"; will fail to resolve since there is no concept of relative resolution for
data: URLs. An example of data: URLs being used is:
import 'data:text/javascript,console.log("hello!");'; import _ from 'data:application/json,"world!"';
import.meta
The
import.meta metaproperty is an
Object that contains the following property:
url <string> The absolute file: URL of the module.
A file extension must be provided when using the
import keyword. Directory indexes (e.g.
'./startup/index.js') must also be fully specified.
This behavior matches how
import behaves in browser environments, assuming a typically configured server.
NODE_PATH
NODE_PATH is not part of resolving
import specifiers. Please use symlinks if this behavior is desired.
require,
exports,
module.exports,
__filename,
__dirname
These CommonJS variables are not available in ES modules.
require can be imported into an ES module using
module.createRequire().
Equivalents of
__filename and
__dirname can be created inside of each file via
import.meta.url.
import { fileURLToPath } from 'url';
import { dirname } from 'path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
require.extensions
require.extensions is not used by
import. The expectation is that loader hooks can provide this workflow in the future.
require.cache
require.cache is not used by
import. It has a separate cache.
require
require always treats the files it references as CommonJS. This applies whether
require is used the traditional way within a CommonJS environment, or in an ES module environment using
module.createRequire().
To include an ES module into CommonJS, use
import().
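A minimal sketch of doing so, assuming a hypothetical ./esm.mjs file next to the CommonJS file:

// cjs-consumer.cjs
(async () => {
  // Dynamic import() loads the ES module asynchronously from CommonJS code.
  const esm = await import('./esm.mjs');
  console.log(esm.default);
})();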
import statements
An
import statement can reference an ES module or a CommonJS module. Other file types such as JSON and native modules are not supported by import; for those, use module.createRequire(). Like in CommonJS, files within packages can be accessed by appending a path to the package name, unless the package's package.json contains an "exports" field, in which case files within packages can only be accessed via the paths defined in "exports".
import { sin, cos } from 'geometry/trigonometry-functions.mjs';
Only the “default export” is supported for CommonJS files or packages:
import packageMain from 'commonjs-package'; // Works import { method } from 'commonjs-package'; // Errors
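A common workaround, sketched here, is to import the default export and destructure the needed member from it:

import packageMain from 'commonjs-package';
// 'method' is read off the CommonJS exports object after the default import.
const { method } = packageMain;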
It is also possible to import an ES or CommonJS module for its side effects only.
import() expressions
Dynamic
import() is supported in both CommonJS and ES modules. It can be used to include ES module files from CommonJS code.
CommonJS, JSON, and native modules can be used with
module.createRequire().
// cjs.cjs
module.exports = 'cjs';

// esm.mjs
import { createRequire } from 'module';

const require = createRequire(import.meta.url);

const cjs = require('./cjs.cjs');
cjs === 'cjs'; // true
Builtin modules will provide named exports of their public API. A default export is also provided which is the value of the CommonJS exports. The default export can be used for, among other things, modifying the named exports. Named exports of builtin modules are updated only by calling
module.syncBuiltinESMExports().
import EventEmitter from 'events'; const e = new EventEmitter();
import { readFile } from 'fs'; readFile('./foo.txt', (err, source) => { if (err) { console.error(err); } else { console.log(source); } });
import fs, { readFileSync } from 'fs'; import { syncBuiltinESMExports } from 'module'; fs.readFileSync = () => Buffer.from('Hello, ESM'); syncBuiltinESMExports(); fs.readFileSync === readFileSync;
Currently, importing JSON modules is only supported in the commonjs mode, and such imports are loaded using the CJS loader. The WHATWG JSON modules specification is still being standardized, and JSON modules are experimentally supported by including the additional flag --experimental-json-modules when running Node.js.
When the
--experimental-json-modules flag is included both the
commonjs and
module modes will use the new experimental JSON loader.
The
--experimental-json-modules flag is needed for the module to work.
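For example, a hypothetical index.mjs importing a JSON file might look like this; it only loads when the flag is supplied:

// index.mjs
import packageConfig from './package.json';
console.log(packageConfig.name);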
node index.mjs # fails
node --experimental-json-modules index.mjs # works

Importing WebAssembly modules is supported under the --experimental-wasm-modules flag. Given a source file index.mjs that imports module.wasm, running

node --experimental-wasm-modules index.mjs

would provide the exports interface for the instantiation of module.wasm.
Note: This API is currently being redesigned and will still change.
To customize the default module resolution, loader hooks can optionally be provided via a
--experimental-loader ./loader-name.mjs argument to Node.js.
When hooks are used they only apply to ES module loading and not to any CommonJS modules loaded.
resolve hook
The
conditions property on the
context is an array of conditions for Conditional exports that apply to this resolution request. They can be used for looking up conditional mappings elsewhere or to modify the list when calling the default resolution logic.
The current package exports conditions will always be in the
context.conditions array passed into the hook. To guarantee default Node.js module specifier resolution behavior when calling
defaultResolve, the
context.conditions array passed to it must include all elements of the
context.conditions array originally passed into the
resolve hook.
/** * @param {string} specifier * @param {{ * parentURL: !(string | undefined), * conditions: !(Array<string>), * }} context * @param {Function} defaultResolve * @returns {!(Promise<{ url: string }>)} */ export async function resolve(specifier, context, defaultResolve) { const { parentURL = null } = context; if (Math.random() > 0.5) { // Some condition. // For some or all specifiers, do some custom logic for resolving. // Always return an object of the form {url: <string>}. return { url: parentURL ? new URL(specifier, parentURL).href : new URL(specifier).href, }; } if (Math.random() < 0.5) { // Another condition. // When calling `defaultResolve`, the arguments can be modified. In this // case it's adding another value for matching conditional exports. return defaultResolve(specifier, { ...context, conditions: [...context.conditions, 'another-condition'], }); } // Defer to Node.js for all other specifiers. return defaultResolve(specifier, context, defaultResolve); }
getFormat hook
Note: The loaders API is being redesigned. This hook may disappear or its signature may change. Do not rely on the API described below.
The
getFormat hook provides a way to define a custom method of determining how a URL should be interpreted. The
format returned also affects what the acceptable forms of source values are for a module when parsing. The format can be one of 'builtin', 'commonjs', 'dynamic', 'json', 'module', or 'wasm'.
Note: These types all correspond to classes defined in ECMAScript.
Note: If the source value of a text-based format (i.e.,
'json',
'module') is not a string, it will be converted to a string using
util.TextDecoder.
/** * @param {string} url * @param {Object} context (currently empty) * @param {Function} defaultGetFormat * @returns {Promise<{ format: string }>} */ export async function getFormat(url, context, defaultGetFormat) { if (Math.random() > 0.5) { // Some condition. // For some or all URLs, do some custom logic for determining format. // Always return an object of the form {format: <string>}, where the // format is one of the strings in the preceding table. return { format: 'module', }; } // Defer to Node.js for all other URLs. return defaultGetFormat(url, context, defaultGetFormat); }
getSource hook
Note: The loaders API is being redesigned. This hook may disappear or its signature may change. Do not rely on the API described below.
The getSource hook provides a way to define a custom method for retrieving the source of an ES module.

/**
 * @param {string} url
 * @param {{ format: string }} context
 * @param {Function} defaultGetSource
 * @returns {Promise<{ source: !(SharedArrayBuffer | string | Uint8Array) }>}
 */
export async function getSource(url, context, defaultGetSource) {
  const { format } = context;
  if (Math.random() > 0.5) { // Some condition.
    // For some or all URLs, do some custom logic for retrieving the source.
    // Always return an object of the form {source: <string|buffer>}.
    return {
      source: '...',
    };
  }
  // Defer to Node.js for all other URLs.
  return defaultGetSource(url, context, defaultGetSource);
}
transformSource hook
NODE_OPTIONS='--experimental-loader ./custom-loader.mjs' node x.js {!(SharedArrayBuffer | string | Uint8Array)} source * @param {{ * url: string, * format: string, * }} context * @param {Function} defaultTransformSource * @returns {Promise<{ source: !(SharedArrayBuffer | string | Uint8Array) }>} */ export async function transformSource(source, context, defaultTransformSource) { const { url, format } = context; if (Math.random() > 0.5) { // Some condition. // For some or all URLs, do some custom logic for modifying the source. // Always return an object of the form {source: <string|buffer>}. return { source: '...', }; } // Defer to Node.js for all other sources. return defaultTransformSource(source, context, defaultTransformSource); }
getGlobalPreloadCode hook
Note: The loaders API is being redesigned. This hook may disappear or its signature may change. Do not rely on the API described below.
Sometimes it can be necessary to run some code inside of the same global scope that the application will run in. This hook allows returning a string that is run as a sloppy-mode script on startup.
Similar to how CommonJS wrappers work, the code runs in an implicit function scope. The only argument is a
require-like function that can be used to load builtins like "fs":
getBuiltin(request: string).
If the code needs more advanced
require features, it will have to construct its own
require using
module.createRequire().
/** * @returns {string} Code to run before application startup */ export function getGlobalPreloadCode() { return `\ globalThis.someInjectedProperty = 42; console.log('I just set some globals!'); const { createRequire } = getBuiltin('module'); const require = createRequire(process.cwd() + '/<preload>'); // [...] `; }
dynamicInstantiate hook
To create a custom dynamic module that doesn't correspond to one of the existing format interpretations, the dynamicInstantiate hook can be used.
The various loader hooks can be used together to accomplish wide-ranging customizations of the Node.js code loading and evaluation behaviors. For example, a transpiler loader run as

node --experimental-loader ./coffeescript-loader.mjs main.mjs

will print the current version of CoffeeScript per the module at the URL in main.mjs.

Resolution Algorithm
The resolver has the following properties:
File URL-based resolution as is used by ES modules
Support for builtin module loading
Relative and absolute URL resolution
No default extensions
No folder mains
Bare specifier package resolution lookup through node_modules
The algorithm to load an ES module specifier is given through the ESM_RESOLVE method below. It returns the resolved URL for a module specifier relative to a parentURL.
The algorithm to determine the module format of a resolved URL is provided by ESM_FORMAT, which returns the unique module format for any file. In the following algorithms, all subroutine errors are propagated as errors of these top-level routines unless stated otherwise.
defaultConditions is the conditional environment name array,
["node", "import"].
The resolver can throw the following errors:
ESM_RESOLVE(specifier, parentURL)
- Let resolved be undefined.
-
If specifier is a valid URL, then
- Set resolved to the result of parsing and reserializing specifier as a URL.
-
Otherwise, if specifier starts with "/", "./" or "../", then
- Set resolved to the URL resolution of specifier relative to parentURL.
-
Otherwise, if specifier starts with "#", then
- Set resolved to the destructured value of the result of PACKAGE_IMPORTS_RESOLVE(specifier, parentURL, defaultConditions).
-
Otherwise,
- Note: specifier is now a bare specifier.
- Set resolved the result of PACKAGE_RESOLVE(specifier, parentURL).
-
If resolved contains any percent encodings of "/" or "\" ("%2f" and "%5C" respectively), then
- Throw an Invalid Module Specifier error.
-
If the file at resolved is a directory, then
- Throw an Unsupported Directory Import error.
-
If the file at resolved does not exist, then
- Throw a Module Not Found error.
- Set resolved to the real path of resolved.
- Let format be the result of ESM_FORMAT(resolved).
- Load resolved as module format, format.
- Return resolved.
PACKAGE_RESOLVE(packageSpecifier, parentURL)
- Let packageName be undefined.
-
If packageSpecifier is an empty string, then
- Throw an Invalid Module Specifier error.
- Let packageSubpath be "." concatenated with the substring of packageSpecifier from the position at the length of packageName.
- Let selfUrl be the result of PACKAGE_SELF_RESOLVE(packageName, packageSubpath, parentURL).
- If selfUrl is not undefined, return selfUrl.
-
If packageSubpath is "." and packageName is a Node.js builtin module, then
- Return the string "nodejs:" concatenated with packageSpecifier.
-
While parentURL is not the file system root,
- Let packageURL be the URL resolution of "node_modules/" concatenated with packageName, relative to parentURL.
- Set parentURL to the parent folder URL of parentURL.
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
-
If pjson is not null and pjson.exports is not null or undefined, then
- Let exports be pjson.exports.
- Return the resolved destructured value of the result of PACKAGE_EXPORTS_RESOLVE(packageURL, packageSubpath, pjson.exports, defaultConditions).
-
Otherwise, if packageSubpath is equal to ".", then
- Return the result applying the legacy LOAD_AS_DIRECTORY CommonJS resolver to packageURL, throwing a Module Not Found error for no resolution.
-
Otherwise,
- Return the URL resolution of packageSubpath in packageURL.
- Throw a Module Not Found error.
PACKAGE_SELF_RESOLVE(packageName, packageSubpath, parentURL)
- Let packageURL be the result of READ_PACKAGE_SCOPE(parentURL).
-
If packageURL is null, then
- Return undefined.
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
-
If pjson is null or if pjson.exports is null or undefined, then
- Return undefined.
-
If pjson.name is equal to packageName, then
- Return the resolved destructured value of the result of PACKAGE_EXPORTS_RESOLVE(packageURL, subpath, pjson.exports, defaultConditions).
- Otherwise, return undefined.
PACKAGE_EXPORTS_RESOLVE(packageURL, subpath, exports, conditions)
- If exports is an Object with both a key starting with "." and a key not starting with ".", throw an Invalid Package Configuration error.
-
If subpath is equal to ".", then
- Let mainExport be undefined.
-
If exports is a String or Array, or an Object containing no keys starting with ".", then
- Set mainExport to exports.
-
Otherwise if exports is an Object containing a "." property, then
- Set mainExport to exports["."].
-
If mainExport is not undefined, then
- Let resolved be the result of PACKAGE_TARGET_RESOLVE( packageURL, mainExport, "", false, conditions).
-
If resolved is not null or undefined, then
- Return resolved.
-
Otherwise, if exports is an Object and all keys of exports start with ".", then
- Let matchKey be the string "./" concatenated with subpath.
- Let resolvedMatch be result of PACKAGE_IMPORTS_EXPORTS_RESOLVE( matchKey, exports, packageURL, false, conditions).
-
If resolvedMatch.resolve is not null or undefined, then
- Return resolvedMatch.
- Throw a Package Path Not Exported error.
PACKAGE_IMPORTS_RESOLVE(specifier, parentURL, conditions)
- Assert: specifier begins with "#".
-
If specifier is exactly equal to "#" or starts with "#/", then
- Throw an Invalid Module Specifier error.
- Let packageURL be the result of READ_PACKAGE_SCOPE(parentURL).
-
If packageURL is not null, then
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
-
If pjson.imports is a non-null Object, then
- Let resolvedMatch be the result of PACKAGE_IMPORTS_EXPORTS_RESOLVE(specifier, pjson.imports, packageURL, true, conditions).
-
If resolvedMatch.resolve is not null or undefined, then
- Return resolvedMatch.
- Throw a Package Import Not Defined error.
PACKAGE_IMPORTS_EXPORTS_RESOLVE(matchKey, matchObj, packageURL, isImports, conditions)
-
If matchKey is a key of matchObj, and does not end in "*", then
- Let target be the value of matchObj[matchKey].
- Let resolved be the result of PACKAGE_TARGET_RESOLVE( packageURL, target, "", isImports, conditions).
- Return the object { resolved, exact: true }.
- Let expansionKeys be the list of keys of matchObj ending in "/", sorted by length descending.
-
For each key expansionKey in expansionKeys, do
-
If matchKey starts with expansionKey, then
- Let target be the value of matchObj[expansionKey].
- Let subpath be the substring of matchKey starting at the index of the length of expansionKey.
- Let resolved be the result of PACKAGE_TARGET_RESOLVE( packageURL, target, subpath, isImports, conditions).
- Return the object { resolved, exact: false }.
- Return the object { resolved: null, exact: true }.
PACKAGE_TARGET_RESOLVE(packageURL, target, subpath, internal, conditions)
-
If target is a String, then
- If subpath has non-zero length and target does not end with "/", throw an Invalid Module Specifier error.
-
If target does not start with "./", then
-
If internal is true and target does not start with "../" or "/" and is not a valid URL, then
- Return PACKAGE_RESOLVE(target + subpath, packageURL + "/")_.
- Otherwise, throw an Invalid Package Target error.
- If target split on "/" or "\" contains any ".", ".." or "node_modules" segments after the first segment, throw an Invalid Module Specifier error.
- Let resolvedTarget be the URL resolution of the concatenation of packageURL and target.
- Assert: resolvedTarget is contained in packageURL.
- If subpath split on "/" or "\" contains any ".", ".." or "node_modules" segments, throw an Invalid Module Specifier error.
- Return the URL resolution of the concatenation of subpath and resolvedTarget.
-
Otherwise, if target is a non-null Object, then
- If exports contains any index property keys, as defined in ECMA-262 6.1.7 Array Index, throw an Invalid Package Configuration error.
-
For each property p of target, in object insertion order,
-
If p equals "default" or conditions contains an entry for p, then
- Let targetValue be the value of the p property in target.
- Let resolved be the result of PACKAGE_TARGET_RESOLVE( packageURL, targetValue, subpath, internal, conditions).
- If resolved is equal to undefined, continue the loop.
- Return resolved.
- Return undefined.
-
Otherwise, if target is an Array, then
- If target.length is zero, return null.
-
For each item targetValue in target, do
- Let resolved be the result of PACKAGE_TARGET_RESOLVE( packageURL, targetValue, subpath, internal, conditions), continuing the loop on any Invalid Package Target error.
- If resolved is undefined, continue the loop.
- Return resolved.
- Return or throw the last fallback resolution null return or error.
- Otherwise, if target is null, return null.
- Otherwise, throw an Invalid Package Target error.
ESM_FORMAT(url)
- Assert: url corresponds to an existing file.
- Let pjson be the result of READ_PACKAGE_SCOPE(url).
- If url ends in ".mjs", then return "module".
- If url ends in ".cjs", then return "commonjs".
-
If pjson?.type exists and is "module", then
- If url ends in ".js", then return "module".
- Throw an Unsupported File Extension error.
-
Otherwise,
- If url ends in ".js", then return "commonjs".
- Throw an Unsupported File Extension error.
READ_PACKAGE_SCOPE(url)
- Let scopeURL be url.
-
While scopeURL is not the file system root,
- Set scopeURL to the parent URL of scopeURL.
- If scopeURL ends in a "node_modules" path segment, return null.
- Let pjson be the result of READ_PACKAGE_JSON(scopeURL).
-
If pjson is not null, then
- Return pjson.
- Return null.
READ_PACKAGE_JSON(packageURL)
- Let pjsonURL be the resolution of "package.json" within packageURL.
-
If the file at pjsonURL does not exist, then
- Return null.
-
If the file at pjsonURL does not parse as valid JSON, then
- Throw an Invalid Package Configuration error.
- Return the parsed JSON source of the file at pjsonURL.
The current specifier resolution does not support all default behavior of the CommonJS loader. One of the behavior differences is automatic resolution of file extensions and the ability to import directories that have an index file.
The --experimental-specifier-resolution=node flag can be used to enable the legacy CommonJS-style specifier resolution:

$ node index.mjs
success!
$ node index
# Failure! Error: Cannot find module
$ node --experimental-specifier-resolution=node index
success!
© Joyent, Inc. and other Node contributors
Licensed under the MIT License.
Node.js is a trademark of Joyent, Inc. and is used with its permission.
We are not endorsed by or affiliated with Joyent.
Search with Everyday Words Using Einstein Natural Language Search (Beta)
Natural language search lets users enter common words and phrases in the search box to find the records that they want. Natural language search is supported for accounts, cases, contacts, leads, and opportunities.
Where: This change applies to Lightning Experience in Enterprise, Performance, and Unlimited editions.
Who: Natural language search requires the Einstein Search permission set license.
Why: Everyday words together with certain objects and relative time conditions let Einstein Search apply these words as filters to your Salesforce org’s records.
For example, if your sales rep enters my closed cases last year, Einstein Search interprets those words as a person does. The results show every closed case owned by that sales rep in the last year. Examples of natural language search are available in the Help documentation.
[Image: natural language search results for my closed cases]
User management¶
- Adding or removing an administrative user
- Allowing and restricting new user registration
- Resetting a user password
- Managing permissions
- Viewing a list of users
- Viewing a list of currently active users
- Viewing a user profile
- Sending a system message
- Moving a project to another compute node
- Deleting a user
- Deleting a project | https://docs.anaconda.com/ae-notebooks/admin-guide/user-mgmt/ | 2021-02-25T07:52:34 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.anaconda.com |
Managing instance arrays¶
This is often the very first step in using the Metal Cloud. Make sure you have created an account with us by signing up here
Deploying an instance array using the Infrastructure Editor¶
The MetalCloud servers (called Instances) are groupped in InstanceArrays. By default an infrastructure is created for you called “my-infrastructure” in a datacenter geographically close to you.
Click on the
Create your first InstanceArray
Select your configuration, number of servers, operating system, drive size and boot type.
Alter firewall rules
By default all traffic is blocked except if it originates from what our systems detects as being your IP. You need to explicitly enable additional IPs or ports before you deploy.
Deploy the infrastructure
Operations in the MetalCloud are not immediately deployed. In fact they can be reverted until the infrastructure is "Deployed". Click on the big "Deploy" button at the bottom of the screen.
Retrieving server access credentials using the UI¶
Once an infrastructure is active you can access the server’s credentials by clicking on the instance array.
This will pop-up the access credentials window:
Here you can find, for each instance (server):
- the quick ssh access link
- root password
Clicking on an instance opens up further information:
The host can be any one of the:
- Hosts’ public ip address (89..)
- Hosts’s permanent DNS entry:
instance-58417.bigstep.io
- Host’s long-form DNS entry:
instance-58417.instance-array-37139.vanilla.my-infrastructure.8186.bigstep.io
Both of them are Type A DNS entries and they point to the same IP address.
Note: It is recommended that you register your public SSH key in the Account settings section so that it gets automatically added on the hosts at deploy time.
Deploying an instance array using the CLI¶
This tutorial uses the CLI. Visit using the CLI for more details.
List available templates
$ metalcloud-cli volume-template list Volume templates I have access to as user [email protected]: +-------+-----------------------------------------+----------------------------------+-------+---------------------------+-----------+ | ID | LABEL | NAME | SIZE | STATUS | FLAGS | +-------+-----------------------------------------+----------------------------------+-------+---------------------------+-----------+ | 6 | ubuntu-12-04 | Ubuntu-12.04 | 40960 | deprecated_deny_provision | | | 13 | centos6-5 | CentOS6.5 | 40960 | deprecated_allow_expand | | | 14 | centos6-6 | CentOS6.6 | 41000 | deprecated_allow_expand | | | 18 | centos71v1 | CentOS 7.1 | 40960 | deprecated_allow_expand | | +-------+-----------------------------------------+----------------------------------+-------+---------------------------+-----------+ Total: 4 Volume templates
Provision an Instance
$ metalcloud-cli instance-array create -boot pxe_iscsi -firewall-management-disabled -infra demo -instance-count 1 -label gold
Add a drive array to the instance
Use the ID of the template, for instance 18 for CentOS 7.1
$ metalcloud-cli drive-array create -ia gold -infra demo -size 100000 -label gold-da -template 18
Deploy the infrastructure
$ metalcloud-cli infrastructure deploy -id demo
Retrieving server access credentials using the CLI¶
To retrieve your SSH credentials use the following command:
$ metalcloud-cli instance-array get -id gold -show-credentials
Checking the power status of all the instances in this instance array using the CLI¶
To retrieve your instances status use:
metalcloud-cli instance-array get -id gold -show-power-status
# About Cognite BestDay
Cognite BestDay runs on top of Cognite Data Fusion (CDF) and is a central hub for decision support, providing a more continuous, data-driven approach to production optimization in your day-to-day operations. Detect and assess deviations and underperformance and act to boost production and reduce production losses by knowing your maximum achievable production and use it as a baseline for visualizing inefficiencies.
Cognite BestDay gives access to real-time and historical data, where different disciplines and shifts can have one common collaboration platform for decision making, knowing and understanding the production targets, be notified on abnormalities, log actions, as well as adding comments to the data and time ranges discussed.
# Maximum production capacity
To optimize production, you need to know your maximum production capacity to have an objective and comparable baseline-calculation across assets. The full calculation with criterias and constraints provides transparency to the operator and production engineer around their production targets.
BestDay calculates your maximum production capacity based on your:
- historical actual production and experienced deferments
- defined criteria or production constraints, for instance, regulatory constraints for the allowed concentration of oil in disposed water or the power consumption per produced barrel.
BestDay uses a statistical approach with configurable conditions and constraints and provides transparency into the calculations on different network levels from wells to topside processing facilities. Configurable data science models identify production deviations and search scheduled deferments to propose potential links or explanations.
What's next:
Check the Changelog to see recent changes.
Introduction":"secret","storage":"kong","cookie_secure":false} admin_listen = 0.0.0.0:8001, 0.0.0.0:8444 ssl. | https://docs.konghq.com/enterprise/2.3.x/start-kong-securely/ | 2021-02-25T07:51:56 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.konghq.com |
In this Quick Start guide, we walk through the process of setting up an Unreal Engine Project to work with a professional video card from AJA Video Systems. At the end of this guide:
you'll have video input from your AJA card playing inside your Unreal Engine Project.
you'll be able to capture camera viewpoints both from the Editor and from your runtime application, and send them out to an SDI port on your AJA card.
you'll know where to go when you want to set up more advanced adjustments to your video inputs, such as correcting lens deformation and applying chroma-keying effects.
For a working example that shows many of the elements described below put into practice, see the Virtual Studio showcase, available on the Learn tab of the Epic Games Launcher.
Prerequisites:
Make sure that you have a supported card from AJA Video Systems, and that you've installed the necessary drivers and software. For details, see the AJA Media Reference page.
Make sure that your card is working correctly, and that you have some video input feeding in to at least one of the card's SDI ports.
Open an Unreal Engine Project that you want to integrate with your video feeds. This page shows the steps in the Third Person Blueprint template, but the same steps will work equally well in any Project.
The AJA Media components used in this guide are built on top of the Media Framework , and we'll use Blueprints to script the video capturing process at runtime. Some prior knowledge of these topics is recommended but not required.
1 - Set Up the Project
Before you can get video input from your AJA card into your Unreal Engine Level, and send output from the Unreal Engine through one of your AJA card's SDI ports, you'll need to do some basic setup to enable the AJA Media Player Plugin in your Project.
If you started your Unreal Engine Project from one of the Templates in the Film, Television, and Live Events category, the necessary plugins may already be enabled for you. If not, follow the instructions below to enable them.
Steps
Open the Project that you want to use with AJA video I/O in the Unreal Editor.
From the main menu, select Edit > Plugins.
In the Plugins window, find the AJA Media Player plugin under the Media Players category. Check its Enabled checkbox.
Find the Media Framework Utilities Plugin under the Media category. Check its Enabled checkbox, if it's not already checked.
Click Restart Now to restart the Unreal Editor and reopen your Project.
End Result
Your Project is now ready to accept video from the AJA card, and to send rendered output to the card. In the next sections, we'll hook it up and start playing video in and out.
2 - Rendering Video Input in the Unreal Engine
In this process, we'll make video input from the AJA card visible in the current Level in the Unreal Editor. This process uses a Media Bundle: a kind of Asset that packages together several different types of Assets involved in the Media Framework, and that offers you control over some advanced features like lens deformation, chroma-keying, color correction, and more.
Steps
In your Content Browser, expand the Sources panel (1). Right-click, and choose New Folder from the context menu (2).
Rename your new folder AJA.
Open your new folder, right-click in the Content Browser and choose Media > Media Bundle.
Your new Asset's name is automatically selected in the Content Browser, so you can give it a descriptive name:
Type a new name, like AjaMediaBundle, and press Enter. A new folder of Media Framework Assets is automatically created next to your Media Bundle, named with the suffix _InnerAssets.
Save your new Assets by clicking the Save All button in the Content Browser.
Double-click your new Media Bundle to edit its properties. The Media Bundle is capable of playing video from any kind of media source the Engine supports, so you'll need to tell it that you want to get the video from your AJA card.
In the Media Source property, select Aja Media Source from the drop-down list:
Once you've identified the type of Media Source you want the Media Bundle to handle, you can then set up any configuration properties offered by that type of source.
The most important thing to set here for the Aja Media Source is the Configuration setting, to make sure that the bundle is set up to capture video from the right device and input port, using the same resolution and frame rate as the actual video feed. Click the arrow to open the settings submenu, select the options that match your setup, then click Apply in the submenu.
The options you see may vary depending on the devices you have installed. For details on all of the properties you can set for an AJA Media Source, see the AJA Media Reference page.
If you want to apply any compensation to the incoming video to account for lens distortion, you can set up the physical properties of the lens in the Lens Parameters section.
These Lens Parameters just set up the physical properties of the lens. You'll actually activate the lens compensation later, when you edit the Material Instance used by the Media Bundle.
Save your Media Bundle when you're done setting up its properties, and return to the AJA folder in the Content Browser.
Drag your AjaMediaBundle Asset from the Content Browser into the Level Viewport.
You'll see a new plane appear, showing the video currently being played over the port configured for your Media Bundle. Use the transform tools in the Viewport toolbar to move, rotate, and resize it.
If your Media Bundle doesn't start playing automatically, select it, then click the Media Bundle > Request Play Media button in the Details panel.
Now, we'll see how to apply keying and compositing effects to the video stream.
Back in the Media Bundle Editor, click the Open Material Editor button in the Toolbar to edit the Material Instance that this Media Bundle uses to draw its incoming video feed on to an object in the Level.
This Material Instance is saved inside the AjaMediaBundle_InnerAssets folder that was created automatically with your Media Bundle.
In the Material Instance Editor, you'll see a number of properties exposed for you to configure keying, cropping, and color correction, and to activate the correction for the lens distortion that you set up in the Media Bundle.
While you adjust the settings in the Material Instance Editor, you can see the effect of your changes on the video feed playing back in the main Level Viewport.
You may find it more convenient to see the effects of the changes you make in the preview panel of the Material Instance Editor instead. To do this, temporarily enable the IsValid setting, and set its value to
1.0.
Click the arrow at the top left of the viewport toolbar, and enable the Realtime option in the menu.
You'll be able to judge the effect of your changes more easily by changing the preview mesh to a plane or a cube. Use the controls at the bottom of the viewport:
When you're done, return the IsValid setting to its previous value.
When you're done changing the Material Instance properties, click the Save button in the Toolbar.
End Result
At this point, you should have video playing over an SDI port showing up inside your Unreal Engine Level, and you should understand where to set up more advanced features like lens deformation and chroma-keying.
If you're already familiar with the Media Framework, another way you could get video into your Level is to create a new AjaMediaSource Asset in your Project, and set it up with the same source properties you set up inside the Media Bundle in the procedure above. Then, create your own MediaPlayer and MediaTexture Assets to handle the playback of that source in your Level. For details, see the Media Framework documentation. However, we recommend using the Media Bundle, as shown above, to get the best balance between ease of use and professional quality video features.
3 - Output Captures from the Unreal Editor
In this process, you'll set up an AJA Media Output object, and use the Media Captures panel in the Unreal Editor to output the view from selected cameras in the Level to your AJA card.
Steps
Right-click in the Content Browser, and select Media > Aja Media Output.
Name your new Asset AjaMediaOutput.
Double-click your new Asset to open it up for editing. Just like when you created your Aja Media Source, you have to set up the Configuration property to control the properties of the video feed that the Unreal Engine sends to your AJA card. Click the arrow to open the submenu, select the options that match your video setup, then click Apply in the submenu.
For details on all of the properties you can set in the AJA Media Output, see the AJA Media Reference page.Save and close your Media Output when you're done.
Now we'll place two cameras in the Level, to give us viewpoints for the output we'll send to the AJA card. In the Place Actors panel, open the Cinematic tab, and drag two instances of the Cine Camera Actor into the viewport.
Place the cameras where you want them in the Level, so that they're showing different viewpoints on the scene.
Piloting the camera is a fast and easy way to set its viewpoint exactly the way you want it. See Pilot Actors in the Viewport .
From the main menu, choose Window > Media Capture. You'll use the Media Capture window to control when the Editor sends output to your AJA port, and what camera it should use in the Level.
Under the Media Viewport Capture area, find the Viewport Captures control. Click the + button to add a new capture to this list.
Expand the new entry. First, we'll add the cameras that we want to capture from. In the Locked Camera Actors control, click the + button to add a new entry.
Then, use the drop-down list to choose one of the cameras you placed in the Level.
Repeat the same steps to add the other camera to the list.
Now, set up the output that you want to capture these cameras to. Set the Media Output control to point to the new AJA Media Output Asset that you created above. You can do this by selecting it in the drop-down list, or drag your AJA Media Output Asset from the Content Browser and drop it into this slot.
At the top of the window, click the Capture button.
You'll see a new frame at the bottom of the window that shows a preview of the output being sent to the AJA card. If you have this port hooked up to another downstream device, you should start to see the output coming through.
Each camera that you added to the Locked Camera Actors list for this viewport capture is represented by a corresponding button above the video preview. Click the buttons to switch the capture back and forth between the two views.
End Result
Now you've set up the Unreal Editor to stream output from cameras in your Level to a port on your AJA card. Next, we'll see how to use Blueprint scripting to do the same thing in a running Unreal Engine Project.
4 - Output Captures at Runtime
The Media Capture window that you used in the last section is a practical and easy way to send captures to the AJA card. However, it only works inside the Unreal Editor. To do the same thing when you're running your Project as a standalone application, you'll need to use the Blueprint API provided by the Media Output. In this procedure, we'll set up a simple toggle switch in the Level Blueprint that starts and stops capturing when the player presses a key on the keyboard.
The Virtual Studio showcase, available on the Learn tab of the Epic Games Launcher, contains a UMG interface widget that demonstrates how you could control capturing from an on-screen user interface.
Steps
From the main toolbar in the Unreal Editor, choose Blueprints > Open Level Blueprint.
We'll need to start from the AJA Media Output Asset that you've created, where you identify the port you want to output to. In the Variables list in the My Blueprint panel, click the + button to add a new variable.
In the Details panel, set the Variable Name to AjaMediaOutput, and use the Variable Type drop-down list to make it an Aja Media Output Object Reference.
Enable the Instance Editable setting (1), and compile the Blueprint. Then, in the Default Value section, set the variable to point to the AJA Media Output Asset that you created in your Content Browser (2).
Press Ctrl, and drag the AjaMediaOutput from the Variables list in the My Blueprint panel into the Event Graph.
Click and drag from the output port of the AjaMediaOutput variable node, and choose Media > Output > Create Media Capture.
Hook up your nodes to the Event BeginPlay node as shown below:
This creates a new Media Capture object from your Aja Media Output. The Media Capture offers two main Blueprint functions that we'll use to control the capturing: Capture Active Scene Viewport and Stop Capture.
First, we'll save the new Media Capture object into its own variable, so we can get access to it again elsewhere. Click and drag from the output port of the Create Media Capture node, and choose Promote to Variable.
Rename the new variable MediaCapture in the Variables list in the My Blueprint panel.
It's important to save the Media Capture to a variable here. If you don't, the Unreal Engine's garbage collector may destroy it automatically before you're done with it.
Press Ctrl and drag the MediaCapture variable into the Event Graph.
Click and drag from the output port of the MediaCapture variable node, and choose Media > Output > Capture Active Scene Viewport. Do it again, and choose Media > Output > Stop Capture.
Right-click in the Event Graph and choose Input > Keyboard Events > P. Click and drag from the Pressed output of the P node and choose Flow Control > FlipFlop.
Connect the A output of the FlipFlop node to the input event of the Capture Active Scene Viewport node, and connect the B output of the FlipFlop node to the input event of the Stop Capture node, as shown below:
Compile and save the Blueprint, and try playing your Project. Click the arrow next to the Play button in the main Toolbar, and choose either the New Editor Window (PIE) or Standalone Game option.
After your project starts up, you should be able to press the P button on your keyboard to toggle sending the output from the Engine to the AJA card.
End Result
At this point, you should have a basic idea of how to work with Aja Media Sources, Media Bundles, and the Media Capture system, and you should understand how all of these elements work together to get professional video in and out of the Unreal Engine.
On Your Own
Now that you've seen the basics of how to get a new Project exchanging video input and output with an AJA card, you can continue learning on your own:
Explore the in-engine keying solution in the Material Instance created by your Media Bundle. Try passing some green-screen video into your card's input port, and use the keying controls in the Material Instance to remove the background.
Explore the Virtual Studio showcase to see what it adds to this basic setup, like its on-screen UI that switches cameras and controls video capture at runtime.
A stream is an abstract interface for working with streaming data in Node.js. The stream module provides an API for implementing the stream interface.
This document contains two primary sections and a third section for notes. The first section explains how to use existing streams within an application. The second section explains how to create new types of streams.
There are four fundamental stream types within Node.js:
Writable: streams to which data can be written (for example,
fs.createWriteStream()).
Readable: streams from which data can be read (for example,
fs.createReadStream()).
Duplex: streams that are both
Readableand
Writable(for example,
net.Socket).
Transform:
Duplexstreams that can modify or transform the data as it is written and read (for example,
zlib.createDeflate()).
Additionally, this module includes the utility functions
stream.pipeline(),
stream.finished() and
stream.Readable.from().
Both
Writable and
Readable streams will store data in an internal buffer that can be retrieved using
writable.writableBuffer or
readable.readableBuffer, respectively.
The amount of data potentially buffered depends on the
highWaterMark option passed into the stream's constructor.
Writable streams are an abstraction for a destination to which data is written.
Examples of
Writable streams include:
process.stdout, process.stderr
All Writable streams implement the interface defined by the stream.Writable class. While specific instances of Writable streams may differ in various ways, all Writable streams follow the same fundamental usage pattern, as illustrated in the example below:

const myStream = getWritableStreamSomehow();
myStream.write('some data');
myStream.write('some more data');
myStream.end('done writing data');

stream.Writable
'close'
The
'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.
A
Writable stream will always emit the
'close' event if it is created with the
emitClose option.
'drain'
If a call to stream.write(chunk) returns false, the 'drain' event is emitted when it is appropriate to resume writing data to the stream.
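A sketch of the usual backpressure pattern built around 'drain' (the writer here is any Writable stream):

function writeOneMillionTimes(writer, data, encoding, callback) {
  let i = 1000000;
  write();
  function write() {
    let ok = true;
    do {
      i--;
      if (i === 0) {
        // Last time: pass the callback.
        writer.write(data, encoding, callback);
      } else {
        // Check whether we should continue or wait.
        // Don't pass the callback, because we're not done yet.
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // Had to stop early; write some more once it drains.
      writer.once('drain', write);
    }
  }
}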
'error'
The
'error' event is emitted if an error occurred while writing or piping data. The listener callback is passed a single
Error argument when called.
The stream is closed when the
'error' event is emitted unless the
autoDestroy option was set to
false when creating the stream.
After
'error', no further events other than
'close' should be emitted (including
'error' events).
'pipe'
src <stream.Readable> source stream that is piping to this writable
The 'pipe' event is emitted when the stream.pipe() method is called on a readable stream, adding this writable to its set of destinations.

const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('pipe', (src) => {
  console.log('Something is piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
writable.destroy([error])
error <Error> Optional, an error to emit with 'error' event.
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false).

writable.end([chunk[, encoding]][, callback])
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the
stream.write() method after calling
stream.end() will raise an error.
// Write 'hello, ' and then end with 'world!'.
const fs = require('fs');
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
writable.setDefaultEncoding(encoding)
encoding <string> The new default encoding
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

writable.writableHighWaterMark
Return the value of
highWaterMark passed when constructing this
Writable.
writable.writableLength
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark.
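For instance, a small sketch of using these properties for introspection before writing more data (getWritableStreamSomehow() is a placeholder, as in the other examples):

const writer = getWritableStreamSomehow();
if (writer.writableLength < writer.writableHighWaterMark) {
  // There is still room in the internal buffer.
  writer.write('more data');
}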
writable.writableObjectMode
Getter for the property
objectMode of a given
Writable stream.
writable.write(chunk[, encoding][, callback])
chunk <string> | <Buffer> | <Uint8Array> | <any> Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer or Uint8Array. For object mode streams, chunk may be any JavaScript value other than null.
encoding <string> | <null> The encoding, if chunk is a string. Default: 'utf8'
callback <Function> Callback for when this chunk of data is flushed.
Returns: <boolean> false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback may or may not be called with the error as its first argument. To reliably detect write errors, add a listener for the 'error' event. The callback is called asynchronously and before 'error' is emitted.
Readable streams are an abstraction for a source from which data is consumed.
Examples of
Readable streams include:
process.stdin
All
Readable streams implement the interface defined by the
stream.Readable class.
Readable streams effectively operate in one of two modes: flowing and paused. These modes are separate from object mode. A
Readable stream can be in object mode or not, regardless of whether it is in flowing mode or paused mode.
All Readable streams begin in paused mode but can be switched to flowing mode in one of the following ways:
Adding a 'data' event handler.
Calling the stream.resume() method.
Calling the stream.pipe() method to send the data to a Writable.
The
Readable can switch back to paused mode using one of the following:
If there are no pipe destinations, by calling the stream.pause() method.
If there are pipe destinations, by removing all pipe destinations. Multiple pipe destinations may be removed by calling the stream.unpipe() method.
The important concept to remember is that a
Readable will not generate data until a mechanism for either consuming or ignoring that data is provided. If the consuming mechanism is disabled or taken away, the
Readable will attempt to stop generating the data.
For backward compatibility reasons, removing
'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling
stream.pause() will not guarantee that the stream will remain paused once those destinations drain and ask for more data..
Adding a
'readable' event handler automatically makes the stream stop flowing, and the data has to be consumed via
readable.read(). If the
'readable' event handler is removed, then the stream will start flowing again if there is a
'data' event handler.
The "two modes" of operation for a
Readable stream are a simplified abstraction for the more complicated internal state management that is happening within the
Readable stream implementation.
Specifically, at any given point in time, every
Readable is in one of three possible states:
readable.readableFlowing === null
readable.readableFlowing === false
readable.readableFlowing === true
When
readable.readableFlowing is
null, no mechanism for consuming the stream's data is provided. Therefore, the stream will not generate data. While in this state, attaching a listener for the
'data' event, calling the
readable.pipe() method, or calling the
readable.resume() method will switch
readable.readableFlowing to
true, causing the
Readable to begin actively emitting events as data is generated.
Calling
readable.pause(),
readable.unpipe(), or receiving backpressure will cause the
readable.readableFlowing to be set as
false, temporarily halting the flowing of events but not halting the generation of data. While in this state, attaching a listener for the
'data' event will not switch
readable.readableFlowing to
true.
const { PassThrough, Writable } = require('stream');
const pass = new PassThrough();
const writable = new Writable();

pass.pipe(writable);
pass.unpipe(writable);
// readableFlowing is now false.

pass.on('data', (chunk) => { console.log(chunk.toString()); });
pass.write('ok');  // Will not emit 'data'.
pass.resume();     // Must be called to make stream emit 'data'.
While
readable.readableFlowing is
false, data may be accumulating within the stream's internal buffer..
stream.Readable
'data'
chunk <Buffer> | <string> | <any> The chunk of data.
The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer. Attaching a 'data' event listener to a stream that has not been explicitly paused will switch the stream into flowing mode.

const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});
'end'
The
'end' event is emitted when there is no more data to be consumed from the stream..
'pause'
The
'pause' event is emitted when
stream.pause() is called and
readableFlowing is not
false.
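A small sketch of observing this event (getReadableStreamSomehow() is a placeholder):

const readable = getReadableStreamSomehow();
readable.on('pause', () => {
  console.log('The stream was paused.');
});
readable.on('data', (chunk) => {
  readable.pause(); // Emits 'pause'.
  setTimeout(() => readable.resume(), 1000);
});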
'readable'
The 'readable' event is emitted when there is data available to be read from the stream or when the end of the stream has been reached:

const readable = getReadableStreamSomehow();
readable.on('readable', function() {
  // There is some data to read now.
  let data;
  while (data = this.read()) {
    console.log(data);
  }
});
In general, the
readable.pipe() and
'data' event mechanisms are easier to understand than the
'readable' event. However, handling
'readable' might result in increased throughput.
If both
'readable' and
'data' are used at the same time,
'readable' takes precedence in controlling the flow, i.e.
'data' will be emitted only when
stream.read() is called. The
readableFlowing property would become
false. If there are
'data' listeners when
'readable' is removed, the stream will start flowing, i.e.
'data' events will be emitted without calling
.resume().
'resume'
The
'resume' event is emitted when
stream.resume() is called and
readableFlowing is not
true.
readable.destroy([error])
error <Error> Error which will be passed as payload in 'error' event
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false).

readable.pipe(destination[, options])
destination <stream.Writable> The destination for writing data
options <Object> Pipe options
end <boolean> End the writer when the reader ends. Default: true.
Returns: <stream.Writable> The destination, allowing for a chain of pipes if it is a Duplex or a Transform stream.
The readable.pipe() method attaches a Writable stream to the readable, causing it to switch automatically into flowing mode and push all of its data to the attached Writable.

const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'.
readable.pipe(writable);

One important caveat is that if the Readable stream emits an error during processing, the Writable destination is not closed automatically. If an error occurs, it will be necessary to manually close each stream in order to prevent memory leaks.
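By default the destination is ended when the source emits 'end' (the end option above defaults to true). A sketch of keeping the destination open instead, so that more data can be written afterwards:

const reader = getReadableStreamSomehow();
const writer = fs.createWriteStream('file.txt');
reader.pipe(writer, { end: false });
reader.on('end', () => {
  // The writer stays open, so a final chunk can still be written.
  writer.end('Goodbye\n');
});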
The
process.stderr and
process.stdout
Writable streams are never closed until the Node.js process exits, regardless of the specified options.
readable.read([size])
size <number> Optional argument to specify how much data to read.
The readable.read() method pulls some data out of the internal buffer and returns it. If no data is available to be read, null is returned. The readable.read() method should only be called on Readable streams operating in paused mode; in flowing mode, it is called automatically until the internal buffer is fully drained.
A
Readable stream in object mode will always return a single item from a call to
readable.read(size), regardless of the value of the
size argument.
If the
readable.read() method returns a chunk of data, a
'data' event will also be emitted.
Calling
stream.read([size]) after the
'end' event has been emitted will return
null. No runtime error will be raised.
readable.readable.readableHighWaterMark
Returns the value of
highWaterMark passed when constructing this
Readable.
readable.readableLength
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark.
readable:
getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume() method has no effect if there is a
'readable' event listener.
readable.setEncoding(encoding)
encoding<string> The encoding to use. fs = require('fs');[,.)
stream<Stream> An "old style" readable stream
Prior to Node.js 0.10, streams.
const { OldReader } = require('./old-api-module.js'); const { Readable } = require('stream'); const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
readable[Symbol.asyncIterator]()
const fs = require('fs'); async function print(readable) { readable.setEncoding('utf8'); let data = ''; for await (const chunk of readable) { data += chunk; } console.log(data); } print(fs.createReadStream('file')).catch(console.error);
If the loop terminates with a
break or a
throw, the stream will be destroyed. In other terms, iterating over a stream will consume the stream fully. The stream will be read in chunks of size equal to the
highWaterMark option. In the code example above, data will be in a single chunk if the file has less then 64KB of data because no
highWaterMark option is provided to
fs.createReadStream().
stream.Duplex
Duplex streams are streams that implement both the
Readable and
Writable interfaces.
Examples of
Duplex streams include:
stream.Transform
Transform streams are
Duplex streams where the output is in some way related to the input. Like all
Duplex streams,
Transform streams implement both the
Readable and
Writable interfaces.
Examples of
Transform streams include:
transform.destroy([error])
error<Error>
Destroy the stream, and.
A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.
const { finished } = require('stream'); const rs = fs.createReadStream('archive.tar'); finished(rs, (err) => { if (err) { console.error('Stream failed.', err); } else { console.log('Stream is done reading.'); } }); rs.resume(); // Drain the stream.
Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit
'end' or
'finish'.
The
finished API is promisify-able as well;
const finished = util.promisify(stream.finished); const rs = fs.createReadStream('archive.tar'); async function run() { await finished(rs); console.log('Stream is done reading.'); } run().catch(console.error); rs.resume(); // Drain the stream.
stream>
...transforms<Stream> | <Function>
source<AsyncIterable>
destination<Stream> | <Function>
source<AsyncIterable>
callback<Function> Called when the pipeline is fully done.
err<Error>
valResolved value of
Promisereturned by
destination.
A module method to pipe between streams and generators forwarding errors and properly cleaning up and provide a callback when the pipeline is complete.
const { pipeline } = require('stream'); const fs = require('fs'); const zlib = require('zlib'); // Use the pipeline API to easily pipe a series of streams // together and get notified when the pipeline is fully done. // A pipeline to gzip a potentially huge tar file efficiently: pipeline( fs.createReadStream('archive.tar'), zlib.createGzip(), fs.createWriteStream('archive.tar.gz'), (err) => { if (err) { console.error('Pipeline failed.', err); } else { console.log('Pipeline succeeded.'); } } );
The
pipeline API is promisify-able as well:
const pipeline = util.promisify(stream.pipeline); async function run() { await pipeline( fs.createReadStream('archive.tar'), zlib.createGzip(), fs.createWriteStream('archive.tar.gz') ); console.log('Pipeline succeeded.'); } run().catch(console.error); reasons.({ options.
The new stream class must then implement one or more specific methods, depending on the type of stream being created, as detailed in the chart below:
The implementation code for a stream should never call the "public" methods of a stream that are intended for use by consumers (as described in the API for stream consumers section). Doing so may lead to adverse side effects in application code consuming the stream..
For many simple cases, it is possible to construct a stream without relying on inheritance. This can be accomplished by directly creating instances of the
stream.Writable,
stream.Readable,
stream.Duplex or
stream.Transform objects and passing appropriate methods as constructor options.
const { Writable } = require('stream'); const myWritable = new Writable({ write(chunk, encoding, callback) { // ... } });
The
stream.Writable class is extended to implement a
Writable stream.
Custom
Writable streams must call the
new stream.Writable([options]) constructor and implement the
writable._write()'.
objectMode<boolean> Whether or not the
stream.write(anyObj)is a valid operation. When set, it becomes possible to write JavaScript values other than string,
Bufferor
Uint8Arrayif supported by the stream implementation. Default:
false.
emitClose<boolean> Whether or not the stream should emit
'close'after it has been destroyed. Default:
true.
write<Function> Implementation for the
stream._write()method.
writev<Function> Implementation for the
stream._writev()method.
destroy<Function> Implementation for the
stream._destroy()method.
final<Function> Implementation for the
stream._final()method.
autoDestroy<boolean> Whether this stream should automatically call
.destroy()on itself after ending. Default:
true.) explicitly set to
false in the constructor options, then
chunk will remain the same object that is passed to
.write(), and may be a string rather than a
Buffer. This is to support implementations that have an optimized handling for certain string data encodings. In that case, the
encoding argument will indicate the character encoding of the string. Otherwise, the
encoding argument can be safely ignored.
The
writable._write() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
writable._writev(chunks, callback)
chunks<Object[]> The chunks to be written. Each chunk has following format:
{ chunk: ..., encoding: ... }.
callback<Function> A callback function (optionally with an error argument) to be invoked when processing is complete for the supplied chunks.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal
Writable class methods only.
The
writable._writev() method may be implemented in addition or alternatively to
writable._write() in stream implementations that are capable of processing multiple chunks of data at once. If implemented and if there is buffered data from previous writes,
_writev() will be called instead of
_write().. { _write(chunk, encoding, callback) { if (chunk.toString().indexOf('a') >= 0) { callback(new Error('chunk is invalid')); } else { callback(); } } }); this._decoder = new StringDecoder(options && optionsof size
n. Default:.)
size<number> Number of bytes to read asynchron.
Once the
readable._read() method has been called, it will not be called again until more data is pushed through the
readable.push() method. Empty data such as empty buffers and strings will not cause
readable._read() to
Bufferencoding, such as
'utf8'or
'ascii'.
trueif additional chunks of data may continue(); } }
The
readable.push() method is { // Do some work. } } }); = String(i); const buf = Buffer.from(str, 'ascii'); this.push(buf); } } }and
Readableconstructors. Also has the following fields:
allowHalfOpen<boolean> If set to
false, then the stream will automatically end the writable side when the readable side ends. Default:
true.
readable<boolean> Sets whether the
Duplexshould be readable. Default:
true.
writable<boolean> Sets whether the
Duplexshould be writable.
A
Transform stream is a
Duplex stream where the output is computed in some way from the input. Examples include zlib streams or crypto streams that compress, encrypt, or decrypt data..
Care must be taken when using
Transform streams in that data written to the stream can cause the
Writable side of the stream to become paused if the output on the
Readable side is not consumed.
new stream.Transform([options])
options<Object> Passed to both
Writableand
Readableconstructors. Also has the following fields:
transform<Function> Implementation for the
stream._transform()method.
flush<Function> Implementation for the
stream._flush()method.) { // ... } });
'end'
The
'end' event is from the
stream.Readable class. The
'end' event is emitted after all data has been output, which occurs after the callback in
transform._flush() has been called. In the case of an error,
'end' should not be emitted.
'finish'
The
'finish' event is from the
stream.Writable class. The
'finish' event is emitted after
stream.end() is called and all chunks have been processed by
stream._transform(). In the case of an error,
'finish' should not be emitted.
transform._flush(callback)
callback<Function> A callback function (optionally with an error argument and data) to be called when remaining data has been flushed.
transform
Bufferto be transformed, this is the encoding type. If chunk is a buffer, then this is the special value
'buffer'. Ignore it in that case.
callback<Function> A callback function (optionally with an error argument and data) to be called after the supplied
chunkhas been processed.
transform
transform.
With the support of async generators and iterators in JavaScript, async generators are effectively a first-class language-level stream construct at this point.
Some common interop cases of using Node.js streams with async generators and async iterators are provided below.
(async function() { for await (const chunk of readable) { console.log(chunk); } })();
Async iterators register a permanent error handler on the stream to prevent any unhandled post-destroy errors.
We can construct a Node.js readable stream from an asynchronous generator using the
Readable.from() utility method:
const { Readable } = require('stream'); async function * generate() { yield 'a'; yield 'b'; yield 'c'; } const readable = Readable.from(generate()); readable.on('data', (chunk) => { console.log(chunk); }););
Prior to Node.js 0.10, the
Readable stream interface was simpler, but also less powerful and less useful.
stream.read()method,
'data'events would begin emitting immediately. Applications that would need to perform some amount of work to decide how to handle data were required to store read data into buffers so the data would not be lost.
stream.pause()method was advisory, rather than guaranteed. This meant that it was still necessary to be prepared to receive
'data'events even when the stream was in a paused state.
In Node.js 0.10, the
Readable class was added. For backward:
'data'event listener is added.
stream.resume()method is never called.
For example, consider the following code:
// WARNING! BROKEN! net.createServer((socket) => { // We add an 'end' listener, but never consume the data. socket.on('end', () => { // It will never get here. socket.end('The message was received but was not processed.\n'); }); }).listen(1337);
Prior to Node.js 0.10, the incoming message data would be simply discarded. However, in Node.js 0-0discrepancy.
© Joyent, Inc. and other Node contributors
Licensed under the MIT License.
Node.js is a trademark of Joyent, Inc. and is used with its permission.
We are not endorsed by or affiliated with Joyent. | https://docs.w3cub.com/node~14_lts/stream | 2021-02-25T08:11:02 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.w3cub.com |
Example - Developing custom form controls
The following example demonstrates how to create a form control that allows users to choose a color from a drop-down list. You can use the same basic approach to create any type of custom form control.
Defining the code of a custom form control
- Open your web project in Visual Studio (using the WebSite.sln or WebApp.sln file).
- Right-click the CMSFormControls folder and choose Add -> New Item.
- Create a new Web User Control and call it ColorSelector.ascx.
- Always use the CMSFormControls folder (or a sub-folder) to store the source files of custom form controls. The location ensures that the files are exported along with registered form controls in the system.
- Add a standard DropDownList control onto the user control's form:
Set the DropDownList's ID property to drpColor.
<asp:DropDownList</asp:DropDownList>
Switch to the code behind and add a reference to the following namespaces:
using System; using System.Web.UI.WebControls; using CMS.FormControls; using CMS.Helpers;
Make the user control class inherit from FormEngineUserControl:
public partial class CMSFormControls_ColorSelector : FormEngineUserControl
Add the following members into the class:
/// <summary> /// Gets or sets the value entered into the field, a hexadecimal color code in this case. /// </summary> public override object Value { get { return drpColor.SelectedValue; } set { // Selects the matching value in the drop-down EnsureItems(); drpColor.SelectedValue = System.Convert.ToString(value); } } /// <summary> /// Property used to access the Width parameter of the form control. /// </summary> public int SelectorWidth { get { return ValidationHelper.GetInteger(GetValue("SelectorWidth"), 0); } set { SetValue("SelectorWidth", value); } } /// <summary> /// Returns an array of values of any other fields returned by the control. /// </summary> /// <returns>It returns an array where the first dimension is the attribute name and the second is its value.</returns> public override object[,] GetOtherValues() { object[,] array = new object[1, 2]; array[0, 0] = "ProductColor"; array[0, 1] = drpColor.SelectedItem.Text; return array; } /// <summary> /// Returns true if a color is selected. Otherwise, it returns false and displays an error message. /// </summary> public override bool IsValid() { if ((string)Value != "") { return true; } else { // Sets the form control validation error message this.ValidationError = "Please choose a color."; return false; } } /// <summary> /// Sets up the internal DropDownList control. /// </summary> protected void EnsureItems() { // Applies the width specified through the parameter of the form control if it is valid if (SelectorWidth > 0) { drpColor.Width = SelectorWidth; } // Generates the options in the drop-down list if (drpColor.Items.Count == 0) { drpColor.Items.Add(new ListItem("(select color)", "")); drpColor.Items.Add(new ListItem("Red", "#FF0000")); drpColor.Items.Add(new ListItem("Green", "#00FF00")); drpColor.Items.Add(new ListItem("Blue", "#0000FF")); } } /// <summary> /// Handler for the Load event of the control. /// </summary> protected void Page_Load(object sender, EventArgs e) { // Ensure drop-down list options EnsureItems(); }
The above code overrides three members inherited from the FormEngineUserControl class that are most commonly used when developing form controls:
- Value - it is necessary to override this property for every form control. It is used to get and set the value of the field provided by the control.
- GetOtherValues() - this method is used to set values for other fields of the object in which the form control is used. It must return a two dimensional array containing the names of the fields and their assigned values. Typically used for multi‑field form controls that need to store data in multiple database columns, but only occupy a single field in the form.
- IsValid() - this method is used to implement validation for the values entered into the field. It must return true or false depending on the result of the validation.
The SelectorWidth property provides a way to access the value of a parameter that will be defined for the form control later in the example. The value of the property is used in the EnsureItems() method to set the width of the internal drop-down list.
Tip: You can access the settings of the field to which the form control is assigned through the FieldInfo property of the form control (inherited from the FormEngineUserControl class). For example:
// Checks whether the field using the form control is Required if (this.FieldInfo.AllowEmpty)
- Save the both code files. Build your project if it is installed as a web application.
Registering the custom form control in the system
- Log into the Kentico administration interface.
- Open the Form controls application.
- Click New form control.
- Leave the Create a new form control option selected.
- Enter the following values:
- Display name: Custom color selector
- Code name: Leave the (automatic) option
- Type: Selector
- File name: ~/CMSFormControls/ColorSelector.ascx (you can click Select to choose the file)
- Click Save.
The system creates your control and the General tab of the control's editing interface opens.
- Select Text and Page types in the Control scope section.
- Click Save.
- Switch to the Properties tab.
- Click New field.
- Set the following values for the form control parameter:
- Field name: SelectorWidth
- Data type: Integer number
- Display field in the editing form: yes (checked)
- Field caption: Drop-down list width
- Form control: Text box
- Click Save.
This parameter allows users to specify the width of the color selector directly from the administration interface whenever they assign the control to a form field. The code of the form control already ensures that the value is properly applied.
Now you can test the control by placing it into a page editing form.
Placing the form control in a page editing form
- Open the Page types application.
- Edit () the Product page type.
- Select the Fields tab to access the field editor for the page type.
- Click New field.
- Set the following properties for the field:
- Field name: ProductColor
- Data type: Text
- Size: 100
Display field in the editing form: no (clear the check box)
This field stores the name of the color selected for the product. It will not be available in the editing form, the value is set automatically by the GetOtherValues() method of the ColorSelector.ascx control (notice that the Field name matches the name used in the code of the method).
- Click Save.
- Click New field again to add another field.
- Set the following parameters for this field:
- Field name: ProductHexaColor
- Data type: Text
- Size: 100
- Display attribute in editing form: yes (checked)
- Field caption: Color
Form control: Custom color selector
This field stores the hexadecimal code of the selected color. In the code of the form control, the value is handled through the Value property. The field is visible in the page's editing form according to the design of the custom form control.
- Set the width of the selected via the Drop-down list width option in the Editing control settings section. For example, enter 200. This is the SelectorWidth parameter that you defined for the form control.
- Click Save.
Result
- Open the Pages application.
- Create a new page of the Product page type (for example under the /Products section of the sample Corporate site).
- Select the Do not create an SKU option.
The page's editing form contains the new form control.
The interface of the Color page field uses the custom form control. The width of the displayed drop-down list matches the value that you entered into the form control's parameter (200). If you do not choose any color, the validation error message defined in the code of the form control appears.
Getting and setting values of other fields using the API
You can access the data of the current form through the Form property of the form control (inherited from the FormEngineUserControl class).
To retrieve the values entered into other fields, use the GetFieldValue method:
- Form.GetFieldValue(string fieldName) - returns an object containing the value of the specified field.
For example, use the following code to get the value of the ProductName field (New product is returned if the field is empty):
string productName = CMS.Helpers.ValidationHelper.GetString(Form.GetFieldValue("ProductName"), "New product");
To set the value of a field, you can use the following approach:
- Form.Data.SetValue(string fieldName, object value) - sets a value for the specified field.
To modify the value of a field before the form is validated or saved, you need to place the code inside a handler of the underlying form's OnBeforeValidate or OnBeforeSave events. For example:
protected void Page_Load(object sender, EventArgs e) { // Assigns a handler method to the OnBeforeValidate event of the form where the control is used Form.OnBeforeValidate += BeforeValidateHandler; } private void BeforeValidateHandler(object sender, EventArgs e) { // Sets a value into the form's "TextFieldName" field Form.Data.SetValue("TextFieldName", "TextFieldValue"); } | https://docs.xperience.io/k82/custom-development/developing-form-controls/example-developing-custom-form-controls | 2021-02-25T07:58:53 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.xperience.io |
Thursday, May 31, 2018
Source code repository management made simple.
Packages extends Satis, adding useful management functionality.
Download the latest release.
Packages automatically registers GitLab and GitHub project web hooks to keep Satis up to date. Packages also features a web management interface that allows for easy management of exposed packages and configured source control repositories.
The application is made up of three things: Remotes and Packages and Plugins. A Remote is somewhere that one or more projects, known as Packages, are hosted, for example GitHub or GitLab. Plugins integrate new source code hosts, or generate your Composer repository index, or generate documentation, or run unit tests, etc, when you push code to your repositories.
Packages exposes a landing page and the necessary JSON files to allow
composer to use your Packages installation
as a private repository for your Composer packages.
Packages version 3 works on a plugin based system based around source code repositories. Packages can trigger, with each code push, many automated tasks like documentation generation or code analysis. The simple event-based architecture allows easy creation of new automation tasks.
Currently implemented plugins:
GitLab integration plugin Provides project sync support and automatic webhook registration within GitLab.
GitHub integration plugin Provides project sync support and automatic webhook registration within GitHub.
Satis plugin Updates Satis when source code is updated.
Clone Project plugin Clones the source code repository to allow for further analysis locally.
Terramar Labs | http://docs.terramarlabs.com/packages/3.0/introduction | 2021-02-25T08:04:38 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.terramarlabs.com |
SketchUp Files
SketchUp files with a .SKP extension (see) can be imported as 2D data suitable for machining into a VCarve Desktop job using the File ► Import Vectors... command from the menu bar or the import vectors icon on the Drawing tab. To import data from a SketchUp file you must already have created or opened a job to import the data into.
As a SketchUp model is usually a 3D representation of the part, the SketchUp importer offers a number of options to allow you to start manufacturing the model.
We will illustrate the two main choices for how the model will be imported using the SketchUp model shown to the left.
The model shown in the screenshots is a cabinet constructed by following the instructions in the Fine Woodworking 'Google SketchUp guide for Woodworkers: The Basics' DVD which is available via the Fine Woodworking site at. Vectric have no affiliation with Fine Woodworking, we are just using screenshots of the model constructed while following their tutorials to illustrate the process of importing a SketchUp model.
Layout of Imported Data
In the first section there are two main choices for how the data from the model will be imported, 'Exploded Flat Layout' and 'Three Views - Front, Top, Side' as shown below.
Exploded Flat Layout
This option will take each component in the model and orientate it flat ready for machining.
Once this option is selected a number of sub-options also become available.
Part Orientation
This section controls what Aspire considers to be the 'top' face of each part.
Auto Orientate
If this option is selected, for each part in the model, the 'face' with the largest area based on its outer perimeter (i.e. ignoring holes etc.) is considered to be the 'top' face and the part is automatically rotated so that this face is facing upwards in Z. This strategy works very well for models which are to be manufactured from sheet goods where there are no features on particular faces which need to be on the 'top' (such as pockets).
Orientate by material
This option allows the user to control more explicitly the orientation of each part in the model. Within SketchUp the user can 'paint' the face of each component/group with a material/color of their choice to indicate which face will be orientated on top when the model is imported. When this option is selected simply chose the material which has been used to indicate the top face from the drop down list. If a part is found in the model which does not have a face with the specified material, that part will be oriented by making the largest face the top.
Gap between parts
This field lets the user specify the gap between parts when they are first imported. After importing, the nesting functions within VCarve Desktopcan be used to layout the parts with more control and across multiple sheets
Three Views - Front, Top, Side
This option will create an 'engineering drawing' style layout of the SketchUp model as shown in the screenshot below.
The size of the model is preserved and it is relatively simple to pick up dimensions for parts you are going to manufacture from the various views. The colors of the lines you see are taken from the colors of the original SketchUp layers the various parts of the model are on.
Create Circles / Arcs
SketchUp does not maintain true arc or circle information for the boundaries of its parts. This is a problem when it comes to machining as the 'polygonal' SketchUp representation can give very poor machining results. For this reason, VCarve Desktop offers the option to refit circles and arcs to imported data.
The screenshot above left shows the results of importing a part with a filleted corner and hole with these options unchecked. The 'fillet' is made up of a series of straight line segments and the circular 'hole' is actually a polygon made up of straight lines.
The screen shot above right shows the same part imported with both these options checked ✓. The 'fillet' now consists of a single smooth arc and the circular 'hole' now also consists of arcs rather than straight line segments. Both these features will machine more cleanly in this form.
Data to Import
A SketchUp model will often contain parts that you do not wish to machine (such as hinges, knobs etc.) or data which will be cut from different thicknesses of material and hence different parts need to be imported into different VCarve Desktop jobs. To allow control over what is imported you can choose to only import parts of the model which are on particular layers using this section of the dialog.
To only import data from selected layers, choose the 'import visible data on selected layers' option and click the check box next to each layer to indicate if you want to import data from that layer. Note that the number of parts on each layer is displayed next to the layer name.
It is very easy to assign different parts of the model to different layers within SketchUp to help with the import process into VCarve Desktop. The screenshot below shows the result of only importing data on the 'Door' layer from the example.
Component / Group Handling
This section of the form allows advanced handling of how 'parts' within the SketchUp model are identified and treated on import.
Group imported partses
This option is normally selected for all but the simplest models as it allows each 'part' of the model to be selected, moved and nested easily after import. You will need to ungroup the imported data after nesting etc. to allow individual features to be machined. By default, VCarve Desktop will treat each SketchUp group / component as a single part UNLESS it contains other groups or components within it, in which case each lowest level group / component will be treated as a separate part.
Items which you retain in groups can be ungrouped at any time in the usual ways.
If the right-click menu-option to Ungroup back onto original object layers is used (which is the default option when using the icon or shortcut U) then the software will place the ungrouped items back onto the original layers they were created on in SketchUp.
Keep components starting with two underscores (__) togetheres
If you have a complex model which contain 'parts' which are made up of other groups / components, you will need to do some work on your model to identify these parts for VCarve Desktop. The way this is done is by setting the name of the groups / components that you wish to be treated as a single part to start with__ (two underscore characters). For example, if you had a model of a car and you wanted the wheels / tires / hub nuts to be treated as a single part even though the Tire, Wheel and other parts were separate components, you would group the parts together and name them something like __WheelAssembly in SketchUp. When this model was imported, and VCarve Desktop reached the group/component with a name starting with __ it would treat all subsequent child objects of that object as being the same part.
Replace outer boundary (for flat jobs only!)
There is a style of 'building' with SketchUp where individual 'parts' are made up of several components 'butted' against each other. The screenshot below shows such a component. will try to create a single outer boundary and delete all the vectors which were part of this boundary. The screenshot below shows the result of importing the same data with this option checked, ✓ this time the part has been ungrouped and the outer vector selected.
This data is now ready to be machined directly. It is important to understand the limitations of this option. It can be substantially slower. Creating robust boundaries for each part can consume a lot of processing power. Any feature which shares an edge with the boundary will be deleted. If the tabs on the top of this part were to have been machined 'thinner', this approach would not have been suitable as the bottom edge of the tabs has been removed.
IMPORTANT
The new features will help a lot of SketchUp users dramatically reduce the time it takes to go from a SketchUp design to a machinable part using Vectric Software. It is important to understand though that while these options provide a useful set of tools, in many cases there will still be additional editing required to ensure the part is ready to toolpath. Understanding the options and how they work will allow the part to be designed in SketchUp with these in mind and therefore help to minimize the time to machine once the data is imported.
Note
Sketchup files will only open in the same bit version you are running e.g. A file saved in a 32 bit version of Sketchup will only open up in a 32 bit version of the software. | http://docs.vectric.com/docs/V10.5/VCarveDesktop/ENU/Help/form/sketchup-files/index.html | 2021-02-25T08:15:04 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/d5d7316096aee1954674bb49f01edabb34daa878-811x524.png',
None], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/8cb244f204b627d6ec3157d656490745fc38132a-724x716.png',
'test'], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/548651ac5c152df1e221ed9d7fdf1b70c850ce33-578x717.png',
'test'], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/f645e14f8be4bbba6e67c10ecf49f1249037fd2f-266x254.png',
'test'], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/904e8b5ec08aa65fdd5715983832a1053d516dfe-229x254.png',
'test'], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/e5e07adfbd6836abac0969b380a8c5d1a9df419b-488x487.png',
None], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/38f5daa859c9f36e7aaade9f18b4f4986bce3d7e-517x100.png',
None], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/f667213437a48be4f3023d2adccb0e303c1970c8-536x112.png',
None], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/c52d3fb9d61ed02f83f44face37accacf6f91e8e-726x130.png',
None], dtype=object)
array(['https://cdn.sanity.io/images/i7cqi3ri/10_5/aae75025bafcd69740addfc072db3bb3e0599f38-725x130.png',
None], dtype=object) ] | docs.vectric.com |
Configuring Anaconda Client¶
Anaconda Client gives you the ability to upload packages to your on-site Anaconda Repository and provides highly granular access control capabilities. The instructions below describe how to configure Client to use your local Repository instead of Anaconda Cloud.
Client configuration¶
On each machine that accesses your on-site Repository, run this command as the machine’s local user:
anaconda config --set url:<port>/api
Or, to set the default repo on a system-wide basis, run this command:
anaconda config --set url:<port>/api --site
NOTE: Replace
your.server.name with the name of your local
Repository and
<port> with the name of the port used by Repository.
The system level
config file is used only if no user-level
config file is present.
To show the system and user
config file locations and
configuration settings:
anaconda config --show
Conda configuration¶
When the above
anaconda config steps are completed, you can access
all packages and channels from the local on-site Repository instead of
the public Anaconda.org.
Users can then add individual accounts to their
.condarc file by
running the following command:
conda config --add channels USERNAME
If you still want to access certain channels from the public Anaconda.org, run:
conda config --add channels
NOTE: Replace
USERNAME with your username.
Conda channel priority¶
To set a preferred priority for the channels conda searches for package
installs, edit your
~/.condarc file and change the order. Channels
at the top are searched first.
For example:
channels: - channel -<token>/<channel2> -<channel1> - defaults
The order of search is:
- Private on-site Repository channel.
- Private Anaconda.org channel2.
- Public Anaconda.org channel1.
- Default channel on the on-site Repository.
Pip configuration¶
To install PyPI packages from your Repository, add your channel to
your
~/.pip/pip.conf configuration file.
Edit the file and add an extra-index-url entry to the global config section:
[global] extra-index-url =:<port>/pypi/USERNAME/simple
NOTE: Replace
your.server.name with the name of your local
Repository,
<port> with the name of the port used by Repository
and
USERNAME with your username.
Kerberos configuration¶
If you have enabled Kerberos authentication as described in Configuring Repository to use Kerberos, your browser and Client should be able to authenticate to Repository using Kerberos.
In macOS/Unix, configure the file
/etc/krb5.conf:
[libdefaults] default_realm = YOUR.DOMAIN [realms] YOUR.DOMAIN = { kdc = your.kdc.server } [domain_realm] your.anaconda.repository = YOUR.DOMAIN
NOTE: Replace
YOUR.DOMAIN with your domain,
your.kdc.server
with your Kerberos key distribution center (KDC) and
your.anaconda.repository with your local Repository server.
If your configuration is correct, you should be able to authenticate
using the command line tool
kinit:
kinit jsmith anaconda login
NOTE: Replace
jsmith with your username.
Browser Setup¶
Many browsers do not present your Kerberos credentials by default, to prevent leaking credentials to untrusted parties. In order to use Kerberos authentication, you must whitelist Repository as a trusted party to receive credentials.
You must restart your browser after configuring the whitelist in order for changes to be reflected.
Safari¶
Safari requires no configuration—it automatically presents your credentials without whitelisting.
Chrome¶
The AuthServerWhitelist policy must be set to
your.anaconda.repository to allow Chrome to present credentials
to Repository with the hostname
your.anaconda.repository.
Depending on your DNS configuration,
DisableAuthNegotiateCnameLookup may also be required to prevent
Chrome from canonicalizing the hostname before generating a
service name.
NOTE: Replace
your.anaconda.repository with your local Repository
server.
To configure on macOS:
defaults write com.google.Chrome AuthServerWhitelist "your.anaconda.repository"
On Linux:
mkdir -p /etc/opt/chrome/policies/managed mkdir -p /etc/opt/chrome/policies/recommended chmod -w /etc/opt/chrome/policies/managed echo '{"AuthServerWhitelist": "your.anaconda.repository"}' > /etc/opt/chrome/policies/managed/anaconda_repo_policy.json
On Windows, use Group Policy objects to set the Authentication
server whitelist setting to
your.anaconda.repository.
For more information, see Chrome’s SPNEGO authentication and administration documentation.
Firefox¶
- Navigate to the configuration page
about:config.
negotiate.
- Set the configuration item
network.negotiate-auth.trusted-uristo
your.anaconda.repository
NOTE: Replace
your.anaconda.repository with your local Repository
server. | https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/config-client/ | 2021-02-25T08:38:03 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.anaconda.com |
Azure SQL Data Warehouse Sink Connector for Confluent Platform¶
The Azure SQL Data Warehouse sink connector allows you to export data from Apache Kafka® topics to an Azure SQL Data Warehouse. The connector polls data from Kafka to write to the data warehouse based on the topics subscription. Auto-creation of tables and limited auto-evolution are also supported. This connector is compatible with Azure Synapse SQL pool
Prerequisites¶
The following are required to run the Kafka Connect Azure SQL Data Warehouse Sink Connector:
- Confluent Platform 4.0.0 or above, or Kafka 1.0.0 or above
- Java 1.8
- At minimum,
INSERTpermission is required for this connector. See Permissions: GRANT, DENY, REVOKE (Azure SQL Data Warehouse, Parallel Data Warehouse).
- If
auto.create=true,
CREATE TABLEand
CREATE SCHEMApermissions are required.
- If
auto.evolve=true,
ALTER ANY SCHEMApermissions are required.
Limitations¶
- This connector can only insert data into an Azure SQL Data Warehouse. Azure SQL Data Warehouse does not support primary keys, and because updates, upserts, and deletes are all performed on the primary keys, these queries are not supported for this connector.
-...
Important
For backwards-compatible table schema evolution, new fields in record schemas must be optional or have a default value.
Install the Azure SQL Data Warehouse-sql-dw:latest
You can install a specific version by replacing
latest with a version number. For example:
confluent-hub install confluentinc/kafka-connect-azure-sql-dw SQL Data Warehouse Sink Connector Configuration Properties.
Quick Start¶
In this quick start, the Azure SQL Data Warehouse sink connector is used to export data produced by the Avro console producer to an Azure SQL Data Warehouse instance.
- Azure Prerequisites
-
- Confluent Prerequisites
- Confluent Platform
- Confluent CLI (requires separate installation)
- Azure SQL Data Warehouse Sink Connector
Note
Though this quick start requires the Azure CLI for creating the resources and the mssql-cli for querying the data from the resources, both of these can also be managed through the Azure Portal: see Create and query an Azure SQL Data Warehouse.
Create an Azure SQL Data Warehouse instance¶arguments.
az sql server create \ --name <your-sql-server-name> \ --resource-group quickstartResourceGroup \ --location eastus2 \ --admin-user <your-username> \ --admin-password <your-password>
Enable a server-level firewall rule.
Pass your IP address for the
start-ip-addressand
end-ip-addressargument to enable connectivity to the server.
az sql server firewall-rule create \ --name quickstartFirewallRule \ --resource-group quickstartResourceGroup \ --server <your-sql-server-name> \ --start-ip-address <your-ip-address> --end-ip-address <your-ip-address>
Create a SQL Data Warehouse instance.
az sql dw create \ --name quickstartDataWarehouse \ --resource-group quickstartResourceGroup \ --server <your-sql-server-name>
This can take a couple of minutes.
Load the Connector¶
Create an
azure-sql-dw-quickstart.propertiesfile and add the following properties.
Make sure to substitute your SQL server name, username, and password in the
azure.sql.dw.url,
azure.sql.dw.userand
azure.sql.dw.passwordarguments respectively.
Start the Azure SQL Data Warehouse sink connector by loading the connector’s configuration with the following command:
confluent local load azure-sql-dw -- -d azure-sql-dw-quickstart.properties
Confirm that the connector is in a
RUNNINGstate.
confluent local status azure-sql-dw Azure SQL Data Warehouse for Data¶_productstable to see its contents.
select * from kafka_products;
Your output should resemble the one below (the rows and columns may possibly be in a different order):
+------------+---------+-----------+ | quantity | price | name | |------------+---------+-----------| | 10 | 0.99 | tape | | 3 | 2.75 | scissors | | 5 | 1.99 | notebooks | +------------+---------+-----------+ | https://docs.confluent.io/5.5.0/connect/kafka-connect-azure-sql-dw/index.html | 2021-02-25T07:58:23 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.confluent.io |
The Ccino App is a web-based application that is compatible with any mobile device with a browser and is the one of the most important parts of Ccino's services.
To get started with the Ccino App you will need to register a new account:
Visit the Ccino Financing website attached below
Select "I'm a Contractor"
Select "Sign Up"
You will then be guided to the sign up page for the Ccino App.
Ccino: Financing Software For Small Businesses
In order to create a new Ccino business account, you will need to enter some of your info including your business:
Your full name
Your business name
Your business title
Your email and phone number
Once you completed the info you will enter directly into the Ccino App. | https://docs.getccino.com/getting-started/register-for-the-ccino-app | 2021-02-25T08:05:33 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.getccino.com |
DEPRECATION WARNING
This documentation is not using the current rendering mechanism and will be deleted by December 31st, 2020. The extension maintainer should switch to the new system. Details on how to use the rendering mechanism can be found here.
Ready-for-use Extensions¶
If you like to optimize the following extensions for search engines
- Organiser - responsive TYPO3 for the Lobby and the Orgainsers
- Quick Shop - responsive e-commerce with TYPO3
you have to include the static template only:
- SEO [1] (seo_dynamic_tag)
You have to configure only the properties
- Condition: Single view begin
- Database: Pid list
If you have any question, please refer to the Manual of the Organiser and the Quick Shop. | https://docs.typo3.org/typo3cms/extensions/seo_dynamic_tag/4.0.2/Integrators/Extensions/Index.html | 2021-02-25T07:34:09 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.typo3.org |
MIRACL has been under continuous development since 1988. It is currently licensed to hundreds of leading companies in the United States, Brazil, Britain, Germany, France, Switzerland, South Africa and Australia. Its cryptographic runtimes can be found in chips, operating systems and software applications in industries ranging from defense and intelligence to financial services and software as a service companies.
MIRACL licenses are offered according to a dual licensing scheme. The FOSS license applicable to cryptographic implementations is the Affero GPL (AGPL) License, version 3. MIRACL is offered as a standard commercial license with any subscription to the CertiVox Key Management Service. Companies that are not comfortable with AGPL and are using MIRACL without a subscription to the CertiVox Key Management Service can acquire a commercial license for use of the software from by contacting [email protected].
From a purely theoretical viewpoint, there is no incompatibility between AGPL and commercial applications. One may be running a commercial service while making the source code open and available to third-parties. Of course, things are likely different in practice. AGPL employs so-called 'strong copyleft' – for example: the demand that all the software linked against free software (free in GNU/FSF sense) is also free software and freely available. GNU Public License is the most famous of such 'strong copyleft' FOSS licenses. The GPL copyleft clause triggers when an application is distributed outside of company boundaries. The GPL license was created at a time when the web did not exist, let alone the possibility to use applications remotely through a web browser. Because of this, companies could deploy GPL code commercially on a web server without betraying the letter, but arguably betraying the spirit of the GPL. This is called the ASP loophole. This is the context in which Affero was designed. The basic idea is that making AGPL software available through a web server constitutes distribution, and this is enough to trigger the strong copyleft provisions that many are already familiar with because of GPL. In other words, all of the software that links to the AGPL library must also be released with a compatible Free or Open-Source license. Commercial companies or applications developed that are deployed in the financial services, national defense or intelligence industries are unlikely to want to have to disclose and distribute the source code with which they use MIRACL. If that is the case, closed source licenses are available that do not require the company, application or organization to disclose the source code with which it uses MIRACL. This is called selling/buying a GPL exception in GNU parlance (others simply call this 'dual licensing').
Yes, you will. Exactly like regular GPL, linking your code to GPL code creates derivative work (in the copyright sense of the term) and this is enough to trigger the 'copyleft' provisions. FSF is adamant on this interpretation and so is CertiVox. Q: What's the price of a commercial license and / or support from CertiVox?
CertiVox issues a commercial license for MIRACL when a subscription to the CertiVox Key Management Service is issued. Additionally, CertiVox will offer enhanced developer support for MIRACL, which will optionally include cryptographic design consulting. CertiVox will publish publicly available pricing for both in the next few weeks. If you need a commercial license and / or support immediately, please contact [email protected]. | https://libraries.docs.miracl.com/miracl-explained/licensing | 2018-05-20T11:49:41 | CC-MAIN-2018-22 | 1526794863410.22 | [] | libraries.docs.miracl.com |
.
See Also
Reference
System Stored Procedures (Transact-SQL)
Other Resources
Updatable Subscriptions for Transactional Replication
Queued Updating Conflict Detection and Resolution
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms174351(v=sql.90) | 2018-05-20T12:53:20 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
Parallel (MPI) Applications¶
What is an MPI?¶
High-performance compute clusters are often used to run Parallel Applications - i.e. software applications which simultaneously use resources from two or more computers at the same time. This can allow software programs to run bigger jobs, run them faster, and to work with larger data-sets than can be processed on a single computer. Parallel programming is hard - developing software to run on a single computer is difficult enough, but extending applications to run across multiple computers at the same time means doing many more internal checks while your program is running to make sure your software runs correctly, and to deal with any errors that occur.
A number of standards for parallel programming have been produced to assist software developers in this task. These published standards are often accompanied by an implementation of a software application programming interface (API) which presents a number of standard methods for parallel communication to software developers. By writing their software to be compatible with a published API, software developers can save time by relying on the API to deal with the parallel communications themselves (e.g. transmitting messages, dealing with errors and timeouts, interfacing with different hardware, etc.). The APIs for parallel processing are commonly known as message-passing interfaces (MPIs).
What MPIs are available?¶
A number of different MPIs are under active development; for Linux clusters, there are a number of common versions available, including:
- OpenMPI; a modern, open-source implementation supporting a wide array of hardware, Linux distributions and applications
- MPICH; an older open-source implementation largely superceded by OpenMPI, but still available for compatibility reasons
- MVAPICH; an open-source MPI supporting verbs transport across Infiniband fabrics
- Intel MPI; a commercial MPI optimised for Intel CPUs and interconnects
- IBM Platform MPI, HPMPI; commercial MPIs optimised for particular commercial applications and interconnects
The choice of which MPI to use for any particular use-case can depend on the application you want to run, the hardware you have available to run it on, if you have a license for a commercial application, and many other factors. Discussion and comparison of the available MPIs is outside the scope of this documentation - however, it should be possible to install and run any application that supports your underlying platform type and Linux distribution on an Alces Flight Compute cluster.
How do I use an MPI?¶
Most MPIs are distributed as a collection of:
- Software libraries that your application is compiled against
- Utilities to launch and manage an MPI session
- Documentation and integrations with application and scheduler software
You can use your Alces Flight Compute cluster to install the MPI you want to use, then compile and install the software application to be run on the cluster. Alternatively, users can install their own MPI and application software manually into the
/opt/apps/ directory of the cluster.
To run a parallel application, users typically start a new MPI session with parameters which instruct the MPI which nodes to include in the job, and which application to run. Each MPI requires parameters to be specified in the correct syntax - most also require a list of the compute nodes that will be participating in the job to be provided when a new session is started.
Running an MPI job via the cluster scheduler¶
Most users utilise the cluster job-scheduler to orchestrate launching of parallel jobs. The job-scheduler is responsible for identifying which nodes will be participating in the parallel job, and passing that information on to the MPI. When an MPI is installed on your Alces Flight Compute cluster using the
alces gridware command, an integration for your chosen job-scheduler is automatically installed and configured at the same time. Please see the next section of this documentation for more information on launching a parallel job via your cluster job-scheduler.
Running an MPI job manually¶
In some environments, users may wish to manually run MPI jobs across the compute nodes in their cluster without using the job-scheduler. This can be useful when writing and debugging parallel applications, or when running parallel applications which launch directly on compute nodes without requiring a scheduler. A number of commercial applications may fall into this category, including Ansys Workbench, Ansys Fluent, Mathworks Matlab and parallelised R-jobs.
Note
Before running applications manually on compute nodes, verify that auto-scaling of your cluster is not enabled. Auto-scaling typically uses job-scheduler information to control how many nodes are available in your cluster, and should be disabled if running applications manually. Use the command
alces configure autoscaling disable to turn off autoscaling before attempting to run jobs manually on your cluster compute nodes.
The example below demonstrates how to manually run the Intel Message-passing Benchmark application through OpenMPI on an Alces Flight Compute cluster. The exact syntax for your application and MPI may vary, but users should be able to follow the concepts discussed below to run their own software. You will need at least two compute nodes available to run the following example.
- Install the application and MPI you want to run. The benchmarks software depot includes both OpenMPI and IMB applications, so install and enable that by running these commands:
-
alces gridware depot install benchmark
-
alces gridware depot enable benchmark
- Create a list of compute nodes to run the job on. The following command will use your genders group to create a hostfile with one hostname per line (the pdsh module will need to be loaded if it hasn’t already with
module load services/pdsh):
-
cd ; nodeattr -n nodes > mynodesfile
- Load the module file for the IMB application; this will also load the OpenMPI module file as a dependency. Add the module file to load automatically at login time:
-
module initadd apps/imb
-
module load apps/imb
- Start the parallel application in a new mpirun session, with the following parameters:
-
-np 2- use two CPU cores in total
-
-npernode 1- place a maximum of one MPI thread on each node
-
-hostfile mynodesfile- use the list of compute nodes defined in the file
mynodesfilefor the MPI job (as generated in step 2 above)
-
$IMBBIN/IMB-MPI1- run the binary IMB-MPI1, located in the
$IMBBINdirectory configured by the
apps/imbmodule
-
PingPong- a parameter to the IMB-MPI1 application, this option instructs it to measure the network bandwidth and latency between nodes
[alces@login1(scooby) ~]$ mpirun -np 2 -npernode 1 -hostfile mynodesfile $IMBBIN/IMB-MPI1 PingPong benchmarks to run PingPong #------------------------------------------------------------ # Intel (R) MPI Benchmarks 4.0, MPI-1 part #------------------------------------------------------------ # Date : Sat May 14 15:37:49 2016 # Machine : x86_64 # System : Linux # Release : 3.10.0-327.18.2.el7.x86_64 # Version : #1 SMP Thu May 12 11:03:55 UTC 2016 # MPI Version : 3.0 # MPI Thread Environment: # Calling sequence was: # /opt/gridware/depots/2fe5b915/el7/pkg/apps/imb/4.0/gcc-4.8.5+openmpi-1.8.5/bin//IMB-MPI1 PingPong # Minimum message length in bytes: 0 # Maximum message length in bytes: 4194304 # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions : MPI_FLOAT # MPI_Op : MPI_SUM # # List of Benchmarks to run: # PingPong #--------------------------------------------------- # Benchmarking PingPong # #processes = 2 #--------------------------------------------------- #bytes #repetitions t[usec] Mbytes/sec 0 1000 3.37 0.00 1 1000 3.22 0.30 2 1000 3.89 0.49 4 1000 3.96 0.96 8 1000 3.99 1.91 16 1000 3.87 3.95 32 1000 3.90 7.83 64 1000 3.91 15.59 128 1000 4.62 26.44 256 1000 4.86 50.19 512 1000 5.89 82.95 1024 1000 6.08 160.58 2048 1000 6.98 279.72 4096 1000 10.35 377.26 8192 1000 17.43 448.32 16384 1000 31.13 501.90 32768 1000 56.90 549.22 65536 640 62.37 1002.09 131072 320 127.54 980.10 262144 160 230.23 1085.88 524288 80 413.88 1208.08 1048576 40 824.77 1212.45 2097152 20 1616.90 1236.93 4194304 10 3211.40 1245.56 # All processes entering MPI_Finalize | http://docs.alces-flight.com/en/stable/mpiapps/mpiapps.html | 2018-05-20T11:49:00 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.alces-flight.com |
< Main Index Usage Plugins >
We've added a usage plugin that will
allow us to get anonymous usage statistics for JBoss Tools
usage. More details about how it works can be found here.
The following screenshot shows what details we
get for screen resolutions after a few days of nightly
build usage.
Related
Jira
We have added a "Search and install
runtimes" feature to JBoss Tools similar to what
previously only were available from JBoss Developer
Studio installer.
The feature is available
under JBoss Tools Preferences and allows you add any
time to scan for additional runtimes and servers
instead of configuring each individual runtime
manually.
The "Search" action recognizes JBoss
AS server, JBoss EAP/EPP/SOA, Seam, a standalone Seam,
JBPM and Drools runtime. There is a filter to export
JBoss Runtime preferences that can be used within the
standard Eclipse Export wizard. The standard Eclipse
Export wizard doesn't export any JBPM and WTP server
configurations. The Export/Import action within the
Runtime preference page will export/import those
configurations too.
Related Jira | http://docs.jboss.org/tools/whatsnew/core/core-news-3.2.0.M2.html | 2018-05-20T12:11:18 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.jboss.org |
Checklist: Configuring and Distributing Trust Anchors
Updated: October 7, 2009
Applies To: Windows Server 2008 R2
Tip
This topic applies to DNSSEC in Windows Server 2008 R2. DNSSEC support is greatly enhanced in Windows Server 2012. For more information, see DNSSEC in Windows Server 2012.
This checklist provides links to important procedures you can use to configure and distribute trust anchors.
Note
When a reference link takes you to a conceptual topic or to a subordinate checklist, return to this topic after you review the conceptual topic or you complete the tasks in the subordinate checklist so that you can proceed with the remaining tasks in this checklist.
Checklist: Configuring and Distributing Trust Anchors
See Also
Concepts
Checklist: Implementing DNSSEC
Appendix C: DNSSEC PowerShell Scripts | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee649143(v=ws.10) | 2018-05-20T12:50:51 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
What's new in Microsoft Deployment Toolkit (MDT) 2013 Guide
Microsoft Deployment Toolkit (MDT) 2013 is the next version of the Microsoft Solution Accelerator for operating system and application deployment. The purpose of this guide is to explain the changes in MDT 2013 from MDT 2012 Update 1.
This guide specifically discusses the MDT 2013 release and assumes familiarity with previous MDT version concepts, features, and capabilities..
What’s New in This Release
MDT includes several improvements that adds new features and improves your user experience with MDT. In addition, this release of MDT includes many other small enhancements and bug fixes that are not listed below.
Support for Upgrading from Previous Versions of MDT
MDT supports upgrading from MDT 2012 Update 1.
Note
Create a backup of the existing MDT infrastructure before attempting an upgrade.).
Support for System Center 2012 R2 Configuration Manager
ZTI and UDI deployment methods in MDT support integration with System Center 2012 R2 Configuration Manager, including new capabilities such as multiple Network Access Accounts.
Improved Deployment to x86-based Computers that Use the UEFI Standard
Unified Extensible Firmware Interface (UEFI) is a specification that defines a software interface between an operating system and platform firmware. UEFI is a more secure replacement for the older basic input/output system (BIOS) firmware interface present in some personal computers, which is vulnerable to malware that performs attacks during the boot or power on self-test processes..
For more information about UEFI support in MDT, see the section, "Deploy to Computers with UEFI", in the MDT document Using the Microsoft Deployment Toolkit.
What’s Been Removed from This Release?
This release of MDT does not include the following features that existed in previous versions of MDT:
Deployment of Windows 8.1 Preview
Deployment of Windows Server 2012 R2 Preview
ZTI with
System Center Configuration Manager 2007
System Center 2012 Configuration Manager
System Center 2012 Configuration Manager SP1
System Center 2012 R2 Configuration Manager Preview
Use of the Windows ADK for Windows 8
Use of the Windows AIK for Windows 7
Deployment of Windows XP or Windows Server 2003
Deployment of Windows Vista or Windows Server 2008
Out-of-box Group Policy objects (GPOs) from Security Compliance Manager (SCM). Tools and GPOs must be installed with SCM before they can be used in MDT.
Operating System Support in This Release
The following table lists the operating system support that LTI and ZTI deployments provide in this release of MDT.
● = supported
Windows ADK Support
The following table lists the Windows ADK and Windows AIK support that LTI, ZTI, and UDI deployments provide in MDT.
● = supported
Microsoft .NET Framework and Windows PowerShell Support
The following table lists the versions of the Microsoft .NET Framework and Windows PowerShell that are supported by MDT by Windows operating system version. | https://docs.microsoft.com/en-us/sccm/mdt/whats-new-in-mdt | 2018-05-20T12:06:38 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.microsoft.com |
Tracking the license usage for products helps you to estimate the overall license requirements for your environment and to keep it correctly licensed. You can filter the license usage data by time period.
Prerequisites
To view and generate license use reports for the products in vSphere, use data.
- If you select a custom time period, select the start and end dates, and click Recalculate.
Results
The Report Summary shows the license usage for each product as a percentage of the license capacity for the product over the selected period. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vcenterhost.doc/GUID-2E9331C8-69D6-4063-922C-57C8278C3244.html | 2018-05-20T12:03:59 | CC-MAIN-2018-22 | 1526794863410.22 | [] | docs.vmware.com |
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
14.05.14 - Apigee Edge cloud release notes
On Tuesday, May.
-.
Help or comments?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://docs.apigee.com/release-notes/content/140514-apigee-edge-cloud-release-notes | 2017-09-19T18:56:50 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.apigee.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Namespace: Amazon.ElasticLoadBalancing
Assembly: AWSSDK.dll
Version: (assembly version)
The IAsyncResult returned by the call to BeginCreateAppCookieStickinessPolicy.
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MELBELBClientCreateAppCookieStickinessPolicyIAsyncResultNET35.html | 2017-09-19T19:11:11 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.aws.amazon.com |
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Reports: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://ja.docs.apigee.com/api/reports | 2017-09-19T19:01:03 | CC-MAIN-2017-39 | 1505818685993.12 | [] | ja.docs.apigee.com |
class OESzmapResults
This class represents OESzmapResults, a container for the results of OESzmap calculations with the function OECalcSzmapResults.
OESzmapResults() OESzmapResults(const OESzmapResults &rhs)
Default and copy constructors. Typically an empty, default OESzmapResults is passed to OECalcSzmapResults where it is filled with the calculated results.
OESzmapResults rslt = new OESzmapResults();
operator bool() const
True indicates that OECalcSzmapResults was called with a valid OESzmapEngine when this object was created.
bool GetComponent(double *compArray, unsigned int componentType) const OESystem::OEIterBase<double> *GetComponent(unsigned int componentType) const
Returns the calculated values of a particular OEComponent, specified by componentType, for the 3D point provided to OECalcSzmapResults. Component values for each probe orientation are the low-level data used to compose OEEnsemble values (see OESzmapResults.GetEnsembleValue). The number of values in the output compArray or the iterator is OESzmapResults.NumOrientations and values are returned in the same order as the probe orientations.
Console.WriteLine("interaction:"); foreach (double coulomb in rslt.GetComponent(OEComponent.Interaction)) { Console.WriteLine(coulomb); }
Returns false or an empty iterator if the OEComponent type is not recognized.
bool GetCoords(float *xyz) const bool GetCoords(double *xyz) const
Returns the coordinates of the 3D point where calculations were performed by OECalcSzmapResults to create this object.
The point is passed back in a float or double array of size three with coordinates in {x,y,z} order.
float[] point = new float[3]; rslt.GetCoords(point);
Returns false if this OESzmapResults is uninitialized.
double GetEnsembleValue(unsigned int ensembleType) const
Returns the calculated value of a particular OEEnsemble, specified by ensembleType, for the 3D point provided to OECalcSzmapResults. Ensemble values are the results of calculations over all orientations of the probe. In general, these are built by Boltzmann summation of various combinations of OEComponent values (see OESzmapResults.GetComponent).
double nddg = rslt.GetEnsembleValue(OEEnsemble.NeutralDiffDeltaG);
Returns 0.0 if the OEEnsemble type is not recognized or this OESzmapResults is uninitialized.
bool GetProbabilities(double *probArray) const OESystem::OEIterBase<double> *GetProbabilities() const
Returns the statistical mechanical probabilities for each probe orientation at the 3D point provided to OECalcSzmapResults. Probability values can be used to Boltzmann weight OEComponent values and are used to select which probe orientations are returned by OESzmapResults.PlaceProbeSet. The number of values in the output probArray or the iterator is OESzmapResults.NumOrientations and values are returned in the same order as the probe orientations.
double[] prob = new double[rslt.NumOrientations()]; rslt.GetProbabilities(prob); Console.WriteLine("greatest prob = {0:F3}" + prob[order[0]]);
Returns false or an empty iterator if this OESzmapResults is uninitialized.
bool GetProbabilityOrder(unsigned int *orderArray) const OESystem::OEIterBase<unsigned int> *GetProbabilityOrder() const
Returns an array or iterator of indices referring to probe orientations or associated OEComponent and probability values, sorted in the order of increasing probability (see OESzmapResults.GetProbabilities). Hence, the first (orderArray[0]) is the index of the orientation with the greatest probability (probArray[orderArray[0]]). The number of values in the output orderArray or the iterator is OESzmapResults.NumOrientations.
uint[] order = new uint[rslt.NumOrientations()]; rslt.GetProbabilityOrder(order); Console.WriteLine("conf with greatest prob = " + order[0]);
Returns false or an empty iterator if this OESzmapResults is uninitialized.
unsigned int NumOrientations() const
Returns the number of orientations for the probe molecule used in the calculation. Equals the number of values retuned by calls to OESzmapResults.GetComponent, OESzmapResults.GetProbabilities, or OESzmapResults.GetProbabilityOrder.
OEChem::OEAtomBase *PlaceNewAtom(OEChem::OEMolBase &mol, unsigned int element=OEChem::OEElemNo::O) const
Adds a new atom to the input molecule with atomic coordinates of the 3D point provided to OECalcSzmapResults when the object was created.
OEGraphMol amol = new OEGraphMol(); OEAtomBase patom = rslt.PlaceNewAtom(amol); Console.WriteLine("vdw = " + patom.GetStringData("vdw"));
The new atom has been annotated with ensemble values for this point as generic data. String versions of the data have been formatted to two decimal places for convenient display.
The atom type can be controlled through the optional element parameter, which defaults to oxygen.
Returns a pointer to the newly created atom to facilitate further customization.
bool PlaceProbeMol(OEChem::OEMolBase &outputMol, unsigned int orientation=0u, bool annotate=true) const
Modifies the outputMol to be a copy of one probe orientation, placed at the 3D point provided to OECalcSzmapResults when the object was created. The probe orientation can be controlled through the optional orientation parameter (the default value of 0 refers to the first probe conformation).
OEGraphMol pmol = new OEGraphMol(); rslt.PlaceProbeMol(pmol, order[0]);
If the optional parameter annotate is true (the default), the molecule will be annotated with OEComponent data for that orientation. See OESzmapResults.PlaceProbeSet for more information on this annotation.
Returns false if this OESzmapResults is uninitialized.
double PlaceProbeSet(OEChem::OEMCMolBase &probeSet, double probCutoff, bool clear=true) const double PlaceProbeSet(OEChem::OEMCMolBase &probeSet, unsigned int maxConfs=0u, bool clear=true) const
Modifies the multi-conformer probeSet to contain one or more orientations of the probe, each placed at the 3D point provided to OECalcSzmapResults when the object was created. The probe set is returned in probability order (see OESzmapResults.GetProbabilityOrder).
There are three ways to select which probe orientations are placed in the probeSet:
If just the probeSet parameter is provided, without other options, all probe orientations will be returned.
OEMol mcmol = new OEMol(); rslt.PlaceProbeSet(mcmol);
If the real number parameter probCutoff is used, probe orientations will be added until the total cumulative probability is at least that amount. Cumulative probabilities are > 0.0 and <= 1.0.
double probCutoff = 0.5; rslt.PlaceProbeSet(mcmol, probCutoff); Console.WriteLine("nconf to yield 50pct = " + mcmol.NumConfs());
Finally, if the integer parameter maxConfs is used, no more than number of probe orientations will be returned. A value of 0 is a special signal to return all orientations.
bool clear = false; double cumulativeProb = rslt.PlaceProbeSet(mcmol, 10, clear); Console.WriteLine("best 10 cumulative prob = {0:F3}", cumulativeProb);
If the optional parameter clear is set to false, any previous orientations in the probeSet will not be cleared, allowing conformers for multiple 3D points as well as multiple orientations to be stored in the probeSet. By default, previous orientations are cleared away before the new orientations are added.
Each orientation has been annotated with OEComponent data for that orientation. In addition, the total interaction + psolv + wsolv + vdw energy of each is recorded as the energy of the conformation (accessible using the GetEnergy() method of the conformer). String versions of the data have been formatted to two decimal places for convenient display. String data is also stored as SD data, so they are included in VIDA’s spreadsheet and can be saved to .sd files.
Returns the cumulative probability of all the orientations returned, or 0.0 if this OESzmapResults is uninitialized. | https://docs.eyesopen.com/toolkits/csharp/szmaptk/OESzmapClasses/OESzmapResults.html | 2017-09-19T18:54:12 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.eyesopen.com |
STL_QUERY_METRICS
Contains metrics information, such as the number of rows processed, CPU usage, input/output, and disk use, for queries that have completed running in user-defined query queues (service classes). To view metrics for active queries that are currently running, see the STV_QUERY_METRICS system table.
Query metrics are sampled at one second intervals. As a result, different runs of the same query might return slightly different times. Also, query segments that run in less than one second might not be recorded.
STL_QUERY_METRICS tracks and aggregates metrics at the query, segment, and step level.
For information about query segments and steps, see Query Planning And Execution Workflow. Many metrics (such as
max_rows,
cpu_time, and so on) are summed across node slices. For more
information about node slices, see Data Warehouse System
Architecture.
To determine the level at which the row reports metrics, examine the
segment and
step_type columns.
If both
segmentand
step_typeare
-1, then the row reports metrics at the query level.
If
segmentis not
-1and
step_typeis
-1, then the row reports metrics at the segment level.
If both
segmentand
step_typeare not
-1, then the row reports metrics at the step level.
The SVL_QUERY_METRICS view and the SVL_QUERY_METRICS_SUMMARY view aggregate the data in this table and present the information in a more accessible form.
This table is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see Visibility of Data in System Tables and Views.
Table Rows
Sample Query
To find queries with high CPU time (more the 1,000 seconds), run the following query.
Copy
Select query, cpu_time / 1000000 as cpu_seconds from stl_query_metrics where segment = -1 and cpu_time > 1000000000 order by cpu_time; query | cpu_seconds ------+------------ 25775 | 9540
To find active queries with a nested loop join that returned more than one million rows, run the following query.
Copy
select query, rows from stl_query_metrics where step_type = 15 and rows > 1000000 order by rows; query | rows ------+----------- 25775 | 2621562702
To find active queries that have run for more than 60 seconds and have used less than 10 seconds of CPU time, run the following query.
Copy
select query, run_time/1000000 as run_time_seconds from stl_query_metrics where segment = -1 and run_time > 60000000 and cpu_time < 10000000; query | run_time_seconds ------+----------------- 25775 | 114 | http://docs.aws.amazon.com/redshift/latest/dg/r_STL_QUERY_METRICS.html | 2017-09-19T19:06:40 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.aws.amazon.com |
Users
You can create and manage database users using the Amazon Redshift SQL commands CREATE USER and ALTER USER, or you can configure your SQL client with custom Amazon Redshift JDBC or ODBC drivers that manage the process of creating database users and temporary passwords as part of the database logon process.
The drivers authenticate database users based on AWS Identity and Access Management (IAM) authentication. If you already manage user identities outside of AWS, you can use a SAML 2.0-compliant identity provider (IdP) to manage access to Amazon Redshift resources. You use an IAM role to configure your IdP and AWS to permit your federated users to generate temporary database credentials and log on to Amazon Redshift databases. For more information, see Using IAM Authentication to Generate Database User Credentials.:Copy) | http://docs.aws.amazon.com/redshift/latest/dg/r_Users.html | 2017-09-19T19:13:45 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.aws.amazon.com |
Database Replication and Clustering
Graphing Tungsten Replicator data is supported through Cacti extensions. These provide information gathering for the following data points:
Applied Latency
Sequence Number (Events applied)
Status (Online, Offline, Error, or Other)
To configure the Cacti services:
Download both files from
Place the PHP script into
/usr/share/cacti/scripts.
Modify the installed PHP file with the appropriate
$ssh_user and
$tungsten_home location from your
installation:
$ssh_user should match the
>user used during
installation.
>$tungsten_home is the
installation directory and the
tungsten
subdirectory. For example, if you have installed into
/opt/continuent, use
/opt/continuent/tungsten.
Add SSH arguments to specify the correct
id_rsa file if needed.
Ensure that the configured
>$ssh_user
has the correct SSH authorized keys to login to the server or servers
being monitored. The user must also have the correct permissions and
rights to write to the cache directory.
Test the script by running it by hand:
shell>
php -q /usr/share/cacti/scripts/get_replicator_stats.php --hostname
replserver
If you are using multiple replication services, add
--service
to the command.
servicename
Import the XML file as a Cacti template.
Add the desired graphs to your servers running Continuent Tungsten. If you are using multiple replications services, you'll need to specify the desired service to graph. A graph must be added for each individual replication service.
Once configured, graphs can be used to display the activity and availability. | http://docs.continuent.com/continuent-tungsten-4.0/ecosystem-cacti.html | 2017-09-19T18:56:23 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.continuent.com |
You" \
For certain APIs,:
HMAC signing can be implemented in any language. You can find example code for some popular languages on our GitHub page. If there’s no example for your language of choice, let us know so we can add it!
To sign a request, you’ll need a few things:
The signature is calculated using the following fields:
Using this algorithm:
+be a function that concatenates strings, and let
"\n"indicate a newline character;
HMACbe a function that calculates an HMAC from a string and a secret key, and let
HEXbe a function that returns the string hexadecimal representation of its input; then
HEX( HMAC( your_secret_key, timestamp + "\n" + method + "\n" + path + "\n" + query + "\n" + body ))
Notes:
bodyis only used for JSON bodies with the
application/jsontype. if the body is any other type or is missing, then an empty string should be used instead.
.
Besides the examples in github, here is Python code. | http://docs.svbplatform.com/authentication/ | 2017-09-19T19:00:09 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.svbplatform.com |
This topic applies to Dynamics 365 portals and later versions.
OpenID Connect external identity providers are services that conform to the Open ID Connect specifications. Integrating a provider involves locating the authority (or issuer) URL associated with the provider. A configuration URL can be determined from the authority which supplies metadata required during the authentication workflow. The provider settings are based on the properties of the OpenIdConnectAuthenticationOptions class.
Examples of authority URLs are:
- Google:
- Azure Active Directory:<Azure AD Application>/
Each OpenID Connect provider also involves registering an application (similar to that of an OAuth 2.0 provider) and obtaining a Client Id. The authority URL and the generated application Client Id are the settings required to enable external authentication between the portal and the identity provider.
Note
The Google OpenID Connect endpoint is currently not supported because the underlying libraries are still in the early stages of release with compatibility issues to address. The OAuth2 provider settings for portals endpoint can be used instead.
OpenID settings for Azure Active Directory
To get started, sign into the Azure Management Portal and create or select an existing directory. When a directory is available follow the instructions to add an application to the directory.
- Under the Applications menu of the directory, select Add.
- Choose Add an application my organization is developing.
- Specify a custom name for the application and choose the type web application and/or web API.
- For the Sign-On URL and the App ID URI, specify the URL of the portal for both fields
At this point, a new application is created. Navigate to the Configure section in the menu.
Under the single sign-on section, update the first Reply URL entry to include a path in the URL:. This corresponds to the RedirectUri site setting value
Under the properties section, locate the client ID field. This corresponds to the ClientId site setting value.
- In the footer menu, select View Endpoints and note the Federation Metadata Document field
The left portion of the URL is the Authority value and is in one of the following formats:
-
-
To get the service configuration URL, replace the FederationMetadata/2007-06/FederationMetadata.xml path tail with the path .well-known/openid-configuration. For instance,
This corresponds to the MetadataAddress site setting value.
Create site settings using OpenID
Apply portal site settings referencing the above application.
Note
A standard Azure AD configuration only uses the following settings (with example values):
- Authentication/OpenIdConnect/AzureAD/Authority -
- Authentication/OpenIdConnect/AzureAD/ClientId - fedcba98-7654-3210-fedc-ba9876543210
The Client ID and the authority URL do not contain the same value and should be retrieved separately.
- Authentication/OpenIdConnect/AzureAD/RedirectUri -
Multiple identity providers can be configured by substituting a label for the [provider] tag. Each unique label forms a group of settings related to an identity provider. Examples: AzureAD, MyIdP
See also
Configure Dynamics 365 portal authentication
Set authentication identity for a portal
OAuth2 provider settings for portals
WS-Federation provider settings for portals
SAML 2.0 provider settings for portals
Facebook App (Page Tab) authentication for portals | https://docs.microsoft.com/en-us/dynamics365/customer-engagement/portals/configure-openid-settings | 2017-09-19T18:56:05 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.microsoft.com |
The Flurry plugin lets you log interesting events in your application via the analytics library.
To use Flurry analytics, please register for an account.
To use this plugin, add two entries into the
plugins table of
build.settings. When added, the build server will integrate the plugin during the build phase.
settings = { plugins = { ["CoronaProvider.analytics.flurry"] = { publisherId = "com.coronalabs" }, ["plugin.google.play.services"] = { publisherId = "com.coronalabs" }, }, } | https://docs.coronalabs.com/daily/plugin/flurry/index.html | 2015-06-30T08:19:14 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.coronalabs.com |
Difference between revisions of "Help screens"
From Joomla! Documentation
Revision as of 18:47, 25 August 2012
All the categories under Category:Help screens are further broken down by Categories of the type of Help Screens or they can all be listed by choosing the appropriate Joomla! Version.
- Help Screens show detailed information on how to use the Joomla! administrator interface.
- Each Help Screen shows detailed screenshots of the Joomla! administrator interface.
- The Help Screen pages on this wiki are the ones actually used in the admimnistrator Help interface and available from choosing Help in the administrator interface.
Subcategories
This category has the following 46 subcategories, out of 46 total.
Pages in category ‘Help screens’
The following 4 pages are in this category, out of 4 total. | https://docs.joomla.org/index.php?title=Category:Help_screens&diff=prev&oldid=71860 | 2015-06-30T08:28:40 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Menus Menu Item User Password Reset | https://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_User_Password_Reset&oldid=81731 | 2015-06-30T09:50:05 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Difference between revisions of "Can you remove the "Powered by Joomla!" message?"
From Joomla! Documentation
Revision as of 20:53, 26 December 2008
.4] Yes. You may remove that message, which is in footer.php. You may however not remove copyright and license information from the source code. | https://docs.joomla.org/index.php?title=Can_you_remove_the_%22Powered_by_Joomla!%22_message%3F&diff=12321&oldid=11194 | 2015-06-30T08:53:29 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Information for "JStream" Basic information Display titleAPI16:JStream Default sort keyJStream Page length (in bytes)3,498 Page ID9288:47, 22 March 2010 Latest editorDoxiki (Talk | contribs) Date of latest edit17:47, 22 March 2010 Total number of edits1 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (2)Templates used on this page: SeeAlso:JStream (view source) Description:JStream (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=API16:JStream&action=info | 2015-06-30T09:26:27 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Difference between revisions of "Joomla brochure"
From Joomla! Documentation
Revision as of 08:45, 26 November 2011
This is a placeholder for the Joomla brochure project.
The objective of this project is to translate the brochure text into different languages so they can be used for brochures and marketing purposes.
Please use the English Joomla brochure as template for other languages, translate to your language.
The naming convention for these pages is Joomla brochure/<ISO language code>-<ISO COUNTRY CODE>, where the language code is lowercase and the country code is uppercase. Pages for languages that are not localized to a country should be named Joomla brochure/<ISO language code>.
Available language pages: "" has no sub pages. | https://docs.joomla.org/index.php?title=Joomla_brochure&diff=63199&oldid=63194 | 2015-06-30T08:59:38 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Getting Started with SilverStripe
Before you start developing your first web application, you'll need to install the latest version of SilverStripe onto a web server. The Getting Started section will show you what server requirements you will need to meet and how to download and install SilverStripe.
To check out the features that SilverStripe offers without installing it, read the Feature Overview and play with the interactive demo website.
Getting support
SilverStripe has an wide range of options for getting support. The forums and IRC channel are the best places to talk and discuss questions and problems with the community. There are also several other websites with SilverStripe documentation to make use of.
- The API Documentation contains technical reference and class information.
- The User Help website contains documentation related to working within the CMS.
New features, API changes and the development roadmap for the product are discussed on the core mailinglist along with UserVoice.
Building your first SilverStripe Web application
Once you have completed the Getting Started guide and have got SilverStripe installed and running, the following Tutorials will lead through the basics and core concepts of SilverStripe.
Make sure you know the basic concepts of PHP5 before attempting to follow the tutorials. If you have not programmed with PHP5 be sure to read the Introduction to PHP5 (zend.com).
SilverStripe Concepts
The Developer Gudes contain more detailed documentation on certain SilverStripe topics, 'how to' examples and reference documentation..
Contributing to SilverStripe
The SilverStripe Framework, Content Management System and related websites are open source and welcome community contributions.isation
Implement SilverStripe's internationalisation system in your own modules.
Core committers
Code of conduct | http://docs.silverstripe.org/en/3.1/ | 2015-06-30T08:22:34 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.silverstripe.org |
Tools Needed
* Hexagon 2
* Wine bottle model
* UV Mapper
* Art Program such as Paint Shop Pro or Photoshop
* Poser4 or above
Support Files
With this tutorial you will learn how to unfold a model that has thickness applied. Also covered in this tutorial are opening and saving the texture map with UV Mapper, brief editing in an art program such as Paint Shop Pro or Photoshop and finally testing the texture map on the model in Poser.
Open zip file of the wine bottle model. You may keep the wine bottle model as a gift for taking this tutorial. It is free for both commercial and noncommercial use.
In order to be able to texture our wine bottle we need to cut it up using the Select Edges tool in Hexagon.
Set your select tool on Select Edges(F3). Click on the Universal Manipulator(U).
My object view is set to smoothed solid.
These are your panning, rotation and zoom tools.
When you apply a thickness to an object the inside of the object will need to be cut as well as the outside. We will be cutting our bottle in half basically.
Let's begin at the bottom of the bottle. Hold the Shift key down so that you can make multiple selections. If you make a mistake and select the wrong edge just click again on the mistaken edge to deselect it using the Ctrl key.
Zoom in close as you near the top of the bottle so you can see all of the edges.
When you get to the top of the bottle rotate as needed and do the same thing on the opposite side as shown;
Keep the Shift key down and select the uppermost edge around the top of the bottle.
When that is done rotate your view so you can see the bottom of the bottle and repeat.
Now comes the tricky part and it may take a bit of practice only because it is easy to select the wrong edges at the top of the bottle. The best way to avoid this is to zoom in close. We will begin with the bottom of the bottle's inner circle. Rotate and zoom in as needed.
Repeat for the top inner edges of the bottle's cork.
Zoom in close to the bottom of the bottle and select the inside edge of the first seam. Begin from the inner selected edge of the circle and move up along the seams to the top of the bottle as shown in the images. Zoom and rotate as needed to get a better view.
Here is a close-up of how the top and bottom selected edges should look.
Entire Edge Selection;
Now comes the fun part.
2.Click on the plus sign to add your edge selections to seams.
3.Click on the little head and your map should appear! See image;
Do not hit the Validate button just yet. We want to make sure our map isn't tangled. This will happen if there was a mistake made when selecting the edges of our bottle.
Click on the area with your unfolded map and then click on the single view pane.
You can see our map is off the grid.
We need to size it down so it fits on the grid. Before we do that though we need to make sure our map isn't angled. Just use your pan tools and zoom in and pan to the right to check your pieces. If there are no overlapping areas and no tangles you may validate your map. Once that is done it can be resized.
Here is what my map looks like zoomed out. Now that it is validated I can use the selection tool to select the entire map and size it down. You can either use rectangle mode or lasso to select your map.
Right click and drag to select the entire map and then resize using the Universal Manipulator's center yellow square. Click to deselect.
Now you need to zoom into your map and select each of the pieces individually and place them onto the grid. I think it's easier to use the lasso tool for this step. Do not be afraid to resize or rotate the pieces as needed.
You have mapped your bottle!
For this step you will need to use a UV Mapper. You can pick it up for free at.
Open up UV Mapper. Click on File/Load Model and navigate to your folder where you saved your mapped bottle and open it. Your map should appear without a problem. To save it as a texture map go to File/Save Texture Map and select the size you want. I chose 1200 x 1200 bmp and left the boxes unchecked.
Don't worry if the map looks a little distorted it will look fine in your art program.
We need to check to see if our map will work so open up your map in Paint Shop Pro or Photoshop or other art program. Save your map with a new name for editing.
Use the magic wand selection tool to select the white area of the map and then click on select/inverse. Create a new layer and leave the selection activated.
We are not going for anything really fancy here as this is just to test the map to see if it works. Just use the flood-fill tool to fill in all of the pieces with color. Merge/flatten layers.
Save the texture map and exit your art program.
Open up Poser 4 or above and then import the wavefront obj of your bottle. Leave settings at default. Drop object to floor if needed.
Click on Render/Materials and then click on load texture map and navigate to your model/map. Click on OK.
Render your image to see if your map worked. If all went well then your bottle of wine will look similar to mine!
Congratulations! You have finished the Unfold tutorial.:o)
Unfold tutorial by Debbie Overstreet 'Samanthie' | http://docs.daz3d.com/doku.php/artzone/pub/tutorials/hexagon/hexagon-misc14 | 2015-06-30T08:15:15 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.daz3d.com |
Changes related to "Absolute Basics of How a Component Functions"
← Absolute Basics of How a Component Functions
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&hideminor=1&target=Absolute_Basics_of_How_a_Component_Functions | 2015-06-30T08:59:34 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
============ Installation ============ Installing django-crispy-forms ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Install', ) Template packs ~~~~~~~~~~~~~~ Since version 1.1.0 of django-crispy-forms. If your form CSS framework is not supported, you can create a template pack for it and submit a pull request in github. You can easily switch between both using ``CRISPY_TEMPLATE_PACK`` setting variable, setting it to ``bootstrap`` or ``uni_form``. .. _`Uni-form`: .. _`Bootstrap`: Setting media files ~~~~~~~~~~~~~~~~~~~ You will need to include the proper media files, depending on what CSS framework you are using. This might involve one or more CSS and JS files. Read CSS framework's docs for help on how to set it up. Uni-form static files ~~~~~~~~~~~~~~~~~~~~~ `Uni-form`_ files are composed of two CSS files and one JS library based on jQuery. Uni-form's JS library provides some nice interactions, but you will need to link a copy of jQuery. Preferably you should use a `version hosted`_ on Google's CDN servers since the user's browser might already have it cached. .. _`: | http://django-crispy-forms.readthedocs.org/en/1.1.4/_sources/install.txt | 2015-06-30T08:19:09 | CC-MAIN-2015-27 | 1435375091925.14 | [] | django-crispy-forms.readthedocs.org |
Shorting
One way to make up for a loss-making position is our Short and Trailing Stop-Short feature. It's an exciting feature for traders who are looking for an alternative to the traditional stop-loss.
Shorting is the practice of making a profit while the price of an asset goes down. Our way of shorting is a little bit different than "traditional" shorting, in that our shorting is more like a buyback function. When you expect a position to make a more significant loss, you initiate a Short, and your bot will sell the position. When you think the price has reached its bottom, you consolidate your short and directly buy back the position.
Let's say you bought 1 BTC for 8,000 USD, and you get a hunch that BTC is going to drop in value. You can then go to your Cryptohopper dashboard and click "short" on your BTC position. Cryptohopper will then sell your BTC, reserve these funds, and track the position in your shorting tab. You can then set up a buyback price, indicating at what price you want to buy back the asset. If/when the asset hits your chosen buyback price, your hopper will automatically repurchase it.
Shorting can be done automatically and manually, so it’s an exciting feature for both manual and automatic traders.
Manual Shorting
Manual shorting can be initiated by directly selecting a position and selecting "Short positions" from the bulk actions menu. This will move the position to "Short Positions". Effectively, your position has been sold for your quote currency, but your Hopper will track the position now and calculate the "profit" you have from selling it and rebuying it later. This can be done both from the regular and the advanced view.
Let’s say you’ve shorted some positions. How does the reporting work? What do all the numbers mean?
[Screenshot: the Short Positions tab of the Cryptohopper dashboard, showing shorted positions such as QKC]
- The Percentage on the left indicates your result. This is different from open positions because a positive number will actually reflect a decline in price.
Take QKC in the figure above - since you sold it, the position has gone down 2.25%. This figure is green because you saved yourself from a 2.25% loss.
- Moving on to the figure on the right, in the case of QKC, you see it in yellow: -13.79%. This figure indicates your actual result.
QKC was sold at -16.04%, but since your position has gone down 2.25%, your actual indicates a loss of -13.79%. The actual figure will show you at what point you could buy back an investment to help you break even on bags (loss-making positions).
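As a quick, purely illustrative calculation of how those two dashboard figures combine into the actual result (using the QKC numbers above, and assuming the simple addition shown in the example):

sold_at = -16.04      # loss of the position at the moment it was shorted (sold)
short_result = 2.25   # how far the price has dropped since selling (the green figure)
actual = sold_at + short_result
print(round(actual, 2))  # -13.79, the yellow "actual" figure on the right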
Automatic Shorting
Manual shorting is the easy part, but we’re algorithmic traders, so it’s in our nature to automate as much as we can. To do this, we do need to understand the configurations that we can adjust. This is also where you could use the “Trailing Stop-Short”, a great tool to automate your shorting. Like a Trailing Stop-Buy and the Trailing Stop-Loss, the Trailing Stop-Short keeps tracking the price so that it helps you close the short position when the price rises again.
We explain every single setting here, but first go through the essentials to get your automatic shorting going. Seeing all those potential configurations may seem daunting, but having a plan helps find what’s best for your Hopper.
Go to your Base Config -> select “Shorting Settings” -> and enable “Automatic Shorting”.
To further configure shorting correctly, there are a few basic configurations. Ask yourself these questions:
- When do you want your Hopper to start shorting automatically?
Most often, this is when a specific position is in a loss. You can enable that setting when your strategy sends a sell signal (indicating that it will go down) or select “always short instead of sell”. We advise the first option for beginners.
- When should your short be “liquidated”, or, when should your Hopper buy back the position?
You can do this based on your strategy (advanced), by a certain “profit” (when you saved yourself from a certain percentage loss) or by using the TSS (Trailing Stop-Short), which closes the short when the price goes up by a certain percentage.
Automatic Shorting with Trailing Stop-Short (Beginner)
A simple way to set up automatic shorting is with the help of the Trailing Stop-Short. In your Base Config -> "Shorting Settings", with "Automatic Shorting" enabled, also enable Trailing Stop-Short.
Now we need to configure when your Hopper should close its Short automatically. For that, we have the “Arm trailing stop-short at” & “Trailing stop-short percentage”.
The “Arm trailing stop-short at” determines when your Hopper should start trailing the short position. Basically, how much loss should a position have before it will trail the price?
The “Trailing stop-short percentage” determines how much a price should rise again to close the short (buy back the same coin).
Did you know you could even use the Trailing Stop-Short to make a profit? Enable “Use trailing stop-short only,” and it will disable your shorting-take profit and only use your TSS settings.
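To make those two settings concrete, here is a minimal Python sketch of the trailing stop-short idea. It is an illustration only — the function, percentages, and price series are invented for the example, and this is not Cryptohopper's actual implementation.

# Illustrative sketch only: arm the trail once the price drops arm_pct below the
# short (sell) price, then buy back once the price rises trail_pct above its low.
def trailing_stop_short(prices, short_price, arm_pct=3.0, trail_pct=1.0):
    armed = False
    lowest = short_price
    for price in prices:
        lowest = min(lowest, price)
        if not armed and price <= short_price * (1 - arm_pct / 100):
            armed = True                  # "Arm trailing stop-short at" reached
        if armed and price >= lowest * (1 + trail_pct / 100):
            return price                  # "Trailing stop-short percentage" hit: buy back here
    return None                           # short still open

# Shorted at 8,000 USD; the price dips to 7,600, then bounces: the short closes at 7,680.
print(trailing_stop_short([7900, 7750, 7600, 7680], short_price=8000))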
Automatic Shorting with your strategy (Advanced)
This is more advanced because it’s harder to make a good strategy that indicates the best moments to short. Enable “Open short based on strategy” and “Close short based on strategy” in your shorting config.
Next, make sure to select a good strategy for buying and a good strategy for selling. Both are set in your config, under "Strategy" and "Sell Strategy."
Where do you get the right strategies? Either download them from the Marketplace or create one yourself in the Strategy Builder.
These are the basics of shorting! Do you want more info about every single possible configuration? Read about the Shorting Configuration here. | https://docs.cryptohopper.com/docs/explore-features/shorting/ | 2022-01-17T00:39:38 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/automatic-trading/trailingstopshort.png',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/shorting/shortbulkactions.jpg',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/shorting/Shorts+(1',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/shorting/shortingonstrategty.jpg',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/shorting/tsssettings.jpg',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/shorting/shortingonstrategty.jpg',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/mirror-trading/configurestrategysmall.gif',
'shorting short trailing stop stop-short automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object) ] | docs.cryptohopper.com |
HELICS Terminology
Before digging into the specifics of how a HELICS co-simulation runs, there are a number of key terms and concepts that need to be clarified first.
Simulator - A simulator is the executable that is able to perform some analysis function, often but not always, by solving specific problems to generate a time series of values. Simulators are abstract in the sense that the term largely refers to the software in a non-executing state, outside of the co-simulation. We might say things like, "That simulator doesn't model xyz appropriately for this analysis." or "This simulator has been parallelized and runs incredibly quickly on many-core computers." Any time we are talking about a specific instance of a simulator running a specific model, you are really talking about a…
Federate - Federates are the running instances of simulators that have been assigned specific models and/or have specific values they are providing to and receiving from other federates. For example, we can have ten distribution circuit models that we are connecting for a co-simulation. Each could be run by the simulator GridLAB-D, and when they are running, they become ten unique federates providing unique values to each other. A collection of federates working together in a co-simulation is called a “federation.”
Model - A model is synonymous with an agent in the co-simulation. A simulator contains the calculations for the model. Depending on the needs of the co-simulation, a federate can be configured to contain one or many models. For example, if we want to create a co-simulation of electric vehicles, we may write a simulator (executable program) to model the physics of the electric vehicle battery. We can then designate any number of agents/models of this battery by configuring the transfer of signals between the “Battery Federate” (which has N batteries modeled) and another federate.
Signals - Signals are the information passed between federates during the execution of the co-simulation. Fundamentally, co-simulation is about message-passing via these signals. HELICS divides these messages into two types: value signals and message signals. The former is used when coupling two federates that share physics (e.g. batteries providing power to wheel motors on an electric car) and the latter is used to couple two federates with information (e.g. a battery charge controller and a charge relay on a battery). There are various techniques and implementations of the message-passing infrastructure that have been implemented in the core. There are also a variety of mechanisms within a co-simulation to define the nature of the data being exchanged (data type, for example) and how the data is distributed around the federation.
Interface - A structure by which a federate can communicate with other federates. Includes Endpoints, Publications, and Inputs.
Core - The core is the software that has been embedded inside a simulator to allow it to join a HELICS federation. In general, each federate has a single core, making the two synonymous (core <=> federate). The two most common configurations are: (1) one core, one federate, one model; (2) one core, one federate, multiple models. There are sometimes cases where a single executable is used to represent multiple federates and all of those federates use a single core (one core, multiple federates, multiple models). Cores are built around specific message buses with HELICS supporting a number of different bus types. Selection of the message bus is part of the configuration process required to form the federation. Additional information about cores can be found in the Advanced Topics.
Broker - The broker is a special executable distributed with HELICS; it is responsible for performing the two key tasks of a co-simulation: (1) maintaining synchronization in the federation and (2) facilitating message exchange. Each core (federate) must connect to a broker to be part of the federation. Brokers receive and distribute messages from any federates that are connected to it, routing them to the appropriate location. HELICS also supports a hierarchy of brokers, allowing brokers to pass messages between each other to connect federates associated with different brokers and thus maintain the integrity of the federation. The broker at the top of the hierarchy is called the “root broker” and it is the message router of last resort. | https://docs.helics.org/en/latest/user-guide/fundamental_topics/helics_terminology.html | 2022-01-17T01:13:53 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.helics.org |
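The pieces above can be seen together in a minimal value federate written against the HELICS Python bindings (installed with pip install helics). This is only a sketch: the federate name, publication key, time step, and published value are illustrative, and it assumes a HELICS broker is already running (for example, started with helics_broker -f 1).

import helics as h

# Federate info: which core (message bus) type this federate's core should use.
fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq")
h.helicsFederateInfoSetCoreInitString(fedinfo, "--federates=1")

# One running simulator instance joining the federation: a federate with its own core.
fed = h.helicsCreateValueFederate("Battery", fedinfo)

# A value interface: physics-style data this federate publishes to other federates.
pub = h.helicsFederateRegisterGlobalTypePublication(fed, "Battery/charging_power", "double", "kW")

h.helicsFederateEnterExecutingMode(fed)

granted_time = 0.0
while granted_time < 60.0:
    # Time is granted in coordination with the broker; publish a value signal each step.
    granted_time = h.helicsFederateRequestTime(fed, granted_time + 10.0)
    h.helicsPublicationPublishDouble(pub, 7.2)

h.helicsFederateFinalize(fed)
h.helicsFederateFree(fed)
h.helicsCloseLibrary()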
Mobile SDK Integrations
Public Endpoints
This section covers which endpoints are publicly exposed and can be used by the mobile app to trigger a device registration, authentication, or de-registration.
Facet Management
Facets are a way of managing mobile app access to a FIDO Control Center application. Think of a facet as a mobile app unique identifier that is allowed to accept and process policies created within the Control Center.
- Adding a Facet ID to the RP
a. Get the Auth token:
curl -X POST "<rp url>/HYPR/rest/login" -H "accept: application/json" -H "API_KEY: <API Key from Control Center> " -H "Content-Type: application/x-www-form-urlencoded" -d "appId=<App ID from Control Center>"
{"AUTH_TOKEN":"<Auth token>"}
b. Example of getting the Auth Token:
curl -X POST "" -H "accept: application/json" -H "API_KEY: lpoooqitbk9sq3irpkjbetdcft" -H "Content-Type: application/x-www-form-urlencoded" -d "appId=hYPRAndroidTestQAApp"
{"AUTH_TOKEN":"d6a30514-af36-4e80-99b0-e93fb5dd2f64"}
c. Adding the Facet ID (Get this from the mobile app's logs)
d. Add the Facet ID:
curl -X POST "<rp url>/HYPR/rest/usermanagement/addFacet" -H "accept: application/json" -H "AUTH_TOKEN: <Auth Token>" -H "Content-Type: application/json" -d "{\"facetId\":\"YOUR_FACET_ID\"}"
cURL Result: true or false depending on success
e. Example of adding the Facet ID:
curl -X POST "" -H "accept: application/json" -H "AUTH_TOKEN: d6a30514-af36-4e80-99b0-e93fb5dd2f64" -H "Content-Type: application/json" -d "{\"facetId\":\"YOUR_FACET_ID\"}"
cURL Result: true or false, depending on whether the Facet ID was successfully added to the database. Then use the public endpoints to test registration, authentication, and de-registration.
List Files Using the SFTP Connector - Mule 4
The Anypoint Connector for SFTP (SFTP Connector) List operation returns a list of messages representing files or folders in the directory path:
The path you define in the Directory Path parameter can be absolute, or it can be relative to a working directory.
By default, the operation does not read or list files or folders within any subfolders of the directory path.
To list files or folders within any subfolders, set the Recursive parameter to `TRUE`.
Configure the List Operation in Studio
To add and configure the List operation in Studio, follow these steps:
In the Mule Palette view, search for `sftp` and select the List operation.
Drag the List operation onto the Studio canvas.
In the General tab of the operation configuration screen, click the plus sign (+) next to the Connector configuration field to access the global element configuration fields.
Specify the connection information and click OK.
In the General tab, set the Directory path field to `~/dropFolder` to set the path of the file to list.
The following screenshot shows the List operation configuration:
In the XML editor, the `<sftp:list>` configuration looks like this:
<sftp:list doc:name="List" config-ref="SFTP_Config" directoryPath="~/dropFolder"/>
The following XML example lists the folder contents of messages in the directory path without the subfolder contents. The For Each and Choice components manage each directory in the list differently from the way they manage each file:
<flow name="list"> <sftp:list <foreach> <choice> <when expression="#[attributes.directory]"> <flow-ref </when> <otherwise> <logger message="Found file #[attributes.path] which content is #[payload]" /> </otherwise> </choice> </foreach> </flow>
Poll a Directory
Because Mule 4 doesn’t have a polling message source, you can combine a Scheduler source with the SFTP List operation to poll a directory to look for new files to process.
In the following poll directory example, a flow lists the contents of a folder once per second. The flow then processes the files one by one, deleting each file after it is processed because there is a Delete operation in the For Each component.
The following screenshot shows the flow in Studio:
To create the flow, follow these steps:
In Studio, drag the Scheduler component onto the Studio canvas.
Drag the SFTP List operation to the right of Scheduler.
In the General tab of the operation configuration screen, click the plus sign (+) next to the Connector configuration field to access the global element configuration fields.
Specify the connection information and click OK.
In the General tab, set the Directory path field to `/config/dropFolder` to set the path of the file to list.
Drag a For each component to the right of the List operation.
Drag a Flow Reference component inside the For each component.
Set the Flow name field to `processFile` to specify the flow reference that processes the files.
Drag an SFTP Delete operation to the right of the Flow Reference component.
Set the Connector configuration field to the previously configured connection in the List operation.
Set the Path field to `#[attributes.path]`.
Drag a Transform Message component below the first flow.
Select the new flow and change the Name field to `processFile`.
Select the Transform Message component in the new flow, and in the Output view, paste the following DataWeave expression:
%dw 2.0
output application/json
---
characters: payload.characters.*name map ( (item, index) -> {name: item} )
Drag a Logger component to the right of Transform Message.
Set the Message field to `payload`.
Drag an Object Store Store operation to the right of Logger.
Set the Key field to `#['test-file-' ++ random() as String ++ '.json']` and the Value field to `payload`.
Save your Mule application.
In the XML editor, the configuration looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:sftp="http://www.mulesoft.org/schema/mule/sftp"
      xmlns:os="http://www.mulesoft.org/schema/mule/os"
      xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
      xmlns:doc="http://www.mulesoft.org/schema/mule/documentation">

    <!-- Connection values and global element names are placeholders; xsi:schemaLocation attributes are omitted for brevity. -->
    <sftp:config name="SFTP_Config">
        <sftp:connection host="localhost" port="22" username="muleuser" password="mulepassword"/>
    </sftp:config>

    <os:object-store name="Object_store" doc:name="Object store"/>

    <flow name="poll">
        <scheduler>
            <scheduling-strategy>
                <fixed-frequency frequency="1" timeUnit="SECONDS"/>
            </scheduling-strategy>
        </scheduler>
        <sftp:list config-ref="SFTP_Config" directoryPath="/config/dropFolder"/>
        <foreach>
            <flow-ref name="processFile"/>
            <sftp:delete config-ref="SFTP_Config" path="#[attributes.path]"/>
        </foreach>
    </flow>

    <flow name="processFile" maxConcurrency="1">
        <ee:transform doc:name="Transform Message">
            <ee:message>
                <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
characters: payload.characters.*name map ( (item, index) -> {name: item} )]]></ee:set-payload>
            </ee:message>
        </ee:transform>
        <logger level="ERROR" message="#[payload]"/>
        <os:store doc:name="Store" key="#['test-file-' ++ random() as String ++ '.json']" objectStore="Object_store">
            <os:value>#[payload]</os:value>
        </os:store>
    </flow>
</mule>
Match Filter
When listing files, use the File Matching Rules field, which accepts files that match the specified criteria. This parameter defines the possible attributes to either accept or reject a file.
These attributes are optional and ignored if you do not provide values for them. Use an
AND operator to combine individual attributes.
To configure the parameter in Studio, set the File Matching Rules parameter to
Edit inline and complete the desired attributes:
Timestamp since
Files created before this date are rejected.
Timestamp until
Files created after this date are rejected.
Not updated in the last
Minimum time that should pass since a file was last updated to not be rejected. This attribute works in tandem with Time unit.
Updated in the last
Maximum time that should pass from when a file was last updated to not be rejected. This attribute works in tandem with Time unit.
Time unit
A Not updated in the last attributes. Defaults to
MILLISECONDS.
Filename pattern
Similar to the current filename pattern filter but more powerful. Glob expressions (default) and regex are supported. You can select which one to use by setting a prefix, for example,
glob:*.{java, js}or
regex:[0-9]test.csv.
Path pattern
Same as Filename pattern but applies over the entire file path rather than just a filename.
Directories
Match only if the file is a directory.
Regular files
Match only if the file is a regular file.
Sym links`
Match only if the file is a symbolic link.
Min size
Inclusive lower boundary for the file size, expressed in bytes.
Max size
Inclusive upper boundary for the file size, expressed in bytes.
The following screenshot shows the File Matching Rules configuration in Studio:
In the XML editor, the configuration looks like this:
<sftp:matcher
Top-Level, Reusable Matcher
You can use the file matcher as either a named top-level element that enables reuse or as an inner element that is proprietary to a particular component.
The following example shows a top-level reusable matcher:
<sftp:matcher <flow name="smallFiles"> <sftp:list ... </flow>
Repeatable Streams
The List operation makes use of the repeatable streams functionality introduced in Mule 4. The operation returns a list of messages, where each message represents a file in the list and holds a stream to the file. A stream is repeatable by default.
For more information on this topic, see Streaming in Mule 4. | https://docs.mulesoft.com/sftp-connector/1.3/sftp-list | 2022-01-17T01:08:41 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['_images/sftp-list-operation-1.png',
'List operation configuration in Studio'], dtype=object)
array(['_images/sftp-list-operation-1.png',
'List operation configuration in Studio'], dtype=object)
array(['_images/sftp-list-operation-2.png',
'Pool a directory flow in Studio'], dtype=object)
array(['_images/sftp-list-operation-3.png',
'File Matching Rules parameter configuration'], dtype=object)] | docs.mulesoft.com |
A Rhino management sub task that allows its own subtasks to be executed in the context of a single client-demarcated transaction.
This task starts a user transaction then executes the nested subtasks.
If all the subtasks complete successfully, the user transaction is committed.
If any tasks fails with a fatal BuildException, or fails with a NonFatalBuildException but its failonerror flag is set to
true, the user transaction is rolled back.
The following sub task elements can be provided in any number and in any order.
The User Transaction task will execute these sub tasks in the specified order until a sub task fails by throwing a
org.apache.tools.ant.BuildException which will be re-thrown to Ant with some contextual information regarding the sub task that caused it. | https://docs.rhino.metaswitch.com/ocdoc/books/rhino-documentation/2.6.2/rhino-administration-and-deployment-guide/management-tools/tools-for-general-operations-administration-and-maintenance/scripting-with-apache-ant/usertransaction.html | 2022-01-17T01:21:19 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.rhino.metaswitch.com |
Generate Vulnerability Scan Order
Generate an order
We offer both monthly or yearly subscriptions per target. A target can be defined as one of the following - A website, hostname, IP address.
You can generate a free order for your customer allowing them to perform one single scan.
Generate order view
Go to your Servertastic Dashboard and click the Generate order tab on the left hand side, just like any other order.
- Once you have done this select the Cyber Security filter
- Select the yearly or monthly scan product
- Add your unique reference or use the generated one
- Click Generate Token
Placing the order
Once this is completed.
_10<<
The scan
The scan will now be added to a scan queue and will scan automatically once the scan is complete you will be notified via email.
Updated 5 days ago | https://docs.servertastic.com/docs/generate-vulnerability-scan-order | 2022-01-17T00:28:27 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['https://files.readme.io/3ee67db-Servertastic_Staging_2021-11-22_at_11.31.21_am.jpg',
'Servertastic Staging 2021-11-22 at 11.31.21 am.jpg Generate order view'],
dtype=object)
array(['https://files.readme.io/3ee67db-Servertastic_Staging_2021-11-22_at_11.31.21_am.jpg',
'Click to close... Generate order view'], dtype=object)
array(['https://files.readme.io/7886f20-Servertastic_Staging_2021-11-22_at_12.43.50_pm.jpg',
'Servertastic Staging 2021-11-22 at 12.43.50 pm.jpg'], dtype=object)
array(['https://files.readme.io/7886f20-Servertastic_Staging_2021-11-22_at_12.43.50_pm.jpg',
'Click to close...'], dtype=object)
array(['https://files.readme.io/8ba48d0-Servertastic_Staging_2021-11-22_at_12.57.24_pm.jpg',
'Servertastic Staging 2021-11-22 at 12.57.24 pm.jpg'], dtype=object)
array(['https://files.readme.io/8ba48d0-Servertastic_Staging_2021-11-22_at_12.57.24_pm.jpg',
'Click to close...'], dtype=object)
array(['https://files.readme.io/243691c-Servertastic_Staging_2021-11-22_at_1.59.03_pm.jpg',
'Servertastic Staging 2021-11-22 at 1.59.03 pm.jpg'], dtype=object)
array(['https://files.readme.io/243691c-Servertastic_Staging_2021-11-22_at_1.59.03_pm.jpg',
'Click to close...'], dtype=object)
array(['https://files.readme.io/886d00c-Servertastic_Staging_2021-11-22_at_12.57.24_pm.jpg',
'Servertastic Staging 2021-11-22 at 12.57.24 pm.jpg'], dtype=object)
array(['https://files.readme.io/886d00c-Servertastic_Staging_2021-11-22_at_12.57.24_pm.jpg',
'Click to close...'], dtype=object)
array(['https://files.readme.io/f31c4e1-Servertastic_Staging_2021-11-22_at_3.48.20_pm.jpg',
'Servertastic Staging 2021-11-22 at 3.48.20 pm.jpg'], dtype=object)
array(['https://files.readme.io/f31c4e1-Servertastic_Staging_2021-11-22_at_3.48.20_pm.jpg',
'Click to close...'], dtype=object) ] | docs.servertastic.com |
unreal.AnimNotifyState_TimedNiagaraEffectAdvanced¶
- class unreal.AnimNotifyState_TimedNiagaraEffectAdvanced(outer=None, name='None')¶
Bases:
unreal.AnimNotifyState_TimedNiagaraEffect
Same as Timed Niagara Effect but also provides some more advanced abilities at an additional cost.
C++ Source:
Plugin: Niagara
Module: NiagaraAnimNotifies
File: AnimNotifyState_TimedNiagaraEffect.h
Editor Properties: (see get_editor_property/set_editor_property)
destroy_at_end(bool): [Read-Write] Whether the Niagara system should be immediately destroyed at the end of the notify state or be allowed to finish
location_offset(Vector): [Read-Write] Offset from the socket or bone to place the Niagara system
notify_color(Color): [Read-Write] Color of Notify in editor
rotation_offset(Rotator): [Read-Write] Rotation offset from the socket or bone for the Niagara system
socket_name(Name): [Read-Write] The socket or bone to attach the system to
template(NiagaraSystem): [Read-Write] The niagara system to spawn for the notify state
- get_notify_progress(mesh_comp) → float¶
Returns a 0 to 1 value for the progress of this component along the notify.
- Parameters
mesh_comp (MeshComponent) –
- Returns
-
- Return type
- | https://docs.unrealengine.com/4.27/en-US/PythonAPI/class/AnimNotifyState_TimedNiagaraEffectAdvanced.html | 2022-01-17T01:26:47 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.unrealengine.com |
Sensitivity of timestamptz-interval arithmetic to the current timezone
The moment-moment overloads of the "-" operator for timestamptz, timestamp, and time section recommends that you avoid arithmetic that uses hybrid interval semantics—in other words that you perform interval arithmetic using only values that have just one of the fields of the internal [mm, dd, ss] representation tuple non-zero. The section Custom domain types for specializing the native interval functionality explains a coding practice that supports this recommendation.
Following the recommendation, this demonstration uses only pure days interval values and pure seconds interval values.
It shows how the outcome of adding or subtracting the day component of a pure days interval value to or from a timestamptz value is critically dependent on the session's TimeZone setting. In particular, the outcome is defined by special rules when, and only when, the starting and resulting timestamptz values straddle the "spring forward" moment using a timezone that respects Daylight Savings Time. (In the same way, special rules apply, too, when, the starting and resulting values straddle the "fall back" moment.
The demonstration shows that, in contrast, the outcome of corresponding arithmetic that uses a pure seconds interval value is independent of the session's TimeZone setting. This is also the case when a pure months interval value is used. But the demonstration's pedagogy doesn't need to illustrate this. Its focus is the special rules for a pure days interval value that crosses a Daylight Savings Time boundary.
The philosophy of the demonstration's design
When you run a query that selects a timestamptz value at the ysqlsh prompt, you'll see a text rendition whose spelling depends on the session's TimeZone setting. This behavior is critical to the data type's usefulness. But it can confound the interpretation of demonstrations that, like the present one, aim to show what happens to actual internally represented timestamptz values under critical operations. You can adopt the practice always to observe results with a current TimeZone setting of UTC. But the most robust test of your understanding is always to use a PL/pgSQL encapsulation that uses assert statement(s) to check that the actual outcome of a test agrees with what your mental model predicts. The demonstration that is presented on this page uses the assert approach. Critically, the entire test uses only timestamptz values (and, of course, interval values) to avoid conflating the outcome with the effects of data type conversions to text—supposedly to allow the human to use what is seen to confirm understanding of the rules.
Further, by using a table function encapsulation, the demonstration also also displays the results—as long as the assertions all hold. It has two display modes:
- Display all the results using UTC.
- Display the results that were computed with a session timezone set to X using that same timezone X.
To ensure that the starting timestamptz values, and the expected result timestamptz values that the assert statements check, are maximally free of extraneous conversion effects, these are all assigned as constants using the double precision overload of the to_timestamp() built-in function. The input for this overload is the number of seconds from the so-called start of the epoch. Do this:
set timezone = 'UTC'; select pg_typeof(to_timestamp(0)) as "data type", to_timestamp(0) as "start of epoch";
This is the result:
data type | start of epoch --------------------------+------------------------ timestamp with time zone | 1970-01-01 00:00:00+00
The demonstration uses, in turn, five starting moments for the timestamptz-interval addition tests. The first four are 20:00 on the Saturday evening before the "spring forward" moment in the small hours of the immediately following Sunday morning in a timezone of interest. The timezones are chosen so that two are in the Northern Hemisphere (located one to the west and one to the east of the Greenwich Meridian) and so that two are in the Southern Hemisphere. One of these, relatively unusually, "springs forward" by just thirty minutes. Each of the other three "springs forward" by the much more common amount of one hour. Here's the list. The unusual one is called out,
- America/Los_Angeles
- Europe/Amsterdam
- Australia/Sydney
- Australia/Lord_Howe (DST is, unusually, only 30 min ahead of Standard Time)
Internet search easily finds the 2021 "spring forward" moments and amounts for these timezones. And simple tests like this confirm that the facts are correct with respect to YugabyteDB's internal representation of the tz database:
set timezone = 'Australia/Lord_Howe'; select to_char('2021-10-03 01:59:59'::timestamptz, 'hh24:mi:ss (UTC offset = TZH:TZM)') as "Before 'spring forward'", to_char('2021-10-03 02:30:01'::timestamptz, 'hh24:mi:ss (UTC offset = TZH:TZM)') as "After 'spring forward'";
This is the result:
Before 'spring forward' | After 'spring forward' --------------------------------+-------------------------------- 01:59:59 (UTC offset = +10:30) | 02:30:01 (UTC offset = +11:00)
The "spring forward" moment is 02:00. So 01:59:59 is still in Winter Time, but only just, with a UTC offset of +10:30. Somebody in this timezone watching the self-adjusting clock on their smartphone would see it jump, in two seconds of elapsed wall-clock time, from 01:59:59 to 02:30:01, showing that Summer Time has now arrived, bringing the new UTC offset of +11:00.
The test also uses midsummer's eve in UTC in a control test. By definition, UTC does not respect Daylight Savings Time.
These ad hoc queries determine the seconds from the start of the epoch for the four chosen "spring forward" moments, and for midsummer's eve in UTC.
select (select extract(epoch from '2021-03-13 20:00:00 America/Los_Angeles' ::timestamptz)) as "Los Angeles DST start", (select extract(epoch from '2021-03-27 20:00:00 Europe/Amsterdam' ::timestamptz)) as "Amsterdam DST start", (select extract(epoch from '2021-10-02 20:00:00 Australia/Sydney' ::timestamptz)) as "Sydney DST start", (select extract(epoch from '2021-10-02 20:00:00 Australia/Lord_Howe' ::timestamptz)) as "Lord Howe DST start", (select extract(epoch from '2021-06-23 20:00:00 UTC' ::timestamptz)) as "UTC mid-summer";
This is the result:
Los Angeles DST start | Amsterdam DST start | Sydney DST start | Lord Howe DST start | UTC mid-summer -----------------------+---------------------+------------------+---------------------+---------------- 1615694400 | 1616871600 | 1633168800 | 1633167000 | 1624478400
These values are used, as manifest constants, in the test table function's source code. And the reports show that they were typed correctly.
The demonstration
The demonstration uses the interval_arithmetic_results() table function. Its design is very similar to that of the plain_timestamp_to_from_timestamp_tz() table function, presented in the "sensitivity of the conversion between timestamptz and plain timestamp to the UTC offset" section.
The interval_arithmetic_results() function depends on some helper functions. First create a trivial wrapper for to_char() to improve the readability of the output without cluttering the code by repeating the verbose format mask.
drop function if exists fmt(timestamptz) cascade; create function fmt(t in timestamptz) returns text language plpgsql as $body$ begin return to_char(t, 'Dy dd-Mon hh24:mi TZH:TZM'); end; $body$;
Now create a type to represent the facts about one timezone: 20:00 on the Saturday evening before the "spring forward" moment; the name of the timezone for which this is the "spring forward" moment; and the size of the "spring forward" amount.
drop type if exists rt cascade; create type rt as ( -- On the Saturday evening before the "spring forward" moment -- as seconds from to_timestamp(0). s double precision, -- The timezone in which "s" has its meaning. tz text, -- "spring forward" amount in minutes. spring_fwd_amt int);
Create and execute the test table function thus. You can easily confirm, with ad hoc tests, that it is designed so that its behavior is independent of the session's TimeZone setting. The design establishes the expected resulting timestamptz values, after adding either '24 hours'::interval or '1 day'::interval to the "spring forward" moments, crossing the Daylight Savings Time transition.
drop function if exists interval_arithmetic_results(boolean) cascade; create function interval_arithmetic_results(at_utc in boolean) returns table(z text) language plpgsql as $body$ declare set_timezone constant text not null := $$set timezone = '%s'$$; tz_on_entry constant text not null := current_setting('timezone'); secs_pr_hour constant double precision not null := 60*60; interval_24_hours constant interval not null := '24 hours'; interval_1_day constant interval not null := '1 day'; begin z := '--------------------------------------------------------------------------------'; return next; if at_utc then z := 'Displaying all results using UTC.'; return next; else z := 'Displaying each set of results using the timezone in which they were computed.'; return next; end if; z := '--------------------------------------------------------------------------------'; return next; declare -- 20:00 (local time) on the Saturday before the "spring forward" moments in a selection of timezones. r rt not null := (0, '', 0); start_moments constant rt[] not null := array [ (1615694400, 'America/Los_Angeles', 60)::rt, (1616871600, 'Europe/Amsterdam', 60)::rt, (1633168800, 'Australia/Sydney', 60)::rt, (1633167000, 'Australia/Lord_Howe', 30)::rt, -- Nonce element. Northern midsummer's eve. (1624478400, 'UTC', 0)::rt ]; begin foreach r in array start_moments loop execute format(set_timezone, r.tz); declare t0 constant timestamptz not null := to_timestamp(r.s); t0_plus_24_hours constant timestamptz not null := t0 + interval_24_hours; t0_plus_1_day constant timestamptz not null := t0 + interval_1_day; expected_t0_plus_24_hours constant timestamptz not null := to_timestamp(r.s + 24.0*secs_pr_hour); expected_t0_plus_1_day constant timestamptz not null := case r.spring_fwd_amt when 60 then to_timestamp(r.s + 23.0*secs_pr_hour) when 30 then to_timestamp(r.s + 23.5*secs_pr_hour) when 0 then to_timestamp(r.s + 24.0*secs_pr_hour) end; begin assert t0_plus_24_hours = expected_t0_plus_24_hours, 'Bad "t0_plus_24_hours"'; assert t0_plus_1_day = expected_t0_plus_1_day, 'Bad "t0_plus_1_day"'; /* Display the internally represented values: EITHER: using 'UTC' to show what they "really" are OR: using the timezone in which they were computed to show the intended usability benefit for the local observer. */ if at_utc then execute format(set_timezone, 'UTC'); -- Else, leave the timezone set to "r.tz". end if; z := r.tz; return next; z := ''; return next; z := 't0: '||fmt(t0); return next; z := 't0_plus_24_hours: '||fmt(t0_plus_24_hours); return next; z := 't0_plus_1_day: '||fmt(t0_plus_1_day); return next; z := '--------------------------------------------------'; return next; end; end loop; end; execute format(set_timezone, tz_on_entry); end; $body$; select z from interval_arithmetic_results(true);
This is the result:
-------------------------------------------------------------------------------- Displaying all results using UTC. -------------------------------------------------------------------------------- America/Los_Angeles t0: Sun 14-Mar 04:00 +00:00 t0_plus_24_hours: Mon 15-Mar 04:00 +00:00 t0_plus_1_day: Mon 15-Mar 03:00 +00:00 -------------------------------------------------- Europe/Amsterdam t0: Sat 27-Mar 19:00 +00:00 t0_plus_24_hours: Sun 28-Mar 19:00 +00:00 t0_plus_1_day: Sun 28-Mar 18:00 +00:00 -------------------------------------------------- Australia/Sydney t0: Sat 02-Oct 10:00 +00:00 t0_plus_24_hours: Sun 03-Oct 10:00 +00:00 t0_plus_1_day: Sun 03-Oct 09:00 +00:00 -------------------------------------------------- Australia/Lord_Howe t0: Sat 02-Oct 09:30 +00:00 t0_plus_24_hours: Sun 03-Oct 09:30 +00:00 t0_plus_1_day: Sun 03-Oct 09:00 +00:00 -------------------------------------------------- UTC t0: Wed 23-Jun 20:00 +00:00 t0_plus_24_hours: Thu 24-Jun 20:00 +00:00 t0_plus_1_day: Thu 24-Jun 20:00 +00:00 --------------------------------------------------
The execution finishes without error, confirming that the assertions hold.
Interpretation and statement of the rules
Recall that when a timestamptz value is observed using UTC, you see the actual yyyy-mm-dd hh24:mi:ss value that the internal representation holds.
You can see clearly that the rule for adding the pure seconds '24 hours'::interval value is unremarkable. Clock-time-semantics is used to produce a value that is simply exactly 24 hours later than the starting timestamptz value. On the other hand, "spring forward" moment, then the result is given by adding less than 24 hours. The delta is equal to the size of the "spring forward" amount.
In other words, when timestamptz-interval arithmetic uses a pure days interval value in a current timezone that causes crossing the Daylight Savings Time transition, the resulting timestamptz value is calculated using calendar-time-semantics. The rule to add less than 24 hours aligns exactly with the human experience. If you go to bed at your normal time on the Saturday evening before the "spring forward" moment (in a region whose timezone observes Daylight Savings Time with a one hour "spring forward" amount), and if you get up after your normal number of hours in bed, then the self-adjusting clock on your smart phone will read one hour later than it usually does—hence the mnemonic "spring forward". In other words, you'll experience a waking day on the Sunday that's one hour shorter than usual—just twenty-three hours.
You might find that the displayed results feel counter-intuitive until you've fully grasped all the central concepts here. But things usually feel satisfyingly natural when you observe the very same results using the timezone that was in force when the interval arithmetic was performed.
Invoke the table function again to show the results this way—in other words, to emphasize the intended usability benefit, for the local observer, of the special rules for pure days interval arithmetic:
select z from interval_arithmetic_results(false);
This is the new result:
-------------------------------------------------------------------------------- Displaying each set of results using the timezone in which they were computed. -------------------------------------------------------------------------------- America/Los_Angeles t0: Sat 13-Mar 20:00 -08:00 t0_plus_24_hours: Sun 14-Mar 21:00 -07:00 t0_plus_1_day: Sun 14-Mar 20:00 -07:00 -------------------------------------------------- Europe/Amsterdam t0: Sat 27-Mar 20:00 +01:00 t0_plus_24_hours: Sun 28-Mar 21:00 +02:00 t0_plus_1_day: Sun 28-Mar 20:00 +02:00 -------------------------------------------------- Australia/Sydney t0: Sat 02-Oct 20:00 +10:00 t0_plus_24_hours: Sun 03-Oct 21:00 +11:00 t0_plus_1_day: Sun 03-Oct 20:00 +11:00 -------------------------------------------------- Australia/Lord_Howe t0: Sat 02-Oct 20:00 +10:30 t0_plus_24_hours: Sun 03-Oct 20:30 +11:00 t0_plus_1_day: Sun 03-Oct 20:00 +11:00 -------------------------------------------------- UTC t0: Wed 23-Jun 20:00 +00:00 t0_plus_24_hours: Thu 24-Jun 20:00 +00:00 t0_plus_1_day: Thu 24-Jun 20:00 +00:00 --------------------------------------------------
From this perspective, adding one day takes you to the same wall-clock time on the next day. But watching a stop watch until it reads twenty-four hours, takes you to the next day at a moment where the wall-clock reads one hour (or thirty minutes in one of the unusual timezones) later than when you started the stop watch.
Observe what happens at the 'fall back' moments
You might like to redefine the start_moments array in the interval_arithmetic_results() function's source code to use the "fall back" moments for each of the timezones. Internet search finds these easily. Doing this will show you that pure days interval arithmetic semantics respects the feeling you get on the Sunday after the transition that you have one hour more than usual of waking time—hence the mnemonic "fall back". "fall back" moment, then the result is given by adding more than 24 hours. The delta is equal to the size of the "fall back" amount. | https://docs.yugabyte.com/latest/api/ysql/datatypes/type_datetime/timezones/timezone-sensitive-operations/timestamptz-interval-day-arithmetic/ | 2022-01-17T01:43:59 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.yugabyte.com |
Site configuration
In addition to the usual Jekyll configuration options, there are many options specific to Open SDG. These are detailed below, along with usage examples. All of these settings go in the
_config.yml file. Alternatively, you can add any/all of these settings to a
site_config.yml file in your data folder (usually
data/site_config.yml).
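For example, a site_config.yml file in your data folder could hold any subset of the settings documented below. A minimal sketch might look like this (the values are illustrative only):
country:
  name: Australia
  adjective: Australian
environment: staging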
This document covers the "site configuration", which is distinct from "data configuration". See more details on data configuration.
Note about "strings": Many of the settings detailed here contain human-readable "strings" (ie, text). In most cases, they can be replaced by translation keys for better multilingual support. For example, "Indicator" could be replaced with "general.indicator".
To see many of these options in action, the site starter repository contains an example config file.
accessible_charts¶
Optional: This setting can be set to
true to enable chart functionality that is intended to increase accessibility by adding support for screenreaders and keyboard navigation. If omitted, this defaults to
false, however setting this to
true is recommended.
accessible_charts: false
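Since enabling this feature is recommended, a typical configuration would instead contain:
accessible_charts: true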
accessible_tabs¶
Optional: This setting can be set to
true to enable tab functionality that is compliant with the WAI-ARIA best practices. This adds improved keyboard navigation of the tabs. If omitted, this defaults to
false, however setting this to
true is recommended.
accessible_tabs: false
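As with accessible_charts, the recommended configuration is to enable it:
accessible_tabs: true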
analytics¶
Optional: This setting can be used to facilitate the installation of Google Analytics. You can do this in multiple ways:
- ua (analytics.js)
- gtag (gtag.js)
- gtm (Google Tag Manager)
Google provides a number that would be used in each of these cases. The numbers typically have the following prefixes:
- UA-xxxxxxxx
- G-xxxxxxxx
- GTM-xxxxxxxx
To use this setting, put the appropriate number next to the corresponding item. For example:
analytics:
  ua: UA-xxxxxxxx
  gtag: G-xxxxxxxx
  gtm: GTM-xxxxxxxx
Notes:
- You don't need to use all of them. You can use 1, 2, or none at all.
- The ua option was previously called ga_prod, which also still works.
- As an alternative to using these settings, you can override the _includes/head-custom.html and/or _includes/scripts-custom.html files in order to insert any Google Analytics snippets you might need.
- The ua option also captures certain custom events, such as the clicking of the contrast toggle button.
- If you are using the cookie_consent_form setting, these analytics will be automatically included in the cookie consent form. This allows the user to decline the setting of the following cookies: "_gat", "_gid", and "_ga". If using the gtag approach, then some additional cookies may be set (see the official documentation). You can specify these with an extra_cookies option, for example:
analytics:
  gtag: G-xxxxxxxx
  extra_cookies:
    - _ga_123456789
    - _gac_gb_123456789
breadcrumbs¶
Optional: This setting can contain breadcrumb settings for each of the supported collection types:
goal,
indicator, and
post. Each should have a list of label/path objects. For example, the following configuration would add the breadcrumbs
Home > Updates at the top of each post:
breadcrumbs:
  post:
    - label: Home
      path: /
    - label: Updates
      path: news/
Or with the addition of translation keys for multilingual sites:
breadcrumbs:
  post:
    - label: general.home
      path: /
    - label: menu.updates
      path: news/
Here is a full example including
goal and
indicator as well:
breadcrumbs:
  post:
    - label: general.home
      path: /
    - label: menu.updates
      path: news/
  goal:
    - label: general.home
      path: /
    - label: general.goals
      path: goals/
  indicator:
    - label: general.home
      path: /
    - label: general.goals
      path: goals/
Note that
indicator will automatically add a final item, which is a link to the goal that the indicator belongs to. You do not need to specify this, since it is done dynamically and automatically.
configuration_edit_url¶
Optional: This setting controls the URL of the "Edit Configuration" link that appears on the staging site's indicator pages. It should be a full URL. Note that you can include
[id] in the URL, and it will be dynamically replaced with the indicator's id (dash-delimited).
configuration_edit_url:[id].md
contrast_type¶
Optional: This setting allows you to change the type of contrast button your site uses. The available settings are:
default: Two buttons containing "A" - one for on and one for off (this is the default if you omit this setting)
long: If you use this option one single button will be displayed with the text 'High contrast' / 'Default contrast', depending on which mode of contrast is active.
single: One button containing "A" which toggles on/off, depending on which mode of contrast is active. This is recommended for the cleanest display.
Example:
contrast_type: single
cookie_consent_form¶
Optional: This setting allows you to turn on a cookie consent form that users will see as soon as they visit the site, which allows users to control whether certain services and cookies are used. See the cookies and privacy documentation for more details.
Here is an example showing the available options and their default values:
cookie_consent_form:
  enabled: false
country¶
Required: This setting should contain two more (indented) settings:
name and
adjective. These are intended to allow the platform to refer to the country (or if appropriate, locality or organisation) using the platform.
country:
  name: Australia
  adjective: Australian
create_goals¶
Optional: This setting can be used to automatically create the goal pages. Without this setting, you will need a file for each goal (per language), in a
_goals folder.
This setting can contain several (indented) sub-settings:
layout: This can be used to specify which Jekyll layout will be used for the goal pages. You can create and use your own layout, but several layouts are included with Open SDG. These can be found in the _layouts folder in the repository. For example, to use the "goal-with-progress.html" layout, you would enter "goal-with-progress" (without the ".html") in this setting.
previous_next_links: You can set this to true to turn on previous/next links on goal pages, allowing users to "page" through the goals, directly from one to the next.
goals: This optional item can include an array of objects, each with a content field. Use this to specify specific content for goal pages, which can include Markdown, or can be a translation key. They should be in order of goal number.
create_goals:
  layout: goal
  previous_next_links: true
  goals:
    - content: My content for goal 1
    - content: My content for goal 2 with a [link]()
    - content: custom.my_translation_key_for_goal_3
create_indicators¶
Optional: This setting can be used to automatically create the indicator pages. Without this setting, you will need a file for each indicator (per language), in an
_indicators folder. This setting should include another (indented) setting indicating the Jekyll layout to use for the indicators. You can optionally turn on previous/next links as well.
create_indicators:
  layout: indicator
  previous_next_links: true
create_pages¶
Optional: This setting can be used to automatically create 4 platform-dependent pages:
- the home page
- the indicators.json page
- the search results page
- the reporting status page
Without this setting, you will need a file for each of these 4 pages (per language), in a
_pages folder. This setting can include more advanced settings (see this jekyll-open-sdg-plugins code) but can also simply be set to
true.
create_pages: true
If you would like to use the alternative frontpage (
frontpage-alt) alongside a dedicated "goals" page, you can used this configuration:
create_pages:
  - folder: /
    layout: frontpage-alt
  - folder: /goals
    layout: goals
  - folder: /reporting-status
    layout: reportingstatus
  - filename: indicators.json
    folder: /
    layout: indicator-json
  - folder: /search
    layout: search
The
folder property is required, and controls the URL path of the page. The
filename property is optional, and is needed only in the rare case where your page needs an unusual filename (such as "indicators.json" in the example above). All other properties are treated like "frontmatter" in a regular Jekyll page, such as the
layout properties above.
custom_css¶
Optional: This setting can be used to load additional CSS files on each page. It should be a list of relative paths to CSS files.
custom_css:
  - /assets/css/custom.css
NOTE: This approach is deprecated. It is recommended to instead put your custom styles into a _sass folder.
custom_js¶
Optional: This setting can be used to load additional JavaScript files on each page. It should be a list of relative paths to JavaScript files, or remote paths to third-party Javascript files.
custom_js:
  - /assets/js/custom.js
  -
data_edit_url¶
Required: This setting controls the URL of the "Edit Data" link that appears on the staging site's indicator pages. It should be a full URL. Note that you can include
[id] in the URL, and it will be dynamically replaced with the indicator's id (dash-delimited).
data_edit_url:[id].csv
data_fields¶
Optional: This setting can be used if your data source has non-standard fields for unit and/or series -- for example, if you have CSV files with units in a "UNIT_MEASURE" column, rather than the usual "Units". If this is omitted, the following defaults are used:
data_fields:
  series: Series
  units: Units
If your data source is coming directly from SDMX, for example, you might use something like this:
data_fields:
  series: SERIES
  units: UNIT_MEASURE
date_formats¶
Optional: This setting can be used to control date formats for use in the site, such as in the news/category/post layouts. Any number of date formats can be entered, and each must have an arbitrary
type, such as "standard". Make sure that each
type has a variant for each of your languages. For example, here is how you might configure a "standard" date format:
date_formats:
  - type: standard
    language: en
    format: "%b %d, %Y"
  - type: standard
    language: es
    format: "%d de %b de %Y"
The
% variables in the formats correspond to the variables listed in this Ruby DateTime documentation.
Note that the "standard" type is used in Open SDG's news/post functionality. Additional format types can be added for custom purposes.
decimal_separator¶
Optional: This setting can be used to replace the default decimal separator --
. -- with any other symbol. For example, the following is how you could use a comma as a decimal separator:
decimal_separator: ','
disclaimer¶
Optional: This setting controls the content of the disclaimer that appears at the top of each page. If you are not happy with the default ("ALPHA: This is a development website. We welcome your feedback.") then you can use something like the following example configuration:
disclaimer:
  phase: BETA
  message: This is my disclaimer message.
The above configuration would result in: "BETA: This is my disclaimer message."
If you only want to change the phase (to "BETA" for example), you can omit the
message like so:
disclaimer:
  phase: BETA
As always, you can use translation keys.
email_contacts¶
Required: This setting should contain three more (indented) settings for email addresses:
questions,
suggestions, and
functional. This allows the platform to direct users to appropriate inboxes from various parts of your site.
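For example (the addresses shown are placeholders; substitute your own inboxes):
email_contacts:
  questions: [email protected]
  suggestions: [email protected]
  functional: [email protected]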
environment¶
Required: This setting should be either
staging or
production. Certain features of the platform, such as data management links, will only appear on
staging. Typically you will have this set to
staging in the
_config.yml file, and set to
production in the
_config_prod.yml file.
environment: staging
favicons¶
Optional: This setting controls the favicon markup. Possible settings are
legacy and
favicon.io. We recommend using
favicon.io and will default to this in the future. But currently the default is
legacy if omitted.
favicons: favicon.io
footer_language_toggle¶
Optional: This setting controls the type of language toggle to be used in the footer. Possible settings are
dropdown,
links, and
none. If this is omitted, the default is
none.
footer_language_toggle: none
footer_menu¶
Required: This setting controls the footer menu for the platform. It should contain a list of menu items, each containing a
path and a translation key.
The following example provides a footer menu matching older versions of Open SDG, which included options for social media and email contacts.
footer_menu:
  - path: mailto:[email protected]
    translation_key: menu.contact_us
  - path:
    translation_key: general.twitter
  - path:
    translation_key: general.facebook
  - path: faq/
    translation_key: menu.faq
  - path: cookies/
    translation_key: menu.cookies
Note that the
path of an item can be a translation key itself. This is useful if you want the link to go to different URLs depending on what language is active (for example if you have multiple language-specific Twitter accounts).
frontpage_cards¶
Optional: This setting is only used in the
frontpage-alt layout. It can display any number of "cards" in 3-column rows, beneath the grid of goal tiles. It should be a list of cards. Each configuration is optional, and here is an example displaying one card with all of the options:
frontpage_cards:
  -
    # This controls the color of the line at the top of the card.
    rule_color: orange
    # This sets the title of the card.
    title: My card title
    # This sets the content of the card. Markdown is supported. Note that all
    # internal links should be relative to the frontpage. For example, instead
    # of [link](/path) you should use [link](path).
    content: |
      * List item
      * List item with [link]()
    # This displays any file in the `_includes` folder.
    include: components/download-all-data.html
    # This controls the text for a call-to-action button.
    button_label: My button text
    # This controls the URL the button links to.
    button_link:
  - title: My second card
    etc...
frontpage_goals_grid¶
Optional: This setting is only used in the
frontpage-alt layout. It can display a title and description above the grid of goal tiles. It can be configured in the following way:
frontpage_goals_grid:
  title: title goes here
  description: description goes here
Markdown is supported in the description. However note that all internal links
should be relative to the frontpage. For example, instead of
[link](/path) you should use
[link](path).
frontpage_heading¶
Optional: This setting can control the heading that appears on the front page. This setting is only used in the
frontpage layout.
frontpage_heading: Australian data for Sustainable Development Goal indicators
frontpage_instructions¶
Optional: This setting can control the instructions that appear on the front page. This setting is only used in the
frontpage layout.
frontpage_instructions: Click on each goal for Australian statistics for Sustainable Development Goal global indicators.
frontpage_introduction_banner¶
Optional: This setting adds a banner to your site's homepage, in order to introduce your users to your site. This setting is used in both the
frontpage and
frontpage-alt layouts. To add a banner update the
_config.yml file with these settings:
frontpage_introduction_banner:
  title: title goes here
  description: description goes here
goal_image_base¶
Optional: This setting controls the base URL for downloading the imagery for the goal images. The platform will use this as a base, and complete the URLs (behind the scenes) by adding a language and number. For example, if you set this to, then the platform will try to download the Spanish image for Goal 4 at:.
If omitted, the following default will be used:
goal_image_base:
goal_image_extension¶
Optional: This setting controls the type of file (the file "extension") that will be used for the goal images. If omitted, the default will be
png. The preceding dot (eg,
.png) is not needed.
goal_image_extension: png
NOTE: Please ensure that files of this type are available at the location specified in
goal_image_base in each language that you use. For example, if your
goal_image_base is, and your
goal_image_extension is
svg, and your language is French, the goal 5 icon should be available at:.
goals_page¶
Optional: This setting controls certain aspects of the
goals layout. The available settings are:
title: Controls the title of the goals page. Defaults to "Goals".
description: Controls the introductory text under the title. If omitted there will be no introductory text.
Here is an example of using these settings:
goals_page:
  title: title goes here
  description: description goes here
As always, for multilingual support, these settings can refer to translation keys, and the description can include Markdown.
graph_color_headline¶
Optional: This setting can be used to control the color of the "headline" (eg, the national dataset, without any disaggregations selected) on charts. The default is #004466.
graph_color_headline_high_contrast¶
Optional: This setting can be used to control the color of the "headline" (eg, the national dataset, without any disaggregations selected) on charts, in high-contrast mode. The default is #55a6e5.
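For example, to set both colors explicitly to the documented defaults (note the quotes, since an unquoted # would start a YAML comment):
graph_color_headline: '#004466'
graph_color_headline_high_contrast: '#55a6e5'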
graph_color_set¶
Optional: This setting can be used to customize the color set used in the charts. There are five possible entries:
graph_color_set: 'accessible': a 6-color set that is specifically chosen for optimal accessibility (recommended)
graph_color_set: 'default': a deprecated 6-color set that is still the default (for reasons of backwards compatibility)
graph_color_set: 'sdg': to use the 17 SDG colors in all charts
graph_color_set: 'goal': to use shades of the color of the current indicator's goal
graph_color_set: 'custom': to use a set of customized colors. In this case, write the hexadecimal color codes of the colors you want to use to the list in graph_color_list (see below).
NOTE: Whatever color scheme you choose here, please ensure that all colors satisfy the accessibility (minimum contrast) standards in your region. These colors will need to be visible on white and black backgrounds. The accessible color scheme is designed to meet this requirement, and so it is recommended.
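For example, to use the 17 SDG colors in all charts:
graph_color_set: 'sdg'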
graph_color_list¶
Optional: This setting can be used to define a set of colors to be used in the charts. This requires graph_color_set to be set to 'custom'. Enter a list of hexadecimal color codes, for example:
graph_color_list: ['3fd64f','cfd63f','4eecec','ec4ed9']
graph_color_number¶
Optional: This setting can be used to limit the length of the list of colors selected via
graph_color_set. The maximum value for
graph_color_set: 'default' is 6, for
graph_color_set: 'sdg' is 17, for
graph_color_set: 'goal' is 9 and for
graph_color_set: 'custom' the length of
graph_color_list. If nothing is defined here, the corresponding maximum is used. Be aware that the number selected here affects how many datasets can be displayed simultaneously in the charts (2 times this value - once as a normal line or bar and once as a dashed line or bar).
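For example, to limit the charts to the first four colors of the selected color set:
graph_color_number: 4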
graph_title_from_series¶
Optional: This setting can be set to
true to use the currently-selected series as the chart title, whenever possible. Example:
graph_title_from_series: true
header¶
Optional: This setting can control aspects of the header that is displayed at the top of each page. The available options are:
include: This specifies an include file, assumed to be inside of
_includes/components/header/, to use for the header.
Here is an example, showing the default that is used if this setting is omitted:
header:
  include: header-default.html
The configuration above will include the file
_includes/components/header/header-default.html at the top of each page.
The
header-menu-left-aligned.html option is also available, and is recommended.
header_language_toggle¶
Optional: This setting controls the type of language toggle to be used in the header. Possible settings are
dropdown,
links, and
none. If this is omitted, the default is
dropdown. The general recommendation is to use
dropdown if you have more than 3 languages, and otherwise to use
links.
header_language_toggle: dropdown
hide_empty_metadata¶
Optional: This setting can be used to hide any metadata fields that are empty. In other words, this setting can ensure that if an indicator has no data for a particular metadata field, that field will not display at all. The default behavior is for all metadata fields to be displayed, regardless of whether the indicator has the required data.
hide_empty_metadata: true
hide_single_series¶
Optional: This setting can be used to hide the "Series" toggle on indicator pages whenever there is only a single series to choose from.
hide_single_series: true
hide_single_unit¶
Optional: This setting can be used to hide the "Unit" toggle whenever there is only a single unit to choose from.
hide_single_unit: true
ignored_disaggregations¶
Optional: This setting causes any number of disaggregations (eg, columns in CSV files) to be ignored. This means that they will not receive drop-downs in the left sidebar on indicator pages.
This can be useful in cases where the source data may contain columns that you prefer not to appear in the platform. For example, perhaps your source data is SDMX, and contains required SDMX fields like UNIT_MULT, which you do not need visible on the platform. You could ignore it with this configuration:
ignored_disaggregations:
  - UNIT_MULT
indicator_config_form¶
Optional: This setting controls the behavior of the indicator config forms. The available settings are:
enabled: Whether or not to generate these configuration forms
dropdowns: This can be used to convert any string field into a dropdown. Each item should have these properties:
jsonschema: The path into the jsonschema's properties object, to the property that you would like to convert into a dropdown. In most cases this is simply the name of the property, but in nested situations, you can use dot-syntax to drill down into the jsonschema object.
values: A list of values for the dropdown.
labels: An optional list of human-readable labels, corresponding to the values list.
For example, the following would convert the
reporting_status property into a dropdown:
indicator_config_form:
  dropdowns:
    - jsonschema: reporting_status
      values:
        - complete
        - notstarted
repository_link: This will display a "Go to repository" link on the configuration page. You can enter a pattern with the placeholder [id] and it will be replaced with the indicator id (eg, 1-1-1). For example, on indicator 1-1-1, [id] will be replaced with 1-1-1 in the link.
translation_link: This will display a "Go to translation" link beneath each metadata field. This is used to give the editor a shortcut to wherever it is that the translations are maintained. You can enter a pattern with the placeholder [id] and it will be replaced as described above. In addition, your pattern can include these other placeholders:
[language]: This will be replaced with the current language.
[group]: This will be replaced with the first part of the translation key. Eg, if the translation key is foo.bar then [group] will be replaced with foo.
[key]: This will be replaced with the second part of the translation key. Eg, if the translation key is foo.bar then [key] will be replaced with bar.
The appropriate value for this translation_link setting depends on the specifics of how you maintain translations. For example, if your translations are maintained in Weblate then you might take advantage of Weblate's useful search feature, by having a translation_link of:[group]/?q=+context%3A%3D[key]
For another example, if you are maintaining translations in the
translations folder in your data repository, then you might have a translation_link of:[language]/[group].yml
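Putting these options together, a configuration might look like the following sketch (the repository and translation URL patterns are entirely hypothetical and should be replaced with your own):
indicator_config_form:
  enabled: true
  repository_link: https://github.com/my-org/my-data-repository/blob/develop/indicator-config/[id].yml
  translation_link: https://github.com/my-org/my-translations/blob/develop/translations/[language]/[group].yml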
Links to the forms appear in the "Edit" tab on indicator pages.
indicator_data_form¶
Optional: This setting controls the behavior of the indicator data forms. The available settings are:
enabled: Whether or not to generate these data forms
repository_link: This will display a "Go to repository" link on the configuration page. You can enter a pattern with the placeholder
[id] and it will be replaced with the indicator id (eg, 1-1-1). For example, on indicator 1-1-1, [id] will be replaced with 1-1-1 in the link.
Links to the forms appear in the "Edit" tab on indicator pages.
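For example, a minimal configuration might look like this (the repository URL pattern is hypothetical):
indicator_data_form:
  enabled: true
  repository_link: https://github.com/my-org/my-data-repository/blob/develop/data/indicator_[id].csv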
indicator_metadata_form¶
Optional: This setting controls the behavior of the indicator metadata forms. The available settings are the same as in
indicator_config_form above, plus the following extra options:
scopes: A list of the "scopes" that you would like to include in the form. If left blank, this will default to "national" and "global".
exclude_fields: A list of the fields that you would like to omit from the form.
translated: This setting is only for multilingual implementations that are using the "subfolder approach" for multilingual metadata. When this option is enabled, the contents of the metadata forms are translated (based on the current language), to allow you to save different files for each language. If you are not using the "subfolder approach" for multilingual metadata (or you don't know what that is) then you can safely leave this disabled.
For example:
indicator_metadata_form:
  enabled: true
  scopes:
    - national
    - global
  exclude_fields:
    - my_excluded_field_name
  translated: false
Links to the forms appear in the "Edit" tab on indicator pages.
indicator_tabs¶
Optional: This setting controls the order and contents of the data tabs on indicator pages. This can be used to rearrange the tabs, or to hide particular tabs. This can also be overridden for particular indicators in the indicator configuration.
For each of the four tab slots, you can set either:
chart,
table,
map,
embed, or
hide.
chart: This will display the chart/graph in the specified tab.
table: This will display the data table in the specified tab.
map: This will display the map in the specified tab, so long as the other requirements for displaying a map are met (such as the data_show_map setting and a GeoCode data column).
embed: This will display embedded content in the specified tab, so long as the other requirements for displaying embedded content are met (such as the embedded_feature_url or embedded_feature_html settings).
hide: This will hide the specified tab altogether.
The default settings, if omitted, are the following:
indicator_tabs:
  tab_1: chart
  tab_2: table
  tab_3: map
  tab_4: embed
But for example, if you would like your indicators to start with the table selected, you could do this:
indicator_tabs:
  tab_1: table
  tab_2: chart
  tab_3: map
  tab_4: embed
Or if you would like your indicators to only have tables and maps, you could do this:
indicator_tabs:
  tab_1: table
  tab_2: map
  tab_3: hide
  tab_4: hide
languages¶
Required: This setting controls the languages to be used on the site. This should be a list of language codes, and the first is assumed to be the default.
languages:
  - es
  - en
languages_public¶
Optional: This setting can be used if you are not happy with any of the standard language codes. For example, if the standard code for a language is
xyz but you would prefer that it show up in your URLs as
abc, then you could do the following:
languages_public:
  - language: xyz
    language_public: abc
logos¶
Optional: Normally Open SDG uses a logo at
assets/img/SDG_logo.png, with the alt text of "Sustainable Development Goals - 17 Goals to Transform our World". However you can use this setting to take full control of the logo and alt text:
logos:
  - src: assets/img/my-other-image-file.png
    alt: My other alt text
You can also specify multiple logos, one per language:
logos:
  - language: en
    src: assets/img/en/logo.png
    alt: my alt text
  - language: es
    src: assets/img/es/logo.png
    alt: mi texto alternativo
metadata_edit_url¶
Required: This setting controls the URL of the "Edit Metadata" link that appears on the staging site's indicator pages. It should be a full URL. Note that you can include
[id] in the URL, and it will be dynamically replaced with the indicator's id (dash-delimited).
metadata_edit_url:[id].md
metadata_tabs¶
Optional: This setting can control the metadata tabs which appear on the indicator pages. This is directly tied to the "schema" of your data repository (ie, the
_prose.yml file). The "scope" in each object must correspond to the "scope" of the fields in that schema file. The following configuration is assumed if this setting is omitted:
metadata_tabs:
  - scope: national
    title: indicator.national_metadata
    description: indicator.national_metadata_blurb
  - scope: global
    title: indicator.global_metadata
    description: indicator.global_metadata_blurb
  - scope: sources
    title: indicator.sources
    description: ''
About the "Sources" tab:
While the "scopes" above, such as "national" and "global", are arbitrary, the "sources" scope is special. The "Sources" tab will only display if the scope under
metadata_tabs is specifically
sources.
Required: This setting controls the main navigation menu for the platform. It should contain a list of menu items, each containing a
path and a translation key.
menu:
  - path: reporting-status/
    translation_key: menu.reporting_status
  - path: about/
    translation_key: menu.about
  - path: faq/
    translation_key: menu.faq
Menu items can also be turned into dropdowns by putting additional menu items under a
dropdown setting. For example, this would move "about/" and "faq/" under a "More information" dropdown:
menu:
  - path: reporting-status/
    translation_key: menu.reporting_status
  - translation_key: More information
    dropdown:
      - path: faq/
        translation_key: menu.faq
      - path: about/
        translation_key: menu.about
news¶
Optional: This setting can be used to control the behavior of the
news and
post layouts. The available settings are:
category_links: Whether you would like the categories of posts to generate links to dedicated category pages. Default is true, but set to false to disable category links.
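For example, to disable the category links:
news:
  category_links: false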
non_global_metadata¶
Optional: This setting can be used to control the text of the tab containing non-global metadata. The default text is "National Metadata", but if you are implementing a sub-national platform, you could use "Local Metadata", or similar. Note that using a translation key is recommended for better multilingual support.
non_global_metadata: indicator.national_metadata
NOTE: This approach is deprecated. It is now possible to have complete control over all the metadata tabs using the
metadata_tabs configuration setting (see above).
plugins¶
Required: This is a general Jekyll setting, but it is mentioned here to indicate the required plugins. At a minimum you should include the following:
plugins:
  - jekyll-remote-theme
  - jekyll-open-sdg-plugins
progress_status¶
Optional: This setting controls certain aspects of the progress status functionality. The available settings are:
status_heading: Controls the heading that describes the progress status, whenever it appears.
status_types: A list of progress status types to use. Each item should have these settings:
value: The value of the status type, as it is set in the indicator configuration (eg, 'target_achieved').
label: The human-readable label for the status type. Can be a translation key (eg, 'status.target_achieved').
image: The internal path to the image to use (if any) for this progress status.
alt: An alt tag for the image above.
Here is an example of using these settings:
progress_status:
  status_heading: heading goes here
  status_types:
    - value: not_available
      label: status.progress_not_available
      image: assets/img/progress/not-available.png
      alt: status.progress_not_available
    - value: target_achieved
      label: status.progress_target_achieved
      image: assets/img/progress/target-achieved.png
      alt: status.progress_target_achieved
As always, for multilingual support, the label/alt/heading settings can refer to translation keys.
For more information on how to use these status types, see the indicator configuration setting for
progress_status.
remote_data_prefix¶
Required: This setting tells the platform where to find your hosted data repository.
remote_data_prefix:
Note that this is typically a remote URL, but it also works as a relative path to a local folder on disk. For example:
remote_data_prefix: my-data-build-folder
remote_theme¶
Required: This is not specific to Open SDG, but it is very important to always use a specific version of Open SDG (as opposed to using the latest version). For example, to use version 0.8.0 of the platform, use the following:
remote_theme: open-sdg/[email protected]
This is far safer than always pulling the latest version, as in the following example (not recommended):
remote_theme: open-sdg/open-sdg
reporting_status¶
Optional: This setting controls certain aspects of the reporting status page. The available settings are:
title: Controls the title of the reporting status page. Defaults to "Reporting status".
description: Controls the introductory text under the title. If omitted there will be no introductory text.
disaggregation_tabs: Whether or not to display disaggregation status tabs. If omitted, this defaults to false. If you enable this setting, you should also use "expected_disaggregations" in your indicator configuration, in order to provide the disaggregation status report with useful metrics. For more information see expected_disaggregations.
status_types: A list of reporting status types to use. Each item should have these settings:
value: The value of the status type, as it is set in the indicator configuration (eg, 'complete').
label: The human-readable label for the status type. Can be a translation key (eg, 'status.reported_online').
hide_on_goal_pages: Optional: Whether to hide this status type on goal pages. Useful for the most commonly occurring type.
Here is an example of using these settings:
reporting_status:
  title: title goes here
  description: description goes here
  disaggregation_tabs: true
  status_types:
    - value: notstarted
      label: status.exploring_data_sources
      hide_on_goal_pages: false
    - value: complete
      label: status.reported_online
      hide_on_goal_pages: true
    - value: notapplicable
      label: status.not_applicable
      hide_on_goal_pages: false
As always, for multilingual support, the title/description settings can refer to translation keys, and description can include Markdown.
repository_url_data¶
Optional: This setting specifies the URL of the data repository, which is used in other settings. Currently this -- if available -- will be used as a prefix for the "repository_link" options in
indicator_config_form,
indicator_metadata_form, and
indicator_data_form.
Here is an example of using this setting:
repository_url_data:
repository_url_site¶
Optional: This setting specifies the URL of the site repository, which is used in other settings. Currently this -- if available -- will be used as a prefix for the "repository_link" option in
site_config_form.
Here is an example of using this setting:
repository_url_site:
search_index_boost¶
Optional: This setting can be used to give a "boost" to one or more fields in the search index. The boost number should be a positive integer. The higher the number, the more "relevant" that field will be in search results. If omitted, the following defaults will be used:
search_index_boost:
  - field: title
    boost: 10
The following example shows additional fields that can be boosted:
search_index_boost:
  # The title of the indicator, goal, or page.
  - field: title
    boost: 10
  # The content of the indicator, goal, or page.
  - field: content
    boost: 1
  # The id number of the indicator or goal.
  - field: id
    boost: 5
Additionally, any fields set in the
search_index_extra_fields setting may also be boosted. For example:
search_index_boost:
  # Assumes that "national_agency" was set in "search_index_extra_fields".
  - field: national_agency
    boost: 5
search_index_extra_fields¶
Optional: This setting can be used to "index" additional metadata fields in your indicators, for the purposes of affecting the site-wide search. For example, if you have a metadata field called
national_agency and you would like the sitewide search to include that field, add it in a list here, like so:
search_index_extra_fields:
  - national_agency
Another example of how
search_index_extra_fields could be used, is to configure search terms for indicator pages. For example, if you wanted indicator 3.a.1 to show as a result of 'smoking' or 'smokers' being searched for, you could set an indicator configuration field called
data_keywords and then "index" that field, like so:
search_index_extra_fields:
  - data_keywords
Then in your indicator configuration you would have:
data_keywords: smoking, smokers
series_toggle¶
Optional: This setting enables the special treatment of the "Series" column in the data. If set to
true, when an indicator's data includes a "Series" column, it will be displayed above "Units" as radio buttons. If omitted or
false, the normal behavior is that the "Series" column will display below "Units" as checkboxes. Example:
series_toggle: true
site_config_form¶
Optional: This setting controls the behavior of the site config form. The available settings are the same as in the
indicator_config_form described above.
The default location for the site configuration page is
/config.
validate_indicator_config¶
Optional: This setting, if true, will run a validation of each indicator's configuration during the site build. This defaults to
false.
validate_site_config¶
Optional: This setting, if true, will run a validation of the site configuration during the site build. This defaults to
false.
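For example, to turn on both validations:
validate_indicator_config: true
validate_site_config: true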
x_axis_label¶
Optional: This setting, if provided, will display as a label beneath the X axis on charts. Note that this is also available on the configuration of individual indicators, where it will override this setting. | https://open-sdg.readthedocs.io/en/latest/configuration/ | 2022-01-17T00:16:41 | CC-MAIN-2022-05 | 1642320300253.51 | [] | open-sdg.readthedocs.io |
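For example (the label text here is illustrative; a translation key can be used instead for multilingual sites):
x_axis_label: Year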
On Wednesday, October 25, 2017, a new cloud version of Developer Services Portal is ready for you to apply. See How do I apply Apigee updates to my developer portal in the public cloud?
New Features
The following section describes the new features in this release.
Dev environments for Pantheon hosted sites redirected from apigee.com to apigee.io
Dev environments for Pantheon hosted sites are now being redirected from
apigee.com to
apigee.io.
Bugs fixed
The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users. | https://docs.apigee.com/release/notes/17102500-apigee-developer-services-portal-release-notes?hl=zh-Tw&skip_cache=true | 2022-01-17T01:05:33 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.apigee.com |
Deployment policies and settings
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment
policies (All at once, Rolling, Rolling with additional batch,
Immutable, and Traffic splitting) and options that let you configure batch size and health check behavior during
deployments. By default, your environment uses all-at-once deployments. If you created the environment with the EB CLI and it's a scalable environment (you
didn't specify the
--single option), it uses rolling deployments.
With rolling deployments, Elastic Beanstalk splits the environment's Amazon EC2 instances into batches and deploys the new version of the application to one batch at a time. It leaves the rest of the instances in the environment running the old version of the application. During a rolling deployment, some instances serve requests with the old version of the application, while instances in completed batches serve other requests with the new version. For details, see How rolling deployments work.
To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. This option is known as a rolling deployment with an additional batch.
Traffic-splitting deployments let you perform canary testing as part of your application deployment. In a traffic-splitting deployment, Elastic Beanstalk launches a full set of new instances just like during an immutable deployment. It then forwards a specified percentage of incoming client traffic to the new application version for a specified evaluation period. If the new instances stay healthy, Elastic Beanstalk forwards all traffic to them and terminates the old ones. If the new instances don't pass health checks, or if you choose to abort the deployment, Elastic Beanstalk moves traffic back to the old instances and terminates the new ones. There's never any service interruption. For details, see How traffic-splitting deployments work.
Some policies replace all instances during the deployment or update. This causes all accumulated Amazon EC2 burst balances to be lost. It happens in the following cases:
Managed platform updates with instance replacement enabled
Immutable updates
Deployments with immutable updates or traffic splitting enabled
Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.
In the navigation pane, choose Environments, and then choose the name of your environment from the list.
Note
If you have many environments, use the search bar to filter the environment list.
In the navigation pane, choose Configuration.
In the Rolling updates and deployments configuration category, choose Edit.
In the Application Deployments section, choose a Deployment policy, batch settings, and health check options.
Choose Apply.
The Application deployments section of the Rolling updates and deployments page provides options for each of the deployment policies described earlier. For example:
Traffic splitting – Deploy the new version to a fresh group of instances and temporarily split incoming client traffic between the existing application version and the new one.
For the Rolling and Rolling with additional batch deployment policies, you can configure the batch size, either as a percentage of the total number of instances or as a fixed number of instances.
For the Traffic splitting deployment policy you can configure the following:
Traffic split – The initial percentage of incoming client traffic that Elastic Beanstalk shifts to environment instances running the new application version you're deploying.
Traffic splitting evaluation time – The time period, in minutes, that Elastic Beanstalk waits after an initial healthy deployment before proceeding to shift all incoming client traffic to the new application version that you're deploying.
How traffic-splitting deployments work
Traffic-splitting deployments allow you to perform canary testing. You direct some incoming client traffic to your new application version to verify the application's health before committing to the new version and directing all traffic to it.
During a traffic-splitting deployment, Elastic Beanstalk creates a new set of instances in a separate temporary Auto Scaling group. Elastic Beanstalk then instructs the load balancer to direct a certain percentage of your environment's incoming traffic to the new instances. Then, for a configured amount of time, Elastic Beanstalk tracks the health of the new set of instances. If all is well, Elastic Beanstalk shifts remaining traffic to the new instances and attaches them to the environment's original Auto Scaling group, replacing the old instances. Then Elastic Beanstalk cleans up—terminates the old instances and removes the temporary Auto Scaling group.
The environment's capacity doesn't change during a traffic-splitting deployment. Elastic Beanstalk launches the same number of instances in the temporary Auto Scaling group as there are in the original Auto Scaling group at the time the deployment starts. It then maintains a constant number of instances in both Auto Scaling groups for the deployment duration. Take this fact into account when configuring the environment's traffic splitting evaluation time.
Rolling back the deployment to the previous application version is quick and doesn't impact service to client traffic. If the new instances don't pass health checks, or if you choose to abort the deployment, Elastic Beanstalk moves traffic back to the old instances and terminates the new ones. You can abort any deployment by using the environment overview page in the Elastic Beanstalk console, and choosing Abort current operation in Environment actions. You can also call the AbortEnvironmentUpdate API or the equivalent AWS CLI command.
Traffic-splitting deployments require an Application Load Balancer. Elastic Beanstalk uses this load balancer type by default when you create your environment using the Elastic Beanstalk console or the EB CLI.
Deployment option namespaces
You can use the configuration options in the aws:elasticbeanstalk:command namespace to configure your deployments. If you choose the traffic-splitting policy, additional options for this policy are available in the aws:elasticbeanstalk:trafficsplitting namespace.
Use the
DeploymentPolicy option to set the deployment type. The following values are supported: AllAtOnce, Rolling, RollingWithAdditionalBatch, Immutable, and TrafficSplitting.
When you enable rolling deployments, set the BatchSize and BatchSizeType options to configure the size of each batch. For example, to deploy 25 percent of all instances in each batch, specify the following options and values.
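A configuration along these lines would do that, following the same .ebextensions pattern as the traffic-splitting example below (the option names come from the aws:elasticbeanstalk:command namespace; treat this as a sketch):
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling
    BatchSizeType: Percentage
    BatchSize: 25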
To perform traffic-splitting deployments, forwarding 15 percent of client traffic to the new application version and evaluating health for 10 minutes, specify the following options and values.
Example .ebextensions/traffic-splitting.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: TrafficSplitting
  aws:elasticbeanstalk:trafficsplitting:
    NewVersionPercent: "15"
    EvaluationTime: "10"
The EB CLI and Elastic Beanstalk console apply recommended values for the preceding options. You must remove these settings if you want to use configuration files to configure the same. See Recommended values for details. | https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html | 2022-01-17T02:37:08 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['images/environment-cfg-rollingdeployments.png',
'Elastic Beanstalk application deployment configuration page'],
dtype=object)
array(['images/environment-cfg-healthchecks.png',
'Elastic Beanstalk application deployments configuration page'],
dtype=object) ] | docs.aws.amazon.com |
Transfer CFT 3.6 Users Guide
Transfer serialization
Requester mode
If the number of outgoing connections (CNXOUT) is 1 for a partner, then all transfers with the same level of priority are activated in order of their submitted request. This function is referred to as transfer serialization. However, a failed transfer that has an H or K state does not block activation of subsequent requests for this same partner.
Amplify API Management Save PDF Selected topic Selected topic and subtopics All content Work with a UDDI registry Connect to a UDDI registry from Policy Studio to retrieve or publish WSDL files. 32 minute read Connect to a UDDI registry This section explains how to configure a connection to a UDDI registry in the Registry Connection Details dialog. It explains how to configure connections to UDDI v2 and UDDI v3 registries, and how to secure a connection over SSL. Configure a registry connection UDDI v3, the Inquiry URL, Publish URL, and Security URL specify the URLs of the WSDL for the inquiry, publishing, and security web services that the registry exposes. These fields can use the same URL if the WSDL for each service is at the same URL. For example, a WSDL file at can contain three URLs: These are the service endpoint URLs: Type This optional field applies to UDDI v2 only. The only supported authentication type is UDDI_GET_AUTHTOKEN. Username Enter the user name required to authenticate to the registry, if required. Password Enter the password for this user, if required. The user name and password apply to UDDI v2 and v3. These are generally required for publishing, but depend on the configuration on the registry side. HTTP Proxy: The HTTP proxy settings apply to UDDI v2 and v3: Proxy Host If the UDDI registry location entered above requires a connection to be made through an HTTP proxy, enter the host name of the proxy. Proxy Port If a proxy is required, enter the port on which the proxy server is listening. Username If the proxy has been configured to only accept authenticated requests, Policy Studio sends this user name and password to the proxy using HTTP Basic authentication. Password Enter the password to use with the user name specified in the field above. HTTPS Proxy: The HTTPS proxy settings apply to UDDI v2 and v3: SSL Proxy Host If the Inquiry URL or Publish URL uses the HTTPS protocol, the SSL proxy host entered is used instead of the HTTP proxy entered above. In this case, the HTTP proxy settings are not used. Proxy Port Enter the port that the SSL proxy is listening on. Secure a connection to a UDDI registry You. Configure Policy Studio to trust a registry certificate For an SSL connection, you must configure the registry server certificate as a trusted certificate. Assuming mutual authentication is not required, the simplest way to configure an SSL connection between Policy Studio and UDDI registry is to add the registry certificate to the Policy Studio default truststore (the cacerts file). You can do this by performing the following steps in Policy Studio: Select the Environment Configuration > window. Click Browse next to the Keystore field. Browse to the following file: INSTALL_DIR/policystudio/jre/lib/security/cacerts Click Open, and enter the Keystore password. Click Add to Keystore. Browse to the registry SSL certificate imported earlier, select it, and click OK. Restart Policy Studio. You should now be able to connect to the registry over SSL. Configure mutual SSL authentication If mutual SSL authentication is required (if Policy Studio must authenticate to the registry), Policy Studio must have an SSL private key and certificate. In this case, you should create a keystore containing the Policy Studio key and certificate. You must configure Policy Studio to load this file. 
For example, edit the INSTALL_DIR/policystudio/policystudio.ini file, and add the following arguments: -Djavax.net.ssl.keyStore=/home/axway/osr-client.jks -Djavax.net.ssl.keyStorePassword=changeit This example shows an osr-client.jks keystore file used with Oracle Service Registry (OSR), which is the UDDI registry provided by Oracle. Note You can also use Policy Studio to create a new keystore (.jks) file. Click New keystore instead of browsing to the cacerts file as described earlier. Retrieve WSDL files from a UDDI registry Policy Studio can retrieve a WSDL file from the file system, from a URL, or from a UDDI registry. This section explains how to retrieve a WSDL file. Select or add a UDDI registry. You can select one of the following categories to search by: wsdlSpec: Search for tModels classified as wsdlSpec (uddi-org:typescategoryBags found at the businessEntity level and in all contained or referenced businessServices. Publish WSDL files to a UDDI registry When you have registered a WSDL file in the web service repository, you can use the Publish WSDL wizard to publish the WSDL file to a UDDI registry. You can also use the Find WSDL wizard to search for the selected WSDL file in a UDDI registry. Find WSDL files You can search a UDDI registry to determine if a web service is already published in the registry. To search for a selected WSDL file in a specified UDDI registry, perform the following steps: In the Policy Studio tree, expand the APIs > Web Service Repository node. Right-click a WSDL node and select Find in UDDI Registry to launch the Find WSDL wizard. In the Find WSDL dialog, select a UDDI registry from the list. You can add or edit a registry connection using the buttons provided. You can select an optional language Locale from the list. The default is No Locale. Click Next. The WSDL Found in UDDI Registry window displays the result of the search in a tree. The Node Counts field shows the total numbers of each UDDI entity type returned from the search (businessEntity, businessService, bindingTemplate, and tModel). You can right-click to edit a UDDI entity node in the tree, if necessary (for example, add a description, add a category or identifier node, or delete a duplicate node). Click the Refresh button to run the search again. Click Finish. The Find WSDL wizard provides is a quick and easy way of finding a selected WSDL file published in a UDDI registry. For more fine-grained ways of searching a UDDI registry (for example, for specific WSDL or UDDI entities), see Retrieve WSDL files from a UDDI registry. Publish WSDL files To publish a WSDL file registered in the Web Service Repository to a UDDI registry, perform the following steps: Expand the API > Web Service Repository tree node. Right-click a WSDL node and select Publish WSDL to UDDI Registry to launch the Publish WSDL Wizard. Perform the steps in the wizard as described in the next sections. Enter virtualized service address and WSDL URL for publishing in UDDI registry When you register a WSDL file in the web service repository, API Gateway exposes a virtualized version of the web service. The host and port for the web service are changed dynamically to point to the machine running API Gateway. The client can then retrieve the WSDL for the virtualized web service from API Gateway, without knowing its real location. This window enables you to optionally override the service address locations in the WSDL file with the virtualized addresses exposed by API Gateway. 
You can also override the WSDL URL published to the UDDI registry. Complete the following fields: Mapping of Service Addresses to Virtualized Service Addresses: You can enter multiple virtual service address mappings for each service address specified in the selected WSDL file. If you do not enter a mapping, the original address location in the WSDL file is published to the UDDI registry. If one or more mappings are provided, corresponding UDDI bindingTemplates are published in the UDDI registry, each with a different access point (virtual service address). This enables you to publish the access points of a service when it is exposed on different ports/schemes using API Gateway. When you launch the wizard, the mapping table is populated with a row for each wsdl:service, wsdl:port, soap:address, soap12:address, or http:address in the selected WSDL file. To modify an existing entry, select a row in the table, and click Edit. Alternatively, click Add to add an entry. In the Virtualize Service Address dialog, enter the virtualized service address. For example, if API Gateway is running on a machine named roadrunner, the new URL on which the web service is available to clients is:. WSDL URL: You can enter a WSDL URL to be published to the UDDI registry in the corresponding tModel overviewURL fields. If you do not enter a URL, the WSDL URL in the Original WSDL URL field is used. For example, an endpoint service is at. Assume this service is virtualized in API Gateway and exposed at, where HOST is the machine on which API Gateway is running. The URL is entered as the virtual service address, and is entered as the WSDL URL to publish. Note If incorrect URLs are published, you can edit these in the UDDI tree in later steps in this wizard, or when browsing the registry. Click Next when finished. View WSDL to UDDI mapping result You can use this window to view the unpublished mapping of the WSDL file to a UDDI registry structure. You can also edit a specific mapping in the tree view. This window includes the following fields: Mapping of WSDL to a UDDI Registry Structure: The unpublished mappings from the WSDL file to the UDDI registry are displayed in the table. For example, this includes the relevant businessService, bindingTemplate, tModel, Identifier, Category mappings. You can select a tree node to display its values in the table below. You can optionally edit the values for a specific mapping in the table (for example, update a value, or add a key or description for the selected UDDI entity). You can also right-click a tree node to edit it (for example, add a description, add a category or identifier node, or delete a duplicate node). Retrieve service address from WSDL instead of bindingTemplate access point: When selected, this ensures that the bindingTemplate access point does not contain the service port address, and is set to WSDL instead. This means that you must retrieve the WSDL to get the service access point. When selected, the bindingTemplate contains an additional tModelInstanceInfo that points to the uddi:uddi.org.wsdl:address tModel. This option is not selected by default. Include WS-Policy as: When selected, you can choose one of the following options to specify how WS-Policy statements in the WSDL file are included in the registry: Remote Policy Expressions: Each WS-Policy URL in the WSDL that is associated with a mapped UDDI entity is accessed remotely. 
For example, a businessService is categorized with the uddi:w3.org:ws-policy:v1.5:attachment:remotepolicyreference tModel where the keyValue holds the remote WS-Policy URL. This is the default option. Reusable Policy Expressions: Each WS-Policy URL in the WSDL that is associated with a mapped UDDI entity has a separate tModel published for it. Other UDDI entities (for example, businessService) can then refer to these tModels. These are reusable because UDDI entities published in the future can also use these tModels. You can do this in Select a duplicate publishing approach, by selecting the Reuse duplicate tModels option. Click Next when finished. Select a registry for publishing Use this window to select a UDDI registry in which to publish the WSDL to UDDI mapping. Complete the following fields: Select Registry: Select an existing UDDI registry to browse for WSDL files from the Registry drop-down list. To configure the location of a new UDDI registry, click Add. Similarly, to edit an existing UDDI registry location, click Edit. Select Locale: You can select an optional language locale from this list. The default is No Locale. Click Next when finished. Select a duplicate publishing approach This window is displayed only if mapped WSDL entities already exist in the UDDI registry. Otherwise, the wizard skips to the next step. This window includes the following fields: Select Duplicate Mappings: The Mapped WSDL to publish pane on the left displays the unpublished WSDL mappings from an earlier step. The Duplicates for WSDL mappings in UDDI registry pane on the right displays the nodes already published in the registry. The Node List at the bottom right shows a breakdown of the duplicate nodes. Edit Duplicate Mappings: You can eliminate duplicate mappings by right-clicking a tree node in the right or left pane, and selecting edit to update a specific mapping in the dialog. Select the Refresh button on the right to run the search again, and view the updated Node List. Alternatively, you can configure the options in the next field. Select Publishing Approach for Duplicate Entries: Select one of the following options: Reuse duplicate tModels: Publishes the selected entries from the tree on the left, and reuses the selected duplicate entries in the tree on the right. This is the default option. Some or all duplicate tModels (for example, for portType, binding, and reusable WS-Policy expressions) that already exist in the registry can be reused. This means that a new businessService that points to existing tModels is published. Any entries selected on the left are published, and any referred to tModels on the left now point to selected duplicate tModels on the right. By default, this option selects all businessServices on the left, and all duplicate tModels on the right. If there is more than one duplicate tModels, only the first is selected. Overwrite duplicates: Publishes the selected entries from the tree on the left, and overwrites the selected duplicate entries in the tree on the right. When a UDDI entity is overwritten, its UUID key stays the same, but all the data associated with it is overwritten. This is not just a transfer of additions or differences. You can also overwrite some duplicates and create some new entries. By default, this option selects all businessServices and tModels on the left and all duplicate businessServices and tModels on the right. If there is more than one duplicate, only the first is selected. 
The default overwrites all selected duplicates and does not create any new UDDI entries, unless there is a new referred to tModel (for example, for a reusable WS-Policy expression). Ignore duplicates: Publishes the selected entries from the tree on the left, and ignores all duplicates. You can proceed to publish the mapped WSDL to UDDI data. New UDDI entries are created for each item that is selected in the tree on the left. Click Next when finished. Note If you select duplicate businessServices in the tree, and select Overwrite duplicates, the wizard skips to Publish WSDL when you click Next. Create or search for business Use this window to specify a businessEntity for the web service. You can create a new businessEntity or search for an existing one in the UDDI registry. Complete the following fields: Create a new businessEntity: This is the default option. Enter a Name and Description for the businessEntity, and click Publish. Search for an existing businessEntity: To search for an existing businessEntity name, perform the following steps: Select the Search for an existing businessEntity in the UDDI registry option. In the Search tab, ensure the Name Search option is selected. Enter a Name option (for example, Acme Corporation). Alternatively, you can select the Advanced Search option to search by different criteria such as Keys, Categories, or tModels. You can also select a range of search options on the Advanced tab (for example, Exact match, Case sensitive, or Service subset). The Node Counts field shows the total numbers of each UDDI entity type returned from the search (businessEntity, businessService, bindingTemplate,and tModel). Click Next when finished. Publish WSDL Use this to publish the WSDL to the UDDI registry. Selected businessEntity for Publishing: This field displays the name and tModel key of the businessEntity to be published. Click the Publish WSDL button on the right. Published WSDL: This field displays the tree of the UDDI mapping for the WSDL file. You can right-click to edit or delete any nodes in the tree if necessary, and click Refresh to run the search again. Click Publish WSDL to publish your updates. Click Finish. | https://docs.axway.com/bundle/axway-open-docs/page/docs/apim_policydev/apigw_web_services/general_uddi/index.html | 2022-01-17T00:13:50 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.axway.com |
Configure Your SSO Provider (Optional)
Deploy True Passwordless MFA across the enterprise with your favorite Single Sign-on Solution.
Optional Step
Configuring your SSO at this stage is optional.
If you'd like to start by testing Desktop MFA only first, you may skip this section and come back to it later. For admins interested in a team pilot with Desktop MFA, proceed to inviting your teammates and come back to SSO integration at a later time.
Configuring Single Sign-on Solutions
Bring together True Passwordless Security with your Single Sign-on (SSO) Solutions to get all the benefits of security & productivity across your identity & access management infrastructure.
HYPR connects seamlessly with industry leading Single Sign-On (SSO) solutions fully supporting SAML, OIDC/OAUTH, and more.
HYPR includes plugins and guides to integrate with industry leading providers. You can follow the instructions below and be sure to contact your HYPR Team Expert if you have any questions.
Next Steps
Target frameworks in SDK-style projects
When you target a framework in an app or library, you're specifying the set of APIs that you'd like to make available to the app or library. You specify the target framework in your project file using a target framework moniker (TFM). For example, a project that targets Xamarin.iOS (xamarin.ios10) has access to Xamarin-provided iOS API wrappers for iOS 10, and an app that targets Universal Windows Platform (UWP,
uap10.0) has access to APIs that compile for devices that run Windows 10.
For some target frameworks, such as .NET Framework, the APIs are defined by the assemblies that the framework installs on a system and may include application framework APIs (for example, ASP.NET).
For package-based target frameworks (for example, .NET 5+, .NET Core, and .NET Standard), the APIs are defined by the NuGet packages included in the app or library.
Latest versions
The following table defines the most common target frameworks, how they're referenced, and which version of .NET Standard they implement. These target framework versions are the latest stable versions. Prerelease versions aren't shown. A target framework moniker (TFM) is a standardized token format for specifying the target framework of a .NET app or library.
Supported target frameworks
A target framework is typically referenced by a TFM. The following table shows the target frameworks supported by the .NET SDK and the NuGet client. Equivalents are shown within brackets. For example,
win81 is an equivalent TFM to
netcore451.
* .NET 5 and later TFMs include some operating system-specific variations. For more information, see the following section, .NET 5+ OS-specific TFMs.
.NET 5+ OS-specific TFMs
The
net5.0 and
net6.0 TFMs include technologies that work across different platforms. Specifying an OS-specific TFM makes APIs that are specific to an operating system available to your app, for example, Windows Forms or iOS bindings. OS-specific TFMs also inherit every API available to their base TFM, for example, the
net5.0 TFM.
.NET 5 introduced the
net5.0-windows OS-specific TFM, which includes Windows-specific bindings for WinForms, WPF, and UWP APIs. .NET 6 introduces further OS-specific TFMs.
The following table shows the compatibility of the .NET 5+ TFMs.
To make your app portable across different platforms but still have access to OS-specific APIs, you can target multiple OS-specific TFMs and add platform guards around OS-specific API calls using
#if preprocessor directives.
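As a rough sketch, a guard might look like the following (WINDOWS and IOS are among the platform symbols the SDK defines for OS-specific TFMs; the method and messages are illustrative):
public static string GetGreeting()
{
#if WINDOWS
    // Included only when building an OS-specific Windows target such as net6.0-windows.
    return "Hello from Windows";
#elif IOS
    // Included only when building an iOS target such as net6.0-ios.
    return "Hello from iOS";
#else
    // Fallback for the cross-platform base TFM, for example net6.0.
    return "Hello from .NET";
#endif
}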
Suggested targets
Use these guidelines to determine which TFM to use in your app:
Apps that are portable to multiple platforms should target a base TFM, for example,
net5.0. This includes most libraries but also ASP.NET Core and Entity Framework.
Platform-specific libraries should target platform-specific flavors. For example, WinForms and WPF projects should target
net5.0-windowsor
net6.0-windows.
Cross-platform application models (Xamarin Forms, ASP.NET Core) and bridge packs (Xamarin Essentials) should at least target the base TFM, for example,
net6.0, but might also target additional platform-specific flavors to light-up more APIs or features.
OS version in TFMs
You can also specify an optional OS version at the end of an OS-specific TFM, for example,
net6.0-ios15.0. The version indicates which APIs are available to your app or library. It does not control the OS version that your app or library supports at run time. It's used to select the reference assemblies that your project compiles against, and to select assets from NuGet packages. Think of this version as the "platform version" or "OS API version" to disambiguate it from the run-time OS version.
When an OS-specific TFM doesn't specify the platform version explicitly, it has an implied value that can be inferred from the base TFM and platform name. For example, the default platform value for iOS in .NET 6 is
15.0, which means that
net6.0-ios is shorthand for the canonical
net6.0-ios15.0 TFM. The implied platform version for a newer base TFM may be higher, for example, a future
net7.0-ios TFM could map to
net7.0-ios16.0. The shorthand form is intended for use in project files only, and is expanded to the canonical form by the .NET SDK's MSBuild targets before being passed to other tools, such as NuGet.
The .NET SDK is designed to be able to support newly released APIs for an individual platform without a new version of the base TFM. This enables you to access platform-specific functionality without waiting for a major release of .NET. You can gain access to these newly released APIs by incrementing the platform version in the TFM. For example, if the iOS platform added iOS 15.1 APIs in a .NET 6.0.x SDK update, you could access them by using the TFM
net6.0-ios15.1.
Support older OS versions
Although a platform-specific app or library is compiled against APIs from a specific version of that OS, you can make it compatible with earlier OS versions by adding the
SupportedOSPlatformVersion property to your project file. The
SupportedOSPlatformVersion property indicates the minimum OS version required to run your app or library. If you don't explicitly specify this minimum run-time OS version in the project, it defaults to the platform version from the TFM.
For your app to run correctly on an older OS version, it can't call APIs that don't exist on that version of the OS. However, you can add guards around calls to newer APIs so they are only called when running on a version of the OS that supports them. This pattern allows you to design your app or library to support running on older OS versions while taking advantage of newer OS functionality when running on newer OS versions.
The
SupportedOSPlatformVersion value (whether explicit or default) is used by the platform compatibility analyzer, which detects and warns about unguarded calls to newer APIs. It's burned into the project's compiled assembly as an UnsupportedOSPlatformAttribute assembly attribute, so that the platform compatibility analyzer can detect unguarded calls to that assembly's APIs from projects with a lower
SupportedOSPlatformVersion value. On some platforms, the
SupportedOSPlatformVersion value affects platform-specific app packaging and build processes, which is covered in the documentation for those platforms.
Here is an example excerpt of a project file that uses the
TargetFramework and
SupportedOSPlatformVersion MSBuild properties to specify that the app or library has access to iOS 15.0 APIs but supports running on iOS 13.0 and above:
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net6.0-ios15.0</TargetFramework> <SupportedOSPlatformVersion>13.0</SupportedOSPlatformVersion> </PropertyGroup> ... </Project>
How to specify a target framework
Target frameworks are specified in a project file. When a single target framework is specified, use the TargetFramework element. The following console app project file demonstrates how to target .NET 5:
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>net5.0</TargetFramework> </PropertyGroup> </Project>
When you specify multiple target frameworks, you may conditionally reference assemblies for each target framework. In your code, you can conditionally compile against those assemblies by using preprocessor symbols with if-then-else logic.
The following library project targets APIs of .NET Standard (
netstandard1.4) and .NET Framework (
net40 and
net45). Use the plural TargetFrameworks element with multiple target frameworks. The build defines a preprocessor symbol for each target framework: for a symbol that represents a .NET Standard, .NET Core, or .NET 5+ TFM, replace dots and hyphens with an underscore, and change lowercase letters to uppercase (for example, the symbol for
netstandard1.4 is
NETSTANDARD1_4). You can disable generation of these symbols via the
DisableImplicitFrameworkDefines property. For more information about this property, see DisableImplicitFrameworkDefines.
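A minimal project file for the multi-targeting scenario above might look like the following (per-framework package or assembly references, which would normally go in conditional ItemGroup elements, are omitted):
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard1.4;net40;net45</TargetFrameworks>
  </PropertyGroup>
</Project>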
The complete list of preprocessor symbols for .NET target frameworks includes a symbol for each framework version together with cumulative variants, for example NETSTANDARD2_0 alongside NETSTANDARD2_0_OR_GREATER, NETSTANDARD1_1_OR_GREATER, and NETSTANDARD1_0_OR_GREATER.
Deprecated target frameworks
The following target frameworks are deprecated. Packages that target these target frameworks should migrate to the indicated replacements. | https://docs.microsoft.com/en-us/dotnet/standard/frameworks?wt.mc_id=MVP | 2022-01-17T02:56:45 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.microsoft.com |
Setting Up Custom Error Pages
When visitors request pages that the web server cannot find or serve, the server returns a standard HTML page with an error message. You may want to replace these standard pages with your own custom error pages. The error messages that are customized most often are the following:
- 400 Bad File Request. Usually means the syntax used in the URL is incorrect (for example, uppercase letter should be lowercase letter; wrong punctuation marks).
- 401 Unauthorized. Server is looking for some encryption key from the client and is not getting it. Also, wrong password may have been entered.
- 403 Forbidden/Access denied. Similar to 401; a special permission is needed to access the site – a password or username, if it is a registration issue.
- 404 Not Found. Server cannot find the requested file. File has either been moved or deleted, or the wrong URL or document name was entered. This is the most common error.
- 500 Internal Server Error. Could not retrieve the HTML document because of server configuration problems.
- 503 Service Temporarily Unavailable. The site is temporarily unavailable due to maintenance. | https://docs.plesk.com/en-US/onyx/reseller-guide/65246/ | 2022-01-17T01:18:18 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.plesk.com |
Factor in square free
Factorizes a polynomial with square-free factors. That is, given a polynomial f, a square-free factorization is a factorization into powers of square-free factors, f = f1 * f2^2 * ... * fk^k, where each factor fi is square-free.
Syntax
factor_in_square_free(Polynomial)
factor_in_square_free(Polynomial, Ring)
Description
factor_in_square_free(Polynomial): Factorizes a polynomial with square-free factors. The output is a list [f1, f2, ..., fk], following the notation above.
factor_in_square_free(Polynomial, Ring): Factorizes a polynomial over a ring with square-free factors. The output is a list [f1, f2, ..., fk], following the notation above.
Related functions: Factor in square free multiplicity, Factor, Roots
SRP and UGC Essentials
Introduction
If unfamiliar with the storage resource provider (SRP) and its relationship to user-generated content (UGC), visit Community Content Storage and Storage Resource Provider Overview .
This section of the documentation provides some essential information about SRP and UGC.
StorageResourceProvider API .
Utility Method to Access UGC .
Utility Method to Access. | https://docs.adobe.com/content/help/en/experience-manager-64/communities/develop/srp-and-ugc.html | 2020-05-25T02:47:43 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.adobe.com |
Using Table Aliases.
USE AdventureWorks2008R2;
GO
SELECT c.CustomerID, s.Name
FROM Sales.Customer AS c
JOIN Sales.Store AS s
    ON c.CustomerID = s.BusinessEntityID;
If an alias has been assigned to a table, that alias must be used to refer to the table in the rest of the statement; the full table name can no longer be used, as the following example illustrates:
SELECT Sales.Customer.CustomerID, /* Illegal reference to Sales.Customer. */
    s.Name
FROM Sales.Customer AS c
JOIN Sales.Store AS s
    ON c.CustomerID = s.BusinessEntityID;
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms187455(v=sql.105) | 2018-11-13T01:12:43 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
User Management
Setting a User's Password
Account Lockout and Password Policies
Enabling and Disabling a User Account
Creating a UPN Suffix for a Forest
Locating Disabled User Accounts
Viewing a User's Managed Objects
Creating a Large Quantity of Users
Viewing a User's Group Membership
Transferring Group Membership to Another | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd391974%28v%3Dws.10%29 | 2018-11-13T00:48:34 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
When I was in high school, I, like most students had to read William Golding’s Lord of the Flies. I loved it. The characters were memorable. It was thought-provoking, and the action gripping. Golding won the Nobel Prize for Literature in 1983. So when I ran across a copy of his book, The Spire, on sale for a measly dollar I snapped it up. Last weekend I settled down with it expecting an enjoyable read. What a disappointment! I gave up after sixty pages. There was nothing to grab my attention or interest. The protagonist bored me. The plot was a yawn. The collapse of the spire and Dean Jocelin’s faith, while riddled with symbolism, was old news before it happened. I begrudge the dollar.
I put The Spire back in the bookcase and pulled out David Morrell’s, The Shimmer. I had read his trilogy, Brotherhood of the Rose, many years ago and remembered my excitement in the read. In the first chapter–only four plus pages–of The Shimmer, major action of the hair-raising variety grips the reader and continues with brief respites to the end. Unexplainable natural phenomenon, heroism, love reclaimed, and horrific violence kept my attention all the way.
Golding strove for a literary work; Morrell presented a thriller. Very different. Nevertheless, whatever the genre, it should evoke interest.
I’m now writing the very last chapter of Escape From Xanadu. and hoping it will interest readers–given my rant in the last few paragraphs.
For Indie authors, the website, bookfuel.com may be of interest. They seem to give a lot of service for not so much money. Check it out.
One thought on ““Doc’s” Blog #3”
I have not met many people who really liked Lord of the Flies at all, Doc. You’re one of the few. I loved that story. I’ll recommend The Shimmer to my daughter, she has read a couple of Morrell’s. Going to enjoy checking out your blog from now on…and looking forward to the final scenes of Escape from Xanadu. | https://docsanborn.wordpress.com/2015/07/07/docs-blog-3/ | 2018-11-13T00:12:32 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docsanborn.wordpress.com |
Ordbok
As your application grows, configuration can get a bit chaotic, especially if you have multiple versions (local, deployed, staging, etc.) Ordbok brings order to that chaos.
Ordbok abstracts the loading of a configuration from YAML files into a Python dictionary, and also has a specific setup for use with Flask. See TODO for plans to expand this.
Svenska Akademiens ordbok by droemmaskin on deviantART. Provided under Attribution-NonCommercial-ShareAlike 3.0 Unported
Basic Usage
Ordbok is designed to allow users to define a hierarchy of YAML configuration files and specify environments. The default configuration has three tiers:
config.yml,
local_config.yml, and Environmental Variables. The later tiers override the earlier ones, and earlier configurations can explicitly require certain variables to be defined in a later one. This can be particularly useful when you expect, say, certain variables to be specified in the environment on a production server and want to fail hard and explicitly when that variable isn't present. | https://ordbok.readthedocs.io/en/latest/ | 2018-11-13T00:06:14 | CC-MAIN-2018-47 | 1542039741176.4 | [] | ordbok.readthedocs.io |
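As a minimal sketch of that hierarchy (the keys and values below are illustrative, not taken from the Ordbok documentation):
# config.yml -- defaults, checked into the repository
debug: false
database_url: 'postgres://localhost/myapp'

# local_config.yml -- a later tier, kept out of version control; its values override config.yml
debug: true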
Vertex Groups Panel¶
Reference
The Vertex Group panel.
Vertex Groups are maintained within the Object Data Properties Editor, and there in the Vertex Groups panel.
- Active Vertex Group
A list view of the object's vertex groups.
- Lock
- Locks the group from being editable. You can only rename or delete the group.
- Add
+
- Create an empty vertex group.
- Remove
-
- Deletes the active vertex group.
- Specials
- Sort Vertex Groups
- Sorts Vertex Groups alphabetically.
-
-.
-
Vertex Group Panel in Edit or Weight Paint Mode.
When you switch either to Edit Mode or to Weight Paint Mode Vertex weights can be edited. The same operations are available in the 3D Views menu. | https://docs.blender.org/manual/en/dev/modeling/meshes/properties/vertex_groups/vertex_groups.html | 2018-11-13T00:13:06 | CC-MAIN-2018-47 | 1542039741176.4 | [array(['../../../../_images/modeling_meshes_properties_vertex-groups_introduction_panel.png',
'../../../../_images/modeling_meshes_properties_vertex-groups_introduction_panel.png'],
dtype=object)
array(['../../../../_images/modeling_meshes_properties_vertex-groups_vertex-groups_panel-edit.png',
'../../../../_images/modeling_meshes_properties_vertex-groups_vertex-groups_panel-edit.png'],
dtype=object) ] | docs.blender.org |
CQL pushdown filter (deprecated)
Optimize the processing of the data by moving filtering expressions in Pig as close to the data source as possible.
Hadoop is deprecated for use with DataStax Enterprise. DSE Hadoop and BYOH (Bring Your Own Hadoop) are deprecated. Pig is also deprecated and will be removed when Hadoop is removed.
DataStax Enterprise includes a CqlStorage URL option, use_secondary. Setting the option to true optimizes the processing of the data by moving filtering expressions in Pig as close to the data source as possible. To use this capability:
Create an index for the Cassandra table.
For Pig pushdown filtering, the secondary index must have the same name as the column being indexed.
Include the use_secondary option with a value of true in the url format for the storage handler. The option name reflects the term that used to be used for a Cassandra index: secondary index. For example:
newdata = LOAD 'cql://ks/cf_300000_keys_50_cols?use_secondary=true' USING CqlNativeStorage(); | https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/ana/dsehadoop/anaPigCqlPush.html | 2018-11-13T00:39:28 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.datastax.com |
To supply credentials to pull from a private Docker registry, create an archive of your Docker credentials, then add it as a URI in your application definition.
Step 2: Add URI path to app definition
Add the path to the archive file login credentials to your app definition.
"uris": [ "" ]
For example:
{ "id": "/some/name/or/id", "cpus": 1, "mem": 1024, "instances": 1, "container": { "type": "DOCKER", "docker": { "image": "some.docker.host.com/namespace/repo", "network": "HOST" } }, "uris": [ "" ] }
The Docker image will now pull using the security credentials you specified. | https://docs.mesosphere.com/1.9/deploying-services/private-docker-registry/ | 2018-11-13T00:11:15 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.mesosphere.com |
Coupling Flags with Triggers¶
In general, it is best to be explicit about setting or clearing a flag. This makes the code more maintainable and easier to follow and reason about. However, rarely, due to the fact that handlers for a given flag are independent and thus there are no guarantees about the order in which they may execute, it is sometimes necessary to enforce that two flags must be set at the same time or that one must be cleared if the other is set.
As an example of when this might be necessary, consider a charm which provides two config values, one that determines the location from which resources should be fetched, with a default location provided by the charm, and another which indicates that a particular feature be installed and enabled. If the charm is deployed and fetches all of the resources, it might set a flag that indicates that all resources are available and any installation can proceed. However, if both resource location and feature flag config options are changed at the same time, the handlers might be invoked in an order that causes the feature installation to happen before the resource change has been observed, leading to the feature using the wrong resource. This problem is particularly intractable if the layer managing the resource location and readiness options is different than the layer managing the feature option, such as with the apt layer.
Triggers provide a mechanism for a flag to indicate that when a particular flag is set, another specific flag should be either set or cleared. To use a trigger, you simply have to register it, which can be done from inside a handler, or at the top level of your handlers file:
from charms.reactive.flags import register_trigger
from charms.reactive.flags import set_flag
from charms.reactive.decorators import when

register_trigger(when='flag_a', set_flag='flag_b')


@when('flag_b')
def handler():
    do_something()


register_trigger(when='flag_a', clear_flag='flag_c')

set_flag('flag_c')
When a trigger is registered, then as soon as the flag given by
when is
set, the other flag is set or cleared at the same time. Thus, there is no
chance that another handler will run in between.
Keep in mind that since triggers are implicit, they should be used sparingly. Most use cases can be better modeled by explicitly setting and clearing flags.
Example uses of triggers¶
Remove flags immediately when config changes
In the
apt layer, the
install_sources config option specifies which
repositories and ppa’s to use for installing a package, so these need to be
added before installing any package. This is easy to do with flags: you create
a handler that adds the sources and then sets the flag
apt.sources_configured. The handler that installs the packages reacts to
that flag with
@when('apt.sources_configured'). This works perfectly the
first time but what happens if the
install_sources config option gets
changed after they are first configured? Then the
apt.sources_configured
flag needs to be cleared immediately before any new packages are installed.
This is where triggers come in: You create a trigger that unsets the
apt.sources_configured flag when the install_sources config changes.
register_trigger(when='config.changed.install_sources',
                 clear_flag='apt.sources_configured')


@when_not('apt.sources_configured')
def sources_handler():
    configure_sources()
    set_state('apt.sources_configured')


@when_all('apt.needs_update', 'apt.sources_configured')
def update():
    charms.apt.update()
    clear_flag('apt.sources_configured')


@when('apt.queued_installs')
@when_not('apt.needs_update')
def install_queued():
    charms.apt.install_queued()
    clear_flag('apt.queued_installs')


@when_not('apt.queued_installs')
def ensure_package_status():
    charms.apt.ensure_package_status()
Retrieves information about a specified DevEndpoint.
Note
When you create a development endpoint in a virtual private cloud (VPC), AWS Glue returns only a private IP address, and the public IP address field is not populated. When you create a non-VPC development endpoint, AWS Glue returns only a public IP address.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-dev-endpoint --endpoint-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--endpoint-name (string)
Name of the DevEndpoint for which to retrieve information.
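For example (the endpoint name is a placeholder):
aws glue get-dev-endpoint --endpoint-name my-dev-endpoint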
DevEndpoint -> (structure)
A DevEndpoint definition.
EndpointName -> (string)The name of the DevEndpoint.
RoleArn -> (string)The AWS ARN of the IAM role used in this DevEndpoint.
SecurityGroupIds -> (list)
A list of security group identifiers used in this DevEndpoint.
(string)
SubnetId -> (string)The subnet ID for this DevEndpoint.
YarnEndpointAddress -> (string)The YARN endpoint address used by this DevEndpoint.
PrivateAddress -> (string)A private IP address to access the DevEndpoint within a VPC, if the DevEndpoint is created within one. The PrivateAddress field is present only when you create the DevEndpoint within your virtual private cloud (VPC).
ZeppelinRemoteSparkInterpreterPort -> (integer)The Apache Zeppelin port for the remote Apache Spark interpreter.
PublicAddress -> (string)The public IP address used by this DevEndpoint. The PublicAddress field is present only when you create a non-VPC (virtual private cloud) DevEndpoint.
Status -> (string)The current status of this DevEndpoint.
NumberOfNodes -> (integer)The number of AWS Glue Data Processing Units (DPUs) allocated to this DevEndpoint.
AvailabilityZone -> (string)The AWS availability zone where this DevEndpoint is located.
VpcId -> (string)The ID of the virtual private cloud (VPC) used by this DevEndpoint.
ExtraPythonLibsS3Path -> (string)
Path(s) to one or more Python libraries in an S3 bucket that should be loaded in your DevEndpoint. Multiple values must be complete paths separated by a comma.
Please note that only pure Python libraries can currently be used on a DevEndpoint. Libraries that rely on C extensions, such as the pandas Python data analysis library, are not yet supported.
ExtraJarsS3Path -> (string)
Path to one or more Java Jars in an S3 bucket that should be loaded in your DevEndpoint.
Please note that only pure Java/Scala libraries can currently be used on a DevEndpoint.
FailureReason -> (string)The reason for a current failure in this DevEndpoint.
LastUpdateStatus -> (string)The status of the last update.
CreatedTimestamp -> (timestamp)The point in time at which this DevEndpoint was created.
LastModifiedTimestamp -> (timestamp)The point in time at which this DevEndpoint was last modified.
PublicKey -> (string)The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility, as the recommended attribute to use is public keys.
PublicKeys -> (list)
A list of public keys to be used by the DevEndpoints for authentication. The use of this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
Note
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys: call the UpdateDevEndpoint API with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.
(string)
SecurityConfiguration -> (string)The name of the SecurityConfiguration structure to be used with this DevEndpoint. | https://docs.aws.amazon.com/cli/latest/reference/glue/get-dev-endpoint.html | 2018-11-13T00:56:47 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.aws.amazon.com |
Clones a Controller.
Instantiates a Controller object of the specified type and raises it's Controller.AfterConstruction event.
Customizes business class metadata before loading it to the Application Model's BOModel node.
Returns the properties over which the FullTextSearch Action is executed, based on the FilterController.FullTextSearchTargetPropertiesMode property's value.
Sets a specified View for a View Controller. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.SystemModule.FilterController._methods | 2018-11-13T00:27:18 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.devexpress.com |
.
If a non-zero named buffer object was bound to the
GL_ARRAY_BUFFER target (see glBindBuffer) when the desired pointer was previously specified, the
pointer returned is a byte offset into the buffer object's data store.
GL_INVALID_ENUM is generated if
pname is not an accepted value.
GL_INVALID_VALUE is generated if
index is greater than or equal to
GL_MAX_VERTEX_ATTRIBS.
glGetVertexAttrib, glVertexAttribPointer
Copyright © 2003-2005 3Dlabs Inc. Ltd. This material may be distributed subject to the terms and conditions set forth in the Open Publication License, v 1.0, 8 June 1999.. | http://docs.gl/es2/glGetVertexAttribPointerv | 2018-11-13T00:15:52 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.gl |
The Measurement Text Box displays a box in which numeric values can be entered with the ability to select which units the value is displayed in.
Changing the display units once a value has been entered will recalculate the displayed value based on the conversion between the units.
When the form control is applied the name entered will display as the caption for the form control. This can be changed by selecting the Caption property.
A user form control that accepts numeric values and gives a choice of units for the value entered. Automatic conversion between units is supported and mathematical expressions can be used in the control. | http://docs.driveworkspro.com/Topic/ControlMeasurementTextBox | 2018-11-13T01:39:52 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.driveworkspro.com |
Verifying the data migration progress
After the migration has begun, you can do the following to track its progress:
Verify that Redis
master_link_statusis
upin the
INFOcommand on ElastiCache primary node. You can also find this information in the ElastiCache console. Select the cluster and under CloudWatch metrics, observe Master Link Health Status. After the value reaches 1, the data is in sync.
You can check that for the ElastiCache replica has an online state by running the
INFOcommand on your Redis on EC2 instance. Doing this also provides information about replication lag.
Verify low client output buffer by using the CLIENT LIST
Redis command on your Redis on EC2 instance.
After the migration is complete, the ElastiCache cluster shows the status of in-sync. This status means that all data is now replicated. The data is in sync with any new writes coming to the primary node of your Redis instance. | https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Migration-Verify.html | 2021-09-17T05:09:49 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.aws.amazon.com |
. This parameter is deprecated, use
generatorsinstead.
string endDescription (renderable) – the comment sent when the build is finishing. This parameter is deprecated, use
generatorsinstead. This parameter is deprecated, use
generatorsinstead.
verify (boolean) – disable ssl verification for the case you use temporary self signed certificates
debug (boolean) – logs every requests and their response | https://docs.buildbot.net/2.10.1/manual/configuration/reporters/gerrit_verify_status.html | 2021-09-17T03:52:40 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.buildbot.net |
General settings for HS4 may be accessed by clicking Setup and then selecting the General tab.
Click the REGISTER button to edit or change your license ID and password. This is useful when upgrading a trial license or moving from one edition of HS4 to another.
Click the EDIT CONFIG button to manage your system's selected data file.
This setting allows HS4 to calculate sunrise and sunset times for your system. Click the SELECT LOCATION button to select the worldwide location that's closest to your HS4 installation.
Alternately, you may manually enter your longitude and latitude for even greater accuracy..
Click the CLEAR SAVED DATA. | https://docs.homeseer.com/plugins/viewsource/viewpagesrc.action?pageId=17466578 | 2021-09-17T03:11:26 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.homeseer.com |
When a web alarm action is triggered, an information card displaying relevant details about the affected device and cause of the alarm or appears. The alert message embedded within the information card identifies the specific monitors on the device that are currently down.
Click View to launch the Web Alarms interface. This view displays a table of all active web alarms which can be acknowledged, muted, or dismissed either individually or in bulk. Or, you can temporarily Mute or Dismiss the alarm entirely from this card. If multiple alarms have been triggered for the same device you have the option to Dismiss All as well. | https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/42052.htm | 2021-09-17T04:39:26 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.ipswitch.com |
For an overview of the security, please check Features > Security.
This section will explain in detail the implementations of security on ScaffoldHub.
All the security files must be replicated on both frontend and backend.
frontend/src/security
backend/src/security
Frontend security is just for the application not to show what users are not allowed to do and can be easily hacked because frontend files are just HTML, CSS, and Javascript. Real security happens on the backend.
Every action a user can perform on the application has a related permission.
allowedRoles: The user roles that contain that permission.
allowedStorage: The file storage folders that permission can access.
When the user signs-in, he receives a secure JWT token.
The frontend then sends this token on each request via the Authorization header.
Using an authentication middleware, the backend validates this token, fetches the current user, and assigs him to the request.
Each endpoint validates if the user has the permission to access that resource.
Some endpoints, like sign-in and sign-up, do not require the user to be authenticated, and for those cases, it just doesn't validate the presence of the user on the request.
Menus have their permission assigned to them and are only shown if the user contains a role that contains that permission.
Action buttons also have validations to check if the user has permission. | https://docs.scaffoldhub.io/architecture/security | 2021-09-17T04:40:07 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.scaffoldhub.io |
How to enable high availability for your Oracle VMs
Owing to the nature of Oracle workloads and the need to pin them to processor cores, workloads will be automatically moved in the event of a host failure only if you've enabled the high availability (HA) feature on each VM.
Note
If you do not enable the HA feature for your VMs, if a failure occurs, UKCloud will need to manually move the Oracle VM to a new host, at which point you'll be notified to restart your application.
This article describes how to enable HA for your Oracle VMs.
Intended audience
To complete the steps in this guide you must have access to Oracle Enterprise Manager Cloud Control.
Enabling high availability for a VM
To enable HA:
Log in to the Oracle Enterprise Manager Cloud Control console at:
If you need more detailed instructions, see the Getting Started Guide for UKCloud for Oracle Software.
On the Infrastructure -- Oracle VM Cloud Services page, click the Servers icon.
The Servers page lists the servers (VMs) you've requested.
Select the VM for which you want to enable HA.
From the Action menu, select Modify Configuration.
On the Modify Configuration page, select Enable High Availability.
Click OK.
Next steps
For more information about UKCloud for Oracle Software, see:
Getting Started Guide for UKCloud for Oracle Software.
How to build an Oracle virtual machine
UKCloud for Oracle Software FAQs. | https://docs.ukcloud.com/articles/oracle/orcl-how-enable-ha.html | 2021-09-17T03:30:40 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.ukcloud.com |
Instantiating a Cloudera Manager Image
Complete the following steps:
- SLES:
chkconfig cloudera-scm-server on
- RHEL 7.x /CentOS 7.x.x:
systemctl enable cloudera-scm-server.service
- Ubuntu:
update-rc.d -f cloudera-scm-server defaults | https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/installation/topics/cdpdc-instantiating-cm-image.html | 2021-09-17T03:33:41 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.cloudera.com |
The default view for mobile in the Booking centre is to just show what courts are booked, colour coded to highlight why and by whom. For example, courts inviting you to play would be shown in yellow. Your bookings are show in green.
This view is a condensed view so that at least 4 courts can be shown at the same time on landscape mobile view. If you click the button in any court slot a popup will give you more details.
Towards the top right of the screen you'll see a button looking like this:
Click this and the court slot buttons will expand to show full details of the booking as per the popup mentioned above. Some people prefer this at the expense of not seeing so much of the booking schedule on the screen.
ManageMyMatch remembers this setting for you so that next time you invoke the Booking centre it'll show it as you left it. You can toggle the view by repeatedly clicking this button.
Please sign in to leave a comment. | https://docs.managemymatch.com/hc/en-us/articles/360022402933-How-do-I-see-match-players-on-mobile-in-the-court-booking-view- | 2021-09-17T05:01:19 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['/hc/article_attachments/360033127694/Screenshot_2019-04-29_at_18.02.34.png',
'Screenshot_2019-04-29_at_18.02.34.png'], dtype=object) ] | docs.managemymatch.com |
Creating page multivariate experiments
#Create a page multivariate test in One Platform
To create a page experiment, follow the following simple steps:
- Login to your account in One Platform.
- Go to the Experiments Overview page.
- Click Add experiment.
- Enter a Name and select experiment type "Page Multivariate Experiment".
- Enter a hypothesis.
- Select a Primary metric (and optional secondary metrics).
- Select the Page you’d like to test.
- Click Next.
#Design variants
The next step is to define the experiment variants that will validate or invalidate your hypothesis. It’s important to always ensure there is a baseline in place (known as a control) that will act as a point of comparison for any results you receive from your experiment. Below are some rules which guide best practice when designing your variants:
- Only test one element at a time so that you know what change is responsible for the results you are seeing. This helps you to build an understanding of how different elements influence customer behaviour, once you’ve got a good picture you can focus your experiments on the most impactful elements.
- Set up control and treatment. Retain a version of your original creative in the experiment, this is your control. The new variations you are testing are called treatments. These are challenger variants you want to test against your control.
You can create up to five variants to test against your original page experience. To get started, select a control variant from the list. This is an existing variant on the page which will be involved in the experiment.
Once you’ve selected your control, create test variants by changing one thing about your experience. This can be the placement design, layout, location, or offer combinations. Learn more about the types of elements you can test on your page.
#Creating placements using the placement builder
If you’re changing something about the placement design or the types of offers that are allowed to show on your page, you’ll need to create new placements and/or establish different campaign control rules before building your experiment and adding variants.
- To make changes to your placement design, go to the Placements section (Transactions > Placements) and click Add placement. Use the editor panel to make a change (e.g., the button color). When you’re done, click Save. Repeat this process to create additional variations of your placement for the experiment.
- To change your campaign or marketplace control rules, go to the Controls tab (Transactions > Controls), and click Add controls, making sure you link to a placement. Repeat this process to create additional variations of your controls for the experiment.
To start configuring your test variants, click anywhere in the variant section for Variant 1. Enter a Variant name, select one or more placement, and input a Target element (this step is only required for embedded placements). To create multiple variants, click New variant in the variant tab section. Once you’ve finished designing all of your treatment variants, click Next.
#Set traffic allocation
The traffic allocation step is used to define how many of your customers you wish to include in your experiment. Enter a numeric value or use the slider to specify the percentage of all customers visiting the page to include in your experiment cohort.
#Variant distribution
Distribution refers to how the allocated traffic should be split between each of the test variants in the experiment. All variants are weighted equally by default. A customer who is included in your experiment has an equal chance of seeing any of your variants.
#Start your experiment
Click Start experiment when you’re ready for your customers to see the experiment. When the status field says Live, your experiment is running on the page. Most experiments will be live within 5 minutes.
#How long should your experiment run?
We recommend you keep an experiment running for at least two weeks to ensure you collect enough data. Do not end your experiment until at least one variant has a 95% probability to beat baseline. It could take up to or more than 30 days to gain enough data to make a confident decision about the outcome. | https://docs.rokt.com/docs/user-guides/rokt-ecommerce/experiments/page-multivariate-experiments | 2021-09-17T03:29:52 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.rokt.com |
These are the following project wallets and their functions.
Everything labeled with "linear lock" is behind a contract that linearly vests tokens over n months of time. Locked tokens are inaccessible by anyone unless properly vested.
Project & Community allocated funds are to be used as direct rewards towards the community and users of the service. The final goal of this allocation is to become fully circulating and in user hands.
To be used for:
Staking rewards
Liquidity rewards
Anonymity rewards
Airdrop
Feature incentives
Allocation to be used towards liquidity and token circulation related topics such as dealing with exchanges.
To be used for:
Providing liquidity to AMMs (~300,000 tokens)
Providing tokens to exchanges
IFO allocation (-4,000,000 tokens)
Project related wallet to ensure long-term future of Typhoon
To be used for:
Ongoing day-to-day cost
Marketing
Salaries
Hiring
Audits
(initially also to pay early investors and partners)
This contract holds the initial 4,000,000 tokens put aside on IFO date for long-term support towards the project. | https://docs.typhoon.network/tokenomics/project-token-wallets | 2021-09-17T03:31:21 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.typhoon.network |
Deforming a Drawing with the Perspective Tool
The Perspective tool lets you deform a drawing selection and alter its perspective.
- In the Tools toolbar, select the Perspective
tool from the Contour Editor drop-down menu or press Alt + 0.
- In the
Camera orDrawing view, select a drawing to deform.
- Click and drag the different anchor points to deform the shape.
| https://docs.toonboom.com/help/harmony-12/essentials/Content/_CORE/_Workflow/015_CharacterDesign/Drawing_Tools/202_H2_Deforming_a_Drawing_Using_the_Perspective_Tool.html | 2021-09-17T04:04:27 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['../../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/stagePremium.png', None],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/stageAdvanced.png',
'Toon Boom Harmony 12 Stage Advanced Online Documentation'],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/stageEssentials.png',
None], dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/_ICONS/download.png', None],
dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/Stage/Drawing/an_polyline_tool-01.png',
None], dtype=object)
array(['../../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Resources/Images/HAR/Stage/Drawing/an_polyline_tool_03.png',
None], dtype=object)
array(['../../../../Resources/Images/HAR/Stage/Drawing/an_polyline_tool.png',
None], dtype=object) ] | docs.toonboom.com |
Information can be added in a configuration file as shown below:
Under Entity Type, expand Repository, Users and Configuration upon clicking the adjacent arrow keys.
Click the sub-items like Reports, Query Objects, Parameter Objects, Analytical Objects, Dashboards, Dashboard Widgets, etc. to view the list of objects in the Select Entities pane.
You can also select the existing Schedules for Query Objects, Parameter Value Groups for Parameter Objects, Build Options for Analytical Objects etc. by clicking their respective check-boxes. These would be fetched from the Report Server while building the cab file.
Figure 10: Adding objects to configuration file
Under Select entities, you can then either select the parent (say Category) to select all objects under it or individually select the objects you wish to add to the configuration file. The Selection Summary pane enables you to view the number of all the selected objects.
Click Select All Data From Report Server to select all the objects from Intellicus repository.
Select Include Dependent Objects check box to select all the dependent objects for the selected entry type and entity if you are not sure to specifically select them. All the dependent objects will be considered for building the cab file. For example, if you have selected the entry type as Dashboard and unsure of the dashboard related objects, if you select the Include Dependent Objects check box so that the cab file is built using all the objects associated to the selected dashboard.
For each of the selected items, you can select all or specific sub-items:
- Reports
- Query Objects
- Parameter Objects
- Analytical Objects
- Dashboards
- Dashboard Widgets
- Approval Processes
- Report Schedules
- User Roles and Access Rights
- Connections
- Printer settings
- Web Client Properties
- Report Server Properties
- License File
- Templates | https://docs.intellicus.com/documentation/basic-configurations-must-read-19-0/ipackager-manual-19-0/adding-objects-to-the-configuration-file-19-0/ | 2021-09-17T04:27:15 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['https://docs.intellicus.com/wp-content/uploads/2019/14/addingobjects_ipackager.png',
None], dtype=object) ] | docs.intellicus.com |
This page describes the provider specific properties and the details you need to input to create connections to the specific databases. There are some properties that are common to these databases, you can find those information, here.
Figure 4: Web Service
Provide the following properties to create a connection to a web service: | https://docs.intellicus.com/documentation/basic-configurations-must-read-19-0/working-with-database-connections-19-0/web-service/ | 2021-09-17T04:52:17 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['https://docs.intellicus.com/wp-content/uploads/2019/13/WEBServiceconnection.png',
'Web service connection'], dtype=object) ] | docs.intellicus.com |
UK.
Let's get started
First, take a look at our Getting Started Guide to learn the basics, then you can:
Oracle Enterprise Manager overview
Other resources
Still have questions?
Find answers to common questions in our UKCloud for Oracle Software FAQ.
Get in touch
We want to know what you think. If you have an idea for how we could improve any of our services, send an email to [email protected]. | https://docs.ukcloud.com/articles/oracle/orcl-home.html | 2021-09-17T03:18:10 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.ukcloud.com |
Before using Pulsars in a live experience, make the following checks:
- Ensure Pulsars are paired with a Beacon in the volume (see Turn on the Beacon and enable pairing and Pair Pulsars with a Beacon).
- Ensure the Beacon is enabled and that on the System tab, no warning indicators are displayed.
- Turn on the Pulsars and in Evoke, check the following indicators:
- On the System tab, in the Clusters section, Pulsars are displayed.
- The connection status for all clusters is green.
- Battery levels are sufficient for the experience.
If any of the Pulsar batteries is running low (indicated in the System tree by the low battery indicator), replace it before continuing (see Swap clusters).
- No warning indicators are displayed.
- Objects are tracked and labeled in the 3D Scene.
If you need to find out which physical Pulsar is linked to its representation in Evoke, press the Pulsar's power button. It is then selected in Evoke and its status light blinks to indicate that it's currently selected.
Monitor your system
To monitor your system to ensure consistent performance, from Evoke 1.3 and later, you can remotely trigger the collection of data for a system health report while continuing to run a live experience. The report includes raw data for each camera, indicating how much data it has seen and how well it corresponds to the other cameras.
Important
If data is collected over too short a time period, these metrics may be highly variable for some cameras.
It is strongly recommended that before analysis, you collect data over a sustained period of time, and during a standardized type of activity.
These API services are available for system health reporting:
- Start/stop/cancel report collection
- Get latest report
Information on the status of your system is accessible via the API.
An example implementation (system_health_report.py) can be found in the supplied Evoke API sample scripts.
To access the example, extract the files from the .zip file installed in this default location to a suitable folder:
C:\Program Files\Vicon\Evoke1.4\SDK
You can find system_health_report.py in the sample_scripts subfolder.
The information generated for each camera includes:
- Centroid count: The number of 2D centroids seen by the camera
- Labeled centroid count: The number of centroids that correspond to a 3D model point on a tracked object
- Average reprojection error: RMS error between the observed centroid coordinates and the coordinates computed by projecting the 3D model point onto the image
The following reports were generated by running the example. Three reports were generated, showing the system running normally (Report 1), a camera having been moved (Report 2), the issue fixed and the system healthy again (Report 3):
For more information, see Vicon Evoke API & automation. | https://docs.vicon.com/display/Evoke14/Prepare+for+a+live+experience | 2021-09-17T04:36:18 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.vicon.com |
Date: Mon, 23 Mar 2009 19:03:19 +0000 From: Matthew Seaman <[email protected]> To: John Almberg <[email protected]> Cc: [email protected] Subject: Re: utility that scans lan for client? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
This is an OpenPGP/MIME signed message (RFC 2440 and 3156) --------------enig41016E050E8B37EBAA150462 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable John Almberg wrote: > I've tried googling for this, but I guess I don't know the name of a=20 > utility such as this... >=20 > What I'm looking for is a utility that can scan a LAN for attached=20 > clients... i.e., computers that are attached to the LAN. >=20 > I have one box (an appliance that I have no access to), that is on the = > LAN but I don't know what IP address it's using. I'd like to complete m= y=20 > network map, and that is the one empty box on my chart. >=20 > Yes, I am obsessive :-) >=20 > Any help, much appreciated. nmap Matthew --=20 Dr Matthew J Seaman MA, D.Phil. 7 Priory Courtyard Flat 3 PGP: Ramsgate Kent, CT11 9PW --------------enig41016E050E8B37EBAA150462 Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.11 (FreeBSD) Comment: Using GnuPG with Mozilla - iEYEAREIAAYFAknH3PwACgkQ8Mjk52CukIw+rwCdHr8VyZ6iXl8KUa0rJSpMYyLM F8kAniq/8D5MDiSaSbYDAfD0PGw/FCV0 =wrb9 -----END PGP SIGNATURE----- --------------enig41016E050E8B37EBAA150462--
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=287189+0+archive/2009/freebsd-questions/20090329.freebsd-questions | 2021-09-17T03:03:56 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
Major language areas
Arrays, collections, and LINQ
C# and .NET provide many different collection types. Arrays have syntax defined by the language. Generic collection types are listed in the System.Collections.Generic namespace. Specialized collections include System.Span<T> for accessing continuous memory on the stack frame, and System.Memory<T> for accessing continuous memory on the managed heap. All collections, including arrays, Span<T>, and Memory<T> share a unifying principle for iteration. You use the System.Collections.Generic.IEnumerable<T> interface. This unifying principle means that any of the collection types can be used with LINQ queries or other algorithms. You write methods using IEnumerable<T> and those algorithms work with any collection.
Arrays
An array is a data structure that contains a number of variables that are accessed through computed indices. The variables contained in an array, also called the elements of the array, are all of the same type. This type is called the element type of the array.
Array types are reference types, and the declaration of an array variable simply sets aside space for a reference to an array instance. Actual array instances are created dynamically at runtime the contents of the array.
int[] a = new int[10]; for (int i = 0; i < a.Length; i++) { a[i] = i * i; } for (int i = 0; i < a.Length; i++) { Console.WriteLine($"a[{i}] = {a[i]}"); }
This example creates and operates on a single-dimensional array. C# also supports multi-dimensional arrays. The number of dimensions of an array type, also known as the rank of the array type, is one plus the number of commas between the square brackets of the array type. The following example allocates a single-dimensional, a two-dimensional, and a three-dimensional array, respectively. don next };
The length of the array is inferred from the number of expressions between
{ and
}. Array initialization can be shortened further such that the array type doesn't have to be restated.
int[] a = { 1, 2, 3 };
Both of the previous examples are equivalent to the following code:
int[] t = new int[3]; t[0] = 1; t[1] = 2; t[2] = 3; int[] a = t;
The
foreach statement can be used to enumerate the elements of any collection. The following code enumerates the array from the preceding example:
foreach (int item in a) { Console.WriteLine(item); }
The
foreach statement uses the IEnumerable<T> interface, so can work with any collection.
String interpolation
C# string interpolation enables you to format strings by defining expressions whose results are placed in a format string. For example, the following example prints the temperature on a given day from a set of weather data:
Console.WriteLine($"The low and high temperature on {weatherData.Date:MM-DD-YYYY}"); Console.WriteLine($" was {weatherData.LowTemp} and {weatherData.HighTemp}."); // Output (similar to): // The low and high temperature on 08-11-2020 // was 5 and 30.
An interpolated string is declared using the
$ token. String interpolation evaluates the expressions between
{ and
}, then converts the result to a
string, and replaces the text between the brackets with the string result of the expression. The
: in the first expression,
{weatherData.Date:MM-DD-YYYY} specifies the format string. In the preceding example, it specifies that the date should be printed in "MM-DD-YYYY" format.
Pattern matching
The C# language provides pattern matching expressions to query the state of an object and execute code based on that state. You can inspect types and the values of properties and fields to determine which action to take. The
switch expression is the primary expression for pattern matching.
Delegates and lambda expressions
A delegate type represents references to methods with a particular parameter list and return type. Delegates make it possible to treat methods as entities that can be assigned to variables and passed as parameters. Delegates are similar to the concept of function pointers found in some other languages. Unlike function pointers, delegates are object-oriented and type-safe.
The following example declares and uses a delegate type named
Function.
delegate double Function(double x); class Multiplier { double _factor; public Multiplier(double factor) => _factor = factor; public double Multiply(double x) => x * _factor; } class DelegateExample { static double[] Apply(double[] a, Function f) { var result = new double[a.Length]; for (int i = 0; i < a.Length; i++) result[i] = f(a[i]); return result; } public static void Main() { double[] a = { 0.0, 0.5, 1.0 }; double[] squares = Apply(a, (x) => x * x); double[] sines = Apply(a, Math.Sin); Multiplier m = new Multiplier(2.0); double[] doubles = Apply(a, m.Multiply); } }
An instance of the
Function delegate type can reference any method that takes a
double argument and returns a
double value. The
Apply method applies a given
Function to the elements of a
double[], returning a
double[] with the results. In the
Main method,
Apply is used to apply three different functions to a
double[].
A delegate can reference either a static method (such as
Square or
Math.Sin in the previous example) or an instance method (such as
m.Multiply in the previous example). A delegate that references an instance method also references a particular object, and when the instance method is invoked through the delegate, that object becomes
this in the invocation.
Delegates can also be created using anonymous functions, which are "inline methods" that are created when declared. Anonymous functions can see the local variables of the surrounding methods. The following example doesn't create a class:
double[] doubles = Apply(a, (double x) => x * 2.0);
A delegate doesn't know or care about the class of the method it references. The referenced method must have the same parameters and return type as the delegate.
async / await
C# supports asynchronous programs with two keywords:
async and
await. You add the
async modifier to a method declaration to declare the method is asynchronous. The
await operator tells the compiler to asynchronously await for a result to finish. Control is returned to the caller, and the method returns a structure that manages the state of the asynchronous work. The structure is typically a System.Threading.Tasks.Task<TResult>, but can be any type that supports the awaiter pattern. These features enable you to write code that reads as its synchronous counterpart, but executes asynchronously. For example, the following code downloads the home page for Microsoft docs:
public async Task<int> RetrieveDocsHomePage() { var client = new HttpClient(); byte[] content = await client.GetByteArrayAsync(""); Console.WriteLine($"{nameof(RetrieveDocsHomePage)}: Finished downloading."); return content.Length; }
This small sample shows the major features for asynchronous programming:
- The method declaration includes the
asyncmodifier.
- The body of the method
awaits the return of the
GetByteArrayAsyncmethod.
- The type specified in the
returnstatement matches the type argument in the
Task<T>declaration for the method. (A method that returns a
Taskwould use
returnstatements without any argument). declarative information by defining and using attributes.
The following example declares a
HelpAttribute attribute that can be placed on program entities to provide links to their associated documentation.
public class HelpAttribute : Attribute { string _url; string _topic; public HelpAttribute(string url) => _url = url; public string Url => _url; public string Topic { get => _topic; set => _topic = value; } }
All attribute classes derive from the Attribute base class provided by the .NET library. Attributes can be applied by giving their name, along with any arguments, inside square brackets just before the associated declaration. If an attribute’s name ends in
Attribute, that part of the name can be omitted when the attribute is referenced. For example, the
HelpAttribute can be used as follows.
[Help("")] public class Widget { [Help("", Topic = "Display")] public void Display(string text) { } }
This example attaches a
HelpAttribute to the
Widget class. It adds metadata defined by attributes can be read and manipulated at runtime using reflection. When a particular attribute is requested using this technique, the constructor for the attribute class is invoked with the information provided in the program source. The resulting attribute instance is returned. If additional information was provided through properties, those properties are set to the given values before the attribute instance is returned.
The following code sample demonstrates how to get the
HelpAttribute instances associated to the
Widget class and its
Display method.
Type widgetType = typeof(Widget); object[] widgetClassAttributes = widgetType.GetCustomAttributes(typeof(HelpAttribute), false); if (widgetClassAttributes.Length > 0) { HelpAttribute attr = (HelpAttribute)widgetClassAttributes[0]; Console.WriteLine($"Widget class help URL : {attr.Url} - Related topic : {attr.Topic}"); } System.Reflection.MethodInfo displayMethod = widgetType.GetMethod(nameof(Widget.Display)); object[] displayMethodAttributes = displayMethod.GetCustomAttributes(typeof(HelpAttribute), false); if (displayMethodAttributes.Length > 0) { HelpAttribute attr = (HelpAttribute)displayMethodAttributes[0]; Console.WriteLine($"Display method help URL : {attr.Url} - Related topic : {attr.Topic}"); }
Learn more
You can explore more about C# by trying one of our tutorials. | https://docs.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/features | 2021-09-17T05:29:42 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.microsoft.com |
3.4.0
New Features
Attribute
http.statusCodehas been added to external span events representing the status code on an http response. This attribute will be included when added to an ExternalSegment in one of these three ways:
- Using
NewRoundTripperwith your http.Client
- Including the http.Response as a field on your
ExternalSegment
- Using the new
ExternalSegment.SetStatusCodeAPI to set the status code directly
To exclude the
http.statusCodeattribute from span events, update your agent configuration like so, where
cfgis your
newrelic.Configobject.cfg.SpanEvents.Attributes.Exclude = append(cfg.SpanEvents.Attributes.Exclude, newrelic.SpanAttributeHTTPStatusCode)
Error attributes
error.classand
error.messageare now included on the span event in which the error was noticed, or on the root span if an error occurs in a transaction with no segments (no chid spans). Only the most recent error information is added to the attributes; prior errors on the same span are overwritten.
To exclude the
error.classand/or
error.messageattributes from span events, update your agent configuration like so, where
cfgis your
newrelic.Configobject.cfg.SpanEvents.Attributes.Exclude = append(cfg.SpanEvents.Attributes.Exclude, newrelic.newrelic.SpanAttributeErrorClass, newrelic.SpanAttributeErrorMessage)
Changes
Use.// Transactions previously named"GET main.handleGetUsers"// will be change to something like this match the full path"GET /user/:id" | https://docs.newrelic.com/docs/release-notes/agent-release-notes/go-release-notes/go-agent-340/?q= | 2021-09-17T04:31:45 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.newrelic.com |
- Alerts and Monitoring >
- Analyze Slow Queries >
- Performance Advisor >
- Slow Query Threshold
Slow Query Threshold¶
The Performance Advisor recognizes a query as slow if it takes longer
to execute than the value of
slowOpThresholdMs.
By default, this value is
100 milliseconds. You can change the
threshold with either the
profile
command or the db.setProfilingLevel()
mongosh method.
Example
The following
profile command example sets the threshold at 200
milliseconds:
If you are running MongoDB 3.6 or later, you can customize the
percentage of slow queries in your logs used by the Performance Advisor
by specifying the
sampleRate parameter.
Example
This sets the slow query threshold to a lower value of 100 milliseconds but also sets the sample rate to 10%. | https://docs.opsmanager.mongodb.com/current/performance-advisor/slow-query-threshold/ | 2021-09-17T04:36:04 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.opsmanager.mongodb.com |
Implementation¶
QEngine¶
A Qrack::QEngine stores a set of permutation basis complex number coefficients and operates on them with bit gates and register-like methods.
The state vector indicates the probability and phase of all possible pure bit permutations, numbered from \(0\) to \(2^N-1\), by simple binary counting. All operations except measurement should be “unitary,” except measurement. They should be representable as a unitary matrix acting on the state vector. Measurement, and methods that involve measurement, should be the only operations that break unitarity. As a rule-of-thumb, this means an operation that doesn’t rely on measurement should be “reversible.” That is, if a unitary operation is applied to the state, their must be a unitary operation to map back from the output to the exact input. In practice, this means that most gate and register operations entail simply direct exchange of state vector coefficients in a one-to-one manner. (Sometimes, operations involve both a one-to-one exchange and a measurement, like the QInterface::SetBit method, or the logical comparison methods.)
A single bit gate essentially acts as a \(2\times2\) matrix between the \(0\) and \(1\) states of a single bits. This can be acted independently on all pairs of permutation basis state vector components where all bits are held fixed while \(0\) and \(1\) states are paired for the bit being acted on. This is “embarassingly parallel.”
To determine how state vector coefficients should be exchanged in register-wise operations, essentially, we form bitmasks that are applied to every underlying possible permutation state in the state vector, and act an appropriate bitwise transformation on them. The result of the bitwise transformation tells us which input permutation coefficients should be mapped to each output permutation coefficient. Acting a bitwise transformation on the input index in the state vector array, we return the array index for the output, and we move the double precision complex number at the input index to the output index. The transformation of the array indexes is basically the classical computational bit transformation implied by the operation. In general, this is again “embarrassingly parallel” over fixed bit values for bits that are not directly involved in the operation. To ease the process of exchanging coefficients, we allocate a new duplicate permutation state array vector, which we output values into and replace the original state vector with at the end.
The act of measurement draws a random double against the probability of a bit or string of bits being in the \(1\) state. To determine the probability of a bit being in the \(1\) state, sum the probabilities of all permutation states where the bit is equal to \(1\). The probablity of a state is equal to the complex norm of its coefficient in the state vector. When the bit is determined to be \(1\) by drawing a random number against the bit probability, all permutation coefficients for which the bit would be equal to \(0\) are set to zero. The original probabilities of all states in which the bit is \(1\) are added together, and every coefficient in the state vector is then divided by this total to “normalize” the probablity back to \(1\) (or \(100\%\)).
In the ideal, acting on the state vector with only unitary matrices would preserve the overall norm of the permutation state vector, such that it would always exactly equal \(1\), such that on. In practice, floating point error could “creep up” over many operations. To correct we this, Qrack can optionally normalize its state vector, depending on constructor arguments. Specifically, normalization is enabled in tandem with floating point error mitigation that floors very small probability amplitudes to exactly 0, below the estimated level of typical systematic float error for a gate like “H.” In fact, to save computational overhead, since most operations entail iterating over the entire permutation state vector once, we can calculate the norm on the fly on one operation, finish with the overall normalization constant in hand, and apply the normalization constant on the next operation, thereby avoiding having to loop twice in every operation.
Qrack has been implemented with
float precision complex numbers by default. Optional use of
double precision costs us basically one additional qubit, entailing twice as many potential bit permutations, on the same system. However, double precision complex numbers naturally align to the width of SIMD intrinsics. It is up to the developer, whether precision and alignment with SIMD or else one additional qubit on a system is more important.
QUnit¶
Qrack::QUnit is a “fundamentally” optimized layer on top of Qrack::QEngine types. QUnit optimizations include a broadly developed, practical realization of “Schmidt decomposition,” (see [Pednault2017],) per-qubit basis transformation with gate commutation, 2-qubit controlled gate buffering and “fusion,” optimizing out global phase effects that have no effect on physical “observables,” (i.e. the expectation values of Hermitian operators,) a classically efficient SWAP still equivalent to the quantum operation, and many “syngeristic” and incidental points of optimization on top of these general approaches. Publication of an academic report on Qrack and its performance is planned soon, but the Qrack::QUnit source code is freely publicly available to inspect.
VM6502Q Opcodes¶
This extension of the MOS 6502 instruction set honors all legal (as well as undocumented) opcodes of the original chip. See [6502ASM] for the classical opcodes.
The accumulator and X register are replaced with qubits. The Y register is left as a classical bit register. A new “quantum mode” and number of new opcodes have been implemented to facilitate quantum computation, documented in MOS-6502Q Opcodes.
The quantum mode flag takes the place of the
unused flag bit in the original 6502 status flag register. When quantum mode is off, the virtual chip should function exactly like the original MOS-6502, so long as the new opcodes are not used. When the quantum mode flag is turned on, the operation of the other status flags changes. An operation that would reset the “zero,” “negative,” or “overflow” flags to 0 does nothing. An operation that would set these flags to 1 instead flips the phase of the quantum registers if the flags are already on. In quantum mode, these flags can all be manually set or reset with supplementary opcodes, to engage and disengage the conditional phase flip behavior. The “carry” flag functions in addition and subtraction as it does in the original 6502, though it can exist in a state of superposition. A “CoMPare” operation overloads the function of the carry flag in the original 6502. For a “CMP” instruction in the quantum 6502 extension, the carry flag analogously flips quantum phase when set, if the classical “CMP” instruction would usually set the carry flag. The intent of this flag behavior, setting and resetting them to enable conditional phase flips, is meant to enable quantum “amplitude amplification” algorithms based on the usual status flag capabilities of the original chip.
When an operation happens that would necessarily collapse all superposition in a register or a flag, the emulator keeps track of this, so it can know when its emulation is genuinely quantum as opposed to when it is simply an emulation of a quantum computer emulating a 6502. When quantum emulation is redundant overhead on classical emulation, the emulator is aware, and it performs only the necessary classical emulation. When an operation happens that could lead to superposition, the emulator switches back over to full quantum emulation, until another operation which is guaranteed to collapse a register’s state occurs. | https://vm6502q.readthedocs.io/en/latest/implementation.html | 2021-09-17T02:50:56 | CC-MAIN-2021-39 | 1631780054023.35 | [] | vm6502q.readthedocs.io |
General Questions
How much does the Team Server cost?
The Team Server (public version) is free of charge for Mendix partners. We may offer a Premium version with additional features in the future.
How much storage space is provided with the Team Server?
Storage space is unlimited for projects connected to a commercial license. 1 GB free storage is provided for your company account for projects not (yet) connected to a commercial license.
How do I access the Team Server?
The Team Server is delivered as a plugin to sprintr. Start using the Team Server in your sprintr project by activating the plugin or by creating a new Team Server project in the Mendix Modeler.
What access controls come standard with the Team Server?
The Team Server gives you all the controls you need to manage who has access. Just toggle access on and off for each sprintr project member. Once activated, they can use their MxID to access the Team Server from within the Mendix Modeler.
How secure is the Team Server?
Mendix is extremely serious about security. The Mendix cloud environment adheres to all of our existing security principles including access via SSL, and those expressed in the Mendix Information and Security Policy.
My data is valuable and confidential – what happens to it?
We adhere to strict security standards and regard you to be the sole owner of your data. Only Mendix system administrators can access data and will only do so for trouble shooting. You can get a backup of your data by using default Subversion tools at any time.
How do I know the Team Server will be consistently available?
The team server runs in the trusted Mendix cloud environment and on a trusted infrastructure provider. Availability follows the same guidelines as all Mendix products and we always have daily backups of all data.
Usage questions
How do I merge changes from one development line to another?
The modeler automates most of this process; you can simply merge development lines by selecting model revisions on the team server. The Mendix Modeler will do the merging and will keep track of consistency. Read more
How do I resolve a conflict when two changes cannot be combined?
Resolving a conflict can be done in by using the ‘Use mine’ and ‘Use theirs’ button in the version control dock. Read more
How can I access the history of my project?
The history of the project is a list of all revisions that have been committed in reverse chronological order. The history form quickly shows you revision number, date, time, author and message of each revision; it can be accessed from within the Mendix Modeler as well as sprintr. Read more | https://docs.mendix.com/refguide6/team-server-faq | 2018-07-15T21:20:52 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.mendix.com |
Hardware PrerequisitesHardware Prerequisites
The hardware prerequisites are a single bootstrap node, Mesos master nodes, and Mesos agent nodes.
Bootstrap nodeBootstrap node
1 node with 2 cores, 16 GB RAM, 60 GB HDD. This is the node where DC/OS installation is run. This bootstrap node must also have:
- A high-availability (HA) TCP/Layer 3 load balancer, such as HAProxy, to balance the following TCP ports to all master nodes: 80, 443.
- An unencrypted SSH key that can be used to authenticate with the cluster nodes over SSH. Encrypted SSH keys are not supported.
Cluster nodesCluster nodes
The cluster nodes are designated Mesos masters and agents during installation.
The supported operating systems and environments are listed on the version policy page. 3
Agent nodesAgent nodes
The table below shows the agent node hardware requirements.
The agent nodes must also have:
A
/vardirectory with 10 GB or more of free space. This directory is used by the sandbox for both Docker and DC/OS Universal container runtime.
The agent’s work directory,
/var/lib/mesos/slave, should be on a separate device. This protects all the other services from a task overflowing the disk.
- To maintain backwards compatibility with frameworks written before the disk resource was introduced, by default the disk resource is not enforced.
- You can enable resource enforcement by inserting the environment variable MESOS_ENFORCE_CONTAINER_DISK_QUOTA=true into one of the Mesos agent extra config files (e.g.
/var/lib/dcos/mesos-slave-common).
- Disk quotas are not supported by Docker tasks, so these can overflow the disk regardless of configuration.
DC/OS is installed to
/opt/mesosphere.
/opt/mesospheremust be on the same mountpoint as
/. This is required because DC/OS installs systemd unit files under
/opt/mesosphere. All systemd units must be available for enumeration during the initializing of the initial ramdisk at boot. If
/optis on a different partition or volume, systemd will fail to discover these units during the initialization of the ramdisk and DC/OS will not automatically restart upon reboot...
- Each node is network accessible from the bootstrap node.
- Each node has unfettered IP-to-IP connectivity from itself to all nodes in the DC/OS cluster.
- All ports should be open for communication from the master nodes to the agent nodes and vice versa.
- UDP must be open for ingress to port 53 on the masters. To attach to a cluster, the Mesos agent node service (
dcos-mesos-slave) uses this port to find
leader.mesos. this shell script for an example of how to install the software requirements for DC/OS masters and agents on a CentOS 7 host.
All NodesAll Nodes
DockerDocker
Docker must be installed on all bootstrap and cluster nodes. The supported Docker versions are listed on the version policy page.
Recommendations. systemd handles starting Docker on boot and restarting it when it crashes.
Run Docker commands as the root user (with
sudo) or as a user in the docker user group.
Distribution-Specific Installation
Each Linux distribution requires Docker to be installed in a specific way:
- CentOS - Install Docker from Docker’s yum repository.
- RHEL - Install Docker by using a subscription channel. For more information, see Docker Formatted Container Images on Red Hat Systems.
- CoreOS - Comes with Docker pre-installed and pre-configured.
For more more information, see Docker’s distribution-specific installation instructions.
Disable sudo password promptsDisable sudo password prompts
To use the GUI or CLI installation methods, you must disable password prompts for sudo.
Add the following line to your
/etc/sudoers file. This disables the sudo password prompt.
%wheel ALL=(ALL) NOPASSWD: ALL
Alternatively, you can SSH as the root user.
Enable Time synchronizationEnable Time synchronization
Time synchronization is a core requirement of DC/OS. There are various methods of ensuring time sync. NTP is the typical approach on bare-metal. Many cloud providers use hypervisors, which push time down to the VM guest operating systems. In certain circumstances, hypervisor time-sync may conflict with NTP.
You must understand how to properly configure time synchronization for your
environment. When in doubt, enable NTP and check using
/opt/mesosphere/bin/check-time.
Enable Check TimeEnable Check Time
You must set the
ENABLE_CHECK_TIME environment variable in order for
/opt/mesosphere/bin/check-time to function. It’s recommended
that you enable this globally. e.g. on CoreOS an entry in
/etc/profile.env
of
export ENABLE_CHECK_TIME=true with set the appropriate variable.
Using NTPUsing.
Important:
- If you specify
exhibitor_storage_backend: zookeeper, the bootstrap node is a permanent part of your cluster. With
exhibitor_storage_backend: zookeeperthe leader state and leader election of your Mesos masters is maintained in Exhibitor ZooKeeper on the bootstrap node. For more information, see the configuration parameter documentation.
- The bootstrap node must be separate from your cluster nodes.
DC/OS setup file
Download and save the DC/OS setup file to your bootstrap node. This file is used to create your customized DC/OS build file. Contact your sales representative or [email protected] for access to this file.
Docker Nginx (advanced installer)Docker Nginx (advanced installer)
For advanced install only, install the Docker Nginx image with this command:
sudo docker pull nginx
Cluster nodesCluster nodes
For advanced install only, your cluster nodes must have the following prerequisites. The cluster nodes are designated as Mesos masters and agents during installation.
Data compression (advanced installer)Data compression (advanced installer) (advanced installer)Cluster permissions (advanced installer)
Note: It may take a few minutes for your node to come back online after reboot.
Locale requirements
You must set the
LC_ALL and
LANG environment variables to
en_US.utf-8.
For info on setting these variables in Red Hat, see How to change system locale on RHEL.
On Linux:
localectl set-locale LANG=en_US.utf8
- For info on setting these variables in CentOS 7, see How to set up system locale on CentOS 7.
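Because both LC_ALL and LANG must be set, a hedged sketch for making them persistent (assuming a system that reads /etc/environment via PAM) is:
sudo localectl set-locale LANG=en_US.utf8
echo 'LC_ALL=en_US.utf8' | sudo tee -a /etc/environment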
Next steps
Install Docker on CentOS
Docker’s CentOS-specific installation instructions are always going to be the most up to date for the latest version of Docker. However, the following recommendations and instructions should make it easier to manage the Docker installation over time and mitigate several known issues with various other configurations.
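As a hedged sketch only: the repository definition below is the historical Docker yum repository and the package name (docker-engine) reflects older releases, so confirm the currently supported version and repository on the version policy page before using it:
sudo tee /etc/yum.repos.d/docker.repo <<'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
sudo yum install -y docker-engine
sudo systemctl enable docker
sudo systemctl start docker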
Upgrading Telerik UI Trial to Telerik UI Developer License or Newer Version
The purpose of this topic is to explain how to upgrade Telerik UI Trial to Telerik Developer License or a newer version.
Automatic Upgrade to newer version of UI for WPF
Utilize the VS Extensions wizards for this purpose:
Upgrade to Newer Version or Other License of UI for WPF
In order to upgrade your controls to a newer version of the suite, perform the following steps:
Download the new version using the installation method you prefer:
If you have installed the trial version of UI for WPF and try to install the developer version of the same release, you will receive the following message:
So, you should remove the trial version first.
If the upgrade is major (e.g. from Q2 2011 to Q3 2011), check the Release History.
Back up your application.
Update all the Telerik references in your project in Visual Studio to point to the new DLLs.
Clean the solution.
Recompile your project.
Run the project.
How to check the version of dll files using Visual Studio:
In order to check whether the dll files are the trial or the dev version, perform the following steps:
Open the project containing the dll-s with Visual Studio.
Double click on one of the following dll files so that the properties window of the dll is shown: Telerik.Windows.Controls or Telerik.Windows.Documents.Core.
Expand the Version folder.
Double click on the version and you will see the full information about the dll.
If this is Trial Version this will be written in the FileDescription property.
For example:
- Telerik.Windows.Controls Trial Version
If your dll files contain this message in the FileDescription, then they are the Trial version and you have to replace them with the Development assemblies.
In case the project does not build:
- Please make sure that all the assemblies you have referenced are with the same version.
- If this does not help, delete the bin and obj folders of the project manually and Rebuild. | https://docs.telerik.com/devtools/wpf/installation-and-deployment/upgrading-instructions/installation-upgrading-from-trial-to-developer-license-wpf | 2018-07-15T21:22:09 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.telerik.com |
Method annotation used to invert test case results. If a JUnit 3/4 test case method is
annotated with
@NotYetImplemented, the test will fail if no test failure occurs and it will pass if a test failure does occur.
TripleO can be used in baremetal as well as in virtual environments. This section contains instructions on how to setup your environments properly.
TripleO can be used in a virtual environment using virtual machines instead of actual baremetal. However, one baremetal machine is still needed to act as the host for the virtual machines.
By default, this setup creates 3 virtual machines:
Each virtual machine needs at least 4 GB of memory and 40 GB of disk space [1].
Note
The virtual machine disk files are thinly provisioned and will not take up the full 40GB initially.
The baremetal machine must meet the following minimum system requirements:
TripleO currently supports the following operating systems:
Make sure sshd service is installed and running.
The user performing all of the installation steps on the virt host needs to have sudo enabled. You can use an existing user or use the following commands to create a new user called stack with password-less sudo enabled. Do not run the rest of the steps in this guide as root.
Example commands to create a user:
sudo useradd stack
sudo passwd stack  # specify a password
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
Make sure you are logged in as the non-root user you intend to use.
Example command to log in as the non-root user:
su - stack
Enable needed repositories:
Enable epel:
sudo yum -y install epel-release
Enable last known good RDO Trunk Delorean repository for core openstack packages:
sudo curl -L -o /etc/yum.repos.d/delorean.repo
Enable latest RDO Trunk Delorean repository only for the TripleO packages:
sudo curl -L
sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo
Stable Branch
Skip all repos mentioned above, other than epel-release which is still required.
Enable latest RDO Stable Delorean repository for all packages:
sudo curl -L -o /etc/yum.repos.d/delorean-liberty.repo
Enable the Delorean Deps repository:
sudo curl -L -o /etc/yum.repos.d/delorean-deps-liberty.repo
Install instack-undercloud:
sudo yum install -y instack-undercloud
The virt setup automatically sets up a vm for the Undercloud installed with the same base OS as the host. See the Note below to choose a different OS:
Note
To setup the undercloud vm with a base OS different from the host, set the $NODE_DIST environment variable prior to running instack-virt-setup:
CentOS
export NODE_DIST=centos7
RHEL
export NODE_DIST=rhel7
Run the script to setup your virtual environment:
Note
By default, the overcloud VMs will be created with 1 vCPU and 5120 MiB RAM and the undercloud VM with 2 vCPU and 6144 MiB. To adjust those values:
export NODE_CPU=4
export NODE_MEM=16384
Note the settings above only influence the VMs created for overcloud deployment. If you want to change the values for the undercloud node:
export UNDERCLOUD_NODE_CPU=4
export UNDERCLOUD_NODE_MEM=16384
RHEL
Download the RHEL 7.1 cloud image or copy it over from a different location, for example from the Red Hat product downloads page (…/7.1/x86_64/product-downloads), and define the needed environment variables for RHEL 7.1 prior to running instack-virt-setup:
export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150224.0.x86_64.qcow2
RHEL Portal Registration
To register the Undercloud vm to the Red Hat Portal, define the portal registration environment variables (method, user name, password, and subscription pool ID) before running instack-virt-setup.
RHEL Satellite Registration
To register the Undercloud vm to a Satellite server, set the activation key:
export REG_ACTIVATION_KEY="[activation key]"
Ceph
To use Ceph you will need at least one additional virtual machine to be provisioned as a Ceph OSD; set the NODE_COUNT variable to 3, from a default of 2, so that the overcloud will have exactly one more:
export NODE_COUNT=3
Note
The TESTENV_ARGS environment variable can be used to customize the virtual environment configuration. For example, it could be used to enable additional networks as follows:
export TESTENV_ARGS="--baremetal-bridge-names 'brbm brbm1 brbm2'"
Note
The LIBVIRT_VOL_POOL and LIBVIRT_VOL_POOL_TARGET environment variables govern the name and location respectively for the storage pool used by libvirt. The defaults are the ‘default’ pool with target /var/lib/libvirt/images/. These variables are useful if your current partitioning scheme results in insufficient space for running any useful number of vms (see the Minimum Requirements):
# you can check the space available to the default location like
df -h /var/lib/libvirt/images
# If you wish to specify an alternative pool name:
export LIBVIRT_VOL_POOL=tripleo
# If you want to specify an alternative target
export LIBVIRT_VOL_POOL_TARGET=/home/vm_storage_pool
If you don’t have a ‘default’ pool defined at all, setting the target is sufficient as the default will be created with your specified target (and directories created as necessary). It isn’t possible to change the target for an existing volume pool with this method, so if you already have a ‘default’ pool and cannot remove it, you should also specify a new pool name to be created.
instack-virt-setup
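Putting it together, an illustrative invocation that bumps the VM sizes, adds a Ceph OSD node, and then runs the setup could look like the following; the values are examples only:
export NODE_CPU=2
export NODE_MEM=8192
export NODE_COUNT=3
instack-virt-setup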
If the script encounters problems, see Troubleshooting instack-virt-setup Failures.
When the script has completed successfully it will output the IP address of the instack vm that has now been installed with a base OS.
Running sudo virsh list --all [2] will show that you now have one virtual machine called instack and 2 called baremetal[0-1].
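For illustration only, the output usually resembles the listing below; the exact VM names, IDs, and states on your host may differ, and the baremetal VMs commonly stay shut off until the overcloud deployment powers them on:
sudo virsh list --all
 Id    Name          State
 --------------------------------
 1     instack       running
 -     baremetal_0   shut off
 -     baremetal_1   shut off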
You can ssh to the instack vm as the root user:
ssh root@<instack-vm-ip>
The vm contains a stack user to be used for installing the undercloud. You can su - stack to switch to the stack user account.
Continue with Undercloud Installation.
Footnotes
TripleO can be used in an all baremetal environment. One machine will be used for Undercloud, the others will be used for your Overcloud.
To deploy a minimal TripleO cloud you need the following baremetal machines:
For each additional Overcloud role, such as Block Storage or Object Storage, you need an additional baremetal machine.
The baremetal machines must meet the following minimum specifications:
TripleO supports only the following operating systems:
The overcloud nodes will be deployed from the undercloud machine and therefore the machines need to have their network settings modified to allow for the overcloud nodes to be PXE booted using the undercloud machine. As such, the setup requires that:
Refer to the following diagram for more information
Select a machine within the baremetal environment on which to install the undercloud.
Install RHEL 7.1 x86_64 or CentOS 7 x86_64 on this machine.
If needed, create a non-root user with sudo access to use for installing the Undercloud:
sudo useradd stack sudo passwd stack # specify a password echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack sudo chmod 0440 /etc/sudoers.d/stack
RHEL
If using RHEL, register the Undercloud for package installations/updates, and make sure the required repositories are enabled for registered systems.
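A hedged sketch of registering with subscription-manager follows; the pool ID and the repository IDs required for your RHEL and OpenStack versions are placeholders you must replace with the ones from your own subscription:
sudo subscription-manager register --username <user> --password <password>
sudo subscription-manager attach --pool=<pool id>
sudo subscription-manager repos --enable=<required repository id>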
Create a JSON file describing your Overcloud baremetal nodes, call it instackenv.json and place it in your home directory. The file should contain a JSON object whose only field, nodes, contains the list of node descriptions.
Each node description should contain the required fields:
Some fields are optional if you’re going to use introspection later:
It is also possible (but optional) to set Ironic node capabilities directly in the JSON file. This can be useful for assigning node profiles or setting boot options at registration time:
capabilities - Ironic node capabilities. For example:
"capabilities": "profile:compute,boot_option:local"
For example:
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "mac": ["fa:16:3e:2a:0e:36"],
      "cpu": "2",
      "memory": "4096",
      "disk": "40",
      "arch": "x86_64",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "10.0.0.8"
    },
    {
      "pm_type": "pxe_ipmitool",
      "mac": ["fa:16:3e:da:39:c9"],
      "cpu": "2",
      "memory": "4096",
      "disk": "40",
      "arch": "x86_64",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "10.0.0.15"
    },
    {
      "pm_type": "pxe_ipmitool",
      "mac": ["fa:16:3e:51:9b:68"],
      "cpu": "2",
      "memory": "4096",
      "disk": "40",
      "arch": "x86_64",
      "pm_user": "admin",
      "pm_password": "password",
      "pm_addr": "10.0.0.16"
    }
  ]
}
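Before registering the nodes it is worth validating the file's JSON syntax; one simple check, assuming Python is available on the undercloud host, is:
python -m json.tool instackenv.json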
Ironic drivers provide various level of support for different hardware. The most up-to-date information about Ironic drivers is at, but note that this page always targets Ironic git master, not the release we use.
There are 2 generic drivers:
Ironic also provides specific drivers for some types of hardware:
There are also 2 testing drivers: | http://docs.openstack.org/developer/tripleo-docs/environments/environments.html | 2016-07-23T13:04:25 | CC-MAIN-2016-30 | 1469257822598.11 | [] | docs.openstack.org |
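If you are unsure which drivers your undercloud has enabled, you can usually list them after the undercloud installation, for example with the Ironic client (the exact command name and output format depend on the client version installed):
ironic driver-list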
Subtraction operator
Namespace: SnmpSharpNet
Assembly: SnmpSharpNet (in SnmpSharpNet.dll) Version: 0.5.0.0 (0.5.0.0)
Syntax
Return Value
New object with subtracted values of the 2 parameter objects. If both parameters are null references then null is returned. If either of the two parameters is null, the non-null object's value is set as the value of the new object and returned.
Remarks
Subtract the value of the second Integer32 class value from the first Integer32 class value. Values of the two objects are subtracted and a new class is instantiated with the result. Original values of the two parameter classes are preserved. | http://www.docs.snmpsharpnet.com/docs-0-5-0/html/M_SnmpSharpNet_Integer32_op_Subtraction.htm | 2012-05-24T10:53:30 | crawl-003 | crawl-003-008 | [] | www.docs.snmpsharpnet.com |
TCustomAction is the base class for actions meant to be used with menu items and controls.
TCustomAction = class(TContainedAction);
class TCustomAction : public TContainedAction;
TCustomAction introduces support for the properties and methods of menu items and controls that are linked to action objects. Use TCustomAction as a base class when deriving your own actions that publish specific properties of associated controls.
Action objects centralize the response to user commands (actions) and represent user interface elements in applications that use action bars. They provide an easy way to synchronize, for example, the enabled state and caption of a speed button and a menu item, and handle the response when users click on these components. Each such component, called the client, has its properties dynamically updated by the action and forwards user actions to the action for a response.
You can work with actions at design-time in the action list editor of an action list or the customize dialog of an action manager. The action list or action manager is a container for actions, which it organizes into categories.
Component and control properties and events that are supported in TCustomAction, either directly or through an ancestor, are:
OnUpdate
OnExecute | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ActnList_TCustomAction.html | 2012-05-24T10:31:24 | crawl-003 | crawl-003-008 | [] | docs.embarcadero.com |
An IPortCandidateProvider that works for IPortOwner implementations that have IShapeGeometry instances in their lookup.
addExistingPortsEnabled:Boolean
Gets or sets a property that determines whether existing ports should be added to the list of ports
The default value is
false.
public function get addExistingPortsEnabled():Boolean
public function set addExistingPortsEnabled(value:Boolean):void
See also
minimumSegmentLength:Number
Gets or sets the minimum length a segment needs to have in order to be used to add port candidates.
The default value is
10.0.
public function get minimumSegmentLength():Number
public function set minimumSegmentLength(value:Number):void
public function ShapeGeometryPortCandidateProvider(portOwner:IPortOwner, ratios:Array = null)
Creates an instance that inserts a port candidate at the given ratios of each segment of the shape's path.
Parameters
protected function createList(context:IInputModeContext, owner:IPortOwner):Iterable
Creates the list of port candidates using the
IShapeGeometry obtained from the
owner's lookup.
Parameters
Returns
override protected function getPortCandidates(context:IInputModeContext):Iterable
Creates an enumeration of possibly port candidates.
This method is used as a callback by most of the getter methods in this class. Subclasses have to override this method to provide the same candidates for all use-cases.
Parameters
Returns
public static const EMPTY:IPortCandidateProvider
A generic implementation of the
IPortCandidateProvider interface that provides
no candidates at all. | http://docs.yworks.com/yfilesflex/doc/api/client/com/yworks/graph/model/ShapeGeometryPortCandidateProvider.html | 2012-05-24T10:28:48 | crawl-003 | crawl-003-008 | [] | docs.yworks.com |
Ser.Bits Property
Function:
Specifies the number of data bits in a word TXed/RXed by the serial port for the currently selected port (selection is made through ser.num)
Type:
Enum (pl_ser_bits, byte)
Value Range:
0- PL_SER_BB_7: data word TXed/RXed by the serial port is to contain 7 data bits
1- PL_SER_BB_8 (default): data word TXed/RXed by the serial port is to contain 8 data bits.
See Also:
UART Mode, Serial Settings
Details
This property is only relevant when the serial port is in the UART mode (ser.mode= 0- PL_SER_MODE_UART). | http://docs.tibbo.com/taiko/ser_bits.htm | 2012-05-24T07:33:02 | crawl-003 | crawl-003-008 | [] | docs.tibbo.com |