Columns: content (string), url (string), timestamp (timestamp[ms]), dump (string), segment (string), image_urls (string), netloc (string)
This chapter describes how to use the Knox CLI (Command Line Interface) to run diagnostic tests.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.3.6/bk_Security_Guide/content/knox_cli_testing_tools.html
2022-06-25T01:47:02
CC-MAIN-2022-27
1656103033925.2
[]
docs.cloudera.com
DISCUSSION The cbimport command is used to import data from various sources into a Couchbase cluster. Each supported format is a subcommand of the cbimport utility. REPORTING BUGS Report urgent issues to the Couchbase Support Team at [email protected]. Bugs can be reported to the Couchbase Jira Bug Tracker.
https://docs.couchbase.com/server/6.6/tools/cbimport.html
2022-06-25T02:29:17
CC-MAIN-2022-27
1656103033925.2
[]
docs.couchbase.com
ThemeDefinition.ThemeType Property Definition Important: Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Defines the type of Theme to associate with this definition. Type must be a class that extends InteractableThemeBase. C++/CLI: public: property Type ^ ThemeType { Type ^ get(); void set(Type ^ value); }; C#: public Type ThemeType { get; set; } F#: member this.ThemeType : Type with get, set VB: Public Property ThemeType As Type
https://docs.microsoft.com/pl-pl/dotnet/api/microsoft.mixedreality.toolkit.ui.themedefinition.themetype?view=mixed-reality-toolkit-unity-2020-dotnet-2.8.0
2022-06-25T01:41:01
CC-MAIN-2022-27
1656103033925.2
[]
docs.microsoft.com
Debug WordPress by following these steps, depending on your installation type:

Approach A (Bitnami installations using system packages): Execute the following command:

$ sudo chmod g+w /bitnami/wordpress/wp-config.php

Restore the previous permissions configuration after activating the plugin:

$ sudo chmod g-w /bitnami/wordpress/wp-config.php

Approach B (Self-contained Bitnami installations): Execute the following command:

$ sudo chmod g+w /opt/bitnami/apps/wordpress/htdocs/wp-config.php

Deactivate all the plugins in the MariaDB database named bitnami_wordpress by connecting to MariaDB and running the SQL statement below:

UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';

You can connect to the bitnami_wordpress database using the user bn_wordpress and the random password located in the WordPress wp-config.php configuration file. NOTE: Depending on your installation type, the WordPress wp-config.php configuration file can be found in the following locations:

- Approach A (Bitnami installations using system packages): /opt/bitnami/wordpress/wp-config.php
- Approach B (Self-contained Bitnami installations): /opt/bitnami/apps/wordpress/htdocs/wp-config.php

Run the commands below, depending on your installation type:

Approach A (Bitnami installations using system packages):

$ sudo chown -R bitnami:daemon /bitnami/wordpress/wp-content
$ sudo chmod -R g+w /bitnami/wordpress/wp-content

Approach B (Self-contained Bitnami installations): $.
https://docs.bitnami.com/azure/apps/wordpress/troubleshooting/debug-errors/
2022-06-25T01:01:24
CC-MAIN-2022-27
1656103033925.2
[]
docs.bitnami.com
This topic describes basic concepts for creating and managing unstructured file sources. The term “unstructured files" refers to data stored in a filesystem that is NOT usually accessed by a DBMS or similar software. Unstructured files can consist of anything from a simple directory to the root of a complex application like Oracle Enterprise Business Suite. As with other data types, you can configure a dSource to sync periodically with a set of unstructured files external to the Delphix Engine. The dSource is a copy of these physical files stored on the Delphix Engine. On Unix platforms, dSources are created and periodically synced by an implementation of the rsync utility. On Windows, files are synced using the robocopy utility, which is distributed with Windows. dSources enable you to provision “vFiles,” which are virtual copies of data that are fully functional read-write copies of the original source files. You can mount vFiles across one target environment or many.
https://docs.delphix.com/docs536/delphix-administration/unstructured-files-and-app-data/getting-started-with-unstructured-files
2022-06-25T01:27:36
CC-MAIN-2022-27
1656103033925.2
[]
docs.delphix.com
Inventory Using items, combining them, crafting new ones or trading them with other characters is at the heart of many games. The Inventory module has been meticulously crafted to support a wide variety of situations that involve the use and management of items. Requirements: The Inventory module is an extension of Game Creator 2 and won't work without it.
https://docs.gamecreator.io/inventory/
2022-06-25T01:51:22
CC-MAIN-2022-27
1656103033925.2
[array(['/assets/images/inventory/cover.png', 'Inventory'], dtype=object)]
docs.gamecreator.io
Higher-Order Composites The dimod package includes several example higher-order composed samplers. HigherOrderComposite - class HigherOrderComposite(child_sampler)[source] Convert a binary quadratic model sampler to a binary polynomial sampler. Energies of the returned samples do not include the penalties. - Parameters sampler (dimod.Sampler) – A dimod sampler Example This example uses HigherOrderComposite to instantiate a composed sampler that submits a simple Ising problem to a sampler. The composed sampler creates a binary quadratic model (BQM) from a higher order problem. >>> sampler = dimod.HigherOrderComposite(dimod.ExactSolver()) >>> h = {0: -0.5, 1: -0.3, 2: -0.8} >>> J = {(0, 1, 2): -1.7} >>> sampleset = sampler.sample_hising(h, J, discard_unsatisfied=True) >>> set(sampleset.first.sample.values()) == {1} True PolyFixedVariableComposite - class PolyFixedVariableComposite(child_sampler)[source] Composite that fixes variables of a problem. Fixes variables of a binary polynomial and modifies linear and k-local terms accordingly. Returned samples include the fixed variable. - Parameters sampler (dimod.PolySampler) – A dimod polynomial sampler. Examples This example uses PolyFixedVariableComposite to instantiate a composed sampler that submits a simple high-order Ising problem to a sampler. The composed sampler fixes a variable and modifies linear and k-local term biases. >>> h = {1: -1.3, 2: 1.2, 3: -3.4, 4: -0.5} >>> J = {(1, 4): -0.6, (1, 2, 3): 0.2, (1, 2, 3, 4): -0.1} >>> poly = dimod.BinaryPolynomial.from_hising(h, J, offset=0) >>> sampler = dimod.PolyFixedVariableComposite(dimod.ExactPolySolver()) >>> sampleset = sampler.sample_poly(poly, fixed_variables={3: -1, 4: 1}) PolyScaleComposite - class PolyScaleComposite(child)[source] Composite to scale biases of a binary polynomial. - Parameters child (PolySampler) – A binary polynomial sampler. Examples >>> linear = {'a': -4.0, 'b': -4.0} >>> quadratic = {('a', 'b'): 3.2, ('a', 'b', 'c'): 1} >>> sampler = dimod.PolyScaleComposite(dimod.HigherOrderComposite(dimod.ExactSolver())) >>> response = sampler.sample_hising(linear, quadratic, scalar=0.5, ... ignored_terms=[('a','b')]) PolyTruncateComposite - class PolyTruncateComposite(child_sampler, n, sorted_by='energy', aggregate=False)[source] Composite that truncates returned samples. Post-processing is expensive and sometimes one might want to only treat the lowest-energy samples. This composite layer allows one to pre-select the samples within a multi-composite pipeline. - Parameters child_sampler (dimod.PolySampler) – A dimod binary polynomial sampler. n (int) – Maximum number of rows in the returned sample set. sorted_by (str/None, optional, default='energy') – Selects the record field used to sort the samples before truncating. Note that sample order is maintained in the underlying array. aggregate (bool, optional, default=False) – If True, aggregate the samples before truncating. Note If aggregate is True, SampleSet.record.num_occurrences are accumulated but no other fields are.
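PolyTruncateComposite is the only composite above without a usage example. As a minimal sketch (not taken from the dimod documentation, but assuming the same API shown in the other examples), truncation to the lowest-energy rows could look like this:

import dimod

# Keep only the 3 lowest-energy rows returned by the wrapped polynomial sampler.
sampler = dimod.PolyTruncateComposite(dimod.ExactPolySolver(), 3)

h = {0: -0.5, 1: -0.3, 2: -0.8}
J = {(0, 1, 2): -1.7}

sampleset = sampler.sample_hising(h, J)
print(len(sampleset))  # at most 3 samples remain after truncation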
https://docs.ocean.dwavesys.com/en/stable/docs_dimod/reference/sampler_composites/higher_order_composites.html
2022-06-25T01:23:01
CC-MAIN-2022-27
1656103033925.2
[]
docs.ocean.dwavesys.com
xdmp.unquote( arg as String, [default-namespace as String?], [options as String[]] ) as Sequence Parses a string as XML, returning one or more document nodes. If no format is specified in $options, it is inferred from the input. If the first non-whitespace character is either '{' or '[' it is JSON. Otherwise it is XML. If neither "repair-full" nor "repair-none" is present, the default is specified by the XQuery version of the caller. In XQuery version 1.0 and 1.0-ml the default is "repair-none". In XQuery version 0.9-ml the default is "repair-full". If $arg is the empty string, xdmp:unquote returns an empty document node. fn.head(xdmp.unquote('<foo/>')); => <foo/> It returns this as a document node. fn.head( xdmp.unquote('<foo>hello</foo>', null, ['repair-none', 'default-language=en']) ); => <foo xml:lang="en">hello</foo> It returns this as a document node and does not perform tag repair on the node. fn.head( xdmp.unquote('<foo>hello</foo>', 'bar', ['repair-none', 'default-language=en']) ); => <foo xml:lang="en" xmlns="bar">hello</foo> It returns this as a document node and does not perform tag repair on the node. Note that the node is in the "bar" namespace.
https://docs.marklogic.com/9.0/xdmp.unquote
2022-06-25T01:47:18
CC-MAIN-2022-27
1656103033925.2
[]
docs.marklogic.com
Extract an archive New in version 2014.1.0. salt.states.archive. extracted(name, source, source_hash=None, source_hash_name=None, source_hash_update=False, skip_files_list_verify=False, skip_verify=False, password=None, options=None, list_options=None, force=False, overwrite=False, clean=False, clean_parent=False, user=None, group=None, if_missing=None, trim_output=False, use_cmd_unzip=None, extract_perms=True, enforce_toplevel=True, enforce_ownership_on=None, archive_format=None, **kwargs)¶ New in version 2014.1.0. Changed in version 2016.11.0: This state has been rewritten. Some arguments are new to this release and will not be available in the 2016.3 release cycle (and earlier). Additionally, the ZIP Archive Handling section below applies specifically to the 2016.11.0 release (and newer). Ensure that an archive is extracted to a specific directory. Important Changes for 2016.11.0 In earlier releases, this state would rely on the if_missing argument to determine whether or not the archive needed to be extracted. When this argument was not passed, then the state would just assume if_missing is the same as the name argument (i.e. the parent directory into which the archive would be extracted). This caused a number of annoyances. One such annoyance was the need to know beforehand a path that would result from the extraction of the archive, and setting if_missing to that directory, like so: extract_myapp: archive.extracted: - name: /var/www - source: salt://apps/src/myapp-16.2.4.tar.gz - user: www - group: www - if_missing: /var/www/myapp-16.2.4 If /var/www already existed, this would effectively make if_missing a required argument, just to get Salt to extract the archive. Some users worked around this by adding the top-level directory of the archive to the end of the name argument, and then used --strip or --strip-components to remove that top-level dir when extracting: extract_myapp: archive.extracted: - name: /var/www/myapp-16.2.4 - source: salt://apps/src/myapp-16.2.4.tar.gz - user: www - group: www With the rewrite for 2016.11.0, these workarounds are no longer necessary. if_missing is still a supported argument, but it is no longer required. The equivalent SLS in 2016.11.0 would be: extract_myapp: archive.extracted: - name: /var/www - source: salt://apps/src/myapp-16.2.4.tar.gz - user: www - group: www Salt now uses a function called archive.list to get a list of files/directories in the archive. Using this information, the state can now check the minion to see if any paths are missing, and know whether or not the archive needs to be extracted. This makes the if_missing argument unnecessary in most use cases. Important ZIP Archive Handling Note: this information applies to 2016.11.0 and later. Salt has two different functions for extracting ZIP archives: archive.unzip, which uses Python's zipfile module to extract ZIP files. archive.cmd_unzip, which uses the unzip CLI command to extract ZIP files. Salt will prefer the use of archive.cmd_unzip when CLI options are specified (via the options argument), and will otherwise prefer the archive.unzip function. Use of archive.cmd_unzip can be forced however by setting the use_cmd_unzip argument to True. By contrast, setting this argument to False will force usage of archive.unzip. For example: /var/www: archive.extracted: - source: salt://foo/bar/myapp.zip - use_cmd_unzip: True When use_cmd_unzip is omitted, Salt will choose which extraction function to use based on the source archive and the arguments passed to the state. 
When in doubt, simply do not set this argument; it is provided as a means of overriding the logic Salt uses to decide which function to use. There are differences in the features available in both extraction functions. These are detailed below. Command-line options (only supported by archive.cmd_unzip) - When the options argument is used, archive.cmd_unzip is the only function that can be used to extract the archive. Therefore, if use_cmd_unzip is specified and set to False, and options is also set, the state will not proceed. Permissions - Due to an upstream bug in Python, permissions are not preserved when the zipfile module is used to extract an archive. As of the 2016.11.0 release, archive.unzip (as well as this state) has an extract_perms argument which, when set to True (the default), will attempt to match the permissions of the extracted files/directories to those defined within the archive. To disable this functionality and have the state not attempt to preserve the permissions from the ZIP archive, set extract_perms to False: /var/www: archive.extracted: - source: salt://foo/bar/myapp.zip - extract_perms: False Directory into which the archive should be extracted Archive to be extracted Note This argument uses the same syntax as its counterpart in the file.managed state. Hash of source file, or file with list of hash-to-file mappings Note This argument uses the same syntax as its counterpart in the file.managed state. Changed in version 2016.11.0: If this argument specifies the hash itself, instead of a URI to a file containing hashes, the hash type can now be omitted and Salt will determine the hash type based on the length of the hash. For example, both of the below states are now valid, while before only the second one would be: foo_app: archive.extracted: - name: /var/www - source: - source_hash: 3360db35e682f1c5f9c58aa307de16d41361618c bar_app: archive.extracted: - name: /var/www - source: - source_hash: sha1=5edb7d584b82ddcbf76e311601f5d4442974aaa5 When source_hash refers to a hash file, Salt will try to find the correct hash by matching the filename part of the source URI. When managing a file with a source of salt://files/foo.tar.gz, then the following line in a hash file would match: acbd18db4cc2f85cedef654fccc4a4d8 foo.tar.gz This line would also match: acbd18db4cc2f85cedef654fccc4a4d8 ./dir1/foo.tar.gz: /var/www: archive.extracted: - source: - source_hash: - source_hash_name: ./dir2/foo.tar.gz Set this to True if the archive should be extracted if source_hash has changed and there is a difference between the archive and the local files. This would extract regardless of the if_missing parameter. Note that this is only checked if the source value has not changed. If it has (e.g. to increment a version number in the path) then the archive will not be extracted even if the hash has changed. Note Setting this to True along with keep_source set to False will result in the source re-download to do an archive file list check. If that's not desirable, please consider the skip_files_list_verify argument. New in version 2016.3.0. Set this to True if the archive should be extracted if source_hash has changed but only checksums of the archive will be checked to determine if the extraction is required. It will try to find a local cache of the source and check its hash against the source_hash. If there is no local cache available, for example if you set the keep_source to False, it will try to find a cached source hash file in the Minion archives cache directory. 
Note The current limitation of this logic is that you have to set the minion's hash_type config option to the same one that you're going to pass via the source_hash argument. Warning With this argument set to True, Salt will only check the source_hash against the local hash of the source. So if you, for example, remove extracted files without clearing the Salt Minion cache, the next time you execute the state Salt will not notice that extraction is required if the hashes still match. New in version 3000. If True, hash verification of remote file sources (http://, https://, ftp://) will be skipped, and the source_hash argument will be ignored. New in version 2016.3.4. For source archives not local to the minion (i.e. from the Salt fileserver or a remote source such as http(s) or ftp), Salt will need to download the archive to the minion cache before it can be extracted. To remove the downloaded archive after extraction, set this argument to False. New in version 2017.7.3. Same as keep_source, kept for backward-compatibility. Note If both keep_source and keep are used, keep will be ignored. For ZIP archives only. Password used for extraction. New in version 2016.3.0. Changed in version 2016.11.0: The newly-added archive.is_encrypted function will be used to determine if the archive is password-protected. If it is, then the password argument will be required for the state to proceed. For tar and zip archives only. This option can be used to specify a string of additional arguments to pass to the tar/zip command. If this argument is not used, then the minion will attempt to use Python's native tarfile/zipfile support to extract it. For zip archives, this argument is mostly used to overwrite existing files with o. Using this argument means that the tar or unzip command will be used, which is less platform-independent, so keep this in mind when using this option; the CLI options must be valid options for the tar/unzip implementation on the minion's OS. New in version 2016.11.0. Changed in version 2015.8.11,2016.3.2: XZ-compressed tar archives no longer require J to manually be set in the options, they are now detected automatically and decompressed using the xz CLI command and extracted using tar xvf. This is a more platform-independent solution, as not all tar implementations support the J argument for extracting archives. Note For tar archives, main operators like -x, --extract, --get, -c and -f/--file should not be used here. For tar archives only. This state uses archive.list to discover the contents of the source archive so that it knows which file paths should exist on the minion if the archive has already been extracted. For the vast majority of tar archives, archive.list "just works". Archives compressed using gzip, bzip2, and xz/lzma (with the help of the xz CLI command) are supported automatically. However, for archives compressed using other compression types, CLI options must be passed to archive.list. This argument will be passed through to archive.list as its options argument, to allow it to successfully list the archive's contents. For the vast majority of archives, this argument should not need to be used; it should only be needed in cases where the state fails with an error stating that the archive's contents could not be listed. New in version 2016.11.0. If a path that should be occupied by a file in the extracted result is instead a directory (or vice-versa), the state will fail. 
Set this argument to True to force these paths to be removed in order to allow the archive to be extracted. Warning Use this option very carefully. New in version 2016.11.0. Set this to True to force the archive to be extracted. This is useful for cases where the filenames/directories have not changed, but the content of the files has. New in version 2016.11.1. Set this to True to remove any top-level files and recursively remove any top-level directory paths before extracting. Note Files will only be cleaned first if extracting the archive is deemed necessary, either by paths missing on the minion, or if overwrite is set to True. New in version 2016.11.1. If True, and the archive is extracted, delete the parent directory (i.e. the directory into which the archive is extracted), and then re-create that directory before extracting. Note that clean and clean_parent are mutually exclusive. New in version 3000. The user. The group. If specified, this path will be checked, and if it exists then the archive will not be extracted. This path can be either a directory or a file, so this option can also be used to check for a semaphore file and conditionally skip extraction. Changed in version 2016.3.0: When used in combination with either user or group, ownership will only be enforced when if_missing is a directory. Changed in version 2016.11.0: Ownership enforcement is no longer tied to this argument, it is simply checked for existence and extraction will be skipped if it is present. Useful for archives with many files in them. This can either be set to True (in which case only the first 100 files extracted will be in the state results), or it can be set to an integer for more exact control over the max number of files to include in the state results. New in version 2016.3.0. Set to True for zip files to force usage of the archive.cmd_unzip function to extract. New in version 2016.11.0. For ZIP archives only. When using archive.unzip to extract ZIP archives, Salt works around an upstream bug in Python to set the permissions on extracted files/directories to match those encoded into the ZIP archive. Set this argument to False to skip this workaround. New in version 2016.11.0. This option will enforce a single directory at the top level of the source archive, to prevent extracting a 'tar-bomb'. Set this argument to False to allow archives with files (or multiple directories) at the top level to be extracted. New in version 2016.11.0. When user or group is specified, Salt will default to enforcing permissions on the file/directory paths detected by running archive.list on the source archive. Use this argument to specify an alternate directory on which ownership should be enforced. Note This path must be within the path specified by the name argument. New in version 2016.11.0. One of tar, zip, or rar. Changed in version 2016.11.0: If omitted, the archive format will be guessed based on the value of the source argument. If the minion is running a release older than 2016.11.0, this option is required. Examples tar with lzma (i.e. 
xz) compression: graylog2-server: archive.extracted: - name: /opt/ - source: - source_hash: md5=499ae16dcae71eeb7c3a30c75ea7a1a6 tar archive with flag for verbose output, and enforcement of user/group ownership: graylog2-server: archive.extracted: - name: /opt/ - source: - source_hash: md5=499ae16dcae71eeb7c3a30c75ea7a1a6 - options: v - user: foo - group: foo tar archive, with source_hash_update set to True to prevent state from attempting extraction unless the source_hash differs from the previous time the archive was extracted: graylog2-server: archive.extracted: - name: /opt/ - source: - source_hash: md5=499ae16dcae71eeb7c3a30c75ea7a1a6 - source_hash_update: True
https://docs.saltproject.io/en/3003/ref/states/all/salt.states.archive.html
2022-06-25T01:38:43
CC-MAIN-2022-27
1656103033925.2
[]
docs.saltproject.io
Sunlight Template Portal - Cloud edition This section describes the implementation of a new repository, in order to upload existing custom template images. This feature also enables the provision of a URL address for uploading a template image with custom preferences described in a .json format. For example: a. Provider: Sunlight b. Architecture: x86_64 c. Cluster type: vm d. OS Distribution: Ubuntu e. OS Version: 18.04 f. Minimum Ram (MB): 2048 g. Minimum Disk (GB): 20 Specify the Remote status as 'Enabled'. Select Browse, choose the required template tar file and press the SUBMIT button.
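As a rough illustration of what such a .json preferences description might contain (the key names below are assumptions for illustration only, not taken from the Sunlight documentation), the example values above could be serialized like this:

import json

# Hypothetical key names; the values mirror the example fields listed above.
template_preferences = {
    "provider": "Sunlight",
    "architecture": "x86_64",
    "cluster_type": "vm",
    "os_distribution": "Ubuntu",
    "os_version": "18.04",
    "minimum_ram_mb": 2048,
    "minimum_disk_gb": 20,
}

print(json.dumps(template_preferences, indent=2))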
https://docs.sunlight.io/template_portal/
2022-06-25T01:53:01
CC-MAIN-2022-27
1656103033925.2
[array(['../img/template_portal_login.png', 'Template Portal Login'], dtype=object) array(['../img/template_portal_main_page.png', 'Template Portal Main Page'], dtype=object) array(['../img/upload_template_form.png', 'Upload Template Form'], dtype=object) array(['../img/delete_image_template_portal.png', 'Delete image template portal'], dtype=object) array(['../img/download_image_template_portal.png', 'Download image template portal'], dtype=object) array(['../img/confirm_download_image_template_portal.png', 'Confim download image template portal'], dtype=object) array(['../img/progress_bar_image_template_portal.png', 'Progress bar image template download'], dtype=object) array(['../img/logout_template_portal.png', 'Upload Template Form'], dtype=object) ]
docs.sunlight.io
Example. How the Algorithm Works The Microsoft Neural Network algorithm uses a Multilayer Perceptron network, also called a Back-Propagated Delta Rule network, composed of up to three layers of neurons, or perceptrons. These layers are an input layer, an optional hidden layer, and an output layer. In a Multilayer Perceptron network, each neuron receives one or more inputs and produces one or more identical outputs. Each output is a simple non-linear function of the sum of the inputs to the neuron. Inputs only pass forward from nodes in the input layer to nodes in the hidden layer, and then finally they pass to the output layer; there are no connections between neurons within a layer. Using the Algorithm. See Also Concepts Data Mining Algorithms Feature Selection in Data Mining Using the Data Mining Tools Viewing a Mining Model with the Microsoft Neural Network Viewer Other Resources CREATE MINING MODEL (DMX) Help and Information Getting SQL Server 2005 Assistance
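As a generic illustration of the forward pass described above (this is not Microsoft's implementation; the logistic squashing function and the weight values are illustrative assumptions), a two-layer pass might be sketched as:

import math

def neuron(inputs, weights, bias):
    # Each output is a non-linear function of the weighted sum of the inputs.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Inputs only pass forward: input layer -> hidden layer -> output layer,
# with no connections between neurons within a layer.
inputs = [0.2, 0.7]
hidden = [neuron(inputs, [0.5, -0.3], 0.1), neuron(inputs, [-0.8, 0.4], 0.0)]
output = neuron(hidden, [1.2, -0.6], -0.4)
print(output)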
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms174941(v=sql.90)
2018-04-19T15:55:44
CC-MAIN-2018-17
1524125936981.24
[]
docs.microsoft.com
JSON Web Token (JWT) is used to represent claims that are transferred between two parties, such as the end user and the app implementation. The app implementation uses information such as logging, content filtering, and authentication/authorization that is stored in this token. The token is Base64-encoded and sent to the app implementation in an HTTP header variable. The JWT is self-contained and is divided into three parts: the header, the payload, and the signature. For more information on JWT, see. To authenticate end users, the App Manager passes attributes of the app invoker to the backend app implementation using JWT. In most production deployments, service calls go through the App Manager or a proxy service. If you enable JWT generation in the App Manager, each app request will carry a JWT to the back-end service. When the request goes through the App/appm", "exp":1345183492181, "":"admin", "":"app2", "":"/placeFinder", "":"1.0.0", "":"Silver", "":"peter" } The above token contains: - Token expiration time ("exp") - Subscriber to the API, usually the app developer ("") - Application through which API invocation is done ("") - Context of the API ("") - API version ("") - Tier/price band for the subscription ("") - End user of the app whose action invoked the API ("") Information on how to enable and pass information in the JWT in the App Manager is described below. Configuring JWT Before passing end user attributes, you enable and configure the JWT implementation in the <AppM_HOME>/repository/conf/app-manager.xml file. The relevant elements are described below. If you do not configure these elements, they take their default values. Change the value of the <AddClaimsSelectively> element to true to send the claims you select in the Step 4 - Advanced Configuration of creating the Web app to the backend using JWT. By default, this is set to false to send all the claims that are associated to the user profile. For more information on claims, see Claim Management. Follow the steps below to view the claims that are associated to a user role by default. - Log in to the AppM management console using the following URL: https://<AppM_HOST>:<AppM_PORT>/carbon/ - Click Configure, then click Users and Roles. - Click Users, and then click User Profile of the corresponding user. Similar to passing end user attributes to the backend using JWT, you can pass authentication information to the backend of AppM by returning a SAML response. To enable returning the SAML response to the backend, set the value of the <AddSAMLResponseHeaderToOutMessage> property to true in the <AppM_HOME>/repository/conf/app-manager.xml file.
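Because the JWT is simply three Base64url-encoded, dot-separated parts, the claims in its payload can be inspected with the Python standard library alone; the sketch below is a generic illustration, not WSO2 code (the X-JWT-Assertion header name is an assumption and should be confirmed against your App Manager setup):

import base64
import json

def decode_jwt_payload(token: str) -> dict:
    # A JWT has three Base64url-encoded parts separated by dots:
    # header.payload.signature; the payload carries the claims shown above.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example usage (header name assumed):
# claims = decode_jwt_payload(request.headers["X-JWT-Assertion"])
# print(claims["exp"])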
https://docs.wso2.com/display/APPM110/Passing+End+User+Attributes+to+the+Backend+Using+JWT
2018-04-19T15:45:04
CC-MAIN-2018-17
1524125936981.24
[]
docs.wso2.com
After: - Visualizing Results - Creating Alerts - Communicating Results Through REST API - Analytics JavaScript (JS) API Overview
https://docs.wso2.com/display/DAS300/Communicating+Results
2018-04-19T15:39:38
CC-MAIN-2018-17
1524125936981.24
[array(['https://docs.wso2.com/download/attachments/24970704/pdf-white-icon.png?api=v2', 'Download PDF icon'], dtype=object) ]
docs.wso2.com
Configure the Web Server to Redirect Requests to an Exact Destination (IIS 7) Applies To: Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Vista Configure the redirection destination to be an exact destination when you want to change the default redirection behavior. When you configure the destination to be an exact destination, all incoming requests are redirected to the exact destination instead of the relative destination. This is useful when you want all requests to be redirected to the same Web page, such as when a site is down for maintenance or when it is undergoing construction. Note You must first enable redirection and configure the redirection destination. For more information about how to enable redirection and configure the destination, see Configure the Web Server to Redirect Requests to a Relative Destination (IIS 7). Then, under Redirect Behavior, select Redirect all requests to exact destination (instead of relative to destination). In the Actions pane, click Apply. Command Line To configure the redirection destination to be an exact destination, use the following syntax: appcmd set config /section:httpRedirect /exactDestination:true | false By default, this attribute is false, but you can specify true for the exactDestination attribute. To do this, type the following at the command prompt, and then press ENTER: appcmd set config /section:httpRedirect /exactDestination:true For more information about Appcmd.exe, see Appcmd.exe (IIS 7). Configuration The procedure in this topic affects the following configuration elements: <exactDestination> attribute of the <httpRedirect> element WMI Use the following WMI classes, methods, or properties to perform this procedure: - HttpRedirectSection.ExactDestination Redirection in IIS 7
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770409(v=ws.10)
2018-04-19T16:43:03
CC-MAIN-2018-17
1524125936981.24
[]
docs.microsoft.com
Dear OA LodgeMaster Users, It’s finally the time we all have been waiting for - the next version of OA LodgeMaster is nearly ready for general release! Thank you to everyone who has waited patiently as we’ve developed the next great version of OA LodgeMaster and thanks to the users who have helped us test the new system over the past two months. After nearly three years in development, and nearly two months in beta testing, we are almost ready to release the next version of the Order of the Arrow LodgeMaster system. We have worked hard to ensure the new version is built to provide a robust and easy to use platform. We also took the time to incorporate more than 100 user requested enhancements to make the platform easier to use. The new LodgeMaster client has been completely rewritten from the ground up. It completely removes the need for Silverlight - instead the new version is completely web-based and runs in any modern web browser, including mobile phones and tablets. While the functionality is like the current version, the look of the OA LodgeMaster client has been modernized - content dynamically scales to fit any size screen. The new Lodge Client will be new yet familiar to all users. OA LodgeMaster also now leverages the new ArrowID single sign-on system - you will be able to sign into LodgeMaster with your existing ArrowID account. Users will connect their ArrowID to their old account by signing in with the old username and password once; from then on you’ll just need your ArrowID. New users will be invited by email instead of creating a username and password for them. We’ll have many more details on the enhancements we’ve made to the client with the final release and starting at the beginning of May we’ll begin hosting a series of webinars on the new version. We’ll also be hosting a series of trainings at NOAC this summer. Important Release Notes: - The entire OA LodgeMaster system will be unavailable Friday, April 13, 2018 – Sunday, April 15, 2018 while the LodgeMaster team performs the system upgrades. - All offline data MUST be synchronized with the OA LodgeMaster server by Thursday, April 12, 2018 at 11:59pm eastern. Any data not synchronized by the 12th will not be available after the OA LodgeMaster release. Your old offline database WILL NOT be compatible with the new version of OA LodgeMaster. Any changes after the 12th at 11:59pm eastern that are made in the old program will need to be manually re-done in the new system. - OA LodgeMaster 4 will be available on Monday, April 16, 2018. OA LodgeMaster should be considered offline and unavailable until Monday morning. - Users with the offline client will need to uninstall the OA LodgeMaster Lodge Client 3.3.11 before installing the 4.0.0 Lodge Client. - During the upgrade window, status updates will be posted to the OA Status Page. The status page is located at status.oa-bsa.org. Due to the size of the project, a few of the modules have not been moved into the new OA LodgeMaster Client. The finance and inventory modules will remain on the Silverlight platform for a short time more. We expect to release the inventory module along with the 2018 charter and JTE forms around a month after the initial release. We will continue to support any Lodge that uses the finance or inventory modules until we move them over to the LodgeMaster 4 platform. We are excited for this major release of OA LodgeMaster. We have spent the past 3 years building and creating a new platform for the Order of the Arrow’s membership and event management. 
We are thrilled to be able to offer this program, free of charge, to Lodges across the country. Over the next two weeks, expect more email reminders and look for updates to be posted to status.oa-bsa.org. Yours in Brotherhood, The OA LodgeMaster Team Chadd Blanchard, Project Lead Michael Card, Project Adviser Robert Anstett, Development Lead Mike Gaffney, Support Lead
https://docs.oa-bsa.org/display/OALMLC/2018/04/01/LodgeMaster+4+Release+Information
2018-04-19T15:51:55
CC-MAIN-2018-17
1524125936981.24
[]
docs.oa-bsa.org
There are several pieces of the RightScale system that use a structured hierarchy to provide a logical system of inheritance. It's important to understand the different levels of inheritance so that you can effectively design and manage your RightScale deployments. A misunderstanding or misuse of the inheritance rules often leads to problems such as stranded servers, failed scripts, and inconsistent/inaccurate configurations. It's recommended that each member of your team be educated on the topics below.
http://docs.rightscale.com/cm/rs101/laws_of_inheritance_and_hierarchy.html
2018-04-19T15:28:10
CC-MAIN-2018-17
1524125936981.24
[]
docs.rightscale.com
.Model Assembly: AWSSDK.dll Version: (assembly version) The CreateLaunchConfigurationRequest type exposes the following members .NET Framework: Supported in: 4.5, 4.0, 3.5 .NET for Windows Store apps: Supported in: Windows 8.1, Windows 8 .NET for Windows Phone: Supported in: Windows Phone 8.1, Windows Phone 8
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TAutoScalingCreateLaunchConfigurationRequestNET45.html
2018-04-19T15:52:49
CC-MAIN-2018-17
1524125936981.24
[]
docs.aws.amazon.com
Install metasfresh on Docker These ports are used additionally in order to use the java client: They are exposed to the docker host in the docker-compose_javaclient.yml file. Accordingly these ports should not be in use by other programs on the docker host, otherwise the docker image will not boot. By using docker-compose --file one can control which configuration file should be used (i.e. docker-compose_javaclient.yml). # change to the docker dir cd metasfresh-docker # check if metasfresh-docker still running docker-compose ps # if metasfresh-docker is still running, stop it and remove the images docker-compose down # start metasfresh-docker with access to the java client docker-compose --file docker-compose_javaclient.yml up -d # check if all docker images were booted correctly docker-compose ps # all images must show the status "Up" Ensure that your computer can resolve the dockerhost by a DNS name (i.e. MYDOCKERHOST), e.g. by adding the servername with the IP of the dockerhost to your local host file. Additionally your computer must be able to reach the database directly. Add the hostname db to your local host file and set it to the IP address of your dockerhost. Now download the java client via and install and use it as usual If you have questions or problems, just ask for support in the public forum: forum.metasfresh.org
http://docs.metasfresh.org/howto_collection/EN/How_do_I_use_Java_Client_using_Docker.html
2018-04-19T15:25:20
CC-MAIN-2018-17
1524125936981.24
[]
docs.metasfresh.org
Objective To grant another RightScale account access to your registered vSphere cloud so that users within that other RightScale account will be able to launch and manage instances (VMs) in your vSphere environment. Prerequisites - Registered vSphere (cloud) environment. See Register a vSphere Cloud with RightScale. - Log in access to the RightScale Cloud Appliance for vSphere (RCA-V) interface. Steps Log in to the RCA-V interface that is connected to the vSphere environment to which you're going to grant access. Enter the RCA-V's IP address of the instance in a browser window. Use the IP address from the previous step. (e.g. 10.100.100.90) - Username: rightscale Password: vscale2013@@ (default) Note: The person who initially set up the RCA-V should have changed the default password. Please contact that person directly if you need to retrieve the password. Go to Cloud Configuration > Tenants. Click Add Tenant. - Tenant Name - Provide a name for the tenant and click Continue. (e.g. Development) Save the Tenant Name. - Tenant Password - Create a password for the tenant. - Add Zone - Select and add one or more zones to associate with the tenant. Remember, you can create multiple tenants which leverage resources in the same zones (i.e. datacenter:cluster combination). Since multiple RightScale accounts have access to essentially the same pool of VMs, you may want users to see other active VMs that were launched from outside of their account. Use the See Unowned VMs checkbox to specify whether or not users will also see other VMs. Click Save. In order to grant another RightScale account access to your vSphere cloud environment, you must give the following information to a user (within that RightScale account) that has 'admin' user permissions. (Note: Only 'admin' users can add cloud credentials to a RightScale account.) Cloud Token - In the RightScale Cloud Management Dashboard go to Settings > Account Settings > Administered Clouds. Tenant Name and Tenant Password. A tenant may only be associated with a single RightScale account.
http://docs.rightscale.com/rcav/v2.0/rcav_grant_account.html
2018-04-19T15:17:43
CC-MAIN-2018-17
1524125936981.24
[]
docs.rightscale.com
Perl 5 to Perl 6 guide - Syntax Syntactic differences between Perl 5 and Perl 6. perlsyn - Perl syntax DESCRIPTION A (hopefully) comprehensive description of the differences between Perl 5 and Perl 6 with regards to the syntax elements described in the perlsyn document. NOTE I will not be explaining Perl 6 syntax in detail. This document is an attempt to guide you from how things work in Perl 5 to the equivalents in Perl 6. For full documentation on the Perl 6 syntax, please see the Perl 6 documentation. Free Form Perl 6 is still largely free form. However, there are a few instances where the presence or lack of whitespace is now significant. For instance, in Perl 5, you can omit a space following a keyword (e. g. while($x < 5) or my($x, $y)). In Perl 6, that space is required, thus while ($x < 5) or my ($x, $y). In Perl 6, however, you can omit the parentheses altogether: while $x < 5 . This holds for if, for, etc. Oddly, in Perl 5, you can leave spaces between an array or hash and its subscript, and before a postfix operator. So $seen {$_} ++ is valid. No more. That would now have to be %seen{$_}++. If it makes you feel better, you can use backslashes to "unspace" whitespace, so you can use whitespace where it would otherwise be forbidden. See Whitespace for details. Declarations As noted in the Functions guide, there is no undef in Perl 6. A declared, but uninitialized scalar variable will evaluate to its type. In other words, my $x;say $x; will give you "(Any)". my Int $y;say $y; will give you "(Int)". # starts a comment that runs to the end of the line as in Perl 5. Embedded comments start with a hash character and a backtick ( #`), followed by an opening bracketing character, and continue to the matching closing bracketing character. Like so: if #`( why would I ever write an inline comment here? ) True As in Perl 5, you can use pod directives to create multiline comments, with =begin comment before and =end comment after the comment. Truth and Falsehood The one difference between Perl 5 truth and Perl 6 truth is that, unlike Perl 5, Perl 6 treats the string "0" as true. Numeric 0 is still false, and you can use prefix + to coerce string "0" to numeric to get it to be false. Perl 6, additionally has an actual Boolean type, so, in many cases, True and False may be available to you without having to worry about what values count as true and false. Statement Modifiers Mostly, statement modifiers still work, with a few exceptions. First, for loops are exclusively what were known in Perl 5 as foreach loops and for is not used for C-style for loops in Perl 6. To get that behavior, you want loop. loop cannot be used as a statement modifier. In Perl 6, you cannot use the form do {...} while $x. You will want to replace do in that form with repeat. Similarly for do {...} until $x. Compound Statements The big change from Perl 5 is that given is not experimental or disabled by default in Perl 6. For the details on given see this page. Loop Control last, and redo have not changed from Perl 5 to Perl 6. continue, however, does not exist in Perl 6. You would use a NEXT block in the body of the loop. # Perl 5my = '';for (1..5)continue # Perl 6my = '';for 1..5 For Loops As noted above, C-style for loops are not called for loops in Perl 6. They are just loop loops. To write an infinite loop, you do not need to use the C idiom of loop (;;) {...}, but may just omit the spec completely: loop {...} Foreach Loops In Perl 5, for, in addition to being used for C-style for loops, is a synonym for foreach. 
In Perl 6, for is just used for foreach style loops. Switch Statements Perl 6 has actual switch statements, provided by given with the individual cases handled by when and default. The basic syntax is: given EXPR The full details can be found here. Goto goto probably works similarly in Perl 6 to the way it does in Perl 5. However, as of this writing, it does not seem to be functional. For what is planned for goto, see. The Ellipsis Statement ... (along with !!! and ???) are used to create stub declarations. This is a bit more complicated than the use of ... in Perl 5, so you'll probably want to look at for the gory details. That said, there doesn't seem to be an obvious reason why it shouldn't still fulfill the role it did in Perl 5, despite its role being expanded in Perl 6. PODs: Embedded Documentation Pod has changed between Perl 5 and Perl 6. Probably the biggest difference is that you need to enclose your pod between =begin pod and =end pod directives. There are a few tweaks here and there as well. For instance, as I have discovered while writing these documents, the vertical bar ("|") is significant in X<> codes, and it's not clear how to get a literal "|" into them. Your best bet may be to use the Perl 6 interpreter to check your pod. You can do this by using the --doc switch. E. g. perl6 --doc Whatever.pod. This will output any problems to standard error. (Depending on how/where you've installed perl6, you may need to specify the location of Pod::To::Text.) Details on Perl 6 style pod is at.
https://docs.perl6.org/language/5to6-perlsyn
2018-04-19T15:41:09
CC-MAIN-2018-17
1524125936981.24
[]
docs.perl6.org
- Release Notes - What's New - FAQs - Licensing - Updating and Upgrading to NetScaler SD-WAN 9.3 - Single-Step Upgrade for SD-WAN Appliances - Before You Begin - Getting Started by Using NetScaler SD-WAN - About SD-WAN VPX Standard Edition - Deployment - Deployment use Cases - SD-WAN Overlay Routing - High Availability Deployment - Configuration - Basic Configuration Mode - How-To-Articles - Virtual Routing and Forwarding - - NetScaler SD-WAN WANOP 9.3 - The WANOP Client Plug-in - Configuring Service Class Association with SSL Profiles - Standard MIB Support - Best Practices - Security - Reference Material - Hardware platforms -
https://docs.citrix.com/en-us/netscaler-sd-wan/9-3/sd-wan-auto-secure-peering-manual-secure-peering-deployments.html
2018-04-19T15:50:52
CC-MAIN-2018-17
1524125936981.24
[]
docs.citrix.com
template<class Arg> class OEMatchFunc : public OESystem::OEUnaryPredicate<Arg> This class represents the OEMatchFunc template functor that identifies objects (OEAtomBase, OEBondBase) that match a query substructure specified at construction. See also The following methods are publicly inherited from OEUnaryPredicate: The following methods are publicly inherited from OEUnaryFunction: The following specializations exist for this template: OEMatchFunc(const char *smarts) Constructs the functor with the internal substructure search specified by the ‘smarts’ (SMARTS) pattern. OEMatchFunc(const OEQMolBase &qmol) Constructs the functor with the internal substructure search specified by the ‘qmol’ query molecule. OEMatchFunc(const OEMolBase &mol, unsigned int aexpr, unsigned int bexpr) Constructs the functor with the internal substructure search that is created from ‘mol’ using the specified atom and bond expressions. bool operator()(const Arg &arg) const Returns true if the ‘arg’ object is matched by the query molecule defined at construction. operator bool() Returns true if the substructure search object was initialized correctly. OESystem::OEUnaryFunction<Arg, bool> *CreateCopy() const Deep copy constructor that returns a copy of the object. The memory for the returned OEMatchFunc object is dynamically allocated and owned by the caller.
https://docs.eyesopen.com/toolkits/python/oechemtk/OEChemClasses/OEMatchFunc.html
2019-05-19T17:45:59
CC-MAIN-2019-22
1558232255071.27
[]
docs.eyesopen.com
Gplus Adapter for Salesforce As an agent, you’ll be handling calls and making sure that you keep on top of your KPIs. Gplus Adapter is your softphone for handling calls (both inbound and outbound) and other interactions, such as chat or emails. The softphone is launched from your Contact management or ticket management system. Important: What you see in the adapter depends on your contact center and your role within it, so you might not be able to do or see all the things covered in this help. If you think you should be able to do or see something you can't, check with your supervisor or system administrator. To get quickly up and running with your Gplus Adapter for Salesforce, see Getting Started.
https://docs.genesys.com/Documentation/PSAAS/latest/Agent/GPAStandardMode
2019-05-19T16:50:06
CC-MAIN-2019-22
1558232255071.27
[]
docs.genesys.com
A Direct Debit is a financial transaction in which one person withdraws funds from another person’s bank account. The Direct Debit can only collect payments from (IBAN) private banking accounts. Direct Debit therefore supports the payment methods SOFORT Banking and iDEAL. This payment method gives you the possibility to collect funds from your customer's bank account on a recurring basis. SEPA Direct Debit is a standard across Europe to collect funds from your customers by means of previous authorization. Customers can withdraw this payment for 56 days, without giving a reason or you having a possibility to appeal.
https://docs.multisafepay.com/payment-methods/direct-debit/what-is-direct-debit/
2019-05-19T16:47:49
CC-MAIN-2019-22
1558232255071.27
[]
docs.multisafepay.com
Understanding LDAP integration An LDAP integration allows your instance to use your existing LDAP server as the master source of user data. LDAP integration prerequisites LDAP integration timing LDAP integrations are usually done before the instance Go Live, but can be integrated at any time. LDAP server data integrity Some users are concerned about. Most changes (including additions) to your LDAP server are available to the instance within seconds, depending on how many components of the full LDAP integration are in place. To keep LDAP records synchronized, schedule a periodic scan of the LDAP server to pick up changes. Security. Importing LDAP data to the instance It is recommended that attributes are defined to import only required data. Defined attributes get mapped into the instance user database. We cannot answer the question of which specific attributes are needed because this is determined by the scope of the project and business requirements. Supported types of LDAP servers. LDAP single-sign-on Along with the data population functionality provided with the LDAP import, you can use the External Authentication functionality supported by the application to prevent your users from needing to sign on each time. Multiple LDAP domains The recommended method for handling multiple domains is to create a separate LDAP server record for each domain. Each LDAP server record must point to a domain controller for that domain. This means the local network must allow connections to each of the domain controllers. After expanding to more than one network domain, it is critical that you identify unique LDAP attributes for the application usernames and import coalesce values. A common unique coalesce attribute for Active Directory is objectSid. Unique usernames may vary based on the LDAP data design. Common attributes are email or userPrincipalName. Handling query limits. LDAP query type If an LDAP password is supplied then a "Simple Bind" is performed. If no LDAP password is supplied then "none" is used, in which case the LDAP server must allow anonymous login. LDAP authentication We use provided service account credentials for LDAP to retrieve the user DN from the LDAP server. Given the DN value for the user, we then rebind with LDAP given the user's DN and the provided password. Password storage The password that the user enters is contained entirely in their HTTPS session. We do not store that password anywhere. Setting up.
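The bind-then-rebind flow described under "LDAP authentication" can be sketched generically in Python with the ldap3 package; this is an illustration of the general LDAP pattern, not ServiceNow code, and the server name, DNs, filter, and passwords are placeholders:

from ldap3 import Server, Connection

server = Server("ldaps://ldap.example.com", use_ssl=True)

# 1. Simple bind with the service account and look up the user's DN.
svc = Connection(server, user="cn=svc_account,ou=service,dc=example,dc=com",
                 password="service-password", auto_bind=True)
svc.search("dc=example,dc=com", "(userPrincipalName=jdoe@example.com)",
           attributes=["objectSid", "mail"])
user_dn = svc.entries[0].entry_dn

# 2. Rebind as that DN with the password the user entered to authenticate them.
user_conn = Connection(server, user=user_dn, password="password-entered-by-user")
print(user_conn.bind())  # True if the credentials are valid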
https://docs.servicenow.com/bundle/london-platform-administration/page/integrate/ldap/reference/r_LDAPIntegrationFAQs.html
2019-05-19T16:57:04
CC-MAIN-2019-22
1558232255071.27
[]
docs.servicenow.com
Cloud Download authentication: Authentication is the security process that validates the claimed identity of a device, entity or person, relying on one or more characteristics bound to that device, entity or person. Authorization: Parses the network to allow access to some or all network functionality by providing rules and allowing or denying access based on a subscriber's profile and services purchased. Infrastructure Deep Packet Inspection: DPI provides techniques to analyze the payload of each packet, adding an extra layer of security. DPI can detect and neutralize attacks that would be missed by other security mechanisms. DoS protection prevents the infrastructure from being made inaccessible for a period of time. Scanning tools such as SAST and DAST assessments perform vulnerability scans on the source code and data flows of web applications. Many of these scanning tools run different security tests that stress applications under certain attack scenarios to discover security issues. IDS & IPS: IDS detect and log inappropriate, incorrect, or anomalous activity. IDS can be located in the telecommunications networks and/or within the host server or computer. Telecommunications carriers build intrusion detection capability into all network connections to routers and servers, as well as offering it as a service to enterprise customers. Once IDS systems have identified an attack, IPS ensures that malicious packets are blocked before they cause any harm to backend systems and networks. IDS typically functions via one or more of three systems: - Pattern matching. - Anomaly detection. - Protocol behavior. Transport For data transport, it is necessary to encrypt data end-to-end. To prevent MITM attacks, no third party should be able to interpret transported data. Another aspect is data anonymization in order to prevent the leakage of private information about the user or any other third party. The use of standards such as IPSec provides "private and secure communications over IP networks, through the use of cryptographic security services"; IPSec is a set of protocols using algorithms to transport secure data over an IP network. In addition, IPSec operates at the network layer of the OSI model, contrary to previous standards that operate at the application layer. This makes its application independent and means that users do not need to configure each application to IPSec standards. IPSec provides the services below: - Confidentiality: A service that makes it impossible for anyone other than the recipient to interpret the data. It is the encryption function that provides this service by transforming intelligible (unencrypted) data into unintelligible (encrypted) data. - Authentication: A service that ensures that a piece of data comes from where it is supposed to come from. - Integrity: A service that ensures that data has not been tampered with, accidentally or fraudulently. - Replay Protection: A service that prevents attacks by re-sending a valid intercepted packet to the network for the same authorization. This service is provided by the presence of a sequence number. - Key management: Mechanism for negotiating the length of encryption keys between two IPSec elements and exchanging these keys. An additional means of protection would be to monitor traffic between users and the cloud, as a CASB provides.
http://docs.automotivelinux.org/docs/en/master/architecture/reference/security/part-7/3-Cloud.html
2019-05-19T16:44:54
CC-MAIN-2019-22
1558232255071.27
[]
docs.automotivelinux.org
Boolean Boolean is the shared name used by Elements on all platforms and languages for the type used to represent a boolean value of either true or false. Booleans are value types. The Boolean type is defined in the RemObjects.Elements.System namespace – which is in scope by default and does not have to be used/imported manually. However, the namespace name can be used to prefix the type names, where necessary, to avoid ambiguities. In C#, the bool keyword can also be used to refer to this type, and Swift defines the Bool alias in the Swift namespace via the Swift Base Library.
https://docs.elementscompiler.com/API/StandardTypes/Boolean/
2019-05-19T17:22:41
CC-MAIN-2019-22
1558232255071.27
[]
docs.elementscompiler.com
3.8 Release notes
This document introduces new features and improvements included in Interana release 3.8.
New features
Interana release 3.8 includes the following new features.
Interana's table view can be particularly useful for quick ad hoc analyses. In this example, two boolean filter properties are used in the split-by clause to create a quick crosstab. Using the Chart Options menu to define the value to pivot on, this example shows the audience overlap for segments of music app users who have (and have not) listened to songs by two different artists.
Another addition is available from Interana's Info section. The beta implementation of this feature accepts Markdown and is read-only to all users. Currently, the only means of updating content is via email with your Interana technical account manager.
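For readers who want to reproduce this kind of crosstab outside the product, the short Python sketch below uses pandas purely as an illustration of the same analysis; the column names and sample data are invented, and this is not Interana query syntax.

import pandas as pd

# Hypothetical per-user boolean properties, analogous to the two filter
# properties used in the split-by clause described above.
events = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "listened_artist_a": [True, True, False, True, False],
    "listened_artist_b": [True, False, True, True, False],
})

# Count users in each combination of the two boolean properties (the overlap).
print(pd.crosstab(events["listened_artist_a"], events["listened_artist_b"]))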
https://docs.interana.com/3/3.x_Release_Notes/3.8_Release_notes
2019-05-19T16:33:00
CC-MAIN-2019-22
1558232255071.27
[]
docs.interana.com
db.watch()
Definition
db.watch(pipeline, options)
Behavior
- You cannot run db.watch() on the admin, local, or config database.
- You can run db.watch() for a database that does not exist. However, once the database is created and you drop the database, the change stream cursor closes.
- db.watch() is available for replica sets and sharded clusters:
  - For a replica set, you can issue db.watch() on any data-bearing member.
  - For a sharded cluster, you must issue db.watch() on a mongos instance.
- You can only use db.watch() with the WiredTiger storage engine.
If the oplog entry associated with the resumeAfter option has already dropped off the oplog, db.watch() cannot resume the change stream. See Resume a Change Stream for more information on resuming a change stream.
Note
- You cannot resume a change stream after an invalidate event (for example, a collection drop or rename) closes the stream.
- If the deployment is a sharded cluster, a shard removal may cause an open change stream cursor to close, and the closed change stream cursor may not be fully resumable.
Resume Token
Change streams are only available if "majority" read concern support is enabled (default).
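For comparison, a roughly equivalent call from a driver is sketched below using PyMongo; it assumes PyMongo 3.7 or later, a replica set deployment, and placeholder connection and database names.

from pymongo import MongoClient

# Placeholder connection string; change streams require a replica set or sharded cluster.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client["mydb"]

# Watch all collections in the database, filtering to insert events only.
pipeline = [{"$match": {"operationType": "insert"}}]
with db.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change["documentKey"])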
https://docs.mongodb.com/manual/reference/method/db.watch/
2019-05-19T17:30:34
CC-MAIN-2019-22
1558232255071.27
[]
docs.mongodb.com
user agent utils enrichment
Warning: This enrichment is deprecated. The library powering this enrichment has now been declared as end of life and won't receive any updates. Because this could one day result in a potential security issue (due to lack of maintenance), we encourage everyone to move away from this enrichment in favor of the ua parser enrichment. However, please keep in mind that this is not a drop-in replacement: the ua parser enrichment will fill an external com_snowplowanalytics_snowplow_ua_parser_context_1 table (you can find the Iglu schema here) instead of a few fields in atomic.events. In the future, we will move the fields affected by the user agent utils enrichment out of atomic.events into their own context and, as a follow up, remove it entirely.
JSON Schema iglu:com.snowplowanalytics.snowplow/user_agent_utils_config/jsonschema/1-0-0
Compatibility r63+
Data provider user-agent-utils
This enrichment uses the user-agent-utils library to parse the useragent into the following fields:
br_name
br_family
br_version
br_type
br_renderengine
os_name
os_family
os_manufacturer
dvce_type
dvce_ismobile
This enrichment has no special configuration: it is either off or on. Enable it with the following JSON:
{
  "schema": "iglu:com.snowplowanalytics.snowplow/user_agent_utils_config/jsonschema/1-0-0",
  "data": {
    "vendor": "com.snowplowanalytics.snowplow",
    "name": "user_agent_utils_config",
    "enabled": true,
    "parameters": {}
  }
}
Note: As an alternative solution, you could enable the ua-parser enrichment either in place of this enrichment or as an accompanying enhancement. There's no conflict here, as the output data of these enrichments will end up in different tables.
The input value for the enrichment comes from the ua parameter, which is mapped to the useragent field in the atomic.events table. This enrichment uses the 3rd party user-agent-utils library to parse the useragent string. Below is the summary of the fields in the atomic.events table driven by the result of this enrichment (no dedicated table).
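As a point of comparison, the ua-parser family of libraries that backs the recommended ua parser enrichment exposes similar information; the Python sketch below uses the uap-python package and an arbitrary useragent string purely for illustration, and is not part of the Snowplow pipeline itself.

from ua_parser import user_agent_parser  # pip install ua-parser

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36")

parsed = user_agent_parser.Parse(ua)
print(parsed["user_agent"]["family"])  # browser family, e.g. Chrome
print(parsed["os"]["family"])          # OS family, e.g. Windows
print(parsed["device"]["family"])      # device family, e.g. Other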
https://docs.snowplowanalytics.com/open-source/snowplow/configurable-enrichments/user-agent-utils-enrichment/
2019-05-19T16:20:15
CC-MAIN-2019-22
1558232255071.27
[]
docs.snowplowanalytics.com
Snowplow Event Recovery
Since all Snowplow pipelines are non-lossy, if something goes wrong during, for example, schema validation or enrichment, the payloads (alongside the errors that happened) are stored as bad rows instead of being lost.
Recovery scenarios
The main ideas behind recovery, as presented here, are recovery scenarios. What are recovery scenarios? They are modular and composable processing units that will deal with a specific case you want to recover from. As such, recovery scenarios are, at their essence, made of two things:
- an error filter, which will serve as a router between bad rows and the appropriate recovery scenario
- a mutate function, which will modify the payload so that it can be processed successfully
Out of the box recovery scenarios
For the most common recovery scenarios, it makes sense to support them out of the box and make them accessible through the recovery job's configuration, which is covered in the next section. Note that, for every recovery scenario leveraging a regex, it's possible to use capture groups. For example, to remove brackets but keep their content we would have a toReplace argument containing \\{.*\\} and a replacement argument containing $1 (capture groups are one-based numbered).
Custom recovery scenario
If your recovery scenario is not covered by the ones listed above, you can define your own by extending RecoveryScenario. To extend RecoveryScenario you will need two things:
- an error which will be used to filter the incoming bad rows
- a mutate function which will be used to actually modify the collector payload
As an example, we can define a path mutating recovery scenario in RecoveryScenario.scala:
final case class ReplaceInPath( error: String, toReplace: String, replacement: String ) extends RecoveryScenario { def mutate(payload: CollectorPayload): CollectorPayload = { if (payload.path != null) payload.path = payload.path.replaceAll(toReplace, replacement) payload } }
If you think your recovery scenario will be useful to others, please consider opening a pull request!
Configuration
Once you have identified the different recovery scenarios you will want to run, you can construct the configuration that will be leveraging them.
Technical details
Spark for AWS real-time
The Spark job reads bad rows from an S3 location and stores the recovered payloads in another S3 location. To build the fat jar, run: sbt "project spark" assembly.
Options
Spark for AWS real-time
Beam for GCP real-time
Using a Docker container (for which the image is available in our registry on Bintray here).
Testing
You'll need to have cloned this repository to run these tests, and to have downloaded SBT.
A complete recovery
You can test a complete recovery, starting from bad rows to getting the data enriched, by:
- Modifying the bad_rows.json file, which should contain examples of bad rows you want to recover
- Adding your recovery scenarios to recovery_scenarios.json
- Filling out the payloads you're expecting to generate after the recovery is run in expected_payloads.json. Here you have the choice of specifying a payload containing a querystring or a payload containing a body.
- If your recovery is relying on specific Iglu repositories in addition to Iglu Central, you'll need to specify those repositories in resolver.json
- If your recovery is relying on specific enrichments, you'll need to add them to enrichments.json
Once this is all done, you can run sbt "project core" "testOnly *IntegrationSpec". 
What this process will do is: - Run the recovery on the bad rows contained in bad_rows.jsonaccording to the configuration in recovery_scenarios.json - Check that the recovered payloads outputted by the recovery conform to the contents of the expected payloads in expected_payloads.json - Check that these recovered payloads pass enrichments, optionally leveraging the additional Iglu repositories and enrichments A custom recovery scenario¶ If you’ve written an additional recovery scenario you’ll need to add the corresponding unit tests to RecoverScenarioSpec.scala and then run sbt test.
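To summarize the error-filter/mutate idea in a language-neutral way, here is a small Python sketch; the real recovery scenarios are Scala classes operating on Snowplow's CollectorPayload, so the dictionary shapes, field names and error string below are invented for illustration only.

def matches(bad_row, error_fragment):
    """Error filter: route a bad row to a scenario if any of its errors contains the fragment."""
    return any(error_fragment in err for err in bad_row["errors"])

def replace_in_path(payload, to_replace, replacement):
    """Mutate function: rewrite the collector path, mirroring the ReplaceInPath scenario above."""
    payload["path"] = payload["path"].replace(to_replace, replacement)
    return payload

bad_row = {
    "errors": ["does not match (/)vendor/version(/) pattern"],  # invented error text
    "payload": {"path": "/com.acme/v1"},
}

if matches(bad_row, "does not match"):
    print(replace_in_path(bad_row["payload"], "/v1", "/1"))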
https://docs.snowplowanalytics.com/open-source/snowplow/snowplow-event-recovery/0.1.0/
2019-05-19T17:32:42
CC-MAIN-2019-22
1558232255071.27
[]
docs.snowplowanalytics.com
While the abstract class mentioned above is provided as a convenience, you can add any Advice to the chain, including a transaction advice.
(The code samples on this page, including the handler classes and a <delayer> element configured with id="delayer", input-channel="input", output-channel="output" and an 'expression' attribute or 'expression' sub-element based on message headers, did not survive extraction.)
https://docs.spring.io/spring-integration/docs/4.1.x/reference/html/messaging-endpoints-chapter.html
2019-05-19T16:36:14
CC-MAIN-2019-22
1558232255071.27
[]
docs.spring.io
Specifies which codecs are allowed. If no codecs are specified, BVR uses PCMA and PCMU (in that order). This section can appear multiple times in the config. The order of codecs in bvr.config is important in certain cases.
This section is OPTIONAL; MANDATORY parameters only need to be specified if this section exists.
The header for this section is [[codecs]]
http://docs.blueworx.com/BVR/InfoCenter/V7/Linux/help/topic/com.ibm.wvrlnx.config.doc/lnx_config_options_bvr_codecs.html
2019-05-19T17:24:50
CC-MAIN-2019-22
1558232255071.27
[]
docs.blueworx.com
Installation
DebOps scripts are distributed on PyPI, the Python Package Index. They can be installed using the pip command: pip install debops
You can also use pip to upgrade the scripts themselves: pip install --upgrade debops
After the installation is finished, scripts will be available in /usr/local/bin/, which should be in your shell's $PATH.
DebOps prerequisites
To use DebOps playbooks, you need some additional tools on your Ansible Controller besides the official scripts. Some of these tools will be installed for you by pip as prerequisites of the scripts.
- Ansible - You need to install Ansible to use DebOps playbooks. DebOps is developed and used on the current development version of Ansible; however, we try not to use certain features until they are available in the current stable release.
- Python netaddr library - This is a Python library used to manipulate strings containing IP addresses and networks. DebOps provides an Ansible plugin (included in Ansible 1.9+) which uses this library to manipulate IP addresses. You can install netaddr either using your favourite package manager, or through pip.
- Python ldap library - This Python library is used to access and manipulate LDAP servers. It can be installed through your package manager or using pip.
- Python passlib library - This Python library is used to encrypt random passwords generated by DebOps and store them in the secret/ directory.
uuidgen - This command is used to generate unique identifiers for hosts which are then saved as Ansible facts and can be used to identify hosts in the playbook. In most Linux or MacOSX desktop distributions this command should be already installed. If not, it can usually be found in the uuid-runtime package.
encfs - This is an optional application, which is used by the debops-padlock script to encrypt the secret/ directory within DebOps project directories, which holds confidential data like passwords, private keys and certificates. EncFS is available on Linux distributions, usually as the encfs package.
gpg - GnuPG is used to encrypt the file which holds the EncFS password; this allows you to share the encrypted secret/ directory with other users without sharing the password, and using private GPG keys instead. The debops script will automatically decrypt the keyfile and use it to open an EncFS volume. GnuPG is usually installed on Linux or MacOSX operating systems.
https://debops-documentation-testing.readthedocs.io/en/latest/debops/docs/installation.html
2019-05-19T16:19:55
CC-MAIN-2019-22
1558232255071.27
[]
debops-documentation-testing.readthedocs.io
GetContainerPolicy Retrieves the access policy for the specified container. For information about the data that is included in an access policy, see the AWS Identity and Access Management User Guide. Request Syntax { "ContainerName": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - ContainerName The name of the container. Type: String Length Constraints: Minimum length of 1. Maximum length of 255. Pattern: [\w-]+ Required: Yes Response Syntax { "Policy": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - PolicyNotFoundException The policy that you specified in the request does not exist. HTTP Status Code: 400 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
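For reference, a call to this operation through the AWS SDK for Python might look like the sketch below; the container name and region are placeholders, and it assumes credentials that allow mediastore:GetContainerPolicy.

import boto3

client = boto3.client("mediastore", region_name="us-east-1")

try:
    response = client.get_container_policy(ContainerName="movies")
    print(response["Policy"])  # the access policy document as a JSON string
except client.exceptions.PolicyNotFoundException:
    print("No policy is attached to this container")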
https://docs.aws.amazon.com/mediastore/latest/apireference/API_GetContainerPolicy.html
2019-05-19T17:10:36
CC-MAIN-2019-22
1558232255071.27
[]
docs.aws.amazon.com
Ways to Secure Your Network
There are several ways you can control security for your cloud network and compute instances:
- Public vs. private subnets: You can designate a subnet to be private, which means instances in the subnet cannot have public IP addresses. For more information, see Public vs. Private Subnets.
- Security lists: To control packet-level traffic in/out of an instance. You configure security lists in the Oracle Cloud Infrastructure API or Console. For more information about security lists, see Security Lists.
- Firewall rules: Firewall rules and security lists both operate at the instance level. However, you configure security lists at the subnet level, which means all instances in a given subnet have the same set of security list rules. Keep this in mind when setting up security for your cloud network and instances. When troubleshooting access to an instance, make sure both the security lists associated with the instance's subnet and the instance's firewall rules are set correctly. If your instance is running Oracle Linux 7, you need to use firewalld to interact with the iptables rules. For your reference, here are commands for opening a port (1521 in this example):
sudo firewall-cmd --zone=public --permanent --add-port=1521/tcp
sudo firewall-cmd --reload
- Policies: To control who can manage resources such as your cloud network or security lists. You configure policies in the Oracle Cloud Infrastructure API or Console. For more information, see Access Control.
https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/waystosecure.htm
2019-05-19T17:43:16
CC-MAIN-2019-22
1558232255071.27
[]
docs.cloud.oracle.com
Project References
How EBuild resolves Project References
EBuild applies sophisticated logic to try its very best to resolve project references in almost every case.
Firstly, before starting the build, EBuild will check all projects in the current solution for project references. Each project reference is matched against the other projects in the solution, and if a matching project is found, it is connected to the project reference and marked to be built before the project(s) that reference it.
If a referenced project cannot be found in the solution, EBuild will try to locate the project file on disk using either its name or its ProjectFile meta data value. If found, the project is loaded into the solution implicitly and connected to the project reference, but marked as not Enabled (i.e. it will not be built).
If the referenced project cannot be found either, EBuild checks if the HintPath of the reference is valid. If none of these steps are successful, the build will fail.
EBuild will then determine the best build order for all projects, based on the dependencies. If a circular dependency is detected, the build will fail; otherwise EBuild will start to build each project in the order it has decided.
As each project hits the ElementsResolveReferences task, project references are resolved using the following steps:
If a live project was connected to the project reference
If a live project was connected to the project reference in the previous steps (either because it was part of the solution, or could be located on disk), that project is used to fill the reference:
- If the project was built successfully, its output (via the FinalOutputForReferencing collection) will be used to fulfill the reference.
- If the referenced project is Enabled but was not built yet, that means a circular dependency was detected, and the referencing project will fail to build.
- If the referenced project failed to build earlier, the referencing project will also fail to build.
- If the project is not Enabled (either explicitly by the user, or because it was pulled into the solution implicitly as described above), EBuild will try to locate the project's FinalOutput.xml file in the Cache from a previous build. If found, the data from that file (the FinalOutputForReferencing collection) will be used to fulfill the reference.
- If the previous step failed, EBuild will fall back to using the HintPath, if valid, to fulfill the reference, and otherwise fail the build.
If no live project was connected
Otherwise, EBuild will fall back to simply looking at the HintPath. If valid, it will be used to fulfill the project reference, otherwise the build will fail.
Covered Scenarios
The steps above cover just about any valid scenario for project references:
- Both projects are in the solution and Enabled.
- Both projects are in the solution, the referenced project is not Enabled, but was built earlier.
- The referenced project is not in the solution, but can be located on disk and was built earlier.
- The referenced project cannot be located, but the HintPath is valid.
Project References when Hosting EBuild in MSBuild
When building with EBuild inside Visual Studio, EBuild does not see the whole solution, but instead builds each project individually (wrapped in an MSBuild project task). EBuild will rely on options 3 and 4 from above to resolve project references in that case. The same is true when building an individual project file without a .sln from the command line.
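To restate the resolution order compactly, the Python sketch below paraphrases the documented steps with invented dictionary keys; it is an illustration of the logic, not EBuild source code.

def resolve_project_reference(ref, cache):
    project = ref.get("connected_project")        # matched in the solution or located on disk
    if project is not None:
        if project.get("built") == "success":
            return project["final_output"]        # output used for referencing
        if project.get("enabled") and project.get("built") is None:
            raise RuntimeError("circular dependency detected")
        if project.get("built") == "failed":
            raise RuntimeError("referenced project failed to build")
        if project["name"] in cache:              # FinalOutput.xml from a previous build
            return cache[project["name"]]
    if ref.get("hint_path"):
        return ref["hint_path"]
    raise RuntimeError("project reference could not be resolved")

# Example: a disabled project that was built earlier resolves from the cache.
ref = {"connected_project": {"name": "Lib", "enabled": False, "built": None}}
print(resolve_project_reference(ref, {"Lib": "Bin/Lib.dll"}))  # Bin/Lib.dll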
https://docs.elementscompiler.com/Projects/References/ProjectReferences/
2019-05-19T16:25:55
CC-MAIN-2019-22
1558232255071.27
[]
docs.elementscompiler.com
All content with label amazon+hibernate_search+hot_rod+infinispan+jboss_cache+listener+non-blocking+release+scala. Related Labels: expiration, publish, datagrid, interceptor, server, replication, transactionmanager, dist, partitioning, mvcc, tutorial, notification, read_committed, xml, distribution, cachestore, data_grid, resteasy, cluster, br, development, websocket, transaction, async, interactive, xaresource, build, searchable, demo, installation, cache_server, client, migration, jpa, tx, eventing, client_server, testng, infinispan_user_guide, standalone, repeatable_read, snapshot, hotrod, webdav, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, rest ( - amazon, - hibernate_search, - hot_rod, - infinispan, - jboss_cache, - listener, - non-blocking, - release, - scala )
https://docs.jboss.org/author/label/amazon+hibernate_search+hot_rod+infinispan+jboss_cache+listener+non-blocking+release+scala
2019-05-19T16:51:41
CC-MAIN-2019-22
1558232255071.27
[]
docs.jboss.org
Understanding the Resolvers File Structure
As you may have seen, all core plugins use a similar file and folder structure for resolvers, and we recommend that you do the same for custom plugins. The basic premise is to have the folders and files reflect the hierarchy of the resolvers object, making it easier to find the resolver code you're looking for.
Before explaining the file structure, there are two concepts you need to fully understand:
- What is a resolver map? Each Reaction plugin can register its own resolver map, and all registered resolvers objects are then deep merged together into a single object, which is what is provided to the GraphQL library as the full resolver map.
- If two plugins have conflicting terminal nodes in the resolver tree, the last one registered wins. Currently plugins are loaded in alphabetical order by plugin folder name, grouped by "core" first, then "included", and then "custom".
Now, let's take the core payments plugin as an example. Here is what a GraphQL resolver map for the payments plugin would look like in a single file:
import { encodePaymentOpaqueId } from "@reactioncommerce/reaction-graphql-xforms/payment"; export default { Payment: { _id: (node) => encodePaymentOpaqueId(node._id) }, Query: { async availablePaymentMethods(_, { shopId }, context) { // ... }, async paymentMethods(_, { shopId }, context) { // ... } } };
You could save that file as resolvers.js in /server/no-meteor in the payments plugin, and then import it and register it:
import Reaction from "/imports/plugins/core/core/server/Reaction"; import resolvers from "./server/no-meteor/resolvers"; import schemas from "./server/no-meteor/schemas"; Reaction.registerPackage({ graphQL: { resolvers, schemas }, // ...other props });
While that would work and may even be fine for a simple custom plugin, there are some downsides. We want to be able to test each complex resolver function, which is easier if each function is in its own file. Also, plugins typically keep growing, so our single file might become too large to easily understand over time. So instead we break into separate files and folders as necessary. Whether you use files or folders at each level is up to you and should be based on how complex the functions are and whether they need unit tests.
- All files either export a function or an object as their default exports.
- All index.js files import all of the folders or files at the same level of the tree, exporting them in an object.
- We name the files and folders and default import variables to match the object keys, which allows the exports to use ES6 object literal shorthand, and makes it easy to visualize how the folder structure maps to the final resolvers object.
Here's how the payments plugin folder structure looks when splitting that one file into multiple files:
For a full understanding, look through these files in the codebase.
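Reaction's resolver merging happens in JavaScript, but the deep-merge behaviour described above (conflicting terminal nodes: the last one registered wins) can be illustrated with a short, language-neutral Python sketch; the plugin maps below are toy placeholders rather than real resolver functions.

def deep_merge(base, extra):
    merged = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value  # conflicting terminal node: the later plugin wins
    return merged

payments_plugin = {"Query": {"paymentMethods": "payments.paymentMethods"}}
shipping_plugin = {"Query": {"shippingMethods": "shipping.shippingMethods"}}

print(deep_merge(payments_plugin, shipping_plugin))  # one Query object with both resolvers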
https://docs.reactioncommerce.com/docs/next/graphql-resolvers-file-structure
2019-05-19T16:40:03
CC-MAIN-2019-22
1558232255071.27
[array(['https://d33wubrfki0l68.cloudfront.net/5b07d498becb3dacf08f84ec477c455d97713338/96a18/assets/graphql-resolvers-file-structure.png', 'GraphQL resolvers file structure'], dtype=object) ]
docs.reactioncommerce.com
BindingSource Class
Definition
Encapsulates the data source for a form.
C#:
[System.ComponentModel.ComplexBindingProperties("DataSource", "DataMember")]
public class BindingSource : Component, IBindingListView, ICancelAddNew, ICurrencyManagerProvider, IList, ISupportInitializeNotification, ITypedList
VB:
Public Class BindingSource
Inherits Component
Implements IBindingListView, ICancelAddNew, ICurrencyManagerProvider, IList, ISupportInitializeNotification, ITypedList
Inheritance: Component -> BindingSource
Attributes: ComplexBindingProperties("DataSource", "DataMember")
Implements: ICollection, IEnumerable, IList, IBindingList, IBindingListView, ICancelAddNew, ISupportInitialize, ISupportInitializeNotification, ITypedList, ICurrencyManagerProvider
Examples and Remarks: the example code and the Remarks, Note, and Caution sections from the original page were not captured.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.bindingsource?view=netframework-4.8
2019-05-19T16:49:01
CC-MAIN-2019-22
1558232255071.27
[]
docs.microsoft.com
PublishObjects object (PowerPoint)
A collection of PublishObject objects representing the set of complete or partial loaded presentations that are available for publishing to HTML.
Remarks
You can specify the content and attributes of the published presentation by setting various properties of the PublishObject object. For example, the SourceType property defines the portion of a loaded presentation to be published. The RangeStart property and the RangeEnd property specify the range of slides to publish, and the SpeakerNotes property designates whether or not to publish the speaker's notes.
You cannot add to the PublishObjects collection.
Example
Use the PublishObjects property to return the PublishObjects collection. Use Item (index), where index is always "1", to return the single PublishObject object for a loaded presentation. There can be only one PublishObject object for each loaded presentation. This example defines the PublishObject object to be the entire active presentation by setting the SourceType property to ppPublishAll.
ActivePresentation.PublishObjects.Item(1).SourceType = ppPublishAll
See also
PowerPoint Object Model Reference
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/powerpoint.publishobjects
2019-05-19T16:43:43
CC-MAIN-2019-22
1558232255071.27
[]
docs.microsoft.com
When the customer selects the payment method Trustly, the customer needs to select the desired country. By doing so, all online banks that are connected to the payment method Trustly will be presented to the customer. The customer can finalize the payment by logging in with their banking details.
Product rules
In exceptional cases, the uncleared status can occur within the payment method Trustly. In this case, it is up to Trustly to inform MultiSafepay of the correct status. This can be a completed status as well as a declined and/or expired status. The uncleared status automatically expires after 5 days.
Trustly can be offered in the following currencies:
- Euros (EUR)
- Pounds (GBP)
- Swedish krona (SEK)
https://docs.multisafepay.com/payment-methods/trustly/how-does-trustly-work/
2019-05-19T16:51:55
CC-MAIN-2019-22
1558232255071.27
[]
docs.multisafepay.com
Security incident creation
Security incidents can be created manually from the form, or automatically via security events received from integrated third-party alert monitoring tools, such as Splunk. If you have a security role, you can use any of the following methods to manually create security incidents.
Table 1. Methods for manually creating security incidents
Manually created from the Security Incident list: On the Security Incident list, click New to create a new security incident.
Manually created from the [source not captured in the original table] to create a new security incident.
Manually created from an Event Management alert: On the Event Management Alerts form, click Create Security incident to create a new security incident.
Manually created from an alert: On the Event Management Alert form, click Create Security Incident to create a new security incident.
Manually converted from a vulnerability record (if the Vulnerability Response plugin is activated): On the Vulnerability Items form, click Create Security Incident to create a new security incident.
Automatic creation of security incidents
Generally, security administrators are responsible for setting up alert rules used to automatically generate security incidents.
Table 2. Security admin method for creating security incidents
Automatically created using alert rules: Security incidents can be created based on alert rules defined in the Event Management application in your data center.
Security incident manual creation: You can create a security incident from the Security Incident form, as well as from several other forms.
Security incident automatic creation: Third-party monitoring tools, such as Splunk, can be integrated with Security Incident Response so that security events imported from those tools automatically generate security incidents. You can also import data from third-party tools into security alerts.
Record creation from security incidents: After you have created and saved a security incident, you can create a change request (CHG), incident (INC), or problem (PRB) record from it. You can also create a customer service case from any security incident.
https://docs.servicenow.com/bundle/madrid-security-management/page/product/security-incident-response/concept/si-creation.html
2019-05-19T17:00:30
CC-MAIN-2019-22
1558232255071.27
[]
docs.servicenow.com
Modify CAB meeting details
You can modify the agenda items for a specific CAB meeting.
Before you begin
Role required: itil or sn_change_cab.cab_manager
About this task
After you update the CAB board or change request conditions, refresh the CAB meeting to apply the updates. If you add new board members or attendees to the meeting, they are notified via email invitations after the meeting is refreshed.
Procedure
Navigate to the CAB meeting whose agenda you want to modify using one of the following steps.
Open from the CAB meeting list: Navigate to Change > Change Advisory Board > All CAB meetings. Select and open the CAB meeting to modify.
Open from the CAB definition list: Navigate to Change > Change Advisory Board > All CAB definitions. Select and open the CAB definition to send out the meeting request. Select and open the specific CAB meeting to modify.
Modify the CAB meeting agenda in the Agenda Management tab or form section, as appropriate.
Table 1. Agenda Management fields
Notification lead time: Number of days prior to which change requesters are notified that their changes are coming up for discussion on the CAB agenda.
From Related Links, you can perform any of the following tasks.
Refresh Agenda Items: The agenda items for the CAB meeting are refreshed. If you added or updated attendees, a confirmation message asks if the meeting request must be resent to these attendees.
Send meeting request to attendees: Click to manually resend the meeting request to the list of attendees.
Go to this meeting in CAB Workbench: Click to open the meeting in the CAB workbench. This link is available only when it is time for the CAB to begin.
Share notes: Share notes captured in the Meeting Notes field with the list of meeting attendees. Share Notes is only visible when the meeting is In Progress or Complete.
From related lists, you can perform any of the following tasks.
Agenda Items: Manually add agenda items. In the Allotted Time field, the CAB manager can override the default time for any agenda item.
Attendees: Manually add attendees to a CAB meeting.
Click Update to save your changes.
Related Topics: OR conditions
https://docs.servicenow.com/bundle/madrid-it-service-management/page/product/change-management/task/refresh-cab-meeting-agenda-items.html
2019-05-19T17:02:49
CC-MAIN-2019-22
1558232255071.27
[]
docs.servicenow.com
Delegated Installation for an Enterprise Certification Authority
Applies To: Windows Server 2012 R2, Windows Server 2012
By default, to install a root or subordinate certification authority (CA), you must be a member of the Enterprise Admins group, or Domain Admins for the root domain. By following the instructions in this topic, you can delegate control to an administrator who doesn't have these high-privilege permissions. Installation by a low-privilege user helps to mitigate the pass-the-hash attack, with its security threats of lateral movement and privilege escalation, as documented by Microsoft Trustworthy Computing in the downloadable Mitigating Pass-the-Hash (PtH) Attacks and Other Credential Theft Techniques paper.
Important
Delegated Installation for an Enterprise Certification Authority as described in this article succeeds only if certificate templates are present in Active Directory. If certificate templates are already present, the second, third, and any additional installations of enterprise CAs can be delegated as described. When the default Certificate Templates are not available in Active Directory, the installation of the first enterprise CA fails. For the procedure shown in this article to succeed for the first enterprise CA installation in this forest, you must create the default certificate templates in Active Directory:
Open a command prompt with the Run as administrator option.
Run the following command only once for the given forest: certutil -installdefaulttemplates
Use the following procedure to prepare a forest so that a low-privilege administrator can install and configure an enterprise CA.
To prepare a forest for a CA delegated installation
Create a security group (as an example, name it CAInstallGroup) and add user accounts to this group for the administrators who will install and configure the enterprise root CA or enterprise subordinate CA. You can create and configure this group by using Active Directory Users and Computers, or you can use Windows PowerShell.
To use Windows PowerShell:
Start a Windows PowerShell session with the Run as administrator option. For more information about using the Active Directory module for Windows PowerShell, see Active Directory Administration with Windows PowerShell in the Windows Server library.
Use the New-ADGroup cmdlet to create a new security group, using the following example:
New-ADGroup -Name "CAInstallGroup" -GroupScope Global -Description "Security group to install AD CS Certification Authority" -GroupCategory Security
In this example, you can substitute your own name and description for the security group, and change the scope if required.
Define a variable for a user account, using the Get-ADUser cmdlet for the user to add to the group, and then use the Add-ADGroupMember cmdlet to add that user to the group, using the following example:
$user = Get-ADUser <user name>
Add-ADGroupMember -Identity "CAInstallGroup" -Members $user
Repeat this command for additional users that you want to add to the group. 
Grant this security group Full Control to the Active Directory Public Key Service containers: Copy and save the following into a Windows PowerShell script that has the name Modify-PublicKeyServices.Acl.ps1: param( [Parameter(Mandatory = $true)] [ValidateNotNullOrEmpty()] [string]$group ) $groupObj = Get-ADGroup $group $sidGroup = new-object System.Security.Principal.SecurityIdentifier $groupObj.SID # Get forest root domain $rootDomain = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext #Get public key services container full DN $publicKeyServicesContainer = "CN=Public Key Services,CN=Services,$rootDomain" set-location ad:\ #Get ACL for public key services container $acl = get-acl $publicKeyServicesContainer #Create access rule to be added to ACL $accessRule = new-object System.DirectoryServices.ActiveDirectoryAccessRule( $sidGroup, [System.DirectoryServices.ActiveDirectoryRights]::GenericAll, [System.Security.AccessControl.AccessControlType]::Allow, [System.DirectoryServices.ActiveDirectorySecurityInheritance]::All) #Add this access rule to the ACL $acl.SetAccessRule($accessRule) #Write the changes to the object set-acl -path $publicKeyServicesContainer -aclobject $acl set-location c:\ Run the script by using the following command: Modify-PublicKeyServices.Acl.ps1 –group "CAInstallGroup" If you named the security group from Step 1 to have a different name from CAInstallGroup, substitute your preferred name in the command. Grant the security group permissions to add members to the Cert Publishers domain group: Open Active Directory Users and Computers and make sure that Advanced Features is enabled from the View menu, then do the following steps: Expand Users, right-click Cert Publishers and select Properties. Click the Security tab. Click Advanced. In the Advanced Security Settings for Cert Publishers dialog box, click Add and add the security group (for example, CAInstallGroup) that you created earlier. Select Write Members, and click OK three times. Grant the security group permissions to add members to the Pre-Windows 2000 Compatible Access group: Expand Builtin, right-click Pre-Windows 2000 Compatible Access, and select Properties. Click the Security tab. Click Advanced. In the Advanced Security Settings for Pre-Windows 2000 Compatible Access dialog box, click Add and add the security group (for example, CAInstallGroup) that you created earlier. Select Write Members, and click OK three times. For an enterprise subordinate CA only: For the Subordinate Certification Authority template, on the Security tab, grant Read and Enroll permissions to your security group. An administrator who is not a member of the Enterprise Admins group or Domain Admins group but who is a member of the group that you created can now install and configure an enterprise CA.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn722303(v=ws.11)
2019-05-19T16:39:48
CC-MAIN-2019-22
1558232255071.27
[]
docs.microsoft.com
Scholantis SIS Data Sync allows teachers to see their classes and work with students more easily than ever before. Once sync is enabled, you'll see your class list when creating or modifying your class site. You can add all your students to your class site with a single click.
https://docs.scholantis.com/display/PUG2013/Scholantis+SIS+Data+Sync
2019-05-19T17:36:13
CC-MAIN-2019-22
1558232255071.27
[]
docs.scholantis.com
Note: The OEM option is not supported for the db.t2.micro, db.t2.small, db.t1.micro, or db.m1.small DB instance classes.
The default port number for OEM Database Control is 1158; the default port number for OEM Database Express is 5500. You can either accept the port number or choose a different one when you enable the OEM option for your DB instance. You can then go to your web browser and begin using the OEM database tool for your Oracle version.
Note: The OEM port numbers can't be modified after the option group that specifies the port number has been applied to a DB instance. To change a port number, create a new option group with an updated port number, remove the existing option group, and then apply the new option group. For more information about modifying option groups, see Working with Option Groups.
You can access either OEM Database Control or OEM Database Express from your web browser. As an example, if the endpoint for your Amazon RDS instance is mydb.f9rbfa893tft.us-east-1.rds.amazonaws.com and you specify port 1158, you would point your browser at that endpoint on port 1158 to reach OEM Database Control.
When you access either tool from your web browser, the login window appears, prompting you for a user name and password. Type the master user name and master password for your DB instance. You are now ready to manage your Oracle databases.
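If you manage option groups programmatically, a call along these lines can add the option with a custom port; this is a hedged sketch using the AWS SDK for Python, the option group name is a placeholder, and the option name ("OEM") is an assumption that should be confirmed with describe-option-group-options for your engine.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Option group name is a placeholder; the option name is assumed to be "OEM".
rds.modify_option_group(
    OptionGroupName="my-oracle-option-group",
    OptionsToInclude=[{"OptionName": "OEM", "Port": 1158}],
    ApplyImmediately=True,
)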
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.OEM_DBControl.html
2017-03-23T00:25:37
CC-MAIN-2017-13
1490218186530.52
[]
docs.aws.amazon.com
Operations The following describes components of Amazon SimpleDB operations. Subscriber—Any application, script, or software making a call to the Amazon SimpleDB service. The AWS Access Key ID uniquely identifies each subscriber for billing and metering purposes. Amazon SimpleDB Request—A single web service API call and its associated data that the subscriber sends to the Amazon SimpleDB service to perform one or more operations. Amazon SimpleDB Response—The response and any results returned from the Amazon SimpleDB service to the subscriber after processing the request. The AWS Platform handles authentication success and failure; failed requests are not sent to the Amazon SimpleDB service.
http://docs.aws.amazon.com/AmazonSimpleDB/latest/DeveloperGuide/Operations.html
2017-03-23T00:25:38
CC-MAIN-2017-13
1490218186530.52
[]
docs.aws.amazon.com
class OEBondDisplayBase : public OESystem::OEBase The OEBondDisplayBase class is the abstract interface for representing bond display information within OEDepict TK. See Figure: OEDepict TK bond display class hierarchy. OEDepict TK bond display class hierarchy const void *GetDataType() const This function is used to perform run-time type identification. See also const OEChem::OEBondBase *GetBond() const Returns the const pointer of the OEBondBase object for which display properties are stored in the class derived from the OEBondDisplayBase abstract class. bool IsDataType(const void *) const Returns whether type is the same as the instance this method is called on. See also
https://docs.eyesopen.com/toolkits/python/depicttk/OEDepictClasses/OEBondDisplayBase.html
2017-03-23T00:18:40
CC-MAIN-2017-13
1490218186530.52
[]
docs.eyesopen.com
Multi-backend Riak allows you to run multiple backends within a single Riak cluster. Selecting the Multi backend enables you to use different storage backends for different buckets. Any combination of the three available backends—Bitcask, LevelDB, and Memory—can be used. Configuring Multiple Backends You can set up your cluster to use the Multi backend using Riak’s configuration files. storage_backend = multi {riak_kv, [ %% ... {storage_backend, riak_kv_multi_backend}, %% ... ]}, Remember that you must stop and then re-start each node when you change storage backends or modify any other configuration. Using Multiple Backends In Riak 2.0 and later, we recommend using multiple backends by applying them to buckets using bucket types. Assuming that the cluster has already been configured to use the multi backend, this process involves three steps: - Creating a bucket type that enables buckets of that type to use the desired backends - Activating that bucket type - Setting up your application to use that type Let’s say that we’ve set up our cluster to use the Multi backend and we want to use LevelDB and the Memory backend for different sets of data. First, we need to create two bucket types, one which sets the backend bucket property to leveldb and the other which sets that property to memory. All bucket type-related activity is performed through the riak-admin command interface. We’ll call our bucket types leveldb_backend and memory_backend, but you can use whichever names you wish. riak-admin bucket-type create leveldb_backend '{"props":{"backend":"leveldb"}}' riak-admin bucket-type create memory_backend '{"props":{"backend":"memory"}}' Then, we must activate those bucket types so that they can be used in our cluster: riak-admin bucket-type activate leveldb_backend riak-admin bucket-type activate memory_backend Once those types have been activated, any objects stored in buckets bearing the type leveldb_backend will be stored in LevelDB, whereas all objects stored in buckets of the type memory_backend will be stored in the Memory backend. More information can be found in our documentation on using bucket types. Configuring Multiple Backends Once you’ve set up your cluster to use multiple backends, you can configure each backend on its own. All configuration options available for LevelDB, Bitcask, and Memory are all available to you when using the Multi backend. Using the Newer Configuration System If you are using the newer, riak.conf-based configuration system, you can configure the backends by prefacing each configuration with multi_backend. Here is an example of the general form for configuring multiple backends: multi_backend.$name.$setting_name = setting If you are using, for example, the LevelDB and Bitcask backends and wish to set LevelDB’s bloomfilter setting to off and the Bitcask backend’s io_mode setting to nif, you would do that as follows: multi_backend.leveldb.bloomfilter = off multi_backend.bitcask.io_mode = nif Using the Older Configuration System If you are using the older, app.config-based configuration system, configuring multiple backends involves adding one or more backend- specific sections to your riak_kv settings (in addition to setting the storage_backend setting to riak_kv_multi_backend, as shown above). Note: If you are defining multiple file-based backends of the same type, each of these must have a separate data_rootdirectory defined. 
While all configuration parameters can be placed anywhere within the riak_kv section of app.config, in general we recommend that you place them in the section containing other backend-related settings to keep the settings organized. Below is the general form for your app.config file:
{riak_kv, [
  %% ...
  {multi_backend_default, <<"bitcask_mult">>},
  {multi_backend, [
    %% Here's where you set the individual multiplexed backends
    {<<"bitcask_mult">>, riak_kv_bitcask_backend, [
      %% bitcask configuration
      {data_root, "/var/lib/riak/bitcask_mult/"},
      {config1, ConfigValue1},
      {config2, ConfigValue2}
    ]},
    {<<"bitcask_expiry_mult">>, riak_kv_bitcask_backend, [
      %% bitcask configuration
      {data_root, "/var/lib/riak/bitcask_expiry_mult/"},
      {expiry_secs, 86400},
      {config1, ConfigValue1},
      {config2, ConfigValue2}
    ]},
    {<<"eleveldb_mult">>, riak_kv_eleveldb_backend, [
      %% eleveldb configuration
      {config1, ConfigValue1},
      {config2, ConfigValue2}
    ]},
    {<<"second_eleveldb_mult">>, riak_kv_eleveldb_backend, [
      %% eleveldb with a different configuration
      {config1, ConfigValue1},
      {config2, ConfigValue2}
    ]},
    {<<"memory_mult">>, riak_kv_memory_backend, [
      %% memory configuration
      {config1, ConfigValue1},
      {config2, ConfigValue2}
    ]}
  ]},
  %% ...
]},
Note that in each of the subsections of the multi_backend setting, the name of each backend you wish to configure can be anything you would like. Directly after naming the backend, you must specify which of the backends corresponds to that name, i.e. riak_kv_bitcask_backend, riak_kv_eleveldb_backend, or riak_kv_memory_backend. Once you have done that, the various configurations for each named backend can be set as objects in an Erlang list.
Example Configuration
Imagine that you are using both Bitcask and LevelDB in your cluster, and you would like storage to default to Bitcask. The following configuration would create two backend configurations, named bitcask_mult and leveldb_mult, respectively, while also setting the data directory for each backend and specifying that bitcask_mult is the default.
storage_backend = multi
multi_backend.bitcask_mult.storage_backend = bitcask
multi_backend.bitcask_mult.bitcask.data_root = /var/lib/riak/bitcask_mult
multi_backend.leveldb_mult.storage_backend = leveldb
multi_backend.leveldb_mult.leveldb.data_root = /var/lib/riak/leveldb_mult
multi_backend.default = bitcask_mult
{riak_kv, [
  %% ...
  {multi_backend_default, <<"bitcask_mult">>},
  {multi_backend, [
    {<<"bitcask_mult">>, riak_kv_bitcask_backend, [
      {data_root, "/var/lib/riak/bitcask"}
    ]},
    {<<"leveldb_mult">>, riak_kv_eleveldb_backend, [
      {data_root, "/var/lib/riak/leveldb"}
    ]}
  ]}
  %% ...
]}
Multi Backend Memory Use
Each Riak storage backend has settings for configuring how much memory the backend can use, e.g. caching for LevelDB or for the entire set of data for the Memory backend. Each of these backends suggests allocating up to 50% of available memory for this purpose. When using the Multi backend, make sure that the sum of all backend memory use is at 50% or less. For example, using three backends with each set to 50% memory usage will inevitably lead to memory problems.
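On the application side, a sketch of the third step above (setting up your application to use a bucket type) using the Riak Python client might look like the following; the host, port, bucket and key names are placeholders, and it assumes the cluster was configured with the bucket types created earlier.

from riak import RiakClient  # official Riak Python client, assumed installed

client = RiakClient(host="127.0.0.1", pb_port=8087)

# Objects written through the leveldb_backend type are stored in LevelDB,
# while objects written through memory_backend land in the Memory backend.
durable = client.bucket_type("leveldb_backend").bucket("user_profiles")
cache = client.bucket_type("memory_backend").bucket("session_cache")

durable.new("alice", data={"plan": "pro"}).store()
cache.new("alice-session", data={"token": "abc123"}).store()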
http://docs.basho.com/riak/kv/2.2.1/setup/planning/backend/multi/
2017-03-23T00:25:47
CC-MAIN-2017-13
1490218186530.52
[]
docs.basho.com
By default RadCalendarView uses the current device's locale to initialize the values that are displayed, along with some other calendar specifics like which day is the first day of the week. Here's an example of the calendar run on a device where the language is set to English (United States).
If you want to provide a static locale which disregards the user's preferences, you can use the calendar's setLocale(Locale) method and set the locale of your preference. Here's an example with the locale France:
calendarView.setLocale(Locale.FRANCE);
calendarView.Locale = Java.Util.Locale.France;
The result will be that the calendar will display the names of the days and the month in French, and the first day of the week will not be Sunday as in the previous example but Monday.
RadCalendarView displays the dates in accordance with java.util.GregorianCalendar. If you would like to use another Calendar implementation, you can apply it with the calendar's setCalendar(Calendar) method.
http://docs.telerik.com/devtools/android/controls/calendar/calendar-localization
2017-03-23T00:21:29
CC-MAIN-2017-13
1490218186530.52
[]
docs.telerik.com
Integrate Windows Task Scheduler with ArtOfTest.Runner
1. Create a test project and put it in a shared location that can be accessed from the execution machine.
2. Create a *.bat file which copies the project from the shared location to the execution machine and runs the test list.
3. Create a Basic Task in Windows Task Scheduler to run the previously created .bat file.
- Click the Action menu, and then click Create Basic Task.
- Type a name for the task and an optional description, and then click Next.
Select a schedule based on the calendar.
To schedule a program to start automatically, click Start a program under Action, and then click Next.
Click Browse to find the .bat file you want to start, and then click Next.
Click Finish.
Windows Task Scheduler will run the .bat file at the selected time. The OnBeforeTestListStarted and OnAfterTestListCompleted methods will be called accordingly.
http://docs.telerik.com/teststudio/features/test-runners/integrate-task-scheduler-with-artoftest
2017-03-23T00:21:36
CC-MAIN-2017-13
1490218186530.52
[array(['/teststudio/img/features/test-runners/integrate-task-scheduler-with-artoftest/fig1.png', 'bat file'], dtype=object) array(['/teststudio/img/features/test-runners/integrate-task-scheduler-with-artoftest/fig2.png', 'basics task'], dtype=object) array(['/teststudio/img/features/test-runners/integrate-task-scheduler-with-artoftest/fig3.png', 'task name'], dtype=object) ]
docs.telerik.com
API Reference¶ Database.Base¶ Contains implementations of database retrieveing objects - class gitdb.db.base. ObjectDBR¶ Defines an interface for object database lookup. Objects are identified either by their 20 byte bin sha - class gitdb.db.base. ObjectDBW(*args, **kwargs)¶ Defines an interface to create objects in the database - class gitdb.db.base. FileDBBase(root_path)¶ Provides basic facilities to retrieve files of interest, including caching facilities to help mapping hexsha’s to objects - class gitdb.db.base. CompoundDB¶ A database which delegates calls to sub-databases. Databases are stored in the lazy-loaded _dbs attribute. Define _set_cache_ to update it with your databases Database.Git¶ - class gitdb.db.git. GitDB(root_path)¶ A git-style object database, which contains all objects in the ‘objects’ subdirectory IMPORTANT: The usage of this implementation is highly discouraged as it fails to release file-handles. This can be a problem with long-running processes and/or big repositories. Database.Loose¶ - class gitdb.db.loose. LooseObjectDB(root_path)¶ A database which operates on loose object files Database.Memory¶ Contains the MemoryDatabase implementation - class gitdb.db.mem. MemoryDB¶ A memory database stores everything to memory, providing fast IO and object retrieval. It should be used to buffer results and obtain SHAs before writing it to the actual physical storage, as it allows to query whether object already exists in the target storage before introducing actual IO stream_copy(sha_iter, odb)¶ Copy the streams as identified by sha’s yielded by sha_iter into the given odb The streams will be copied directly Note: the object will only be written if it did not exist in the target db :return: amount of streams actually copied into odb. If smaller than the amountof input shas, one or more objects did already exist in odb Database.Pack¶ Module containing a database to deal with packs - class gitdb.db.pack. PackedDB(root_path)¶ A database operating on a set of object packs store(istream)¶ Storing individual objects is not feasible as a pack is designed to hold multiple objects. Writing or rewriting packs for single objects is inefficient Database.Reference¶ Base¶ Module with basic data structures - they are designed to be lightweight and fast - class gitdb.base. OInfo(*args)¶ Carries information about an object in an ODB, providing information about the binary sha of the object, the type_string as well as the uncompressed size in bytes. It can be accessed using tuple notation and using attribute access notation: assert dbi[0] == dbi.binsha assert dbi[1] == dbi.type assert dbi[2] == dbi.size The type is designed to be as lightweight as possible. - class gitdb.base. OPackInfo(*args)¶ As OInfo, but provides a type_id property to retrieve the numerical type id, and does not include a sha. Additionally, the pack_offset is the absolute offset into the packfile at which all object information is located. The data_offset property points to the absolute location in the pack at which that actual data stream can be found. - class gitdb.base. ODeltaPackInfo(*args)¶ Adds delta specific information, Either the 20 byte sha which points to some object in the database, or the negative offset from the pack_offset, so that pack_offset - delta_info yields the pack offset of the base object - class gitdb.base. OStream(*args, **kwargs)¶ Base for object streams retrieved from the database, providing additional information about the stream. 
Generally, ODB streams are read-only as objects are immutable - class gitdb.base. OPackStream(*args)¶ Next to pack object information, a stream outputting an undeltified base object is provided - class gitdb.base. ODeltaPackStream(*args)¶ Provides a stream outputting the uncompressed offset delta information - class gitdb.base. IStream(type, size, stream, sha=None)¶ Represents an input content stream to be fed into the ODB. It is mutable to allow the ODB to record information about the operations outcome right in this instance. It provides interfaces for the OStream and a StreamReader to allow the instance to blend in without prior conversion. The only method your content stream must support is ‘read’ read(size=-1)¶ Implements a simple stream reader interface, passing the read call on to our internal stream - class gitdb.base. InvalidOInfo(sha, exc)¶ Carries information about a sha identifying an object which is invalid in the queried database. The exception attribute provides more information about the cause of the issue Functions¶ Contains basic c-functions which usually contain performance critical code Keeping this code separate from the beginning makes it easier to out-source it into c later, if required gitdb.fun. write_object(type, size, read, write, chunk_size=4096000)¶ Write the object as identified by type, size and source_stream into the target_stream gitdb.fun. stream_copy(read, write, size, chunk_size)¶ Copy a stream up to size bytes using the provided read and write methods, in chunks of chunk_size Note: its much like stream_copy utility, but operates just using methods gitdb.fun. apply_delta_data(src_buf, src_buf_size, delta_buf, delta_buf_size, write)¶ Apply data from a delta buffer using a source buffer to the target file Note: transcribed to python from the similar routine in patch-delta.c gitdb.fun. connect_deltas(dstreams)¶ - Read the condensed delta chunk information from dstream and merge its information - into a list of existing delta chunks - class gitdb.fun. DeltaChunkList¶ List with special functionality to deal with DeltaChunks. There are two types of lists we represent. The one was created bottom-up, working towards the latest delta, the other kind was created top-down, working from the latest delta down to the earliest ancestor. This attribute is queryable after all processing with is_reversed. apply(bbuf, write)¶ Only used by public clients, internally we only use the global routines for performance check_integrity(target_size=-1)¶ Verify the list has non-overlapping chunks only, and the total size matches target_size :param target_size: if not -1, the total size of the chain must be target_size :raise AssertionError: if the size doen’t match compress()¶ Alter the list to reduce the amount of nodes. Currently we concatenate add-chunks :return: self Pack¶ Contains PackIndexFile and PackFile implementations - class gitdb.pack. PackIndexFile(indexpath)¶ A pack index provides offsets into the corresponding pack, allowing to find locations for offsets faster. - class gitdb.pack. PackFile(packpath)¶ A pack is a file written according to the Version 2 for git packs As we currently use memory maps, it could be assumed that the maximum size of packs therefor is 32 bit on 32 bit systems. On 64 bit systems, this should be fine though. 
Note: at some point, this might be implemented using streams as well, or streams are an alternate path in the case memory maps cannot be created for some reason - one clearly doesn’t want to read 10GB at once in that case stream(offset)¶ Retrieve an object at the given file-relative offset as stream along with its information stream_iter(start_offset=0)¶ Note: Iterating a pack directly is costly as the datastream has to be decompressed to determine the bounds between the objects - class gitdb.pack. PackEntity(pack_or_index_path)¶ Combines the PackIndexFile and the PackFile into one, allowing the actual objects to be resolved and iterated IndexFileCls¶ alias of PackIndexFile collect_streams(sha)¶ As PackFile.collect_streams, but takes a sha instead of an offset. Additionally, ref_delta streams will be resolved within this pack. If this is not possible, the stream will be left alone, hence it is adivsed to check for unresolved ref-deltas and resolve them before attempting to construct a delta stream. collect_streams_at_offset(offset)¶ As the version in the PackFile, but can resolve REF deltas within this pack For more info, see collect_streams - classmethod create(object_iter, base_dir, object_count=None, zlib_compression=1)¶ Create a new on-disk entity comprised of a properly named pack file and a properly named and corresponding index file. The pack contains all OStream objects contained in object iter. :param base_dir: directory which is to contain the files :return: PackEntity instance initialized with the new pack Note: for more information on the other parameters see the write_pack method - classmethod write_pack(object_iter, pack_write, index_write=None, object_count=None, zlib_compression=1)¶ Create a new pack by putting all objects obtained by the object_iterator into a pack which is written using the pack_write method. The respective index is produced as well if index_write is not Non. Note: The destination of the write functions is up to the user. It could be a socket, or a file for instance Note: writes only undeltified objects Streams¶ - class gitdb.stream. DecompressMemMapReader(m, close_on_deletion, size=None)¶ Reads data in chunks from a memory map and decompresses it. The client sees only the uncompressed data, respective file-like read calls are handling on-demand buffered decompression accordingly A constraint on the total size of bytes is activated, simulating a logical file within a possibly larger physical memory area To read efficiently, you clearly don’t want to read individual bytes, instead, read a few kilobytes at least. - Note: The chunk-size should be carefully selected as it will involve quite a bit - of string copying due to the way the zlib is implemented. Its very wasteful, hence we try to find a good tradeoff between allocation time and number of times we actually allocate. An own zlib implementation would be good here to better support streamed reading - it would only need to keep the mmap and decompress it into chunks, that’s all … Close our underlying stream of compressed bytes if this was allowed during initialization :return: True if we closed the underlying stream :note: can be called safely - classmethod new(m, close_on_deletion=False)¶ Create a new DecompressMemMapReader instance for acting as a read-only stream This method parses the object header from m and returns the parsed type and size, as well as the created stream instance. - class gitdb.stream. 
FDCompressedSha1Writer(fd)¶ Digests data written to it, making the sha available, then compress the data and write it to the file descriptor Note: operates on raw file descriptors Note: for this to work, you have to use the close-method of this instance - class gitdb.stream. DeltaApplyReader(stream_list)¶ A reader which dynamically applies pack deltas to a base object, keeping the memory demands to a minimum. The size of the final object is only obtainable once all deltas have been applied, unless it is retrieved from a pack index. The uncompressed Delta has the following layout (MSB being a most significant bit encoded dynamic size): - MSB Source Size - the size of the base against which the delta was created - MSB Target Size - the size of the resulting data after the delta was applied - A list of one byte commands (cmd) which are followed by a specific protocol: - cmd & 0x80 - copy delta_data[offset:offset+size] - Followed by an encoded offset into the delta data - Followed by an encoded size of the chunk to copy - cmd & 0x7f - insert - insert cmd bytes from the delta buffer into the output stream - cmd == 0 - invalid operation ( or error in delta stream ) - classmethod new(stream_list)¶ Convert the given list of streams into a stream which resolves deltas when reading from it. - class gitdb.stream. Sha1Writer¶ Simple stream writer which produces a sha whenever you like as it degests everything it is supposed to write - class gitdb.stream. FlexibleSha1Writer(writer)¶ Writer producing a sha1 while passing on the written bytes to the given write function - class gitdb.stream. ZippedStoreShaWriter¶ Remembers everything someone writes to it and generates a sha seek(offset, whence=0)¶ Seeking currently only supports to rewind written data Multiple writes are not supported - class gitdb.stream. FDCompressedSha1Writer(fd) Digests data written to it, making the sha available, then compress the data and write it to the file descriptor Note: operates on raw file descriptors Note: for this to work, you have to use the close-method of this instance exc= IOError('Failed to write all bytes to filedescriptor',) fd sha1 write(data) - zip - class gitdb.stream. FDStream(fd)¶ A simple wrapper providing the most basic functions on a file descriptor with the fileobject interface. Cannot use os.fdopen as the resulting stream takes ownership Utilities¶ - class gitdb.util. LazyMixin¶ Base class providing an interface to lazily retrieve attribute values upon first access. If slots are used, memory will only be reserved once the attribute is actually accessed and retrieved the first time. All future accesses will return the cached value as stored in the Instance’s dict or slot. - class gitdb.util. LockedFD(filepath)¶ This class facilitates a safe read and write operation to a file on disk. If we write to ‘file’, we obtain a lock file at ‘file.lock’ and write to that instead. If we succeed, the lock file will be renamed to overwrite the original file. When reading, we obtain a lock file, but to prevent other writers from succeeding while we are reading the file. This type handles error correctly in that it will assure a consistent state on destruction. note with this setup, parallel reading is not possible commit()¶ When done writing, call this function to commit your changes into the actual file. The file descriptor will be closed, and the lockfile handled. Note can be called multiple times open(write=False, stream=False)¶ Open the file descriptor for reading or writing, both in binary mode. 
note must only be called once gitdb.util. byte_ord(b)¶ Return the integer representation of the byte string. This supports Python 3 byte arrays as well as standard strings. gitdb.util. file_contents_ro_filepath(filepath, stream=False, allow_mmap=True, flags=0)¶ Get the file contents at filepath as fast as possible Note for now we don’t try to use O_NOATIME directly as the right value needs to be shared per database in fact. It only makes a real difference for loose object databases anyway, and they use it with the help of the flagsparameter gitdb.util. make_sha(source='')¶ A python2.4 workaround for the sha/hashlib module fiasco Note From the dulwich project
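To make the ObjectDBR/ObjectDBW interfaces above concrete, here is a minimal usage sketch: it stores a blob into a loose object database and streams it back by its binary sha. The objects path is a placeholder, and the calls are assumed from the class summaries in this reference (IStream, LooseObjectDB, store, stream).
from io import BytesIO
from gitdb.base import IStream
from gitdb.db.loose import LooseObjectDB

# Point at an existing .git/objects directory (placeholder path)
ldb = LooseObjectDB("/path/to/repo/.git/objects")

data = b"my data"
istream = IStream("blob", len(data), BytesIO(data))
ldb.store(istream)                    # the ODB records the resulting sha on the mutable IStream
assert istream.binsha is not None

ostream = ldb.stream(istream.binsha)  # ObjectDBR lookup by 20-byte bin sha
assert ostream.read() == data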
http://gitdb.readthedocs.io/en/latest/api.html
2018-07-16T04:41:56
CC-MAIN-2018-30
1531676589179.32
[]
gitdb.readthedocs.io
Event scheduling There are a variety of tools available for scheduling actions or tasks to happen in the future. Maintenance schedules Changes to the CMDB can be managed through the Maintenance Schedules Plugin, which allows changes to be proposed and viewed through a timeline. On-call rotation The Group On-Call Rotation Plugin allows a schedule to be defined to determine what users are primary contacts during particular hours of the day. Scheduled reports Once reports are defined, they can be scheduled to be emailed at a specific time, or at regular intervals, using the reporting interface. Scheduled workflows Workflows provide a robust system for automating advanced multi-step processes. Workflows can be triggered by conditions, like business rules, or they can be scheduled for a particular time/recurring schedule, like scheduled jobs. Scheduled jobs Scheduled jobs are scripts which can be set to be automatically performed at a specific date and time, or on a repeating basis. Event registry Events can be used to schedule actions or tasks to occur when conditions are fulfilled. Register an event You can register an event for a specific table and a business rule that fires the event.
https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/time/concept/c_ScheduleEvents.html
2018-07-16T05:04:13
CC-MAIN-2018-30
1531676589179.32
[]
docs.servicenow.com
Configure the blur app option As a security feature, administrators can configure the mobile app to blur when not in focus on a mobile device. About this task For example, when you double-click the home button on your mobile device to close apps or navigate back to where you left off, the ServiceNow app appears blurred. Procedure Navigate to System Properties > Mobile UI Properties. Select the Blur native app UI when the application enters the background check box. Figure 1. Blurred app
https://docs.servicenow.com/bundle/kingston-platform-user-interface/page/administer/tablet-mobile-ui/task/t_BlurApp.html
2018-07-16T04:58:01
CC-MAIN-2018-30
1531676589179.32
[]
docs.servicenow.com
TOC & Recently Viewed Recently Viewed Topics Clone a Dispute Note: This feature is not supported when deploying Tenable.io on-prem. - Click Dashboards > Workbenches > PCI ASV. The PCI ASV Attestation Requests page appears. - On the Remediation tab, select the scan for which you wish to clone disputes. The General Information page for the scan appears. - In the top right corner of the page, click Clone Disputes. From the Clone Disputes drop-down menu, select the attestation from which you wish to clone disputes. Note: Only disputes belonging to scans from the previous quarter are available to clone in the Clone Disputes drop-down menu. A Clone Disputes dialog appears. - Click Continue. A Dispute Cloned Successfully message appears. Note: Any newly added assets for the current quarter are not automatically included in the previous quarter's cloned disputes. To include these assets, you must manually add them to a new or existing dispute.
https://docs.tenable.com/cloud/Content/PCI%20ASV/CloneaDispute.htm
2018-07-16T05:02:57
CC-MAIN-2018-30
1531676589179.32
[]
docs.tenable.com
You can download vRealize Hyperic in a variety of packages. The format that you select depends on the operating system on which it will be installed, whether configuration will be automated or customized, and so on. vRealize Hyperic installers can be downloaded from the VMware download page at. On the download page, under Application Management select VMware vRealize Hyperic. The installation packages are described below. JREs are included in some packages and not others. To determine if you need to configure the location of your JRE,see Configuring JRE Locations for vRealize Hyperic Components. vRealize Hyperic vApp A virtual appliance (vApp) is one or more virtual machine image files (.ovf), each with a preconfigured operating system environment and application. The vRealize Hyperic vApp contains two virtual machine images, one for the vRealize Hyperic server and one for the vRealize Hyperic database. Deploying the vRealize Hyperic vApp provides a simplified deployment in which the components are already configured to work, and to work with each other. The vRealize Hyperic vApp is provided as an OVA archive that contains the .ovf descriptor, .mf, and .vmdk files that are necessary to deploy the vRealize Hyperic server and vRealize Hyperic database vApps using vSphere Client. You can also create a vRealize Hyperic vApp in your virtual cloud from a vApp template, using VMware vCloud Director. For installation prerequisites and instructions, see Install vRealize Hyperic vApp. vRealize Hyperic Installer The vRealize Hyperic installer is script-based. You can do a quick install that sets up defaults for most vRealize Hyperic server configuration options, or run it in full mode to respond to the configuration dialog yourself. You can also use this installer to install the vRealize Hyperic agent. RHEL RPMs RPMs are available. The vRealize Hyperic server RPM is the standard vRealize Hyperic installer, wrapped in an Expect script.
https://docs.vmware.com/en/vRealize-Hyperic/5.8.4/com.vmware.hyperic.install.config.doc/GUID-CBB1841C-E1D5-4294-84CA-0EC922D13B06.html
2018-07-16T04:58:04
CC-MAIN-2018-30
1531676589179.32
[]
docs.vmware.com
- CloudPortal Business Manager User Interface Overview - Understanding the dashboard - My Services Tab - User Management - Managing company settings - Managing user profiles and preferences - Viewing API Credentials - SSH keys - Service Health and Support - Reports - Managing your spend - Understand Your Bill - Browse Catalog and Subscriptions - Managing resources
https://docs.citrix.com/zh-cn/cloudportal-business-manager/2-4/cpbm-userguide-wrapper-con/cpbm-user-guide-my-services-tab-con.html
2018-07-16T04:37:25
CC-MAIN-2018-30
1531676589179.32
[]
docs.citrix.com
2 - Create Data Source Connection Mule 4 Example: MySQL Mule 4 Example: Oracle JDBC Database Connection Properties MySQL and Microsoft SQL Server database configurations provide connection property settings; these properties are injected into the JDBC Connection as additional properties. Mule 4 Examples: Reconnection Settings Transactional Actions When DB operations are executed inside a transaction, they can decide how they interact with that transaction. Available Transactional Actions: ALWAYS_JOIN, JOIN_IF_POSSIBLE, NOT_SUPPORTED. To use the Database connector, simply add it to your application using the Studio palette or add the corresponding dependency in your pom.xml file, as sketched below:
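A sketch of what that pom.xml entry might look like follows. The groupId, artifactId, version, and classifier shown are assumptions for illustration only; take the exact coordinates from Anypoint Exchange or let Studio add them for you.
<!-- Illustrative coordinates only; confirm them in Anypoint Exchange -->
<dependency>
    <groupId>org.mule.connectors</groupId>
    <artifactId>mule-db-connector</artifactId>
    <version>1.x.x</version>
    <classifier>mule-plugin</classifier>
</dependency>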
https://docs.mulesoft.com/mule4-user-guide/v/4.1/migration-connectors-database
2018-07-16T04:48:47
CC-MAIN-2018-30
1531676589179.32
[]
docs.mulesoft.com
Create a walkthrough After defining a control, audit managers create walk throughs that will be conducted to observe Walkthrough. Fill in the fields on the form, as appropriate. Table 1. Walkthrough form Field Description Number Read-only field that is automatically populated with a unique identification number. State Open Work in Progress Review Closed Complete Closed Incomplete Closed Skipped Parent The parent audit task. Assigned to The user assigned to this walkthrough. Short description A brief and general description of the walkthrough. Description A more detailed explanation of the walkthrough. Schedule Planned start date The intended date the walkthrough should begin. Planned end date The intended date the walkthrough should end. Planned duration The expected duration of this walkthrough. As with actual duration, the planned duration shows total activity time and takes the walkthrough schedule into consideration. Actual start date The date that this walkthrough actually began. Actual end date The date that this walkthrough actually ended. Actual duration The actual duration of the walkthrough from walkthrough start to walkthrough end. Walkthrough Primary Contact The user to contact for this walkthrough. Other Contacts Other users to contact for this walkthrough, if the primary contact is unavailable. Execution Steps Detail the activities to be performed during the walkthrough. Explanation Intended purpose of the walkthrough. Additional Information Additional information the user conducting the walkthrough needs to be aware of. Results Details of what transpired during the walkthrough. Activity Additional comments Customer-viewable comments. Work notes Comments that are viewable by the audit manager and audit manager. Click Submit.
https://docs.servicenow.com/bundle/jakarta-governance-risk-compliance/page/product/grc-audit/task/t_CreateAWalkthrough.html
2018-07-16T04:31:01
CC-MAIN-2018-30
1531676589179.32
[]
docs.servicenow.com
Difference between revisions of "JAccess::getUsersByGroup" From Joomla! Documentation Revision as of 19::getUsersByGroup Description Method to return a list of user Ids contained in a Group. Description:JAccess::getUsersByGroup [Edit Descripton] public static function getUsersByGroup ( $groupId $recursive=false ) See also JAccess::getUsersByGroup source code on BitBucket Class JAccess Subpackage Access - Other versions of JAccess::getUsersByGroup SeeAlso:JAccess::getUsersByGroup [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JAccess::getUsersByGroup&diff=55834&oldid=48992
2016-02-06T01:16:07
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Difference between revisions of "JInstaller::discover install"::discover_install Description Description:JInstaller::discover install [Edit Descripton] public function discover_install ($eid=null) - Returns - Defined on line 405 of libraries/joomla/installer/installer.php See also JInstaller::discover_install source code on BitBucket Class JInstaller Subpackage Installer - Other versions of JInstaller::discover_install SeeAlso:JInstaller::discover install [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=JInstaller::discover_install/11.1&diff=57106&oldid=47664
2016-02-06T00:34:36
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Technical requirements From Joomla! Documentation Revision as of 16:34, 24 September 2011 by Realityking (Talk | contribs) More details can be found on Joomla.org. Joomla 1.6/1.7 Joomla 1.5 Note: In order to use SEO URLs, you will need to have the Apache mod_rewrite extension installed.
https://docs.joomla.org/index.php?title=Technical_requirements&oldid=62220
2016-02-06T01:42:57
CC-MAIN-2016-07
1454701145578.23
[array(['/images/d/da/Compat_icon_1_6.png', 'Joomla 1.6'], dtype=object) array(['/images/8/87/Compat_icon_1_7.png', 'Joomla 1.7'], dtype=object) array(['/images/c/c8/Compat_icon_1_5.png', 'Joomla 1.5'], dtype=object)]
docs.joomla.org
Article split requests From Joomla! Documentation Revision as of 16:00, 15 March 2013 by Tom Hutchison (Talk | contribs) Articles and/or pages marked for version specific split. Ideally, this category should remain empty, please verify the listed items are still being discussed. If older than 1 week, proceed with splitting them into the appropriate J namespaces. Pages in category ‘Article split requests’ The following 39 pages are in this category, out of 39 total.
https://docs.joomla.org/index.php?title=Category:Article_split_requests&oldid=82643
2016-02-06T01:48:41
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Information for "Article" Basic information Display titleChunk:Article Default sort keyArticle Page length (in bytes)1,676 Page ID131 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsDisallowed Number of redirects to this page1 Number of subpages of this page17 (0 redirects; 17 non-redirects) Page protection EditAllow all users MoveAllow all users Edit history Page creatorChris Davenport (Talk | contribs) Date of page creation18:32, 11 January 2008 Latest editorMATsxm (Talk | contribs) Date of latest edit11:53, 23 December 2014 Total number of edits21 Total number of distinct authors) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Chunk:Article&action=info
2016-02-06T01:16:44
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Ja purity From Joomla! Documentation Revision as of 11:06, 25 September 2008 by Bino87 See also: Some information about this template; Customising the JA_Purity template Categories: Needs more content, Needs improvement, Templates
https://docs.joomla.org/index.php?title=Ja_purity&oldid=10886
2016-02-06T01:26:22
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Databases From BaseX Documentation. [edit] (see below for more). [edit] Access Resources Stored resources and external documents can be accessed in different ways: [edit] XML Documents Various XQuery functions exist to access XML documents in databases: You can access multiple databases in a single query: for $i in 1 to 100 return db:open('books' || $i)//book/title If the DEFAULTDB option is turned on, the path argument of the fn:doc or fn:collection function. [edit] Raw Files Updated with Version 8.4: items of binary type can be output without specifying the obsolete raw serialization method.(db:retrieve('multimedia', 'sample.avi')) [edit] HTTP Services - With REST and WebDAV, all database resources can be requested in a uniform way, no matter if they are well-formed XML documents or binary files. [edit] Update Resources Once you have created a database, additional commands exist to modify its contents: - XML documents can be added with the ADDcommand. - Raw files are added with STORE. - Existing resources can be replaced with the REPLACEcommand. - Resources can be deleted via DELETE. The AUTOFLUSH option can be turned off before bulk operations (i.e. before a large number of new resources is added to the database). The ADDCACHE option will first cache the input before adding it to the database. This is helpful when the input documents to be added are expected to eat up too much main memory. The following commands create an empty database, add two resources, explicitly flush data structures to disk, and finally delete all inserted data: CREATE DB example SET AUTOFLUSH false ADD example.xml SET ADDCACHE true ADD /path/to/xml/documents STORE TO images/ 123.jpg FLUSH DELETE / You may as well use the BaseX-specific XQuery Database Functions to create, add, replace, and delete XML documents: let $root := "/path/to/xml/documents/" for $file in file:list($root) return db:add("database", $root || $file) Last but not least, XML documents can also be added via the GUI and the Database menu. [edit] Export Data All resources stored in a database can be exported, i.e., written back to disk. This can be done in several ways: - Commands: EXPORTwrites all resources to the specified target directory - GUI: Go to Database → Export, choose the target directory and press OK - WebDAV: Locate the database directory (or a sub-directory of it) and copy all contents to another location [edit] In Memory Database - In the standalone context, a main-memory database can be created (using CREATE DB), which can then be accessed by subsequent commands. - If a BaseX server instance is started, and if a database is created in its context (using CREATE DB), other BaseX client instances can access (and update) this database (using OPEN, db:open, etc.) as long as no other database is opened/created by the server.. [edit] Changelog - Version 8.4 - Updated: Raw Files: Items of binary type can be output without specifying the obsolete rawserialization method. - Version 7.2.1 - Updated: fn:document-uriand fn:base-urinow return strings that can be reused with fn:docor fn:collectionto reopen the original document.
http://docs.basex.org/wiki/Databases
2016-02-05T23:51:57
CC-MAIN-2016-07
1454701145578.23
[]
docs.basex.org
public static interface TokenStream.Tokenizer Interface for a Tokenizer component responsible for processing the characters in a TokenStream.CharacterStream and constructing the appropriate TokenStream.Token objects. void tokenize(TokenStream.CharacterStream input, TokenStream.Tokens tokens) throws ParsingException Processes the characters in the stream and constructs TokenStream.Token objects. Parameters: input - the character input stream; never null. tokens - the factory for TokenStream.Token objects, which records the order in which the tokens are created. Throws: ParsingException - if there is an error while processing the character stream (e.g., a quote is not closed, etc.)
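As a rough skeleton only, a custom tokenizer plugs in as shown below. The body is left as comments because this reference does not show the CharacterStream and Tokens method signatures, and inventing them here would be guesswork.
import org.modeshape.common.text.ParsingException;
import org.modeshape.common.text.TokenStream;

public class MyTokenizer implements TokenStream.Tokenizer {
    @Override
    public void tokenize(TokenStream.CharacterStream input,
                         TokenStream.Tokens tokens) throws ParsingException {
        // Walk 'input' character by character and call the 'tokens' factory
        // once for each token boundary found; the factory records the order
        // in which the tokens are created.
        // Throw ParsingException for malformed input, e.g. an unterminated quote.
    }
}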
http://docs.jboss.org/modeshape/2.2.0.Final/api-full/org/modeshape/common/text/TokenStream.Tokenizer.html
2016-02-06T00:10:40
CC-MAIN-2016-07
1454701145578.23
[]
docs.jboss.org
public interface Unmarshaller See also: Marshaller. boolean supports(Class<?> clazz) Parameters: clazz - the class that this unmarshaller is being asked whether it can unmarshal. Returns: true if this unmarshaller can indeed unmarshal to the supplied class; false otherwise. Object unmarshal(Source source) throws IOException, XmlMappingException Parameters: source - the source to unmarshal from. Throws: IOException - if an I/O error occurs. XmlMappingException - if the given source cannot be mapped to an object.
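A minimal usage sketch, assuming an already-configured Unmarshaller implementation (e.g. a JAXB- or Castor-backed bean wired by the Spring container) and a hypothetical Settings class mapped by it: check supports() first, then unmarshal from a Source.
import java.io.FileInputStream;
import javax.xml.transform.stream.StreamSource;
import org.springframework.oxm.Unmarshaller;

public class UnmarshalExample {

    public static class Settings { }   // hypothetical mapped class, stands in for your own

    public Object readSettings(Unmarshaller unmarshaller) throws Exception {
        if (!unmarshaller.supports(Settings.class)) {
            throw new IllegalStateException("This unmarshaller cannot handle Settings");
        }
        try (FileInputStream in = new FileInputStream("settings.xml")) {
            // unmarshal(Source) returns the object graph read from the XML source
            return unmarshaller.unmarshal(new StreamSource(in));
        }
    }
}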
http://docs.spring.io/spring-framework/docs/3.2.0.RELEASE/javadoc-api/org/springframework/oxm/Unmarshaller.html
2016-04-28T22:01:01
CC-MAIN-2016-18
1461860109830.69
[]
docs.spring.io
Lazy child nodes iteration feature is accessible via the org.exoplatform.services.jcr.core.ExtendedNode extended interface, the inheritor of javax.jcr.Node. It provides a new single method shown below: /** * Returns a NodeIterator over all child Nodes of this Node. Does not include properties * of this Node. If this node has no child nodes, then an empty iterator is returned. * * @return A NodeIterator over all child Nodes of this <code>Node</code>. * @throws RepositoryException If an error occurs. */ public NodeIterator getNodesLazily() throws RepositoryException; From the view of end-user or client application, getNodesLazily() works similar to JCR specified getNodes() returning NodeIterator. "Lazy" iterator supports the same set of features as an ordinary NodeIterator, including skip() and excluding remove() features. "Lazy" implementation performs reading from DB by pages. Each time when it has no more elements stored in memory, it reads the next set of items from persistent layer. This set is called "page". The getNodesLazily feature fully supports session and transaction changes log, so it is a functionally-full analogue of specified getNodes() operation. Therefore, when having a deal with huge list of child nodes, getNodes() can be simply and safely substituted with getNodesLazily(). JCR gives an experimental opportunity to replace all getNodes() invocations with getNodesLazily() calls. It handles a boolean system property named " org.exoplatform.jcr.forceUserGetNodesLazily" that internally replaces one call with another, without any code changes. But be sure using it only for development purposes. This feature can be used with the top level products using eXo JCR to perform a quick compatibility and performance tests without changing any code. This is not recommended to be used as a production solution.
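A minimal sketch of the call pattern described above: cast the JCR node to ExtendedNode and iterate its children lazily. The loop uses standard JCR NodeIterator methods; pages of child rows are read from the persistent layer only as the iterator advances.
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import org.exoplatform.services.jcr.core.ExtendedNode;

public class LazyChildrenExample {

    // Iterates the children of a potentially huge parent node without
    // materializing the full child list up front.
    public void printChildNames(Node parent) throws RepositoryException {
        NodeIterator children = ((ExtendedNode) parent).getNodesLazily();
        while (children.hasNext()) {
            Node child = children.nextNode();
            System.out.println(child.getName());
        }
    }
}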
https://docs.exoplatform.org/public/topic/PLF40/JCR.APIExtensions.API_and_Usage.html
2017-09-19T13:37:06
CC-MAIN-2017-39
1505818685698.18
[]
docs.exoplatform.org
In the course of your work, as a Java developer, you have probably used a lot of Open Source projects (if you haven't, well you are now). What is always amazing but at the same time annoying is that all the projects out there seem to use their own project directory layout flavour. For instance, Apache projects have their own standard (when they respect it), as Sun™ projects do too, ... Not using some sort of a standard directory layout leads to some problems: Nevertheless, all the projects don't use the same type of files (Java, XML, HTML, JSP, Groovy, SQL scripts, ...) so it might be hard to standardize the directory layout. The Standard Directory Layout Pattern addresses the following problem: *[pattern description to be added]* Of course, there are many different strategies, or if you prefer many ways to implement this pattern, but we are interested in the Maven one. At the top level there are just two directories: src and target. The only other directories that would be expected here are metadata like CVS or .svn, and any subprojects in a multiproject build (each of which would be laid out as above). The target directory is used to house all output of the build. The src directory contains all of the source material for building the project, its site and so on. It contains a subdirectory for each type of use: main for the main build artifact, test for the unit test code and resources, site and so on. Within artifact-producing source directories (i.e. main and test), there is one subdirectory for each type of file. For instance, Java source files are stored under the subdirectory java, the resources files (the ones copied directly to the output directory) under the subdirectory resources, ... The name you give to each of those subdirectories usually depends on the plugins you use to manage those types of files (plugins usually assume some default names, so no need to reinvent the wheel there). Here are the directories that Maven's standard set of plugins use: And here is what your project directory layout should look like in most common cases:
http://docs.codehaus.org/exportword?pageId=48030
2015-01-29T06:33:58
CC-MAIN-2015-06
1422115855845.27
[]
docs.codehaus.org
With our decision to start open-sourcing components of Unity, it is important to us to engage our customers and users. This means we want to provide you with all of the securities and flexibilities that having source provides. We’ll also welcome collaborative participation and community development. It is our expectation with this initiative that our community will be able to extend Unity in ways that were previously not possible. Open-source is certainly not a new concept, but we recognize that many Unity users may not have participated in an open-source project before. Therefore, we’ve put together this guide to help you get started. We use distributed version control to version our open-source components on BitBucket. This means you make changes and contribute them back through a process of forking our repository, cloning your fork, pushing your changes to your fork, and opening a pull request for us to review. This might all be new to you, so we’ve tried to help you by going through an example in this guide. We also encourage you to read the various links in the Further Reading section. Questions about licensing, etc, are addressed in the FAQ section.
http://docs.unity3d.com/Manual/ContributingToUnity.html
2015-01-29T06:17:01
CC-MAIN-2015-06
1422115855845.27
[]
docs.unity3d.com
- 'coz concurrency is Groovy The traditional thread-based concurrency model built into Java doesn't match well with the natural human sense for parallelism. While this was not a problem at times, when the level of parallelism in software was low and concurrency offered only limited benefits compared to sequential code, nowadays, with the number of cores on a single main-stream chip doubling almost every year, sequential code quickly looses ground and fails to compete in performance and hardware utilization with concurrent code. Inevitably, for concurrent programming to be effective, the mental models of concurrent systems interactions that people create in their heads have to respect the nature of human brains more than the wires on the chips. Luckily, such abstractions have been around for several decades, used at universities, in telephone switches, the super-computing industry and some other inherently concurrent domains. The current challenge for GPars is to bring these abstractions up to the mainstream software developers to help us solve our practical daily issues. The framework provides straightforward Java or Groovy-based APIs to declare, which parts of the code should be performed in parallel. Collections can have their elements processed concurrently, closures can be turned into composable asynchronous functions and run in the background on your behalf, mutable data can be protected by agents or software transactional memory... Please) Project's main priorities - Clever and clean design - Elegant Java and Groovy APIs - Flexibility through meta-programming - Application-level solutions that scale with number of cores Licencing GPars is distributed under the liberal open-source Apache 2 License. In essence, you are allowed to use GPars on both commercial and open-source projects. Fast Track If you want to start experimenting with GPars right away, use our Fast Track to get up and running within minutes. What people say about GPars Check out the User Voices to hear the opinions of people walking here before you.
http://docs.codehaus.org/pages/viewpage.action?pageId=230394377
2015-01-29T06:29:39
CC-MAIN-2015-06
1422115855845.27
[]
docs.codehaus.org
You're viewing Apigee Edge documentation. View Apigee X documentation. Introduction Fees apply to app developers when they sign up for the rate plan. You can configure fees for any rate plan type except adjustable notification. Fees are optional; you do not have to specify any fees in a rate plan. Configuring fees for a rate plan using the UI Configure fees for a rate plan, as described below. Edge To configure fees, when creating or editing a rate plan select any rate plan type (except Adjustable Notification) and configure: Configuring contract details Configure contract details in the Contract details section when creating or editing a rate plan. Configuring cost of the rate plan Configure the cost of the rate plan in the Cost section when creating or editing a rate plan. Classic Edge (Private Cloud) To configure fees to a rate plan using the Classic Edge UI: - Select the Fees tab in the Rate Plan window. - Enter the following information: Configuring fees \ '{ "advance": false, email:password See Configuration properties for rate plans for a complete list of rate plan options.
https://docs.apigee.com/api-platform/monetization/add-fees-rate-plan?authuser=0&hl=ja
2021-07-23T22:06:46
CC-MAIN-2021-31
1627046150067.51
[]
docs.apigee.com
MASSO Documentation Spindle Speed Override Information: This feature is available on the F2 & F3 Screens. Information: Speed override ranges from 10% to 150% of the specified spindle speed. How to use - Press the F12 Key on the keyboard - Use the + & - keys on your keyboard to change the speed. - You can also use the Pendant to change the speed by rotating the MPG dial in either the + or - direction to increase or decrease the speed. Speed Override set at 100%
https://docs.masso.com.au/index.php/getting-started-guides/machining-with-masso/keyboard-and-key-shortcuts/speed-override
2021-07-23T22:11:26
CC-MAIN-2021-31
1627046150067.51
[array(['https://docs.masso.com.au/application/common/ui_assets/uploads/dd0a9325f3b30a7aa9529984d083a605.png', None], dtype=object) ]
docs.masso.com.au
Snippets are reusable test cases within the scope of a project, so you can create a snippet and refer to it in your test suites. For example, to log in a user we need to perform multiple actions: visit the login form, fill it out, and submit. Instead of adding these test steps to every test case for a logged-in user, we can create a snippet once and then refer to it from many test cases. Besides, to every reference we can assign a set of template variables that can be addressed in the body of the snippet. In other words, we can execute the same snippet, but give it a different input every time. To illustrate the concept we are going to create a simple test case for an imaginary ACME forum app. Let's say we have a test case "user registers and activates the account" where Puppetry visits the forum register page, fills out the registration form, and submits it. So we have a brand-new user in the system and can log in to run test flows for an authorized user. However, to get to the initial state at the beginning of every test group we would need to repeat the login flow again and again (visit the login page, fill out the form, submit). Why not make a snippet and reuse it? Next we press Project / Snippets in the main menu. Snippets are quite similar to suites: they have local targets and test cases. We start by defining targets for the login form. Now we jump into the Snippets tab and create the test case: As we are done with the snippet we can navigate back to the suite: We create a test case "user gets logged in" and click the Add a reference button under it. In the following modal window we select our recently created snippet: After extending the test case with other test steps it looks like this: During the test run all the test steps of the snippet are executed first, so the user gets logged in. We wait until the page is ready (the login form doesn't contain the .categories selector, but the landing page does). Now we can assert that the page header changed (has #user-header-name). In the example above we used the TEST_EMAIL template variable defined during the registration flow. But imagine that we have a number of already registered accounts (let's say one is inactive, one is active, one is privileged) and we want to sequentially log in with each one and assert that the application responds as intended. What we can do is reuse the "user logs in" snippet, but with different emails. Click on the Edit action next to the reference test step: You get a modal window that we know from earlier. Click Local Template Variables to expand the template editing interface and add the TEST_EMAIL variable: Now clone the reference and edit it again. Remove TEST_EMAIL and add it again with a new value. As you run the tests, the references will execute the snippet with the values we assigned.
https://docs.puppetry.app/snippets
2021-07-23T21:09:58
CC-MAIN-2021-31
1627046150067.51
[]
docs.puppetry.app
Using Outer Joins to Define Join Indexes There are several benefits to defining non-aggregate join indexes with outer joins: the outer join preserves unmatched rows from the outer table in the join index. These preserved rows allow the join index to satisfy queries with fewer join conditions than those used to generate the index. For example, the Optimizer can choose to scan the outer table rows of a join index to satisfy a query that only references the outer table, provided that a join index scan would perform better than scanning the base table or redistributing rows.
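A sketch of such a definition follows, using hypothetical customer/orders tables: the LEFT OUTER JOIN preserves customers that have no orders, so the index can also cover customer-only queries. See CREATE JOIN INDEX in SQL Data Definition Language for the exact rules on outer joins in join index definitions.
-- Hypothetical tables; every customer row is kept even if it has no matching order
CREATE JOIN INDEX cust_ord_ji AS
SELECT c.customer_id, c.customer_name, o.order_id, o.order_total
FROM customer c
LEFT OUTER JOIN orders o
  ON c.customer_id = o.customer_id;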
https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/bPtCPNT6Na24TntfBehZug
2021-07-23T22:13:52
CC-MAIN-2021-31
1627046150067.51
[]
docs.teradata.com
Model Zoo The Explore page has a Models section, where you can find all supported neural networks and add them to your account. Search for the relevant model and click the "Add" button to clone the public model to your current workspace. It will appear on the "Neural Networks" page.
https://legacy.docs.supervise.ly/neural-networks/model-zoo/model-zoo/
2021-07-23T23:00:01
CC-MAIN-2021-31
1627046150067.51
[array(['../zoo_a.png', None], dtype=object)]
legacy.docs.supervise.ly
Google IdP for SAML Integration¶ Overview¶ This guide provides an example on how to configure Aviatrix to authenticate against a Google IdP. When SAML client is used, your Aviatrix controller acts as the Identity Service Provider (ISP) that redirects browser traffic from client to IdP (e.g., Google) for authentication. Before configuring SAML integration between Aviatrix and Google, make sure you have a valid Google account with administrator access. Configuration Steps¶ Follow these steps to configure Aviatrix to authenticate against your Google IdP: Step 1. Create a temporary Aviatrix SP Endpoint in the Aviatrix Controller Step 2. Create a Google SAML Application for Aviatrix Step 3. Retrieve Google Google IdP with Controller Login SAML Config If integrating Google IdP with OpenVPN with SAML Authentication This step will ask you to pick a short name to be used for the SAML application name [Endpoint Name]. In the notes below we will refer to this as aviatrix_google. It can be any string that will identify the SAML application you create in the IdP. We will use the string you select for the SAML application name to generate a URL for Google IdP to connect with Aviatrix. This URL is defined below as SP_ACS_URL. This URL should be constructed as: https://<<<your controller ip or host name>>>/flask/saml/sso/<<<aviatrix_google>>> Tip Replace <<<your controller ip or host name>>> with the actual host name or IP address of your controller and <<<aviatrix_google>>> with the [Endpoint Name] you chose to refer to the SAML application. Step 2. Create a Google SAML App for Aviatrix¶ Note This step is usually done by the Google Admin. Login to the Google Admin portal Follow Google documentation to create a new custom application. Click on the Setup My Own Custom App Basic Information Service Provider Details [host]is the hostname or IP of your Aviatrix controller. For example, [Endpoint Name]is an arbitrary identifier. This same value should be used when configuring SAML in the Aviatrix controller. Attribute Mapping Disable “Signed Response” - Open the Service Provider Details for the SAML application just created. Uncheck Signed Response. - Click Save Step 3. Retrieve Google IdP metadata¶ Step 4. Update Aviatrix SP Endpoint¶ Note This step is usually completed by the Aviatrix admin. Google IdP provides IdP Metadata through text obtained in Retrieve Google IdP metadata (Step 3). Continue with updating Aviatrix SAML Endpoint by visiting one of the following links based on your use case: If integrating Google IdP with Controller Login SAML Config If integrating Google. - Click OK Step 5. Test the Integration¶ Tip Be sure to assign users to the new application in Google prior to validating. If you do not assign your test user to the Aviatrix SAML application, you will receive an error. Continue with testing the integration by visiting one of the following links based on your use case: - If integrating Google IdP with Controller Login SAML Config - Click Settings in the left navigation menu - Select Controller - Click on the SAML Login tab - If integrating Google.
https://docs.aviatrix.com/HowTos/SAML_Integration_Google_IdP.html
2021-07-23T22:47:17
CC-MAIN-2021-31
1627046150067.51
[]
docs.aviatrix.com
Sign up for CoreWeave Cloud and generate a kubeconfig from the API Access page. Every time an access token is generated your corresponding kubeconfig will automatically download. Once you have received your credentials, all you have to do is put them in place and download the command line tools. No other setup is necessary, you are instantly ready to deploy your workloads and containers. Cut-and-paste instructions are below. For more detail please reference the official documentation. brew install kubectl curl -LO`curl -s`/bin/linux/amd64/kubectlchmod +x ./kubectlsudo mv ./kubectl /usr/local/bin/kubectl You will have received a pre-populated k8s-conf file from CoreWeave as part of your onboarding package. The snippet below assumes that you have no other Kubernetes credentials stored on your system, if you do you will need to open both files and copy the cluster, context and user from the supplied k8s-conf file into your existing ~/.kube/config file. Replace ~/Downloads with the path to the kube-config supplied by CoreWeave. mkdir -p ~/.kube/mv ~/Downloads/k8s-tenant-test-conf ~/.kube/config Since your new account will not have any resources, listing the secrets is a good start to ensure proper communication with the cluster. $ kubectl get secret git:(master|…NAME TYPE DATA AGEdefault-token-frqgm kubernetes.io/service-account-token 3 5d3h Once access is verified you can deploy the examples found in this repository. Head on over to Examples to deploy some workloads!
https://docs.coreweave.com/coreweave-kubernetes/getting-started
2021-07-23T22:38:02
CC-MAIN-2021-31
1627046150067.51
[]
docs.coreweave.com
Developers Below you will find various resources to aid in the integration and development against our API. Developer Portal Access Have you requested Developer Portal access yet? Click here to fill out the Developer Portal application to request access. Quick Start Guide The purpose of this document is to detail the steps necessary to understand how to develop a proper integration to our API. This document should answer many of the basic questions around how our API operates and how to get integrated quickly and easily. Click here to access the Quick Start Guide. API Reference Many APIs are similar in the way that they handle requests and response. Ours is not much different than the rest. Most of the same concepts apply, only the parameters may vary slightly. This section describes the basics of our API and any information that is specific to our API that you will need to in order to properly code and interact with us. Click here to view the API Reference. Post. Sandbox API Request Logs During development it may be helpful to review your API requests and their responses in order to troubleshoot any issues or unexpected behavior. Fortunately, this can easily be done within our Developer Portal. - Once you are signed into the Developer Portal, click on the Projects link in the left-hand navigation. - On the Projects page, click into the Project you are working on. - Once you click into a Project, you should be viewing the Request Logs tab (by default), and you can use this tab control at any time to return to the API Logs if you should switch tabs. - Within the API Request Logs tab, you will see controls that can be used to search for specific transactions based on Timestamp, Request Method, Response Code, Resource, and Endpoint.
https://docs.fortispay.com/developers
2021-07-23T22:56:58
CC-MAIN-2021-31
1627046150067.51
[]
docs.fortispay.com
Advantages of Indexing Over Hashing There are two categories of query that often perform better with value-ordered indexing. First consider range queries. The following SQL SELECT statement requests all rows from the parts table with a part number ranging between 3517 and 3713, inclusive. The primary key for the table is part_number. SELECT part_number, part_description FROM parts_table WHERE part_number BETWEEN 3517 AND 3713; Depending on the configuration and the database manager, such a query might perform better if executed against a table that uses a value-ordered primary index key rather than a hash-ordered hash key. To enhance the ability of hash keys to retrieve range query data, Teradata Database provides two mechanisms: the capability to store index rows in the order of their index values and the capability to store base table rows within partitions, including range partitions, after they have been hashed to an AMP. See “CREATE INDEX,” “CREATE JOIN INDEX,” and “CREATE TABLE” in SQL Data Definition Language and “CASE_N” and “RANGE_N” in SQL Functions, Operators, Expressions, and Predicates. The only way to avoid the partial hash key issue is to have a thorough understanding of your applications and data demographics before you define your indexes. In other words, if your applications will use partial key selection criteria, either define the index on those frequently retrieved columns or define a secondary index on them. The optimal solution is very dependent on the individual circumstances and there is no single correct way to design for this particular situation.
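For instance, if applications frequently select on part_description alone (a column from the example above, otherwise treat the names as hypothetical), a secondary index on that column keeps such partial-key retrievals from falling back to full-table scans:
-- Unnamed non-unique secondary index on the frequently used selection column
CREATE INDEX (part_description) ON parts_table;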
https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/11XIRrgXJGgQwxm17NJ3iQ
2021-07-23T23:17:18
CC-MAIN-2021-31
1627046150067.51
[]
docs.teradata.com
History History panel lists operations performed in the current session on a current image. You can undo or redo incorrect actions by clicking on correct state of an image or using Ctrl+Z / Ctrl+Y shortcuts. Short-time memory We save history of changes only for a current image. If you change current image, history panel will clear and you will not be able to redo recent actions.
https://legacy.docs.supervise.ly/annotation/history/
2021-07-23T22:55:51
CC-MAIN-2021-31
1627046150067.51
[]
legacy.docs.supervise.ly
You are viewing version 2.23 of the documentation, which is no longer maintained. For up-to-date documentation, see the latest version. Repository Config This page describes spec.spinnakerConfig.config.repository. spec.spinnakerConfig.config.repository repository: artifactory: enabled: searches: - name: baseUrl: permissions: READ: WRITE: repo: groupId: repoType: username: password: Artifactory enabled: true or false. searches: name: The name of the account baseUrl: The base url your artifactory search is reachable at. permissions: - READ: [] A user must have at least one of these roles in order to view this account’s cloud resources. - WRITE: [] A user must have at least one of these roles in order to make changes to this account’s cloud resources. repo: The repo in your artifactory to be searched. groupId: The group id in your artifactory to be searched. repoType: The package type of repo in your artifactory to be searched: maven (default). username: The username of the artifactory user to authenticate as. password: The password of the artifactory user to authenticate as. Supports encrypted value. Last modified September 10, 2020: docs(artifactory): Update operator docs for artifactory (#207) (83ba2aa)
https://v2-23.docs.armory.io/docs/installation/operator-reference/repository/
2021-07-23T23:07:16
CC-MAIN-2021-31
1627046150067.51
[]
v2-23.docs.armory.io
FDFMerge Readme - 1 New Features - 1.1 FDFMerge 7.2 - 1.2 FDFMerge 7.0 - 1.3 FDFMerge 6.0 - 2 Known Limitations and Issues - 2.1 Unicode issues - 2.2 Display problems in Acrobat - 2.3 No support for Signatures - 2.4 No support for FDF Templates - 2.5 Encrypted files may not be used - 2.6 Autosize is not supported - 2.7 The sort and multiple select options for list boxes are not supported - 2.8 Button stamping and merging behavior with regard to formatting - 2.9 FDF files must be in PDDocEncoding - 2.10 Stamping landscaped pages into a button - 3 To Get Help New Features Please see the FDFMerge Options for the full documentation of the new features. FDFMerge 7.2 -Support for comb fields NOTE: This feature is not currently supported on rotated pages –stampnewvalues – partially flatten form fields passed in using FDF and XFDF files. When this feature is present on the command line, all unfilled fields will remain active. FDFMerge 7.0 -mergeflags – Merge F and Ff flags along with the field Values in the FDF. For example, having the following in an FDF file will set the form field to ReadOnly when merging (will not work when stamping form fields, –s on command line) /T(name) /V(John Smith) /Ff 1 -listfonts – Display a list of the fonts available to FDFMerge -nowarning – Do not issue warnings about unused fields that are in the FDF/XFDF files but not present in the PDF form. AutoSize fields are now supported Font Support (Additional fonts now supported) OpenType, TrueType, Type1 and Unicode fonts are now all supported in FDFMerge. In previous versions, FDFMerge used the FontFile parameter in the Form Info file when specifying a Type 1 font. In FDFMerge 7.0, you should only specify the FontName parameter in the Form Info file when using an OpenType, TrueType, Type 1, Unicode font. Some of the classic Base14 fonts are no longer included with the application resources. In particular Times and Helvetica. Both of these fonts are automatically substituted; but, may not appear identical to the classic fonts. If the new font appearance are not accept-able; simply use the versions of Times and Helvetica available on your system. This can be done by setting the Font Directories; please see below. You can run the command $ fdfmerge –listfonts to see a list of all font names available for stamping. The font name should be entered into the Form Info file exactly as it is shown in the font list. Font Directories There are two font directories. The default font directory is found under the AppligentHome directory in APDFLX.X.X/Resource/Font. On Windows, the AppligentHome directory is in the AllUsers Application data directory. Windows : C:\Documents and Settings\All Users\Application Data\Appligent FDFMerge, AP_FONT_DIR is set in the fdfmerge script. The directory set in an environment variable in the script, or be set for the shell. FontFile (deprecated) The FontFile parameter has been deprecated. To use a non-base14 font with FDFMerge, add the font to one of the font directories and specify the FontName in the Form Info File. The FontFile parameter entries will be ignored. Previous versions of FDFMerge would allow the specified font file to override the FontName; currently the FontName has priority. Support for Arabic in form fields FDFMerge now supports Arabic in form fields. In order for the information to appear correctly, the form field must be set “Right To Left”, alignment set to “Right” and contain a font that supports Arabic characters. 
Known Issues with Version 7.0 1) Auto size is not supported for button fields and radio buttons. Auto size not supported in rotated fields 2) At times, text in form fields may automatically crop to the field borders. To disable this behavior, set CropToField (No) in the Form Info file. 3) FontXScale and FontYScale are currently not working in this release FDFMerge 6.0 the file for ISO 32000 compliance, resulting in a document that is PDF version 1.7. Command collections (-cmds) To process multiple commands quickly and efficiently, use the -cmds <filename> option. The file specified by -cmds, the commands file, should contain one or more single line commands. Each command is just like an ordinary FDF Merge command-line without the executable name. The commands file does not support wildcards in filenames. When you use the -cmds option, many of the FDFMerge. Known Limitations and Issues Unicode issues Higher order ASCII characters not supported in XFDF file. If you use higher order ASCII characters, you must use an FDF file, not an XFDF file. Adobe® Acrobat® problem with newline characters. Due to a problem with the way Acrobat handles newline characters in form fields, use either the -s option or -norebuild option when merging double-byte characters. First character in a CJK form field must be a CJK character. /octal values no longer supported in XFDF. Use hexadecimal codes for CJK characters or FDF files for higher order ASCII characters. Unicode on rotated pages. Unicode characters do not appear correctly on rotated pages. Do not mix UTF-16 hexadecimal codes with UTF-8 characters or ASCII. If you use UTF-16 hexadecimal codes in a value field, use all UTF-16 characters for that value. Do not mix UTF-16 characters with UTF-8 characters or ASCII text. Make sure the font for the Field in question is set to one of the supported CJK fonts. Display problems in Acrobat Acrobat sometimes displays crosses in check boxes in a plain font. This is an issue with Acrobat and not FDFMerge. They should display correctly if you do not use the -norebuild option. No support for Signatures Although the Adobe Acrobat forms tool may be used to add a Signature field, it will be ignored by FDFMerge. If you flatten the file, the signature field will be removed as well. No support for FDF Templates The Form Data Format (FDF) supports a construct known as “Templates” where, within the Acrobat environment, the FDF file will cause additional pages to be added to the PDF file as they are needed. FDFMerge does not support FDF Templates and will not add additional pages to a PDF file. Templates also allow use of non-exact names for form fields. FDFMerge will not work unless the field name in the FDF file exactly matches the name of the form field in the PDF file. Encrypted files may not be used FDFMerge cannot open encrypted files. Users must have Edit permission for any files used by FDFMerge. Autosize is not supported When using text fields in PDF forms, be sure to specify a point size for the text since the Autosize feature is not supported by FDFMerge. The sort and multiple select options for list boxes are not supported If your form has one item selected, FDFMerge’s stamped output will show the selection in red (i.e., the list is displayed in full with the selected choice printed in red). If more than one selected list item is entered into the FDF file, FDFMerge will fail. 
If multiple selection or sort is checked in the form field properties for the list box, it will not display properly (the entire list will not be displayed, and the selected entry will print in black). If your list box is set to be sorted, FDFMerge will not sort the data in the PDF form—any list will be printed as it is listed in the PDF form. This behavior is true for both stamped and merged documents. Button stamping and merging behavior with regard to formatting In Acrobat, when you put in a button field, you can specify the button to be Text, Icon, or Icon and Text. Buttons containing text (Text or Icon and Text) will be maintained after a merge, but will not be stamped on the form. When stamping a mixed button of Icon and Text, only the icon will stamp. When you set a border around a PDF icon, specify the PDF from within the Acrobat form (so the PDF form displays the button and the border when you view it in Acrobat), and use FDFMerge on the form with the -s option, with no reference to the button in the FormInfo file or the FDF file (i.e., not changing the button in any way using FDFMerge), the button will display with the border. If you use FDFMerge to put the PDF icon into the button, there will not be a border. In other words, if only Acrobat touches the field, the border will come through. If FDFMerge touches the field, the border will not appear. FDF files must be in PDDocEncoding Files exported directly from Acrobat will work correctly. Files created by hand must use the PDDocEncoding scheme for high ASCII characters. See PDF Reference Appendix D on Adobe’s website. This issue concerns using a page of a PDF document to stamp into a button. When using a page for a button that has been rotated to be in a landscape format and it is stamped or merged into a PDF, the image in the output document will be rotated. The current workaround is to set the rotation of the button in the form field to be 90 or 270 degrees. To change the rotation of a form field go to the Field Properties, Common Properties tab, Orientation. To Get Help Documentation for FDFMerge can be found at /fdfmerge/. Contact technical support by: Please provide the following: Product name and version number Operating system Your name, company name, email address, and phone number Description of your question or problem Responses are typically emailed within one business day.
https://docs.appligent.com/fdfmerge/fdfmerge-readme/
2021-07-23T22:17:29
CC-MAIN-2021-31
1627046150067.51
[]
docs.appligent.com
This option downloads an XLSX file (Excel and others) tailored for working with your accounts' attributes. Mandatory fields are colour coded and there is also some information about each column in the notes that you can read if you hover over a column heading. Where a field has restricted values (such as a list of roles or departments) there is a drop-down list in the cell. This will only stop you typing or pasting values into the restricted cells if the values you are entering are invalid, so there is no problem with pasting in lists of data (as long as the values are valid). Columns can be included in any order, and non-mandatory columns you aren't putting any data in can be removed. When you're ready to upload the file, you need to save it as CSV format as all others will be rejected.
Anything to watch out for?
Computers are famously literal and they will try to do exactly what they think you are telling them to. Care must be taken in certain areas.
- The permission set column, if included on a modify upload, must include any permission sets you want the account to keep, because leaving this field blank will remove all permission sets from the specified account.
- Mandatory fields - some are mandatory only in combination with others - e.g. on an account creation upload, password is only mandatory if you're setting status as 'active'. Some fields are mandatory on creation uploads, but not on update uploads. The template will tell you.
- If you're sending a modify upload based on a data download - the best way to avoid problems on a modify upload is to remove any columns that are not the username and are not changing any data. See: submitting uploads.
- Character set - the upload needs to be in the ISO-8859-1 character set, which means that some characters will not be accepted - for example curly apostrophes and other characters outside that range.
Why XLSX format?
XLSX is a format usable by most modern spreadsheet packages including many free and open source ones. It enables more help for users on templates that can be unique to a customer.
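Because the upload must be saved as CSV in the ISO-8859-1 character set, it can be worth checking a file for unsupported characters before submitting it. The Python sketch below is one way to do that; it is not an OpenAthens tool, the filename is a made-up example, and it assumes the file on disk is currently UTF-8 encoded.

# Illustrative check: report any characters in a CSV upload that fall outside
# ISO-8859-1 (Latin-1), e.g. curly apostrophes pasted in from a word processor.
from pathlib import Path

upload = Path("bulk-upload.csv")  # hypothetical filename
text = upload.read_text(encoding="utf-8")  # assumes the file is currently UTF-8

problems = []
for line_number, line in enumerate(text.splitlines(), start=1):
    for column, char in enumerate(line, start=1):
        try:
            char.encode("iso-8859-1")
        except UnicodeEncodeError:
            problems.append((line_number, column, char))

if problems:
    for line_number, column, char in problems:
        print(f"Line {line_number}, column {column}: {char!r} is not ISO-8859-1")
else:
    print("No problem characters found; safe to save and upload as ISO-8859-1.")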
https://docs.openathens.net/display/public/MD/Bulk+template
2021-07-23T21:18:15
CC-MAIN-2021-31
1627046150067.51
[]
docs.openathens.net
URL:
Overview
Kinetica's Vector Tile Service (VTS) serves vector tiles generated from geospatial table data. Input geometries are pre-processed upon ingestion for faster vector tile generation. The data source, geographical position, and zoom level of each Vector Tile are specified in a VTS request. The requested Vector Tile is then returned in the response.
The VTS offers a couple of advantages over server-side WMS calls:
- VTS supports client-side control of styling like image fill
- When using vector tiles, the browser only requests new information as needed and caches data along the way; WMS output must be re-rendered on every pan/zoom
Important: The Kinetica VTS requires access to GPUs, i.e. VTS cannot be used on an Intel build.
Configuration
Before using the VTS, the service must be enabled and configured via the gpudb.conf configuration file.
Settings:
Usage
Base VTS URI:
http://<kinetica-host>:<port>/vts/<layer>/<z>/<x>/<y>.pbf?attributes=<columns>
Important: The VTS URL needs to be specified in the client-side visualizer's configuration.
URI parameters:
- <layer>: name of the table to generate tiles from
- <z>, <x>, <y>: zoom level and tile column/row of the requested tile
- attributes: the geometry (WKT) column(s) of the table to render
Example
Below is a snippet of a Javascript Mapbox style specification using Kinetica's VTS URL as a source:
// Config
var tableName = "nyc_neighborhood";
var wktColumn = "geom";
var kineticaUrl = "";
// Mapbox GL
map.on('load', function () {
    map.addLayer({
        "id": tableName + "_layer",
        "version": 8,
        "type" : "fill",
        "source": {
            "type": "vector",
            "tiles": [kineticaUrl + ":9191/vts/" + tableName + "/{z}/{x}/{y}.pbf?attributes=" + wktColumn],
            // Note Mapbox uses the params in curly braces as dynamic values. Don't change those.
            "maxzoom": 20
        },
        "source-layer": tableName,
        "paint": {
            "fill-color": "#EDF00F",
            "fill-outline-color": "#000000"
        }
    });
});
Limitations and Cautions
- Vector tiles are kept in memory, so the zoom levels should be used to keep the memory usage of tiles at a reasonable level. A higher zoom level typically results in more tiles and more memory usage
- The VTS currently does not support feature attributes
- Since VTS sends feature information for each table row to the browser, the client can get overwhelmed with data; performance is dependent upon client hardware
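For a quick smoke test of the service from outside the browser, a single tile can be requested directly with the URI pattern shown above. The Python sketch below does that with the requests library; the host, port, table name, WKT column and tile address are placeholders and must be replaced with values from your own instance.

# Illustrative smoke test: request one vector tile from the Kinetica VTS
# using the documented URI pattern. Host, port, table and column names
# are assumptions - substitute values for your own instance.
import requests

host = "kinetica-host"        # hypothetical hostname
port = 9191                   # port used in the example above
table = "nyc_neighborhood"    # hypothetical table name
wkt_column = "geom"           # hypothetical WKT column
z, x, y = 12, 1205, 1539      # an arbitrary tile address

url = f"http://{host}:{port}/vts/{table}/{z}/{x}/{y}.pbf"
response = requests.get(url, params={"attributes": wkt_column}, timeout=30)

print("HTTP status:", response.status_code)
print("Tile size (bytes):", len(response.content))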
https://docs.kinetica.com/7.1/api/rest/vts_rest/index.html
2021-07-23T22:07:06
CC-MAIN-2021-31
1627046150067.51
[]
docs.kinetica.com
Regulatory Change Management
The ServiceNow® Regulatory Change Management application enables you to check upcoming regulatory changes, assess their impact, and implement risk and compliance-related changes. The application ensures overall regulatory compliance.
Request apps on the Store
Visit the ServiceNow Store website to view all the available apps and for information about submitting requests to the store. For cumulative release notes information for all released apps, see the ServiceNow Store version history release notes.
Explore
- Overview of Regulatory Change Management application
- User roles in Regulatory Change Management
- Regulatory Change Management workflow
Set up
- Download and install Regulatory Change Management
- Configure Regulatory feeds
- Impact assessments for regulatory event feeds
- Manage regulatory change tasks
- Manage source document import tasks
- Create an action task
- Create and view issues related to Regulatory Tasks
- Set up RSS feeds
- Administration module
- Map an internal taxonomy
Develop
- Developer training
- Developer documentation
Troubleshoot
- Ask or answer questions in the community
- Search the Known Error Portal for known error articles
- Contact Customer Service and Support
https://docs.servicenow.com/bundle/quebec-governance-risk-compliance/page/product/grc-rcm/reference/reg-change-mgmt-landing-page.html
2021-07-23T23:16:21
CC-MAIN-2021-31
1627046150067.51
[]
docs.servicenow.com
Debugging search template errors Error logging Funnelback logs user interface issues to the modernui log. These are accessible via the log viewer from the collection log files for the collection. The relevant log files are: modernui.Public.log: contains errors for searches made against the public HTTP and HTTPS ports. modernui.Admin.log: contains errors for searches made against the administration HTTPS port. Funnelback also includes some settings that allow this behaviour to be modified so that some errors can be returned as comments within the (templated) web page returned by Funnelback. The following configuration options can be set on a collection’s configuration while you are working on template changes: ui.modern.freemarker.display_errors=true ui.modern.freemarker.error_format=string The error format can also be set to return as HTML or JSON comments. The string error format will return the error as raw text, which results in the error message being rendered (albeit unformatted) when you view the page - this is the recommended option to choose while working on a HTML template as the errors are not hidden from view. Setting these causes the template execution to continue but return an error message within the code. Data model log object The data model includes a log object that can be accessed within the Freemarker template allowing custom debug messages to be printed to the modern UI logs. The log object is accessed as a Freemarker variable and contains methods for different log levels. The parameter passed to the object must be a string. The default log level used by the modern UI is INFO. It is not possible to change this on a per-collection level and changing the level globally requires administrative access. This means when debugging your templates you will probably need to print to the INFO level - but don’t forget to remove your logging once you have completed your testing otherwise a lot of debugging information will be written to the log files for every single query. e.g. <#-- print the query out at the INFO log level --> ${Log.info("The query is: "+question.query)} <#-- print the detected origin out at the DEBUG log level --> ${Log.debug("Geospatial searches are relative to: "+question.additionalParameters["origin"]?join(","))}
https://docs.squiz.net/funnelback/docs/latest/build/results-pages/search-results-html/template-debugging.html
2021-07-23T22:52:50
CC-MAIN-2021-31
1627046150067.51
[]
docs.squiz.net
BPMN API
Use this API to generate BPMN files for given processes. You can access the Swagger definition of these APIs using this URL (replace "pz.symbioweb.com" with the hostname and port of your Symbio instance). It is also recommended to use Refit for building the corresponding requests; visit the Refit project page for further information.
Scenarios
- Use the GET endpoint to create a BPMN XML definition for a given process element.
- Use the POST endpoint to import the BPMN document which is posted to that endpoint.
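The exact endpoint routes are listed in your instance's Swagger definition; the sketch below only illustrates the general shape of the two scenarios using Python's requests library. The base URL uses the placeholder host mentioned above, and the paths, element ID and authorization header are assumptions rather than the real Symbio routes.

# Illustrative sketch of the two BPMN API scenarios described above.
# Paths, element ID and auth header are assumptions - take the real
# routes from your instance's Swagger definition.
import requests

base_url = "https://pz.symbioweb.com"          # replace with your Symbio host (and port)
headers = {"Authorization": "Bearer <token>"}  # hypothetical auth scheme

# Scenario 1: GET a BPMN XML definition for a given process element.
element_id = "00000000-0000-0000-0000-000000000000"   # hypothetical element ID
export = requests.get(
    f"{base_url}/bpmn/export/{element_id}",            # hypothetical route
    headers=headers,
    timeout=30,
)
export.raise_for_status()
with open("process.bpmn", "wb") as handle:
    handle.write(export.content)

# Scenario 2: POST a BPMN document to the import endpoint.
with open("process.bpmn", "rb") as handle:
    imported = requests.post(
        f"{base_url}/bpmn/import",                      # hypothetical route
        headers=headers,
        data=handle.read(),
        timeout=30,
    )
imported.raise_for_status()
print("Import response:", imported.status_code)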
https://docs.symbioworld.com/developer/rest-api/overview/bpmn-api/
2021-07-23T21:57:42
CC-MAIN-2021-31
1627046150067.51
[]
docs.symbioworld.com
modo.FormContainer
Use this container to create edit forms for your data. You can add different editing controls to it (e.g. Textfields, ToggleButtons, Dropdowns... basically everything with a get() and set() method) and assign each control to a specific value of an object or Backbone Model. When you pass an object or Backbone Model to the FormContainer, each value is provided to the connected element. If autosave is enabled, all data from all elements is automatically assigned back to the original object/model structure upon changes.
You can also use the Modo FormContainer to collect and transfer data to your server by either an AJAX request, or a standard form submission.
Constructor
modo.FormContainer(params) returns modo.FormContainer
The following list of parameters can be used in the params object, passed to the constructor. Parameters in [brackets] are optional.
Properties
dirty (Bool)
This flag will be set to true when one of the contained, keyed set/get enabled elements fires a change event. It will be set back to false after a set() or save() call. Note: This will never switch to false in an autosave-enabled FormContainer!
defaultData (Object)
The blank data object will be used as form data when the set() function is called with no data. Useful for setting default values for new data objects.
Inherited Properties from modo.Container
Methods
add(...) returns this
Use this function to add one or more modo / jQuery / DOM elements to the container. Will trigger the Backbone Event "add". Either pass:
- Modo elements directly
- DOM/jQuery elements directly
- Modo elements encapsulated in an object to add them with keys. Example: {mykey: someModoElement}
remove(key, [force=false]) returns this
This will remove the element with the given key from the container. You cannot remove elements nested inside a modo.FormSlot container by default. Pass force = true to remove the parent FormSlot container of the given element as well. Be careful, this might remove other elements with different keys that are also stored in that FormSlot!
removeAll() returns this
Will remove all elements from that FormContainer.
set(data) returns this
Will pass a new dataset into the container and will populate all children that have a set() function and a given key with their matching data. When you omit the data property, the form will be reset to the defaultData you have pre-defined.
reset([options]) returns this
Same as calling set() without a parameter, but may be easier to remember.
get() returns object
Will return a getJSON()-like formatted object with all current values from all elements with a get() method and a populated key.
getElements() returns array
Returns an array of all added elements.
save() returns this
Writes all changed data back to the given dataset.
send(options) returns this
This method can be used to transport your collected data via HTTP. Submit the parameter ajax = true to have it sent with an AJAX request (and the result returned in a callback), or set ajax = false to submit the data like a traditional <form> tag (triggers a page load).
Available Options
focus() returns this
Will try and set the input focus to the first element.
Inherited Methods from modo.Element
Events
An event is triggered when the form's dataset has been changed through a call to set().
save
Triggered after the save() method has been called.
Inherited Events from modo.Container
add
Triggered when a child has been added through add()
remove
Triggered when a child has been removed through remove()
CSS Classes
mdo-formcontainer
Will be added to the container element.
http://docs.modojs.com/en/reference/formcontainer
2017-06-22T18:25:56
CC-MAIN-2017-26
1498128319688.9
[]
docs.modojs.com
Part 1: Build a Static Widget Introduction. In this first part we are going to set up our dev environment, scaffold a module and then build a simple Widget inside it. Prerequisites This course assumes the following: You have some experience using Orchard and understand its core concepts. Refreshers and links to related guides will be provided. You can read and write C# code. You have some experience with ASP.NET MVC. This doesn't need to be deep but you should be aware of Razor templates, views, strongly-typed models and similar basics. The course was written and tested against Orchard v1.9.2. It should work in new 1.x branch releases as they come out. Getting help If you get stuck or need some support at any point in the course there are several places you can turn: Open an issue on the Orchard Doc GitHub repo. Setting up First things first. You need to follow the setting up for a lesson guide. This will take you through the initial steps to set up your dev environment and pull a fresh copy of the source code down. When you've completed it please use your back button to come back to this course. Getting the most out of this course Writing an Orchard module that actually does something is going to contain a minimum of 9 different files. You will need to do a lot of development before you can run your module code and see it working in Orchard. At first you might be overwhelmed by this, but here is a little tip; don't be. Just forge ahead with the tutorial and don't worry if terms like drivers, content parts, or placements seem unfamiliar at the moment. As you continue with your module development you will come across these files many times over. Before long you will start recognizing these core files and you will see how it all fits together. Course structure Throughout the course we will alternate between discussing topics and implementing them. The discussion may contain example code or other example scenarios. So that there is no confusion for you as to what you should be doing, when it comes to implementing these lessons into the module it will be explained step-by-step via numbered lists. Later on in the course, as the topics become more advanced, we may go through several sections of discussion before wrapping up the lessons into changes to the codebase. You will also occasionally come across Bonus Exercise sections. These are completely optional. You can skip them, complete them at the time, or come back after completing the course to complete them. They are suggested when there is an extra feature you could implement using the skills you have just learned. Getting started Now that you've completed all of the setup tasks you will have a fresh copy of Orchard configured and ready to go. The rest of this part of the course will walk you through the process required to scaffold an empty module and then build a simple Widget inside of it. Command line scaffolding with Orchard.exe You should now be looking at Visual Studio. Down the side, in your Solution Explorer window you will see many files and folders. The first step to take is to collapse all of the projects down. Its a long list and we need to be able to see an overview of the solution so we can start working with it. You don't need to collapse these individually by hand however: If your Solution Explorer window is not visible click View, Solution Explorer. Click the Collapse All icon in the toolbar along the top of the solution explorer. 
It looks like this: If you expand your Modules folder you will see a long list of the modules which come packaged with Orchard: There is a utility that is packaged with each copy of Orchard which will let us add our own module into this list. It is called orchard.exe. This is a command line utility which will scaffold up a new empty module and add it to the main solution. There are also other commands you can use with this utility. To scaffold a new module: Press the Save All button (or press Ctrl-Shift-S). It's. Note: If you don't see orchard.exein the binfolder then you didn't follow the steps in the setting up for a lesson guide. You need to have built the solution at least once for this file to exist. Press Ctrl-Shift-Bwithin Visual Studio to build the solution. After a short pause while it loads you will then be presented with the Orchard command line: Note: There is a separate article where you can learn more about orchard.exe and its features. You don't need to read it to understand this course but it will be useful to review in the future as part of your overall training. Type the following command: feature enable Orchard.CodeGenerationand press enter. This will activate the code generation features of orchard.exe. Note: If you get an error saying 'No command found matching arguments "feature enable Orchard.CodeGeneration"' then you didn't follow the steps in the setting up for a lesson guide. You need to run the solution and go through the Orchard Setup screens before this command is available. The code generation command that we will be using is codegen module. Type help codegen module module by entering the following command: codegen module Orchard.LearnOrchard.FeaturedProduct. If you read the help in the last step you might be wondering why we didn't include the /IncludeInSolution:trueargument. This defaults to true so you don't need to add it. Close the Orchard command-line window. This has now created a new, empty module in the file system. Switching back to Visual Studio should show you the File Modification Detected dialog: Click Reload. Note: If you had unsaved changes in your Solution file then click the Dismiss option and add the project manually. In the Solution Explorer, Right click on the Modulesfolder. Choose Add, Existing Project, then navigate to .\src\Orchard.Web\Modules\Orchard.LearnOrchard.FeaturedProduct\, select Orchard.LearnOrchard.FeaturedProduct.csprojand press Open. The basic framework for a module now exists inside the modules section of your solution: Core concepts refresher If you are at the stage of wanting to build modules for Orchard then you should already be familiar with the concept of Content Types, Widgets, Content Items and Content Parts. These are all things that you can manage via the admin dashboard and you will have worked with them if you have built any kind of site in Orchard. To refresh your memory: Content Type: The template for a type of content in Orchard. The most common example is the Pagecontent type which provides the structure for a page of content in an Orchard site. Widgets: You can also make a content type that works as a Widget. The Widgetis a special variation of content type which can be placed into one of the many Zonesa template defines. It's manageable via the admin dashboard at run-time. Content types can opt-in to this system by configuring their Stereotypesetting to Widget. Content Item: This is an instance of a specific content type. 
When you create a new Pagein Orchard and fill it with content, you have created a Content Item with a Content Type of Page. Content Part: A small module providing some specific functionality. The Content Type is made up by attaching various Content Parts to it. For example you could have a comments content part. It just manages a block of comments for whatever it is attached to. The same comments content part could be attached to a Pagecontent type, a Blogcontent type, or within a Widget. What we will be building As you might have guessed from the module name, we are going to build a very simple featured product module. This first step into extending Orchard will be a small one. The featured product module will be a Widget which shows a static message listing the featured product with a link to that page. It's not going to have any configurable settings behind it so we won't need to look at the database side of things yet. It's not going to be powered by an actual product system. A Widget is a great starting point because it doesn't need to worry about menu settings, titles, URLs or integration into the admin dashboard. It will be a simple banner which you can display on your site by adding a widget via the admin dashboard. This will be enough to show the core concepts of a module. We will come back and make improvements in the next three parts of this course. Let's get started with some development by adding classes and other files to our module. Content part The content part class is the core data structure. When you scaffolded the module it automatically made you a Models folder. Now we need to add the FeaturedProductPart class to this folder: Right click on the Modelsfolder. Choose Add Choose Class... In the Name: field type FeaturedProductPart Click Add Your new class will be created and opened up in the Visual Studio editor. Important note: In order for Orchard to recognize Content Part classes they must be in a namespace ending in .Models. Because you already added this class within the Modelsfolder the namespace is automatically wrapped around your class. In the future, when you're making your own classes don't forget to ensure that you follow this namespace structure. Your content part class will need to derive from the ContentPart class. Normally we would add public properties to store all the related data but as we are keeping it simple this first example won't have any. Add the ContentPart inheritance by following these steps: Type : ContentPartafter your FeaturedProductPartclass definition to inherit from the ContentPartclass. Wait a second and the red squiggles will appear underneath the class. Add the namespace by pressing Ctrl-.on your keyboard to bring up the Quick Actions menu. Select the using Orchard.ContentManagement;option and press enter. That's all you need to do for your first ContentPart class. Your FeaturedProductPart.cs file should now look like this: using Orchard.ContentManagement; namespace Orchard.LearnOrchard.FeaturedProduct.Models { public class FeaturedProductPart : ContentPart { } } Data migrations When your module is enabled in the admin dashboard, Orchard will execute a data migration process. The purpose of the data migration is to register a list of the features contained in the module and any data it uses. We aren't going to use this yet, but the migration is also used for upgrades. As you work on your modules you will want to add and remove bits. 
The data migration class can make changes and you can transform your existing data to meet your new requirements. The data migration class can be created by hand, following a similar process as the last section but we can also scaffold it with the orchard.exe command line. Let's dive back in to the command line and add a data migration class to the module. Press the Save All button (or press Ctrl-Shift-S). Its. After a short pause while it loads you will then be presented with the Orchard command line: We enabled the code generation feature when scaffolding the module but if you have been playing with Orchard or are just using this guide as a reference it can't hurt to run the command a second time to make sure. Type the following command: feature enable Orchard.CodeGenerationand press enter. This will activate the code generation features of orchard.exe. The command that we will be using is codegen datamigration. Type help codegen datamigration data migration class by entering the following command: codegen datamigration Orchard.LearnOrchard.FeaturedProduct. Close the Orchard command-line window. This has now created a new data migration the file system called Migrations.cs. It will be in the root folder of your module. Switching back to Visual Studio should show you the File Modification Detected dialog: Click Reload. Note: If you had unsaved changes in your Solution file then click the Dismiss option and add the class manually. In the Solution Explorer, right click on the Orchard.LearnOrchard.FeaturedProductfolder. Choose Add, Existing Item, then navigate to .\src\Orchard.Web\Modules\Orchard.LearnOrchard.FeaturedProduct\, select Migrations.csand press Add. Now you have a Migrations.cs file in the root folder of your module's project. By default it has an empty method called Create() which returns an int. For the moment, returning a value of 1 is fine. It's the version number of your data migration and we will look into it in more detail later in this course. As discussed earlier, the Widget is just a ContentType with a Stereotype of Widget. A ContentType is basically just a collection of ContentParts. Every ContentType should contain the CommonPart which gives you the basics like the owner and date created fields. We will also add the WidgetPart so it knows how to widget. Finally we also include the content part we are building, FeaturedProductPart. Let's update the Create() method to implement these plans: Open Migrations.csfrom within your module project if you don't already have it open. Replace the Create()method with the following: public int Create() { ContentDefinitionManager.AlterTypeDefinition( "FeaturedProductWidget", cfg => cfg .WithSetting("Stereotype", "Widget") .WithPart(typeof(FeaturedProductPart).Name) .WithPart(typeof(CommonPart).Name) .WithPart(typeof(WidgetPart).Name)); return 1; } Orchard doesn't have a CreateTypeDefinitionmethod so even within the create we still used AlterTypeDefinition. If it doesn't find an existing definition then it will create a new content type. Ctrl-.on the red squiggles under FeaturedProductPartand CommonPartthen let Visual Studio add the required usingstatements. Try the same under the WidgetPart- you will see Visual Studio doesn't understand where to point the usingstatement at and it only offers you options to generate stubs. We don't want this. Right click on your References and choose Add Reference... Click the Projects tab on the left. Scroll down until you can see Orchard.Widgetsin the list. 
Hover your mouse over it and a checkbox will appear. Click the checkbox for Orchard.Widgets. Click OK. Now you can try resolving the red squiggly lines under WidgetPartagain: You will now have the correct using Orchard.Widgets.Modelsoption presented to you. Select it. Save your progress so far by clicking the Save all button (or press Ctrl-Shift-S). That's all for the data migration, your Migrations.cs should now look; } } } Update dependencies as you go along In the Create() method of the data migration we introduced a dependency on WidgetPart. This means that our module won't run without the Orchard.Widgets module being installed and enabled within the system. In order to let Orchard know that we have this dependency we need to record it in a manifest file called Module.txt. This is a text file written in YAML format which stores meta information about the module like the name, author, description and dependencies on other modules. If you haven't heard of YAML before don't worry, it is a simple format to understand. We will look at the Module.txt manifest file again in more detail in part 4 of this course, for now we just need to go in and record the dependency we have created with Orchard.Widgets. It is important to record this information as soon as we make a dependency on a module. If we don't record the information then your module can cause exceptions for your users at run-time. You really need to get into the habit of doing it straight away, because not only are they are easy to forget but if you have the module that you depend on already enabled you won't see any errors but your users will. Lets update the manifest now to include the Orchard.Widgets dependency: In the solution explorer, open up Module.txtwhich will be located in the root folder of the module. The last three lines describe the main feature of the module (we have only one feature in this module): Features: Orchard.LearnOrchard.FeaturedProduct: Description: Description for feature Orchard.LearnOrchard.FeaturedProduct. Add an extra row underneath Description:and add a Dependencies:entry like this: Features: Orchard.LearnOrchard.FeaturedProduct: Description: Description for feature Orchard.LearnOrchard.FeaturedProduct. Dependencies: Orchard.Widgets The indentation is important as it creates hierarchy within a YAML document. Indent the line with 8 spaces. How is all this magic working? So far the ContentPart class has been magically detected as long as it uses the .Model namespace, now the data migration is automatically detected just for deriving from DataMigrationImpl. How is all of this happening? Under the hood Orchard uses Autofac, an Inversion of Control container. If you're interested you can learn about how it's integrated in the how Orchard works guide. Don't worry though, you don't really need to know anything deeper about it other than it's in the background and it automatically scans & registers your components for you. Later on we will use Autofac's dependency injection which let us automatically get instances of things we need supplied directly into our classes. Content part driver Everything you see in Orchard is composed from Shapes. If you don't know about shapes you can learn more about them in the accessing and rendering shapes guide. A content part driver is a class that composes the shapes that should be used to view and edit content parts. Drivers live in their own folder called Drivers. 
A basic driver class will contain three methods: a display driver for viewing a content part in the front end, an editor driver for presenting an editor form in the admin dashboard and an update method to handle changes submitted from the editor form. As the shapes are created in the driver you can also pass data through to a view. Views are discussed in the next section but first we need to wire in the plumbing. The widget that we are building has no configuration, so all this driver will need is the Display method configuring. The other methods will be added in when we revisit the widget it part two. There aren't any command line scaffolding commands for setting up new drivers so you will need to create it manually: Make a new Driversfolder (Right click on the module project in the solution explorer, click Add, New Folder) Add a new class called FeaturedProductDriverby right clicking the Driversfolder, clicking Add, Class... and typing FeaturedProductDriverfor the name (Visual Studio will automatically add the .cson to the end for you) Extend the class so it derives from ContentPartDriver<FeaturedProductPart>(note that the generic type class ends in Part not Driver). Add the missing namespaces using the Ctrl-.shortcut. In the future we will do a lot with the driver class and the way that it builds its display but for this simple example all we need is a simple class to wire the shape to a view. - Inside your FeaturedProductDriverclass add this single method: protected override DriverResult Display(FeaturedProductPart part, string displayType, dynamic shapeHelper) { return ContentShape("Parts_FeaturedProduct", () => shapeHelper.Parts_FeaturedProduct()); } This says that when displaying the FeaturedProductPart return a shape called Parts_FeaturedProduct. By default Orchard will look for this shape in Views\Parts\FeaturedProduct.cshtml which is what we will build next. Your FeaturedProductDriver.cs file should now look like this:()); } } } View Orchard uses Razor template views to display it's shapes. You can supply strongly-typed data models and use many of the normal ASP.NET MVC Razor view features within Orchard. For this first widget our needs are simple and we will only be putting plain HTML markup inside the .cshtml file: Add a new folder inside the Viewsfolder called Parts(Right click on the Viewfolder in the solution explorer, click Add, New Folder and type Parts). Add a new .cshtmlRazor view within the Partsfolder called FeaturedProduct.cshtml Within the FeaturedProduct.cshtmlview file add the following HTML markup: <style> .btn-green { padding: 1em; text-align: center; color: #fff; background-color: #34A853; font-size: 2em; display: block; } </style> <p>Today's featured product is the Sprocket 9000.</p> <p><a href="~/sprocket-9000" class="btn-green">Click here to view it.</a></p> Placement Almost all of the key elements are in place now except for this last one. The configuration inside a driver class tells Orchard how to render that content part. Content parts always exist within a larger composite content item. Placement is used to tell Orchard where to render these components. The Placement.info file goes in the root folder of the module. It is an XML file with a simple structure. You can learn more about the Placement.info file in the understanding placement.info guide. Add the Placement.info file to your module: Right click on the module project in the solution explorer. 
Choose Add, New Item to get to the add item screen: From the templates categories in the left hand side, choose General Find Text File in the list Enter Placement.infoin the Name: field. Click OK This module has a single shape so we need to set up a <Place> for that shape. - Add this snippet to the empty Placement.infofile: <Placement> <Place Parts_FeaturedProduct="Content:1"/> </Placement> The Content:1 is the zone and priority of that shape. A shape will have several zones defined for it. Typically these include the header, content, meta and footer but they can have any combination of zones defined. In this case the Content is the main content area. The priority means that it will be near the top of the content zone. In more complicated modules there could be several shapes. Setting different priorities will let you organize their display order when you want them to be in the same zone. For example, if another shape had a place of Content:0.5 it would go before it, and Content:15 would go after it. Theme developers can customize these layout preferences by providing their own Placement.info and overriding your initial configuration. This lets theme authors customize your module without having to make changes to the actual code. This means when the module is upgraded to a new version the theme developers' changes will not be overwritten. Trying the module out in Orchard Congratulations, you've made it to the pay off, using the module in Orchard! The last few steps will enable the module in Orchard and assign the widget to a zone in the active template: In Visual Studio, press Ctrl-F5to start the dev server without debugging mode enabled. Log in to the admin dashboard. The login link will be in the footer of the site. Click Modulesin the navigation menu. The first item in the list should be our module, Orchard.LearnOrchard.FeaturedProduct: Click Enable to activate the plugin: You can now add the Widget to a layer in the site. Click Widgets from the navigation menu. In the AsideFirstsection of the Widgets page click the Add button: The Featured Product Widgetwill be in the list, click the item to select it: You can leave most of the Widgetsettings on their defaults. Just set the Titleto Featured Product: Click Save at the bottom of the page. If you go back to the main site now you will see the module in the site: We haven't created a page for the Sprocket 9000 so clicking the button will give a 404 at the moment. Download the code for this lesson You can download a copy of the module so far at this link: To use it in Orchard simply extract the archive into the modules directory at .\src\Orchard.Web\Modules\. For Orchard to recognize it, the folder name should match the name of the module. Make sure that the folder name is Orchard.LearnOrchard.FeaturedProductand then the modules files are located directly under that. Conclusion This first guide in the module introduction course has shown the main components of a module. In the next part we will extend the module to add some interactivity to the module. This means adding database backing, an editor view, configuration settings and we will dip our toes in with some of the Orchard API features. In the final part of the course we will review the module and clean it up to ensure we follow development best practices that have been missed so far. This was a long guide. Take a break now and when you're refreshed come back and read part two of the course.
http://docs.orchardproject.net/en/latest/Documentation/Getting-Started-with-Modules-Part-1/
2017-06-22T18:18:03
CC-MAIN-2017-26
1498128319688.9
[array(['../../Attachments/getting-started-with-modules-part-1/modules-list.png', None], dtype=object) array(['../../Attachments/getting-started-with-modules-part-1/scaffold-complete.png', None], dtype=object) array(['../../Attachments/getting-started-with-modules-part-1/add-contentpart-class.png', None], dtype=object) array(['../../Attachments/getting-started-with-modules-part-1/contentpart-add-inheritance.png', None], dtype=object) array(['../../Attachments/getting-started-with-modules-part-1/enable-viewmodule.png', None], dtype=object) ]
docs.orchardproject.net
Tuning Link Performance for WANs
Typically, links that are configured between Solace appliances—both for VPN bridge connections, and neighbor links for multiple-node routing—have performance parameters set by default through Command Line Interface (CLI) configuration commands that are ideal for connectivity over a Local Area Network (LAN) or high-speed Metropolitan Area Network (MAN). However, when deploying Solace routers in a Wide Area Network (WAN), where long message round-trip times and high latencies are typical, Solace recommends tuning VPN bridge and neighbor link parameters to improve link performance over WANs. This is done using the following CLI command configuration options.
The CLI command configuration options available for tuning either VPN bridge or neighbor link parameters to improve link performance over a WAN include:
- Enable Data Compression—Enabling compression saves precious bytes on narrow WAN pipes, allowing a higher message rate over the WAN link. To optimize bandwidth use over a WAN, choose the compressed-data option when setting up:
  - VPN bridge connections
  - multiple-node routing links between neighboring routers
  See Managing Multi-Node Routing and Configuring Remote Message VPNs, respectively, for command details on the compressed-data options.
- Set Higher Initial Congestion Window Sizes—To prevent latency spikes due to TCP slow-start (possibly due to a combination of bursty traffic over long latency links), the network administrator can configure a higher initial congestion window size on the WAN link, so that a high initial bandwidth is available to be consumed. This initial congestion window is used after connection establishment or recovery from idle. See Configuring TCP Initial Congestion Window Size for command details.
VPN Bridging-Specific Tuning Options
To improve link performance over a WAN, you can also perform the following tasks for tuning VPN bridge link parameters:
- Set Higher Guaranteed Messaging Window Sizes—To maximize Guaranteed Messaging throughput over a WAN link, it is often necessary to increase the window size for Guaranteed messages to compensate for the long round-trip times over the WAN. The window size indicates how many outstanding Guaranteed messages can be sent over the Message VPN bridge connection to the remote router, before an acknowledgment must be received by the sending router. However, configuring an excessively large message spool window size on low-latency VPN bridge links can negatively impact network performance. Contact Solace for technical support before changing this parameter, as they can assist you in choosing the appropriate value for your network conditions. See Configuring Message Spool Window Sizes for command details.
- Configure Client Egress Queues' Message Bursts Levels—To prevent transport congestion discards in a router, the egress per-client priority G-1 (Guaranteed 1) queue on the Message VPN bridge connection must always be able to accept a burst of messages as large as the Guaranteed Messaging window size. Therefore, duly configure for the router that receives the inbound bridge connection, on the client profile assigned to the client username being used for that inbound bridge connection, the minimum number of messages that must be on the egress G-1 queue before the queue's depth is checked against the maximum depth setting (thereby allowing the queue to absorb a burst of large messages that exceeds the number of allowed work units).
See Configuring Egress Queue Minimum Message Bursts for command details. - Configure Explicit Remote Topic Subscriptions—Any topic subscriptions configured against a bridge link cause published messages matching that topic to be sent over the bridge link, even though there may not be any consumers for the message on the receiving router. Therefore, avoid wide-reaching wildcard subscriptions on bridge connections. Instead, use more explicit subscriptions that attract only the traffic that needs to be transported over the WAN. While this recommendation is good advice for any bridge link, it is especially important for WAN links, where bandwidth is at a premium. See Configuring Remote Subscription Topics for command details. - Configure Maximum TCP Window Sizes—If the TCP maximum window size is set to less than the bandwidth-delay product of the bridge link, then the TCP connection operates below its maximum potential throughput. If the maximum window is set to less than about twice the bandwidth-delay product of the bridge link, then occasional packet loss will cause the TCP connection to operate below its maximum potential throughput as it handles the missing acknowledgments and retransmissions. However, there are also problems with a TCP maximum window size that is set too large, so it is important to set this value appropriately for bridge connections. The ideal setting for the TCP maximum window size is approximately twice the bandwidth-delay product of the bridge link. Therefore, duly configure the TCP maximum window size for the router that receives the inbound bridge connection, on the client profile assigned to the client username being used for that inbound bridge connection. See Configuring TCP Max Window Sizes for command details. Multi-Node Routing-Specific Tuning Options The following configuration practices may be used for tuning neighbor link parameters to improve link performance over a WAN. - Limiting Subscription Exports— To reduce bandwidth usage by the routing protocols, and ensure that messages are never sent to clients who should not be receiving messages from remote publishers, only enable the subscription export policy on those Message VPNs which need network-wide visibility. Leave all other Message VPNs at the default setting of not export subscriptions. Set the subscription export policy for a given Message VPN the same for all routers in the network. See Enabling Subscription Exporting for command details.
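As a back-of-the-envelope aid for the TCP maximum window guidance above (roughly twice the bandwidth-delay product of the bridge link), the following Python sketch computes the bandwidth-delay product from an assumed link speed and round-trip time. The numbers are placeholders; use measurements from your own WAN link.

# Rough sizing aid: bandwidth-delay product (BDP) and a suggested TCP
# maximum window of ~2x BDP, per the guidance above. Inputs are examples.
link_bandwidth_mbps = 100.0   # hypothetical WAN link speed in megabits/s
round_trip_time_ms = 80.0     # hypothetical measured round-trip time

bdp_bytes = (link_bandwidth_mbps * 1_000_000 / 8) * (round_trip_time_ms / 1000)
suggested_max_window_bytes = 2 * bdp_bytes

print(f"Bandwidth-delay product: {bdp_bytes / 1024:.0f} KiB")
print(f"Suggested TCP maximum window (~2x BDP): {suggested_max_window_bytes / 1024:.0f} KiB")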
http://docs.solace.com/Configuring-and-Managing-Routers/Tuning-Link-Performance-for-WANs.htm
2017-06-22T18:28:24
CC-MAIN-2017-26
1498128319688.9
[]
docs.solace.com
Message Spooling As a Solace router receives Guaranteed messages (that is, messages with persistent or non-persistent delivery modes), it processes that message to determine if there are any registered topic subscriptions or queues that match the destination the message was published to. If there is a topic subscription match or a matching queue on the router, the message and all the matches are spooled and then the router acknowledges receipt of the message. After acknowledging the message, the router attempts to deliver it to all the matching clients/routers. As each client/router acknowledges receipt of the message, the associated match is deleted from the match list of the message. Once there are no matches left associated with a message, the message itself is deleted from the spool (that is, it is discarded). If one or more of the clients are offline or have fallen behind, the message is held in the ADB until the message can be delivered. If there are too many messages to hold in the ADB’s memory, messages are written to the disk in large blocks. This means only messages for slow/offline clients need to be written to the disk and they can be written in an efficient manner. Note: If all of the clients acknowledge the message quickly enough, the message does not need to be written to disk. It is possible for a message to not be spooled because of resource or operating limitations. A variety of checks are performed before a message is spooled. These include: - Would spooling the message exceed the router-wide message spool quota? - Would spooling the message exceed the Message VPN’s message spool quota? - Would spooling the message exceed the endpoint’s message spool quota? - Would spooling the message exceed the endpoint’s maximum permitted message size? - Would the message exceed the endpoint’s maximum message size? - Is the destination endpoint shutdown? - If the message is a low-priority message, would spooling the message exceed the endpoint’s reject low‑priority message limit? Depending on the reason for why the message was not spooled to the endpoint, either no acknowledgment is returned to the publisher or a negative acknowledgment (that is, a ‘nack’) is returned, and it is up to the publisher to handle these possibilities. A statistic is then incremented on the router. Note: - If a subscription is deleted when a message is spooled for that subscription, the message will still be delivered. - If an endpoint is deleted (for example, through the Solace CLI) while a message is spooled to that endpoint, the message will not be delivered. Spool Files Guaranteed messages are spooled to a Solace router through the use of spool files that can each hold approximately 8 MB worth of messages. A router can support one million or more spool files—the maximum number of message spool files available depends on the Solace router model. If the router’s spool file usage reaches this limit, it cannot receive any more messages until some spooled messages are acknowledged, which could free some spool files. If a router reaches its maximum spool file usage, negative acknowledgments (that is, ‘nacks’) are returned to all publishing clients. However, spool file thresholds can be configured so that events are generated when the system-level message spool usage gets too high but is not exceeded, or when it gets abnormally low. Refer to System-Level Message Spool Configuration. By default, an event is generated when more than 80% or less than 60% of spool files are in use. 
Windowed Acknowledgment A windowed acknowledgment mechanism is used at the transport level between the router and individual clients publishing and receiving Guaranteed messages. A windowed acknowledgment prevents the round-trip acknowledgment time from becoming the limiting factor for message throughput. It does this by allowing a configurable number of messages to be in transit between a Solace router and a publishing or subscribing client before an acknowledgment is required. The window size can be configured through a Solace messaging API flow property. Solace APIs also batch acknowledgments from the application to the router. The figure below shows a client application sending multiple acknowledgments for Guaranteed messages, which the API consolidate into a single acknowledgment on the wire that is returned to the router. The size of the batch is configured through an API flow property. Note: With any windowed acknowledgment scheme there is the possibility of failure in the time between a message being received by the client, and the time at which the router receives the acknowledgment. A failure at this time requires all non-acknowledged messages in transit to be sent again. Thus, the number of messages redelivered increases and is directly proportional to the combination of window size and acknowledgment threshold. Cumulative ACK and Acknowledgment Thresholds Delivered-But-Unacknowledged Messages There is a hard limit for the number of Guaranteed messages that can be delivered through a consumer flow to a bound client without that client returning an acknowledgment for those messages. On reaching this limit (plus one window size of messages), the Solace router stops delivering messages to the client until the client acknowledges messages that have already been delivered. Note: You can configure the maximum number of delivered‑but‑unacknowledged messages limit for a queue or topic endpoint provisioned on a Message VPN (refer to Message VPN-Level Guaranteed Messaging Configuration). A Solace router has a system-level limit for the maximum number delivered‑but‑unacknowledged messages for all clients at a given time. On reaching this maximum, no more messages are delivered to clients until some clients return acknowledgments back to the router, or they are disconnected. The maximum number delivered‑but‑unacknowledged messages supported is dependent on the Solace router model. By default, an event is generated when the number of outstanding messages that have not been acknowledged by their receiving clients reaches 80 percent of the system maximum (the set value), and this is followed by a further event the number of client-unacknowledged messages returns below 60 percent of the system maximum (the clear value). The thresholds for when these events are generated are configurable (refer to Delivered Unacked Thresholds). Message Expiry To set a limit on the amount of time that published messages have to be consumed and an acknowledgment of receipt is not returned to the router, you can assign Time‑to-Live (TTL) expiry values to the messages. - If your application is using the Solace messaging APIs or REST service to published Guaranteed messages, you can set a Time‑to-Live (TTL) expiry value on each published Guaranteed message to indicate how long the message should be considered valid. A publisher TTL expiration starts when a message is published and counts down as the message passes through the network. 
- You can configure a Max Time to Live (TTL) value for a durable endpoint so that received messages are provided with expiration value to limit how long they can remain on that durable endpoint when a Max TTL is used. The Max TTL only applies when a message is on the endpoint. When TTL values are applied to messages in either or both of these ways, messages that are not consumed and acknowledged before their expiration times are reached are discarded or moved to a Dead Message Queue (DMQ) . If a message has both a publisher-assigned TTL and an endpoint-assigned Max TTL, the router will use the minimum of the two TTL values when the message is on the endpoint. Note: If a Message VPN bridge is used so that published messages that match topic subscriptions can be delivered from a remote Message VPN on one router to a local Message VPN on another router, the amount of time the message spends on each router is counted. That is, the amount of time a message spends on the remote router is counted, and its remaining time to live is updated when it is sent to the local Message VPN. For example, with a publisher-provided TTL of eight seconds, if a message spends two seconds on the first router, before it reaches the local Message VPN on the second router, it will have a TTL of six seconds. Using TTLs to expire messages that have not been processed quickly can help prevent stale messages from being delivered to consumers. However, it should be noted that monitoring and processing messages using TTLs can affect the system-level limits for Guaranteed message delivery (for more information, refer to Guaranteed Message Queuing Limits).
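The two TTL rules above (remaining TTL decreases with the time a message spends on each router, and the effective TTL on an endpoint is the minimum of the publisher TTL and the endpoint Max TTL) can be illustrated with a small worked example. This is only an illustration of the arithmetic described in the text, not Solace API code, and the endpoint Max TTL value is a made-up example.

# Worked illustration of the TTL rules described above (all values in seconds).
publisher_ttl = 8.0          # TTL assigned by the publisher (as in the bridge example)
time_on_remote_router = 2.0  # time spent on the first (remote) router

remaining_ttl = max(publisher_ttl - time_on_remote_router, 0.0)
print(f"Remaining TTL when the message reaches the local Message VPN: {remaining_ttl} s")

endpoint_max_ttl = 5.0       # hypothetical Max TTL configured on the endpoint
effective_ttl = min(remaining_ttl, endpoint_max_ttl)
print(f"Effective TTL while the message sits on the endpoint: {effective_ttl} s")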
http://docs.solace.com/Features/Message-Spooling.htm
2017-06-22T18:24:15
CC-MAIN-2017-26
1498128319688.9
[array(['images/window_ack.png', None], dtype=object)]
docs.solace.com
How to reach out to a developer
Once you've found a developer you like, request an interview with them by filling in the "Interview Request" panel next to their profile page. This is how all communication on OfferZen starts. Interview requests include the job role you're looking to fill, a salary amount, whether equity is on the table and lastly, a personal message.
Adding a job
If this is your first interview request, you'll need to create a job. Simply add a job title, location and job description. To make your life easier, for future interview requests you'll be able to select the job you added from the drop-down menu. Keep in mind, the jobs you add on OfferZen are only visible to developers when you send them an interview request; OfferZen is not a job board!
What's this salary amount in the interview request?
After adding the job, you'll need to add a salary amount to the interview request. The salary in an interview request is a non-binding amount that simply ensures both parties are speaking in the same ballpark and not wasting each other's time. The final offer amount is still negotiable between you and the developer.
Preferred salary
You'll see that each developer has a preferred salary on their profile. We help developers choose this number, and base it on what they are currently earning as well as the market-related salaries for their skill set.
Competing interview requests
You'll be able to view the salary amounts of competing interview requests. If a developer has competing interview requests, they will be displayed under the salary amount in the interview requests panel. The amounts here indicate the salary level at which other companies are willing to speak to a dev.
Interview request message
We find what works well is if you introduce yourself, indicate you've read the dev's profile and that you are interested in speaking to him or her. Now that you're done, click Send Interview Request.
I've sent the interview request, now what?
Once you send the interview request the developer will receive an email, and should respond to you within 48 hours. Don't be afraid to send a couple of interview requests, seeing that not every interview request will result in a hire. To find out what happens once a developer responds to your interview request, check out this guide.
http://docs.offerzen.com/company-guides/sending-interview-requests
2017-06-22T18:26:35
CC-MAIN-2017-26
1498128319688.9
[]
docs.offerzen.com
Managing Guaranteed Messaging The Guaranteed Messaging facility can be used to ensure that the delivery of a message between two applications is guaranteed even in cases where the receiving application is off line, or there is a failure of a piece of network equipment. Once a Solace router has acknowledged receiving a Guaranteed message from a publisher, it is committed to delivering that message. Guaranteed Messaging also ensures that published messages can be reliably delivered to each matching client once and only once, and that messages are delivered in the order they are published. Note: To support Guaranteed Messaging, a Solace router must have Guaranteed Messaging and message spooling enabled. By default, these are not enabled for physical router (that is, an appliance), but they are enabled for a virtual messaging routing (VMR). An appliance must also have an Assured Delivery Blade (ADB) and a Host Bus Adapter (HBA) installed. Feature Interoperability Limitations Observe the following limitations: - Topic endpoint subscriptions follow the “deliver always” paradigm in the Deliver-To-One messaging feature. - With the exception of message spool-specific details, the show User EXEC commands do not make a distinction between durable and non-durable destinations. That is, the same show commands and options exists for both durable and non-durable destinations. - Guaranteed temporary destinations and their content survive a redundancy switch provided that the bind from the client occurs within the switchover time. - Guaranteed messages are not routed between Solace routers. The multi‑node routing feature is for use with Direct messages only. - The Solace JavaScript API does not support Guaranteed Messaging. Functional Parameters to Consider When Provisioning Endpoints Functional parameters to consider when provisioning queues and topic endpoints on Solace routers include: - technology used by connected client. For example: - Solace enterprise Messaging API - Solace Web messaging API - non-Solace technology: OpenMAMA API, REST messaging, MQTT - endpoint durability: durable or non-durable - message delivery type: Guaranteed or Direct - message type: persistent, non-persistent, or Direct The following table lists the supported queue and topic endpoint functionality. When created, the associated client durability and message delivery attributes for queues and topic endpoints are assigned.
http://docs.solace.com/Configuring-and-Managing-Routers/Managing-Guaranteed-Messaging.htm
2017-06-22T18:29:48
CC-MAIN-2017-26
1498128319688.9
[]
docs.solace.com
Theme and Plugin Installation

Go to Themes. Click on “Add New” and then on “Upload theme“. Click on “Browse” to select the theme ZIP and then “Install Now“.

Attention: The installable ZIP may be included in the main archive downloaded from Envato, if you have selected to download the complete package.

After this, you should be redirected back to the Themes tab and see the following notice. Next, click on “Begin installing plugins”.

Select all of the plugins, select “Install” from the dropdown and then click on “Apply“. WordPress will start installing the required plugins. You should see something like this:

Now the theme and plugin(s) have been installed, so you can start using the theme. If this is the first time you have installed WooCommerce on this website, you will also have to run the WooCommerce Setup Wizard, which is a must:

Now you will have to register an Envato API account and set up the API credentials.
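If you manage the site from a server shell and have WP-CLI available, the same installation can be scripted. This is only an optional sketch, not part of the official steps above; the ZIP path is a placeholder that you need to replace with the location of your downloaded theme package.

wp theme install /path/to/theme.zip --activate
wp plugin install woocommerce --activate

The WooCommerce Setup Wizard and the Envato API credentials still need to be completed from the WordPress dashboard as described above.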
http://docs.aa-team.com/wooenvato/documentation/theme-and-plugin-instalation/
2017-06-22T18:43:53
CC-MAIN-2017-26
1498128319688.9
[array(['http://docs.aa-team.com/wp-content/uploads/2015/09/wooenvatoreqplugins.jpg', 'WooEnvato - Install required plugins3'], dtype=object) ]
docs.aa-team.com
Orchard 1.10.2

Published: 4/25/2017

Orchard 1.10.2 fixes bugs and introduces the following notable changes and features:

- [Feature] Custom Lucene analyzer selection
- [Feature] New workflow activity to remove a role
- [Feature] Strict Transport Security option for SSL
- [Feature] Content Picker localization
- [Feature] Filter widgets by culture
- [Improvement] Vary output cache by cookie
- [Improvement] "Publish later" tasks can be removed
- [Improvement] Default database indexes on Tags and Taxonomies
- [Improvement] Individual form submissions exports
- [Improvement] "Save" is renamed "Save Draft"
- [Improvement] C#5 validation is enforced
- [Improvement] New User.Parameter token
- [Bug] Scheduled task is not deleted when the content is published
- [Bug] Taxonomy sorting
- [Bug] User Media folder names collisions
- [Bug] Blog post permissions
- [Bug] Configured database isolation level is not respected
- [Bug] Gallery search is failing
- [Bug] CacheManager is not thread-safe
- [Bug] MySql failures on Setup
- [Bug] Exporting TextField with empty values doesn't work
- [Bug] Job queue is not processed in batches

The full list of fixed bugs for this release can be found in the project's issue tracker.

How to upgrade from a previous version

You can find migration instructions in the Orchard documentation.
http://docs.orchardproject.net/en/latest/Documentation/Orchard-1-10-2.Release-Notes/
2017-06-22T18:35:09
CC-MAIN-2017-26
1498128319688.9
[]
docs.orchardproject.net
Code Analysis with joern-tools (Work in progress)

This tutorial shows how the command line utilities joern-tools can be used for code analysis on the shell. These tools have been created to enable fast programmatic code analysis, in particular to hunt for bugs and vulnerabilities. Consider them a possible addition to your GUI-based code browsing tools and not so much a replacement. That being said, you may find yourself doing more and more of your code browsing on the shell with these tools.

This tutorial offers both short and concise commands that get a job done as well as more lengthy queries that illustrate the inner workings of the code analysis platform joern. The latter have been provided to enable you to quickly extend joern-tools to suit your specific needs. Note: If you end up writing tools that may be useful to others, please don't hesitate to send a pull request to get them included in joern-tools.

Importing the Code

As an example, we will analyze the VLC media player, a medium-sized code base containing code for both Windows and Linux/BSD. It is assumed that you have successfully installed joern into the directory $JOERN and Neo4J into $NEO4J as described in Installation. To begin, you can download and import the code as follows:

cd $JOERN
mkdir tutorial; cd tutorial
wget <url-of-vlc-2.1.4.tar.xz>
tar xfJ vlc-2.1.4.tar.xz
cd ..
./joern tutorial/vlc-2.1.4/

Next, you need to point Neo4J to the generated data in .joernIndex. You can do this by editing the property org.neo4j.server.database.location in the configuration file $NEO4J/conf/neo4j-server.properties as follows:

# neo4j-server.properties
org.neo4j.server.database.location=$JOERN/.joernIndex/

Finally, please start the database server in a second terminal:

$NEO4J/bin/neo4j console

We will now take a brief look at the way the code base has been stored in the database and then move on to joern-tools.

Exploring Database Contents

The Neo4J REST API

Before we start using joern-tools, let's take a quick look at the way the code base has been stored in the database and how it can be accessed. joern-tools uses the web-based API to Neo4J (the REST API) via the library python-joern, which in turn wraps py2neo. When working with joern-tools, this will typically not be visible to you. However, to get an idea of what happens underneath, point your browser to the REST URL of node 0. This is the reference node, which is the root node of the graph database. Starting from this node, the entire database contents can be accessed using your browser. In particular, you can get an overview of all existing edge types as well as the properties attached to nodes and edges. Of course, in practice, even for custom database queries, you will not want to use your browser to query the database. Instead, you can use the utility joern-lookup as illustrated in the next section.

Inspecting node and edge properties

To send custom queries to the database, you can use the tool joern-lookup. By default, joern-lookup will perform node index lookups (see Fast lookups using the Node Index). For Gremlin queries, the -g flag can be specified. Let's begin by retrieving all nodes directly connected to the root node using a Gremlin query:

echo 'g.v(0).out()' | joern-lookup -g
(1 {"type":"Directory","filepath":"tutorial/vlc-2.1.4"})

If this works, you have successfully injected a Gremlin script into the Neo4J database using the REST API via joern-tools. Congratulations, by the way. As you can see from the output, the reference node has a single child node. This node has two attributes: "type" and "filepath".
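If you prefer the shell over a browser, you can inspect the same node with curl. The URL below assumes Neo4J's defaults (localhost, port 7474) and the /db/data REST layout used by Neo4J 2.x; adjust it to match your installation.

# Fetch the reference node (node 0) as JSON and pretty-print it
curl -s http://localhost:7474/db/data/node/0 | python -m json.tool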
In the joern database, each node has a "type" attribute, in this case "Directory". Directory nodes in particular have a second attribute, "filepath", which stores the complete path to the directory represented by this node. Let's see where we can get by expanding outgoing edges:

# Syntax
# .outE(): outgoing edges
echo 'g.v(0).out().outE()' | joern-lookup -g | sort | uniq -c
     14 IS_PARENT_DIR_OF

This shows that, while the directory node only contains its path in the filepath attribute, it is connected to its sub-directories by edges of type IS_PARENT_DIR_OF, and thus its position in the directory hierarchy is encoded in the graph structure.

Filtering. Starting from a directory node, we can recursively enumerate all files it contains and filter them by name. For example, the following query returns all files in the directory 'demux':

# Syntax
# .filter(closure): allows you to filter incoming objects using the
#   supplied closure, e.g., the anonymous function { it.type == 'File' }.
#   'it' is the incoming pipe, which means you can treat it just like
#   you would treat the return value of out().
# .loop(1){true}{true}: perform the preceding traversal exhaustively
#   and emit each node visited
echo 'g.v(0).out("IS_PARENT_DIR_OF").loop(1){true}{true}.filter{ it.filepath.contains("/demux/") }' | joern-lookup -g

File nodes are linked to all definitions they contain, i.e., type, variable and function definitions. Before we look into functions, let's quickly take a look at the node index.

Fast lookups using the Node Index

Before we discuss function definitions, let's quickly take a look at the node index, which you will probably need to make use of in all but the most basic queries. Instead of walking the graph database from its root node, you can look up nodes by their properties. Under the hood, this index is implemented as an Apache Lucene index and thus you can make use of the full Lucene query language to retrieve nodes. Let's see some examples.

echo "type:File AND filepath:*demux*" | joern-lookup -c
echo 'queryNodeIndex("type:File AND filepath:*demux*")' | joern-lookup -g

The advantage of the Gremlin form is that you can keep traversing from the nodes returned by the index lookup, for example to list the names of all functions defined in the matching files:

echo 'queryNodeIndex("type:File AND filepath:*demux*").out().filter{it.type == "Function"}.name' | joern-lookup -g

Plotting Database Content

To enable users to familiarize themselves with the database contents quickly, joern-tools offers utilities to retrieve graphs from the database and visualize them using graphviz.

Retrieve functions by name:

echo 'getFunctionsByName("GetAoutBuffer").id' | joern-lookup -g | joern-location
/home/fabs/targets/vlc-2.1.4/modules/codec/mpeg_audio.c:526:0:19045:19685
/home/fabs/targets/vlc-2.1.4/modules/codec/dts.c:400:0:13847:14459
/home/fabs/targets/vlc-2.1.4/modules/codec/a52.c:381:0:12882:13297

Note the shorthand getFunctionsByName, one of the custom steps provided by python-joern.

Plot abstract syntax tree. Pick one of these functions (here via tail -n 1) and use joern-plot-ast to generate a .dot file of its AST:

echo 'getFunctionsByName("GetAoutBuffer").id' | joern-lookup -g | tail -n 1 | joern-plot-ast > foo.dot
dot -Tsvg foo.dot -o ast.svg; eog ast.svg

Plot control flow graph:

echo 'getFunctionsByName("GetAoutBuffer").id' | joern-lookup -g | tail -n 1 | joern-plot-proggraph -cfg > cfg.dot; dot -Tsvg cfg.dot -o cfg.svg; eog cfg.svg

Show data flow edges:

echo 'getFunctionsByName("GetAoutBuffer").id' | joern-lookup -g | tail -n 1 | joern-plot-proggraph -ddg -cfg > ddgAndCfg.dot; dot -Tsvg ddgAndCfg.dot -o ddgAndCfg.svg; eog ddgAndCfg.svg

Mark nodes of a program slice:

echo 'getFunctionsByName("GetAoutBuffer").id' | joern-lookup -g | tail -n 1 | joern-plot-proggraph -ddg -cfg | joern-plot-slice 1856423 'p_buf' > slice.dot; dot -Tsvg slice.dot -o slice.svg

Note: You may need to exchange the id 1856423 for one that exists in your own database.

Selecting Functions by Name

Lookup functions by name:

echo 'type:Function AND name:main' | joern-lookup

Use wildcards:

echo 'type:Function AND name:*write*' | joern-lookup

Output all fields:

echo 'type:Function AND name:*write*' | joern-lookup -c

Output specific fields:

echo 'type:Function AND name:*write*' | joern-lookup -a name

Shorthand to list all functions:

joern-list-funcs

Shorthand to list all functions matching a pattern:

joern-list-funcs -p '*write*'

List signatures:

echo "getFunctionASTsByName('*write*').code" | joern-lookup -g

Lookup by Function Content

Lookup functions by parameters:

echo "queryNodeIndex('type:Parameter AND code:*len*').functions().id" | joern-lookup -g

Shorthand:

echo "getFunctionsByParameter('*len*').id" | joern-lookup -g

From function ids to locations (joern-location):

echo "getFunctionsByParameter('*len*').id" | joern-lookup -g | joern-location

Dumping code to text files:

echo "getFunctionsByParameter('*len*').id" | joern-lookup -g | joern-location | joern-code > dump.c

Zapping through locations in an editor:

echo "getFunctionsByParameter('*len*').id" | joern-lookup -g | joern-location | tail -n 2 | joern-editor

You need to be in the directory where the code was imported, or import using full paths.

Lookup functions by callees:

echo "getCallsTo('memcpy').functions().id" | joern-lookup -g

You can also use wildcards here. Of course, joern-location, joern-code and joern-editor can be used on function ids again to view the code.

List call expressions:

echo "getCallsTo('memcpy').code" | joern-lookup -g

List arguments:

echo "getCallsTo('memcpy').ithArguments('2').code" | joern-lookup -g
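Putting these building blocks together already allows for simple shell-based bug hunting. The pipelines below are only a sketch that recombines commands shown above (getCallsTo, functions(), joern-location, joern-code, joern-editor); the 'memcpy' pattern and the output file name are arbitrary examples.

# Dump the source of every function calling memcpy into a single file for review
echo "getCallsTo('memcpy').functions().id" | joern-lookup -g | joern-location | joern-code > memcpy_callers.c

# Or open the last three hits directly in your editor
echo "getCallsTo('memcpy').functions().id" | joern-lookup -g | joern-location | tail -n 3 | joern-editor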
http://joern.readthedocs.io/en/latest/tutorials/unixStyleCodeAnalysis.html
2018-03-17T14:44:57
CC-MAIN-2018-13
1521257645177.12
[]
joern.readthedocs.io
Using Data From nfldb

NFLWin comes with robust support for querying data from nfldb, a package designed to facilitate downloading and accessing play-by-play data. There are functions to query the nfldb database in nflwin.utilities, and nflwin.model.WPModel has keyword arguments that allow you to directly use nfldb data to fit and validate a WP model. Using nfldb is totally optional: a default model is already fit and ready to use, and NFLWin is fully compatible with any source for play-by-play data. However, nfldb is one of the few free sources of up-to-date NFL data, and so it may be a useful resource to have.

Installing nfldb

nfldb is pip-installable, and can be installed as an extra dependency (pip install nflwin[nfldb]). Without setting up the nfldb Postgres database first, however, the pip install will succeed but nfldb will be unusable. What's more, trying to set up the database after installing nfldb may fail as well. The nfldb wiki has fairly decent installation instructions, but I know that when I went through the installation process I had to interpret and adjust several steps. I'd at least recommend reading through the wiki first, but in case it's useful I've listed the steps I followed below (for reference I was on Mac OS 10.10).

Installing Postgres

I had an old install kicking around, so I first had to clean that up. Since I was using Homebrew:

$ brew uninstall --force postgresql
$ rm -rf /usr/local/var/postgres/ # where I'd installed the prior DB

Then install a fresh version:

$ brew update
$ brew install postgresql

Start Postgres and Create a Default DB

You can choose to run Postgres at startup, but I don't use it that often so I choose not to do those steps - I just run it in the foreground with this command:

$ postgres -D /usr/local/var/postgres

Or in the background with this command:

$ pg_ctl -D /usr/local/var/postgres -l logfile start

If you don't create a default database based on your username, launching psql will fail with a psql: FATAL: database "USERNAME" does not exist error:

$ createdb `whoami`

Check that the install and configuration went well by launching psql as your default user:

$ psql
psql (9.5.2)
Type "help" for help.
USERNAME=#

Next, add a password:

USERNAME=# ALTER ROLE "USERNAME" WITH ENCRYPTED PASSWORD 'choose a superuser password';
USERNAME=# \q

Edit the pg_hba.conf file found in your database directory (in my case the file was /usr/local/var/postgres/pg_hba.conf), and change all instances of trust to md5.

Create nfldb Postgres User and Database

Start by making a user:

$ createuser -U USERNAME -E -P nfldb

where you replace USERNAME with your actual username. Make up a new password. Then make the nfldb database:

$ createdb -U USERNAME -O nfldb nfldb

You'll need to enter the password for the USERNAME account. Next, add the fuzzy string matching extension:

$ psql -U USERNAME -c 'CREATE EXTENSION fuzzystrmatch;' nfldb

You should now be able to connect the nfldb user to the nfldb database:

$ psql -U nfldb nfldb

From this point you should be able to follow along with the instructions from nfldb.

Using nfldb

Once nfldb is properly installed, you can use it with NFLWin in a couple of different ways.

Querying Data

nfldb comes with a robust set of options to query its database, but they tend to be designed more for ad hoc querying of small amounts of data or computing aggregate statistics. It's possible to use built-in nfldb queries to get the data NFLWin needs, but it's slow.
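Before moving on, it's worth confirming that the database actually contains play-by-play data. The commands below are a sketch based on my understanding of the nfldb tooling: nfldb ships an nfldb-update script for downloading and loading game data, and its schema includes a play table. Double-check both against the nfldb wiki for your version.

$ nfldb-update
$ psql -U nfldb nfldb -c 'SELECT COUNT(*) FROM play;'

With data in place, the built-in nfldb query interface works fine for small lookups, but as noted above it is slow for pulling whole seasons.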
So NFLWin has built in support for bulk queries of nfldb in the nflwin.utilities module:

>>> from nflwin import utilities
>>> data = utilities.get_nfldb_play_data(season_years=[2010],
...                                      season_types=["Regular", "Postseason"])
>>> data.head()
      gsis_id  drive_id  play_id offense_team  yardline  down  yards_to_go  \
0  2010090900         1       35          MIN     -20.0     0            0
1  2010090900         1       57           NO     -27.0     1           10
2  2010090900         1       81           NO       1.0     1           10
3  2010090900         1      109           NO      13.0     1           10
4  2010090900         1      135           NO      13.0     2           10

  home_team away_team offense_won quarter  seconds_elapsed  curr_home_score  \
0        NO       MIN       False      Q1              0.0                0
1        NO       MIN        True      Q1              4.0                0
2        NO       MIN        True      Q1             39.0                0
3        NO       MIN        True      Q1             79.0                0
4        NO       MIN        True      Q1             84.0                0

   curr_away_score
0                0
1                0
2                0
3                0
4                0

You can see the docstring for more details, but basically get_nfldb_play_data queries the nfldb database directly for columns relevant to estimating WP, does some simple parsing/preprocessing to get them in the right format, then returns them as a dataframe. Keyword arguments control what parts of seasons are queried.

Integration with WPModel

While you can train NFLWin's win probability model (nflwin.model.WPModel) with whatever data you want, it comes with keyword arguments that allow you to query nfldb directly. For instance, to train the default model on the 2009 and 2010 regular seasons from nfldb, you'd enter the following:

>>> from nflwin.model import WPModel
>>> model = WPModel()
>>> model.create_default_pipeline()
Pipeline(...)
>>> model.train_model(source_data="nfldb",
...                   training_seasons=[2009, 2010],
...                   training_season_types=["Regular"])
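If you're not sure which seasons your local nfldb install actually contains before choosing training_seasons, a quick query against the database will tell you. The table and column names (game, season_year) reflect the nfldb schema as I understand it, so verify them if the query errors out.

$ psql -U nfldb nfldb -c "SELECT DISTINCT season_year FROM game ORDER BY season_year;"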
http://nflwin.readthedocs.io/en/stable/nfldb.html
2018-03-17T14:31:19
CC-MAIN-2018-13
1521257645177.12
[]
nflwin.readthedocs.io