Selected Dates
The DateTimePicker allows you to render a pre-selected date and also define the minimum and maximum dates it displays.
For a complete example on how to select ranges by using the DateTimePicker, refer to the demo on range selection.
Telerik UI for ASP.NET Core includes the DateRangePicker component that can be used for selecting date ranges.
The following example demonstrates how to render a DateTimePicker with an initially selected date and defined min and max dates. The DateTimePicker sets the value only if the entered date is within the defined range and is valid.
@(Html.Kendo().DateTimePicker()
    .Name("dateTimePicker")
    .Value(DateTime.Now)
    .Min(new DateTime(1950, 1, 1, 10, 0, 0))
    .Max(new DateTime(2050, 1, 1, 20, 0, 0))
)
Contributor guide
This guide aims to help anyone who wishes to contribute to the community.hashi_vault collection.
Note
This guide can be improved with your help! Open a GitHub issue in the repository or contribute directly by following the instructions below.
Quick start
Log into your GitHub account.
Fork the ansible-collections/community.hashi_vault repository by clicking the Fork button in the upper right corner. This will create a fork in your own account.
Clone the repository locally, following the example instructions here (but replace general with hashi_vault). Pay special attention to the local path structure of the cloned repository as described in those instructions (for example ansible_collections/community/hashi_vault).
As mentioned on that page, commit your changes to a branch, push them to your fork, and create a pull request (GitHub will automatically prompt you to do so when you look at your repository).
See the guidance on Changelogs and include a changelog fragment if appropriate.
Contributing documentation
Additions to the collection documentation are very welcome! We have three primary types of documentation, each with their own syntax and rules.
README and other markdown files
Markdown files (those with the extension .md) can be found in several directories within the repository. These files are primarily aimed at developers and those browsing the repository, to explain or give context to the other files nearby.
The main exception to the above is the README.md in the repository root. This file is more important because it provides introductory information and links for anyone browsing the repository, both on GitHub and on the collection's Ansible Galaxy page.
Markdown files can be previewed natively in GitHub, so they are easy to validate by reviewers, and there are no specific tests that need to run against them.
Your IDE or code editor may also be able to preview these files. For example Visual Studio Code has built-in markdown preview.
Module and plugin documentation
This type of documentation gets generated from structured YAML, inside of a Python string. It is included in the same code that it’s documenting, or in a separate Python file, such as a doc fragment. Please see the module format and documentation guidance for more information.
This type of documentation is highly structured and tested with ansible-test sanity. Full instructions are available on the testing module documentation page.
Additionally, the docsite build on pull requests (or built locally) will include module and plugin documentation as well. See the next section for details.
Collection docsite
The collection docsite is what you are reading now. It is written in reStructuredText (RST) format and published on the ansible_documentation site. This is where we have long-form documentation that doesn’t fit into the other two categories.
If you are considering adding an entirely new document here it may be best to open a GitHub issue first to discuss the idea and how best to organize it.
Refer to the Ansible style guide for all submissions to the collection docsite.
RST files for the docsite are in the docs/docsite/rst/ directory. Some submissions may also require edits to docs/docsite/extra-docs.yml.
When a pull request is submitted which changes the collection’s documentation, a new docsite will be generated and published to a temporary site, and a bot will post a comment on the PR with a link. This will let you see the rendered docs to help with spotting formatting errors.
It’s also possible to build the docs locally, by installing some extra Python requirements and running the build script:
$ pushd docs/preview
$ pip install -r requirements.txt
$ ./build.sh
You can then find the generated HTML in docs/preview/build/html and can open them locally in your browser.
Running tests locally
If you’re making anything more than very small or one-time changes, run the tests locally to avoid having to push a commit for each thing, and waiting for the CI to run tests.
First, review the guidance on testing collections, as it applies to this collection as well.
Integration Tests
Unlike other collections, we require an integration_config.yml file for properly running integration tests, as the tests require external dependencies (like a Vault server) and they need to know where to find those dependencies.
If you have contributed to this collection or to the hashi_vault lookup plugin in the past, you might remember that the integration tests used to download, extract, and run a Vault server during the course of the tests, by default. This legacy mode is no longer available.
Docker Compose localenv
The recommended way to run the tests has Vault and other dependencies running in their own containers, set up via docker-compose, and the integration tests run in their own container separately.
We have a pre-defined “localenv” setup role for this purpose.
Usage
For ease of typing / length of commands, we’ll enter the role directory first:
$ pushd tests/integration/targets/setup_localenv_docker
This localenv has both Ansible collection and Python requirements, so let’s get those out of the way:
$ pip install -r files/requirements/requirements.txt -c files/requirements/constraints.txt
$ ansible-galaxy collection install -r files/requirements/requirements.yml
To set up your docker-compose environment with all the defaults:
$ ./setup.sh
The setup script does the following:
- Template a docker-compose.yml for the project.
- Generate a private key and self-signed certificate for Vault.
- Template a Vault config file.
- Bring down the existing compose project.
- Bring up the compose project as defined by the vars (specified or defaults).
- Template an integration_config.yml file that has all the right settings for integration tests to connect.
- Copy the integration config to the correct location if there isn't already one there (it won't overwrite, in case you had customized changes).
With your containers running, you can now run the tests in docker (after returning to the collection root):
$ popd
$ ansible-test integration --docker default --docker-network hashi_vault_default -v
The --docker-network part is important, because it ensures that the Ansible test container is in the same network as the dependency containers; that way the test container can reach them by their container names. The network name, hashi_vault_default, comes from the default docker-compose project name used by this role (hashi_vault). See the customization section for more information.
Running setup.sh again can be used to re-deploy the containers, or if you prefer you can use the generated files/.output/<project_name>/docker-compose.yml directly with local tools.
If running again, remember to manually copy the contents of the newly generated files/.output/integration_config.yml to the integration root, or delete the file in the root before re-running setup so that it copies the file automatically.
Customization
setup.sh passes any additional params you send it to the ansible-playbook command it calls, so you can customize variables with the standard --extra-vars (or -e) option. There are many advanced scenarios possible, but a few things you might want to override:
- vault_version – can target any version of Vault for which a docker container exists (this is the container's tag), defaults to latest.
- docker_compose – defaults to clean but could be set to up, down, or none:
  - up – similar to running docker-compose up (no op if the project is running as it should)
  - down – similar to docker-compose down (destroys the project)
  - clean – (default) similar to docker-compose down followed by docker-compose up
  - none – does the other tasks, including templating, but does not bring the project up or down. With this option, the community.docker collection is not required.
- vault_crypto_force – by default this is false so if the cert and key exist they won't be regenerated. Setting to true will overwrite them.
- vault_port_http, vault_port_https, proxy_port – all of the ports are exposed to the host, so if you already have any of the default ports in use on your host, you may need to override these.
- vault_container_name, proxy_container_name – these are the names for their respective containers, which will also be the DNS names used within the container network. In case you have the default names in use you may need to override these.
- docker_compose_project_name – unlikely to need to be changed, but it affects the name of the docker network which will be needed for your ansible-test invocation, so it's worth mentioning. For example, if you set this to ansible_hashi_vault then the docker network name will be ansible_hashi_vault_default.
Save & Load
Preferences Save/Load section.
Blend Files
- Save
- Save Prompt
Asks for confirmation before closing or opening a new blend-file if the current file has unsaved changes.
- File Preview Types
Select how blend-file previews are generated. These previews are used both in the File Browser and for previews shown in the operating system's file browser.
- None
Do not generate any blend-file previews.
- Auto
If there is no camera in the 3D Viewport a preview using a screenshot of the active Workspace is generated. If a camera is in the scene, a preview of the viewport from the camera view is used.
- Screenshot
Generate a preview by taking a screenshot of the active Workspace.
- Camera View
Generate a preview of a Workbench render from the camera’s point of view.
- Default To
- Relative Paths
Default value for Relative Paths when loading external files such as images, sounds, and linked libraries. It will be ignored if a path is already set.
- Compress File
Default value for Compress file when saving blend-files.
- Load UI
Default value for Load UI when loading blend-files.
- Text Files
- Tabs as Spaces
Entering Tab in the Text Editor adds the appropriate number of spaces instead of using tab characters.
- Auto Save Temporary Files
Enables Auto Save. Tells Blender to automatically save a backup copy of your work-in-progress files to the Temporary Directory.
Auto Run Python Scripts
Python scripts (including driver expressions) are not executed by default for security reasons.
See also
File Browser
- Defaults
- Filter Files
By activating this, the file region in the File Browser will only show appropriate files (i.e. blend-files when loading a complete Blender setting). The selection of file types may be changed in the file region.
- Show Hidden Files/Data-Blocks
Hide files which start with . in File Browsers and data IDs.
Tip
Data-blocks beginning with a . can be selected by typing in the . characters. When explicitly written, the setting to hide these data-blocks is ignored.
- Show Recent Locations
Hide the Recent panel of the File Browser which displays recently accessed folders.
- Show System Locations
Hide System Bookmarks in the File Browser.
This page displays all groups known to this CloudBees Flow server, including groups defined locally within the server and those groups defined in external repositories such as LDAP.
To edit a local group, click on its name.
To create a new local group, click the New Group link.
Click on an external group to view its contents, but you cannot edit the content through CloudBees Flow. Instead, you must go to the group’s repository directly. You can, however, associate properties with external groups, which can then be used in CloudBees Flow.
In the Filter field, enter a string to be used to filter groups. The filter will automatically apply a trailing '*' to find all groups starting with the entered text. Use * for wildcards (for example, searching for *foo will return all groups that include the string foo).
To configure your existing LDAP and Active Directory account repositories to communicate with CloudBees Flow, click the Administration > Directory Providers tabs. | https://docs.cloudbees.com/docs/cloudbees-cd/9.2/automation-platform/help-groups | 2022-01-16T22:56:41 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.cloudbees.com |
Access items for validation¶
As soon as you are assigned an item to validate, you will receive an email notification. To access the validation form, sign in to ELG, click on My grid and select the My Validations option.
You will be re-directed to your Validation Tasks. There, you will see a list with all the items for which you must perform (or have performed) a validation.
On this page, on the left, there are filters to help you sort out the resources. You can apply as many filters as you like and then clear them by clicking on the button above them.
As you can see, each item occupies a row divided into three columns:
the first one provides some basic information on the item (name, version, submission date),
the second column presents the names of the curator and validators, and
the third column shows its status in the publication lifecycle and the validation status (i.e. whether it has been validated and approved or rejected).
In addition, if the item has been validated, there is a box with the validator notes (for internal purposes only) and, if rejected, the review comments. | https://european-language-grid.readthedocs.io/en/release2/all/4_Validating/accessValidationForms.html | 2022-01-16T21:20:41 | CC-MAIN-2022-05 | 1642320300244.42 | [] | european-language-grid.readthedocs.io |
Performance Degradation
Why monitor metric change?
ML model performance often degrades unexpectedly when models are deployed in real-world domains. It is very important to track the true model performance metrics from real-world data and react in time, to avoid the consequences of poor model performance.
Causes of model performance degradation include:
- Input data changes (various reasons)
- Concept drift

First, choose the predictions you'd like to monitor. You can select as many as you want :-)
Next, choose the metric you'd like to monitor from the following options:
- Missing Count
- Average
- Minimum
- Maximum
- Sum
- Variance
- Standard Deviation
- Mean Squared Error
- Root Mean Squared Error
- Mean Absolute Error
- Precision
- Recall
- F1
- Accuracy
- True Positive Count
- True Negative Count
- False Positive Count
- False Negative Count
Note that the monitor configuration may vary depending on the detection method you choose.
Creating this monitor using the REST API
POST

{
  "name": "Significant change in prediction average value",
  "type": "performance_degradation",
  "scheduling": "0 */4 * * *",
  "configuration": {
    "configuration": {
      "focal": {
        "source": "SERVING",
        "timePeriod": "1d"
      },
      "metric": {
        "type": "avg"
      },
      "actions": [
        {
          "type": "ALERT",
          "schema": "v1",
          "severity": "HIGH"
        }
      ],
      "baseline": {
        "source": "SERVING",
        "timePeriod": "1w",
        "aggregationPeriod": "1d"
      },
      "logicEvaluations": [
        {
          "max": 1.3,
          "name": "RATIO"
        }
      ]
    },
    "identification": {
      "models": {
        "id": "seed-0000-fhpy"
      },
      "segment": {
        "group": null
      },
      "environment": null,
      "predictions": [
        "numeric_percentage"
      ]
    }
  }
}
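For reference, here is one way to send that request from Python. This is a minimal, unofficial sketch: the base URL, endpoint path, and bearer-token auth are assumptions rather than documented values, and the payload is trimmed to the core fields shown above.

import requests

API_URL = "https://app.aporia.com/v1/monitors"  # assumed endpoint
API_TOKEN = "YOUR_API_TOKEN"                    # assumed auth scheme

monitor = {
    "name": "Significant change in prediction average value",
    "type": "performance_degradation",
    "scheduling": "0 */4 * * *",  # run the monitor every 4 hours
    "configuration": {
        "configuration": {
            "focal": {"source": "SERVING", "timePeriod": "1d"},
            "metric": {"type": "avg"},
            "logicEvaluations": [{"max": 1.3, "name": "RATIO"}],
        },
        "identification": {
            "models": {"id": "seed-0000-fhpy"},
            "predictions": ["numeric_percentage"],
        },
    },
}

response = requests.post(
    API_URL,
    json=monitor,
    headers={"Authorization": "Bearer " + API_TOKEN},
)
response.raise_for_status()  # fail loudly on a non-2xx response
print(response.json())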
The functions of the DMS that are available for the user are shown above the File Table with access to the buttons controlled via DMS Security Permissions.
You can control and structure your project files and workflows using folder setup in conjunction with this button access. Also refer to DMS Configurations Guidelines for further details.
Please be aware that Security Group permissions, and thus button access, are controlled on a folder-by-folder basis, and therefore you may be able to do things in one folder and not in another. Refer to your Superuser or DMS Admin for Security Group queries.
Multi Document Select
By selecting multiple files with the Select column, Document Actions can be carried out against multiple files in one go.
The maximum number of documents that can be selected at once is 500.
Document Actions
Add Document
Upload documents into the current folder. Navigate to the required folder and click Add (Security group controlled) to show the Upload Panel.
You can either:
- Drag and drop file(s) to be uploaded into the green box, or
- Select individual files by clicking on the Select button
The maximum file upload to the DMS is 6GB in each upload.
File types are filtered to accept valid files and all files are anti-virus scanned on upload. If a file type you need isn't accepted, please contact Clearbox support with details for inclusion.
Once documents have been added to the Wizard they will get a coloured dot shown in the 'Saved' Column, with the following meanings:
Blue Dots show files that have been recognised as uploaded to the system already. Any Metadata associated with these files will therefore not be editable during this upload stage. The Revision and Document Purpose are editable and need to be saved.
Red Dots show files that have not been recognised on the system already. All Metadata, Revision and Document Purpose fields are editable and need to be saved.
Green Dots show files that have had their Metadata details saved in the Wizard. These files are ready to be Uploaded.
If default configuration is used: The document filename will appear within the filename box and the title box. You can change the owner, author, title, enter a document number, amend the revision date, choose a revision and select a purpose/suitability.
If BS1192 configuration is used: If a file has been named correctly and in accordance to the drop downs added to the system as part of the BS1192 configuration, then the system will recognise the file when uploaded and pre-populate the File Identifier fields. You then have the option to change the owner, author, title, document number, revision date, revision, purpose/suitability.
Select files (far left-hand column) to edit metadata fields in the section below. If multiple files are selected then only the common metadata fields will remain enabled, with uncommon fields disabled. The files then have to be individually selected if you need to amend other fields.
All documents can have Upload Comments added to them in the bottom text entry field. Please note that files that have had comments added are not able to be moved in the DMS.
Once the metadata has been set, click Save and the blue and red dots shown in the 'Saved' column will turn green. If the save is not successful an error message will appear.
The system will only allow one document with the same File Identifier to be uploaded onto the platform. If the same File Identifier is used then it will flag this up to you and provide you with the next number.
Once you are ready to continue, select all the files you wish to Upload and click Upload. You will be prompted to confirm the number of files you are uploading.
Revise
There are two options available to you to revise documents:
Add (Refer Above)
You can use Add to revise a file that already exists in the DMS. Select the folder where the document is located and Add the revised files.
Please note: This relies on you making sure the documents have the same filename. If they do not, then the documents will be added as new entities and therefore duplicates on the system.
Refer to Add Document for details.
Revise Documents
You can also use the Revise button to revise a file or a group of files. Select the file(s) you want to revise and then select the Revise button.
The selected file(s) will be displayed in the pop up. It will show the filename, title, document number, author, revision, owner, purpose/suitability, date, uploaded by, primary file, secondary file and recipient.
Select each file in turn in the top panel and edit the settings in the bottom panel:
- Revision - Of the revised file
- Document Purpose / Suitability - Of the revised file
- Recipient - Notify users via email of the File Revision
- Primary File - Select a new file from your PC to become the next revision of the existing file. Use this if you need to revise a file that doesn't have the same filename as the existing one.
- Secondary Files - Select any secondary files to add to this file revision.
- Comment - Add Revision Comments to the File.
Revisioning Codes
The revision codes used in the DMS containers are as follows:
WIP: Both Step and Point Revisions are used, with the user setting Step Revisions and the System automatically setting Point revisions. When a document is uploaded for the first time within WIP it will be given both a Step and a Point revision. When the document is revised the Point revision will be set automatically with the Step revision set by the User. The Revision can be overridden by the user with the above Revision setting.
E.g., upon first upload of a document the revision will default to either $.1 or P01.1 depending on which configuration is used. When the document is revised it will be given a Step revision set by the user, e.g. A.1 or P02.1, and it will be given a Point revision automatically incremented by the system, e.g. A.2 or P02.2.
Share: Only Step Revisions are used and they are automatically incremented by the system. The Revision can be overridden by the user with the above Revision setting.
Published: The revisioning works the same as it does in Share.
Delete
Allows the user to delete documents in the current folder. Navigate to the required folder, select the required files and click Delete (Security group controlled) to delete the latest revision of each file. Use the Delete function again to delete any further revisions of your files in reverse sequence order. It is not possible to only delete a particular revision of a file without also deleting all prior revisions of it first.
Deleted files will then be removed; however, they remain available for recovery in the Deleted files chart.
E.g. If a file has 3 revisions, using the delete button once will leave the 2nd revision as the primary file. Using delete again, will leave the 1st revision as the primary file. Delete once more will remove all history of that file.
To remove all revisions of a file in a single action, you may prefer to use the Archive function.
Edit Metadata
Edit the Metadata and DMS Codes of selected files.
Access to the Edit Metadata command is controlled by the Metadata Editing role, and in addition access to edit the DMS Codes is controlled by the Metadata Columns role.
Move
Access to the move function is based on a user's folder permissions defined in Security Groups. It allows files to be moved to a different folder within the same tab they're in. E.g., a file in WIP can only be moved to another folder within WIP.
Please note that a file can only be moved if nothing has been done to it in the DMS.
- The file cannot be moved if it has been edited
- The file cannot be moved if it has multiple revisions
- The file cannot be moved if a secondary file has been added
- The file cannot be moved if it is in the document basket
- The file cannot be moved if it has a comment made against it
To move a file(s), select it/them and click move. A pop up will appear with the folder tree expanded to level 1. Select the folder that the document needs to be moved to and click OK.
Distribute
This button can be used to Distribute shared and published documents
For more detail on distributing documents click here.
Download
Files can be downloaded individually or as a .zip file for multiple documents.
If a single file is required, click on the cloud icon.
If you need multiple files, select them using the selection box and then click download. Within the notification tray it will highlight that the files are downloading. Once the files have downloaded, the notification will be updated and you will be able to click on the blue link text to get the files.
Please note: The maximum size file download is 10GB total in a single ZIP file; however, the maximum number of files allowed in a single ZIP is set by the total path and filename lengths.
Archive
Archive an entire File record from a folder and send all it's document revisions to the Archive section of the DMS. Please be sure that this action is required as there is currently no way to restore the information. The archive tab is controlled via Security Permissions, if you need access, please contact your project Superuser.
Add Comms
Create a new Comm with any currently selected DMS files automatically added to the Comm. Any files that are selected will be added after the Comm is Saved and it has reached the Submit stage. Further files can be added before the Comm is finally Submitted.
Link to Folder
Allows the linking of selected files to Dynamic & Static folders.
You can also use the Mouse Right Click menu to link selected files
Dynamic
Link selected files into a Dynamic Folder.
Static
Link selected files into a Static Folder.
Comments
Shows all comments that you have not read that are registered against documents in the current folder. Navigate through multiple pages with the page controls at the bottom of the panel.
Basket
The Basket is a function where the user can add and remove files from many areas of the DMS for use at a later stage. It holds a reference to each of the added files, not a copy of them.
Add
You can add files from any folders in the DMS into the Document Basket. The basket has the following limitations:
- Only the latest revision of a file can be added to the basket.
- The basket has a limit of containing 500 items. However some of the functions in the basket can only operate on 100 items (refer individual commands below).
View
Shows how many documents are currently in the Document Basket. Click to open the Basket.
From here you can then run the following commands on the files in the basket:
- Download - Download the selected files with their original Filename (max 500 files),
- Download by BS1192 - For BS1192 configuration projects, download the selected files with their BS1192 ID as the Filename (max 500 files),
- Distribute - Distribute selected files (max 500 files),
- Edit - Edit Metadata for selected files (max 100 files),
- Dynamic - Link selected files to a Dynamic folder (max 100 files),
- Static - Link selected files to a Static folder (max 100 files),
- Add Comms - Add a new Comm with the currently selected basket documents attached (max 100 files),
- Remove - Remove selected files from the Document Basket
- Empty - Empty the Document Basket
- Save Layout - Save your current Document Basket Layout
- Metadata - Run the Metadata report on the currently selected basket documents (max 500 files).
Reports
Metadata
The Metadata report provides details of all files in the current folder and can be exported in .pdf or Excel format using the controls in the top left corner of the report panel.
The report includes URL links to:
- Download each doc directly,
- Add each doc to your basket.
Signoff
The Signoff report provides details of all document distributions in the project and is exported in XLS format.
User Journal
For users with the DMS Admin role, the User journal provides a log of all activities based on a user selection. In other words you can look and see which documents and actions a user has undertaken in the system.
Please note the page Size limit at the bottom of the screen is also the limit of filtered results with the column filters. Therefore you should ensure your page size is large enough to capture all possible results for your filter selection.
All information displayed can be exported to Excel.
The user log is also available in Docs For Review
Options
Save Layout
Save the current layout of your DMS File Table, including visible columns and column sorting order, and apply it to all folders across the current project. Therefore you can have different page layouts to suit each of your projects.
Notifications
Manage your DMS email notifications for the current project. This provides you an XLS Daily Digest of any selected activity for the folders you set up, with the option to Include Subfolders. Digests are only sent if any of the selected activities are triggered on your monitored folders that day.
You can manage the folder notifications you have for the current folder.
Current Folder
Add a Daily Digest notification for the current folder, with the option to Include Subfolders.
Manage Notifications
Shows the folders you are receiving Daily Digests for. Use the Delete button to the right to remove Notifications for that folder. | https://docs.bimxtra.com/display/CSD/DMS+File+Table+Commands | 2022-01-16T21:54:54 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.bimxtra.com |
Installation checklist for DSP
To inquire about the Splunk Data Stream Processor (DSP), contact your Splunk Sales representative.
Use the following checklist to install DSP with assistance from Splunk support.
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0, 1.2.1-patch02, 1.2.1, 1.2.2-patch02
You can set up a WSO2 Enterprise Integrator cluster with a third-party load balancer as depicted in the diagram below.
The following sections give you information and instructions on how to cluster different profiles of WSO2 EI:
- Clustering the ESB Profile
- Clustering the Business Process Profile
- Clustering the Message Broker Profile
- Clustering the Analytics Profile
Contribution Guide¶
If you have a bugfix or new feature that you would like to contribute to elasticsearch-dsl-py, please find or open an issue about it first. Talk about what you would like to do. It may be that somebody is already working on it, or that there are particular issues that you should know about before implementing the change.
If you want to be rewarded for your contributions, sign up for the Elastic Contributor Program. Each time you make a valid contribution, you’ll earn points that increase your chances of winning prizes and being recognized as a top contributor.
We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code.
The process for contributing to any of the Elasticsearch repositories is similar.
Please make sure you have signed the Contributor License Agreement.
Run the test suite to ensure your changes do not break existing code:
$ nox -rs lint test
Rebase your changes. Update your local repository with the most recent code from the main elasticsearch-dsl-py repository, and rebase your branch on top of the latest master branch. We prefer your changes to be squashed into a single commit.
Submit a pull request. Push your local changes to your forked copy of the repository and submit a pull request. In the pull request, describe what your changes do and mention the number of the issue where discussion has taken place, e.g. "Closes #123". Please consider adding or modifying tests related to your changes.
Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into elasticsearch-dsl-py. | https://elasticsearch-dsl.readthedocs.io/en/latest/CONTRIBUTING.html | 2022-01-16T21:58:39 | CC-MAIN-2022-05 | 1642320300244.42 | [] | elasticsearch-dsl.readthedocs.io |
One of the common errors that Insure++ detects occurs when a program reads or writes beyond the bounds of a valid memory area. This type of problem normally generates a READ_OVERFLOW or WRITE_OVERFLOW error which describes the memory regions being accessed with their addresses and sizes as shown below.
[hello.c:15] **WRITE_OVERFLOW**
>> strcat(str, argv[i]);

Writing overflows memory: <argument 1>

        bbbbbbbbbbbbbbbb
        |      16      | 4 |
        wwwwwwwwwwwwwwwwwwww

Writing  (w) : 0xbfffefb0 thru 0xbfffefc3 (20 bytes)
To block (b) : 0xbfffefb0 thru 0xbfffefbf (16 bytes)
               str, declared at hello.c, 11

Stack trace where the error occurred:
        strcat()  (interface)
        main()    hello.c, 15

**Memory corrupted. Program may crash!!**
Overflow Diagrams
The textual information above describes the memory blocks involved in the overflow operation using their memory addresses and sizes.
To gain a more intuitive understanding of the nature of the problem, a text-based overflow diagram is also shown. This pattern attempts to demonstrate the nature and extent of the problem by representing the memory blocks involved pictorially.
bbbbbbbbbbbbbbbb
|      16      | 2 |
wwwwwwwwwwwwwwwwww
In this case, the row of b characters represents the available memory block, while the row of w's shows the range of memory addresses being written. The block being written is longer than the actual memory block, which causes the error.
The numbers shown indicate the size, in bytes, of the various regions and match those of the textual error message.
The relative length and alignment of the rows of characters is intended to indicate the size and relative positioning of the memory blocks which cause the error. The above case shows both blocks beginning at the same position with the written block extending beyond the end of the memory region. If the region being written extended both before and after the available block, a diagram such as the follow- ing would have been displayed.
      bbbbbbbbbbbbbbbb
| 5 |       16       | 2 |
wwwwwwwwwwwwwwwwwwwwwwww
Completely disjointed memory blocks are indicated with a diagram in the following form:
bbbbbbbbbbbbbbbb
|  4  |   40   |  16  |
                        wwwww
Similar diagrams appear for both READ_OVERFLOW and WRITE_OVERFLOW errors. In the former case, the block being read is represented by a row of r characters instead of w’s. Similarly, the memory regions involved in parameter size mismatch errors are indicated using a row of p characters for the parameter block. See PARM_BAD_RANGE for more information. | https://docs.parasoft.com/pages/viewpage.action?pageId=41319145 | 2022-01-16T22:59:24 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.parasoft.com |
XinaBox SW10¶
This SW10 xChip is a temperature-to-digital converter using an on-chip band gap temperature sensor and Sigma-Delta A-to-D conversion technique with an over-temperature detection output. It is capable of measuring ambient temperature ranging from -55°C to +125°C. It is based on the LM75 manufactured by Texas Instruments.
Please note, SW10 and all other xChips are currently only supported in Zerynth Studio with XinaBox CW02. Review the Quick Start guide for interfacing xChips.
Technical Details¶
LM75¶
- I2C-bus interface with up to 8 devices on the same bus
- Temperature range from -55°C to +125°C
- Frequency range 20 Hz to 400 kHz with bus fault time-out to prevent hanging up the bus
- Programmable temperature threshold and hysteresis set points
- Supply current of 1.0 µA in shutdown mode for power conservation
- Stand-alone operation as thermostat at power-up
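Pending driver-specific documentation, the chip can also be read directly over I2C. The sketch below is a hedged, untested example for Zerynth: 0x48 is the LM75 default address, the register math follows the LM75 datasheet, and I2C0 is a placeholder for your board's bus name, so verify all of these against your hardware.

import streams
import i2c

streams.serial()

sensor = i2c.I2C(I2C0, 0x48)  # assumed bus name; 0x48 is the LM75 default address
sensor.start()

while True:
    # Point at the temperature register (0x00) and read two bytes.
    raw = sensor.write_read(0x00, 2)
    # Assemble the 9-bit two's complement reading (0.5 C per LSB).
    value = (raw[0] << 1) | (raw[1] >> 7)
    if value & 0x100:
        value -= 512
    print("Temperature:", value * 0.5, "C")
    sleep(1000)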
Scheduler
Schedules are used to automatically start your actors at certain times. Each schedule can be associated with a number of actors and actor tasks, and it is also possible to override the settings of each actor (task) in a similar fashion as when invoking the actor (task) using the API.
The schedules use cron expressions to specify the times of the run. The expression has the following structure:

* * * * *
| | | | |
| | | | +----- day of week (0 - 6, Sunday to Saturday)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- minute (0 - 59)
Note that all dates and times in the cron expression are always assumed to be in the UTC time zone. The minimum interval between runs is 10 seconds; if your next run is scheduled sooner than 10 seconds after the previous run, the next run will be skipped.
Examples:
0 8 * * * - every day at 8am
0 0 * * 0 - every 7 days (at 00:00 on Sunday)
*/3 * * * * - every 3rd minute
0 0 1 */2 * - every other month (at 00:00 on the first day of month, every 2nd month)
Additionally, you can use the following shortcut expressions:
@yearly (0 0 1 1 *)
@monthly (0 0 1 * *)
@weekly (0 0 * * 0)
@daily (0 0 * * *)
@hourly (0 * * * *)
You can find more information and examples of cron expressions on crontab.guru. | https://docs.apify.com/scheduler | 2020-03-28T10:57:42 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.apify.com |
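If you want to double-check an expression before saving a schedule, a small offline preview is easy to script. The sketch below uses Python with the third-party croniter package; it is just a local convenience, not part of any Apify tooling.

from datetime import datetime, timezone

from croniter import croniter  # pip install croniter

expression = "0 8 * * *"  # every day at 08:00, in UTC like the scheduler

# Iterate from "now" in UTC, since schedule times are evaluated in UTC.
it = croniter(expression, datetime.now(timezone.utc))

# Print the next three run times the expression would produce.
for _ in range(3):
    print(it.get_next(datetime))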
Search scopes enable users and designers to limit the scope of any search.
Use the Search Scope settings here to control how scopes will be used by the current SharePoint Search Result Web Part. You have the following options:
• Ignore search scope: The web part will always return results from the entire site, regardless of user selections or incoming URL values.
• Use search scope from URL: The web part will use whatever scope was specified in the incoming URL (which also includes the query). This is the standard scope functionality. Typically, the incoming scope will be the one specified by the user using the scope control of a search form.
• Use fixed search scope: The web part will always return results from a specific fixed scope, regardless of user selections or incoming URL values. If you choose this option, then you must also specify the fixed scope to use by making a selection from the Fixed search scope drop-down list. Note that this list includes all of the scopes defined for your SharePoint installation—you do not need to configure them especially for Search & Preview, as you do if you want to include a scope selector in your Search Box or search form.
Removing Locksmith
If you've been using Locksmith to protect your shop, uninstallation means two things:
- Removing Locksmith from your theme
- Removing Locksmith from your Shopify apps list
The order is important! Locksmith can't clean up its code from your theme after you remove it from your apps list, so it's critical to have Locksmith remove itself from your theme before removing the app from your list.
1. Removing Locksmith from your theme
Open the app from within Shopify, or log in at uselocksmith.com if you've previously removed it from your apps list.
Next, click on the "Help" button in the upper right corner.
Next, click the red "Remove Locksmith" button.
After clicking this, you'll see Locksmith letting you know that it's syncing your settings with Shopify. Wait for this to complete before continuing.
2. Removing Locksmith from your Shopify apps list
Navigate to the apps section of your Shopify admin area, and use the removal button next to Locksmith.
You'll be prompted for some optional feedback. Fill this out if you like, then click the red "Uninstall" button. Done!
| https://docs.uselocksmith.com/article/221-removing-locksmith | 2020-03-28T11:07:49 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ddd799f2c7d3a7e9ae472fc/images/5e27859504286364bc9436ee/5e27859598fae.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ddd799f2c7d3a7e9ae472fc/images/5e2785962c7d3a7e9ae68e56/5e278595ded16.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ddd799f2c7d3a7e9ae472fc/images/5e2785962c7d3a7e9ae68e57/5e2785962a482.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ddd799f2c7d3a7e9ae472fc/images/5e27859604286364bc9436ef/5e278596626a5.png',
None], dtype=object) ] | docs.uselocksmith.com |
The wolkenkit SDK supports Google Chrome 67.0+, Firefox 60.0+, Safari 11.1+, Microsoft Edge 17.0+, Internet Explorer 11.0, and Node.js 10.13.0+. Other platforms may work as well; they have just not been tested.
Installing the SDK
To install the wolkenkit SDK, use npm:
npm install wolkenkit-client@3.0.0
Polyfill old browsers
Please note that for Internet Explorer 11, you additionally need to install the module @babel/polyfill to make things work. For details on how to integrate this polyfill into your application, see its documentation.
Using the SDK
To use the SDK, call the require function to load the wolkenkit-client module:
const wolkenkit = require('wolkenkit-client');
In the browser
While Node.js supports the require function out of the box, you have to use a bundler such as webpack if you want to use the wolkenkit SDK inside an application that runs in the browser. For a simple example of how to set this up see the wolkenkit-client-template-spa-vanilla-js repository.
Connecting to an application
To connect to a wolkenkit application call the wolkenkit.connect function and provide the hostname of the server you want to connect to. Since this is an asynchronous function, you have to call it using the await keyword:
const app = await wolkenkit.connect({ host: 'local.wolkenkit.io' });
Setting the port
By default, the port 443 is being used. To change this, provide the port property as well:
const app = await wolkenkit.connect({ host: 'local.wolkenkit.io', port: 3000 });
Setting the protocol
There are two protocols that the wolkenkit SDK can use to connect to the wolkenkit application:
- wss (default in the browser)
- https (default on Node.js)
const app = await wolkenkit.connect({ host: 'local.wolkenkit.io', protocol: 'wss' });
G350 Module¶
This module implements the Zerynth driver for the Ublox G350 (or U260) gsm/gprs chip (System Integration Manual).
The driver must be used together with the standard library GSM Module.
The following functionalities are implemented:
- attach/detach from gprs network
- retrieve and set available operators
- retrieve signal strength
- retrieve network and device info
- socket abstraction (and secure socket if available on the model). Listening sockets for TCP and UDP protocols are not implemented due to the nature of GSM networks.
The communication with G350 is performed via UART without hardware flow control.
This module provides the g350Exception to signal errors related to the hardware initialization and management.
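A hedged usage sketch is shown below. It has not been run on hardware: the init() pin arguments are board-specific placeholders, the APN and IP address are dummy values, and the network_info() helper name is inferred from the feature list above, so check the official module reference before relying on any of it.

import streams
import socket
from wireless import gsm
from ublox.g350 import g350

streams.serial()

# Assumed arguments: UART plus control pins; these vary per board.
g350.init(SERIAL1, D16, D17, D5, D4)

# Attach to the GPRS network using your carrier's APN.
gsm.attach("internet.example")
print("Network:", gsm.network_info())

# Only client sockets are available (no listening sockets on GSM networks).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("93.184.216.34", 80))  # placeholder IP; DNS helpers vary by stack
sock.send("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(128))
sock.close()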
PHP modules list
From Wiki
Question
How to get a complete list of all PHP modules for the different PHP versions with Hive installed on the server?
Answer
To get a list of the PHP modules within HIVE, you just need to run a couple of commands. First, you have to log in to the chrooted environment:
root@host:~# chroot /var/suexec/baseos/
Then you have to run the following command to get the PHP modules for the respective PHP version:
bash-3.2# php -m
PHP can be one of "php php4 php5 php51 php52 php52s php53 php6" (for example, running php52 -m lists the modules for PHP 5.2).
Running php alone will list the modules for the default PHP version.
This documentation is for an older version of the software. If you are using the current version of Cumulus Linux, this content may not be up to date. The current version of the documentation is available here. If you are redirected to the main page of the user guide, then this page may have been renamed; please search for it there. | https://docs.cumulusnetworks.com/cumulus-linux-321/System-Configuration/ | 2020-03-28T11:49:39 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.cumulusnetworks.com |
The megamenu now looks advanced and neat. What if we want the megamenu to be full width ? It is easy.
- Go back to Appearance > Menus.
- Click on the caret on the top-right of menu item that contains the megamenu subitem to expand it.
- In the CSS Classes text box, enter yamm-fw. This will make the subitem extend to full width of the container.
- If the CSS Classes field is not visible, click on Screen Options in the top-right corner of the screen and check CSS Classes under Show advanced menu properties.
- Click on Save Menu.
Megamenu Full-width Output
| https://docs.madrasthemes.com/blog/topics/wordpress-themes/electro/navigation/making-the-megamenu-dropdown-as-full-width/ | 2020-03-28T10:50:55 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['https://docs.madrasthemes.com/wp-content/uploads/2017/10/yamm-fw.png',
None], dtype=object)
array(['https://docs.madrasthemes.com/wp-content/uploads/2017/10/yamm-fw-output.png',
None], dtype=object) ] | docs.madrasthemes.com |
Outsourcing life decisions
Back in June I asked my blog readers to help me decide what to eat one night...I was stuck, not sure whether to go for either Chinese, Indian, Wendy's, KFC or Pizza.
One concerned commenter pointed out that none of the above were particularly healthy.
I settled for Indian as this got the most votes that night, but now Brad Gessler has developed an alternative means of deciding what to eat, called What Shall I Eat (dotcom). My life is almost complete. All I need now is 'What Shall I Wear' to be developed and we're there. | https://docs.microsoft.com/en-us/archive/blogs/alexbarn/outsourcing-life-decisions | 2020-03-28T12:07:31 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.microsoft.com |
Publishing¶
To publish an object means to make it available in the Zope traversal graph and URLs.
A published object may have a reverse-mapping of object to path via getPhysicalPath() and absolute_url(), but this is not always the requirement.
You can publish objects by providing a browser:page view which implements the zope.publisher.interfaces.IPublishTraverse interface.
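As a rough illustration, such a view can consume trailing URL segments itself. The sketch below assumes a Zope/Plone environment; the class name and the files lookup are hypothetical, not taken from the text above.

from Products.Five.browser import BrowserView
from zope.interface import implementer
from zope.publisher.interfaces import IPublishTraverse, NotFound


@implementer(IPublishTraverse)
class FileDownloadView(BrowserView):
    """Handles URLs like .../@@download/<filename>."""

    filename = None

    def publishTraverse(self, request, name):
        # Called for each remaining path segment; consume the segment
        # ourselves instead of letting Zope traverse further.
        if self.filename is None:
            self.filename = name
            return self
        raise NotFound(self, name, request)

    def __call__(self):
        if self.filename is None:
            raise NotFound(self, "", self.request)
        # Hypothetical lookup of the named file on the context object.
        return self.context.files[self.filename]

The view itself would still be registered with a browser:page directive in ZCML, as described above.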
Example publishers¶
A widget to make specified files downloadable: plone.formwidgets.namedfile.widget. | https://docs.plone.org/develop/plone/serving/publishing.html | 2020-03-28T12:47:18 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.plone.org |
MarshalByRefObject Class
Definition
Enables access to objects across application domain boundaries in applications that support remoting.
[System.Runtime.InteropServices.ComVisible(true)]
public abstract class MarshalByRefObject
Examples

Note

The assembly that contains Worker must be loaded into both application domains, but it could load other assemblies that would exist only in the new application domain.

using System;
using System.Reflection;

public class Worker : MarshalByRefObject
{
    public void PrintDomain()
    {
        Console.WriteLine("Object is executing in AppDomain \"{0}\"",
                          AppDomain.CurrentDomain.FriendlyName);
    }
}

class Example
{
    public static void Main()
    {
        // Create an ordinary instance in the current AppDomain
        Worker localWorker = new Worker();
        localWorker.PrintDomain();

        // Create a new application domain, create an instance
        // of Worker in the application domain, and execute code
        // there.
        AppDomain ad = AppDomain.CreateDomain("New domain");
        Worker remoteWorker = (Worker) ad.CreateInstanceAndUnwrap(
            typeof(Worker).Assembly.FullName,
            "Worker");
        remoteWorker.PrintDomain();
    }
}

/* This code produces output similar to the following:

Object is executing in AppDomain "source.exe"
Object is executing in AppDomain "New domain"
*/

The following example registers an instance of a MarshalByRefObject-derived class for remoting with an explicit object URI:

using System;
using System.Runtime.Remoting;
using System.Security.Permissions;

public class SetObjectUriForMarshalTest
{
    class TestClass : MarshalByRefObject
    {
    }

    [SecurityPermission(SecurityAction.Demand, Flags = SecurityPermissionFlag.RemotingConfiguration)]
    public static void Main()
    {
        TestClass obj = new TestClass();

        RemotingServices.SetObjectUriForMarshal(obj, "testUri");
        RemotingServices.Marshal(obj);

        Console.WriteLine(RemotingServices.GetObjectUri(obj));
    }
}
Remarks
When you derive an object from MarshalByRefObject for use across application domain boundaries, you should not override any of its members, nor should you call its methods directly. The runtime recognizes that classes derived from MarshalByRefObject should be marshaled across app domain boundaries. | https://docs.microsoft.com/en-us/dotnet/api/system.marshalbyrefobject?view=netframework-4.7 | 2017-10-17T02:00:48 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.microsoft.com |
You can use the MED-V Workspace Packager to manage certain settings in the MED-V workspace.
To manage settings in a MED-V workspace
To open the MED-V Workspace Packager, click Start, click All Programs, click Microsoft Enterprise Desktop Virtualization, and then click MED-V Workspace Packager.
On the MED-V Workspace Packager main panel, click Manage Settings.
In the Manage Settings window, you can configure the following MED-V workspace settings:
Click Save as… to save the updated configuration settings in the specified folder. MED-V creates a registry file that contains the updated settings. Deploy the updated registry file by using Group Policy. For more information about how to use Group Policy, see Group Policy Software Installation.
MED-V also creates a Windows PowerShell script in the specified folder that you can use to re-create this updated registry file.
Related topics
Managing MED-V Workspace Configuration Settings
Manage MED-V Workspace Settings | https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/medv-v2/managing-med-v-workspace-settings-by-using-the-med-v-workspace-packager | 2017-10-17T03:28:40 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.microsoft.com |
The method is invoked only once for each feed change, no matter how many entries you add or update. The method on your component should expect a Feed object; its entries can then be served as a feed.
Note: In the code example, spring-beans-current.xsd is a placeholder. To locate the correct version, see the Spring Beans versions page.
The <a:provider> creates an Abdera DefaultProvider and allows you to add workspaces and collections to it. This provider reference is used for serving posts; routes take the form:
"feed" or ":feed/:entry"
For reference, see the Ruby On Rails routing.
For example, this filter can be used for content-based routing in Mule:
Configuration Reference
Filters
Entry Last Updated Filter
Filters ATOM entry objects based on their last update date. This is useful for filtering older entries from the feed. This filter works only on Atom Entry objects, not Feed objects.
No Child Elements of <entry-last-updated-filter…>
Feed Last Updated Filter
Filters ATOM feed objects based on their last update date.
Javadoc API Reference
The Javadoc for this module can be found here:
Points of Etiquette When Polling Atom Feeds
Make use of HTTP cache. Send ETag and LastModified headers. Recognize the 304 Not Modified response. This way you can save a lot of bandwidth. Additionally, some scripts recognize the LastModified header and return only partial contents. Where possible, schedule polling for hours or other times when the traffic on your site is low.
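To make the advice concrete, here is an illustrative conditional-GET poller written in Python. It is deliberately generic rather than Mule-specific, and the feed URL is a placeholder.

import requests

FEED_URL = "https://example.org/atom.xml"  # placeholder feed
etag = None
last_modified = None

def poll():
    """Fetch the feed only if it changed since the last poll."""
    global etag, last_modified
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    resp = requests.get(FEED_URL, headers=headers, timeout=30)
    if resp.status_code == 304:
        return None  # nothing changed; almost no bandwidth used
    resp.raise_for_status()
    # Remember the validators the server sent for the next request.
    etag = resp.headers.get("ETag")
    last_modified = resp.headers.get("Last-Modified")
    return resp.text  # changed feed body, ready for processing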
Using the ImageMosaic extension¶
This tutorial will show you how to configure and publish an ImageMosaic store and coverage, followed by some configuration examples.
Configuring a coverage in GeoServer¶
This is a process very similar to creating a featuretype. More specifically, one has to perform the steps highlighted in the sections below:
Create a new store¶
Go to Data Panel ‣ Stores and click Add new Store.
Select ImageMosaic under Raster Data Source:
ImageMosaic in the list of raster data stores
In order to create a new mosaic it is necessary to choose a workspace and store name in the Basic Store Info section, as well as a URL in the Connection Parameters section. Valid URLs include:
- The absolute path to the shapefile index, or a directory containing the shapefile index.
- The absolute path to the configuration file (*.properties) or a directory containing the configuration file. If datastore.properties and indexer.properties exist, they should be in the same directory as this configuration file.
- The absolute path of a directory where the files you want to mosaic reside. In this case GeoServer automatically creates the needed mosaic files (.dbf, .prj, .properties, .shp and .shx) by inspecting the data present in the given directory and any subdirectories.
Click Save:
Configuring an ImageMosaic data store
Create a new coverage¶
Navigate to Data Panel ‣ Layers and click Add a new resource.
Choose the name of the store you just created:
Layer Chooser
Click the layer you wish to configure and you will be presented with the Coverage Editor:
Coverage Editor
Make sure there is a value for Native SRS, then click the Submit button. If the Native CRS is UNKNOWN, you must declare the SRS in the Declared SRS field.
Click Save.
Use the Layer Preview to view the mosaic.
Warning
If the created layer appears to be all black, it may be that GeoServer has not found any acceptable granules in the provided index. It is also possible that the shapefile index is empty (no granules were found) or that the granule paths stored in the index are invalid, in which case the index can be fixed manually with an editor. Alternately, you can delete the index and let GeoServer recreate it from the root directory.
Configuration examples¶
Below are a few examples of mosaic configurations to demonstrate how we can make use of the ImageMosaic parameters.
DEM/Bathymetry¶
Such a mosaic can be used to serve large amounts of data representing altitude or depth and therefore does not specify colors directly (it needs an SLD to generate pictures). In our case, we have a DEM dataset which consists of a set of raw GeoTIFF files.
The first operation is to create the CoverageStore specifying, for example, the path of the shapefile in the URL field.
Inside the Coverage Editor Publishing tab, you can specify the dem default style “nodata” area.
Note
The “nodata” on the sample mosaic is -9999. The default background value is for mosaics is 0.0.
The result is the following:
Basic configuration
By setting the other configuration parameters appropriately, it is possible to improve both the appearance of the mosaic as well as its performance. For instance, we could:
Make the “nodata” areas transparent and coherent with the real data. To achieve this we need to change the opacity of the “nodata” ColorMapEntry in the dem style to 0.0 and set the BackgroundValues parameter to -9999 so that empty areas will be filled with this value. The result is as follows:
Advanced configuration
Allow multithreaded granules loading. By setting the AllowMultiThreading parameter to true, GeoServer will load the granules in parallel using multiple threads with a increase in performance on some architectures.
The configuration parameters are as follows:
Aerial imagery¶
In this example we are going to create a mosaic that will serve aerial imagery, specifically RGB GeoTIFFs. Because this is visual data, in the Coverage Editor you can use the basic raster style,_2<<
Basic configuration
Note
Those ugly black areas are the result:
The result is the following:
Advanced configuration
Scanned maps¶
In this case we want to show how to serve scanned maps (mostly B&W images) via a GeoServer mosaic.
In the Coverage Editor you can use the basic raster since there is no need to use any of the advanced RasterSymbolizer capabilities.
The result is the following.
Basic configuration
This mosaic, formed by two single granules, shows a typical case where the “nodata” collar areas of the granules overlap, as shown in the picture above. In this case we can use the InputTransparentColor parameter to make the collar areas disappear during the superimposition process — in this case, by using an InputTransparentColor of #FFFFFF.
The final configuration parameters are the following:
This is the result:
Advanced configuration
Dynamic imagery¶
A mosaic need not be static. It can contain granules which change, are added or deleted. In this example, we will create a mosaic that changes over time.
- Create a mosaic in the standard way. (The specific configuration isn’t important.)
This mosaic contains 5 granules. Note that InputTransparentColor is set to #FFFFFF here.
To add new granules, the index that was created when the mosaic was originally created needs to be regenerated. There are two ways to do this:
To update an ImageMosaic through the file system:
- Update the contents of the mosaic by copying the new files into place. (Subdirectories are acceptable.)
- Delete the index files. These files are contained in the top level directory containing the mosaic files and include (but are not limited to) the following:
- <mosaic_name>.dbf
- <mosaic_name>.fix
- <mosaic_name>.prj
- <mosaic_name>.properties
- <mosaic_name>.shp
- <mosaic_name>.shx
- (Optional but recommended) Edit the layer definition in GeoServer, making to sure to update the bounding box information (if changed).
- Save the layer. The index will be recreated.
This mosaic contains 9 granules
Note
Please see the REST section for information on Uploading a new image mosaic.
Multi-resolution imagery with reprojection¶
As a general rule, we want to have the highest resolution granules shown “on top”, with the lower-resolution granules filling in the gaps as necessary.
In this example, we will serve up overlapping granules that have varying resolutions. In addition, we will mix resolutions, such that the higher resolution granule is reprojected to match the resolution of the lower resolution granules.
In the Coverage Editor, use the basic raster style.
Create the mosaic in GeoServer.
One important configuration setting is the SORTING parameter of the layer. In order to see the highest resolution imagery on top (the typical case), it must be set to resolution A. (For the case of lowest resolution on top, use resolution D .)
Make any other configuration changes.
Also, in order to allow for multiple CRSs in a single mosaic, an indexer.properties file will need to be created. Use the following
GranuleAcceptors=org.geotools.gce.imagemosaic.acceptors.HeterogeneousCRSAcceptorFactory GranuleHandler=org.geotools.gce.imagemosaic.granulehandler.ReprojectingGranuleHandlerFactory HeterogeneousCRS=true MosaicCRS=EPSG\:4326 PropertyCollectors=CRSExtractorSPI(crs),ResolutionExtractorSPI(resolution) Schema=*the_geom:Polygon,location:String,crs:String,resolution:String
The MosaicCRS properyty is not mandatory, but it’s a good idea to set a predictable target CRS that all granule footprints can be reprojected into, otherwise the mosaic machinery will use the one of the first indexed granule.
Save this file in the root of the mosaic directory (along with the index files). The result is the following:
Closeup of granule overlap (high resolution granule on right)
To remove the reprojection artifact (shown in the above as a black area) edit the layer configuration to set InputTransparentColor to #000000.
Closeup of granule overlap (high resolution granule on right)
Referring to a datastore configured in GeoServer¶
It is possible to make the mosaic refer to an existing data store. The ``datastore.properties`` file in this case will contain only one or two properties, referring to the store to be used via the StoreName property. For simple cases, e.g., a PostGIS store, the following will be sufficient:
StoreName=workspace:storename
For Oracle or H2, it’s best to also specify the SPI in order to inform the mosaic that it needs to work around specific limitations of the storage (e.g., forced uppercase attribute usage, limitation in attribute name length and the like):
StoreName=workspace:storename SPI=org.geotools.data.oracle.OracleNGDataStoreFactory
The above will be sufficient in case the image mosaic can create the index table and perform normal indexing, using the directory name as the table name. In case a specific table name needs to be used, add an ``indexer.properties`` specifying the TypeName property, e.g.:
TypeName=myMosaicTypeName
In case the index “table” already exists instead, then a ``indexer.properties`` file will be required, with the following contents:
UseExistingSchema=true TypeName=nameOfTheFeatureTypeContainingTheIndex AbsolutePath=true
The above assumes location attribute provides absolute paths to the mosaic granules, instead of ones relative to the mosaic configuration files directory. | http://docs.geoserver.org/latest/en/user/data/raster/imagemosaic/tutorial.html | 2017-10-17T02:09:30 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['../../../_images/vito_config_1.png',
'../../../_images/vito_config_1.png'], dtype=object)
array(['../../../_images/vito_1.png', '../../../_images/vito_1.png'],
dtype=object)
array(['../../../_images/prato_1.png', '../../../_images/prato_1.png'],
dtype=object)
array(['../../../_images/prato_2.png', '../../../_images/prato_2.png'],
dtype=object)
array(['../../../_images/iacovella_1.png',
'../../../_images/iacovella_1.png'], dtype=object)
array(['../../../_images/iacovella_2.png',
'../../../_images/iacovella_2.png'], dtype=object)
array(['../../../_images/tutorial_dynamic1.png',
'../../../_images/tutorial_dynamic1.png'], dtype=object)
array(['../../../_images/tutorial_dynamic2.png',
'../../../_images/tutorial_dynamic2.png'], dtype=object)] | docs.geoserver.org |
A dynamic label would be used for the same kind of attributes as a text box from the form builder. It can be used to display a text value.
A dynamic label linking to a customer name.
Appearance Properties
Style
Render HTML
If you set the property ‘Render HTML’
Common Properties
Name
See Widget Properties.
Data Source Properties
Attribute (Path)
The attribute (path) property specifies which attribute will be shown in the dynamic label. | https://docs.mendix.com/refguide4/dynamic-label-document-template | 2017-10-17T01:49:50 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.mendix.com |
Success Stories
We value our patients' experience at Thornhill Chiropractic and Wellness Centre. If you are currently a patient, please feel free to complete the following Client Experience Questionnaire. The Questionnaire is in Adobe Acrobat format, and requires the free Acrobat Reader to view.
Download & Print Questionnaire
Dr. John Jaskot
Testimonials of the Month
I had been lucky to survive a near fatal car accident in 1985. My neck had been broken and I had come close to being paralyzed, but here I was still walking around. I always felt lucky to have had a second chance at life. When I met Dr John in the summer of 2002, it was to get orthotic sandals. The last thing I wanted was chiropractic care. Truthfully, I was afraid of chiropractors. My husband had been seeing Dr John for several years and kept telling me he was different from the other, but I didn't believe. Dr John had heard about my story of the accident and believed he could help, but he had to convince me first that I could be helped. I decided to put my trust in Dr John and try, telling him I was afraid and to be careful with my neck. Dr John is a gentle person and really cares about each and every patient. After taking x-rays and talking to me, he realized how much stress I was carrying around in my shoulders due to the neck injury and encouraged me to have regular chiropractic care. The road to recovery was long and painful but rewarding. With regular treatments, massage and discovering yoga, I became like super-woman - transformed. Instead of getting older, I was getting younger and never felt better in my life. I thank Dr John for being such a great doctor and above all for caring and making a positive difference in my life. He is one of my favourite people on this earth. I truly believe he is the best chiropractor in the world. If you are like me and skeptical or afraid, the best thing you can do is try, you will never regret it and never look back. It's never too late to move forward with your life and your health. Hope you liked my story. :-)
Claire Lemieux Lamarche
When I first came to see Dr. John Jaskot, I was suffering with a severe sinus infection, as well as vertigo, dizziness, headaches and so on.
My name is Rhonda Shlanger and I am 55 years old. I have been coming to Dr.Jaskot's office for approximately fourteen years.
Let me tell you how I came about seeing this amazing chiropractor. In the summer of 1995, we took a trip to California. One day, I slipped around the pool area at the resort. It was on the cement, but I thought nothing much about it.
Two weeks later, I had a pain in my lower right abdominal area that eventually travelled to my back and radiated down my right leg. I was in excruciating pain and was unable to sit, stand, lie down, or walk.
After seeing medical intervention for about a year, which consisted of drugs, tests, physiotherapy, and even acupuncture. I realized that my body was not responding and the pain persisted. I was told that I had a rotated pelvis with pinched nerve. Relief seemed to be a foreign word to me. I felt discouraged and wondered if I would ever be pain-free. The cost for all this in one year rose to $5000.00.
My husband Art, met Dr.Jaskot at a business meeting around that time, and was very impressed with how this young man presented himself. "One day you will visit this doctor", he told me. Finally, after realizing that I was not getting better through traditional medical treatment, I decided to try this doctor out. On a cold January day in 1996, I hobbled into his office. My spirits were low, since the pain was unbearable. He was my last resort and I honestly did not know what I would do if he could not help me. "So you finally decided to come in!" as he warmly greeted me. He immediately made me feel at ease as he gently examined my back and leg. While teaching me his philosophies about care at the same time.Dr.Jaskot got on the floor and showed me how to gently stretch my back muscles out and used ice packs to reduce inflammation. The ice was initially uncomfortable and even painful, but he told that heat would aggravate my condition. After arranging for aggressive massage therapy as well Dr.Jaskot taught me to take control of my situation.
My life style slowly improved, and eventually, I was able to do low impact exercise, and even go back to work full time. Today I get adjustments on a monthly basis as well as massage therapy. Before, I used to go three times a week! There are times that I still have pain, but I know what to do and will see the chiropractor if I have issues. I am happy to say that after so many years of discomfort, I am almost pain-free.
In my opinion Dr. John Jaskot is a very knowledgeable chiropractor, who not only understands how the back functions, but has an excellent understanding of how our entire bodies function on the whole. He tried to determine why a person is having specific challenges and deals with the cause.
Dr. John Jaskot is a beautiful human being. He possesses a great deal of compassion and he is always positive and upbeat about life. I love his sense of humour and how he can make his patients laugh. Laughing is for the body and soul. What I really admire about him is that I can be myself, and ask him anything without feeling judged or uncomfortable. He never rushes me out of his office and always had made me feel welcome and at ease. This doctor is not only interested in my back issues, he is genuinely interested in my life. Dr. Jaskot deals with the person first and the problem second. If anyone talks to me about similar challenges, I tell them what I have learned from this amazing person who I so admire.
I feel fortunate to have met Dr. John Jaskot, and I am so thankful for all he has done to help me. If it was not for my wonderful husband who encouraged me to see this chiropractor, I would have missed out in a great doctor and one terrific person. | http://chiro-docs.com/index.php?p=416704 | 2017-10-17T01:47:48 | CC-MAIN-2017-43 | 1508187820556.7 | [] | chiro-docs.com |
New User interface Styles
- PDF for offline use
-
- Related Samples:
-
- Related SDKs:
-
Let us know how you feel about this
Translation Quality
0/250
last updated: 2017-03
This article covers the Light and Dark UI Themes that Apple has added to tvOS 10 and how to implement them in a Xamarin.tvOS app.
Overview
tvOS 10 now supports both a Dark and Light User Interface theme that all of the build-in UIKit controls will automatically adapt to, based on the user's preferences. Additionally, the developer can manually adjust UI elements based on the theme that the user has selected and can override a given theme.
The following topics will be covered in detail:
- About the New User Interface Styles
- Adopting the Light and Dark Themes
- Working with Trait Collections
About the New User Interface Styles
As stated above, tvOS 10 now supports both a Dark and Light User Interface theme that all of the build-in UIKit controls will automatically adapt to, based on the user's preferences.
The user can switch this theme by going to Settings > General > Appearance and switching between Light and Dark:
When the Dark theme is selected, all of the User Interface elements will switch to light text on a dark background:
The user has the option to switch the theme at any time and might do so based on the current activity, where the Apple TV is located or the time of day.
The Light UI Theme is the default theme, and any existing tvOS apps will still use the Light theme, regardless of the user's preferences, unless they are modified for tvOS 10 to take advantage of the Dark theme. A tvOS 10 app also has the ability to override the current theme and always use either the Light or Dark theme for some or all of its UI.
Adopting the Light and Dark Themes
To support this feature, Apple has added a new API to the
UITraitCollection class and a tvOS app must opt-in to support the Dark appearance (via a setting in its
Info.plist file).
To opt-in to Light and Dark theme support, do the following:
- In the Solution Explorer, double-click the
Info.plistfile to open it for editing.
- Select the Source view (from the bottom of the editor).
Add a new key and call it
UIUserInterfaceStyle:
Leave the type set to
Stringand enter a value of
Automatic:
- Save the changes to the file.
There are three possible values for the
UIUserInterfaceStyle key:
- Light - Forces the tvOS app's UI to always use the Light theme.
- Dark - Forces the tvOS app's UI to always use the Dark theme.
- Automatic - Switches between the Light and Dark theme based on the user's preferences in Settings. This is the preferred setting.
UIKit Theme Support
If a tvOS app is using standard, built-in
UIView based controls, they will automatically respond to the UI theme without any developer intervention.
Additionally,
UILabel and
UITextView will automatically change their color based on the select UI theme:
- The text will be black in the Light theme.
- The text will be white in the Dark theme.
If the developer ever changes the text color manually (either in the Storyboard or code), they will be responsible for handling color changes based on the UI theme.
New Blur Effects
For supporting the Light and Dark themes in a tvOS 10 app, Apple has added two new Blur Effects. These new effects will automatically adjust the blur based on the UI theme that the user has selected as follows:
UIBlurEffectStyleRegular- Uses a light blur in the Light theme and a dark blur in the Dark theme.
UIBlurEffectStyleProminent- Uses an extra-light blur in the Light theme and an extra-dark blur in the Dark theme.
Working with Trait Collections
The new
UserInterfaceStyle property of the
UITraitCollection class can be used to get the currently selected UI theme and will be a
UIUserInterfaceStyle enum of one of the following values:
- Light - The Light UI theme is selected.
- Dark - The Dark UI theme is selected.
- Unspecified - The View has not been displayed to screen yet, so the current UI theme is unknown.
Additionally, Trait Collections have the following features in tvOS 10:
- The Appearance proxy can be customized based on the
UserInterfaceStyleof a given
UITraitCollectionto change things such as images or item colors based on theme.
- A tvOS app can handle Trait Collection changes by overriding the
TraitCollectionDidChangemethod of a
UIViewor
UIViewControllerclass.
⚠️
NOTE: The Xamarin.tvOS Early Preview for tvOS 10 doesn't fully support
UIUserInterfaceStylefor
UITraitCollectionyet. Full support will be added in a future release.
Customizing Appearance Based on Theme
For User Interface elements that support the Appearance proxy, their appearance can be adjusted based on the UI Theme of their Trait Collection. So, for a given UI element, the developer can specify one color for the Light theme and another color for the Dark theme.
button.SetTitleColor (UIColor.Red, UIControlState.Normal); // TODO - Pseudocode because this isn't currently supported in the preview bindings. var light = new UITraitCollection(UIUserInterfaceStyle.Light); var dark = new UITraitCollection(UIUserInterfaceStyle.Dark); button.ForTraitCollection(light).SetTitleColor (UIColor.Red, UIControlState.Normal); button.ForTraitCollection(dark).SetTitleColor (UIColor.White, UIControlState.Normal);
⚠️
NOTE: Unfortunately, the Xamarin.tvOS Preview for tvOS 10 doesn't fully support
UIUserInterfaceStylefor
UITraitCollection, so this type of customization is not yet available. Full support will be added in a future release.
Responding to Theme Changes Directly
In the developer requires deeper control over the appearance of a UI Element based on the UI theme selected, they can override the
TraitCollectionDidChange method of a
UIView or
UIViewController class.
For example:
public override void TraitCollectionDidChange (UITraitCollection previousTraitCollection) { base.TraitCollectionDidChange (previousTraitCollection); // Take action based on the Light or Dark theme ... }
Overriding a Trait Collection
Based on the design of a tvOS app, there might be times when the developer needs to override the Trait Collection of a given User Interface element and have it always use a specific UI theme.
This can be done using the
SetOverrideTraitCollection method on the
UIViewController class. For example:
// Create new trait and configure it var trait = new UITraitCollection (); ... // Apply new trait collection SetOverrideTraitCollection (trait, this);
For more information, please see the Traits and Overriding Traits sections of our Introduction to Unified Storyboards documentation.
Trait Collections and Storyboards
In tvOS 10, an app's Storyboard can be set to respond to Trait Collections and many UI elements can be made Light and Dark Theme aware. The current Xamarin.tvOS Early Preview for tvOS 10 doesn't support this feature in the Interface Designer yet, so the Storyboard will need to be edited in Xcode's Interface Builder as a workaround.
To enable Trait Collection support, do the following:
Right-click on the Storyboard file in the Solution Explorer and select Open With > Xcode Interface Builder:
To enable Trait Collection support, switch to the File Inspector and check the Use Trait Variations property in the Interface Builder Document section:
Confirm the change to use Trait Variations:
- Save the changes to the Storyboard file.
Apple has added the following abilities when editing tvOS Storyboards in Interface Builder:
The developer can specify different variations of User Interface elements based on UI theme in the Attribute Inspector:
Several properties now have a + beside them which can be clicked to add a UI theme specific version:
The developer can specify a new property or click the x button to remove it:
The developer can preview a UI design in either the Light or Dark theme from within Interface Builder:
The bottom of the Design Surface allows the developer to switch the current UI theme:
The new theme will be displayed in Interface Builder and any Trait Collection specific adjustments will be displayed:
Additionally, the tvOS Simulator now has a keyboard shortcut to allow the developer to quickly switch between the Light and Dark themes when debugging a tvOS app. Use the Command-Shift-D keyboard sequence to toggle between Light and Dark.
Summary
This article has covered the Light and Dark UI Themes that Apple has added to tvOS 10 and how to implement them. | https://docs.mono-android.net/guides/ios/tvos/platform-features/user-interface-styles/ | 2017-10-17T02:10:53 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.mono-android.net |
Using CocosSharp in Xamarin.Forms
- PDF for offline use
-
- Sample Code:
-
- Related APIs:
-
Let us know how you feel about this
Translation Quality
0/250
last updated: 2016-05
CocosSharp can be used to add precise shape, image, and text rendering to an application for advanced visualization
Evolve 2016: Cocos# in Xamarin.Forms
Overview
CocosSharp is a flexible, powerful technology for displaying graphics, reading touch input, playing audio, and managing content. This guide explains how to add CocosSharp to a Xamarin.Forms application. It covers the following:
- What is CocosSharp?
- Adding the CocosSharp Nuget packages
- Walkthrough: Adding CocosSharp to a Xamarin.Forms app
What is CocosSharp?
CocosSharp is an open source game engine that is available on the Xamarin platform. CocosSharp is a runtime-efficient library which includes the following features:
- Image rendering using the CCSprite class
- Shape rendering using the CCDrawNode class
- Every-frame logic using the CCNode.Schedule method
- Content management (loading and unloading of resources such as .png files) using the CCTextureCache class
- Animations using the CCAction class
CocosSharp’s primary focus is to simplify the creation of cross-platform 2D games; however, it can also be a great addition to Xamarin Form applications. Since games typically require efficient rendering and precise control over visuals, CocosSharp can be used to add powerful visualization and effects to non-game applications.
Xamarin.Forms is built upon native, platform-specific UI systems. For example,
Buttons appear differently on iOS and Android, and may even differ by operating system version. By contrast, CocosSharp does not use any platform-specific visual objects, so all visual objects appear identical on all platforms. Of course, resolution and aspect ratio differ between devices, and this can impact how CocosSharp renders its visuals. These details will be discussed later in this guide.
More detailed information can be found in the CocosSharp section.
Adding the CocosSharp Nuget packages
Before using CocosSharp, developers need to make a few additions to their Xamarin.Forms project. This guide assumes a Xamarin.Forms project with an iOS, Android, and PCL project. All of the code will be written in the PCL project; however, libraries must be added to the iOS and Android projects.
The CocosSharp Nuget package contains all of the objects needed to create CocosSharp objects.
The CocosSharp.Forms nuget package includes the
CocosSharpView class, which is used to host CocosSharp in Xamarin.Forms.
Add the CocosSharp.Forms NuGet and CocosSharp will be automatically added as well.
To do this, right-click on the PCL’s Packages folder
and select Add Packages.... Enter the search term
CocosSharp.Forms, select CocosSharp for Xamarin.Forms,
then click Add Package.
Both CocosSharp and CocosSharp.Forms NuGet packages will be added to the project:
Repeat the above steps for platform-specific projects (such as iOS and Android).
Walkthrough: Adding CocosSharp to a Xamarin.Forms app
Follow these steps to add a simple CocosSharp view to a Xamarin.Forms app:
- Creating a Xamarin Forms Page
- Adding a CocosSharpView
- Creating the GameScene
- Adding a Circle
- Interacting with CocosSharp
Once you've successfully added a CocosSharp view to a Xamarin.Forms app, visit the CocosSharp documentation to learn more about creating content with CocosSharp.
1. Creating a Xamarin Forms Page
CocosSharp can be hosted in any Xamarin.Forms container. This sample for this page
uses a page called
HomePage is split in half by a
Grid to
show how Xamarin.Forms and CocosSharp can be rendered simultaneously on the same page.
First, set up the Page so it contains a
Grid and two
Button instances:
public class HomePage : ContentPage { public HomePage () { // This is the top-level grid, which will split our page in half var grid = new Grid (); this.Content = grid; grid.RowDefinitions = new RowDefinitionCollection { // Each half will be the same size: new RowDefinition{ Height = new GridLength(1, GridUnitType.Star)}, new RowDefinition{ Height = new GridLength(1, GridUnitType.Star)}, }; CreateTopHalf (grid); CreateBottomHalf (grid); } void CreateTopHalf(Grid grid) { // We'll be adding our CocosSharpView here: } void CreateBottomHalf(Grid grid) { // We'll use a StackLayout to organize our buttons var stackLayout = new StackLayout(); // The first button will move the circle to the left when it is clicked: var moveLeftButton = new Button { Text = "Move Circle Left" }; stackLayout.Children.Add (moveLeftButton); // The second button will move the circle to the right when clicked: var moveCircleRight = new Button { Text = "Move Circle Right" }; stackLayout.Children.Add (moveCircleRight); // The stack layout will be in the bottom half (row 1): grid.Children.Add (stackLayout, 0, 1); } }
On iOS, the
HomePage appears as shown in the following image:
2. Adding a CocosSharpView
The
CocosSharpView class is used to embed CocosSharp into a Xamarin.Forms app. Since
CocosSharpView inherits from the Xamarin.Forms.View class, it provides a familiar interface for layout, and it can be used within layout containers such as Xamarin.Forms.Grid. Add a new
CocosSharpView to the project by completing the
CreateTopHalf method:
void CreateTopHalf(Grid grid) { // This hosts our game view. var gameView = new CocosSharpView () { // Notice it has the same properties as other XamarinForms Views HorizontalOptions = LayoutOptions.FillAndExpand, VerticalOptions = LayoutOptions.FillAndExpand, // This gets called after CocosSharp starts up: ViewCreated = HandleViewCreated }; // We'll add it to the top half (row 0) grid.Children.Add (gameView, 0, 0); }
CocosSharp initialization is not immediate, so register an event for when the
CocosSharpView has finished its creation. Do this in the
HandleViewCreated method:
void HandleViewCreated (object sender, EventArgs e) { var gameView = sender as CCGameView; if (gameView != null) { // This sets the game "world" resolution to 100x100: gameView.DesignResolution = new CCSizeI (100, 100); // GameScene is the root of the CocosSharp rendering hierarchy: gameScene = new GameScene (gameView); // Starts CocosSharp: gameView.RunWithScene (gameScene); } }
The
HandleViewCreated method has two important details that we’ll be looking at. The first is the
GameScene class, which will be created in the next section. It’s important to note that the app will not compile until the
GameScene is created and the
gameScene instance reference is resolved.
The second important detail is the
DesignResolution property, which defines the game’s visible area for CocosSharp objects. The
DesignResolution property will be looked at after creating
GameScene.
3. Creating the GameScene
The
GameScene class inherits from CocosSharp’s
CCScene.
GameScene is the first point where we deal purely with CocosSharp. Code contained in
GameScene will function in any CocosSharp app, whether it is housed within a Xamarin.Forms project or not.
The
CCScene class is the visual root of all CocosSharp rendering. Any visible CocosSharp object must be contained within a
CCScene. More specifically, visual objects must be added to
CCLayer instances, and those
CCLayer instances must be added to a
CCScene.
The following graph can help visualize a typical CocosSharp hierarchy:
Only one
CCScene can be active at one time. Most games use multiple
CCLayer instances to sort content, but our application uses only one. Similarly, most games use multiple visual objects, but we’ll only have one in our app. A more detailed discussion about the CocosSharp visual hierarchy can be found in the Bouncing Game walkthrough.
Initially the
GameScene class will be nearly empty – we’ll just create it to satisfy the reference in
HomePage. Add a new class to your PCL named
GameScene. It should inherit from the
CCScene class as follows:
public class GameScene : CCScene { public GameScene (CCGameView gameView) : base(gameView) { } }
Now that
GameScene is defined, we can return to
HomePage and add a field:
// Keep the GameScene at class scope // so the button click events can access it: GameScene gameScene;
We can now compile our project and run it to see CocosSharp running. We haven’t added anything to our
GameScene, so the top half of our page is black – the default color of a CocosSharp scene:
4. Adding a Circle
The app currently has a running instance of the CocosSharp engine, displaying an empty
CCScene. Next, we’ll add a visual object: a circle. The
CCDrawNode class can be used to draw a variety of geometric shapes, as outlined in the Drawing Geometry with CCDrawNode guide.
Add a circle to our
GameScene class and instantiate it in the constructor as shown in the following code:
public class GameScene : CCScene { CCDrawNode circle; public GameScene (CCGameView gameView) : base(gameView) { var layer = new CCLayer (); this.AddLayer (layer); circle = new CCDrawNode (); layer.AddChild (circle); circle.DrawCircle ( // The center to use when drawing the circle, // relative to the CCDrawNode: new CCPoint (0, 0), radius:15, color:CCColor4B.White); circle.PositionX = 20; circle.PositionY = 50; } }
Running the app now shows a circle on the left side of the CocosSharp display area:
Understanding DesignResolution
Now that a visual CocosSharp object is displayed, we can investigate the
DesignResolution property.
The
DesignResolution represents the width and height of the CocosSharp area for placing and sizing objects. The actual resolution of the area is measured in pixels while the
DesignResolution is measured in world units. The following diagram shows the resolution of various parts of the view as displayed on an iPhone 5 with a screen resolution of 640x1136 pixels:
The diagram above displays pixel dimensions on the outside of the screen in black text. Units are displayed on the inside of the diagram in white text. Here are some important details displayed above:
- The origin of the CocosSharp display is at the bottom left. Moving to the right increases the X value, and moving up increases the Y value. Notice that the Y value is inverted compared to some other 2D layout engines, where (0,0) is the top-left of the canvas.
- The default behavior of CocosSharp is to maintain the aspect ratio of its view. Since the first row in the grid is wider than it is tall, CocosSharp does not fill the entire width of its cell, as shown by the dotted white rectangle. This behavior can be changed, as described in the Handling Multiple Resolutions in CocosSharp guide.
- In this example, CocosSharp will maintain a display area of 100 units wide and tall regardless of the size or aspect ratio of its device. This means that code can assume that X=100 represents the far-right bound of the CocosSharp display, keeping layout consistent on all devices.
CCDrawNode Details
Our simple app uses the
CCDrawNode class to draw a circle. This class can be very useful for business apps since it provides vector-based geometry rendering – a feature missing from Xamarin.Forms. In addition to circles, the
CCDrawNode class can be used to draw rectangles, splines, lines, and custom polygons.
CCDrawNode is also easy to use since it does not require the use of image files (such as .png). A more detailed discussion of CCDrawNode can be found in the Drawing Geometry with CCDrawNode guide.
5. Interacting with CocosSharp
CocosSharp visual elements (such as
CCDrawNode) inherit from the
CCNode class.
CCNode provides two properties which can be used to position an object relative to its parent:
PositionX and
PositionY. Our code currently uses these two properties to position the center of the circle, as shown in this code snippet:
circle.PositionX = 20; circle.PositionY = 50;
It’s important to note that CocosSharp objects are positioned by explicit position values, as opposed to most Xamarin.Forms views, which are automatically positioned according to the behavior of their parent layout controls.
We’ll add code to allow the user to click one of the two buttons to move the circle to the left or to the right by 10 units (not pixels, since the circle draws in the CocosSharp world unit space). First we’ll create two public methods in the
GameScene class:
public void MoveCircleLeft() { circle.PositionX -= 10; } public void MoveCircleRight() { circle.PositionX += 10; }
Next, we’ll add handlers to the two buttons in
HomePage to respond to clicks. When finished, our
CreateBottomHalf method contains the following code:
void CreateBottomHalf(Grid grid) { // We'll use a StackLayout to organize our buttons var stackLayout = new StackLayout(); // The first button will move the circle to the left when it is clicked: var moveLeftButton = new Button { Text = "Move Circle Left" }; moveLeftButton.Clicked += (sender, e) => gameScene.MoveCircleLeft (); stackLayout.Children.Add (moveLeftButton); // The second button will move the circle to the right when clicked: var moveCircleRight = new Button { Text = "Move Circle Right" }; moveCircleRight.Clicked += (sender, e) => gameScene.MoveCircleRight (); stackLayout.Children.Add (moveCircleRight); // The stack layout will be in the bottom half (row 1): grid.Children.Add (stackLayout, 0, 1); }
The CocosSharp circle now moves in response to clicks. We can also clearly see the boundaries of the CocosSharp canvas by moving the circle far enough to the left or right:
Summary
This guide shows how to add CocosSharp to an existing Xamarin.Forms project, how to create interaction between Xamarin.Forms and CocosSharp, and discusses various considerations when creating layouts in CocosSharp.
The CocosSharp game engine offers a lot of functionality and depth, so this guide only scratches the surface of what CocosSharp can do. Developers interested in reading more about CocosSharp can find many articles in the CocosSharp. | https://docs.mono-android.net/guides/xamarin-forms/advanced/cocossharp/ | 2017-10-17T02:09:13 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['Images/image1.png', 'Add Packages Dialog'], dtype=object)
array(['Images/image2.png', 'Packages Folder'], dtype=object)
array(['Images/image3.png', 'HomePage Screenshot'], dtype=object)
array(['Images/image4.png', 'Typical CocosSharp Hierarchy'], dtype=object)
array(['Images/image5.png', 'Blank GameScene'], dtype=object)
array(['Images/image6.png', 'Circle in GameScene'], dtype=object)
array(['Images/image7.png', 'iPhone 5s Design Resolution'], dtype=object)
array(['Images/image8.png', 'GameScene with Moving Circle'], dtype=object)] | docs.mono-android.net |
History
Last updated on October 20, 2016
About
History is available for nearly all objects from the beginning of 2016.
The following is a list of all objects with History across all OpenX products.
If no history is available, you will see the following message:
Types of History events
History automatically updates after all History events. OpenX provides the following three types of History events. | https://docs.openx.com/Content/publishers/history.html | 2017-10-17T01:51:31 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['../Resources/Images/history_none_available_236x96.png', None],
dtype=object) ] | docs.openx.com |
Teresa Sampang-Flemming RMT
Teresa Sampang-Flemming BA,BSc, RMT
Teresa is a valuable member of our healthcare team. She has been a registered massage therapist and has practiced in Richmond Hill/ Thornhill since 2004. She has a background in Psychology and Kinesiology and holds degrees in both. Prior to her career in massage therapy, she was a certified personal trainer and continues to teach fitness and exercise, in addition to her massage therapy practice. As a kinesiologist, she is skilled in treating sports injuries, particularly injuries of the shoulder joint, in which she has extensive experience. She also has expertise in pregnancy (pre and post natal) massage, and can vouch for the benefits it provides through the discomforts of pregnancy, having had 2 children of her own.
Teresa believes that everyone can benefit from massage therapy, whether it is for stress relief, or pain relief. Everyone deserves to live lives that are relatively pain-free, and Teresa will be more than happy to assist you in achieving that goal.
Daniel Huali Wang, RMT
Daniel Huali Wang, Ph.D, RMT, R.TCMP & R.Ac., DOMP
Registered Massage Therapist
Registered TCM Practitioner and Acupuncturist
Registered Osteopathic Practitioner
Daniel completed his degree as a Medical Doctor from Xi'an Jiatong University, China in 1995 and his PhD degree in Kinesiology from Shanghai University of Sport, Shanghai, China in 2007.
Growing up being exposed to Acupuncture and Traditional Chinese Medicine at a young age, Daniel has a deep appreciation for the wisdom and healing powers of Eastern medicine. Daniel also has experience working in orthopedic rehabilitation, treating various conditions such as fractures, post-surgical tendon repairs, total joint replacements, as well as acute and chronic sports/workplace injuries. With his education and certifications along with his knowledge on rehabilitation exercises, Daniel can offer his patients a complete treatment program that not only focuses on the acute ailment, but long term health goals and lifestyle improvements.
Using a combination of Western philosophies and Eastern knowledge sparked his passion and enthusiasm for holistic health. As a Rehabilitation professional. Daniel believes in a multimodel treatment approach incorporating manual therapy, TCM, acupuncture, exercises and education to help his patients achieve optimal health and restore function. Using these tools, he works to support the body's natuaral healing process to achieve optimal physical, mental and emotional balance and health. Daniel believes in individualized care, and draws from science and traditional wisdom to address the root cause of disease while providing safe, gentle and effective healing to the whole person.
Daniel is in good standing with the College of Massage Therapists of Ontario, the College of Traditional Chinese Medicine and Acupuncture of Ontario and the Ontario Association of Osteopathic Practitioners. | http://chiro-docs.com/index.php?p=416705 | 2017-10-17T01:46:40 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['img/resized_275x347_teresa.png', 'resized 275x347 teresa'],
dtype=object)
array(['img/photo_2017.jpg', 'photo 2017'], dtype=object)] | chiro-docs.com |
This topic includes the following sections:
With Synergy DBL Integration for Visual Studio (SDI), you can use Visual Studio to debug traditional Synergy programs. You'll use standard Visual Studio commands to start and control debugging, and you'll have access to Visual Studio debugging features and some standard Synergy debugger commands. You can also remotely debug a Synergy program, which enables you to debug a Synergy program on a different machine (Windows, UNIX, or OpenVMS), even if you have no Visual Studio project for that program. See Remote debugging below.
To debug, build projects in debug mode. (The debugger can step into a routine only if the project is built in debug mode.) Then, while your project is open in Visual Studio, select a Visual Studio debugging command, such as Debug > Start Debugging from the Visual Studio menu. Note the following:
!SET WATCH a
If you enter one of these commands without the exclamation point, the debugger will interpret it as a variable, which will cause errors.
You can use the Visual Studio Remote Debugger to debug a traditional Synergy program running on Windows, UNIX, or OpenVMS. (This includes xfServerPlus server code when xfServerPlus is on a Windows, UNIX, or OpenVMS machine.) You don’t need to have a Visual Studio project for the program, but the program must be built in debug mode, SDI must be installed, and you’ll need the source file (.dbl) with the code you want to step through.
For OpenVMS, note you must define the DBG_RMT logical (e.g., $ DEFINE DBG_RMT "-rd 1024:80"). Note that paths are ignored when debugging on OpenVMS.
For xfServerPlus debugging, the machine running the Visual Studio remote debugging session may be the xfServerPlus machine, the xfNetLink client machine, or a separate machine.
For OpenVMS, the remote debugger ignores paths. The file you open in Visual Studio determines which program on the OpenVMS machine will be debugged. This may not work if more than one file has the same name.
For an xfServerPlus program, this will be the .dbl file (or files) with the routines that are accessed via xfServerPlus.
For client or stand-alone programs, use dbr with the ‑dv and ‑rd options. For example:
dbr -dv -rd 1965 my_program
For OpenVMS, use the RUN command. For example:
$ RUN MY_PROGRAM
For xfServerPlus server code, restart the xfServerPlus session with remote debugging enabled (or you can start a completely new session on a different port if desired). Then run the xfNetLink client application so that a connection is made to xfServerPlus and the xfServerPlus code is invoked. See Restarting xfServerPlus with remote debugging below, and note the following:
Once connected, the remote debug session works just like any other Visual Studio debug session for traditional Synergy. If you get an error, check the time-out setting and make sure it is long enough for you to complete step 3 through step 6.
If paths on the remote machine don't match specified paths, Visual Studio dialogs will open as you debug, prompting you for paths.
When you are finished debugging an OpenVMS program, unset the DBG_RMT logical.
When you are through debugging an xfServerPlus program, run the Synergy Configuration Program (or restart rsynd without the ‑rd option) to turn off remote debugging. If you leave remote debugging enabled, the server will always wait for the specified time-out before continuing with the application. Note the following for xfServerPlus:
To restart the xfServerPlus session with remote debugging enabled, do one of the following:
For details on using the Synergy Configuration Program, refer to the Synergy Configuration Program online help.
Note that for remote debugging xfServerPlus server code on Windows, xfpl.dbr uses the regular runtime (dbr) instead of the non-interactive runtime (dbs). This means that your environment may change, because dbr always reads the synergy.ini file, whereas dbs reads it only when SFWINIPATH is set. We recommend that you use SFWINIPATH to point to the location of your synergy.ini file and thereby avoid potential problems. For information on dbs, see Running Synergy DBL programs.
rsynd -rd debug_port[:timeout] -w -u xfspAcct
where debug_port is the port number that the xfServerPlus machine should listen on for the Telnet client, and timeout is the number of seconds that the server should wait for a Telnet connection after the xfNetLink-xfServerPlus connection has been made. The default is 100 seconds. This time-out should always be less than the connect time-out set on the client (which defaults to 120 seconds), but it needs to be long enough for you to attach to the xfServerPlus process (step 3 through step 6 above) after the client connects to xfServerPlus. (For more information about starting xfServerPlus, see Running xfServerPlus on UNIX. For complete rsynd syntax, see rsynd program.) | http://docs.synergyde.com/vs/gsChap10Debuggingtraditional.htm | 2017-10-17T02:01:59 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.synergyde.com |
Do You Need Orthotics? We can help! mechanics. These changes can put stress on joints higher up in your body and can lead to more serious problems.
Properly made orthotics are custom molded to the support requirements of your feet. They help restore the normal balance and alignment of your body by gently correcting foot abnormalities. Over time, custom orthotics prescribed by a healthcare practitioner will help to support your every movement, bringing you relief from fatigue and pain, allowing you to enjoy daily activities.
If you suffer from any of these symptoms then orthotics will help or resolve the underling problem causing:
- Foot, ankle, knee, hip, or lower back pain / arthritis.
- Achy / Tired feet
- Flat feet
- Bunions
- Shin splints
- Runner's knee / hip
FASCINATING.
Watch this video on "What does your walk say about you":
We offer a wide range of custom orthotics as well as orthopeadic shoes, braces, and modifications.
The foot assessment consists of Computerized Gait Analysis, Bio-Mechanical exam, Casting, and Muscle Testing.
We utilize all the top Orthotic labs in North America to manufacture the best orthotics on the market.
We can arrange a Podiatrist or Chiropodist referral or dispensing.
| http://chiro-docs.com/index.php?p=416708 | 2017-10-17T01:41:04 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['img/orthotics.jpg', 'orthotics'], dtype=object)
array(['img/orthotic.jpg', 'orthotic'], dtype=object)] | chiro-docs.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the GetCurrentUser operation. Retrieves details of the current user for whom the authentication token was generated. This is not a valid action for SigV4 (administrative API) clients.
Namespace: Amazon.WorkDocs.Model
Assembly: AWSSDK.WorkDocs.dll
Version: 3.x.y.z
The GetCurrent | http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/WorkDocs/TWorkDocsGetCurrentUserRequest.html | 2017-10-17T02:15:32 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.aws.amazon.com |
Cleansing the body at an internal level is a hot topic, and for good reason. We live in a very toxic world. Cleanse, Replenish and Revitalize is the founding principle behind the Isagenix products. Cleansing helps keep the body healthy and is key for long-term weight loss.
Please follow the link to get more information about toxicity in your environment, and how Isagenix can help you today! You can also order products through the link by either sponsoring Juliana Haddad, or call me directly at (905) 886-9778.
Reawaken your body's true potential !
Nutritional Cleansing is the key component of the Cleansing and Fat Burning System and Total Health and Wellness System. Cleansing with nutrients thoroughly 'scrubs' your body from the inside, which helps it work more efficiently. Our detoxification systems then infuses your newly cleansed cells with more nourishment to help optimize health and performance leading to vibrant good health.
Isagenix is the world leader in Nutritional Cleansing. Here are 20 reasons to cleanse with Isagenix:
- Liver Support
- Antioxidant Protection
- Aids in the Loss of Weight
- Enhanced Mental Abilities
- Increased Energy
- Enhanced Immune Support
- Healthy Brain Function
- Better Cellular Function
- Anti-Aging Protection
- Better Digestion
- Enhanced Nutrition
- Weight Control
- Fights Obesity
- Adatogenic Support
- Lessend Cravings
- Overall Wellness
- Balanced Hormones
- Youthful Skin
- Rejuvenation with Trace Minerals
- Gastrointestinal Support
Isagenix is the ultimate transformation system to combat the toxins in our environment, improve body composition, and slow the aging process. To stay healthy in a toxic environment it is paramount that the body has a means to regularly detoxify and get the best nourishment possible.
"As a family of four , we use Isagenix products to boost our immune systems, fortify our nutrition with breakfast protein shakes and detoxify regularly to maintain healthy weight." Dr. JJ | http://chiro-docs.com/index.php?p=416709 | 2017-10-17T01:46:11 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['img/isagenixad.jpg', 'isagenixad'], dtype=object)
array(['img/cellular-cleansing_Isagenix.jpg',
'cellular cleansing Isagenix'], dtype=object)] | chiro-docs.com |
Pear Scab Venturia pirina
J.W. Travis, J.L. Rytter, and K.S. Yoder.
Disease Cycle-bloom period and continuing through fruit set, for both fresh and processing fruit, determine pear scab infection periods by observing | http://docs.metos.at/Pear+Scab?structure=Disease+model_en | 2020-09-18T17:38:29 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.metos.at |
Form to create, delete, and modify abilities of process administrators
The AP:Process Administrator form opens when you click View or Create on the Administrator tab in the AP:Administration form. AR System administrators and process administrators use this form to create, delete, and modify the abilities of other process administrators. See Configuring process administrator capabilities.
AP:Process Administrator form — Process Administrator tab
(Click the image to expand it.)
Fields on the AP:Process Administrator form — Process Administrator tab
For information about the Administrative Information tab, see Administrative Information tab.
Note
The first process administrator must be created by your AR System administrator.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/ars1808/form-to-create-delete-and-modify-abilities-of-process-administrators-820497459.html | 2020-09-18T18:13:16 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.bmc.com |
Streaming GROUP BY requires that at least one of the grouped expressions be monotonically increasing and non-constant. The only column known in advance to be monotonically increasing is ROWTIME. (See also Monotonic Expressions and Operators.)
The MONOTONIC function allows you to declare that a given expression is monotonically increasing, enabling a streaming GROUP BY to use that expression as a key. The MONOTONIC function evaluates its argument and returns the result (as the same type as its argument).
Note: In an s-Server context, monotonicity always means that a stream is increasing (even though “monotonic” can also refer to steadily decreasing values). Thus, to declare that an expression is “monotonic” is to declare that it is monotonically increasing.
MONOTONIC(<expression>)
By enclosing an expression in MONOTONIC, you are asserting that values of that expression are increasing and never change direction. For example, if you have a stream LINEITEMS consisting of the line items of orders, and you wrote MONOTONIC(orderId), you are asserting that line items are consecutive in the stream. It would be OK if there were line items for order 1000, followed by line items for order 1001, followed by line items for order 1005. It would be illegal if there were then a line item for order 1001, i.e., the line item sequence became 1000, 1001, 1005, 1001. Similarly, the following line item sequences would be illegal: - 987, 974, 823, 973 - 987, 974, 823, 1056
For example the strings in following sequence are not ascending, since “F” alphabetically precedes the other first letters. ‘Monday 26th April, 2010’ ‘Monday 26th April, 2010’ ‘Tuesday 27th April, 2010’ ‘Wednesday 28th April, 2010’ ‘Wednesday 28th April, 2010’ ‘Wednesday 28th April, 2010’ ‘Friday 30th April, 2010’ Note that the definition of MONOTONIC is precisely what is needed for GROUP BY to make progress.
If an expression declared monotonic is not monotonically increasing – i.e., if the assertion is not valid for the actual data – then s-Server’s behavior is unspecified.
In other words, if you are certain that an expression is monotonic, you can use this MONOTONIC function to enable SQLstream to treat the expression as monotonically increasing.
However, if you are mistaken and the values resulting from evaluating the expression change from ascending to descending or from descending to ascending, unexpected results may arise. SQLstream streaming SQL will take you at your word and operate on your assurance that the expression is monotonic. But if in fact it is not monotonic, the resulting SQLstream behavior cannot be determined in advance, and so results may not be as expected or desired. | https://docs.sqlstream.com/sql-reference-guide/built-in-functions/monotonic/ | 2020-09-18T16:06:44 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.sqlstream.com |
To avoid any initialization problems, please perform the following steps in the given order:
- Unplug the solar panel from the control unit.
- Unplug the battery from control unit.
- Unplug the USB cable.
- Remove the jumper from position J1.
- Connect the cable from control unit to CropVIEW unit.
- Connect the battery to control unit.
Now you should see the red LED in the control unit turning off and approximately 2 seconds later, you should
see the three LEDs in the CropVIEW unit turning on and off. At this point we are sure that the system is running.
If the LEDs aren’t turning on, repeat the process.
- Connect the solar panel to the control unit.
- Final check of all connections, LEDs and cables.
- Close the housing. | http://docs.metos.at/Final+setup+of+the+system?structure=CropVIEW | 2020-09-18T18:05:10 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.metos.at |
Transfer CFT 3.3.2 Users Guide

Transfer CFT messages and error codes

This section lists the different types of messages that Transfer CFT generates, and corrective actions when applicable. It begins with this section, which describes message formats, severity, and additional conventions used in this documentation.

Message format

Format in the documentation

Transfer CFT messages provide information on the status of the Transfer CFT. Messages have the general format and supporting information:

- The message severity is displayed.
- CFTxxx: the actual message that is displayed on Transfer CFT.
- Explanation: the elements, such as variables, in the above message are detailed.
- Consequence: description of what happens to the Transfer CFT, or lists corrective actions.
- Action: if applicable, corrective action is added here.

Format in the product

Auto-documented messages

Certain messages that are auto-documented, for example CFTA01I, CFTA02W, CFTA03E, CFTA04F, may not appear in this documentation. These messages are considered self-explanatory.

Writing conventions

Messages are written according to the following conventions.

Message description

The Transfer CFT messages use the format CFTxnns, for example CFTC01E. The elements that make up the message format are described in the following sections.

Where:
- x: message source
- nn: sequence number
- s: message severity

Message source

| Code | Description |
|------|-------------|
| C | Catalog: Access to the catalog |
| E | End: Transfer CFT shutdown phase |
| F | File: Access to files |
| H | External PeSIT: PeSIT protocol, non-SIT profile and CFT profile |
| I | Init: Transfer CFT initialization phase |
| N | Network |
| P | Parameter: Access to parameter files |
| R | Request: Requests that Transfer CFT received from CFTUTIL, applications, or interactive functions |
| S | System: System interface operations by the Transfer CFT |
| T | Transfers: Actions relating to transfers |
| U | CFTUTIL: Messages from the CFTUTIL utility |
| X | Security: Security system (only in the log) |
| Y | SSL: SSL protocol |

Sequence number

The sequence number is an index characterizing the message within a given class.

Severity

The severity code is described in the following table.

| Code | Indicates |
|------|-----------|
| I | Informational message only |
| W | An anomaly which may be an error |
| E | An error requiring correction (parameter setting or environment error) |
| F | A serious system error requiring the intervention of Product Support |

Symbolic variables used in message text

The table below lists the symbolic variables used in message text.

| Code | Description |
|------|-------------|
| char | Alphanumeric character |
| cr | Function return code |
| cmd | Parameter setting or operator command name. Example: CFTPARM, SEND |
| cpu_id | Host computer's CPU number |
| ctx | Internal context |
| diagn | Diagnostic code of a network error. Specific to the access method and, in some cases, to the system. Expressed in hexadecimal form |
| diagi | Internal CFT diagnostic code (DIAGI) of the catalog |
| diagp | CFT protocol diagnostic code (DIAGP) of the catalog |
| dest | Partner list identifier (CFTDEST command) |
| direct | Transmission direction |
| fname | File name |
| host | Physical address of the remote partner |
| id | Command identifier (value of the ID parameter) |
| idf | Model file identifier (CFTSEND/CFTRECV command) |
| idt | Transfer identifier |
| keyw | Keyword in a parameter setting command or an operator request. Example: PART, DIRECT |
| local | Location of a network error: 1: local, 0: remote |
| label | Freeform name relating to the software protection key |
| maxtrans | Number of transfers authorized at any one time |
| mode | Action requested |
| n | Numeric character |
| nb | Numeric code |
| ncr | General network error code |
| ncs | Network error code specific to the access method and system |
| net | Network resource identifier (CFTNET command) |
| part | Local partner identifier (CFTPART command) |
| prot | Transfer CFT protocol identifier (CFTPROT command) |
| pevent | Protocol event |
| pid | Process identifier |
| pstate | Protocol status |
| recov | General error recovery code (in the case of a network error), independent of the system or access method |
| reason | Reason code for a network error. Specific to the access method and, in some cases, to the system. Expressed in hexadecimal form |
| sappl | SAPPL parameter value (name of the sending application) |
| scs | System return code describing a system interface access error |
| state | Transfer status |
| str | Character string forming the message label |
| vfm | VFM base name |
WHERE
Synopsis
SELECT fields FROM table WHERE condition-expression
Arguments
Description
The optional WHERE clause can be used for the following purposes:
To specify predicates that restrict which data values are to be returned.
To specify an explicit join between two tables.
To specify an implicit join between the base table and a field in another table.
The WHERE clause is most commonly used to specify one or more predicates that are used to restrict the data (filter out rows) retrieved by a SELECT query or subquery. You can also use a WHERE clause in an UPDATE command, DELETE command, or in a result set SELECT in an INSERT (or INSERT OR UPDATE) command.
The WHERE clause qualifies or disqualifies specific rows from the query selection. The rows that qualify are those for which the condition-expression is true. The condition-expression can be one or more logical tests (predicates). Multiple predicates can be linked by the AND and OR logical operators. See “Predicates and Logical Operators” for further details and restrictions.
If a predicate includes division and there are any values in the database that could produce a divisor with a value of zero or a NULL value, you cannot rely on order of evaluation to avoid division by zero. Instead, use a CASE statement to suppress the risk.
A WHERE clause can specify a condition-expression that includes a subquery. The subquery must be enclosed in parentheses.
A WHERE clause can specify an explicit join between two tables using the = (inner join) symbolic join operator. For further details, refer to the JOIN page of this manual.
A WHERE clause can specify an implicit join between the base table and a field from another table using the arrow syntax (–>) operator. For further details, refer to Implicit Joins in Using InterSystems SQL.
Specifying a Field
The simplest form of a WHERE clause specifies a predicate comparing a field to a value, such as WHERE Age > 21. Valid field values include the following: A column name (WHERE Age > 21); an %ID, %TABLENAME, or %CLASSNAME; a scalar function specifying a column name (WHERE ROUND(Age,-1)=60), a collation function specifying a column name (WHERE %SQLUPPER(Name) %STARTSWITH ' AB').
You cannot specify a field by column number.
Because the name of the RowID field can change when a table is re-compiled, a WHERE clause should avoid referring to the RowID by name (for example, WHERE ID=22). Instead, refer to the RowID using the %ID pseudo-column name (for example, WHERE %ID=22).
You cannot specify a field by column alias; attempting to do so generates an SQLCODE -29 error. However, you can use a subquery to define a column alias, then use this alias in the WHERE clause. For example:
SELECT Interns FROM (SELECT Name AS Interns FROM Sample.Employee WHERE Age<21) WHERE Interns %STARTSWITH 'A'
You cannot specify an aggregate field; attempting to do so generates an SQLCODE -19 error. However, you can supply an aggregate function value to a WHERE clause by using a subquery. For example:
SELECT Name,Age,AvgAge FROM (SELECT Name,Age,AVG(Age) AS AvgAge FROM Sample.Person) WHERE Age < AvgAge ORDER BY Age
Integers and Strings
If a field defined as integer data type is compared to a numeric value, the numeric value is converted to canonical form before performing the comparison. For example, WHERE Age=007.00 parses as WHERE Age=7. This conversion occurs in all modes.
If a field defined as integer data type is compared to a string value in Display mode, the string is parsed as a numeric value. For instance, an empty string (''), like any non-numeric string, is parsed as the number 0. This parsing follows ObjectScript rules for handling strings as numbers. For example, WHERE Age='twenty' parses as WHERE Age=0; WHERE Age='20something' parses as WHERE Age=20. For further details, refer to Strings as Numbers in the “Data Types and Values” chapter of Using ObjectScript. SQL only performs this parsing in Display mode; in Logical or ODBC mode comparing an integer to a string value returns null.
To compare a string field with a string containing a single quote, double the single quote. For example, WHERE Name %STARTSWITH 'O''' returns O’Neil and O’Connor, but not Obama.
Date and Time
In InterSystems SQL dates and times are compared and stored using a Logical Mode internal representation. They can be returned in Logical mode, Display Mode, or ODBC mode. For example, the date September 28, 1944 is represented as: Logical mode 37891, Display mode 09/28/1944, ODBC mode 1944-09-28. When specifying a date or time in a condition-expression an error can occur due to a mismatch of SQL mode and date or time format, or due to an invalid date or time value.
A WHERE clause condition-expression must use the date or time format that corresponds to the current mode. For example, when in Logical mode, to return records with a date of birth in 2005, the WHERE clause would appear as follows: WHERE DOB BETWEEN 59901 AND 60265. When in Display mode, the same WHERE clause would appear as follows: WHERE DOB BETWEEN '01/01/2005' AND '12/31/2005'.
Failing to match the condition-expression date or time format to the display mode results in an error:
In Display mode or ODBC mode, specifying date data in the incorrect format generates an SQLCODE -146 error. Specifying time data in the incorrect format generates an SQLCODE -147 error.
In Logical mode, specifying date or time data in the incorrect format does not generate an error, but either returns no data or returns unintended data. This is because Logical mode does not parse a date or time in Display or ODBC format as a date or time value. The following WHERE clause, when executed in Logical mode, returns unintended data: WHERE DOB BETWEEN 37500 AND 38000 AND DOB <> '1944-09-28' returns a range of DOB values, including DOB=37891 (September 28, 1944), which the <> predicate was attempting to omit.
An invalid date or time value also generates an SQLCODE -146 or -147 error. An invalid date is one that you can specify in Display mode/ODBC mode, but InterSystems IRIS cannot convert into a Logical mode equivalent. For example, in ODBC mode the following generates an SQLCODE -146 error: WHERE DOB > '1830-01-01' because InterSystems IRIS cannot process a date value prior to December 31, 1840. The following in ODBC mode also generates an SQLCODE -146 error: WHERE DOB BETWEEN '2005-01-01' AND '2005-02-29', because 2005 is not a leap year.
When in Logical mode, a Display mode or ODBC mode value is not parsed as a date or time value, and therefore its value is not validated. For this reason, in Logical mode a WHERE clause such as WHERE DOB > '1830-01-01' does not return an error.
Stream Fields
In most situations, you cannot use a stream field in a WHERE clause predicate. Doing so results in an SQLCODE -313 error. However, the following uses of stream fields are allowed in a WHERE clause:
Stream null testing: you can specify the predicate streamfield IS NULL or streamfield IS NOT NULL.
Stream length testing: you can specify a CHARACTER_LENGTH(streamfield), CHAR_LENGTH(streamfield), or DATALENGTH(streamfield) function in a WHERE clause predicate.
Stream substring testing: you can specify a SUBSTRING(streamfield,start,length) function in a WHERE clause predicate.
List Fields
You cannot directly use %List data in a WHERE clause comparison. For further details, refer to the Data Types reference page in this manual.
To reference structured list data, use the %INLIST predicate or the FOR SOME %ELEMENT predicate.
To use the data values of a list field in a condition-expression, you can use %EXTERNAL to compare the list values to a predicate. For example, to return all records in which the FavoriteColors list field value consists of the single element 'Red':
SELECT Name,FavoriteColors FROM Sample.Person WHERE %EXTERNAL(FavoriteColors)='Red'
When %EXTERNAL converts a list to DISPLAY format, the displayed list items appear to be separated by a blank space. This “space” is actually the two non-display characters CHAR(13) and CHAR(10). To use a condition-expression against more than one element in the list, you must specify these characters. For example, to return all records in which the FavoriteColors list field value consists of the two elements 'Orange' and 'Black' (in that order):
SELECT Name,FavoriteColors FROM Sample.Person WHERE %EXTERNAL(FavoriteColors)='Orange'||CHAR(13)||CHAR(10)||'Black'
Variables
A WHERE clause predicate can specify one or more of the following ObjectScript special variables (or their abbreviations): $HOROLOG, $JOB, $NAMESPACE, $TLEVEL, $USERNAME, $ZHOROLOG, $ZJOB, $ZNSPACE, $ZPI, $ZTIMESTAMP, $ZTIMEZONE, $ZVERSION.
List of Predicates
The SQL predicates fall into several categories; the sections below describe the most commonly used ones.
Predicate Case-Sensitivity
A predicate uses the collation type defined for the field. By default, string data type fields are defined with SQLUPPER collation, which is not case-sensitive. The “Collation” chapter of Using InterSystems SQL provides details on defining the string collation default for the current namespace and specifying a non-default field collation type when defining a field/property.
The %INLIST, Contains operator ([), %MATCHES, and %PATTERN predicates do not use the field’s default collation. They always uses EXACT collation, which is case-sensitive.
A predicate comparison of two literal strings is always case-sensitive.
Predicate Conditions and %NOINDEX
You can preface a predicate condition with the %NOINDEX keyword to prevent the query optimizer using an index on that condition. This is most useful when specifying a range condition that is satisfied by the vast majority of the rows. For example, WHERE %NOINDEX Age >= 1. For further details, refer to Index Optimization Options in the SQL Optimization Guide.
Predicate Condition on Outlier Value
If the WHERE clause in a Dynamic SQL query selects on a non-null outlier value, you can significantly improve performance by enclosing the outlier value literal in double parentheses. These double parentheses cause Dynamic SQL to use the outlier selectivity when optimizing. For example, if your business is located in Massachusetts (MA), a large percentage of your employees will reside in Massachusetts. For the Employees table Home_State field, 'MA' is the outlier value. To optimally select for this value, you should specify WHERE Home_State=(('MA')).
This syntax should not be used in Embedded SQL or in a view definition. In Embedded SQL or a view definition, the outlier selectivity is always used and requires no special coding.
A WHERE clause in a Dynamic SQL query automatically optimizes for a null outlier value. For example, a clause such as WHERE FavoriteColors IS NULL. No special coding is required for IS NULL and IS NOT NULL predicates when NULL is the outlier value.
Outlier selectivity is determined by running the Tune Table utility. For further details, refer to Outlier Optimization in the “Optimizing Tables” chapter of the SQL Optimization Guide.
Equality Comparison Predicates
The following are the available equality comparison predicates: = (equals), <> and != (does not equal), > (greater than), < (less than), >= (greater than or equal to), and <= (less than or equal to).
For example:
SELECT Name, Age FROM Sample.Person WHERE Age < 21
SQL defines comparison operations in terms of collation: the order in which values are sorted. Two values are equal if they collate in exactly the same way. A value is greater than another value if it collates after the second value. String field collation takes the field’s default collation. The InterSystems IRIS default collation is not case-sensitive. Thus, a comparison of two string field values or a comparison of a string field value with a string literal is (by default) not case-sensitive. For example, if Home_State field values are uppercase two-letter strings, the predicate WHERE Home_State='ma' matches the stored value 'MA'.
Note, however, that a comparison of two literal strings is case-sensitive: WHERE 'ma'='MA' is always FALSE.
BETWEEN Predicate
The BETWEEN comparison operator allows you to select those data values that are in the range specified by the syntax BETWEEN lowval AND highval. This range is inclusive of the lowval and highval values themselves. This is equivalent to a paired greater than or equal to operator and a less than or equal to operator. This comparison is shown in the following example:
SELECT Name,Age FROM Sample.Person WHERE Age BETWEEN 18 AND 21
This returns all the records in the Sample.Person table with an Age value between 18 and 21, inclusive of those values. Note that you must specify the BETWEEN values in ascending order; a predicate such as BETWEEN 21 AND 18 would return no records.
Like most predicates, BETWEEN can be inverted using the NOT logical operator, as shown in the following example:
SELECT Name,Age FROM Sample.Person WHERE Age NOT BETWEEN 20 AND 55 ORDER BY Age
This returns all the records in the Sample.Person table with an Age value less than 20 or greater than 55, exclusive of those values.
BETWEEN is commonly used for a range of numeric values, which collate in numeric order. However, BETWEEN can be used for a collation sequence range of values of any data type.
BETWEEN uses the same collation type as the column it is matching against. By default, string data types collate as not case-sensitive.
For further details, refer to the BETWEEN predicate reference page in this manual.
IN and %INLIST Predicates
The IN predicate is used for matching a value to an unstructured series of items. It has the following syntax:
WHERE field IN (item1,item2[,...])
Collation applies to the IN comparison as it applies to an equality test. IN uses the field’s default collation. By default, comparisons with field string values are not case-sensitive.
The %INLIST predicate is an InterSystems IRIS extension for matching a value to the elements of an InterSystems IRIS list structure. It has the following syntax:
WHERE item %INLIST listfield
%INLIST uses EXACT collation. Therefore, by default, %INLIST string comparisons are case-sensitive.
With either predicate you can perform equality comparisons and subquery comparisons.
For further details, refer to the IN and %INLIST predicate reference pages in this manual.
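For instance, using the sample table referenced throughout this page (returned rows depend on your data):

SELECT Name,Home_State FROM Sample.Person WHERE Home_State IN ('MA','NY','VT')

SELECT Name,FavoriteColors FROM Sample.Person WHERE 'Red' %INLIST FavoriteColors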
Substring Predicates
You can use the following to compare a field value to a substring:
%STARTSWITH Predicate
The InterSystems IRIS %STARTSWITH comparison operator permits you to perform partial matching on the initial characters of a string or numeric. The following example uses %STARTSWITH to select those records in which the Name value begins with “S”:
SELECT Name,Age FROM Sample.Person WHERE Name %STARTSWITH 'S'
Like other string field comparisons, %STARTSWITH comparisons use the field’s default collation. By default, string fields are not case-sensitive. For example:
SELECT Name,Home_City,Home_State FROM Sample.Person WHERE Home_City %STARTSWITH Home_State
For further details, refer to the %STARTSWITH predicate reference page in this manual.
Contains Operator ([)
The Contains operator is the open bracket symbol: [. It permits you to match a substring (string or numeric) to any part of a field value. The comparison is always case-sensitive. The following example uses the Contains operator to select those records in which the Name value contains a “S”:
SELECT Name, Age FROM Sample.Person WHERE Name [ 'S'
NULL Predicate
This detects undefined values. You can detect all null values, or all non-null values. The NULL predicate has the following syntax:
WHERE field IS [NOT] NULL
NULL predicate conditions are one of the few predicates that can be used on stream fields in a WHERE clause.
For further details, refer to the NULL predicate reference page in this manual.
EXISTS Predicate
This operates with subqueries to test whether a subquery evaluates to the empty set.
SELECT t1.disease FROM illness_tab t1 WHERE EXISTS (SELECT t2.disease FROM disease_registry t2 WHERE t1.disease = t2.disease HAVING COUNT(t2.disease) > 100)
For further details, refer to the EXISTS predicate reference page in this manual.
FOR SOME Predicate
The FOR SOME predicate of the WHERE clause can be used to determine whether or not to return any records based on a condition test of one or more field values. This predicate has the following syntax:
FOR SOME (table [AS t-alias]) (fieldcondition)
FOR SOME specifies that fieldcondition must evaluate to true; at least one of the field values must match the specified condition. table can be a single table or a comma-separated list of tables, and each table can optionally take a table alias. fieldcondition specifies one or more conditions for one or more fields within the specified table. Both the table argument and the fieldcondition argument must be delimited by parentheses.
The following example shows the use of the FOR SOME predicate to determine whether to return a result set:
SELECT Name,Age AS AgeWithWorkers FROM Sample.Person WHERE FOR SOME (Sample.Person) (Age<65) ORDER BY Age
In the above example, if at least one field contains an Age value less than the specified age, all of the records are returned. Otherwise, no records are returned.
For further details, refer to the FOR SOME predicate reference page in this manual.
FOR SOME %ELEMENT Predicate
The FOR SOME %ELEMENT predicate of the WHERE clause has the following syntax:
FOR SOME %ELEMENT(field) [AS e-alias] (predicate)
The FOR SOME %ELEMENT predicate matches the elements in field with the specified predicate clause value. The SOME keyword specifies that at least one of the elements in field must satisfy the specified predicate condition. The predicate can contain the %VALUE or %KEY keyword.
The FOR SOME %ELEMENT predicate is a Collection Predicate.
For further details, refer to the FOR SOME %ELEMENT predicate reference page in this manual.
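For instance, the following illustrative query returns rows where any element of the list field matches (results depend on your data):

SELECT Name,FavoriteColors FROM Sample.Person WHERE FOR SOME %ELEMENT(FavoriteColors) (%VALUE='Red')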
LIKE, %MATCHES, and %PATTERN Predicates
These three predicates allow you to perform pattern matching.
LIKE allows you to pattern match using literals and wildcards. Use LIKE when you wish to return data values that contain a known substring of literal characters, or contain several known substrings in a known sequence. LIKE uses the collation of its target for letter case comparisons.
%MATCHES allows you to pattern match using literals, wildcards, and lists and ranges. Use %MATCHES when you wish to return data values that contain a known substring of literal characters, or contain one or more literal characters that fall within a list or range of possible characters, or contain several such substrings in a known sequence. %MATCHES uses EXACT collation for letter case comparisons.
%PATTERN allows you to specify a pattern of character types. For example, '1U4L1",".A' (1 uppercase letter, 4 lowercase letters, one literal comma, followed by any number of letter characters of either case). Use %PATTERN when you wish to return data values that contain a known sequence of character types. %PATTERN can specify known literal characters, but is especially useful when the data value is unimportant, but the character type format of those values is significant.
To perform a comparison with the first characters of a string, use the %STARTSWITH predicate.
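For instance (illustrative patterns; returned rows depend on your data):

SELECT Name FROM Sample.Person WHERE Name LIKE '%son'

SELECT Name FROM Sample.Person WHERE Name %MATCHES '*[0-9]*'

SELECT Name FROM Sample.Person WHERE Name %PATTERN '1U4L1","1U.L'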
Predicates and Logical Operators
Multiple predicates can be associated using the AND and OR logical operators. Multiple predicates can be grouped using parentheses. Because InterSystems IRIS optimizes execution of the WHERE clause using defined indices and other optimizations, the order of evaluation of predicates linked by AND and OR logical operators cannot be predicted. For this reason, the order in which you specify multiple predicates has little or no effect on performance. If strict left-to-right evaluation of predicates is desired, you can use a CASE statement.
The OR logical operator cannot be used to associate a FOR SOME %ELEMENT collection predicate that references a table field with a predicate that references a field in a different table. For example,
WHERE FOR SOME %ELEMENT(t1.FavoriteColors) (%VALUE='purple') OR t2.Age < 65
Because this restriction depends on how the optimizer uses indices, SQL may only enforce this restriction when indices are added to a table. It is strongly suggested that this type of logic be avoided in all queries.
For further details, refer to “Logical Operators” in the “Language Elements” chapter of Using InterSystems SQL.
See Also
“Querying the Database” chapter in Using InterSystems SQL
SQLCODE error messages listed in the InterSystems IRIS Error Reference | https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_WHERE | 2020-09-18T18:09:35 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.intersystems.com |
The SNMP Collector is used to allow CloudWisdom to monitor the performance of SNMP-enabled devices using a set of specified OIDs. The collector can gather data from as many devices as necessary by adding additional configuration sections under the [devices] header.
You should have SNMP set-up and your community string ready prior to activating the SNMP collector.
To enable the collector:
1. Navigate to the collector configuration folder: /opt/netuitive-agent/conf/collectors.
2. In the SNMP collector configuration file, set enabled to True.
3. Add or edit OID-metric pairs in the devices list as necessary for the metrics you want CloudWisdom to collect.
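For orientation, a device section in the collector configuration can look roughly like the following sketch (the host, community string, OID, and metric name are placeholders, not defaults):

enabled = True

[devices]

[[router1]]
host = 192.168.1.1
port = 161
community = public

[[[oids]]]
1.3.6.1.4.1.2021.10.1.3.1 = cpu.load.1min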
Editing

- Setting the snapping tolerance and search radius
- Digitizing an existing layer
- The Advanced Digitizing panel
- The Processing in-place layer modifier
QGIS supports various capabilities for editing OGR, SpatiaLite, PostGIS, MSSQL Spatial and Oracle Spatial vector layers and tables.
Note

The procedure for editing GRASS layers is different - see section Digitizing and editing a GRASS vector layer for more details.

Tip

Concurrent edits

This version of QGIS does not track if somebody else is editing the same feature at the same time as you are. The last person to save their edits wins.
Setting the snapping tolerance and search radius

For an optimal and accurate edit of the vector layer geometries, we need to set an appropriate value of snapping tolerance and search radius for feature vertices.

Snapping tolerance
Nota
By default, only visible features (the features whose style is displayed,
except for layers where the symbology is «No symbols») can be snapped. You
can enable the snapping on invisible features by checking
Enable snapping on invisible features under the
tab.
Truco
Enable snapping by default
You can set snapping to be enabled by default on all new projects in the Snapping Options dialog.tab. You can also set the default snapping mode, tolerance value, and units, which will populate the
Enable snapping on intersections

Another available option is to use snapping on intersection, which allows you to snap to geometry intersections of snapping-enabled layers, even if there are no vertices at the intersections.

Search radius
Topological editing

Apart from snapping options, the Snapping Options… dialog (Project ► Snapping Options) and the Snapping toolbar allow you to enable and disable some topological functionalities.

Enable topological editing

Avoid intersections of new polygons
Note

If the new geometry is totally covered by existing ones, it gets cleared, and QGIS will show an error message.

Warning

Use the Avoid overlap option cautiously

Since this option will cut new overlapping geometries of any polygon layer, you can get unexpected geometries if you forget to uncheck it when no longer needed.
Geometry Checker

A core plugin can help the user to find geometry invalidity. You can find more information on this plugin at Geometry Checker Plugin.

Automatic Tracing
Note
Adjust map scale or snapping settings for an optimal tracing
If there are too many features in map display, tracing is disabled to avoid potentially long tracing structure preparation and large memory overhead. After zooming in or disabling some layers the tracing is enabled again.
Note
Does not add topological points
This tool does not add points to existing polygon geometries even if Topological editing is enabled. If geometry precision is activated on the edited layer, the resulting geometry might not exactly follow an existing geometry.
Digitizing an existing layer

Tip

The Digitizing and Advanced Digitizing toolbars can be enabled or disabled under View ► Toolbars. Using the basic digitizing tools, you can perform the following functions:

Table: Vector layer basic editing toolbar
Tip

Save regularly

Remember to Save Layer Edits regularly. This will also check that your data source can accept all the changes.

Adding Features
Note

Pressing the Delete or Backspace key reverts the last node you added.
When you have finished adding points, right-click anywhere on the map area to confirm you have finished entering the geometry of that feature.
Note
While digitizing line or polygon geometries, you can switch back and forth between the linear Add feature tools and circular string tools to create compound curved geometries.
Tip

Vertex markers

The current version of QGIS supports three kinds of vertex markers: “Semi-transparent circle”, “Cross” and “None”. To change the marker style, choose Options from the Settings menu, click on the Digitizing tab and select the appropriate entry.

Basic operations

Cutting, Copying and Pasting Features
Selected features can be cut, copied and pasted between layers in the same QGIS project, as long as destination layers are set to Toggle editing beforehand.

As an example, we will copy some lakes to a new layer:
1. Load the layer you want to copy from (source layer)
2. Load or create the layer you want to copy to (target layer)
3. Start editing the target layer
4. Make the source layer active by clicking on it in the legend
5. Use the Select Features by area or single click tool to select the feature(s) on the source layer
6. Click on the Copy Features tool
7. Make the destination layer active by clicking on it in the legend
8. Click on the Paste Features tool
9. Stop editing and save the changes

What happens if the source and target layers have different schemas (field names and types are not the same)? QGIS populates the ones that match and ignores the rest. If the attributes being copied to the target layer do not matter to you, it doesn't matter how you design the fields and data types. If you want to make sure everything - the feature and its attributes - gets copied, make sure the schemas match.
Note

Congruency of pasted features

If your source and destination layers use the same projection, then the pasted features will have geometry identical to the source layer. However, if the destination has a different projection, then QGIS cannot guarantee the geometry is identical. This is simply because there are small rounding-off errors involved in the conversion between projections.

Tip

Copy string attributes into another

Deleting Selected Features
The Cut Features tool on the digitizing toolbar can also be used to delete features. This effectively deletes the feature but also places it on a “spatial clipboard”. So, we cut the feature to delete it. We could then use the Paste Features tool to put it back, giving us a one-level undo capability. Cut, copy and paste work on the currently selected features, meaning we can operate on more than one at a time.

Undo and Redo

To use the Undo/Redo history widget, just click to select an operation in the history list. All features will be reverted to the state they were in after the selected operation.

Saving Edited Layers

If the changes cannot be saved (e.g., disk full, or the attributes have values that are out of range), the QGIS in-memory state is preserved. This allows you to adjust your edits and try again.
Tip

Data integrity

It is always a good idea to back up your data source before you start editing. While the authors of QGIS have made every effort to preserve the integrity of your data, we offer no warranty in this regard.

Saving multiple layers at once

This feature allows the digitization of multiple layers. Choose Save for Selected Layers to save all changes you made in multiple layers. You also have the opportunity to Rollback for Selected Layers, so that the digitization may be withdrawn for all selected layers. If you want to stop editing the selected layers, Cancel for Selected Layer(s) is an easy way.

The same functions are available for editing all layers of the project.
Advanced digitizing

Table: Vector layer advanced editing toolbar

Rotate Feature(s)

If you hold Shift before clicking on the map, the rotation will be done in 45 degree steps, which can be modified afterwards in the user input widget.
To abort feature rotation, press the ESC button or click on the Rotate Feature(s) icon.
Simplify Feature

Add Part

Delete Part

Add Ring

Fill Ring
Delete Ring

The Delete Ring tool allows you to delete rings within an existing polygon, by clicking inside the hole. This tool only works with polygon and multi-polygon features. It doesn’t change anything when it is used on the outer ring of the polygon.
Reshape Features

You can reshape line and polygon features using the Reshape Features tool on the toolbar. For lines, it replaces the line part from the first to the last intersection with the original line.
Note

The reshape tool may alter the starting position of a polygon ring or a closed line. So, the point that is represented “twice” will no longer be the same. This may not be a problem for most applications, but it is something to consider.

Offset Curve

To create an offset of a line layer, you must first enter editing mode and activate the Offset Curve tool. Then click on a feature to shift it. Move the mouse and click where wanted, or enter the desired distance in the user input widget. Your changes may then be saved with the Save Layer Edits tool.
QGIS options dialog (Digitizing tab then Curve offset tools section) allows you to configure some parameters like Join style, Quadrant segments, Miter limit.
Split Features

Split Parts

In QGIS it is now possible to split the parts of a multi-part feature so that the number of parts is increased. Just draw a line across the part you want to split, using the Split Parts icon.

Merge Selected Features

When merging feature attributes, you can choose among several functions to combine the source values (see the Statistical Summary panel for the full list of functions).
Note
If the layer has default values or clauses present on fields, these are used as the initial value for the merged feature.
Press OK to apply the modifications. A single (multi)feature is created in the layer, replacing the previously selected ones.
Merge Attributes of Selected Features

Rotate Point Symbols

Tip

If you hold down the Ctrl key, rotation will be done in 15 degree steps.

Offset Point Symbols
Note

With the Advanced Digitizing panel, this tool can become an “Add circle from center and radius” tool by setting and locking the distance value after the first click.

Add circle from 3 tangents: Draws a circle that is tangential to three segments. Note that you must activate snapping to segments (see Setting the snapping tolerance and search radius).
The Advanced Digitizing panel

Note

The tools are not enabled if the map view is in geographic coordinates.

Concepts
Snapping Options

Click the button to set the Advanced Digitizing Tool snapping settings.

You can make the tool snap to common angles. The options are:

Do not snap to common angles
Snap to 30º angles
Snap to 45º angles
Snap to 90º angles
You can also control the snapping to features. The options are:
Do not snap to vertices or segments
Snap according to project configuration
Snap to all layers
Keyboard shortcuts

Construction mode
Crate defile
::defile
Helper proc-macro to "ungroup" a captured metavariable (thus potentially breaking their hygiene, hence the name).
This is useful when using helper macro_rules! macros that need to parse using some special rule (e.g. :expr, :path, :pat), but that later want to further inspect the captured variable.
This is not something a macro_rules! macro can do on its own, since such so-called metavariables are seen as an opaque single token (:tt): the sequence of tokens captured in the metavariable has been grouped (≈ parenthesized), but using invisible parentheses.
Example
macro_rules! check_expr {
    ( 42 ) => ({
        println!("Got `42`!");
    });

    ( $($tt:tt)* ) => ({
        println!("Did not get `42`. Instead, got the following tokens:\n[");
        $(
            println!("    `{}`,", stringify!($tt));
        )*
        println!("]");
    });
}

macro_rules! check_all_exprs {(
    $(
        $expr:expr // use :expr to be able to use `,` as a delimiter
    ),* $(,)?
) => (
    fn main ()
    {
        $(
            println!("vvvvvvvvvvvvv");
            check_expr!($expr);
            println!("^^^^^^^^^^^^^\n");
        )*
    }
)}

check_all_exprs!(42, 1 + 1);
outputs:
vvvvvvvvvvvvv
Did not get `42`. Instead, got the following tokens:
[
    `42`,
]
^^^^^^^^^^^^^

vvvvvvvvvvvvv
Did not get `42`. Instead, got the following tokens:
[
    `1 + 1`,
]
^^^^^^^^^^^^^
That is:
the token `42` did not match the literal `42`!

That being said, the expression `1 + 1` is viewed as a single indivisible token too.

Indeed, that's kind of the point of this behavior: if we do `2 * $expr` where `$expr` captures `1 + 1`, we expect the result to be `2 * (1 + 1)` instead of `2 * 1 + 1`!
But by doing:
macro_rules! check_all_exprs {(
    $(
        $expr:expr // use :expr to be able to use `,` as a delimiter
    ),* $(,)?
-) => (
+) => (::defile::item! {
    fn main ()
    {
        $(
            println!("vvvvvvvvvvvvv");
            check_expr!(@$expr); // put `@` before a metavariable to ungroup it
            println!("^^^^^^^^^^^^^\n");
        )*
    }
-)}
+})}
we do get:
vvvvvvvvvvvvv
Got `42`!
^^^^^^^^^^^^^

vvvvvvvvvvvvv
Did not get `42`. Instead, got the following tokens:
[
    `1`,
    `+`,
    `1`,
]
^^^^^^^^^^^^^
`42` has matched the literal 42, but be aware that this has also resulted in `1 + 1` getting split. So, if you were to defile expressions such as `2 * @$expr`, you may not obtain the expected result! Use with caution.
Caveats
Currently (1.45.0), there are several bugs regarding the interaction between macro_rules! macros and procedural macros, which may lead to defile! and any other helper procedural macro splitting groups that are not @-prefixed.

Hopefully those bugs get solved, making the actual implementation of defile! meaningful.
This page lists all network device products that have product life cycle date patterns included in the Extended Data Pack 2017-September-1. The list comprises data for:
148.
Using TLS with UCS
Purpose: To set up UCS to use TLS.
Overview
This page describes setting up UCS to use TLS for secure connections. The procedure can also be used with E-mail Server, a component of Genesys eServices. For clients of UCS, see Using TLS with UCS Clients. This page refers to keytool, which is a key and certificate management utility included in JDK or JRE installations. For instance, when you install JDK, keytool is placed in the \bin directory.
Procedure
- Generate a certificate, in any of the following ways:
- Use Windows Certificate Services, as described in the "Certificate Generation and Installation" chapter of the Genesys 8.1 Security Deployment Guide.
- Use keytool with the—genkey parameter; for example:
keytool -genkey -v -alias hostname.example.com -dname "CN=hostname.example.com,OU=IT,O=ourcompany,C=FR" -keypass theKeyPassword -keystore certificate.jks -storepass theKeystorePassword -keyalg "RSA" -sigalg "SHA1withRSA" -keysize 2048 -validity 3650
- Use any other tool, such as openSSL.
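For instance, a roughly equivalent self-signed certificate can be produced with openSSL; this command is illustrative only (the file names are placeholders) and mirrors the DN used in the keytool example above:

openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 -nodes -keyout ucs_key.pem -out ucs_cert.pem -subj "/CN=hostname.example.com/OU=IT/O=ourcompany/C=FR"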
- In the Genesys configuration environment, assign the certificate to the Host on which UCS is running, as described in the "Genesys TLS Configuration" chapter of the Genesys 8.1 Security Deployment Guide.
- If you generated a Windows certificate, you must use Microsoft Management Console to make the certificate usable by UCS.
- Locate the certificate and copy it to a selected location on UCS’s host.
- Set configuration options in your UCS Application object. Starting with release 8.1.3, the TLS options are configured as described in the Genesys 8.1 Security Deployment Guide.
Next Steps
Optionally, configure the clients of UCS to use TLS, as described on the Using TLS with UCS Clients page.
8.1.0 Maintenance Release
The 8.1.0 maintenance release of October 2011 adds the possibility of performing the following TLS-related configuration on the Server Info tab (Configuration Manager) or section (Genesys Administrator):
- Configure multiple ports
- Set Secured = Yes, in which case UCS starts in TLS mode
- Specify the connection protocol as ESP or HTTP
Note these limitations:
- Only one certificate per protocol can be configured for one UCS.
- There must be a default port that uses ESP and is associated with a valid certificate.
- This is the port marked default on the Server Info tab (Configuration Manager) or the Server Info section of the Configuration tab (Genesys Administrator).
- You can leave its connection protocol unspecified, in which case it uses ESP. What you must not do is specify any other protocol for it.
- If the server is not able to start listening on this port, then an exception is raised and the server exits.
Before you start¶
The motto of dgplug is “Learn yourself and teach others”. The summer training effort is a huge part of this. We try to learn together and help each other. Before you jump into this, there are a few small things one should know.
But even before you read the rest of the document to learn more, first please watch this talk from the ever-amazing Ian Coldwater. Take your time, and listen to them.
How to ask for help?¶
There will be many times in a day when you will need some help, maybe to solve a problem, or to learn a new thing. Before you go and ask for help, you should first search for a solution. Most of the time, you will find the answer yourself. We suggest everyone use the DuckDuckGo search engine. This particular search engine does not track you, and is focused on protecting your privacy.
If you open the site for the first time, it will show you ways you can enable it as the default search engine in your browser. That should be the first thing to do. Go ahead, and enable DuckDuckGo as the default search engine.
To know more why should you use DuckDuckGo, read their privacy page.
There is no magic mirror!¶
If you just come online and ask for help by saying “I have an error.”, no one will be able to help you, because we need to see the exact error you can see on your computer. The best way to ask for help is by providing as much information as possible along with the question. If the error message is more than 2 lines, then one should use a pastebin server (like the Fedora modernpaste service), paste the whole error message there, and then provide only the URL to the pastebin in the question.
Learn touch typing¶
Touch typing is one of the most important things to learn before you jump into programming or commands to control a server. Typing errors are the most common cause behind the errors we get in computers. Typing efficiently and fast will help you throughout your life.
Here is a blog post to find out more about the topic.
History of Free and Open Source Software¶
Read this post to learn about the history, then also read this log to hear from Jim Blandy about his experience. The Free Software movement is the reason why we have this community standing together. So, make sure to go through the history first.
Download Tor browser¶
The next important step is to download the Tor Browser. To start using it, follow these steps. You may have a lot of questions about why we should use Tor; throughout the summer training we will have many discussions on this. But to understand the basics, have a look at this page.
Please read the Tor Project chapter to learn in details.
Watch The Internet’s Own Boy¶
Take 2 hours of time and watch this documentary.
Now spend some time and think about it. | https://summertraining.readthedocs.io/en/latest/beforestart.html | 2020-09-18T17:12:17 | CC-MAIN-2020-40 | 1600400188049.8 | [] | summertraining.readthedocs.io |
OAuth 2.0 - Device flow
Overview
The device flow is designed for devices that either do not have access to a browser or have limited input capabilities. This flow allows users to share specific data with an application while keeping their usernames, passwords, and other information private.
This grant type can eliminate the need for the client to store the user credentials for future use, by exchanging the credentials with a long-lived access token or refresh token.
How to implement the device-flow
Register the Client Id at Identify
At the Identify connection list, you can create an OAuth2.0 protocol that has the following settings:
- Client ID: Specify the unique ID of the application. Please note that this value is case-sensitive.
- Client secret: Specify the Client secret of the application. Please note that this value is case-sensitive.
- Redirect URL: Specify the redirect URL after successful authentication.
- Application name: Specify the name of the application
- Set the audience field of tokens which are issued for the application: Specify the Identifier URL which issues the token.
- Allow client credentials flow: This setting must be True.
- Code life time (minutes): Specifies the input code lifetime. Its default value is 60.
- Number of user code group: Specifies the number of 4-character groups of a code. Its default value is 2.
Here is the screenshot of a sample connection:
Ask for a Token
The flow has the following steps:
- The client requests access from the authorization server and includes its client identifier in the request.
- The authorization server issues a verification code and an end-user code, and provides the end-user verification URI.
- The client instructs the end-user to use his or her user-agent elsewhere (mostly a browser on a laptop or a mobile phone) and visit the provided end-user verification URI. The client provides the end-user with the end-user code to enter to grant access.
- The authorization server authenticates the end-user and prompts the end-user to grant the client's access request. If the user hasn't logged in yet, he or she will need to log in.
- While the end-user authorizes (or denies) the client's request, the client repeatedly polls the authorization server to find out if the end-user has completed the end-user authorization step.
- Assuming the end-user has granted access, the authorization server validates the verification code provided by the client and responds with the access token.
Step 1: Request device and user codes
Perform a POST operation to the device_authorization endpoint:
URI parameters: client_id (the Client ID registered above) and, optionally, scope.
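For illustration, the request could look like this (the host and path are placeholders for your Identify runtime endpoint):

POST https://<identify-host>/runtime/oauth2/device_authorization
Content-Type: application/x-www-form-urlencoded

client_id=<your-client-id>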
Step 2: Handle the authorization server response
The authorization server returns a JSON object to your device.
Its content includes:
- device_code: The device verification code.
- user_code: The end-user verification code.
- verification_uri: The end-user verification URI on the authorization server
Step 3: User login and consent
- The user navigates to the OAuth2/devicepairing endpoint, which is returned in the verification_uri parameter above:
- The user enters the verification code.
- If the code is correct, and if the user hasn't logged yet, the user will need to log in.
- The user approves consent.
Step 4: Poll the authorization server
While the end-user authorizes (or denies) the client's request, the client repeatedly polls the authorization server by sending POST requests to the token endpoint.
URI parameters: grant_type (set to urn:ietf:params:oauth:grant-type:device_code), device_code (the device verification code obtained in step 2), and client_id.
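An illustrative polling request (placeholders as in step 1):

POST https://<identify-host>/runtime/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:device_code&device_code=<device_code>&client_id=<your-client-id>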
Step 5: Get the access token
If the user has approved the grant, the token endpoint responds with a success response
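A typical success payload has this shape (the values are illustrative):

{
  "access_token": "eyJhbGciOiJSUzI1NiIs...",
  "token_type": "Bearer",
  "expires_in": 3600
}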
If you decode the access_token you will see that it contains the following claims: | http://docs.safewhere.com/identify-oauth-2-0-device-flow/ | 2020-09-18T16:30:33 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.safewhere.com |
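For example, the decoded token body might carry standard JWT claims such as the following (the values are illustrative, and the exact claim set depends on your connection settings):

{
  "iss": "https://<identify-host>",
  "aud": "<audience-identifier-url>",
  "sub": "<user-id>",
  "iat": 1600449400,
  "exp": 1600453000
}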
Administration
- Start or stop services
- Upgrade Parse Server
- Create and restore application backups
- Force HTTPS for Parse Server requests
- Authenticate requests against the Parse API
- Upload files using SFTP
- Enable HTTP authentication for the Parse Dashboard
- Modify the default MongoDB root password
- Create and restore MongoDB backups
- Configure and use Gonit
- Connect to Redis from a different machine
- Secure Redis
- Create a Redis cluster
Form to view all data about an approval request
The AP:Detail form holds all data about an approval request. You can use the AP:Detail form to determine the status of a request, and to see a history of activities on the request for any approval process. If an approver changes the status of an approval request to Hold, the AP:Detail form continues to show the status as Pending. In addition to the fields described in this section, the AP:Detail form also includes hidden Currency, Date, and Time fields to store temporary results during workflow. For example, Currency Field 1 and Currency Field 2 are temporary fields of the currency type.
AP:Detail form
Fields on the AP:Detail form | https://docs.bmc.com/docs/ars91/en/form-to-view-all-data-about-an-approval-request-609073951.html | 2020-09-18T18:15:07 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.bmc.com |
Crate flight

Version 1.0.0

Modules:
- flight_service_client: Generated client implementations.
- flight_service_server: Generated server implementations.

Structs:
- Action: An opaque action specific for the service.
- ActionType: Describes an available action, including both the name used for execution along with a short description of the purpose of the action.
- BasicAuth: A message for doing simple auth.
- Criteria: A service specific expression that can be used to return a limited set of available Arrow Flight streams.
- FlightData: A batch of Arrow data as part of a stream of batches.
- FlightDescriptor: The name or tag for a Flight. May be used as a way to retrieve or generate a flight or be used to expose a set of previously defined flights.
- FlightEndpoint: A particular stream or split associated with a flight.
- FlightInfo: The access coordinates for retrieval of a dataset. With a FlightInfo, a consumer is able to determine how to retrieve a dataset.
- HandshakeRequest: The request that a client provides to a server on handshake.
- Location: A location where a Flight service will accept retrieval of a particular stream given a ticket.
- Result: An opaque result returned after executing an action.
- SchemaResult: Wrap the result of a getSchema call.
- Ticket: An opaque identifier that the service can use to retrieve a particular portion of a stream.
Overview of the Service Analyzer in ITSI
The Service Analyzer is the home page for Splunk IT Service Intelligence (ITSI) and serves as your starting point for monitoring your IT operations. The Service Analyzer enables you to see the health of your IT environment at a glance.
There are two service analyzer views: the tile view and the tree view. You can drill down to more detailed information from each view to investigate services with poor health scores.
To access the Service Analyzer, click Service Analyzer > Default Service Analyzer from the ITSI main menu. The tile view is the default view, but whichever view you last saved loads the next time you open the Service Analyzer.
Service Analyzer tile view
The tile view is the default service analyzer view. It displays the health scores of your services and their associated KPIs. For more information about services and KPIs, see Overview of service insights in ITSI in the Service Insights manual.
For more information about the tile view, see Use the Service Analyzer tile view.
Service Analyzer tree view
The tree view displays your services graphically to provide a map of your services showing the relationships between them. The nodes are color coded to indicate severity level. You can click on a node to get more detailed information.
For more information about the tree view, see Use the Service Analyzer Tree view.
This documentation applies to the following versions of Splunk® IT Service Intelligence: 4.5.0 Cloud only, 4.5.1 Cloud only, 4.6.0 Cloud only, 4.6.1 Cloud only
The CSRF middleware and template tag provide easy-to-use protection against Cross Site Request Forgeries.
To take advantage of CSRF protection in your views, follow these steps:
The CSRF middleware is activated by default in the MIDDLEWARE setting. The middleware verifies the Origin header, if provided by the browser, against the current host and the CSRF_TRUSTED_ORIGINS setting. This provides protection against cross-subdomain attacks.
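In any template that uses a POST form targeting an internal URL, the next step is to include the token with the csrf_token tag; a minimal example:

<form method="post">{% csrf_token %}
...
</form>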
In addition, for HTTPS requests, if the Origin header isn’t provided, CsrfViewMiddleware performs strict referer checking.

Changed in Django 4.0: Origin checking was added, as described above.
a9s PostgreSQL - Overview
This section describes the internals of a9s PostgreSQL.
Credentials
The a9s PostgreSQL service instance has a special user called cfuser. Every user (e.g., created with cf bind-service or cf create-service-key) inherits its privileges and capabilities from the cfuser, which means that every user has access to two roles: its own and the cfuser. The default role used when connecting is the cfuser.

All objects in the default database must belong to the cfuser. Otherwise, other users are not able to access them. When changing the user role using SET ROLE or ALTER ROLE, one must be careful about the ownership and accessibility of tables, sequences, views, and other objects. When deleting a credential, all objects belonging to the user are deleted and the ownership is transferred to cfuser.
cfuser and the users who inherits from
cfuser
via custom parameters during instance creation (
cf create-service and
cf update-service)
and the user that inherits from
cfuser during credentials creation (
cf bind-service or
cf create-service-key), check the documentation to know how.
Replication Slots Cleanup
A PostgreSQL user configured with the REPLICATION role privilege can create replication slots to replicate directly from a node.
When an application is connected and streaming using a replication slot, this slot is marked as active, and every WAL file that has been replicated by all replication slots is recycled. When a replication slot is marked as inactive, the WAL files are kept and not recycled until all slots have streamed the changes from those files. This means that when a slot is inactive, WAL files can consume all the available storage on the persistent disk and break the cluster.
a9s PostgreSQL ships the Replication Slot Reaper routine, which periodically drops replication slots that have been inactive for too long, or that are inactive while persistent disk usage has hit a threshold. To learn these configuration values, consult the platform operator.
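For reference, inactive slots can also be inspected and removed manually with standard PostgreSQL statements (the slot name is a placeholder):

SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
SELECT pg_drop_replication_slot('my_inactive_slot');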
8.5.200.11
Genesys Knowledge Center Plug-in for Workspace Desktop Edition Release Notes
What's New
This release contains the following new features and enhancements:
- The Knowledge Center plugin now automatically pre-selects the search language to the language specified in the value of the "gkc.language" key found in the attached data of the interaction. If the language is not set, it selects the last language the agent used.
- When searching the Knowledge Base, documents in the returned results window now indicate whether they have been recently created or updated.
Resolved Issues
This release contains the following resolved issues:
This release corrects an issue where the Copy Content button would sometimes disappear from the opened knowledge document. (GK-3062)
The Knowledge Center plugin now notifies the user if it cannot contact the Knowledge Center Server. (GK-3061)
This release corrects an issue where the Response tab would wrongly display the Knowledge tab information during an interaction. (GK-2377)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.200.11.
Security Overview
This section provides an introduction to the security features of Hazelcast. These features allow you to perform security activities, such as intercepting socket connections and remote operations executed by the clients, encrypting the communications between the members at socket level and using SSL socket communication. All of the security features explained in this chapter are the features of Hazelcast Enterprise edition.
While Hazelcast supports non-secured cluster members and clients, it is recommended to secure your deployments. A cluster without security may face with:
unauthorized cluster members joining or accessing it
unwanted or malicious clients accessing it
unauthorized use (access or creation) of cluster resources and data tampering by the malicious cluster members and clients.
And when using Hazelcast’s Jet streaming engine, notice the following security considerations:
Hazelcast jobs allow you to run your own custom code, and this code must be available on the cluster classpath or deployed to the cluster. This means any client can deploy custom code to the cluster, so make sure each client is authorized to access the cluster.
The Jet engine bypasses the access control layer when accessing the data structures in the same cluster.
The connectors of the Jet engine include third-party code, which may increase the attack surface.
SQL, which is used by the Jet engine, includes file connectors and it can read files on the cluster filesystem.
Due to the above considerations, Hazelcast's streaming engine is disabled by default for users who mostly use Hazelcast's storage engine (formerly known as Hazelcast IMDG) with the JAR distribution (see the Security Defaults section for the security considerations of the different Hazelcast distributions). The Enabling the Jet Engine section shows how you can start using the Jet engine; relatedly, see the Security Hardening Recommendations section to learn the best practices for securing your cluster.
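As a quick sketch, enabling the Jet engine in the declarative member configuration looks like this (YAML shown; whether to also allow resource upload depends on your security posture):

```yaml
hazelcast:
  jet:
    # Enables the streaming engine, which is disabled by default
    enabled: true
    # Allows clients to upload job resources/custom code; leave false
    # unless every client is trusted and authorized
    resource-upload-enabled: false
```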
Below, you can see the brief descriptions of Hazelcast’s security features. You can evaluate them and decide which ones you want to use based on your security concerns and requirements.
For data privacy:
TLS/SSL communication for members and clients for all socket-level communication; uses key stores and trust stores to encrypt communications across a Hazelcast cluster, as well as between the clusters replicated over WAN. You can also configure cipher suites to secure the network communication.
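An illustrative member-side TLS configuration might look like the following (an Enterprise feature; the keystore paths and passwords are placeholders):

```yaml
hazelcast:
  network:
    ssl:
      enabled: true
      factory-class-name: com.hazelcast.nio.ssl.BasicSSLContextFactory
      properties:
        keyStore: /opt/hazelcast/keystore.jks      # placeholder path
        keyStorePassword: changeit                 # placeholder secret
        trustStore: /opt/hazelcast/truststore.jks  # placeholder path
        trustStorePassword: changeit               # placeholder secret
```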
For authentication:
JAAS-based authentication between the cluster members and for pluggable identity verifications; works with identity, role and endpoint principal implementations.
Socket Interceptor to intercept socket connections before a new member or client joins the cluster; you can perform identity checking using custom authentication protocols.
TLS Mutual Authentication to ensure each TLS-communicating side proves its identity to the other.
Security Realms for authentication and identity configurations.
For authorization:
JAAS-based authorization using permission policies for role-based security.
Security Interceptor that provides a callback point for every operation executed against the cluster.
See also the Security Hardening Recommendations section to learn more about the best security practices.
# Positioning algorithms
The position of an object can be determined in many different ways. It can be based on the Time Difference Of Arrival (TDOA) of the signals, or by calculating the distance between the tags and the anchors via a method called Two Way Ranging (TWR). Measuring the Received Signal Strength (RSS) also gives you an idea of the distance between the sender and receiver. When the angle of propagation of the signal is measured, Angle of Arrival (AOA) can be applied. For each of the methods, different implementations and variations exist. All algorithms have their pros and cons, which we will discuss below.
The most relevant algorithms for us are TWR and TDOA, that's why they will be discussed in more detail. TWR is more accurate than TDOA, but it is more CPU hungry and needs more signals in the air. By default, the engine runs on the PC and the tag itself is not aware of its position (TDOA1 and TWR1). For both schemes, there is however an alternative in which the positions are calculated on the tag itself (TDOA2 and TWR2).
# TDOA
Time Differences (TDs) yield hyperbolas. The position of the tag is calculated by computing the intersection of the hyperbolas. Two anchors yield one TD (as opposed to only one anchor for one distance). This means a minimum of 3 anchors is necessary to calculate a 2D position and 4 anchors for a 3D position. The clocks of the nodes need to be synchronized: the better the synchronization, the higher the accuracy of the positions. Tags don't need to be active very often, so this method is power-consumption friendly. We differentiate between 2 schemes (TDOA1 and TDOA2) based on where the time differences are measured.
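Formally, each anchor pair constrains the tag position to one hyperbola (a sketch, with x the tag position, a_i and a_j the anchor positions, c the propagation speed, and Δt_ij the measured time difference):

$$
\left|\,\lVert x - a_i \rVert - \lVert x - a_j \rVert\,\right| = c \cdot \Delta t_{ij}
$$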
# TDOA1
The time differences are calculated on the anchors and offloaded to the system. The tags only need to 'blink' once for a position to be calculated, which means the tag power consumption is particularly low in this scheme.
# TDOA2
The time differences are calculated on the tags, and thus the position can also be calculated on the tag. If this position doesn't need to be offloaded to the system, an infinite amount of tags can be used in this scheme. This scheme is similar to how GNSS/GPS works.
# TWR
Two Way Ranging is a Time of Flight (TOF) method. The time of the propagation of the signal is measured between the transmitter and the receiver. A signal is sent from a tag to an anchor and back (hence the 'two way'). It needs to be sent back because the clocks of the anchors and tags are not synchronized. This means that the timestamps taken at the tag are on a different timebase than those taken on the anchor. With the 4 timestamps (2x transmission and 2x reception) we can accurately calculate how long the signal traveled back and forth between the nodes. Dividing that time by 2 yields the TOF and thus the distance. The airtime of the nodes is higher than in TDOA, so fewer positions can be calculated.
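As an illustrative formulation of single-sided TWR (the timestamp names are assumptions: T1/T4 are the tag's transmit/receive times, T2/T3 the anchor's receive/transmit times):

$$
t_{TOF} = \frac{(T_4 - T_1) - (T_3 - T_2)}{2}, \qquad d = c \cdot t_{TOF}
$$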
# Conclusion
Depending on your use case, any algorithm could be the best for you. Are you mostly looking for accuracy? Then go for TWR. Is energy consumption the most relevant? Choose TDOA. Angle of Arrival can be useful if you don't want a lot of hardware and accuracy is not that important.
Overview
Cloudlock's Enterprise API enables you to:
- Integrate Cloudlock's detection and response into your security workflows
- Keep your on-premise or cloud-based incident management systems in sync
- Interact with incidents
The Cloudlock Enterprise API is a REST API with JSON responses. For best practice and future consistency, all requests should send the Accept header with the application/json value and the Content-Type header with the same value.
URL for Cloudlock APIs
Please contact [email protected] for the URL you should use to make calls to the Cloudlock APIs (e.g. https://<provided-by-Cloudlock-support>.cloudlock.com).
If you have a support issue, please contact: [email protected]
Example URL
The calls in this document to callapi.cloudlock.com are for example only.
Authentication
All endpoints require authentication unless stated otherwise.
- To interact with the API, you need an OAuth2 access token. Generate an access token in the Cloudlock application by selecting the Authentication & API tab in the Settings page.
- Click Generate to create your own token.
When making a request to any resource include the Authorization header with a value of Bearer followed by a single space and the token.
All API requests must be made over HTTPS. Calls made over plain HTTP will be redirected to HTTPS.
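A minimal authenticated request might look like the following (the host is the documentation's example host; the path and resource name are illustrative assumptions):

```bash
curl -H "Authorization: Bearer $CLOUDLOCK_API_TOKEN" \
     -H "Accept: application/json" \
     "https://callapi.cloudlock.com/api/v2/incidents"
```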
Pagination
All list-based endpoints support pagination.
Control pagination parameters using:
| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| offset | integer | Indicates the item number to start the result set from | 0 |
| limit | integer | Determines the quantity of results to return | 20 (max 100) |
Filtering
Filtering a collection is achieved by adding a field (e.g. "incidents") to the querystring, along with the filter value. For example:
You can use multiple filters by separating them with the & operator. For example:
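Illustrative filter requests (the field names and values below are assumptions chosen for the example, not guaranteed parts of the schema):

```
GET /api/v2/incidents?severity=CRITICAL
GET /api/v2/incidents?severity=CRITICAL&status=NEW
```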
Sorting
Use the order parameter along with the sort-by field to indicate the order for your request list result. For multiple sort orders, use a comma-delimited list of sort parameters. The default sort direction is ascending. Use a leading "-" character to denote descending.
For example:
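Illustrative sort requests (the field name is an assumption):

```
GET /api/v2/incidents?order=created_at    # ascending (default)
GET /api/v2/incidents?order=-created_at   # descending
```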
Options Help
To get the full list of options available for a field, use the field name as the ID when querying an endpoint.
Rate Limit
Cloudlock's API has both a rate limit and a quota based upon your license.
Exceeding the rate limit results in a 429 error.
Errors
Cloudlock uses standard HTTP response codes to indicate success or failure of an API request. Codes in the 2xx range indicate success, while codes in the 4xx range indicate an error and include an error response object:
Example Error Response:
Response 400 (application/json)

```json
{
  "status": "error",
  "message": "The server cannot process the request due to a syntax error",
  "additional_info": null
}
```
Confidentiality and Rights
© 2019 Cisco and/or its affiliates. All rights reserved. Cloudlock is a registered trademark of Cisco. All other trademarks or other third party brand or product names included herein are the trademarks or registered trademarks of their respective companies or organizations and are used only for identification or explanation. Cisco Cloudlock and related documentation are protected by contract law, intellectual property laws and international treaties, and are authorized for use only by customers for internal purposes in accordance with the applicable subscription agreement or terms of service. This documentation may not be distributed to third parties.
This documentation is provided “as is” and all express or implied conditions, representations and warranties, including implied warranty of merchantability, fitness for a particular purpose or non-infringement are hereby disclaimed, except to the extent that such disclaimers are held to be legally invalid.
The information contained in this documentation is subject to change without notice. Cisco recommends you periodically check this site to ensure you are utilizing the most current version of this documentation.
Transfers between Stock Locations can most easily be recorded in the App using a zero-value sales order. You simply create one shipment/despatch with a zero order quantity and zero amount that removes quantities from one location, and another shipment/despatch with a zero order quantity and zero amount that returns them to the other location. The net effect is that you still have the same inventory levels, but they are at different locations.
If you wanted to, for example, transfer 100 cases of wine from the warehouse (WH) to the Storeroom (SR), you would take the following steps:
First create a new Sales Order. Select ‘Stock Transfers’ from the Customer drop down list. In the ‘Items’ area, select the Stock Item, set the Unit price to ‘0.00’ and the quantity to ‘0’ and Save the Sales Order.
Then select 'Pick for Despatch' from the 'Despatch' button drop down menu. You will then receive the following warning message:
Ignore this warning and select the ‘create despatch note’ option.
Select the source location 'WH' from the 'Despatched From Location' drop down menu and fill in the Quantity Filled to reflect the amount leaving this location. Click 'Update'. This completes the "removing" half of our movement; now we need to "return" the stock to the other location.
Click on the Sales Order link on the Despatch Note (see arrow on above image) to return to the Sales Order. Select ‘Pick for Despatch’ again.
On this new despatch note, select the destination location (SR) (from the ‘Despatched From Location’ drop down menu. Edit the Quantity Filled section in the ‘Items’ area so that it reflects the amount arriving at this location. This will be a negative number to reflect the fact that the product is being received rather than despatched. Click ‘Update and Finalise’.
These two despatches will result in the 100 cases being removed from inventory at the source location (WH) and added back in at the destination location (SR).
- Locate your New Relic account_id and license_key; both are needed for the newrelic.ini config file in the Lair.
- Install the agent in the Lair: pip install newrelic
- Add newrelic to the Lair's requirements.txt file. Open the code editor by clicking on the Show Code icon in the bottom left pane. Select the requirements.txt file and add the dependency at the bottom of the file. If your Lair does not have a requirements.txt file, you can create one directly in the code editor.
- Start your app through the agent, replacing $YOUR_COMMAND_OPTIONS with your app's command line, for example, python app.py.
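Assembled from the steps above, the launch command typically looks like this (the config file name and app command are assumptions):

```bash
NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program python app.py
```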
Combining Data Sets
SNData allows users to combine individual data releases into a single
CombinedDataset object. The resulting object provides the same general user
interface as a single data access module but provides access to data from
multiple surveys / data releases.
Creating a Combined Data Set
To create a combined data set, import the data access classes for each of the
data releases you want to join and pass them to the
CombinedDataset
object. For demonstration purposes we combine data from the third data
release of the Carnegie Supernova Project and the three year cosmology release
of the Dark Energy Survey:
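A minimal sketch of that combination, following the package's import conventions (hedged; exact class and method names should be checked against the installed SNData version):

```python
from sndata import CombinedDataset, csp, des

# Combine CSP DR3 with the DES 3-year cosmology release
combined_data = CombinedDataset(csp.DR3(), des.SN3YR())
combined_data.download_module_data()  # fetch data for both releases
```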
The resulting object provides the same user interface as the rest of the SNData package, including having the same method names:
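For instance (illustrative calls on the object created above):

```python
combined_data.get_available_ids()
combined_data.get_data_for_id(('2007S', 'DR3', 'CSP'))
```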
Important
The format of object and table Id’s for
CombinedDataset
objects is slightly different than for a single data release. Please
keep reading.
Unlike the object and table Id’s for a single data release, the default Id’s
for a
CombinedDataset are tuples instead of strings. Each tuple contains
three elements including (in order) the individual object identifier, data
release name, and survey name. For example, the ID value for supernova ‘2007S’
from CSP Data Release 3 (DR3) would be
('2007S', 'DR3', 'CSP').
By specifying object Id's in this way, it is ensured that objects in combined data releases always have unique identifiers. However, in the case where the object Id's from two data releases are already unique (as is the case when combining ``csp.DR3`` and ``des.SN3YR``), ``CombinedDataset`` objects are smart enough to mimic the behavior of a normal / single data release and can take object Id's as strings. For example:
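A hedged equivalent of the string-based access:

```python
# Works here because '2007S' is unique across the combined releases
combined_data.get_data_for_id('2007S')
```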
Joining Object Id's
It is possible for two different photometric surveys to observe the same astronomical object. In this case, object Id’s from different surveys can be “joined” together so that when requesting data for a given object Id, data is returned for all Id’s that have been joined together. Accomplishing this is as simple as:
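A sketch of such a join (both object IDs below are hypothetical):

```python
combined_data.join_ids(('2007S', 'DR3', 'CSP'), ('DES_123', 'SN3YR', 'DES'))
```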
When retrieving data for a joined ID, the returned data table is simply the collective data tables for each joined ID stacked vertically.
It is worth noting that ``CombinedDataset`` objects are aware of successive join actions. This means that the following two examples are functionally equivalent.
Viewing Jamf Pro Environments
Overview
There are times when you will want to review which of your Jamf Pro environments AppsAnywhere is linked to. Luckily, doing so is really simple. In this article we'll take a look at how you can see which environments AppsAnywhere is currently linked to.
Viewing the environments
Navigate to the Manage Jamf Pro Environments page:
Log into AppsAnywhere as an admin user
Click on Return to Admin to access the AppsAnywhere admin portal
On the sidebar menu, go to Connectors > Jamf Pro Server Environments
Here you will see a full list of the Jamf Pro environments AppsAnywhere is linked to.
If you have a large number of environments, you can use the live search to quickly find the entry you are looking for.
From this page you can:
Click the Edit button next to an environment to modify the details of that environment - See Editing a Jamf Pro Environment
Click the Delete button next to an environment to delete that environment - See Deleting a Jamf Pro Environment
Click the + Add environment button in the top right to create a new environment - See Adding a Jamf Pro Environment
Editing the details of, or deleting, a Jamf Pro environment will affect users' ability to launch any resources provided from that environment. Be sure to test everything is working as you expect after you make any changes!
Parasoft SOAtest has two capabilities that help you make bulk updates to your tests when the services that they are testing change:
These two capabilities are typically used in sync to ensure rapid, accurate updating. This lesson will show you how to use both capabilities to update tests.
Start by creating a set of tests for an older version of a bookstore service. We’ll later update these tests using the Change Advisor and Search and Replace capabilities.
To create tests vs. an old version of the bookstore service:
Create a new project named Change Advisor Lesson:
- Enter Change Advisor Lesson under Project Name.
- Enter Change Advisor Lesson under File name, then click Next, and provide the WSDL URL.
- Use the lesson's sample values (10, 9.99, and 5) when configuring the test inputs.
To perform change impact analysis on these test assets:
- Notice that productInfo was added and price changed.
- Enter Bookstore as the service name.
- Fill Old Version with 1.0 and Current Version with 2.0.
- Enter LessonTemplate as the file name, then leave /Change Advisor Lesson as the location.
- Review the suggested match between getItemById and getItemByIdentifier. It appears that this operation changed names, but that the meaning of the operation is the same. This seems like a good match.
- Accept the match between getItemById and getItemByIdentifier by right-clicking getItemById and choosing "Mark Match getItemById -> getItemByIdentifier as Reviewed". This will make the match turn green.
- The addNewItemToInventory operation is marked with a red 'X', indicating that there are differences in the schema for that operation that need to be reviewed.
- Change Advisor suggests matches for the price and genre elements, but indicates that they need review. The new schema also contains an element named amount that did not appear in the original version.
- Indicate that the suggested match for the price and genre elements is incorrect by selecting the price element and clicking Disconnect.
- It turns out that the price element was renamed to amount, and that genre is the new element in the WSDL.
- Indicate that the price element was renamed to amount by selecting price and amount, then clicking the Connect button.
- Configure the new genre element as follows: enter Literature in the input field, then click OK.
- Notice that genre now has a green '?' and that all matches and unmatched nodes are now green. This indicates that we're finished reviewing and configuring this operation.
To update the tests impacted by this service change:
- Notice that the value from the price element has been transferred to the amount element, as defined in our change template.
- Notice that Literature appears there, as defined in our change template.

Change Advisor is applicable for tests that are configured accordingly; for other tests, use Search and Replace:
- The price element no longer appears in the response XML (remember that it was renamed to amount and now appears under the productInfo element).
- Enter book/productInfo/amount in the With field, then click OK.
- view_name
- The name of the view whose most recent SQL create text is to be reported. There is an upper limit of 12,500 characters that SHOW VIEW can display.
A SHOW VIEW IN XML request does not report all of the definition constructs for views, so it is not possible to decompose and reconstruct their definitions from their reported XML format definitions.
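Illustrative usage (the database and view names are hypothetical):

```sql
SHOW VIEW sales_db.monthly_totals;

SHOW IN XML VIEW sales_db.monthly_totals;
```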
Despite this, the XML text for view definitions is helpful because it includes the following useful information.
- The names and data types of the columns in the view definition.
- A list of all of the referenced database objects in the view definition.
For further information, see Teradata Vantage™ - SQL Data Definition Language Detailed Topics, B035-1184.
In Lab Analyses, you can view the progress of things like your ferments and maturity monitoring on a graph. Here’s an example:
In this document:
1. Go to Lab Analyses.
2. Filter the data that you want to show on the graph, eg: type a batch code or sub block code into the search box and click the search button.
3. If you have a lot of data you may also want to change the “Items per page” at the bottom to more than the default which is 20.
4. To show the graph, open up the gear icon and choose “Graph from table”:
This will take any data that is on the table at the time and show it on the graph.
You can configure which lines to show on the graph for which Analysis Sets by editing the Analysis Set and ticking "Show on Graph".
Table of Contents
Diskfree

Description

Check that a given filesystem / directory has enough space.
Syntax

```php
$oMonitor->addCheck(
    array(
        "name" => "check file storage",
        "description" => "The file storage have some space left",
        "check" => array(
            "function" => "Diskfree",
            "params" => array(
                "directory" => "[directory]",
                "warning"   => [size],
                "critical"  => [size],
            ),
        ),
    )
);
```
Parameters

Remarks on the [size] value:

The values for warning and critical:
- must be an integer, OR
- an integer or float followed by a size unit (see below)
- the warning level must be higher than the critical value
- units can be mixed between the warning and critical values
supported size units are
- ‘B’ byte
- ‘KB’ kilobyte
- ‘MB’ megabyte
- ‘GB’ gigabyte
- ‘TB’ terabyte
Example for Diskfree size params:
"warning" => "1.25GB", "critical" => "500.7MB",
Examples

None yet.
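As a starting sketch, a complete check could look like this (the directory path and thresholds are assumptions):

```php
$oMonitor->addCheck(
    array(
        "name" => "check upload storage",
        "description" => "The upload directory has some space left",
        "check" => array(
            "function" => "Diskfree",
            "params" => array(
                "directory" => "/var/www/uploads",  // assumed path
                "warning"   => "1.5GB",
                "critical"  => "500MB",
            ),
        ),
    )
);
```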
AWS Rotated Secret
You can create a Rotated Secret for an AWS user. Before you get started, ensure creating an AWS Target that includes the AWS region, as well as credentials for a privileged user authorized to rotate credentials.
When a client requests a Rotated Secret value, the Akeyless Vault Platform connects to the AWS Cloud through your Gateway to rotate the user password on your target AWS account.
Create a Rotated AWS Secret from the CLI
To create a Rotated AWS Secret from the CLI, run a command of the following shape (a sketch: the command name is an assumption and may differ between CLI versions; the flags follow the parameter descriptions below):

```bash
akeyless create-rotated-secret \
  --name <secret name> \
  --target-name <AWS target name> \
  --authentication-credentials <use-user-creds|target-rotator-creds> \
  --rotator-type <api-key|target> \
  --api-id <access id> \
  --api-key <access key>
```

target-name: The AWS Target with which the Rotated Secret should be associated.
authentication-credentials: Determines how to connect to the target AWS account.
use-user-creds - Use the credentials defined on the Rotated Secret item.

target-rotator-creds - Use the credentials defined on the AWS Target item.
Tip: Use target-rotator-creds if the Rotated Secret user is not authorized to change their own Access Key, and a privileged user, like the AWS Target user, is required to change the Access Key on behalf of the Rotated Secret user.
rotator-type: The type of credentials to be rotated. For AWS Targets, choose:
api-key - to rotate the Access Key specified in the Rotated Secret

target - to rotate the Access Key for the user specified in the AWS Target.
api-id: The Access Key ID of the AWS user whose Access Key should be rotated.
api-key: The Access Key to rotate.
Create a Rotated AWS Secret from the Console

Target: The AWS Target to be associated with the Rotated Secret.
Authenticate with the following credentials: Determines how to connect to the target AWS account:
- User credentials: Use the credentials defined inside the Rotated Secret item.
- Target credentials: Use the credentials defined inside the AWS Target item.
Tip: Select Target credentials if the Rotated Secret user is not authorized to change their own Access Key, and a privileged user, like the AWS Target user, is required to change the Access Key on behalf of the Rotated Secret user.
Rotator type: Determines the rotator type:
- API Key: Rotates the Access Key defined inside the Rotated Secret item.
- Target: Rotates the Access Key defined inside the AWS Target item.
Access Key ID: Defines the Access Key ID of the AWS user whose Access Key should be rotated.
Access Key: Defines the Access Key to rotate.
Tip: Automatic AWS Access Key rotations occur when Auto Rotate is enabled.
Rotation hour (local time zone): Defines the time when the Access Key should be rotated if Auto Rotate is enabled.
- Click Finish.
Managing Settings
Overview
All of the settings used to configure your AppsAnywhere to your exact liking are available in the admin section of AppsAnywhere. To get there:
Log in to AppsAnywhere as an admin user
Click Return to Admin from within the AppsAnywhere portal
Navigate the Settings menu on the right hand side of the top navigation
From here you will see a number of settings pages, each of which is described in an article in this section.
Conjur Java API
The Conjur Java API provides a robust programmatic interface to a Conjur server from within your Java project. Use the Java API to authenticate with Conjur and fetch secrets in a secure manner. Integration with Conjur provides a variety of additional benefits including being able to store security policy as code, and automated secret rotation.
Integration
See the Conjur Java API GitHub repo for integration instructions and an example of basic functionality.
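As a brief sketch of what fetching a secret looks like (hedged; check the repo's README for the authoritative usage, and note the variable ID below is hypothetical):

```java
import net.conjur.api.Conjur;

public class FetchSecret {
    public static void main(String[] args) {
        // Credentials are typically supplied via environment variables
        // or system properties, configured per the repo's instructions
        Conjur conjur = new Conjur();

        // Retrieve a secret value by its variable ID
        String secret = conjur.variables().retrieveSecret("db/password");
        System.out.println("Secret retrieved, length: " + secret.length());
    }
}
```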
If you need your team to be able to access certain information via the web while they are out in the field, it's possible to include hyperlinks in the hint portion of a question within the Device Magic app.
To set up a hyperlink within a question you need to first edit the form in the Device Magic Form Builder. Once you navigate to the form and the question you would like to add a hyperlink to, click "Show Advanced Settings" on the question. This reveals all the additional properties of the question, including the hint option. You can then type a description and include your hyperlink.
Once you have added your hyperlink, save your form. Now, when your devices refresh and update to the new version of the form the hyperlink(s) will be clickable and navigate to your default web browser (the image below shows how a link will look within the Device Magic Forms app).
Key Points to Remember:
Once a user clicks a hyperlink they will be taken outside of the Forms app. To continue editing their form, they will need to reopen the app.
A hyperlink will not work on the device if it does not have a cellular or wifi data connection.
When adding the hyperlink to the hint portion of the question, add a brief description so your team understands the purpose of the link within the form.
It's that easy to make sure your team has the information needed to complete forms while in the field.
If you have any questions or comments feel free to send us a message at support@devicemagic.com.
Suspend a User from the Profile User Interface
Unsuspend a User from the Profile User Interface
Delete All of a User’s Posts from the Profile User Interface
Give or Take Reputation from a User
Update a User’s Notifications
Disable or Enable a User’s Notifications
Update a User’s Expertise Topics
Update a User’s Authentication Modes
Remove a User’s Authentication Modes
Update a User’s Alterego Preferences
Choose one of the following options when installing OKD:
Manage cloud credentials manually:

You can set the credentialsMode parameter for the CCO to Manual to manage cloud credentials manually.
Remove the administrator-level credential secret after installing OKD with mint mode:

If you are using the CCO with the credentialsMode parameter set to Mint, you can remove or rotate the administrator-level credential after installing OKD. For more information, see Rotating or removing cloud provider credentials.
Currently, this mode is only supported on AWS and GCP.
In this mode, a user installs OKD with cloud credentials that are created and maintained manually.
Know how to integrate Cashfree’s card vault API to use the saved card feature.
Save Card
You need to first save the customer card at checkout before it can be used as a saved card.
Use the below form to submit a saved card.
Note: You cannot have more than 2500 active campaigns at any one time (including paused campaigns). You won't be able to create campaigns if you exceed this limit. If this happens, try archiving campaigns that you don't need instead of pausing them..
Note: If creating a Native Ad you will see a Show Zones with Title checkbox beside the Zone Type. This checkbox ensures that the ad will only target zones that allow the ad to show a title. Checking this box will also affect what zones show in Step 5: Zones.
Choosing an internal or an external campaign
Note: This feature is only available to users with SaaS features enabled on their account.
Campaign Types
Standard
Creates a standard campaign.
Exclusive
This campaign type is used to sell large amounts of impressions – bulk or geo targeted - at a fixed CPM price without using the bidder. An exclusive campaign will always be prioritized over non-exclusive campaigns:
- Weight Set a weight for the campaign from 1 to 10 which determines which campaign will show when there are multiple exclusive campaigns.
Traffic Share
This campaign type is used for a fixed percentage of traffic from 1% to 100% of an ad spot. The advertiser effectively rents an ad spot for a fixed period of time at a fixed price. If set to 100%, a traffic share campaign can work similarly to an exclusive campaign, in that it will get 100% of the ad spot. However, any Exclusive campaigns will take priority over any traffic share campaigns.
- Share: Set the percentage of traffic that this campaign will receive.
Exchange
This campaign type creates an RTB campaign using our ad exchange partners.
Select a Partner: Select an ad exchange partner for the campaign from the drop-down. The template for the ad exchange partner will be set automatically.
Throttle: This manually sets the percentage of requests that get considered for a bid.
Automatic Throttle: The automatic throttle automatically sets the percentage of requests that are considered for a bid. When a campaign is not winning bids, the system will throttle the campaign to 1% to reduce waste.
Note: The automatic throttle is applied after any manual throttle that you set. For example, you set a manual throttle of 50% and then also turn on automatic throttle. The automatic throttle will apply its percentage on the 50% you set in the manual throttle.
RTB Floor CPM: Set the minimum Cost Per Mille to be considered for this campaign.
RTB Bid Adjustment: Set the percentage adjustment applied to bids received on the campaign.
Managed
This campaign type behaves the same way as a Standard campaign, but it can only be created and edited by advanced users. Managed campaigns are used by advanced users to create and manage campaigns for clients.
Note: If you are logged in as a normal user, you will not see this option when creating a campaign.
As a normal user, you may see Managed campaigns in the Campaigns List with the prefix [MANAGED] before them. You will receive an error message if you try to edit or apply bulk updates to them.
Table of Contents
To open an archive in Ark, choose Open (Ctrl+O) from the menu. You can also open archive files by dragging and dropping from Dolphin. Archive files should be associated with Ark, so you can also click a file in Dolphin and select Ark to open it, or select an extract action for this file.
If you have enabled the information panel in the Settings menu, additional information about the selected folders or files in the archive is displayed.
Various operations can be performed for an opened archive by using the menu. For example, you can save the archive with a different name using Save As. Archive properties such as type, size and MD5 hash can be viewed using the Properties item.
Ark has the ability to test archives for integrity. This functionality is currently available for zip, rar and 7z archives. The test action can be found in the menu.
Ark can handle comments embedded in zip and rar archives. Comments in zip archives are automatically displayed. In rar archives, you can add or modify a comment with the Add Comment / Edit comment (Alt+C) actions from the menu.

Note: The comment menu item is enabled only for rar archives.

[Figure: Editing a comment]

To remove a comment from a rar archive, delete the text in the comment window.
Note: Skip this section if you already have a target cluster set up.
If you don't have a cluster ready you can set one up based on the Getting Started with Amazon EKS guide.
Ensure that your AWS credentials are present in ~/.aws/credentials or are stored as environment variables.
Verify your credentials:

```bash
aws sts get-caller-identity
```
Create a cluster with managed node-pools using eksctl:

```bash
eksctl create cluster
```
If you want to use the default settings for installing Pixie's Vizier module, you'll want to ensure that your EKS cluster has persistent volume and storage classes set up.
If your cluster does not have an accessible storage class type, you'll want to deploy with the etcd operator. Note that self-hosted Pixie Cloud requires persistent volumes.
Once connected, follow the install steps to deploy Pixie.
Swift Performance has built in Varnish support. If you run autoconfig it will detect Varnish automatically, otherwise you need to enable Autopurge for Varnish on Swift Performance Settings » Caching » Varnish tab.
If you are using a proxy (e.g. Cloudflare) you may need to set up your server's IP address here in order to be able to purge the Varnish cache.
influx bucket delete
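The influx bucket delete command removes a bucket from InfluxDB. Typical invocations look like this (the bucket, org, and token values are placeholders):

```bash
# Delete a bucket by name within an organization
influx bucket delete --name my-bucket --org my-org --token $INFLUX_TOKEN

# Or delete by bucket ID
influx bucket delete --id 0Xx0oox00XXoxxoo
```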
For helping users via Chatz or general help.
Posts: 1 · Topics: 1
Last post: March 11, 2017, 08:34:09 AM, "Accessing Your cPanel Fi..." by Skhilled
Settlement Recon Report
A new report, Settlement Recon, is available to help you reconcile all types of settlements within minutes, for a selected date range. The Settlement Recon report provides the list of settlements received in the specified date range, along with detailed information about transactions, adjustments/refunds against transactions, disputes, and more, corresponding to the settlements received.
To generate the settlement reconc report,
- Go to PG Dashboard > Reports.
- In the Report Type field, select Settlement Recon and click Generate Reports.
- Select the date range for which you want to view the report. You can also select the file format (csv or xlsx).
- Click Generate Report. You can download the report after it has been generated.
The downloaded report has two sections, Summary and Reconciliation Details.
Summary - Gives a summarized view of all the settlements received from Cashfree Payments for the selected time period. Details like total transaction amount, settlement amount, adjustments, UTR, settlement type and charges, are available in the Summary section.
Reconciliation Details - Shows all individual events associated with settlements for the selected time period. Events like Payments, Refunds, Settlements, Disputes, and details associated with each event are shown in the Reconciliation Details section. Since all events associated with a single settlement are available in a single report, it makes reconciliation faster and much simpler for you.
Sample report is available here.
Steps to Reconcile Settlements
- Get the Settlement UTR/ID of the settlement you want to reconcile from the Settlement Summary section in the downloaded report.
- Search for all events associated with the Settlement UTR/ID in the Reconciliation Details section.
- Calculate the total amount of all individual events of the selected Settlement UTR/ID, and it should be equal to the net settlement amount in the Settlement Summary section for the same UTR.
- Check if the total amount of all the payments in the selected UTR matches the settlement amount of the settlement in the settlement summary section.
In the same way, you can now map all the credits and debits for all settlements received in the selected time period. Within minutes you can reconcile an entire month of settlement data.
Events
PAYMENT - A credit entry that corresponds to the payment made by the customer. It contains all the details associated with that payment such as customer details, reference ID, payment mode, etc.
REFUND - A debit entry that corresponds to the refund initiated to the customer. It contains all the details associated with that refund such as refund ARN, refund type, transaction amount, etc.
REFUND_REVERSAL - A credit entry that corresponds to the refund reversal initiated to your account.
CHARGEBACK - A debit entry which includes details about the chargeback raised by the customer against a transaction.
CHARGEBACK_REVERSAL - A credit entry which includes details about the chargeback reversal cleared against a transaction.
OTHER_ADJUSTMENT - A debit/credit entry to manage adjustments.
DISPUTE - A debit entry for a dispute raised against a transaction. It contains all the details associated with that dispute such as status, transaction amount etc.
DISPUTE_REVERSAL - A credit entry for a dispute reversal cleared/closed against a transaction.
Storefront Design View
As an alternative to using Commerce Management, content managers and content editors can create and manage content assets directly in the storefront, by entering the storefront in "Design View". The Design View makes it possible to immediately preview the effect of an action, e.g., the assignment of a component to a page slot, on the resulting page.
The "Design View" is divided into four main areas:
The palette, which provides tiles of components and media assets for drag & drop creation of new components (the palette is hidden by default).
The render area renders the page and displays the preview, with the currently selected asset highlighted.
The content structure tree is used to navigate through the previewed content structure including page, page variant, components, includes, categories, products, etc.
The content edit area displays the available editing options for the currently selected element in the content structure tree.
The Design View provides two modes:
The Inspect mode highlights the rendered parts of the page and helps to navigate easily to that part that has to be edited.
The Layout mode displays all slots and placeholders on a page to extend or change the page layout via drag & drop.
[Figure: Inspect Mode of the Design View (palette hidden)]
[Figure: Layout Mode of the Design View]
MClimate API Documentation
Introduction
Based on simple REST principles, the MClimate API endpoints return JSON metadata about the smart-home controllers like temperature, humidity, etc., directly from the MCloud.
The API also provides an interface to control MClimate devices and access user-related data, like the user's devices, activity, etc. Such access is enabled through selective authorization by the user.
The MClimate API is served from a single base address and provides a set of endpoints, each with its own unique path. To access private data through the Web API, such as user profiles, an application must get the user's permission to access the data. Authorization is via the MClimate Auth service.
Requests
The MClimate Web API is based on REST principles. Data resources are accessed via standard HTTPS requests in UTF-8 format to an API endpoint. Where possible, the Web API uses appropriate HTTP verbs for each action:

| METHOD | ACTION |
| --- | --- |
| GET | Retrieves resources |
| POST | Creates resources |
| PUT | Changes and/or replaces resources or collections |
| DELETE | Deletes resources |
Response format
Our API uses the Hypertext Application Language (HAL) as the response format for our requests. HAL is a simple format that gives a consistent and easy way to hyperlink between resources in an API. APIs that adopt HAL can be easily served and consumed using open source libraries available for most major programming languages. It's also simple enough that you can just deal with it as you would any other JSON.

HAL recognises two basic types of responses: entity and collection. Both follow the same general format, although they are slightly different.

Whenever an API response returns an entity (e.g. /v1/controllers/{serial_number}), the HAL response format is as follows:
```json
{
    "controller": {
        "user_id": 200,
        "serial_number": "UV******8CS6",
        "mac": "5ECF7F********",
        "firmware_version": "V1SHTHF",
        "name": "Melissa UV4*****",
        "type": "melissa",
        "online": false
    },
    "_links": {
        "self": {
            "href": "/v1/controllers/UV4R7******S6"
        }
    }
}
```
Notice how the actual information is "enveloped" in a key named after the resource (in this case the "controller" resource). The "_links" key is a mandatory key in HAL and it defines useful links to the resource itself or to related resources.

Whenever an API response returns a collection (e.g. /v1/controllers), the HAL response format is as follows:
```json
{
    "_embedded": {
        "controller": [
            {
                "user_id": 200,
                "serial_number": "H5******J6X",
                "mac": "ACCF***6522E",
                "firmware_version": "V1SHTHF",
                "name": "Melissa H59****X",
                "type": "melissa",
                "room_id": 7,
                "online": false,
                "brand_id": 9,
                "controller_log": [],
                "_links": {
                    "self": {
                        "href": "/v1/controllers/H59I****6X"
                    }
                }
            },
            {…},
            {…},
            {…},
            …
        ]
    },
    "total": 22,
    "_links": {
        "self": {
            "href": "/v1/controllers"
        },
        "first": {
            "href": "/v1/controllers?page=1"
        },
        "last": {
            "href": "/v1/controllers?page=1"
        }
    }
}
```
The main difference between a collection response and an entity response is that the collection response is "enveloped" in an "_embedded" key. An important thing to notice is that pagination information is returned with every collection response.

Note: If a collection response returns no information, the API returns an empty response following the entity HAL format; but if an entity response returns no information, a 404 error response is generated.
Response Status Codes

The MClimate API uses standard HTTP response codes to indicate the success or failure of a request. For example:

| CODE | MEANING |
| --- | --- |
| 400 | Bad Request - The request could not be understood by the server due to malformed syntax. The message body will contain more information. |
Error responses
The MClimate API follows the error response format as defined in RFC 7807. Here is an example response following this response format:
```json
{
    "type": "",
    "status": 404,
    "title": "Not Found",
    "detail": "Controller not found"
}
```
Fields:
"type" tries to provide further information and reference to the error described in the response
"status" is the HTTP error code generated as a result of the request
"title" human readable description of the HTTP error code
"detail" provides additional information
In the following paragraphs we give generalized information about the error responses generated by the MClimate API:

- Our API consumes content of type "application/json"; if your request does not use JSON as transport, or your request body is malformed, a 400 response is generated.
- Every time you try to fetch a nonexistent entity, a 404 response is generated.
- Since this API serves the purpose of third-party integrators, authentication and authorization mechanisms apply. Whenever you fail to authorize or authenticate, a 401 response is generated.
- As described in Rate Limiting, if you exceed the number of requests per minute, a 429 response is generated.
- If our servers experience internal problems, a 500 error is generated. Should this ever happen, please contact us.
Rate Limiting
Rate Limiting enables the Web API to share access bandwidth to its resources equally across all users.

Rate limiting is applied per application, based on Client ID, regardless of the number of users who use the application simultaneously. The number of requests within a certain amount of time that can be authenticated by one token is also limited.

To reduce the number of requests, use endpoints that fetch multiple entities in one request. For example: if you often request single controllers, use endpoints that return multiple, such as Fetch All User Controllers.
Note: If the API returns status code 429, it means that you have sent too many requests.
Pagination
Every GET request which returns a collection (e.g. Fetch all user controllers) contains pagination information. It is contained within the "_links" key of every response. The count of entities per page is set to the constant number of 30. An additional key, "total", indicates the number of entities currently returned. Referencing which page you would like to fetch is achieved through the "page" GET parameter in the URL (e.g. /v1/controllers?page=2).

Pagination information schema:
"total": 22,
"_links": {
"self": {
"href": "/v1/controllers"
},
"first": {
"href": "/v1/controllers?page=1"
},
"href": "/v1/controllers?page=1"
}
}
Additional API options
Our API provides flexibility that we find useful in certain situations. All of the options described below are passed to our endpoints as GET parameters in the URL.

To define explicitly which fields of a given resource you would like to get in the response, you can pass the "fields" parameter in the URL. Example: /v1/controllers/{serial_number}?fields=serial_number,mac,name (the fields are comma separated).

Sometimes it's useful to be able to pass more than just one identifier to a resource rather than just some ID. This can be achieved by passing the desired field => value pair in the URL. Note: this feature is only available when returning collections. Example: /v1/controllers?type=melissa

You can also sort collections by a specific parameter by passing the "sort" parameter in the URL. Example: /v1/controllers?sort=-created (the sorting order is defined by the sign preceding the field name; the default order is ascending).

Limiting the number of entities in a collection response is also useful, as it can lower the payload size, which is preferable for high-latency networks. You can limit the number of entities by passing the "limit" parameter in the URL. Example: /v1/controllers?limit=15. Note: the default value of "limit" is 30, as defined in Pagination.

All of the above parameters can be combined as desired, but every field used in these parameters must correspond to a valid field in the resource's entity. For example, /v1/controllers?foo=bar will result in an error, as "foo" is not a valid field of the controllers resource.
Authorization
Authorization Overview
The MClimate API provides information that you can use to build home experiences. The information is ultimately owned by users, and users can explicitly choose to share this information with third-party products.
The purpose of authorization is to give your customers a secure means to grant access to their MClimate device data.
Client site or app before authorization
In your client site or app, you can provide a way for customers to give your product access to their MClimate device data. To do this, create a button or other UI element to initiate the OAuth flow.
When you build user authorization into your app, you can either:
use an external browser to authorize an app
use a new page to auth a web app
Warning: Do not use iFrames for user authorization.
Client site or app after authorization
After your customer authorizes your third-party product, we'll send an authorization code that your product can exchange for an access token. Your product can then send the access token with API calls to access MClimate data.

To learn how to set up an authorization flow for a user and obtain an access token, see Authentication and Authorization with OAuth 2.0.
Authentication and Authorization with OAuth 2.0
The MClimate API uses the OAuth 2.0 protocol for authentication and authorization (check the official specification here).

Before your product can access private data using the MClimate API, it must obtain an access token that grants access to that API. A single access token can grant varying degrees of access to multiple sections of the API.

The authorization sequence begins when your product redirects a browser to a MClimate URL with query parameters indicating the requested access. MClimate handles the user authentication, session selection, and user consent. The result is an authorization code, which your product can exchange for an access token. Your product can then use the access token to make calls to the API.
Step 1 - Configure your third-party product
Note: You can skip this step if you have already registered your product (and you already have a Client ID and Client secret).
Register for client credentials
In order to obtain credentials, you need to have an account on our platform. If you do not have one, go to the registration form and fill in all required fields. Then, when you are signed in, click on your email in the upper right corner, click on "Clients", and then create a new client. You will get all needed credentials.
Set redirect URI on your server
You have to set a URI on your server where the authorization code will be sent and handled. Note that your server must be public (localhost will not work). For development you can use tunnelling solutions like ngrok, or allow public access for your IP.
Authorization URL
When you receive your client credentials you can test your authorization URL (replace <auth-endpoint> with the MClimate authorization endpoint):

<auth-endpoint>?client_id={CLIENT_ID}&state={STATE}&redirect_uri={REDIRECT_URI}

The authorization URL includes a state parameter that you can use to test for possible cross-site request forgery (CSRF) attacks. The redirect_uri is an optional parameter used to specify the redirect URI if one was not provided or was invalid in the client registration form. The redirection endpoint URI MUST be an absolute URI.
Step 2 - Request an authorization code
After your product is configured, you can request an authorization code. The authorization code is not the final token that you use to make calls to MClimate. It is used in the next step of the OAuth 2.0 flow to exchange for an actual access token. This step provides assurance directly from MClimate to the user that permission is being granted to the correct product, with the agreed-upon access.
The user experience
We present a MClimate login page that asks the user to grant access to your product. To test this yourself, load the authorization URL from Step 1 into a browser. You should see an access request page:
Go ahead and enter your MClimate account credentials. After clicking the “Sign in” button you will be automatically redirected to the redirect URI you provided and already set.
For example: if you set your redirect URI to <your-redirect-uri>, then after successfully authorizing on the MClimate login page you will be redirected to <your-redirect-uri>?code={AUTHORIZATION_CODE}&state=xyz
From the code parameter you will receive your authorization code. Additionally, you can check the state parameter too.
Step 3 - Exchange authorization code for an access token
The final step in obtaining an access token is for your product to ask for one using the authorization code it just acquired. This is done by making a raw HTTP POST request with the header Content-Type: application/json.

Note: Use this endpoint when requesting an access token:
| Parameter | Description |
| --- | --- |
| client_id | The "product ID" generated in Step 1 |
| client_secret | The "product secret" generated in Step 1 |
| grant_type | The value of this field should always be: authorization_code |
| code | The authorization code received in Step 2 |
| redirect_uri | (Optional) If a redirect URI was provided in the authorize URL, use the same value as the redirect_uri parameter |
Key Point: Your authorization code is valid for one POST call.
Postman example (auth)
Postman provides an easy way to test an access token request. In the Headers tab, make sure Content-Type is set to application/json.
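An equivalent request with curl might look like this (a sketch; <token-endpoint> stands in for the token URL referenced above, and all credential values are placeholders):

```bash
curl -X POST "<token-endpoint>" \
  -H "Content-Type: application/json" \
  -d '{
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "grant_type": "authorization_code",
        "code": "AUTHORIZATION_CODE_FROM_STEP_2"
      }'
```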
Access token response
A successful access token request returns a JSON object containing the following fields:
- access_token - The access token for the user. This value must be kept secure.
- expires_in - The number of seconds remaining, from the time it was requested, before the token expires.
Step 4 - Make authenticated requests
After a product obtains an access token, it sends the token to a MClimate API in an HTTP authorization header.
Note: Use the MClimate API root URL when making API calls:
Postman example
Basic endpoints
Last modified
7mo ago
Copy link
Outline
Introduction
Requests
Response format
Response Status Codes
Error responses
Rate Limiting
Pagination
Additional API options
Authorization
Authorization Overview
Client site or app before authorization
Client site or app after authorization
Step 1 - Configure your third-party product
Step 2 - Request an authorization code
Step 3 - Exchange authorization code for an access token
Step 4 - Make authenticated requests | https://docs.mclimate.eu/api-documentation/ | 2022-09-25T05:09:46 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.mclimate.eu |
Roster is a list of scheduled duties for organization users.
Create a roster
Go to Setup > Accounts > Rosters.
If this is the first roster, click Create New. Otherwise, click + Add. The CREATE ROSTER page is displayed.
Enter the following roster properties:
Click Create.
The roster is created. A confirmation popup is displayed, if you have not selected any user or user group. Click Yes or No accordingly.
Click Yes to create a roster. roster
- Select a client from the All Clients list.
- Go to Setup > Accounts > Rosters.
- Select the roster name and click Remove, which displays a confirmation message.
- Click Yes to confirm roster removal. | https://docs.opsramp.com/platform-features/feature-guides/account-management/managing-rosters/ | 2022-09-25T05:45:36 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.opsramp.com |
Swift Performance can generate Critical CSS and also can minify CSS. Minify will remove unnecessary whitespaces, semicolons, etc and will shorten color codes.
There are 3 available options:
- Don’t minify
- Basic
- Full
You don’t need minify if your CSS files are already minified. Basic option is a legacy option, the recommended is using Full | https://docs.swiftperformance.io/knowledgebase/minify-css/ | 2022-09-25T04:16:43 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.swiftperformance.io |
Editing a Parallels Environment
Overview
In case any of the details of your Parallels environment (such as the gateway hostname or port) change for any reason, you must edit the details of the environment in AppsAnywhere to ensure that the link remains active and users are still able to access their Parallels Environments page (See Viewing Parallels Parallels Environment page. | https://docs.appsanywhere.com/appsanywhere/2.12/editing-a-parallels-environment | 2022-09-25T04:11:02 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['../../appsanywhere/2.12/2956362838/image2017-10-11%2016:59:27.png?inst-v=04c6fcf4-a5c2-453c-b2cc-1630f75bf4e6',
None], dtype=object) ] | docs.appsanywhere.com |
Tool bar
Tool bar is at the top of the main editor window including five sets of control buttons or pieces of information that provide editing functions for specific panels and allow the user to conveniently implement workflows.
Transform Tools
This provides the editing node transform attribute (position, rotation, scale, size) function for the scene editor. Please refer to use transform tool to place node for detailed information.
Gizmo Display Mode
This control is for setting bounding.
Preview Game
This includes three buttons:
- Select the preview platform: Click on the drop-down menu to select the preview platform as the simulator or the browser.
- Project: Open the project folder.
- Open App: Open the installation path of the program. | https://docs.cocos.com/creator/2.3/manual/en/getting-started/basics/toolbar.html | 2022-09-25T04:40:45 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['index/toolbar.png', 'toolbar'], dtype=object)
array(['toolbar/transform_tool.png', 'transform tool'], dtype=object)
array(['toolbar/gizmo_position.png', 'gizmo position'], dtype=object)
array(['toolbar/gizmo_rotation.png', 'gizmo rotation'], dtype=object)
array(['toolbar/preview.png', 'preview'], dtype=object)
array(['toolbar/preview_url.png', 'preview url'], dtype=object)
array(['toolbar/open_project.png', 'open project'], dtype=object)] | docs.cocos.com |
If your email address is associated with more than one organization, you can switch between these organizations from your dashboard.
After logging in to your account, you will be directed to your default organization.
Note: If you are looking to change the organization a Device belongs to, check our this article.
Switching between Organizations
After logging in, click the user icon at the top right of your webpage. Under "Switch Organization", you will see a list of the organizations in which you belong to. For example: "Device Magic" and "Device Magic 2".
Click on the organization's name you would like to switch to, and the organization's Home page will load. Use the same method to switch back to another organization.
To learn how to set your default organization, check out this article.
Note: If you would like to remove an organization name from this list, you will need to remove yourself as a user in the organization. Click here to learn how.
Other Useful Articles:
Adding and Managing Users
Removing Users from your Organization
Multiple Organization Default Setting
Connecting Devices to your Organization
Change the Organization (ORG) a Device Belongs To
Change your Password and Personal Preferences
If you have any questions or comments feel free to send us a message at [email protected]. | https://docs.devicemagic.com/en/articles/3723708-switching-between-organizations | 2022-09-25T04:58:58 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['https://downloads.intercomcdn.com/i/o/185901183/cedb6042e79d4a5a1be52caa/Screenshot+2020-02-18+at+14.40.16.png',
None], dtype=object) ] | docs.devicemagic.com |
You can set up Umbrella to log events to an Amazon S3 bucket which you manage. To enable logging in Umbrella to a self-managed Amazon S3 bucket, follow the prerequisite steps to set up an AWS account and create an AWS S3 bucket. Then, configure Umbrella to use the self-managed S3 bucket to record log events.
Table of Contents
- Prerequisites
- Enable Logging
- S3 Bucket Data Path
- Download Files From the S3 Bucket Locally
Prerequisites
- Full admin access to the Umbrella dashboard. See Manage User Roles.
- A login to Amazon AWS service (). If you don't have an account, Amazon provides free sign up for S3.
Note: Amazon requires a credit card in case your usage exceeds free plan usage.
- A bucket configured in Amazon S3 to be used for storing logs. For more information, see Amazon's S3 documentation.
Note: Periods in S3 bucket names are not supported.
JSON Bucket Policy
When you set up your Amazon S3 bucket, you must add a bucket policy which accept uploads from Umbrella. Copy the following preconfigured JSON and substitute your S3 bucket name for
bucketname. Then, paste the Umbrella S3 bucket policy into your Amazon S3 bucket policy.
{ " } ] }
Enable Logging
- Navigate to Admin >.
S3 Bucket Data Path
You can integrate your self-managed AWS S3 bucket with the Cisco Cloud Security App for Splunk. Use the data path to your AWS S3 bucket to set up the integration. The S3 bucket data path contains the following path fields:
<AWS S3 bucket name>-<AWS region>/<AWS S3 bucket directory prefix>
- AWS S3 bucket name and AWS region—the name of your AWS S3 bucket, a dash (
-), and the AWS region.
- AWS S3 bucket directory prefix—the directory prefix (customer folder name) to the AWS S3 bucket.
Sample S3 Bucket Data Path:
my_company_name-us-west-1/dnslogs
Use the data path to your self self-managed S3 bucket to your local directory.
Prerequisites
- Install the AWS CLI to your system.
Run the AWS CLI to download your files from an S3 bucket to your local directory. To run the AWS CLI command in test mode (without syncing files), use the
--dryrun flag.
AWS CLI command syntax:
aws s3 sync s3://DATAPATH /path/to/local/directory/
Detailed sample command:
aws s3 sync s3://mycompany-us-west-1/dnslogs /opt/splunk/etc/apps/TA-cisco_umbrella/data/
Upgrade Reports < Enable Logging to Your Own S3 Bucket > Enable Logging to a Cisco-managed S3 Bucket
Updated 3 months ago | https://docs.umbrella.com/deployment-umbrella/docs/setting-up-an-amazon-s3-bucket | 2022-09-25T05:52:01 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.umbrella.com |
Create distributed port groups for each NSX Edge node uplink, Edge node TEP, management network, and shared storage.
Prerequisites
Verify that you have created a vSphere Distributed Switch.
Procedure
- In the vSphere Client, navigate to a data center.
- In the navigator, right-click the distributed switch and select .
- Create a port group for the NSX Edge uplink.For example,
DPortGroup-EDGE-UPLINK.
- Configure VLAN Type as VLAN Trunking.
- Right-click the distributed switch and from the Actions menu, select .
- Select Teaming and failover and click Next.
- Configure active and standby uplinks.For example, active uplink is
Uplink1and standby uplink is
Uplink2.
- Repeat steps 4-7 for the Edge node TEP, management network, and shared storage.For example, create the following port groups:
- (Optional) Create port groups for the following components:
- vSphere vMotion
- VM traffic
What to do next
Add hosts to the to the vSphere Distributed Switch. See Add Hosts to the vSphere Distributed Switch. | https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-D08E6ABF-E9CF-4176-87F7-34B07F08B3DA.html | 2022-09-25T05:55:27 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.vmware.com |
Programming WebLogic Enterprise JavaBeans
The sections that follow describe the EJB implementation process, and provide Java Beans.
Table 4-1 EJB Development Tasks and Result
Create a source directory where you will assemble the EJB.
BEA with WebLogic Server.
If you prefer to package and deploy your EJB in a JAR file, create a directory for your class files, and within that directory, a subdirectory named
META-INF for deployment descriptor files.
Listing 4-1 Directory Structure for Packaging JAR.
BEA.
Listing 4-2 Local Client Performing a Lookup
...
Context ctx = getInitialContexLt("t3://localhost:7001", "user1", "user1Password");
...
static Context getInitialContext(String url, String user, String password) {
Properties h = new Properties();
h.put(Context.INITIAL_CONTEXT_FACTORY,
:
Using EJB links is a BEA best practice and WebLogic Server fully supports EJB links as defined in the EJB 2.1 Specification. You can link an EJB reference that is declared in one application component to an enterprise bean that is declared in the same J2EE J2EE J2EE:
ejb-jar.xml, specify the name by which the URL is bound in JNDI in the
<jndi-name>element of the
resource-refelement. Configuring WebLogic Server Environments. After you configure a custom channel, assign it to an EJB using the network-access-point element in
weblogic-ejb-jar.xml.
Transaction design decisions are discussed in Features and Design Patterns.: Container-Managed Transaction Elements in ejb-jar.xml and Sun documentation..
UserTransactionobject and begin a transaction before you obtain a Java Transaction Service (JTS) or JDBC database connection. To obtain the
UserTransactionobject,. See Features and Design Patterns for more information.
Note: You can associate only a single database connection with an active transaction context.
See Listing 4-3 for a code sample.
Listing();.
Listinginterface.
Listinginterface which is used in the
weblogic.ejb.WLTimerServiceinterface to pass WebLogic Server-specific configuration information for a timer. The
weblogic.ejb.WLTimerInfomethod is shown in Listing 4-6
Listinginterface which extends the
javax.ejb.Timerinterface to provide additional information about the current state of the timer. The
weblogic.ejb.WLTimerinterface is shown in Listing 4-7.
Listing 4-7
weblogic.ejb.WLTimer Interface..
For comprehensive documentation of the elements in each descriptor file, definitions, and sample usage, refer to:
ejb-jar.xml.
Note: In the sections that follow, click the element name in the "Element" column to view detailed documentation on the element.
This table lists the elements in
weblogic-ejb-jar.xml related to security.
Table 4-2 Security Elements in weblogic-ejb-jar.xml
This table lists the elements in
weblogic-ejb-jar.xml that map the names of beans or resources used in source code to their JNDI names in the deployment environment.
Table 4-3 Resource Mapping Elements in weblogic-ejb-jar.xml
This table lists elements in
weblogic-ejb-jar.xml-jar.xml related to container-managed transactions.
Table 4-7 Container-Managed Transaction Elements in ejb-jar.xml
Table 4-8 lists the elements in
weblogic-ejb-jar.xml related to container-managed transactions.
Table 4-8 Container-Managed Transaction Elements in weblogic-ejb-jar.xml
This table lists the elements in
weblogic-ejb-jar.xml related to performance.
Table 4-9 Performance Elements in weblogic-ejb-jar.xml
This table lists the elements in
weblogic-ejb-jar.xml related to network communications.
Table 4-10 Communications Elements in weblogic-ejb-jar.xml
Container classes include the internal representation of the EJB that WebLogic Server uses and the implementation of the external interfaces (home, local, and/or remote) that clients use. You can use WebLogic Workshop.
BEA recommends that you package EJBs as part of an enterprise application. For more information, see Deploying and Packaging from a Split Development Directory in Developing Applications with BEA tools that support the EJB development process. For a comparison of the features available in each tool, see Table 4-11.
The
javac compiler provided with the Sun Java J2SE.
BEA recommends that you use
EJBGen to generate deployment descriptors; this is a BEA best practice which allows for easier and simpler maintenance of EJBs. When you use
EJBGen, you have to write and annotate only one bean class file, which simplifies writing, debugging, and maintenance. If you use WebLogic Workshop as a development environment, WebLogic Workshop automatically inserts EJBGen tags for you.
For information on EJBGen, see EJBGen Reference.: appc replaces the deprecated
ejbc utility. BEA recommends that you use
appc instead
ejbc..
The following table lists BEA tools for EJB development, and the features provided by each.
Table 4-12 EJB Tools and Features | http://e-docs.bea.com/wls/docs91/ejb/implementing.html | 2009-07-04T09:06:35 | crawl-002 | crawl-002-029 | [] | e-docs.bea.com |
WebLogic Administration Portal Online Help
Task 1: Setting Up Users and Groups
Task 2: Building a Portal
Task 3: Setting Entitlements for Portal Resources
Task 4: Setting Up and Managing Content
Task 5: Setting Up Personalization
Task 6: Setting
Up Delegated Administration Roles
Portal Tasks
How Do I Change the Appearance of a Portal?
How Do I Localize a Portal?
How Do I Create Multi-Channel Portals?
How Do I Create (Duplicate) a New Portlet?
How Do I Create Configurable Portlets?
How
Do I Create Different Views of My Portal for Different Users?
Content Tasks
How Do I Add Content to My Portal?
How Do I Organize Content?
How
Do I Personalize and Update Content?
Users and Groups Tasks
How Do I Set Up a New Administrator?
How Do I Entitle Users to See Specific Parts of My Portal?
How Do I Change an Administrator Password?
How
Do I Update User Information?
WebLogic Administration Portal Online Help
Administering Portals with WebLogic Administration Portal
Overview of Portal Security
Overview of Users and Groups
Overview of Group Hierarchy
Overview of User Profiles and Property Sets
Create a New User
Find Users
Edit a User Profile Values
Add a User to a Group
Remove Users from Groups
Change a User's Password
List a User's Group Membership
Delete a User from the System
Create a Group
Move a Group - Change a Group's Position in the Group Hierarchy
Delete a Group
Assign Delegated Administration to Groups
Remove Delegated Administration from Groups
Overview of Delegated Administration
Overview of Delegated Administration Hierarchy
Create a New Delegated Administration Role
Add a User to a Role
Add a Group to a Role
Add Users to Administrative Roles with Expressions
Remove a User from a Role
Remove a Group from a Role
Grant Delegation Authority to an Existing Role
Move a Role - Change a Role's Position in the Hierarchy
Delete a Delegated Administration Role
Rename a Role
Delegated Administration Role Policy Reference Summary
Overview of Visitor Entitlements
Create a Visitor Role
Add Users to a Visitor Role
Add Groups to a Visitor Role
Add Users to Visitor Roles with Expressions
Modify Visitor Role Properties
Rename a Visitor Role
Delete a Visitor Role
Assign Delegated Administration of Visitor Entitlement Roles
Remove Delegated Administration of Visitor Entitlement Roles
Visitor Entitlement Role Policy Reference Summary
Using Multiple Authentication Providers with WebLogic Portal
View Security Provider Properties
Assign Delegated Administration to Security Providers
Overview of Interaction Management
Duplicate or Modify a Campaign
Modify a Content Action
Modify an Email Action
Modify a Discount Action
Preview a Modified Campaign Action
Overview of Content Selectors
Modify a Content Selector
Overview of User Segments
Modify a User Segment
Duplicate a User Segment
Assign
Delegated Administration of Interaction Management Resources
Remove Delegated Administration of Interaction Management Resources
Overview of Content Management
View Content, Type, or Repository
Overview of Managing Content
Add a Content Item
Add a Content Node
Copy a Content Item
Delete a Content Item
Edit a Content Item
Move a Content Item
Rename a Content Item
Update a Content File associated with a Content Item
Using Library Services with a BEA Repository
Create a Content Item within a Library Services-Enabled Repository
Rename a Content Item within a Library Services-Enabled Repository
Add a Content Node within a Library Services-Enabled Repository
Copy a Content Item within a Library Services-Enabled Repository
Delete a Content Item within a Library Services-Enabled Repository
Edit a Content Item within a Library Services-Enabled Repository
Move a Content Item within a Library Services-Enabled Repository
Check In a Content Item
Change Status of Content Item
Update a Content File in a Library Services-Enabled Repository
Managing Types
Add a Content Type
Copy a Content Type
Delete a Content Type
Edit a Content Type
Rename a Content Type
Add Content Type Properties
Add a New Repositoy Connection
Modify an Existing Repository Connection
Configure a Filesystem Repository Connection
Edit Properites of a Repository Connection
Enabling Library Services for a BEA Repository
Disconnect a Repository
Assign
Delegated Administration of Content Management Resources
Remove Delegated Administration of Content Management Resources
Overview of Portal Management
Overview of Library Administration
Overview of Portal Administration
Create a Desktop
Modify Desktop Properties
Create a Book
Set the Primary Book on a Desktop
Add a Book to a Book
Modify Book Properties
Rearrange the Pages in Your Book
Add Pages to a Book
Overview of Look and Feels
Assign Look and Feel to a Desktop
Create a Page
Rearrange the Portlets on a Page
Manage Page Content
Add a Portlet to a Page
Add a Remote Portlet
Add a Producer
Filtering the Available Portlets List
Deleting Remote Portlets
Manage Portlet Properties
Lock a Portlet's Position on a Page
Create a Portlet Preference
Edit a Portlet Preference
Overview of Portlet Categories
Manage Portlet Category Properties
Create a Portlet Category
Manage a Portlet Category
Assign a New Shell to a Desktop
Modify Shell Properties
Assign a Theme to a Portal Element
Modify Themes
Overview of Portal Resources
Updating Portal Resources
Set Entitlements on Portal Resources
Remove Entitlements from Portal Resources
Make Portal Resources Available in the Weblogic Administration Portal
Delete a Portal Resource from the Portal Library
Assign
Delegated Administration of Portal Resources
Remove Delegated Administration of Portal Resources
Overview of Service Administration
Overview of Configurable Services
Add and Remove Configurable Items
Authentication
Using Multiple Authentication Providers with WebLogic Portal
Authentication Hierarchy Service
Authentication Service Provider Service
Ad Service
Configure the Ad Service
Ad Service Configuration Parameters
Behavior Tracking
Configure Behavior Tracking
Behavior Tracking Configuration Parameters
Cache
Create a New Cache
Configure a Cache
New Cache Configuration Parameters
Configure Cache Configuration Parameters
Campaign Service
Configure a Campaign Service
Campaign Service Configuration Parameters
Content Provider
Configure a New Content Provider
Content Provider Configuration Parameters
Event Service
Configure an Event Service
Event Service Configuration Parameters
Mail Service
Configure a Mail Service
Mail Service Configuration Parameters
Payment Client Service
Configure a Payment Client Service
Payment Client Service Configuration Parameters
Scenario Service
Configure a Scenario Service
Scenario Service Configuration Parameters
Tax Service
Configure a Tax Service
Tax Service Configuration Parameters
Overview of Search Services
Start a Search Service
Stop a Search Service
Pause a Search Service
Restart a Search Service
Restart the DRE
Create a New Auto Indexer Job
Create a New Database within the DRE
Create a New HTTP Fetch Job
Delete an Auto Indexer Job
Delete a Database within the DRE
Delete an HTTP Fetch Job | http://e-docs.bea.com/wlp/docs81/adminportal/index.html | 2009-07-04T09:06:57 | crawl-002 | crawl-002-029 | [] | e-docs.bea.com |
Programming WebLogic JTA
WebLogic Server 9.0:
The following sections provide more information about LLR transaction processing in WebLogic Server:
For more information about the advantages of LLR, see "Understanding the Logging Last Resource Transaction Option" in Configuring and Managing WebLogic JDBC.. Also see "Understanding the Logging Last Resource Transaction Option" in Configuring and Managing WebLogic JDBC.
For a list of data source configuration and usage requirements and limitations, see: DBMSs, the maximum length for a table name is 18 characters. You should consider maximum table name length when configuring your environment.
Note the following restrictions with regard to LLR database tables:
To change the table name used to store transaction log records for the resource, follow these steps:.
Caution: Do not manually delete the LLR transaction records or the LLR table in a production system. Doing so can lead to silent heuristic transaction failures which will be not logged.
In general, the WebLogic transaction manager processes transaction failures in the following way::
This section includes the following information:. | http://e-docs.bea.com/wls/docs91/jta/llr.html | 2009-07-04T09:07:21 | crawl-002 | crawl-002-029 | [] | e-docs.bea.com |
Using WebLogic Server Clusters
These topics recommend design and deployment practices that maximize the scalability, reliability, and performance of applications hosted by a WebLogic Server Cluster.
The following sections describe general design guidelines for clustered applications.
Distributed systems are complicated by nature. For a variety of reasons, make simplicity a primary design goal. Minimize "moving parts" and do not distribute algorithms across multiple objects.
You improve performance and reduce the effects of failures by minimizing remote calls.
Avoid accessing EJB entity beans from client or servlet code. Instead, use a session bean, referred to as a facade, to contain complex interactions and reduce calls from web applications to RMI objects. When a client application accesses an entity bean directly, each getter method is a remote call. A session facade bean can access the entity bean locally, collect the data in a structure, and return it by value.
EJBs consume significant system resources and network bandwidth to execute—they are unlikely to be the appropriate implementation for every object in an application.
Use EJBs to model logical groupings of an information and associated business logic. For example, use an EJB to model a logical subset of the line items on an invoice—for instance, items to which discounts, rebates, taxes, or other adjustments apply.
In contrast, an individual line item in an invoice is fine-grained—implementing it as an EJB wastes network resources. Implement objects that simply represents a set of data fields, which require only
get and
set functionality, as transfer objects.
Transfer objects (sometimes referred to as value objects or helper classes) are good for modeling entities that contain a group of attributes that are always accessed together. A transfer object is a serializable class within an EJB that groups related attributes, forming a composite value. This class is used as the return type of a remote business method.
Clients receive instances of this class by calling coarse-grained business methods, and then locally access the fine-grained values within the transfer object. Fetching multiple values in one server round-trip decreases network traffic and minimizes latency and server resource usage.
Avoid transactions that span multiple server instances. Distributed transactions issue remote calls and consume network bandwidth and overhead for resource coordination.
The following sections describe design considerations for clustered servlets and JSPs.
To enable automatic failover of servlets and JSPs, session state must persist in memory. For instructions to configure in-memory replication for HTTP session states, see Requirements for HTTP Session State Replication and Configure In-Memory HTTP Replication.
Failures or impatient users can result in duplicate servlet requests. Design servlets to tolerate duplicate requests.
See Programming Considerations for Clustered Servlets and JSPs.
The following sections describe design considerations for clustered RMI objects.
It is not always possible to determine when a server instance failed with respect to the work it was doing at the time of failure. For instance, if a server instance fails after handling a client request but before returning the response, there is no way to tell that the request was handled. A user that does not get a response retries, resulting in an additional request.
Failover for RMI objects requires that methods be idempotent. An idempotent method is one that can be repeated with no negative side-effects.
The following table summarizes usage and configuration guidelines for EJBs. For a list of configurable cluster behaviors, see Table 8-2 on page 6.
Table 8-1 EJB Types and Guidelines
The following table lists key behaviors that you can configure for a cluster, and the associated method of configuration.
Table 8-2 Cluster-Related Configuration Options
Different services in a WebLogic Server cluster provide varying types and degrees of state management. This list defines four categories of service that are distinguished by how they maintain state in memory or persistent storage:
Table 8-3 summarizes how J2EE and WebLogic support different each of these categories of service.
Note: In Table 8-3, support for stateless and conversational services is described for two types of clients:
Table 8-3 J2EE and WebLogic Support for Service Types
Deploy clusterable objects to the cluster, rather than to individual Managed Servers in the cluster. For information and recommendations, see:
For information about alternative cluster architectures, load balancing options, and security options, see Cluster Architectures.
The following sections present considerations to keep in mind when planning and configuring a cluster.
For guidelines for how to name and address server instances in cluster, see Identify Names and Addresses..
If your configuration includes a firewall, locate your proxy server or load-balancer in your DMZ, and the cluster, both Web and EJB containers, behind the firewall. Web containers in DMZ are not recommended. See Basic Firewall for Proxy Architectures.
If you place a firewall between the servlet cluster and object cluster in a multi-tier architecture,.
Notes: specify an IP address for
ExternalDNSName.
ExternalDNSName 8-1. | http://e-docs.bea.com/wls/docs81/cluster/best.html | 2009-07-04T09:14:59 | crawl-002 | crawl-002-029 | [] | e-docs.bea.com |
All content with label as5+cachestore+cloud+eviction+gridfs+infinispan+infinispan_user_guide+loader+lock_striping+resteasy+write_behind.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, intro, archetype, jbossas, nexus, guide, schema, cache,
amazon, s3, grid, test, jcache, api, xsd, maven, documentation, youtube, userguide, ec2, 缓存, hibernate, aws, interface, custom_interceptor, setup, clustering, mongodb, concurrency, out_of_memory, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, write_through, mvcc, notification, tutorial, presentation, xml, read_committed, jbosscache3x, distribution, oauth, data_grid, cacheloader, hibernate_search, cluster, development, websocket, transaction, async, interactive, xaresource, build, installation, client, migration, non-blocking, jpa, filesystem, tx, user_guide, oauth_saml, gui_demo, eventing, client_server, testng, standalone, hotrod, webdav, snapshot, repeatable_read, docs, batching, consistent_hash, store, jta, faq, 2lcache, jsr-107, jgroups, lucene, locking, rest, hot_rod
more »
( - as5, - cachestore, - cloud, - eviction, - gridfs, - infinispan, - infinispan_user_guide, - loader, - lock_striping, - resteasy, - write_behind )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+cachestore+cloud+eviction+gridfs+infinispan+infinispan_user_guide+loader+lock_striping+resteasy+write_behind | 2019-10-14T01:37:18 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.jboss.org |
Numbers
Numbers are a big part of mobile messaging. Our powerful API lets you route large volumes of inbound text messages seamlessly and reliably, using local numbers from all over the world.
In most countries, you can choose either shortcode or long code numbers. Typically, length of a shortcode will be between 4 and 6 digits and the length of a long code will be over 10 digits. This enables 2-way interaction with your customers, enabling a new range of possibilities with your application.
Our Numbers product is coming soon. | https://docs.messagecloud.com/article/125-numbers | 2019-10-14T02:20:06 | CC-MAIN-2019-43 | 1570986648481.7 | [array(['http://928e5925ade456be96c9-b8acf48f351eb653183f04ab447fd892.ssl.cf5.rackcdn.com/numbers-cover-image.png',
None], dtype=object) ] | docs.messagecloud.com |
Contents Performance Analytics and Reporting Previous Topic Next Topic Metrics Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Metrics A metric measures and evaluates the effectiveness of IT service management processes. For example, a metric could measure the effectiveness of the incident resolution process by calculating how long it takes to resolve an incident. Sometimes a metric can be easily obtained from the data. For example, to find the number of incidents that were created today, a report will simply count the number of incidents in the incident table with a Created date of today. Often, however, metrics need to will be gathered, and instances of the metric will be calculated and stored. By an instance we mean a specific occurrence. For example, the "Assigned to Duration" metric measures the duration of time an incident is assigned to an individual. The metric is defined by creating a metric definition of type "Field value duration" and selecting the "Assigned to" field from the Incident table. A metric instance is then created for each incident assignment showing its duration. Reporting on the duration of incident assignments becomes easy. Reporting on a metric is done using the database view that links the metric to the table on which it is defined. Create a metricCreate a metric definition for a task table.Sample field value duration scriptReview the existing Incident Open metric definition to see how you can create your own custom metric.Metric instanceA metric instance is a record in the metric_instance table. A record holds one instance of a metric. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-performance-analytics-and-reporting/page/use/reporting/concept/c_MetricDefinitionSupport.html | 2019-10-14T02:05:38 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.servicenow.com |
A Particle System Force Field component can apply forces to particles belonging.
2018–10–19 編集レビュー を行って修正されたページ
Particle System Force Field added in 2018.3 NewIn20183 | https://docs.unity3d.com/ja/2018.3/Manual/class-ParticleSystemForceField.html | 2019-10-14T01:37:17 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.unity3d.com |
@Retention(RUNTIME) @Target(METHOD) @Incubating public @interface Mutate
RuleSourcemethod rule carrying this annotation mutates the rule subject.
Mutate rules execute after
Defaults rules, but before
Finalize rules.
The first parameter of the rule is the rule subject, which is mutable for the duration of the rule.
Please see
RuleSource for more information on method rules. | https://docs.gradle.org/current/javadoc/org/gradle/model/Mutate.html | 2019-10-14T02:26:27 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.gradle.org |
Pages look wrong or menu links to blank page, wrong page etc
Problem Description:
As you are building your site, you might click on a link in one or your menus and the page does not look as expected, or is a blank page with no widgets and just a header and footer.
Cause:
The most common cause is a bad link – WordPress assigns a permalink to each page you create automatically to ensure your pages are different, but the titles are sometimes the same. This can lead to the wrong page being linked in your menu or WooCommerce settings. Bad links happen when you duplicate pages or create multiple pages with the same title – they end up with slugs like home-1 home-2 or contact-us-2 and can be easily confused with a different version of the page.
Solution:
- Delete duplicates and empty your trash
- Fix the slugs on your final pages
- Edit your menus to ensure the links go to the correct pages.
For detailed steps on verifying your pages and menus, see How to Verify Page Slugs & Delete Duplicates Safely
Did you know?
Our friends at Jetpack are doing some incredible work to improve the WordPress experience. Check out Jetpack and improve your site's security, speed and reliability.
| https://docs.layerswp.com/doc/pages-look-wrong-or-menu-links-to-blank-page-wrong-page-etc/ | 2019-10-14T01:42:28 | CC-MAIN-2019-43 | 1570986648481.7 | [array(['https://refer.wordpress.com/wp-content/uploads/2018/02/leaderboard-light.png',
'Jetpack Jetpack'], dtype=object) ] | docs.layerswp.com |