Restart components
Use Restart options to start service components using new configuration properties.
Edit and save configurations for one or more services.
- Click the indicated Components or Hosts links to view details about the requested restart.
- Click Restart and then click the appropriate action. For example, options to restart YARN components include the following:
Austral Retractaway 50 Clothesline RA50CC Product Video
Hello, and welcome to Lifestyle Clotheslines. This is a product video focusing on one of our retractable clotheslines by Austral, and this is the Retractaway 50. So very similar to the Retractaway 40 model, this just has an extra length maximum extension of up to 10m for this product.
So everything else is essentially the same as the Retractaway 40. It's the same dimensions, five polycord lines, as with the Retractaway 40.
The same color choices, of course: these come in a Colorbond powder-coated finish in either Classic Cream or Woodland Grey. Incidentally, if you do order posts and things to go with it, they will be colored to match the clothesline cabinet as well.
So, as I mentioned before, the dimensions are all the same. So you do have five lines, and if you do pull out the maximum extension on the Retractaway 50, which is 10m in this case, that will give you a total line space of 50m. Of course, it has a minimum extension, as well, of only 2m, so you must pull the cords out at least 2m and attach it to the other side, whether a wall or a post, so that the lines can actually be tensioned. As with all Austral outdoor clotheslines, they do have a 10-year warranty on its construction. And this model is generally suited for four to five or more people if you can pull it out that maximum 10m.
So, as I mentioned, same dimensions as the Retractaway 40, just the extra length cord going to 10m. So you get a length side-to-side of 80cm or 800mm, and then top-to-bottom and front-to-back of 12.5cm or 125mm.
It's the same working mechanism as the Retractaway 40, so you've got a tension knob and a locking lever. So you simply push the lever to the unlocked position when you're pulling the clothesline bar out, hook it onto the other side, and then turn the knob clockwise to add tension to the lines, and then flick the lever up to the lock position and you're ready to go.
The little steel bar just there is just what you use to hook onto the receiving bracket at the other end when you do attach that bracket, whether it can be onto a post or a wall. As you can see there, the little bracket there has got the little handle attached to it, and that's what holds the clothesline in place for you.
All the lines are tied onto the bar manually, so if ever you do need to do some manual re-tensioning, you can simply untie the cords there individually and just re-pull them through, add some tension to the lines manually, and then just tie them back off again as they were.
There's a nice closeup shot showing the handle bar attached to the receiving bracket.
And, as I mentioned earlier, several different positions or different types of installations you can do with a retractable, so you've got your standard wall-to-wall installation, post-to-post, post-to-wall, or wall-to-post. Please keep in mind, when attaching the clothesline cabinet onto a post, you will require the optional mount bar bracket so that the clothesline cabinet can be adapted to the post.
As with all of our products here at Lifestyle Clotheslines, we do offer our 100-day Happiness Guarantee, so if you're not satisfied, just give us a call or send us an e-mail and let us know.
If you do need any more information about this product or anything else, just give us a call, 1300 798 779. You can also send us an e-mail or jump onto our live chat during the day. | https://docs.lifestyleclotheslines.com.au/article/1035-austral-retractaway-50-clothesline-ra50cc-product-video | 2020-09-18T17:28:34 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.lifestyleclotheslines.com.au |
1 Introduction
The SAP Logging Connector allows a Mendix app to output logs in a format supported by the Kibana dashboard provided by the SAP Cloud Platform Application Logging service. Without this connector, logs sent to Kibana will not have the correct structure and log level.
By using this connector, logs will be output in a JSON format with the following fields:
- msg - the actual log message
- level - the log level
- written_at - the log timestamp as reported by the Mendix app
- written_ts - the log timestamp which can be used for ordering the logs
- stacktrace - the stack trace attached to the log message (if it exists)
In addition, the Connector supports multiline log messages.
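For illustration, a single log line produced by the connector might look roughly like the following. The field values here are purely made up, and the SAP Cloud Platform Application Logging service may add further fields of its own:

```json
{
  "msg": "Login successful for user 'jdoe'",
  "level": "INFO",
  "written_at": "2019-06-12T09:41:02.123Z",
  "written_ts": 1560332462123000000,
  "stacktrace": null
}
```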
2 Getting the SAP Logging Connector
To use the SAP Logging connector, you need to import it into your app from the App Store. For more information on importing modules from the App Store, see How to Use App Store Content in Studio Pro.
3 Using the Connector
To format all the log messages, the SAP Logging Connector needs to be initialized during the startup of the Mendix application.
To initialize the connector, do the following:
Open Project … > Settings in the Project Explorer:
Switch to the Runtime tab.
Go to the After startup microflow by clicking Show next to the After startup microflow:
If there's no existing microflow (as indicated by the text (none) instead of a microflow name), click Select… and create a new microflow by clicking New:
Drag and drop the RegisterSubscriber action at the end of the After startup microflow:
Double-click the RegisterSubscriber action and make sure that Log level is set to the constant SapLogLevel:
Edit the constant SapLogLevel to select the minimum log level which you want to send to the SAP Cloud Platform Application Logging service. The supported log levels (case-insensitive) are Debug, Trace, Info, Warning, Error, and Critical.
Now, when the application is started, it will produce logs in the JSON format supported by Kibana.
4 Notes
- Due to technical limitations, the SAP Logging Connector is activated with a 5 second delay. This means that logging configuration is updated after the RegisterSubscriber action is completed.
- The RegisterSubscriber action checks to see if the Mendix application is running in an SAP environment with the SAP Cloud Platform Application Logging service. If the SAP Cloud Platform Application Logging service cannot be found, RegisterSubscriber assumes that the app is running locally and doesn’t change the logging configuration.
- When log messages are generated rapidly, it is possible that Kibana will display them in the wrong order. The written_at field can be used to sort the log messages.
4.7. Changing Target Machine
All of our examples so far have been run locally. It's time to run something on an HPC! One of the features of Ensemble Toolkit is that you can submit tasks on another machine remotely from your local machine. This has some requirements: you need to have passwordless ssh or gsissh access to the target machine. If you don't have such access, we discuss the setup here. You also need to confirm that RP and Ensemble Toolkit are supported on this machine. A list of supported machines and how to get support for new machines is discussed here.
Note
The reader is assumed to be familiar with the PST Model and to have read through the Introduction of Ensemble Toolkit.
Note
This chapter assumes that you have successfully installed Ensemble Toolkit, if not see Installation.
Once you have passwordless access to another machine, switching from one target machine to another is quite simple. We simply re-describe the resource dictionary that is used to create the Resource Manager. For example, in order to run on the XSEDE Stampede cluster, we describe the resource dictionary as follows:
    password=password)

# Assign the workflow as a set or list of Pipelines to the Application Manager
appman.workflow = set([p])

# Create a dictionary to describe our resource request for XSEDE Stampede
res_dict = {
    'resource': 'xsede.comet',
    'walltime': 10,
    'cpus': 16,
    'project': 'unc100',
    'schema': 'gsissh'
}
You can download the complete code discussed in this section here, or find it in your virtualenv under share/radical.entk/user_guide/scripts.
python change_target.py

# Create a dictionary to describe our resource request for XSEDE Stampede
res_dict = {
    'resource': 'xsede.comet',
    'walltime': 10,
    'cpus': 16,
    'project': 'unc100',
    'schema': 'gsissh'
}

# Assign resource request description to the Application Manager
appman.resource_desc = res_dict

# Run the Application Manager
appman.run()
Form to preview all approvers assigned to work on an approval request
The approval server uses the AP:PreviewInfo form to store preview data when a process is configured to generate previews. Process administrators can use this form to preview all the approvers assigned to work on an approval request.
You must enter data into all the visible fields to search the AP:PreviewInfo form. See Configuring Approval Server to work with flowcharts.
AP:PreviewInfo form
Fields on the AP:PreviewInfo form
Share Citrix user profiles on multiple file servers
The simplest implementation of Profile Management is one in which the user store is on one file server that covers all users in one geographical location. This topic describes a more distributed environment involving multiple file servers. For information on highly distributed environments, see High availability and disaster recovery with Profile Management.
Note: Disable server-side file quotas for the user store because filling the quota causes data loss and requires the profile to be reset. It is better to limit the amount of personal data held in profiles (for example, Documents, Music and Pictures) by using folder redirection to a separate volume that does have server-side file quotas enabled.
The user store can be located across multiple file servers, which has benefits in large deployments where many profiles must be shared across the network. Profile Management defines the user store with a single setting, Path to user store, so you define multiple file servers by adding attributes to this setting. You can use any LDAP attributes that are defined in the user schema in Active Directory. For details, see the Active Directory schema documentation.
Suppose that your users are in schools located in different cities and the #l# attribute (lower case L, for location) is configured to represent this. You have locations in London, Paris, and Madrid. You configure the path to the user store as:
\\#l#.userstore.myschools.net\profile\#sAMAccountName#\%ProfileVer%\
For Paris, this is expanded to:
\\Paris.userstore.myschools.net\profile\JohnSmith\v1\
You then divide up your cities across the available servers, for example setting up Paris.userstore.myschools.net in your DNS to point to Server1.
Before using any attribute in this way, check all of its values. They must only contain characters that can be used as part of a server name. For example, values for #l# might contain spaces or be too long.
If you can’t use the #l# attribute, examine your AD user schema for other attributes such as #company# or #department# that achieve a similar partitioning.
You can also create custom attributes. Use Active Directory Explorer, which is a Sysinternals tool, to find which attributes have been defined for any particular domain. Active Directory Explorer is available from the Microsoft Sysinternals website.
Note: Do not use user environment variables such as %homeshare% to distinguish profiles or servers. Profile Management recognizes system environment variables but not user environment variables. You can, however, use the related Active Directory property, #homeDirectory#. So, if you want to store profiles on the same share as the users' HOME directories, set the path to the user store as #homeDirectory#\profiles.
The use of variables in the path to the user store is described in the following topics:
Share Citrix user profiles on multiple file servers. | https://docs.citrix.com/en-us/profile-management/current-release/plan/multiple-file-servers.html | 2020-09-18T18:10:34 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.citrix.com |
Feature Requests
To create a feature request, follow the steps below:
- Go to TORO's issue-tracking board and log in.
- Click the Create Issue button at the navigation bar and then fill in the details of your request in the appearing form. Ensure that the issue type is set to Feature Request and the project should be set to DEV: Docs to OpenAPI.
- Click the Submit button to publish your ticket.
Ideally the issue should contain the following information:
- Purpose of the new feature
- How the feature could be used | https://docs.torocloud.com/docs-to-openapi/releases/feature-requests/ | 2020-09-18T17:49:21 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.torocloud.com |
Uninstall
Although all options, menu items and menu categories get deleted from the database, along with the table that holds any orders you may have received, you will manually have to delete any additional pages (such as the order page, for example) that have been created, as the plugin has no way of knowing if you are using this page elsewhere or have changed its content/name.

The same goes for the three example icons that come with this plugin, as you might have used them elsewhere.
Enable the IAM Dev Portal
To enable the Dev Portal, the following properties must be set in the IAM configuration file (kong.conf):

portal = on
portal_gui_protocol = http
portal_gui_host = localhost:8003
portal_emails_from = <[email protected]>
portal_emails_reply_to = <[email protected]>
By default, SMTP mocking is enabled, so IAM will not attempt to send actual emails. This is useful for testing purposes.
When portal_gui_use_subdomains is enabled, Dev Portal workspace URLs will be included as subdomains.

For more information on the Dev Portal properties available, check out the IAM Configuration Property Reference.
Note: Not all deployments of IAM use a configuration file. If this describes you (or you are unsure), please reference the IAM configuration docs in order to implement this step.
Next Steps
Networking
Review how the Dev Portal config variables are used | https://docs.intersystems.com/irislatest/csp/docbook/apimgr/enterprise/0.34-x/developer-portal/configuration/getting-started.html | 2020-09-18T17:30:19 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.intersystems.com |
About Deformation Animation
Just like with animating pegs and drawing layers, you can animate your deformers by creating keyframes on their corresponding layers in the Timeline. Animating deformers works exactly like making modifications to a deformer, except it requires using the Transform tool instead of the Rigging tool. When the Transform tool is selected, deformation controls in the Camera view display in green, which means they are in animation mode, whereas when the Rigging tool is selected, they display in red, meaning they are in rigging mode.
Configuring Schemas
The Schemas tab is where we view and define models to be used as request and/or response bodies. A schema defines properties that make up an object. It is considered good practice to declare reusable models as Schema Objects in the OpenAPI specification to avoid duplication and inconsistencies.
Adding a New Schema
To create a schema:
- Go to the Schemas tab.
- Click the green plus button at the top.
- Specify the schema name.
- Click the floppy disk button to save.
Only after your schema has been saved can you proceed to defining object properties.
Property Fields
A schema is a collection of property definitions. A property is defined through the following fields:
Fields with an asterisk (*) are required
In order to add a property to the schema, required fields must be provided.
User Interface
(1) Title field
The name of the schema.
(2) Properties section
This part of the tab shows the fields associated with each property.
(3) Add Property button

Use this button to add a new property to the schema.
(4) Save Property button
Once you've filled up the fields for a new property, or edited the field values of an existing property, click this button to save the new property or your updates.
(5) Delete Selected Property button
Use this button to delete the currently selected property, whose field values are displayed in the Properties section.
(6) Delete All Properties button

Use this button to delete all properties defined in the schema.
(7) More Properties toggle
Click this label to show more property fields.
(8) Properties tree
This section shows all properties defined in the schema. Click any of the properties in the tree to select it.
Adding a New Property Manually
- Click the green plus button at the top of the Properties section.
- Define the property's fields by populating the text inputs in the Properties section, especially required fields. To show more property fields, click More Properties.
- Click the floppy disk button to save.
Define your object property's properties
To add properties to an object property, select your object property from the Properties tree and then click the Add Property button.
Editing an Existing Property
- Select the property you want to edit from the Properties tree.
- Edit fields using the text inputs.
- Click the floppy disk button to save.
Deleting an Existing Property
- Select the property you want to edit from the Properties tree.
- Click the red button, with an 'x' on top of the Properties section.
- Confirm your action.
To delete all properties, click the red button with two x's.
Defining Properties Through JSON Payloads
You can import properties defined in a JSON file or text. To do this:
- Select your schema.
Click the JSON Payload button.
Click the Browse button to choose the JSON file or provide a JSON text using the text area in the modal.
- Click Convert.
- Confirm your action.
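As an illustration, a hypothetical payload such as the one below could be imported; the converter would then presumably create one property per top-level field (id, name, price, inStock), with each property's type inferred from its value:

```json
{
  "id": 1001,
  "name": "Blue T-Shirt",
  "price": 14.99,
  "inStock": true
}
```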
Tuning Tomcat
Martini ships with Apache Tomcat, and uses it for serving all HTTP requests. Since it's included in Martini, configuring it is slightly different than usual, but it's still easy.
HTTP and HTTPS ports
If you would like to configure the HTTP and HTTPS ports, please refer to this document.
Properties related to the HTTP connector in Tomcat that aren't already documented here are simply added to the Martini application properties using either a server.tomcat.http or server.tomcat.https prefix, depending on which connector you would like to configure.

For example, if you would like to enable TRACE HTTP requests to Martini, add a property called server.tomcat.http.allowTrace and set it to true.
Properties which may be of interest when tuning Martini's Tomcat server include (in alphabetical order):
Another example could be as follows:
- HTTP running on port 80
- HTTP not allowed more than 10 threads
- HTTPS running on port 443
- HTTPS using a keystore file located at /home/martini/keystore, with password 12345ABCDE
- HTTPS has 5 threads that accept connections
When configured via properties, the above would look like: | https://docs.torocloud.com/martini/latest/setup-and-administration/performance-tuning/tomcat/ | 2020-09-18T17:38:30 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.torocloud.com |
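A minimal sketch of such a configuration is shown below. The property names after the server.tomcat.http / server.tomcat.https prefixes are assumed to follow the standard Tomcat connector attribute names (maxThreads, keystoreFile, keystorePass, acceptorThreadCount); the HTTP and HTTPS ports themselves are configured as described in the document linked at the top of this page.

```properties
# HTTP connector: limit request processing to 10 threads (assumed attribute name: maxThreads)
server.tomcat.http.maxThreads=10

# HTTPS connector: keystore location and password (assumed attribute names: keystoreFile, keystorePass)
server.tomcat.https.keystoreFile=/home/martini/keystore
server.tomcat.https.keystorePass=12345ABCDE

# HTTPS connector: 5 threads accepting connections (assumed attribute name: acceptorThreadCount)
server.tomcat.https.acceptorThreadCount=5
```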
If your μMETOS Blue was intended for soil moisture monitoring, it comes with a set of soil moisture sensors connected to the main unit. The sensors connected to it can be EC 5, HS10, 5TE from Decagon Limited and/or Watermark sensors or vacuum tensiometers.

These sensors can be connected to the μMETOS Blue itself or, if it is needed because of the installation or because of the number of sensors, they can be connected to an extension box on a serial bus connection.
Identify - Attribute query
Introduction
The SAML 2.0 attribute query feature extends the capability of the SAML 2.0 protocol. The traditional SAML 2.0 function requires that the identity provider sends the federation partner all required user attributes. The attributes are included as part of the assertion generated during the single sign-on flow.
Support for attribute query provides a set of core attributes when the initial authentication context is established. You can query user information as needed during the application runtime operation.
Attribute Services Administration
You can find Attribute Services feature at:
Attribute Service connection has the basic settings like:
To see the additional settings you will first have to save a connection with the basic settings then open the connection for editing again. The configuration settings offered by Attribute Services are:
Support API for AttributeServices connection
Safewhere*Identify supports APIs for AttributeServices. With the REST API, we can post, put, get, patch, and delete an AttributeServices connection the same as for other existing connections.

For more details, you can open /admin/swagger/ui/index#/AttributeServices to view descriptions and try them:
Setup Attribute Services flow
The following steps describe the process for querying Attribute Services from Safewhere Identify. It is recommended that you read the following document before starting:
- Saml2Wif installation guideline: Please take special notice on all the PowerShell information in this document.
- How to connect Safewhere*Identify to AD FS 2.0
The login flow that we build in this guideline is:
- The main flow: Saml2Wif => Safewhere Identify => ADFS (Upstream IdP)
- The second flow: Safewhere Identify => another Identify (AttributeService IdP) to query more attributes
- Create Attribute Service connection at Identify
- Create new attribute service connection, enter the name for it (in this example, I use “AttrSrv” for the name), check enable and save it.
- Given that your attribute service IdP name is identifydev56. After saving attribute service, you need to update the value for the configuration fields:
- Entity ID: replace it with the value of the attribute service IdP entity ID
- Signing certificates: add the thumbprint of signing certificate using for the attribute service IdP
- Encryption certificates: add the thumbprint of encryption certificate using for the attribute service IdP.
- Attribute service setting:
- Location: set the URL value:
- Binding: set the binding value: urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST
- Create External Claims transformation at Identify
- Open Transformation list and create a new external claim transformation with the settings:
- Name: give the name you like
- Transformation type name: select Safewhere.IdentityProvider.Saml2.StandardAttributeServiceQueryClaimsTransformation, Safewhere.IdentityProvider.Saml2
- Continue on error: true
- Additional settings:
- Mapping 1:
- Key: AttributeServiceConnectionName
- Value: input attribute service name you created at step 1
- Mapping 2
- Key: RequestedAttribute1
- Value: input claimType that you need to query value
- Create NameID Transformation
Create an NameID transformation like this:
- Apply transformations to Saml2 Authentication connection
Open the SAML2.0 authentication connection which we use on this login flow, add 2 claim transformations that we created at the step above:
- Create SAML2.0 Protocol connection at AttributeService IdP
Create a SAML2.0 Protocol connection at AttributeService Identify instance and import metadata of the Identify SP to it.
Note: you need to specify the Attribute name which specifies subject claim type / Default subject claim type in the SAML2.0 Protocol settings so that the user can be identified by the information from the Attribute Query Subject. Otherwise, the AttributeService IdP will throw an error because it cannot find the user.
- Run the flow and check the result
Before you can configure a platform service for a bucket, you must configure at least one endpoint to be the destination for the platform service.
Access to platform services is enabled on a per-tenant basis by a StorageGRID administrator. To create or use a platform services endpoint, you must be a tenant user with Manage Endpoints or Root Access permission, in a grid whose networking has been configured to allow Storage Nodes to access external endpoint resources. Contact your StorageGRID administrator for more information. | https://docs.netapp.com/sgws-115/topic/com.netapp.doc.sg-tenant-admin/GUID-3FE52D74-B54D-4092-9977-0CFD57E6A2CD.html?lang=en | 2021-06-13T03:03:29 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.netapp.com |
RT-Theme 18 Footer area is divided into two sections. The Footer Area containing the footer Widgets and Footer Bottom Area containing the copyrights & the footer custom menu bar.
- Go to the WordPress Menus system and create a custom menu container, or use the default RT-Theme Footer Navigation container. Add your menu items to that menu container as explained in the how-to “Creating Navigation Menus” section here in the setup assistant.
- Save the menu and assign it to a theme menu footer location.
- View your website. If all done correctly it will show the menu in the footer area
The RT-Theme 18 footer options can be found here:
In the RT-Theme 18 footer options, you can set the copyright text and the number of columns to be used in the footer area for displaying widgets just above the copyright and the navigation menu. The footer widget areas can be found in the WordPress Admin / Appearance / Widgets section.
Any widgets added in the available footer widget area’s container will show in the theme footer area columns in the front of your website.
Note: If you set the footer columns to 3 (1/3) and you add widgets into the 4th footer widget area container in the WordPress Admin Appearance Widgets section they will not show.
Note for advanced users: By using a plugin from wordpress.org which adds widget logic to your widgets you can make widgets in the footer area appear or disappear on a page id basis. | https://docs.rtthemes.com/document/footer-contents-and-options/ | 2021-06-13T01:41:15 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.rtthemes.com |
Staff Members
Manage your employees and their permissions. The Staff Members page gives you an alphabetized list of all people who have administrative access to your dashboard.
Account status
New users are automatically active in the system. To deactivate a user’s access without deleting the account, uncheck the User is Active box in the Account Status card.
How to

How to set permissions

Use the Permissions card to select the permission groups the user is assigned to. See Permission Groups for instructions on how to manage your permission groups.

Note: You can only manage groups (including adding and removing members) that grant a subset of your effective permissions. This is a security precaution that prevents users from escalating their permissions beyond what was explicitly granted.

How to add staff members

Click Add Staff Member above the list of users. Fill in the first and last name of the new staff member and the email address to which any notifications will be sent.

How to edit or delete users
To edit a user account, access it from the staff members list, make any relevant changes, and then click Save in the footer.
To delete a user, click Delete on the left side of the footer and then confirm the removal.
Map groups on a SAML identity provider to Splunk roles
After you configure a Splunk platform deployment to use a Security Assertion Markup Language (SAML) identity provider (IdP) for authentication, you can then authorize groups on that IdP to log into the Splunk platform instance by mapping those groups to Splunk roles. You can map multiple groups on the IdP to a single Splunk role.
This is the only way to give users on your IdP access to the Splunk platform deployment. You cannot give individual users on the IdP access to the Splunk platform deployment unless you create a group on the IdP for the user, or add them to an existing group.
Prerequisites for mapping SAML groups to Splunk roles
Confirm that you have completed the following steps before you attempt to map groups on your IdP to roles on your Splunk platform deployment:
- The identity provider you have is SAML version 2.0 compliant
- You have configured your IdP to supply the necessary attributes in an assertion that it sends
- You have configured your Splunk platform deployment to use the IdP as an authentication scheme.
For more specifics on these prerequisites, see Configure single sign-on with SAML.
Map groups on a SAML identity provider to Splunk roles
- In the system bar, click Settings > Authentication Methods.
- Under External, confirm that the SAML checkbox is selected.
- Click Configure Splunk to use SAML.
- Click Cancel to close the SAML Configuration dialog box and show the SAML groups page.
- Click New Group, or click Edit if you want to modify an existing SAML group.
- If you are creating a new group, in the Group Name field, enter the name of the group. Typically, this is the name of a group on the IdP.
- In the Splunk Roles section, choose the Splunk roles to which you want this group to map by clicking one or more of the roles in the Available item(s) column.
- Click Save. Splunk Web saves the group and returns you to the SAML Groups page.
After you configure SAML SSO and map groups to Splunk roles, you can distribute the login URL to users on your identity provider.
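For deployments that manage settings through configuration files rather than Splunk Web, the same mapping is typically expressed in authentication.conf under a roleMap stanza. The sketch below assumes the default SAML authentication settings stanza name, and the IdP group names are placeholders:

```
[roleMap_SAML]
admin = IdP-Admins
power = IdP-PowerUsers
user = IdP-Users;IdP-Contractors
```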
float Value between 0.0 and 1.0. (Return value might be slightly beyond 1.0.)
Generates 2D Perlin noise.
using UnityEngine;

// Create a texture and fill it with Perlin noise.
// Try varying the xOrg, yOrg and scale values in the inspector
// while in Play mode to see the effect they have on the noise.

public class ExampleScript : MonoBehaviour
{
    // Width and height of the texture in pixels.
    public int pixWidth;
    public int pixHeight;

    // The origin of the sampled area in the plane.
    public float xOrg;
    public float yOrg;

    // The number of cycles of the basic noise pattern that are repeated
    // over the width and height of the texture.
    public float scale = 1.0F;

    private Texture2D noiseTex;
    private Color[] pix;
    private Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();

        // Set up the texture and a Color array to hold pixels during processing.
        noiseTex = new Texture2D(pixWidth, pixHeight);
        pix = new Color[noiseTex.width * noiseTex.height];
        rend.material.mainTexture = noiseTex;
    }

    void CalcNoise()
    {
        // For each pixel in the texture...
        float y = 0.0F;

        while (y < noiseTex.height)
        {
            float x = 0.0F;
            while (x < noiseTex.width)
            {
                // Sample the noise plane and write a greyscale pixel.
                float xCoord = xOrg + x / noiseTex.width * scale;
                float yCoord = yOrg + y / noiseTex.height * scale;
                float sample = Mathf.PerlinNoise(xCoord, yCoord);
                pix[(int)y * noiseTex.width + (int)x] = new Color(sample, sample, sample);
                x++;
            }
            y++;
        }

        // Copy the pixel data to the texture and load it into the GPU.
        noiseTex.SetPixels(pix);
        noiseTex.Apply();
    }

    void Update()
    {
        CalcNoise();
    }
}
Although the noise plane is two-dimensional, it is easy to use just a single one-dimensional line through the pattern, say for animation effects.
using UnityEngine;

public class Example : MonoBehaviour
{
    // "Bobbing" animation from 1D Perlin noise.

    // Range over which height varies.
    float heightScale = 1.0f;

    // Distance covered per second along X axis of Perlin plane.
    float xScale = 1.0f;

    void Update()
    {
        // Sample a 1D slice of the noise plane and use it as this object's Y position.
        float height = heightScale * Mathf.PerlinNoise(Time.time * xScale, 0.0f);
        Vector3 pos = transform.position;
        pos.y = height;
        transform.position = pos;
    }
}
A set of 20 poses for Genesis 3 Male and Female for use with the Laptop and Table Set. There are 10 individual poses for both the male and female. Also several poses for the laptop, chair and mug. Also included are 2 faces for the male with several zero poses for each. | http://docs.daz3d.com/doku.php/public/read_me/index/45811/start | 2020-08-03T12:02:42 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.daz3d.com |
Site Creation Plus is configured to simplify the overall site creation process in PM Central. No additional configuration of the Web Part is required, however you may want to modify the configuration settings to use another default template, or to predefine the permissions that will be used on the site.
When working with Site Creation Plus keep in mind:
You can configure the web part to display PM Central templates that were customized for your organization.
Customized “Lite” PM Central templates will not be displayed in Site Creation Plus, due to a known limitation. Click here for information on creating sites from a customized Lite template.
The columns available for display in the Project Health section of the Site Creation Plus Web Part are hard coded. Custom columns created in the Project Health list will not be displayed in this Web Part.
When used in PM Central the Site Creation Plus Web Part has configuration options that are specific to the application, such as the inclusion of Project Health information.
The Site Creation Plus Configuration tool pane
Configure Site Creation Plus to use a custom PM Central template
1. Use the Add New Project link under the Central Actions menu or the Create New Project… button on the portfolio home page to access the Site Creation Plus Web Part page
NOTE: The page will need to be in edit mode for you to access the Web Part’s configuration tool pane.
2. Use the selector arrows to add the customized PM Central template(s) to the list of available templates and remove templates that should not be used. (C)
3. Arrange the available templates so the desired default PM Central project template is at the top of the list (C)
4. Optional: Determine which Project Health columns will be available from this web part (D.)
5. Optional: Predefine what permissions will be associated with the site on creation (F & E)
6. Click Save & Close and stop editing the page to see your changes.
How to Add an SVG Logo
If you would like to add an SVG logo to your header, follow these simple step:
1. Install the SVG Support plugin to support this format, because by default, WordPress does not support SVG images.
2. Go to Appearance > Customize > Header > Logo and upload your logo; once uploaded, skip the cropping option and Save Changes.
3. If your logo is too large for the header, use the Max Width option to resize it to the desired size.
And finally, you need to add some CSS either in the style.css file of your child theme or in the Custom CSS/JS section of the customizer:
#site-logo #site-logo-inner a img { height: 40px; }
Replace "40px" with the height of your logo.
That's all!
Types are never explicitly declared as part of the syntax, except as part of a builtin statement. Types are always inferred from the usage of the value. Type inference follows a Hindley-Milner style inference system.
Duration literals are specified with the following units:

1ns // 1 nanosecond
1us // 1 microsecond
1ms // 1 millisecond
1s // 1 second
1m // 1 minute
1h // 1 hour
1d // 1 day
1w // 1 week
1mo // 1 calendar month
1y // 1 calendar year
3d12h4m25s // 3 days, 12 hours, 4 minutes, and 25 seconds.
Bytes types
A bytes type represents a sequence of byte values.
The bytes type name is bytes.
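There is no literal syntax for bytes shown here; a bytes value is typically produced by conversion, for example with the standard library's bytes() conversion function (a minimal sketch, assuming bytes() is available in your Flux version):

```
// Convert a string value to the bytes type
b = bytes(v: "hello")
```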
Polymorphism
Flux types can be polymorphic, meaning that a type may take on many different types. Flux supports let-polymorphism and structural polymorphism.
Let-polymorphism
Let-polymorphism is the concept that each time an identifier is referenced, it may take on a different type. For example:
add = (a,b) => a + b

add(a:1,b:2) // 3
add(a:1.5,b:2.0) // 3.5
The identifiers,
a and
b, in the body of the
add function are used as both
int and
float types.
Structural polymorphism
Structural polymorphism is the concept that structures (objects in Flux) can be used by the same function even if the structures themselves are different. For example:
john = {name:"John", lastName:"Smith"}
jane = {name:"Jane", age:44}

// John and Jane are objects with different types.
// We can still define a function that can operate on both objects safely.

// name returns the name of a person
name = (person) => person.name

name(person:john) // John
name(person:jane) // Jane

device = {id: 125325, lat: 15.6163, lon: 62.6623}

name(person:device) // Type error, "device" does not have a property name.

Objects of differing types can be used as the same type so long as they both contain the necessary properties. Necessary properties are determined by the use of the object. This form of polymorphism means that checks are performed during type inference and not during runtime. Type errors are found and reported before runtime.
GUI login prompt may not re-appear when reconnecting via a web browser after exiting the GUI
When you exit or disconnect from the GUI applet and then try to reconnect from the same web browser session, the login prompt may not appear.
Workaround: Close the web browser, re-open the browser and then connect to the server. When using the Firefox browser, close all Firefox windows and re-open.
GUI does not immediately update IP resource state after network is disconnected and then reconnected
When the primary network between servers in a cluster is disconnected and then reconnected, the IP resource state on a remote GUI client may take as long as 1 minute and 25 seconds to be updated due to a problem in the RMI/TCP layer.
Java Mixed Signed/Unsigned Code Warning – When loading the LifeKeeper Java GUI client applet from a remote system, the following security warning may be displayed:
Enter “Run” and the following dialog will be displayed:
Block? Enter “No” and the LifeKeeper GUI will be allowed to operate.
Recommended Actions: To reduce the number of security warnings, you have two options:
steeleye-lighttpd process fails to start if Port 778 and 779 are in use
If a process is using ports 778 and 779 when steeleye-lighttpd starts up, steeleye-lighttpd fails, which can cause GUI connect failures and resource hierarchy extend issues.
Solution: Set the following tunables on all nodes in the cluster and then restart LifeKeeper on all the nodes:
On-premises XML process customization
Azure DevOps Server 2020 | Azure DevOps Server 2019 | TFS 2018 - TFS 2013
The On-premises XML process model provides support for customizing work tracking objects and Agile tools for a project. With this model, you can update the XML definition of work item types, the process configuration, categories, and more. You can also update the attributes of fields.
You customize your work tracking experience to support your business and reporting needs. The most common customizations include adding a custom field, modifying a work item form, or adding a custom work item type.
Note
For guidance on configuring and customizing your project and teams to support your business needs, review Configuration and customization of Azure Boards.
For Azure DevOps Server 2019 and later versions, you have a choice of process models. When you create a project collection, you'll need to choose between On-premises XML process model and Inheritance process model. To learn more, see Customize work tracking, Choose the process model for your project collection.
Team Foundation Server uses the On-premises XML process model to support customizations. This model relies on updating and importing XML files using the witadmin command line tool.
Important
To customize an Azure DevOps Services project, see About process customization and inherited processes. This article applies to on-premises deployments only.
Supported customizations
You can perform the following tasks when you work with the On-premises XML process model.
Customization sequence
When you manage an on-premises deployment, you customize work tracking objects by exporting their XML definition files, modifying them, and then importing the updated files into your project.
Tip
With witadmin, you can import and export definition files. Other tools you can use include the Process Editor (requires that you have installed a version of Visual Studio). Install the Process Template editor from the Visual Studio Marketplace.
Or, you can use the TFS Team Project Manager, an open-source client available from GitHub.
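As a rough illustration of the export/edit/import round trip with witadmin (the collection URL and project name below are placeholders):

```
witadmin exportwitd /collection:http://your-server:8080/tfs/DefaultCollection /p:YourProject /n:Bug /f:Bug.xml

REM ... edit Bug.xml, for example to add a custom field ...

witadmin importwitd /collection:http://your-server:8080/tfs/DefaultCollection /p:YourProject /f:Bug.xml
```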
Maintenance and upgrade implications
Before you customize, you should understand how your customizations may impact your project when you upgrade your application-tier server.
Upgrades to an on-premises deployment can introduce new features that require updates to the objects used to track work. These objects include work item types, categories, and process configuration. Minimizing changes to the workflow for a WIT or the process configuration can help minimize the work you must do when you upgrade your deployment.
To minimize the amount of manual work you'll need to do after an upgrade, understand which customizations support an easy update path and which do not.
Compatible for quick updating
With the following customizations, you can use the Configure Features Wizard to automatically apply any changes to your project. When you make the following customizations, you might need to modify your custom process for the wizard to run, or you might have to update your
The default configuration for. | https://docs.microsoft.com/de-de/azure/devops/reference/on-premises-xml-process-model?toc=%2Fazure%2Fdevops%2Freference%2Ftoc.json&bc=%2Fazure%2Fdevops%2Freference%2Fbreadcrumb%2Ftoc.json&view=azure-devops-2020&viewFallbackFrom=azure-devops | 2020-08-03T13:10:19 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Frontend Accessibility
Upgrading GOV.UK Frontend
This document explains how we upgraded GovWifi Product Pages to use v3.0 of GOV.UK Frontend. This newer version of the library replaces GOV.UK Elements, Frontend Toolkit and parts of GOV.UK Template.
From a very high level, the overall upgrade process, therefore, involved the following:
- upgrading the NPM package govuk-frontend from 2.x to 3.0
- removing the dependency on the NPM packages govuk-elements-sass and govuk_frontend_toolkit
- fixing any issues arising as a result of these
How we upgraded
Research
We started off by reading through the changelog for GOV.UK Frontend, which explains most of the major changes.
We also looked at the upgrade guide on GOV.UK Design System, which provides more context for the changes required.
Update NPM dependencies
We updated the version of govuk-frontend to 3.0.0, and removed govuk-elements-sass and govuk_frontend_toolkit from package.json, followed by building the app in order to assess the impact of these changes. As expected, the build failed as the changes introduced were quite significant.
Migrate code
At this point, we ported the existing code to use the upgraded library, and removed any usage of the removed libraries. In order to achieve this, the changelog and the upgrade guide were referenced, until we were able to build the app without errors. This was very much an iterative process and involved a lot of repetitive changes, such as changing class names for certain HTML elements.
Fix styling
Since we had to remove all references to the removed libraries, most of the pages of the app had their styling broken, and this is what we fixed next.
The existing codebase included a lot of custom styling, for things such as the site header, which
are now provided by
govuk-frontend as components. Therefore, we ported the views and layout
markups to make use of the components provided by the Design System, and got rid of any unnecessary
and redundant CSS.
Fine tuning
Once we had ported the code over to use components provided by the Design System, we began fine tuning the styling in order to bring back some of the uniqueness of the app. This was kept to a minimum so as to keep the look and feel as consistent with the Design System as possible, which should also make future upgrades easier.
Issues
Some issues arose from our porting effort:
moving to libsassc proved more difficult than expected, as there are some known bugs with the newer gem and govuk-frontend, and some other gems in our dependencies haven't been ported either (govuk-lint, which is being retired in the process);
we missed a very important detail for the design of the new pages: any product in the government that is part of the GaaP effort (Notify, Pay, Registers, Wifi) should follow the example provided in alphagov/product-page-example, with a compact header bar that fits in one row and makes better use of screen estate. It wasn’t documented anywhere but pointed out to us by a designer in Notify.
some of the patterns we use haven’t been ratified by the design system yet: the side navigation is still custom (as is the design system’s own sidebar) and roughly follows the example of the other GaaP products. | https://govwifi-dev-docs.cloudapps.digital/accessibility.html | 2020-08-03T12:04:28 | CC-MAIN-2020-34 | 1596439735810.18 | [] | govwifi-dev-docs.cloudapps.digital |
BTM - Objectives and Transformation Items
Data Model
LeanIX BTM is powered by a business-centric data model with two new LeanIX Fact Sheet types: Objective and Transformation Item.
Objective Fact Sheets allow for high-level definition and progress tracking of transformation initiatives and can be linked to Business Capabilities. Define Objectives to improve the business:
- Allows high-level progress tracking
- Identifies critical business capabilities
- Enables org. wide transparency
Transformation Item Fact Sheets, on the other hand, are used to plot detailed actions for achieving an Objective and as a way to model changes upon every other Fact Sheet in the wider LeanIX data model. Create detailed plans to achieve Objectives:
- Define Impacts of plans and project changes
- Out-of-the-box Transformation Items consist of three hierarchies: Plan, Building Block, Epic and Project
Objectives
Define your objectives and link them to your business capabilities for a holistic view.
Your objectives can be structured across 3 levels:
- Corporate Strategy
- Strategies are clusters for Business Objectives (business or department level)
- And lastly - the Key Results
Transformation Items
LeanIX offers a best practice structure to quickly kick-start transformation initiatives:
Example: Cloud migration with BTM
To achieve this, the EA will work with respective Business Leaders to:

- map affected Business Capabilities to the Objectives
- define plans to achieve the Objective
- model impacts on affected Applications
- visualize the plan based on multiple dimensions (e.g. time, cost, context)
- decide, execute the plan and track progress of the Objective
Component Tag Helper in ASP.NET Core
Prerequisites
Follow the guidance in the Configuration section for either:
- Blazor Server: Integrate routable and non-routable Razor components into Razor Pages and MVC apps.
- Blazor WebAssembly: Integrate Razor components from a hosted Blazor WebAssembly solution into Razor Pages and MVC apps.
Follow the guidance in the Configuration section of the Prerender and integrate ASP.NET Core Razor components article.
Component Tag Helper
To render a component from a page or view, use the Component Tag Helper (
<component> tag).
Note
Integrating Razor components into Razor Pages and MVC apps in a hosted Blazor WebAssembly app is supported in ASP.NET Core in .NET 5.0 or later.
RenderMode configures whether the component:
- Is prerendered into the page.
- Is rendered as static HTML on the page or if it includes the necessary information to bootstrap a Blazor app from the user agent.
Blazor WebAssembly app render modes are shown in the following table.
Blazor Server app render modes are shown in the following table.
Additional characteristics include:
- Multiple Component Tag Helpers rendering multiple Razor components is allowed.
- Components can't be dynamically rendered after the app has started.
- While pages and views can use components, the converse isn't true. Components can't use view- and page-specific features, such as partial views and sections. To use logic from a partial view in a component, factor out the partial view logic into a component.
- Rendering server components from a static HTML page isn't supported.
The following Component Tag Helper renders the Counter component in a page or view in a Blazor Server app with ServerPrerendered:
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@using {APP ASSEMBLY}.Pages

...

<component type="typeof(Counter)" render-mode="ServerPrerendered" />

The preceding example assumes that the Counter component is in the app's Pages folder. The placeholder {APP ASSEMBLY} is the app's assembly name (for example, @using BlazorSample.Pages or @using BlazorSample.Client.Pages in a hosted Blazor solution).
The Component Tag Helper can also pass parameters to components. Consider the following ColorfulCheckbox component that sets the checkbox label's color and size:
<label style="font-size:@(Size)px;color:@Color">
    <input @bind="Value" id="survey" name="blazor" type="checkbox" />
    Enjoying Blazor?
</label>

@code {
    [Parameter]
    public bool Value { get; set; }

    [Parameter]
    public int Size { get; set; } = 8;

    [Parameter]
    public string Color { get; set; }

    protected override void OnInitialized()
    {
        Size += 10;
    }
}
The Size (int) and Color (string) component parameters can be set by the Component Tag Helper:
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@using {APP ASSEMBLY}.Shared

...

<component type="typeof(ColorfulCheckbox)" render-mode="ServerPrerendered" param-Size="14" param-Color='@("blue")' />

The preceding example assumes that the ColorfulCheckbox component is in the app's Shared folder. The placeholder {APP ASSEMBLY} is the app's assembly name (for example, @using BlazorSample.Shared).
The following HTML is rendered in the page or view:
<label style="font-size:24px;color:blue"> <input id="survey" name="blazor" type="checkbox"> Enjoying Blazor? </label>
Passing a quoted string requires an explicit Razor expression, as shown for param-Color in the preceding example. The Razor parsing behavior for a string type value doesn't apply to a param-* attribute because the attribute is an object type.
All types of parameters are supported, except:
- Generic parameters.
- Non-serializable parameters.
- Inheritance in collection parameters.
- Parameters whose type is defined outside of the Blazor WebAssembly app or within a lazily-loaded assembly.
The parameter type must be JSON serializable, which typically means that the type must have a default constructor and settable properties. For example, you can specify a value for Size and Color in the preceding example because the types of Size and Color are primitive types (int and string), which are supported by the JSON serializer.
In the following example, a class object is passed to the component:
MyClass.cs:
public class MyClass
{
    public MyClass() { }

    public int MyInt { get; set; } = 999;
    public string MyString { get; set; } = "Initial value";
}
The class must have a public parameterless constructor.
Shared/MyComponent.razor:
<h2>MyComponent</h2>

<p>Int: @MyObject.MyInt</p>
<p>String: @MyObject.MyString</p>

@code {
    [Parameter]
    public MyClass MyObject { get; set; }
}
Pages/MyPage.cshtml:
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@using {APP ASSEMBLY}
@using {APP ASSEMBLY}.Shared

...

@{
    var myObject = new MyClass();
    myObject.MyInt = 7;
    myObject.MyString = "Set by MyPage";
}

<component type="typeof(MyComponent)" render-mode="ServerPrerendered" param-MyObject="myObject" />

The preceding example assumes that the MyComponent component is in the app's Shared folder. The placeholder {APP ASSEMBLY} is the app's assembly name (for example, @using BlazorSample and @using BlazorSample.Shared). MyClass is in the app's namespace.
Google Analytics Measurement Protocol library for PHP
Description
Send data to Google Analytics from the server using PHP. This library fully implements the GA measurement protocol so it's possible to send any data that you would usually do from analytics.js on the client side. You can send data regarding the following parameter categories (Full List):

- General
- User
- Session
- Traffic Sources
- System Info
- Hit
- Content Information
- App Tracking
- Event Tracking
- E-Commerce
- Enhanced E-Commerce
- Social Interactions
- Timing
- Exceptions
- Custom Dimensions / Metrics
- Content Experiments
- Content Grouping
Installation
Use Composer to install this package.
If you are using PHP 5.5 or above and Guzzle 6, then:
{ "require": { "theiconic/php-ga-measurement-protocol": "^2.0" } }
Or if you are using PHP 5.4 or above and Guzzle 5, then:
{ "require": { "theiconic/php-ga-measurement-protocol": "^1.1" } }
Take notice: v1 won't receive more updates; you are encouraged to update to v2.
Integrations
You can use this package on its own, or use a convenient framework integration:

- Laravel 4/5
- Yii 2
- Symfony2
Feel free to create an integration with your favourite framework, let us know so we list it here.
Usage
The required parameters for all hits are Protocol Version, Tracking ID and Client ID. Some optional ones, like IP Override, are recommended if you don't want all hits to seem like they are coming from your servers.
use TheIconic\Tracking\GoogleAnalytics\Analytics;

// Instantiate the Analytics object
// optionally pass TRUE in the constructor if you want to connect using HTTPS
$analytics = new Analytics(true);

// Build the GA hit using the Analytics class methods
// they should Autocomplete if you use a PHP IDE
$analytics
    ->setProtocolVersion('1')
    ->setTrackingId('UA-26293728-11')
    ->setClientId('12345678')
    ->setDocumentPath('/mypage')
    ->setIpOverride("202.126.106.175");

// When you finish building the payload send a hit (such as a pageview or event)
$analytics->sendPageview();
The hit should have arrived at the GA property UA-26293728-11. You can verify this in your Real Time dashboard. Take notice: if you need GA reports to tie this event to previous user actions, you must get and set the ClientId to be the same as the one stored in the GA cookie. Read (here).
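For example, a common approach (a sketch that assumes the default analytics.js _ga cookie format) is to strip the version prefix from the cookie and reuse the remainder as the ClientId:

// The default analytics.js cookie looks like "GA1.2.1234567890.1527246127";
// the client ID is the last two segments ("1234567890.1527246127").
$clientId = '12345678';
if (isset($_COOKIE['_ga'])) {
    $parts = explode('.', $_COOKIE['_ga'], 3);
    if (count($parts) === 3) {
        $clientId = $parts[2];
    }
}

$analytics->setClientId($clientId);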
The library is 100% done; full documentation is a work in progress, but basically all parameters can be set the same way.
// Look at the parameter names in the official Google Measurement Protocol docs
$analytics->set<ParameterName>('my_value');
// Get any parameter by its name
// Look at the parameter names in the official Google Measurement Protocol docs
$analytics->get<ParameterName>();
All methods for setting parameters should Autocomplete if you use an IDE such as PHPStorm, which makes building the Analytics object very easy.
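For instance (the tracking and client IDs below are placeholders), sending an event hit follows the same pattern of chained setters followed by a send call:

use TheIconic\Tracking\GoogleAnalytics\Analytics;

$analytics = new Analytics();
$analytics->setProtocolVersion('1')
    ->setTrackingId('UA-XXXXXXXX-1')
    ->setClientId('12345678')
    ->setEventCategory('Newsletter')
    ->setEventAction('Subscribe')
    ->setEventLabel('Footer form')
    ->setEventValue(1)
    ->sendEvent();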
Use Cases
Asynchronous Requests (Non-Blocking)
By default, sending a hit to GA will be a synchronous request, and block the execution of the script until it gets a response from the server or times out after 100 secs (throwing a Guzzle exception). However, an asynchronous non-blocking request can be sent by calling setAsyncRequest(true) before sending the hit:
// When building the Analytics hit, just make a call to the setAsyncRequest method passing true
// now sending the hit won't block the execution of the script
$analytics
    ->setAsyncRequest(true)
    ->sendPageview();
This means that we are sending the request and not waiting for a response. The AnalyticsResponse object that you will get back has NULL for HTTP status code.
Order Tracking with Enhanced E-commerce
use TheIconic\Tracking\GoogleAnalytics\Analytics;

$analytics = new Analytics();

// Build the order data programmatically, including each order product in the payload
// Take notice, if you want GA reports to tie this event with previous user actions
// you must get and set the same ClientId from the GA Cookie

// First, general and required hit data
$analytics->setProtocolVersion('1')
    ->setTrackingId('UA-26293624-12')
    ->setClientId('2133506694.1448249699')
    ->setUserId('123');

// Then, include the transaction data
$analytics->setTransactionId('7778922')
    ->setAffiliation('THE ICONIC')
    ->setRevenue(250.0)
    ->setTax(25.0)
    ->setShipping(15.0)
    ->setCouponCode('MY_COUPON');

// Include a product, only required fields are SKU and Name
$productData1 = [
    'sku' => 'AAAA-6666',
    'name' => 'Test Product 2',
    'brand' => 'Test Brand 2',
    'category' => 'Test Category 3/Test Category 4',
    'variant' => 'yellow',
    'price' => 50.00,
    'quantity' => 1,
    'coupon_code' => 'TEST 2',
    'position' => 2
];

$analytics->addProduct($productData1);

// You can include as many products as you need this way
$productData2 = [
    'sku' => 'AAAA-5555',
    'name' => 'Test Product',
    'brand' => 'Test Brand',
    'category' => 'Test Category 1/Test Category 2',
    'variant' => 'blue',
    'price' => 85.00,
    'quantity' => 2,
    'coupon_code' => 'TEST',
    'position' => 4
];

$analytics->addProduct($productData2);

// Don't forget to set the product action, in this case to PURCHASE
$analytics->setProductActionToPurchase();

// Finally, you must send a hit, in this case we send an Event
$analytics->setEventCategory('Checkout')
    ->setEventAction('Purchase')
    ->sendEvent();
Validating Hits
The Google Developer Guide describes a Measurement Protocol validation endpoint that lets you check hits without affecting your production data.
To send a validation hit, turn on debug mode like this
// Make sure AsyncRequest is set to false (it defaults to false)
$response = $analytics
    ->setDebug(true)
    ->sendPageview();

$debugResponse = $response->getDebugResponse();

// The debug response is an associative array, you could use print_r to view its contents
print_r($debugResponse);
GA actually returns JSON that is parsed into an associative array. Read (here) to understand how to interpret the response.
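As a rough illustration only (the exact structure is defined by Google's hit validation endpoint and may differ), the parsed array for a valid hit looks roughly like this:

// Example print_r output for a valid hit (illustrative only)
Array
(
    [hitParsingResult] => Array
        (
            [0] => Array
                (
                    [valid] => 1
                    [parserMessage] => Array
                        (
                        )
                    [hit] => /debug/collect?v=1&tid=UA-26293728-11&cid=12345678&t=pageview&dp=%2Fmypage
                )
        )
)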
Contributors
- Jorge A. Borges - Lead Developer
- Juan Falcón - arcticfalcon
- Syed Irfaq R. - irazasyed
- Andrei Baibaratsky - baibaratsky
- Martín Palombo - lombo
- Amit Rana - amit0rana
- Stefan Zweifel - stefanzweifel
- Titouan BENOIT - nightbr
License
THE ICONIC Google Analytics Measurement Protocol library for PHP is released under the MIT License. | https://php-ga-measurement-protocol.readthedocs.io/en/latest/ | 2021-11-27T04:51:19 | CC-MAIN-2021-49 | 1637964358118.13 | [] | php-ga-measurement-protocol.readthedocs.io |
The type of kind matrix.
The matrix is represented as a compound of column vectors. The number of matrix columns is given by the size of the underlying compound, see mi::neuraylib::IType_compound::get_size(). The number of matrix rows is given by the dimension of a column vector. Both dimensions are either 2, 3, or 4.
The type name follows the convention TypeColxRow, where Type is one of float or double, Col is the number of columns, and Row is the number of rows (see also section 6.9 in [MDLLS]). For example, float3x2 denotes a matrix with three columns and two rows. This convention is different from the convention used by mi::math::Matrix.
Returns the type of the matrix elements, i.e., the type of a column vector.
The kind of this subclass. | https://raytracing-docs.nvidia.com/mdl/api/classmi_1_1neuraylib_1_1IType__matrix.html | 2021-11-27T05:05:40 | CC-MAIN-2021-49 | 1637964358118.13 | [] | raytracing-docs.nvidia.com |
AWS Key Management Service in AWS Snowball Edge

In AWS Snowball Edge, AWS KMS protects the encryption keys used to protect data on each AWS Snowball Edge device. When you create your job, you also choose an existing KMS key. Specifying the ARN for an AWS KMS key tells AWS Snowball which AWS KMS master key to use to encrypt the unique keys on the AWS Snowball Edge device. For more information on AWS Snowball Edge supported Amazon S3 server-side encryption options, see Server-Side Encryption in AWS Snowball Edge.
Using the AWS-Managed Customer Master Key for Snowball Edge
If you'd like to use the AWS-managed customer master key (CMK) for Snowball Edge created for your account, follow these steps.
To select the AWS KMS CMK for your job
On the AWS Snow Family Management Console, when creating your job, choose the AWS-managed CMK from the list of available AWS KMS CMKs.
Choose Next to finish selecting your AWS KMS CMK.
Creating a Custom KMS Envelope Encryption Key
You have the option of using your own custom AWS KMS envelope encryption key with AWS Snowball Edge. If you choose to create your own key, that key must be created in the same region that your job was created in.
To create your own AWS KMS key for a job, see Creating Keys in the AWS Key Management Service Developer Guide. | https://docs.aws.amazon.com/snowball/latest/developer-guide/kms.html | 2021-11-27T05:30:42 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.aws.amazon.com |
CraftTweaker
CraftTweaker is a Minecraft mod that allows modpack authors to customize the game, allowing for new recipes to be added, old ones to be removed and just general modpack customization!
CraftTweaker uses a custom scripting language called ZenScript, which is a fairly easy to learn language that fits CraftTweaker's needs more than an already existing language would (such as JavaScript).
This site will hopefully help guide you through everything that is possible with CraftTweaker, all that would be left is for you to use the knowledge and create something amazing! | https://docs.blamejared.com/1.14/pl | 2021-11-27T06:28:04 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.blamejared.com |
Import and export
Are you looking for ways to bulk import data into your commercetools Project? Or do you want to download a dump of your data stored in your commercetools Project? Or do you want to keep data between commercetools Projects in sync?
Import API
The API performs asynchronous data import and automatically handles dependencies. Our (Java, PHP, and TypeScript) SDKs provide full support of the API features.
CLI tools
Exchange data stored in files via command-line tools. All CLI tools are supported by the ImpEx UI. Release notes for individual commands can be found here.
Project Sync
For data import from arbitrary sources into the platform, Java Sync library and JavaScript SDK's Sync Action Builders are available. For data synchronization between commercetools Projects, ready-to-run commercetools-project-sync is available. | https://docs.commercetools.com/import-export | 2021-11-27T05:40:21 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.commercetools.com |
Find resources with search
How to find data resources
On data.world you can not only search for many different kinds of resources, but you can also filter your results by resource type or integrated facets like status, owner, or tag. Each refinement of the results set tells you how many resources meet the combined criteria, and it's easy to drill down through the myriad results to find just the resource you are looking for. See the article on filtering search results for more information on narrowing down your search results. | https://docs.data.world/en/59261-59488-Find-resources-with-search.html | 2021-11-27T06:28:13 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.data.world |
Create a Workflow
New to BPMN and want to learn more before moving forward? This blog post helps to explain the standard and why it's a good fit for microservices orchestration.
Zeebe Modeler is a desktop modeling tool that allows you to build and configure workflow models using BPMN 2.0. In this section, we'll create a workflow model and get it ready to be deployed to Zeebe.
We'll create an e-commerce order process as our example, and we'll model a workflow that consists of:
- Initiating a payment for an order
- Receiving a payment confirmation message from an external system
- Shipping the items in the order with or without insurance depending on order value
This is what your workflow model will look like when we're finished:
The payment task and shipping tasks are carried out by worker services that we'll connect to the workflow engine. The "Payment Received" message will be published to Zeebe by an external system, and Zeebe will then correlate the message to a workflow instance.
To get started
- Open the Zeebe Modeler.
It's a BPMN best practice to label all elements in our model, so:
- Double-click on the Start Event
- Label it "Order Placed" to signify that our process will be initiated whenever a customer places an order
Next, we need to add a Service Task:
- Click on the Start Event and select the Task icon
- Label the newly created Task "Initiate Payment"
- Click the wrench icon and change the Task to a Service Task
Next, we'll configure the "Initiate Payment" Service Task so that an external microservice can work on it:
- Click on the "Initiate Payment" task
- Expand the Properties panel on the right side of the screen if it's not already visible
- In the Type field in the Properties panel, enter
initiate-payment
This is what you should see in your Modeler now.
This Type field represents the job type in Zeebe. A couple of concepts that are important to understand at this point:
- A job is simply a work item in a workflow that needs to be completed before a workflow instance can proceed to the next step. (See: Job Workers)
- A workflow instance is one running instance of a workflow model--in our case, an individual order to be fulfilled. (See: Workflows)
For every job created for a workflow instance at this task, the worker will activate it, complete it, and notify Zeebe. Zeebe will then advance that workflow instance to the next step in the workflow.
Next, we'll add a Message Event to the workflow:
- Click on the "Initiate Payment" task on the workflow instance can advance. (See: Message Events)
In the scenario we're modeling, we initiate a payment with our Service Task, but we need to wait for some other external system to actually confirm that the payment was received. This confirmation comes in the form of a message that will be sent to Zeebe - asynchronously - by an external service.
Messages received by Zeebe need to be correlated to specific workflow instances. To make this possible, we have some more configuring to do:
- Select the Message Event and make sure you're on the "General" tab of the Properties panel on the right side of the screen
- In the Properties panel, click the
+ icon to create a new message. You'll now see two fields in the Modeler that we'll use to correlate a message to a specific workflow instance: Message Name and Subscription Correlation Key.
- Let's give this message a self-explanatory name:
payment-received.
When Zeebe receives a message, this name field lets us know which message event in the workflow model the message is referring to.
But how do we know which specific workflow instance--that is, which customer order--a message refers to? That's where Subscription Correlation Key comes in. The Subscription Correlation Key is a unique ID present in both the workflow instance payload and the message sent to Zeebe.
We'll use
orderId for our correlation key.
Go ahead and add the expression
= orderId to the Subscription Correlation Key field.
When we create a workflow instance, we need to be sure to include
orderId as a variable, and we also need to provide
orderId as a correlation key when we send a message.
Here's what you should see in the Modeler:
Next, we'll add an Exclusive (XOR) Gateway to our workflow model. The Exclusive Gateway is used to make a data-based decision about which path a workflow instance should follow. In this case, we want to ship items with insurance if total order value is greater than or equal to $100 and ship without insurance otherwise.
That means that when we create a workflow instance, we'll need to include order value as an instance variable. But we'll come to that later.
First, let's take the necessary steps to configure our workflow model to make this decision. To add the gateway:
- Click on the Message Event you just created
- Select the Gateway (diamond-shaped) symbol - the Exclusive Gateway is the default when you add a new gateway to a model
- Double-click on the gateway and add a label "Order Value?" so that it's clear what we're using as our decision criteria
We'll add two outgoing Sequence Flows from this Exclusive Gateway that lead to two different Service Tasks. Each Sequence Flow will have a data-based condition that's evaluated in the context of the workflow instance payload.
Next, we need to:
- Add an outgoing Sequence Flow from the gateway that leads to a new "Ship Without Insurance" Service Task
- Select that Sequence Flow
- Click on the wrench icon
- Choose "Default Flow"
Now we're ready to add a second outgoing Sequence Flow and Service Task from the gateway:
- Insurance" task
- Add another Exclusive Gateway to the model to merge the branches together again (a BPMN best practice in a model like this one).
- Select the "Ship With Insurance" task
- Add an outgoing sequence flow that connects to the second Exclusive Gateway you just created
The only BPMN element we need to add is an End Event:
- Click on the second Exclusive Gateway
- Add an End Event
- Double-click on it to label it "Order Fulfilled"
Lastly, we'll change the process ID to something more descriptive than the default
Process_1 that you'll see in the Modeler:
- Click onto a blank part of the canvas
- Open the Properties panel
- Change the Id to
order-process
Here's what you should see in the Modeler after these last few updates:
That's all for our modeling step. Remember to save the file one more time to prepare to deploy the workflow to Zeebe, create workflow instances, and complete them. | https://docs.camunda.io/docs/0.25/components/zeebe/getting-started/create-a-workflow/ | 2021-11-27T05:52:11 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['/assets/images/tutorial-3.0-complete-workflow-ccad27bdd9f510d4fd1314ae560ffff0.png',
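Just for orientation (deployment and instance creation are covered in the following guides, and the IDs and values below are placeholders), this is roughly how the finished model could be exercised with the zbctl CLI once deployed:

# deploy the model and create an instance with the variables it relies on
zbctl deploy order-process.bpmn
zbctl create instance order-process --variables '{"orderId": "1234", "orderValue": 99}'

# publish the message the "Payment Received" catch event is waiting for,
# using the same orderId as the correlation key
zbctl publish message "payment-received" --correlationKey "1234"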
'Getting Started Workflow Model'], dtype=object)
array(['/assets/images/tutorial-3.1-initiate-payment-task-3d7a204208c6f2b42ba8a47c1c6ebdc3.png',
'Initiate Payment Service Task'], dtype=object)
array(['/assets/images/tutorial-3.2-modeler-message-event-fff88a2499d8f93aac727305a8c50257.png',
'Message Event'], dtype=object)
array(['/assets/images/tutorial-3.3-add-message-name-14e574e6781e16189ef1ead6f64c254d.png',
'Add Message Name'], dtype=object)
array(['/assets/images/tutorial-3.4-add-correlation-key-e7f5299f09dd4651effd4017a49f575a.png',
'Message Correlation Key'], dtype=object)
array(['/assets/images/tutorial-3.5-add-xor-gateway-5cee946d82cde03e75b39b0dc66207bc.png',
'Add Exclusive Gateway to Model'], dtype=object)
array(['/assets/images/tutorial-3.6-label-xor-gateway-93abf0aa6c3f12efbc136d53c350a36f.png',
'Label Exclusive Gateway in Model'], dtype=object)
array(['/assets/images/tutorial-3.7-no-insurance-task-4e19ca325ba49fd7cb804faea0ea9301.png',
'Add No Insurance Service Task'], dtype=object)
array(['/assets/images/tutorial-3.8-default-flow-51c9e64dde97df1d41494e256d885b2e.png',
'Add No Insurance Service Task'], dtype=object)
array(['/assets/images/tutorial-3.9-condition-expression-3181825ba25f6cd91c4d25a013d5b964.png',
'Condition Expression'], dtype=object)
array(['/assets/images/tutorial-3.10-end-event-099e6b75d1c3be40891b09d36c7cb105.png',
'Condition Expression'], dtype=object)
array(['/assets/images/tutorial-3.11-process-id-bc6ac6cd55b428402175f3c704d966c3.png',
'Update Process ID'], dtype=object) ] | docs.camunda.io |
Messenger "Active X" Control
If one or more users encounter an error similar to following upon starting the Messenger Module: "class not registered, error in messnger.cob", a "Library" program being utilized may not be installed on the computer. If this occurs, an "active X" control must be installed and registered prior to use. On the computer that encounters this error, run the client.exe program from the \client folder within your RTA fleet directory. This will install the "active X" control rich-text-box necessary to operate the Messenger Module. | https://docs.rtafleet.com/rta-manual/messenger-module/messenger-%22active-x%22-control/ | 2021-11-27T05:49:20 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rtafleet.com |
Add a static neighbor
A static neighbor entry allows you to bind a MAC address to an IP address and port. The firewall performs the neighbor lookup in the static neighbor table when it receives the request on a specific port. If an entry is not available in the table, the firewall will check the neighbor caches and add the MAC address if required.
- Go to Network > Neighbors (ARP–NDP).
- From the Show list, select Static neighbor table and click Add.
Specify the settings.
Click Save. | https://docs.sophos.com/nsg/sophos-firewall/18.5/Help/en-us/webhelp/onlinehelp/AdministratorHelp/Network/Neighbors/ARPNDP/NetworkStaticNeighborAdd/index.html | 2021-11-27T05:50:53 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.sophos.com |
Create a job on the daemon using the syntax parameters and the object list. You can specify job variable values directly on the command line, or in the XML file the command submits. If you specify different variable values for the same parameters in the command line and in the XML file, the value in the command line is used.
- Type datamove create -f objectlist.xml and any job variable values that are not specified or that you want to override in the XML.
- Make a note of the job name.When the create command completes, the job name is displayed on the screen.A job can also be created with the Move command, which does not require creating an object list first. | https://docs.teradata.com/r/Ejo4329~6zoo1qGzwKf2Xw/N0CxKK8BSorky~zy_80r7A | 2021-11-27T06:42:44 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.teradata.com |
#include <wx/toolbar.h>
A toolbar is a bar of buttons and/or other controls usually placed below the menu bar in a wxFrame.
You may create a toolbar that is managed by a frame calling wxFrame::CreateToolBar(). Under Pocket PC, you should always use this function for creating the toolbar to be managed by the frame, so that wxWidgets can use a combined menubar and toolbar. Where you manage your own toolbars, create wxToolBar as usual.
There are several different types of tools you can add to a toolbar. These types are controlled by the wxItemKind enumeration.
Note that many methods in wxToolBar such as wxToolBar::AddTool return a
wxToolBarToolBase* object. This should be regarded as an opaque handle representing the newly added toolbar item, providing access to its id and position within the toolbar. Changes to the item's state should be made through calls to wxToolBar methods, for example wxToolBar::EnableTool. Calls to
wxToolBarToolBase methods (undocumented by purpose) will not change the visible state of the item within the tool bar.
After you have added all the tools you need, you must call Realize() to effectively construct and display the toolbar.
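A minimal sketch of the typical flow (assuming the usual wx/toolbar.h and wx/artprov.h includes; the frame class and tool IDs are placeholders):

// Inside a wxFrame-derived class, for example in its constructor
wxToolBar *toolbar = CreateToolBar(wxTB_HORIZONTAL | wxTB_FLAT);

// Add a few tools; wxArtProvider supplies stock bitmaps here
toolbar->AddTool(wxID_NEW, "New", wxArtProvider::GetBitmap(wxART_NEW, wxART_TOOLBAR));
toolbar->AddTool(wxID_OPEN, "Open", wxArtProvider::GetBitmap(wxART_FILE_OPEN, wxART_TOOLBAR));
toolbar->AddSeparator();
toolbar->AddCheckTool(wxID_HIGHEST + 1, "Pin", wxArtProvider::GetBitmap(wxART_TIP, wxART_TOOLBAR));

// Nothing is laid out or shown until Realize() is called
toolbar->Realize();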
wxMSW note: Note that under wxMSW toolbar paints tools to reflect system-wide colours. If you use more than 16 colours in your tool bitmaps, you may wish to suppress this behaviour, otherwise system colours in your bitmaps will inadvertently be mapped to system colours. To do this, set the msw.remap system option before creating the toolbar:
wxSystemOptions::SetOption("msw.remap", 0);
If you wish to use 32-bit images (which include an alpha channel for transparency) use:
wxSystemOptions::SetOption("msw.remap", 2);
Then colour remapping is switched off, and a transparent background used. But only use this option under Windows XP with true colour:
if (wxTheApp->GetComCtl32Version() >= 600 && ::wxDisplayDepth() >= 32)
This class supports the following styles:
- wxTB_HORZ_LAYOUT: Shows the text and the icons alongside, not vertically stacked. This style must be used with wxTB_TEXT.
- wxTB_HORZ_TEXT: Combination of wxTB_HORZ_LAYOUT and wxTB_TEXT.
- wxTB_DEFAULT_STYLE: The wxTB_HORIZONTAL style. This style is new since wxWidgets 2.9.5.
See also Window Styles. Note that the wxMSW native toolbar ignores
wxTB_NOICONS style. Also, toggling the
wxTB_TEXT works only if the style was initially on.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
void handlerFuncName(wxCommandEvent& event)
Event macros for events emitted by this class include EVT_TOOL(id, func), EVT_MENU(id, func), EVT_TOOL_RANGE(id1, id2, func), EVT_TOOL_RCLICKED(id, func), EVT_TOOL_ENTER(id, func) and EVT_TOOL_DROPDOWN(id, func). The EVT_TOOL_DROPDOWN(id, func) macro processes a wxEVT_TOOL_DROPDOWN event. If unhandled, displays the default dropdown menu set using wxToolBar::SetDropdownMenu().
The toolbar class emits menu commands in the same way that a frame menubar does, so you can use one EVT_MENU() macro for both a menu item and a toolbar button. The event handler functions take a wxCommandEvent argument. For most event macros, the identifier of the tool is passed, but for EVT_TOOL_ENTER() the toolbar window identifier is passed and the tool identifier is retrieved from the wxCommandEvent. This is because the identifier may be
wxID_ANY when the mouse moves off a tool, and
wxID_ANY is not allowed as an identifier in the event system.
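For example (the frame class, IDs and handler names here are placeholders), a tool click can be handled either through an event table or dynamically with Bind():

// Event table style
wxBEGIN_EVENT_TABLE(MyFrame, wxFrame)
    EVT_TOOL(wxID_NEW, MyFrame::OnNew)
wxEND_EVENT_TABLE()

// Or dynamically, e.g. in the frame constructor
Bind(wxEVT_TOOL, &MyFrame::OnNew, this, wxID_NEW);

void MyFrame::OnNew(wxCommandEvent& event)
{
    // The tool identifier is available from the event
    wxLogMessage("Tool %d clicked", event.GetId());
}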
Default constructor.
Constructs a toolbar.
Toolbar destructor.
Adds any control to the toolbar, typically e.g. a wxComboBox.
wxMac: labels are only displayed if wxWidgets is built with wxMAC_USE_NATIVE_TOOLBAR set to 1
Adds a new radio tool to the toolbar.
Consecutive radio tools form a radio group such that exactly one button in the group is pressed at any moment, in other words whenever a button in the group is pressed the previously pressed button is automatically released. You should avoid having the radio groups of only one element as it would be impossible for the user to use such button.
By default, the first button in the radio group is initially pressed, the others are not.
Adds a separator for spacing groups of tools.
Notice that the separator uses the look appropriate for the current platform so it can be a vertical line (MSW, some versions of GTK) or just an empty space or something else.
Adds a stretchable space to the toolbar.
Any space not taken up by the fixed items (all items except for stretchable spaces) is distributed in equal measure between the stretchable spaces in the toolbar. The most common use for this method is to add a single stretchable space before the items which should be right-aligned in the toolbar, but more exotic possibilities are possible, e.g. a stretchable space may be added in the beginning and the end of the toolbar to centre all toolbar items.
Adds a tool to the toolbar.
Adds a tool to the toolbar.
This most commonly used version has fewer parameters than the full version below which specifies the more rarely used button features.
Adds a tool to the toolbar.
Deletes all the tools in the toolbar.
Factory function to create a new separator toolbar tool.
Factory function to create a new toolbar tool.
Factory function to create a new control toolbar tool.
Removes the specified tool from the toolbar and deletes it.
If you don't want to delete the tool, but just to remove it from the toolbar (to possibly add it back later), you may use RemoveTool() instead.
This function behaves like DeleteTool() but it deletes the tool at the specified position and not the one with the given id.
Enables or disables the tool.
Returns a pointer to the tool identified by id or NULL if no corresponding tool is found.
Returns a pointer to the control identified by id or NULL if no corresponding control is found.
Finds a tool for the given mouse position.
Returns the left/right and top/bottom margins, which are also used for inter-toolspacing.
Returns the size of bitmap that the toolbar expects to have.
The default bitmap size is platform-dependent: for example, it is 16*15 for MSW and 24*24 for GTK. This size does not necessarily indicate the best size to use for the toolbars on the given platform, for this you should use
wxArtProvider::GetNativeSizeHint(wxART_TOOLBAR) but in any case, as the bitmap size is deduced automatically from the size of the bitmaps associated with the tools added to the toolbar, it is usually unnecessary to call SetToolBitmapSize() explicitly.
Returns a pointer to the tool at ordinal position pos.
Don't confuse this with FindToolForPosition().
Get any client data associated with the tool.
Called to determine whether a tool is enabled (responds to user input).
Returns the long help for the given tool.
Returns the value used for packing tools.
Returns the tool position in the toolbar, or
wxNOT_FOUND if the tool is not found.
Returns the number of tools in the toolbar.
Returns the default separator size.
Returns the short help for the given tool.
Returns the size of a whole button, which is usually larger than a tool bitmap because of added 3D effects.
Gets the on/off state of a toggle tool.
Inserts the control into the toolbar at the given position.
You must call Realize() for the change to take place.
Inserts the separator into the toolbar at the given position.
You must call Realize() for the change to take place.
Inserts a stretchable space at the given position.
See AddStretchableSpace() for details about stretchable spaces.
Inserts the tool with the specified attributes into the toolbar at the given position.
You must call Realize() for the change to take place.
Inserts the tool with the specified attributes into the toolbar at the given position.
You must call Realize() for the change to take place.
Called when the user clicks on a tool with the left mouse button.
This is the old way of detecting tool clicks; although it will still work, you should use the EVT_MENU() or EVT_TOOL() macro instead.
This is called when the mouse cursor moves into a tool or out of the toolbar.
This is the old way of detecting mouse enter events; although it will still work, you should use the EVT_TOOL_ENTER() macro instead.
Called when the user clicks on a tool with the right mouse button. The programmer should override this function to detect right tool clicks.
This function should be called after you have added tools.
Removes the given tool from the toolbar but doesn't delete it.
This allows inserting/adding this tool back to this (or another) toolbar later.
Sets the dropdown menu for the tool given by its id.
The tool itself will delete the menu when it's no longer needed. Only supported under GTK+ und MSW.
If you define a EVT_TOOL_DROPDOWN() handler in your program, you must call wxEvent::Skip() from it or the menu won't be displayed.
Set the values to be used as margins for the toolbar.
Set the margins for the toolbar.
Sets the default size of each tool bitmap.
The default bitmap size is 16 by 15 pixels.
Note that size does not need to be multiplied by the DPI-dependent factor even under MSW, where it would normally be necessary, as the toolbar adjusts this size to the current DPI automatically.
Sets the client data associated with the tool.
Sets the bitmap to be used by the tool with the given ID when the tool is in a disabled state.
This can only be used on Button tools, not controls.
Sets the long help for the given tool.
Sets the bitmap to be used by the tool with the given ID.
This can only be used on Button tools, not controls.
Sets the value used for spacing tools.
The default value is 1.
Sets the default separator size.
The default value is 5.
Sets the short help for the given tool.
Toggles a tool on or off.
This does not cause any event to get emitted. | https://docs.wxwidgets.org/trunk/classwx_tool_bar.html | 2021-11-27T04:47:25 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.wxwidgets.org |
Kernel-based Time-varying Regression - Part III¶
The tutorials I and II described the KTR model, its fitting procedure, and diagnostics / validation methods (visualizations of the KTR regression). This tutorial covers more KTR configurations for advanced users. In particular, it describes how to use knots to model change points in the seasonality and regression coefficients.
For more detail on this see Ng, Wang and Dai (2021), which describes how KTR knots can be thought of as change points. This highlights a similarity between KTR and Facebook's Prophet package, which introduces change point detection on levels.
Part III covers different KTR arguments to specify knot positions:
level_segments
level_knot_distance
level_knot_dates
[1]:
import pandas as pd import numpy as np from math import pi import matplotlib.pyplot as plt import orbit from orbit.models import KTR from orbit.diagnostics.plot import plot_predicted_data from orbit.utils.plot import get_orbit_style from orbit.utils.dataset import load_iclaims %matplotlib inline pd.set_option('display.float_format', lambda x: '%.5f' % x)
[2]:
print(orbit.__version__)
1.1.0dev
Fitting with iClaims Data¶
The iClaims data set gives the weekly log number of claims and several regressors.
[4]:
# without the endate, we would get end date='2018-06-24' to make our tutorial consistent with the older version df = load_iclaims(end_date='2020-11-29') DATE_COL = 'week' RESPONSE_COL = 'claims' print(df.shape) df.head()
(570, 7)
[4]:
Specifying Levels Segments¶
The first way to specify the knot locations and number is the
level_segments argument. This gives the number of between-knot segments; since there is a knot at each end of every segment, the total number of knots is the number of segments plus one. To illustrate that, try
level_segments=10 (line 5).
[5]:
response_col = 'claims' date_col='week'
[6]:
ktr = KTR( response_col=response_col, date_col=date_col, level_segments=10, prediction_percentiles=[2.5, 97.5], seed=2020, estimator='pyro-svi' )
[7]:
ktr.fit(df=df) _ = ktr.plot_lev_knots()
INFO:root:Guessed max_plate_nesting = 1
Note that there are precisely 11 knots (triangles) evenly spaced in the above chart.
Specifying Knots Distance¶
An alternative way of specifying the number of knots is the
level_knot_distance argument. This argument gives the distance between knots. It can be useful as the number of knots grows with the length of the time-series. Note that if the total length of the time-series is not a multiple of
level_knot_distance, the first segment will have a different length. For example, in weekly data, putting
level_knot_distance=104 roughly means placing a knot once every two years.
[20]:
ktr = KTR( response_col=response_col, date_col=date_col, level_knot_distance=104, # fit a weekly seasonality seasonality=52, # high order for sharp turns on each week seasonality_fs_order=12, prediction_percentiles=[2.5, 97.5], seed=2020, estimator='pyro-svi' )
[21]:
ktr.fit(df=df) _ = ktr.plot_lev_knots()
INFO:root:Guessed max_plate_nesting = 1
In the above chart, the knots are located about every 2-years.
To highlight the value of the next method of configuring knot position, consider the prediction for this model show below.
[24]:
predicted_df = ktr.predict(df=df) _ = plot_predicted_data(training_actual_df=df, predicted_df=predicted_df, prediction_percentiles=[2.5, 97.5], date_col=date_col, actual_col=response_col)
As the knots are placed evenly the model can not adequately describe the change point in early 2020. The model fit can potentially be improved by inserting knots around the sharp change points (e.g.,
2020-03-15). This insertion can be done with the
level_knot_dates argument described below.
Specifying Knots Dates¶
The
level_knot_dates argument allows for the explicit placement of knots. It takes a list of date strings; see line 4.
[28]:
ktr = KTR( response_col=response_col, date_col=date_col, level_knot_dates = ['2010-01-03', '2020-03-15', '2020-03-22', '2020-11-29'], # fit a weekly seasonality seasonality=52, # high order for sharp turns on each week seasonality_fs_order=12, prediction_percentiles=[2.5, 97.5], seed=2020, estimator='pyro-svi' )
[29]:
ktr.fit(df=df)
INFO:root:Guessed max_plate_nesting = 1
[31]:
_ = ktr.plot_lev_knots()
[30]:
predicted_df = ktr.predict(df=df) _ = plot_predicted_data(training_actual_df=df, predicted_df=predicted_df, prediction_percentiles=[2.5, 97.5], date_col=date_col, actual_col=response_col)
Note this fit is even better than the previous one while using fewer knots. Of course, the case here is trivial because the pandemic onset is treated as known. In other cases, there may not be an obvious way to find the optimal knot dates.
Conclusion¶
This tutorial demonstrates multiple ways to customize the knot locations for levels. In KTR, there are similar arguments for seasonality and regression, such as
seasonality_segments,
regression_knot_dates, and
regression_segments. Due to their similarity with their level-knot equivalents, they are not demonstrated here. However, KTR users are encouraged to explore them.
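For instance, a sketch of the regression counterparts (the regressor columns come from the iClaims data loaded above; the knot dates and the exact use of regressor_col here are illustrative, so double-check them against the API docs):

ktr = KTR(
    response_col=response_col,
    date_col=date_col,
    regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'],
    # explicit regression knots, analogous to level_knot_dates
    regression_knot_dates=['2010-01-03', '2015-01-04', '2020-03-15'],
    # alternatively, control the number of between-knot segments
    # regression_segments=5,
    seasonality=52,
    seasonality_fs_order=12,
    seed=2020,
    estimator='pyro-svi',
)
ktr.fit(df=df)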
References¶
Ng, Wang and Dai (2021). Bayesian Time Varying Coefficient Model with Applications to Marketing Mix Modeling, arXiv preprint arXiv:2106.03322
Sean J Taylor and Benjamin Letham. 2018. Forecasting at scale. The American Statistician 72, 1 (2018), 37–45. Package version 0.7.1. | https://orbit-ml.readthedocs.io/en/latest/tutorials/ktr3.html | 2021-11-27T06:16:28 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../_images/tutorials_ktr3_9_1.png',
'../_images/tutorials_ktr3_9_1.png'], dtype=object)
array(['../_images/tutorials_ktr3_13_1.png',
'../_images/tutorials_ktr3_13_1.png'], dtype=object)
array(['../_images/tutorials_ktr3_15_0.png',
'../_images/tutorials_ktr3_15_0.png'], dtype=object)
array(['../_images/tutorials_ktr3_20_0.png',
'../_images/tutorials_ktr3_20_0.png'], dtype=object)
array(['../_images/tutorials_ktr3_21_0.png',
'../_images/tutorials_ktr3_21_0.png'], dtype=object)] | orbit-ml.readthedocs.io |
Special features for creators
Who is this doc for?
✏️ Do you write blog posts for developers?
🔭 Have you ever seen your blog post picked up by daily.dev feed?
🚗 Did you wonder how much traffic your post got through daily.dev?
👽 Do you believe in UFOs? Just kidding.
If you answer yes to all of these questions, this article is going to make your day!
Introducing a whole new set of features for content creators!
In this post, we will cover:
- Why should you claim ownership of an article you wrote?
- How to claim ownership of articles you write?
- How to get your article picked up by daily.dev?
Let’s get started. 🚀
Why should you claim ownership of an article you wrote?
Get notified when your post got featured by daily.dev
At the end of the day, we all write for a particular audience. Engaging with your readers in real-time can make the difference between a memorable article and a forgotten one. Even more, getting notified in real-time will enable you to encourage a meaningful discussion about the blog post you wrote. That way, you can increase your readership and create a long-lasting relationship with your audience.
Get an exclusive author badge
Whenever you comment on a post you wrote, you’ll be able to see your exclusive author badge. Simple yet awesome. If you invested time in writing an article, you deserve some recognition.
See it in action:
Get an analytics report for every post that got featured
We've never met an author who isn't curious about their blog post's stats. Today, we bring a missing piece for any post that gets picked up by daily.dev. If your article got picked up by daily.dev, you can now expect to get a complete analytics report within 24 hours or so.
Gain reputation points and build up your profile
In case you’ve missed it in our previous announcements, here’s a brief about What is reputation? How do I earn it?
We completely redesigned the profile to fit a new special section for your articles. That’s a great way to show your achievements to the world. And the best part? For every upvote your article earns, you will receive a +1 reputation point!
How to claim ownership of articles you write?
- Step 1 - Go to your profile on daily.dev
- Step 2 - Click “Account details”
- Step 3 - Add your Twitter handle
How to get your article picked up by daily.dev?
Wrap up
- We have many new features made especially to empower content creators.
- Creators get notified when their articles are picked by daily.dev feed.
- Creators get an exclusive badge when they comment.
- Creators get an analytics report for every post they wrote that got picked up by daily.dev.
- Creators can gain reputation points by earning upvotes on their articles.
- Claiming ownership on future posts you write is extremely easy. Just add your Twitter handle to your profile, and you’re all set. | https://docs.daily.dev/docs/for-content-creators/claiming-ownership-on-article | 2021-11-27T05:41:04 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['https://daily-now-res.cloudinary.com/image/upload/v1635256424/docs/5f8ee3a31f47664ff3a9a0db_M2PRpVJTd6XhahQuDouhGspwO9GR01_l_SbwAe44q_CbxUf3nT6VdDnmclolwyw9Wsb4VAwyDBj3KYNbANu8tlX8JdhVwD2qWoH8Avpsafa_kBGtPDVIF7R9YuVK-H69ct_IzhSG.gif',
None], dtype=object)
array(['https://daily-now-res.cloudinary.com/image/upload/v1635256512/docs/5f8ee3a27a7b84389bc4b4cd_CzmUQxV9KULWBuzPx3i85AA8lJCksb5xBaoJ8t4CF9i-o-CIARaANz7t4Z8iW0MQIC2tITPDls40g8JP_5QK_2xFUNLYNIDZwM5bmttIXBzou1ZyzkcAcAN7RXN6P3eYYCO06pop.png',
None], dtype=object)
array(['https://daily-now-res.cloudinary.com/image/upload/v1635256556/docs/5f8ee3a55f89924d52959f10_gqjufILdNpmls81_Me95dj4M8d1QJFyptPBTEjHrkKr1FJUWYZZ9WN7TNB0cF8zYyi1f86Pa-7zR9ouUuxEv_zebisDEbxVQMFAj0DkxpIgGwHYN7toJ73g4G6ajtb6yUALX7at7.gif',
None], dtype=object)
array(['https://daily-now-res.cloudinary.com/image/upload/v1635256584/docs/5f8ee3a40afdcad2ea9b1cd5_UOUpf1FCZMJPa2EAbyO9h0LbFpFFb1z44gpcVQ5tEC9Ggxaj9SizlTxYtiAIVvtu-8NJ_YET37Xz8Np3ZCKIixvhgYfC561MZ-i1M5uoCMlAXiKp-vQ45iKcs3MRZc7cA0J2dXyA.gif',
None], dtype=object)
array(['https://daily-now-res.cloudinary.com/image/upload/v1635256617/docs/5f8ee3a319135745f302c017_Nu6I3OBdqhgcFHDNc-r569okaI700t5hFOjsTLvUCM4SeY9wzCxWeYinbNVUHK5W0f8rNQi_0zeEsZHUfdNoJqth8S0IST49uJSyV3j1K6QZpXWThFLpgJ7PprQixE5C09hk6Opc.gif',
None], dtype=object) ] | docs.daily.dev |
DriveWorks Live License usage is reported directly in the browser.
This feature is available in the Web Theme and Application Theme from the Session Management Links.
Session Management displays the following information about each license in use.
Application Theme
Web Theme
Licenses can be freed from inactive users by selecting the session from the list and clicking the End User Session button.
Application Theme
Web Theme
See also
Welcome to DriveWorks Pro 11 - What's New
Date: Mon, 02 Oct 2006 09:40:54 -0400 From: Lowell Gilbert <[email protected]> To: [email protected] Cc: [email protected] Subject: Re: Permissions on /var/mail directory Message-ID: <[email protected]> In-Reply-To: <[email protected]> (Gerard Seibert's message of "Fri, 29 Sep 2006 11:12:33 -0400") References: <[email protected]>
Gerard Seibert <[email protected]> writes: > FreeBSD 6.1-RELEASE-p8 > postfix-current-2.4.20060903,3 > dovecot-1.0.r7 > > I just did a buildworld along with a new kernel this morning. While doing > the installworld, I noticed an error message displayed regarding > the /var/mail directory. I have the directory set to: 1777 so that dovecot > can assess it. The installworld process reset the permissions to 0775 which > were not sufficient for Dovecot. > > The dovecot.log file had over a hundred entries similar to this: > > deliver(gerard): Error: > open(/var/mail/.temp.scorpio.seibercom.net.1123.cd38cd4d82e1368f) failed: > Permission denied > deliver(gerard): Error: file_lock_dotlock() failed with mbox > file /var/mail/gerard: Permission denied > > Obviously the /var/log/maillog had similar fail warnings. > > By changing the permission to 1777 on the /var/mail directory and running > postsuper -r ALL, I was able to get the mail delivered. This is the second > time this has happened. The last time I rebuild world I experienced the > same phenomena. Why does build world insist on changing the directory > permissions and is there a way I can prevent it from doing so? > > What I am trying to determine is if I really should have those settings on > the directory, or if I have something configured wrong in either postfix or > dovecot. Those permissions are awfully lenient, but if you've got a single-user machine, I suppose you could live with it.
- Language selections
- General Translation Guidelines
- French translation guidelines
Translating GitLab
For managing the translation process, we use CrowdIn.
To contribute translations at
translate.gitlab.com,
you must create a CrowdIn account. You may create a new account or use any of their supported login providers.
Language editor
The online translation editor is the easiest way to contribute translations.
Be sure to check the following guidelines before you translate any strings.
Namespaced strings
When an externalized string is prepended with a namespace (for example,
s_('OpenedNDaysAgo|Opened')), the namespace should be removed from the final translation. For
example, in French,
OpenedNDaysAgo|Opened is translated to
Ouvert•e, not
OpenedNDaysAgo|Ouvert•e.
Technical.
Formality
The level of formality used in software varies by language.
Refer to other translated strings and notes in the glossary to assist you in determining a suitable level of formality.
Inclusive
To propose additions to the glossary, please open an issue.
French translation guidelines
- Unit tests
- Integration tests
- White-box tests at the system level (formerly known as System / Feature tests)
- Black-box tests at the system level, aka end-to-end tests
- EE-specific tests
- How to test at the correct level?
Testing levels
This diagram demonstrates the relative priority of each test type we use.
e2e stands for end-to-end.
As of 2019-05-01, we have the following distribution of tests per level.
Frontend unit tests
Unit tests are on the lowest abstraction level and typically test functionality that is not directly perceivable by a user.
When to use unit tests
- Exported functions and classes: Anything exported can be reused at various places in ways you have no control over. You should document the expected behavior of the public interface with tests.
- Vuex actions: Any Vuex action must work in a consistent way, independent of the component it is triggered from.
- Vuex mutations: For complex Vuex mutations, you should separate the tests from other parts of the Vuex store to simplify problem-solving.
When not to use unit tests
- Non-exported functions or classes: Anything not exported from a module can be considered private or an implementation detail, and doesn’t need to be tested.
- Constants: Testing the value of a constant means copying it, resulting in extra effort without additional confidence that the value is correct.
- Vue components: Computed properties, methods, and lifecycle hooks can be considered an implementation detail of components, are implicitly covered by component tests, and don’t need to be tested. For more information, see the official Vue guidelines.
What to mock in unit tests
- State of the class under test: Modifying the state of the class under test directly rather than using methods of the class avoids side effects in test setup.
- Other exported classes: Every class must be tested in isolation to prevent test scenarios from growing exponentially.
- Single DOM elements if passed as parameters: For tests only operating on single DOM elements, rather than a whole page, creating these elements is cheaper than loading an entire HTML fixture.
- All server requests: When running frontend unit tests, the backend may not be reachable, so all outgoing requests need to be mocked.
- Asynchronous background operations: Background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
What not to mock in unit tests
- Non-exported functions or classes: Everything that is not exported can be considered private to the module, and is implicitly tested through the exported classes and functions.
- Methods of the class under test: By mocking methods of the class under test, the mocks are tested and not the real methods.
- Utility functions (pure functions, or those that only modify parameters): If a function has no side effects because it has no state, it is safe to not mock it in tests.
- Full HTML pages: Avoid loading the HTML of a full page in unit tests, as it slows down tests.
Frontend component tests
Component tests cover the state of a single component that is perceivable by a user depending on external signals such as user input, events fired from other components, or application state.
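As an illustration (the component path, props, and assertions are placeholders, and the snippet assumes the Jest + Vue Test Utils setup used in the GitLab codebase):

import { shallowMount } from '@vue/test-utils';
import TodoItem from '~/todos/components/todo_item.vue';

describe('TodoItem', () => {
  let wrapper;

  const createComponent = (props = {}) => {
    wrapper = shallowMount(TodoItem, {
      propsData: { title: 'Write docs', done: false, ...props },
    });
  };

  afterEach(() => {
    wrapper.destroy();
  });

  it('renders the title passed as a prop', () => {
    createComponent();

    expect(wrapper.text()).toContain('Write docs');
  });
});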
When to use component tests
- Vue components
When not to use component tests
- Vue applications: Vue applications may contain many components. Testing them on a component level requires too much effort. Therefore they are tested on frontend integration level.
- HAML templates: HAML templates contain only Markup and no frontend-side logic. Therefore they are not complete components.
What to mock in component tests
- DOM: Operating on the real DOM is significantly slower than on the virtual DOM.
- Properties and state of the component under test: Similar to testing classes, modifying the properties directly (rather than relying on methods of the component) avoids side effects.
- Vuex store: To avoid side effects and keep component tests simple, Vuex stores are replaced with mocks.
- All server requests: Similar to unit tests, when running component tests, the backend may not be reachable, so all outgoing requests need to be mocked.
- Asynchronous background operations: Similar to unit tests, background operations cannot be stopped or waited on. This means they continue running in the following tests and cause side effects.
- Child components: Every component is tested individually, so child components are mocked. See also
shallowMount()
What not to mock in component tests
- Methods or computed properties of the component under test: By mocking part of the component under test, the mocks are tested and not the real component.
- Functions and classes independent from Vue: All plain JavaScript code is already covered by unit tests and needs not to be mocked in component tests..
Frontend integration tests
Integration tests cover the interaction between all components on a single page. Their abstraction level is comparable to how a user would interact with the UI.
When to use integration tests
- Page bundles (
index.jsfiles in
app/assets/javascripts/pages/): Testing the page bundles ensures the corresponding frontend components integrate well.
- Vue applications outside of page bundles: Testing Vue applications as a whole ensures the corresponding frontend components integrate well.
What to mock in integration tests
- HAML views (use fixtures instead): Rendering HAML views requires a Rails environment including a running database, which you cannot rely on in frontend tests.
- All server requests: Similar to unit and component tests, when running component tests, the backend may not be reachable, so all outgoing requests must be mocked.
- Asynchronous background operations that are not perceivable on the page: Background operations that affect the page must be tested on this level. All other background operations cannot be stopped or waited on, so they continue running in the following tests and cause side effects.
What not to mock in integration tests
- DOM: Testing on the real DOM ensures your components work in the intended environment. Part of DOM testing is delegated to cross-browser testing.
- Properties or state of components: On this level, all tests can only perform actions a user would do. For example: to change the state of a component, a click event would be fired.
- Vuex stores: When testing the frontend code of a page as a whole, the interaction between Vue components and Vuex stores is covered as well.
About controller tests
GitLab is transitioning from controller specs to request specs.
In an ideal world, controllers should be thin. However, when this is not the case, it’s acceptable to write a system or feature test without JavaScript instead of a controller test. Testing a fat controller usually involves a lot of stubbing, such as:
controller.instance_variable_set(:@user, user)
and the use of methods deprecated in Rails 5.
White-box tests at the system level (formerly known as System / Feature tests)
Formal definitions:
These kinds of tests ensure the GitLab Rails application (for example,
gitlab-foss/
gitlab) works as expected from a browser point of view.
Note that:
- knowledge of the internals of the application is still required
- data needed for the tests are usually created directly using RSpec factories
- expectations are often set on the database or objects state
These tests should only be used when:
- the functionality/component being tested is small
- the internal state of the objects/database needs to be tested
- it cannot be tested at a lower level
For instance, to test the breadcrumbs on a given page, writing a system test makes sense since it’s a small component, which cannot be tested at the unit or controller level.
Only test the happy path, but make sure to add a test case for any regression that couldn’t have been caught at lower levels with better tests (for example, if a regression is found, regression tests should be added at the lowest level possible).
Frontend feature tests combine those guidelines with this page.
When to use feature tests
- Use cases that require a backend, and cannot be tested using fixtures.
- Behavior that is not part of a page bundle, but defined globally.
Relevant notes
A
:js flag is added to the test to make sure the full environment is loaded:
scenario 'successfully', :js do
  sign_in(create(:admin))
end
The steps of each test are written using Capybara methods.
XHR (XMLHttpRequest) calls might require you to use
wait_for_requests in between steps, such as:
find('.form-control').native.send_keys(:enter)

wait_for_requests

expect(page).not_to have_selector('.card')
The reasons why we should follow these best practices are as follows:
- System tests are slow to run because they must commit the transactions in order for the running application to see the data (and vice-versa). In that case we need to truncate the database after each spec instead of rolling back a transaction (the faster strategy that's in use for other kinds of tests). This is slower than transactions, however, so we want to use truncation only when necessary.
Black-box tests at the system level, aka end-to-end tests
Formal definitions:
GitLab consists of multiple pieces such as GitLab Shell, GitLab Workhorse, Gitaly, GitLab Pages, GitLab Runner, and GitLab Rails. All these pieces are configured and packaged by Omnibus GitLab.
The QA framework and instance-level scenarios are part of GitLab Rails so that they’re always in-sync with the codebase (especially the views).
Note that:
- knowledge of the internals of the application is not required
- data needed for the tests can only be created using the GUI or the API
- expectations can only be made against the browser page and API responses
Every new feature should come with a test plan.
See end-to-end tests for more information.
Note that
qa/spec contains unit tests of the QA framework itself, not to be
confused with the application’s unit tests or
end-to-end tests.
Smoke tests
Smoke tests are quick tests that may be run at any time (especially after the pre-deployment migrations).
These tests run against the UI and ensure that basic functionality is working.
See Smoke Tests for more information.
GitLab QA orchestrator
GitLab QA orchestrator is a tool that allows testing that all these pieces integrate well together by building a Docker image for a given version of GitLab Rails and running end-to-end tests (i.e. using Capybara) against it.
Learn more in the GitLab QA orchestrator README.
- The behavior you are testing is not worth the time spent running the full application; for example, if you are testing styling, animation, edge cases, or small actions that don't involve the backend, you should write an integration test using Frontend integration tests.
Return to Testing documentation | https://docs.gitlab.com/14.3/ee/development/testing_guide/testing_levels.html | 2021-11-27T05:36:50 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.gitlab.com |
ITtsEngineSite.Actions Property
Definition
Determines the action or actions the engine should perform.
public: property int Actions { int get(); };
public int Actions { get; }
member this.Actions : int
Public ReadOnly Property Actions As Integer
Property Value
An
int containing the sum of one or more members of the
TtsEngineAction enumeration. | https://docs.microsoft.com/en-us/dotnet/api/system.speech.synthesis.ttsengine.ittsenginesite.actions?view=netframework-4.8 | 2021-11-27T07:15:17 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.microsoft.com |
Username: the username used to login to the Samba server.
Password: the password to login to the Samba server.
Share: The share on the Samba server to mount.
Remote Subfolder: The remote subfolder inside the Samba share to mount (optional, defaults to /). To assign the Nextcloud logon username automatically to the subfolder, use
$userinstead.
See Configuring External Storage (GUI) for additional mount options and information.
See External Storage authentication mechanisms for more information on authentication schemes.
SMB update notifications
Nextcloud can use smb update notifications to listen for changes made to the share from outside of Nextcloud, so new and changed files are detected without a full rescan.
Update notifications are not supported when using ‘Login credentials, save in session’ authentication. Using login credentials is only supported with ‘Login credentials, save in database’.
Even when using 'Login credentials, save in database' or 'User entered, stored in database' authentication, the notify process cannot use the saved credentials to attach to the smb shares, because the notify process does not run in the context of a specific user. In those cases you can provide the username and password using the
--username and
--password arguments.
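For example (a sketch — the mount ID, user, and password are placeholders, and option names can differ between Nextcloud releases):

# list the configured external storages to find the mount ID
occ files_external:list

# keep the notify listener running for mount 1 with dedicated credentials
occ files_external:notify --username=smbservice --password=secret 1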
Decrease sync delay
Any updates detected by the notify command will only be synced to the client after the Nextcloud cron job has been executed
(usually every 15 minutes). If this interval is too high for your use case, you can decrease it by running
occ files:scan --unscanned --all
at the desired interval. Note that this might increase the server load and you’ll need to ensure that there is no overlap between runs. | https://docs.nextcloud.com/server/22/admin_manual/configuration_files/external_storage/smb.html | 2021-11-27T04:50:38 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../../_images/smb.png', 'Samba external storage configuration.'],
dtype=object) ] | docs.nextcloud.com |
The GNU Project's
aspell package is executed by (but not linked or
compiled into) Webinator for spell-checking and "Did you mean..."
queries. Complete source code and documentation is available at
or
or by contacting Thunderstone tech support and requesting a CD
containing the source. Sending of a CD will require payment of
shipping and handling charges by the requestor.
aspell is
governed by the terms of the GNU Lesser GPL, which is reproduced on
here. | https://docs.thunderstone.com/site/vortexman/aspell.html | 2021-11-27T05:38:40 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.thunderstone.com |
REST guidelines¶
In this section REST guidelines are stated. Communication between the service calling the provided API and Onegini IdP is implemented using REST.
Profile¶
The profile object from the Onegini Java SDK is used in some of the API’s below is an example.
{ "gender": "M", "name": { "first_name": "John", "last_name": "Doe", "initials": "J", "display_name": "John J Doe Jr, MSc" }, "date_of_birth": "1995-05-24", "email_addresses": [ { "primary": true, "value": "[email protected]" }, { "primary": false, "value": "[email protected]", "tag": "BUSINESS" }, { "primary": false, "value": "[email protected]", "verified": true } ], "phone_numbers": [ { "primary": true, "value": "+12125551234", "tag": "MOBILE" }, { "primary": false, "value": "+3160123456" } ], "addresses": [ { "primary": true, "tag": "ALTERNATIVE", "street_name": "Pompmolenlaan", "house_number": 9, "house_number_addition": "2nd floor", "postal_code": "3447 GK", "city": "Woerden", "region": "Utrecht", "country_code": "NL", "company_name": "Onegini", "attention": "John Doe" }, { "primary": false, "street_name": "Main Street", "house_number": 1, "postal_code": "01A A34", "city": "Mytown", "country_code": "GB" } ], "custom_attributes": [ { "name":"myCRM", "value":"ABC123DEF456" } ], "preferred_locale": "en_GB" }
Stateless communication¶
The communication between the service calling the API and Onegini IdP is stateless.
Non cacheable communication¶
The communication between the service calling the API and Onegini IdP is not cacheable. Onegini IdP will mark the communication as being so:
Cache-Control: no-store Pragma: no-cache
JSON¶
The service using the API can use JSON to communicate with Onegini IdP. To receive JSON messages another header must be added:
Accept: application/json
UTF-8 encoding¶
Onegini IdP uses UTF-8 encoding:
Content-Type: application/json;charset=UTF-8
Generic error response¶
When an error occurs in the API a general JSON error response is returned. The response is denoted below:
{ "error_code":"1001", "error_message":"Registration not available" }
Bad request error response¶
In case the API call is missing some of the parameters or the one provided do not match expected type the application will respond with 400 BAD REQUEST and following body:
{ "error_code":"1041", "error_message":"The request received by the server was invalid or malformed" }
Security¶
All APIs are protected with HTTP basic authentication.
The Authorization header needs to be added every request.
The Authorization header is constructed as follows:==
Metadata¶
Events are stored for state changes in Onegini IdP. These events can contain meta data about the request that triggered this change.
The following HTTP request headers are not stored as meta data for security reasons:
authorization
cookie
proxy-authorization
Extra audit information¶
The custom HTTP headers
x-onegini-api-agent-user and
x-onegini-api-agent-app can be used to store the identifier of the person and application that caused the API call.
Examples:
- user identifier or name of a customer service agent that triggers password reset or changes the e-mail address for an end-user
- identifier of the application that creates or deletes accounts based on data in an external system
Additional API configuration¶
The custom HTTP header
X-Onegini-Api-Configuration can be used to modify IDP configuration in API request context only. Currently following options are supported:
userNotificationEnabled - allows to disable email notifications on Person API updates.
Example
X-Onegini-Api-Configuration: userNotificationEnabled=false
Concurrent requests to API¶
Please note that currently concurrent requests to API that relate to the same person are not supported. In case it's necessary to perform multiple API calls for the same person, please execute them sequentially. | https://docs-single-tenant.onegini.com/cim/stable/idp/api-reference/rest-guidelines.html | 2021-11-27T06:20:42 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs-single-tenant.onegini.com |
Patching Nextcloud
Applying a patch
Patching server
Navigate into your Nextcloud server’s root directory (contains the
status.phpfile)
Now apply the patch with the following command:
patch -p 1 < /path/to/the/file.patch
Note
There can be errors about not found files, especially when you take a patch from GitHub there might be development or test files included in the patch. when the files are in build/ or a tests/ subdirectory it is mostly being
Patching apps
Navigate to the root of this app (mostly
apps/[APPID]/), if you can not find the app there use the
sudo -u www-data php occ app:getpath APPIDcommand to find the path.
Now apply the patch with the same command as in Patching server
Reverting a patch
Navigate to the directory where you applied the patch.
Now revert the patch with the
-Roption:
patch -R -p 1 < /path/to/the/file.patch
Getting a patch from a GitHub pull request
If you found a related pull request on GitHub that solves your issue, or you want to help developers and verify a fix works, you can get a patch for the pull request.
Using as an example.
Append
.patchto the URL:
Download the patch to your server and follow the Applying a patch steps.
In case you are on an older version, you might first need to go the the correct version of the patch.
You can find it by looking for a link by the
backportbot-nextcloudor a developer will leave a manual comment about the backport to an older Nextcloud version. For the example above you the pull request for Nextcloud 21 is at and the patch at | https://docs.nextcloud.com/server/22/admin_manual/issues/applying_patch.html | 2021-11-27T06:28:32 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['../_images/getting-a-patch-from-github.png',
'backportbot-nextcloud linking to the pull request for an older version.'],
dtype=object) ] | docs.nextcloud.com |
This package contains:
The resource adaptor type and resource adaptor
Source code for example services
Ant scripts to deploy the resource adaptor and example services to Rhino
Resource adaptor type API Javadoc
The SIP RA Type API is based on JAIN SIP, with some proprietary extensions for SLEE applications.
The examples include B2BUA, FMFM, Location, Presence, Proxy and Registrar. | https://docs.rhino.metaswitch.com/ocdoc/books/devportal-downloads/1.0/downloads-index/sip.html | 2021-11-27T05:32:07 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rhino.metaswitch.com |
Set or Change Customer Markups
This utility program creates and changes customer markup rates for a range of customers.
Running the Utility
- Select System > Custom Utilities II > Misc Utilities > Set Customer Markups from the RTA main menu (SIMM).
- Enter the starting and ending facility and customer number or press F1 to make the selection from a lookup list.
- Select the checkbox for the items to change and then enter the new markup values or settings.
To change the tax flags for shop supplies, select the Change tax shop supplies checkbox. The Tax shop supplies checkbox will become available. To charge customers shop supplies, select the Tax shop supplies checkbox. To not charge shop supplies, leave the Tax shop supplies checkbox blank. The system will flag/unflag customer records accordingly.
The Change tax outside parts checkbox works in the same manner as the Change tax shop supplies checkbox. | https://docs.rtafleet.com/rta-manual/miscellaneous-utilities/set-or-change-customer-markups/ | 2021-11-27T06:16:45 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rtafleet.com |
The following issues are known in the VMware vRealize Automation for ServiceNow ITSM application:
Junk sys_id: In the VMware vRealize Automation for ServiceNow ITSM application, when a catalog item or project is deleted, if the deleted catalog item or project is part of a defined entitlement, then these deleted records are seen as junk values in the entitlement record.
The VMware vRealize Automation for ServiceNow ITSM application displays Boolean type fields as checkboxes in catalog items.
If there are multiple catalog items with the same name, then the VMware vRealize Automation for ServiceNow ITSM application displays the latest created catalog item and earlier created catalog items are removed.
While deleting an endpoint make sure that data is in the VMware application scope. If not, an error message displays for the cross scope data deletion post endpoint is deleted and data is removed successfully.
When a shared resource is stopped being shared, the UI of the unshared resource becomes distorted. Reload the page to resolve this issue.
In VMware vRealize Automation, if a Cloud Template is having a property group or property definition defined in the input, then the same Cloud Template will fail in ServiceNow.
For the Service Portal and Native UI, the RITM displays extra fields that are not part of the request form.
The vRealize Automation Configure Items for the catalog items does not support the vRealize Automation for ServiceNow ITSM application.
If you have created the following fields in vRealize Automation then while performing import in ServiceNow, the following fields are not created:
deploymentName
description
project
For the User Portal, if any catalog item has a dot in its versions then the catalog item will not fetch the dependent drop-down values.
In the ServiceNow Quebec release, the catalog item requests having a password field are getting failed. This issue occurs as in the custom scoped application, the ITSM application is unable to use getDecryptedValue() for masked variable in catalog item. This is a ServiceNow known issue for Quebec release.
In the Native UI, if a day-2 action catalog item contains the check-box or label fields then the ServiceNow displays the cross-scope info messages while loading the day-2 catalog items. This is a ServiceNow known issue.
While fetching the Date Time field from the external sources, an extra T displays for the time in the ServiceNow Native UI.
If in the custom form, you have customized the deployment name field using the source data as default value/Read only/External then multiple deployment fields displays on the User Portal.
On the Service Portal, duplicate data type fields are displaying in the RITM if the catalog item have multiple versions. | https://docs.vmware.com/en/vRealize-Automation/services/config-guide/GUID-3F666430-316D-47FD-9C04-F60F9E77879A.html | 2021-11-27T05:07:54 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.vmware.com |
01. GETTING STARTED [4] 1. Download The Theme Package 2. WordPress Information 3. Recommended PHP Configuration Limits 4. Files Included In The Package 02. INSTALLATION [4] 01. How To Install Your Theme 02. How To Install Plugins Included 03. Activate & Save Permalinks 04. How To Install A Plugin Not Included 03. DEMO CONTENT 04. UPDATES [2] 1. How to Update Ciloe theme 2. How To Update Plugins Included 05. GENERAL [3] 1. Custom Site Identity 2. Google Map API Key 3. Custom Options (Page) 06. STYLING [3] 01. How To Change The Color 02. How To Add Your Custom CSS 03. Homepage Settings 07. HEADER [2] 01. Header Layout 02. How to change logo & mobile logo 08. MENUS [2] 1. Create New Menu 2. Create New Mega Menu 09. FOOTER [2] 1. Choose footer layout 2. Create or edit a footer 10. MOBILE LAYOUT [4] 01. Enable header mobile 02. Enable shop mobile 03. Enable product mobile 04. Display slideshow on mobile 11. SHORTCODE PIN MAPPER [3] 1. Product Pin Mapper 2. How to use Product Pin Mapper 3. Ziss Options 12. TRANSLATIONS [2] 01. How to translate the theme 02. How to translate the plugin 13. TROUBLESHOOTING [2] 01. Troubleshooting Theme Installation 02. Troubleshooting the Demo Content Import 14. NEWSLETTER [2] 01. Newsletter Popup 02. Newsletter Builder 15. SIZE GUIDE [2] 01. Create new Size Guide 02. How to use Size Guide in the product? 16. PRODUCT BUILDER [2] 01. Create new product with WPBakery Shortcode (Product Builder) 02. How to export the product style which you have created?? Home / Fami Themes Documentation / Ciloe / 06. STYLING 02. How To Add Your Custom CSS Simply go to Appearance > Customize > Additional CSS and add your custom CSS there. At last, don’t forget click “Publish” to activate your custom CSS on your site. Example: Like this picture below: Doc navigation < 01. How To Change The Color 03. Homepage Settings > Was this page helpful? Yes No | https://docs.famithemes.net/docs/ciloe/06-styling/02-how-to-add-your-custom-css/ | 2021-11-27T05:38:41 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.famithemes.net |
vngshare¶
vngshare is the stand-alone mode of ngshare. It stands for Vserver-like Notebook Grader Share. It is similar to vserver and allows easy testing. For details about vserver, see “Development History” below.
Install¶
For detailed instructions, see Developer Installation.
pip3 install tornado jupyterhub sqlalchemy cd ngshare python3 vngshare.py [--host <bind_IP_address> [--port <port_number>]]
Default Behavior¶
vngshare by default enables debug (e.g. verbose error output). It allows developers to view and reset database content easily. Users can be authenticated by simply passing in their username in GET / POST requests (see Authentication).
vngshare will create a database at
/tmp/ngshare.db and store uploaded files in
/tmp/ngshare/. Though there is no file system APIs like in vserver, unauthorized users can easily corrupt your data. So do not use in production.
Development History¶
The development of
ngshare (backend) requires collaborating with frontend development and requires solving technical issues, so our plan breaks the development into different stages.
- Develop
vserver(see Project Structure) with Unix file system APIs. This allows frontend to forward all file system calls (e.g. read file, write file) to another server. It allows frontend to test the idea when backend is implementing next stage.
- Develop
vserverwith nbgrader APIs (e.g. create course, release assignment). After this the frontend can begin large changes to the exchange mechanism by replacing file system calls with nbgrader API calls. At this point no authentication is made.
- Add authentication to
vservernbgrader APIs. To make things simple the frontend just needs to send the username, and the backend trusts what frontend does. During the first three stages, the backend can concurrently investigate how to set up a JupyterHub service.
- Port
vserver’s nbgrader APIs to
ngshare(final API server). There should be minimal effort in both backend and frontend as long as JupyterHub service can be set up correctly. The front end need to change the address of the server and send an API token instead of username; the backend need to copy the logic of
vserver.
- Maintain
ngshare, fix any bugs and implement any features as frontend requests.
Currently we are at stage 5.
Historical Project Structure¶
This project used to has 2 parts
ngshareis the final API server that will be used in nbgrader in production. Written as Tornado Web Server and using SQLAlchemy.
vngsharestands for Vserver-like Notebook Grader Share. It has the same functionality as
ngsharebut is built as a stand-alone server (does not require JupyterHub environment), which makes testing easier.
vserveris a simple and vulnerable API server, written in Flask, that allows testing the project structurte and development of frontend without waiting for backend.
- Mar 7, 2020: Since
ngshareis already mature,
vserveris no longer maintained.
- May 9, 2020:
vserveris migrated to | https://ngshare.readthedocs.io/en/latest/contributer_guide/vngshare.html | 2021-11-27T04:52:11 | CC-MAIN-2021-49 | 1637964358118.13 | [] | ngshare.readthedocs.io |
python-netdiscover¶
The python-netdiscover is a simple wrapper for the netdiscover reconnaissance tool.
This library offers a simple way to create scans from a python script and analyse the results.
Notes¶
This tool needs to be run as root. It is necessary to be presented on the system the netdiscover tool. The library will look for the netdiscovery binary in the following paths:
- netdiscover
- /usr/bin/netdiscover
- /usr/sbin/netdiscover
- /usr/local/bin/netdiscover
- /sw/bin/netdiscover
- /opt/local/bin/netdiscover
If netdiscovery is not present in any of the paths above, you can specifie path with the argument netdiscover_path on Discover class.
disc = Discover(netdiscover_path="path_of_netdiscover") | https://python-netdiscover.readthedocs.io/en/latest/readme.html | 2021-11-27T05:32:26 | CC-MAIN-2021-49 | 1637964358118.13 | [] | python-netdiscover.readthedocs.io |
The SAS facility provides Resource Adaptor and service developers an interface for integrating with the Metaswitch Service Assurance Server, an end-to-end tracing system. SAS provides an integrated end-to-end view of calls passing through an operator’s network. It combines traces from all network elements with reporting capability into a complete trace of the call that can be examined at multiple levels of detail to determine how the call was processed.
The principal interface for reporting data to SAS is the Trail.
Trails are created by the SAS facility, either explicitly, when an RA calls
startTrail() or implicitly, when an RA or service calls
getOrCreateTrail() with an activity reference.
Trails typically last the lifetime of an activity but may be shared by multiple activities, e.g. a database lookup in call setup will use the trail of the SIP dialog or transaction.
SAS trails are composed of two data message types that are reported by the network element, Events and Markers. Each event and marker in a trail is reported asynchronously to the SAS server. SAS events are functional events that affect the processing of a call, e.g. a network message or a decision made by a service and the data that was used. SAS markers are informational data about a trail, typically used for search or correlation between trails. Both events and markers can contain parameters to provide information for display, search or correlation.
Bundles and Mini-bundles
A bundle file is a YAML document mapping event names to human readable descriptions. SAS requires event decoding bundle files to display the events received. Correlation and storage do not require the bundle file, it is only used at display time.
Rhino extends the SAS bundle model to use composable mini-bundles that are assembled at runtime into a bundle to export for loading into SAS. Each network element may only use one bundle when reporting to a SAS instance so Rhino builds this bundle from all the mini-bundles found in components deployed to a namespace. Developers of resource adaptors and services that use the SAS facility must write mini-bundle files to describe the events their components report.
Mini-bundles contain a set of named event descriptions and enumerated values used to construct a SAS bundle. These are combined with system-identifying information configured in Rhino to produce the bundle SAS will use to decode messages. Each mini-bundle file starts with a version, followed by a set of events and, optionally, enums listing values that can be expanded from integer "static" parameters in event messages. A component may report events from multiple mini-bundles. Events have symbolic names to support use by multiple components. They must be packaged in a deployment jar that is installed into Rhino with the component that reports the events. This may be the same jar as the component or another that the component depends on.
After deployment the system operator exports the merged bundle and installs it into the SAS UI. At this time the component bundles are combined with the configured Resource ID and written with numeric IDs to form a bundle file SAS can load. If multiple versions of a mini-bundle are deployed, only the latest version will be used. For more information on how Rhino combines mini-bundles see SAS Bundle Generation and SAS Bundle Mappings in the Rhino Administration and Deployment Guide
Rhino comes with an Ant task for creating enum classes from bundle files generate-bundle-enums. Java enums are created for the SAS events and enums in the bundle file provided. The bundle files must be named for the package the enums will be created in.
Each event in a bundle must have a summary and a level. It may optionally contain details and call-flow data. Call flow descriptions must contain data, protocol and direction, other attributes are optional but should be provided where available. All text attributes of events can contain parameterised text using the Liquid templating language. For full details of the event structure contact your Metaswitch representative.
For an example of a mini-bundle file, see Service Assurance Server (SAS) Tracing in the HTTP resource adaptor guide.
Invoking Trail Accessor
On event delivery to a service, the invoking Trail is made available through the Invoking Trail Accessor.
Calling
getInvokingTrail() will return the SAS trail attached to the ACI on which the event was fired.
On subsequent downcalls into RAs that result in a new SLEE activity being created via
SleeEndpoint.startSuspended, the invoking trail (if one exists) will automatically be attached to the new activity.
This behaviour means that a SAS trail will automatically be passed along through any number of downcalls into RAs and asynchronous events as long as the service always attaches to the ACI for the new activity.
In most cases, this behaviour is desired and saves one from having to add code that explicitly gets a trail from the invoking ACI and attaches it a new ACI.
If necessary, however, one can call
setInvokingTrail(Trail) or
remove() to manually set or clear the invoking trail before calling into a RA.
Trail Association and Colocation
SAS Trails that form part of the same call can be associated by calling
associate(Trail).
If the Rhino SAS facility is configured with multiple SAS servers, different trails may not be using the same server.
Associating trails that are using the same SAS server is more efficient, as a trail association message can be sent to the one server informing it that the trails form part of the same trail group. If the trails being associated are using different servers, then a generic correlation marker gets sent to each SAS server with the UUID of the one trail. The SAS reporting then needs to perform some additional work to correlate the trails.
To ensure related trails are colocated on the same SAS server, SLEE components can call
startColocatedTrail(Trail) or
startAndAssociateTrail(Trail, Scope).
These methods will create a new SAS trail using the same server as the given trail and, in the second case, will automatically send a trail association message to the server.
Entry point to the SAS event reporting facility.
Users of this facility create a
Trail, then use that to create and report events The
Trail interface provides two ways to create then report Events and Markers:
create a message, add parameters, then report the message
convenience methods that create a Marker or Event, with various combinations of parameters, then report it, in one call.
The SAS Facility supports communication with a federation of SAS servers.
As such, trails created with
startTrail() are distributed between servers by simple round-robin.
When starting a new trail that will be associated with an existing one, use
startColocatedTrail(Trail) or
startAndAssociateTrail(Trail, Scope)
to start a new trail on the same server as the given trail.
The primary interface for creating and reporting markers and events. A trail is a sequence of related events and markers representing the processing sequence for a dialog or transaction.
Provides access to the InvokingTrail. The InvokingTrail is the SAS trail attached to the activity owning the current event, at the time the event is delivered.
Superinterface of
EventMessage and
MarkerMessage with functionality common to both.
A Message sent to SAS may contain parameters to be used when decoding the message on the SAS server for display or correlation.
Variable-length parameters
Thread-safety
There are some critical thread-safety issues to consider for parameters added to messages.
Parameters are not copied or marshalled into the message until
report() is called, and then only if SAS tracing is enabled.
This means two things:
Parameters must not be modified until the
report()method is called.
Large or complex objects can be passed to this method, and if tracing is disabled or the event is discarded and not reported, it will have negligible memory or CPU cost
The
threadSafeParam(byte[]) method allows callers to add a parameter that will not be copied, even after
report() is called.
This should be used for parameters which the caller plans to modify and has therefore defensively copied or marshalled into a new byte array.
Object parameter handling
Parameters passed to
varParam(Object) and the multi-parameter equivalents will be handled differently depending on their type:
null— Encoded as zero length byte array.
byte[]— copied directly into the message
java.nio.ByteBuffer— Warning: Unsupported. Will be coerced to zero length byte array. Use
threadSafeParam(byte[])instead.
java.lang.String— encoded as UTF-8 and copied into the message
implements
EncodeableParameter— call
EncodeableParameter#encode(ByteBuffer)copy bytes written to stream into the message
implements
MarshalableParameter— call
MarshalableParameter#marshal()and copy the returned
byte[]into the message
any other type — call
Object#toString()and proceed as for
java.lang.String
The
varParam method should not be used for parameters that implement
EnumParameter). It will still add the parameter but
will log an error message.
staticParam(EnumParameter) should be used for enum parameters.
As noted above, this marshalling/copying happens when the
report() method is called, not when the parameters are added to the message.
There are exceptions to this. These conversions happen when the parameter is added:
nullparameter to empty byte array
ByteBufferto empty byte array
EnumParameterto its integer value
Methods to set message fields specific to Events.
An EventMessage is a message to SAS describing a network event or processing step within a network service. EventMessages have an ID used to look up a description from a bundle deployed to the SAS server and contain parameters holding information about the state of the system that led to the event being reported.
Methods to set message fields specific to Markers.
A MarkerMessage is a message sent to SAS to provide context about a trail. The marker may be used for correlating multiple trails into a trace or branch. It may also be used for searching for a trace in the SAS UI.
Some markers have special meaning to the SAS server, the START, END and FLUSH markers indicate when a trail is started, when one is ended, and when no more data is expected in the next few seconds.
Ant task to generate Java enums from mini-bundle files. Contains an implicit fileset parameter that identifies the set of SAS bundle files to create Java enums for.
Task attributes are:
dir: The base directory to search for bundle files destDir: The output base directory where enum classes will be created eventsClassName: Optional. The classname for the generated events enum
com/opencloud/slee/services/example/sas/sas-bundle.yamlfound in
${resources}/sas-bundles.
The output classes will be created in
${src}, including
com.opencloud.slee.services.example.sas.SasEvent and any enums representing SAS enums in the bundle.
<generate-bundle-enums
sas/com.opencloud.test.rhino.sas.bundle.ra.yamlrelative to
${resources}.
The output classes will be created in
${src}, including
com.opencloud.test.rhino.sas.bundle.ra.RaSasEvent and any enums representing SAS enums in the bundle.
<oc:generate-bundle-enums <include file="sas/com.opencloud.test.rhino.sas.bundle.ra.yaml"/> </oc:generate-bundle-enums> | https://docs.rhino.metaswitch.com/ocdoc/books/rhino-documentation/2.6.0/rhino-extended-apis/slee-facilities/sas-facility.html | 2021-11-27T04:51:53 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.rhino.metaswitch.com |
Manually Send Your Log File
Scan2CAD gives you the option to send a message along with your log file to Scan2CAD support from within the software.
Learn more about sending your log file.
In some circumstances, you may be unable to send your message due to blocking on corporate networks or internet connectivity issues.
How do I manually send my log file?
- Follow the same instructions to send your log file.
- If Scan2CAD is not able to automatically send your log file through the software, the application will ask you to save the file to your computer instead.
- Save the file to your computer and email Scan2CAD support with your log file attached.
What happens after I send my log file?
After emailing Scan2CAD Support with your log file we will respond within 24 hours to help you.
In the unlikely event that you are unable to launch the app to access the splash screen you can find the log files in the following locations:
On Windows
1. Navigate to %appdata%\..\Local\Temp\Scan2CAD
2. Send us the contents of this folder.
Note: If you’re not sure how to find the %appdata% folder, you can click Start, or the Cortana search icon in Windows 10 and type %appdata% in the search box.
On MacOS
- Please navigate to the folder located at ~/Library/Application Support/Scan2CAD
- Send us the contents of this folder
Note: If you don’t know how to locate the folder you can: Go to Finder click Go > Connect to Folder > paste the above file path in the field. | https://docs.scan2cad.com/article/71-manually-send-log-file | 2021-11-27T05:40:55 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.scan2cad.com |
Update Your App Delegate
Now that all the required frameworks have been added to your project, you need to implement them in the code base.
This section outlines how to implement the frameworks in the code base, using either Objective_C or Swift.
Please make sure you have already completed the following steps:
- Retrieve the YOUR_APP_KEY and YOUR_APP_GROUP_KEY values from the HurreeSDK website – it is required for the instructions below
- Initialize HurreeSDK in your application's main thread with four actions in the
didRegisterForRemoteNotificationsWithDeviceToken(Objective-C):
- Initialise the
AnalyticsSingletonObject
- Call
deportKeyValuesfunction with the required keys
- Call
deportUserValuesfunction with required keys
- Call
sendLoginDetails
Please note that YOUR_OBJECT_NAME refers to an object name of your choice.
To update your app delegate
Using Objective-C
Add the following three lines of code to the top of your AppDelegate.h file:
The first two lines contain the bridging heading that imports the SDK for an Objective-C project:
The final line contains the property:
@property(nonatomic, strong) AnalyticsSingleton *YOUR_OBJECT_NAME
Add the following lines of code (including YOUR_APP_KEY and YOUR_APP_GROUP_KEY values) to the
application:didRegisterForRemoteNotificationsWithDeviceToken method in your AppDelegate.m file:
This enables the application to start the SDK, start the login process and receive push notifications from the server:
:^(id result) { //Print Result }];
Using Swift
Add following line of code to the
application(_:didRegisterForRemoteNotificationsWithDeviceToken:)
method in your AppDelegate.swift file:
This enables the application to start the SDK, start the login process and receive push notifications from the server:
let YOUR_OBJECT_NAME = AnalyticsSingleton.sharedInstance(){(result) in //Print Result } | https://docs.hurree.co/ios_sdk/update_your_app_delegate.html | 2017-08-16T21:40:41 | CC-MAIN-2017-34 | 1502886102663.36 | [] | docs.hurree.co |
Release Notes
Magento Community Edition 1.9.2.4.
The SUPEE-7405 v 1.1 patch bundle includes the following:.
Patch Download and Installation
If you have not yet installed the previous patches, please do so now to bring your system up to date.
- SUPEE-7405 v. 1.1
Review Best Practices
-.
See also: | http://docs.magento.com/m1/ce/user_guide/magento/release-notes-ce-1.9.2.4.html | 2017-08-16T21:33:22 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.magento.com |
Glossary Item Box
This topic shows you how to use stored procedures that are already created on your SQL server explicitly. For the purposes of this example we will use a simple class Person:
And two stored procedures.
One for delete operations:
One for insert operations:
The above stored procedures can be easily executed using the scope.GetSqlQuery() method or by reverse mapping them as shown in this topic.
In order to execute the stored procedure using the GetSqlQuery() method you will need to do this:
Here are all the parameters that you need to pass to the GetSqlQuery method:
Note that the Reverse Mapping wizard generates the appropriate code based on the current backend.
After you have defined the GetSqlQuery in the above way you need to actually execute the procedure. This can be done by calling the execute method of the query. Depending on what the procedure requires either no parameters can be passed or an object array containing all the required parameters.Note that the query is not executed until the result is required. A simple call for the count of the query result will execute the query.Using this approach all kind of stored procedures can be executed. If you do not want to write this code by yourself you can use the reverse engineering wizard to generate static methods with the same behavior. | http://docs.telerik.com/help/openaccess-classic/programming-with-openaccess-using-getsqlquery-to-execute-stored-procedures.html | 2017-08-16T17:47:18 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.telerik.com |
Add support of persistence task environment¶
Use Case¶
There are situations when same environment is used across different tasks. For example you would like to improve operation of listing objects. For example:
- Create hundreds of objects
- Collect baseline of list performance
- Fix something in system
- Repeat the performance test
- Repeat fixing and testing until things are fixed.
Current implementation of Rally will force you to recreate task context which is time consuming operation.
Problem Description¶
Fortunately Rally has already a mechanism for creating task environment via contexts. Unfortunately it's atomic operation: - Create task context - Perform subtask scenario-runner pairs - Destroy task context
This should be split to 3 separated steps. | http://rally.readthedocs.io/en/latest/feature_request/persistence_benchmark_env.html | 2017-08-16T17:14:44 | CC-MAIN-2017-34 | 1502886102309.55 | [] | rally.readthedocs.io |
Applies To: System Center 2016
You can install DPM 2016 on Windows Server 2012 R2, or on Windows Server 2016. If you are installing DPM 2016 on Windows Server 2012 R2, you must upgrade an existing DPM installation from DPM 2012 R2 with Update Rollup 10 or greater. Before you upgrade or install DPM 2016, please read the Installation prerequisites.
Upgrade path for DPM 2016
If you are going to upgrade from a previous version of DPM to DPM 2016, make sure your installation has the necessary updates:
- Upgrade DPM 2012 R2 to DPM 2012 R2 Update Rollup 10. You can obtain the Update Rollups from Windows Update.
- Upgrade DPM 2012 R2 Update Rollup 10 to DPM 2016.
- Update the agents on the protected servers.
- Upgrade Windows Server 2012 R2 to Windows Server 2016.
- Upgrade DPM Remote Administrator on all production servers.
- Backups will continue without rebooting your production server.
For information about upgrading all technologies in System Center, see the article, Upgrade to System Center 2016.
Upgrade steps for DPM
- To install DPM, double-click Setup.exe to open the System Center 2016 Wizard.
- Under Install, click Data Protection Manager. This starts Setup. Agree to the license terms and conditions and follow the setup wizard.
Some DPM 2016 features, such as Modern Backup Storage,. For instructions on installing DPM, see the article, Installing DPM 2016.
Migrating the DPM database during upgrade
You may want to move the DPM Database as part of an upgrade. For example, you are merging instances of SQL Server. You are moving to a remote more powerful SQL server. You want to add fault tolerance by using a SQL Server cluster; or you want to move from a remote SQL server to a local SQL server or vice versa. DPM 2016 setup allows you to migrate the DPM database to different SQL Servers during an upgrade.
Possible database migration scenarios
- Upgrading DPM 2012 R2 using a local instance and migrating to a remote instance of SQL Server during setup.
- Upgrading DPM 2012 R2 using a remote instance and migrating to a local instance of SQL Server during setup.
- Upgrading DPM 2012 R2 using a local instance and migrating to a remote SQL Server Cluster instance during setup.
- Upgrading DPM 2012 R2 using a local instance and migrating to a different local instance of SQL Server during setup.
- Upgrading DPM 2012 R2 using a remote instance and migrating to a different remote instance of SQL Server during setup.
- Upgrading DPM 2012 R2 using a remote instance and migrating to a remote SQL Server Cluster instance during setup.
Preparing for a database migration
The new SQL Server that you want to use to migrate the DPM database to must have the same SQL Server requirements, setup configuration, firewall rules, and DPM Support files (sqlprep) installed before performing the DPM Upgrade.
Once you have the new instance of SQL Server installed and prepped for being used by DPM, you must make a backup of the current DPM 2012 R2 UR10 KB3143871 (4.2.1473.0) or a later database and restore it on the new SQL Server.
Pre-upgrade steps: Backup and restore DPM 2012 R2 DPM database to a new SQL instance
In this example, we will prepare a remote SQL Server cluster to use for the migration.
- On the System Center Data Protection Manager 2012 R2 server or on the remote SQL Server hosting the DPM database, start Microsoft SQL Management Studio and connect to the SQL instance hosting the current DPM 2012 R2 DPMDB.
Right-click the DPM database, and under Tasks, select the Back Up… option.
Add a backup destination and file name, and then select OK to start the backup.
After the backup is complete, copy the output file to the remote SQL Server. If this is a SQL Cluster, copy it to the active node hosting the SQL instance you want to use in the DPM upgrade. You have to copy it to the Shared Cluster disk before you can restore it.
- On the Remote SQL Server, start Microsoft SQL Management Studio and connect to the SQL instance you want to use in the DPM upgrade. If this is a SQL Cluster, do this on the Active node that you copied the DPM backup file to. The backup file should now be located on the shared cluster disk.
Right-click the Databases icon, then select the Restore Database… option. This starts the restore wizard.
Select Device under Source, and then locate the database backup file that was copied in the previous step and select it. Verify the restore options and restore location, and then select OK to start the restore. Fix any issue that arise until the restore is successful.
After the restore is complete, the restored database will be seen under the Databases with the original name. This Database will be used during the upgrade. You can exit Microsoft SQL Management Studio and start the upgrade process on the original DPM Server.
If the new SQL Server is a remote SQL server, install the SQL management tools on the DPM server. The SQL management tools must be the same version matching the SQL server hosting the DPMDB.
Starting upgrade to migrate DPMDB to a different SQL Server
Note
If sharing a SQL instance, run the DPM installations (or upgrades) sequentially. Parallel installations may cause errors.
After the pre-migration preparation steps are complete, start the DPM 2016 Installation process. DPM Setup shows the information about current instance of SQL Server pre-populated. This is where you can select a different instance of SQL Server, or change to a Clustered SQL instance used in the migration.
Change the SQL Settings to use the instance of SQL Server you restored the DPM Database to. If it’s a SQL cluster, you must also specify a separate instance of SQL Server used for SQL reporting. It's presumed that firewall rules and SQLPrep are already ran. You have to enter correct credentials and then click the Check and Install button.
Prerequisite check should succeed, press NEXT to continue with the upgrade.
Continue the wizard.
After setup is complete, the corresponding database name on the instance specified will now be DPMPB_DPMServerName. Because this may be shared with other DPM servers, the naming convention for the DPM database will now be: DPM2016$DPMDB_DPMServerName
Adding Storage for Modern Backup Storage
To store backups efficiently, DPM 2016 uses Volumes. Disks can also be used to continue storing backups like DPM 2012 R2.
Add Volumes and Disks
If you run DPM 2016 on Windows Server, you can use volumes to store backup data. Volumes provide storage savings and faster backups. You can give the volume a friendly name, and you can change the name. You apply the friendly name while adding the volume, or later by clicking the Friendly Name column of the desired volume. You can also use PowerShell to add or change friendly names for volumes.
To add a volume in the administrator console:
In the DPM Administrator console, select the Management feature > Disk Storage > Add.
In the Add Disk Storage dialog, select an available volume > click Add > type a friendly name for the volume ** > click OK.
If you want to add a disk, it must belong to a protection group with legacy storage. Those disks can only be used for those protection groups. If the DPM server doesn't have sources with legacy protection, the disk won't appear. See the topic, Adding disks to increase legacy storage, for more information on adding disks. You can't give disks a friendly name.
Assign Workloads to Volumes
DPM 2016 allows the user to specify which kinds of workloads should be assigned to which volumes. For example, expensive volumes that support high IOPS can be configured to store only the workloads that require frequent, high-volume backups like SQL with Transaction Logs. To update the properties of a volume in the storage pool on a DPM server, use the PowerShell cmdlet, Update-DPMDiskStorage.
Update-DPMDiskStorage
Syntax
Parameter Set: Volume
Update-DPMDiskStorage [-Volume] <Volume> [[-FriendlyName] <String> ] [[-DatasourceType] <VolumeTag[]> ] [-Confirm] [-WhatIf] [ <CommonParameters>]
The changes made through PowerShell are reflected in the UI.
Protecting Data Sources
To begin protecting data sources, create a Protection Group. The following procedure highlights changes or additions to the New Protection Group wizard.
To create a Protection Group:
In the DPM Administrator Console, select the Protection feature.
On the tool ribbon, click New.
The Create new Protection Group wizard opens.
Click Next to advance the wizard to the Select Protection Group Type screen.
On the Select Protection Group Type screen, select the type of Protection Group to be created and then click Next.
On the Select Group Members screen, in the Available members pane, DPM lists the members with protection agents. For the purposes of this example, select volume D:\ and E:\ to add them to the Selected members pane. Once you have chosen the members for the protection group, click Next.
On the Select Data Protection Method screen, type a name for the Protection group, select the protection method(s) and click Next. If you want short term protection, you must use Disk backup.
On the Specify Short-Term Goals screen specify the details for Retention Range and Synchronization Frequency, and click Next. If desired, click Modify to change the schedule when recovery points are taken.
The Review Disk Storage Allocation screen provides details about the selected data sources, their size, the Space to be Provisioned, and Target Storage Volume.
The storage volumes are determined based on the workload volume allocation (set using PowerShell) and the available storage. You can change the storage volumes by selecting other volumes from the drop-down menu. If you change the Target Storage, the Available disk storage dynamically changes to reflect the Free Space and Underprovisioned Space.
The Underprovisioned Space column in Available disk storage, reflects the amount of additional storage needed if the data sources grow as planned. Use this value to help plan your storage needs to enable smooth backups. If the value is zero, then there are no potential problems with storage in the foreseeable future. If the value is a number other than zero, then you do not have sufficient storage allocated - based on your protection policy and the data size of your protected members.
The remainder of the New Protection Group wizard is unchanged from DPM 2012 R2. Continue through the wizard to complete creation of your new protection group.
Migrating legacy storage to Modern Backup Storage
After upgrading DPM 2012 R2 to DPM 2016 and the operating system to Windows Server 2016, you can update your existing protection groups to the new DPM 2016 features. By default, protection groups are not changed, and continue to function as they were configured in DPM 2012 R2. Updating protection groups to use Modern Backup Storage is optional. To update the protection group, stop protection of all data sources with Retain Data, and add the data sources to a new protection group. DPM begins protecting these data sources the new way.
In the Administrator Console, select the Protection feature, and in the Protection Group Member list, right-click the member, and select Stop protection of member....
The Remove from Group dialog opens.
In the Remove from Group dialog, review the used disk space and the available free space in the storage pool. The default is to leave the recovery points on the disk and allow them to expire per their associated retention policy. Click OK.
If you want to immediately return the used disk space to the free storage pool, select Delete replica on disk. This will delete the backup data (and recovery points) associated with that member.
Create a new protection group that uses Modern Backup Storage, and include the unprotected data sources.
Adding Disks to increase legacy storage
If you want to use legacy storage with DPM 2016, it may become necessary to add disks to increase legacy storage. To add disk storage:
On the Administrator Console, click Management.
Select Disk Storage.
On the tool ribbon click Add.
The Add Disk Storage dialog opens.
In the Add Disk Storage dialog, click Add disks.
DPM provides a list of available disks.
Select the disks, click Add to add the disks, and click OK.
New PowerShell cmdlets
For DPM 2016, two new cmdlets: Mount-DPMRecoveryPoint and Dismount-DPMRecoveryPoint are available. Click the cmdlet name to see its reference documentation.
Enable Cloud Protection
You can back up a DPM server to Azure. The high level steps are:
- create an Azure subscription,
- register the server with the Azure Backup service,
- download vault credentials and the Azure Backup Agent,
- configure the server's vault credentials and backup policy,
For more information on backing up DPM to the cloud, see the article, Preparing to backup workloads to Azure with DPM. | https://docs.microsoft.com/en-gb/system-center/dpm/upgrade-to-dpm-2016 | 2017-08-16T17:40:32 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.microsoft.com |
Privacy
All system text boxes that are marked as secure are hidden from the recorded video automatically. In addition we support a way to mark your custom sensitive views so they will be treated similarly.
Protected view
Marking view as protected in code
You can protect any view that you want by importing
Bugsee.h header and implement the following code:
Objective-C
self.myView.bugseeProtectedView = YES;
Swift
self.myView.bugseeProtectedView = YES
Marking view as protected in storyboard
- Open storyboard or xib file with interface
- Select view that you need to protect
- Choose the Identity inspector tab on right panel(3rd tab)
- Add a User Defined Runtime Attribute called bugseeProtectedView as shown in the picture.">
Going dark
In some rare cases you might want to conceal the whole screen and stop recording events completely. The following API's will come in handy, no data is being gathered between the calls to pause and resume.
Objective-C
// To stop video recording use [Bugsee pause]; // And to continue [Bugsee resume];
Swift
// To stop video recording use Bugsee.pause() // two methods for hooking your own filters, via a delegate or a block that we will call for every event about to be recorded.
Regardles of the method you chose, the principle is similar, for every event to be recorded, Bugsee will call your method and provide you with BugseeNetworkEvent object. It is your method's responsibility to clean up all user identifiable data from that structure and call decisionBlock() to pass it back to Bugsee.
Using delegate
Your class should implement BugseeDelegate protocol and it must set itself as the delegate for Bugsee.
Objective-C
-(void)bugseeFilterNetworkEvent:(BugseeNetworkEvent *)event completionHandler:(BugseeNetworkFilterDecisionBlock)decisionBlock{ NSError * error; // Below is an example code that will remove access_token from all URLs going through the filter. NSRegularExpression * regex = [NSRegularExpression regularExpressionWithPattern:@"access_token=[0-9a-z\\-]*&" options:NSRegularExpressionCaseInsensitive error:&error]; event.url = [regex stringByReplacingMatchesInString:event.url options:0 range:NSMakeRange(0, event.url.length) withTemplate:@""]; // Send the event further, call with nil if you want to omit this event altogether. decisionBlock(event); } // ..somewhere within the class Bugsee.delegate = self;
Swift
private func bugseeFilterNetworkEvent(event: BugseeNetworkEvent, completionHandler decisionBlock: BugseeNetworkFilterDecisionBlock){ do { let regex = try NSRegularExpression.init(pattern: "&access_token=[0-9a-z\\-]*", options: NSRegularExpressionOptions.CaseInsensitive) let range = NSMakeRange(0 , event.url.characters.count) event.url = regex.stringByReplacingMatchesInString(event.url, options: .ReportProgress, range: range, withTemplate: "") } catch { print("Somethings went wrong!") } // Send the event further, call with nil if you want to omit this event altogether. decisionBlock(event) }
Using filter with block
Alternatively you can set up a filter by registering a block to be executed for every event.
Objective-C
[Bugsee setNetworkEventFilter:^(BugseeNetworkEvent *event, BugseeNetworkFilterDecisionBlock decisionBlock) { // modify BugseeNetworkEvent as you wish here // Send the event further, call with nil if you want to omit this event altogether. decisionBlock(event); } // unregister a block before deallocating a class in which it was registered [Bugsee removeNetworkEventFilter];
Swift
Bugsee.setNetworkEventFilter { (event, decisionBlock) in // modify BugseeNetworkEvent as you wish // Send the event further, call with nil if you want to omit this event altogether. decisionBlock(event) } // always call removeNetworkEventFilter method if you deallocate class where setNetworkEventFilter: was called Bugsee.removeNetworkEventFilter()
Network Events
The delegate or hook is going to be called several times for each network request, depending on its lifecycle. Usually for successful requests its going to be called twice, once with the request event (request headers and body) and once after completion and will contain headers and body of the response. | https://docs.bugsee.com/sdk/ios/privacy/ | 2017-08-16T17:08:39 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['../protect-view-storyboard.png', 'Story Board Protect View'],
dtype=object) ] | docs.bugsee.com |
Document Type
Document
Recommended Citation
Department of Attorney General, State of Rhode Island, "12th Annual Open Government Summit: Access to Public Records Act & Open Meetings Act, 2010" (2010). School of Law Conferences, Lectures & Events. 86.
Included in
Administrative Law Commons, Civil Procedure Commons, Law and Society Commons, Legislation Commons, State and Local Government Law Commons
Patrick C. Lynch, Attorney General | http://docs.rwu.edu/law_pubs_conf/86/ | 2017-08-16T17:24:25 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.rwu.edu |
We are pleased to announce support for iOS 10 in our latest iOS SDK release: 8.0.0. With the launch of iOS 10, we are adding support for iOS Rich Notifications and content extensions. Get started creating iOS 10 apps today by:
Installing our iOS SDK 8.0.0.
Enabling a notification service extension in your project.
Sending rich notifications via our API by using the Media Attachment key.
Sending rich notifications through our Message Composer, Automation, and Message Personalization UIs.
iOS Rich Notifications
Rich Notifications on iOS let you add images, animated GIFs, audio and video to push notifications. This gives you a number of new ways to engage with your users right from the notification.
The example below includes an image with banner thumbnail, notification summary, and two interactive response options.
This is a major step towards parity with Android, where rich notifications have proven to be a boon to marketers. Read more about rich notification engagement rates on our blog.
See the new Media options in our composers’ Optional Message Features.
Python Library Support
We have also updated our Urban Airship Python library to support all new iOS 10 features. Enjoy!
Further Reference:
iOS SDK 8.0.0 Migration guide: iOS SDK 8.0 Migration Guide
iOS SDK 8.0.0 Documentation: iOS platform documentation | https://docs.urbanairship.com/whats-new/2016-09-30-ios-10-support/ | 2017-08-16T17:11:33 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['https://docs.urbanairship.com/images/ios-10-collapsed.png', None],
dtype=object)
array(['https://docs.urbanairship.com/images/ios-rich-notification.png',
None], dtype=object) ] | docs.urbanairship.com |
Tutorials
This page lists the available tutorials for libpointmatcher. The Beginner Section is aimed at the more casual user and contains high-level information on the various steps of point cloud registration. The Advanced Section is targeted at those with existing experience with point cloud registration and proficiency in C++ development. Those who wish to contribute to libpointmatcher can follow the guidelines in the Developer section.
Beginner
- What is libpointmatcher about?
- What can I do with libpointmatcher?
- Ubuntu: How to compile libpointmatcher
- Windows: How to compile libpointmatcher
- Mac OS X: How to compile libpointmatcher
- What the different data filters do?
- Example: Applying a chain of data filters
- Example: An introduction to ICP
- The ICP chain configuration and its variants
- Configuring libpointmatcher using YAML
- Supported file types and importing/exporting point clouds
Advanced
- How to link a project to libpointmatcher?
- How are point clouds represented?
- Example: Writing a program which performs ICP
- How to move a point cloud using a rigid transformation?
- Example: Configure an ICP solution without yaml
- Measuring Hausdorff distance, Haussdorff quantile and mean residual error? See this discussion for code examples.
- How to compute the residual error with
ErrorMinimizer::getResidualError(...)See the example code provided here.
- How to I build a global map from a sequence of scans? See the example align_sequence.cpp.
- How to minimize the error with translation, rotation and scale? See this example.
- How to do a nearest neighbor search between two point clouds without an ICP object? See the comments here.
Developer
Note: if you don't find what you need, don't hesitate to propose or participate to new tutorials.
| http://libpointmatcher.readthedocs.io/en/latest/ | 2017-08-16T17:32:41 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['./images/banner_light.jpeg', 'alt tag'], dtype=object)
array(['./images/banner_dark.jpeg', 'alt tag'], dtype=object)] | libpointmatcher.readthedocs.io |
.
(Apigee Edge customers on paid accounts receive multiple sites with duplicate portal environments: Dev, Test, and Live.).
If you click the Apigee logo on your portal, you'll get to the home page.
Configure the connection between the portal and Edge
The.
There are three pieces of information that the portal needs to communicate with Edge:
-.It is recommended that you create a user with Developer Administrator privileges exclusively for connecting to Edge from the portal to minimize the risk that the user will be deleted. If the designated user is deleted on Edge, then the portal will no longer be able to connect to Edge.
To view the connection information:
- In the Drupal administration menu, select Configuration > Dev Portal. Create?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://docs.apigee.com/developer-services/content/creating-developer-portal?rate=W8l0e-zYnvfV2SBybDtQvaDn9TsLAv61rXIGU_BeKow | 2016-07-23T15:02:41 | CC-MAIN-2016-30 | 1469257823072.2 | [array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/user-settings-organization.png',
None], dtype=object)
array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/myApps_v24_1_v2.png',
None], dtype=object)
array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/portal-home-page_v24_2.png.png',
None], dtype=object) ] | docs.apigee.com |
Network requests in Office 2016 for Mac
Office policies for network proxy servers. The details in this article are intended to compliment the Office 365 URL and address ranges article, which includes endpoints for computers running Microsoft Windows.otepoints:.
Researcher
The following network endpoints apply to both Office 365 Subscription. Office 2016 applications., attempt 2016 for Mac build 15.25 [160726] or later.
Telemetry
Office 2016: 'W. To enable crash reporting without sending usage telemetry, the following preference can be set:
defaults write com.microsoft.errorreporting IsMerpEnabled -bool TRUE 2016 builds 15.27 or later, as they include specific fixes for working with NTLM and Kerberos servers.
See also
Office 365 URLs and IP address ranges | https://docs.microsoft.com/en-us/office365/enterprise/network-requests-in-office-2016-for-mac?redirectSourcePath=%252fid-id%252farticle%252fpermintaan-jaringan-di-office-2016-untuk-mac-afdae969-4046-44b9-9adb-f1bab216414b | 2018-09-18T20:19:08 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.microsoft.com |
Control.
XYFocus
Control. Up XYFocus
Control. Up XYFocus
Control. Up XYFocus
Property
Up
Definition
public : DependencyObject XYFocusUp { get; set; }
DependencyObject XYFocusUp(); void XYFocusUp(DependencyObject xyfocusup);
public DependencyObject XYFocusUp { get; set; }
Public ReadWrite Property XYFocusUp As DependencyObject
<control XYFocusUp="{x:Bind dependencyObjectValue}"/>
The object that gets focus when a user presses the Directional Pad (D-pad) up.
Remarks
XYFocusUpUpUp")) { button1.XYFocusUp = button2; } | https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.controls.control.xyfocusup | 2018-09-18T19:23:15 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.microsoft.com |
Create a Cluster¶
On this page
- A. Open the Create New Cluster Dialog
- B. Configure the Cluster Cloud Provider & Region
- C. Select the Cluster Tier
- D. Select any Additional Settings
- E. Enter the Cluster Name
- F. Enter your Payment Information and Deploy your Cluster
- G. (Optional) Create a MongoDB Administrative User
- H. Connect to your cluster
Atlas-managed MongoDB deployments, or “clusters”, can be either a replica set or a sharded cluster. All Atlas clusters run using the WiredTiger storage engine. This tutorial covers creating and configuring a new Atlas cluster.
To learn how to modify an existing Atlas cluster, see Modify a Cluster.
Important
If this is the first
M10+ dedicated paid cluster for the
selected region or regions and you plan on creating one or more
VPC peering connections, please review the documentation on
VPC Peering Connections before
continuing.
A. Open the Create New Cluster Dialog¶
Note
Before creating a cluster, check that you have the correct organization and project selected. Click the Context drop down in the top left corner to select a specific organization or project. For more information on creating and managing organizations and projects, see Organizations and Projects.
Each Atlas project supports up to 25 clusters. Please contact Atlas support for questions or assistance regarding the cluster limit. To contact support, select Support from the left-hand navigation bar of the Atlas UI.
Go to the Clusters view and click the Add New Cluster or Build a New Cluster button to display the Create New Cluster dialog. As you build your cluster, Atlas displays the associated costs at the bottom of the screen. You can hover over the displayed cost for additional estimates.
B. Configure the Cluster Cloud Provider & Region¶
Select your preferred cloud provider and region. The choice of cloud provider and region affects the configuration options for the available clusters, network latency for clients accessing your cluster, the geographic location of the nodes in your cluster, and the cost of running the cluster.
Atlas supports deploying
M0 Free Tier and
M2/M5 shared-tier clusters on all cloud providers but only a
subset of each cloud provider’s regions. Regions marked as
Free Tier Available support deploying
M0 Free Tier
clusters. For a list of regions that support
M2/M5 clusters, see:
Regions marked as ★ are Recommended regions that provide higher availability compared to other regions. For more information, see:
The number of availability zones, zones, or fault domains in a region has no affect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made of replica sets with a minimum of three MongoDB nodes.
From the Cloud Provider & Region section, you can also Select Multi-Region, Workload Isolation, and Replication Options.
Select Multi-Region, Workload Isolation, and Replication Options¶
To configure additional cluster options, toggle Select Multi-Region, Workload Isolation, and Replication Options (M10+ clusters) to Yes. Use these options to add cluster nodes in different geographic regions with different workload priorities, and direct application queries to the appropriate cluster nodes.
AWS Only
If this is the first
M10+ dedicated paid cluster for the
selected region or regions and you plan on creating one or more
VPC peering connections, please review the documentation
on VPC Peering Connections before
continuing.
The following options are available when configuring cross-region clusters:
- Electable nodes for high availability
Having additional regions with electable nodes increases availability and helps better withstand data center outages.
The first row lists the Highest Priority region. Atlas prioritizes nodes in this region for primary eligibility. For more information on priority in replica set elections, see Member Priority.
Click Add a region to add a new row for region selection and select the region from the dropdown. Specify the desired number of Nodes for the region. The total number of electable nodes across all regions in the cluster must be 3, 5, or 7.
Backup Data Center Location
If this is the first cluster in the project and you intend to enable continuous snapshot backups, Atlas selects the backup data center location for the project based on the geographical location of the cluster’s Highest Priority region. To learn more about how Atlas creates the backup data center, see Fully Managed Backup Service.
When selecting a Region, regions marked as Recommended provide higher availability compared to other regions. For more information, see:
Each node in the selected regions can participate in replica set elections, and can become the primary as long as the majority of nodes in the replica set are available.
You can improve the replication factor of single-region clusters by increasing the number of Nodes for your Highest Priority region. You do not have to add additional regions to modify the replication factor of your Highest Priority region.
To remove a region, click the trash icon icon next to that region. You cannot remove the Highest Priority region.
Atlas provides checks for whether your selected cross-regional configuration provides availability during partial or whole regional outages. To ensure availability during a full region outage, you need at least one node in three different regions. To ensure availability during a partial region outage, you must have at least 3 electable nodes in a Recommended region or at least 3 electable nodes across at least 2 regions.
- Read-only nodes for optimal local reads
Use read-only nodes to optimize local reads in the nodes’ respective service areas.
Click Add a region to select a region in which to deploy read-only nodes. Specify the desired number of Nodes for the region.
Read-only nodes cannot provide high availability because they cannot participate in elections, or become the primary for their cluster. Read-only nodes have distinct replica set tags that allow you to direct queries to desired regions.
To remove a read-only region, click the trash icon icon next to that region.
- Analytics nodes for workload isolation
Use analytics nodes to isolate queries which you do not wish to contend with your operational workload. Analytics nodes are useful for handling data analysis operations, such as reporting queries from BI Connector for Atlas. Analytics nodes have distinct replica set tags which allow you to direct queries to desired regions.
Click Add a region to select a region in which to deploy analytics nodes. Specify the desired number of Nodes for the region.
Analytics nodes cannot participate in elections or become the primary for their cluster.
To remove an analytics node, click the trash icon icon next to that region.
See also
For additional information on replica set tag sets, see Configure Replica Set Tag Sets in the MongoDB manual.
Note
Having a large number of regions or having nodes spread across long distances may lead to long election times or replication lag.
Important.
C. Select the Cluster Tier¶
Select your preferred cluster tier. The selected cluster tier dictates the memory, storage, and IOPS specification for each data-bearing server [2] in the cluster.
Atlas categorizes the cluster tiers as follows:
- Shared Clusters
Sandbox replica set clusters for getting started with MongoDB. These clusters deploy to a shared environment with access to a subset of Atlas features and functionality. For complete documentation on shared cluster limits and restrictions, see Atlas M0 (Free Tier), M2, and M5 Limitations.
Atlas provides an option to deploy one
M0Free Tier replica set per project. You can upgrade an
M0Free Tier cluster to an
M2+paid cluster at any time.
M2and
M5are low-cost shared starter cluster tiers. These cluster tiers provide the following additional features and functionality compared to
M0cluster tiers:
- Backups for your cluster data.
- Increased storage.
- API access.
Note
Atlas deploys MongoDB 4.0 for all cluster tiers in the Shared Clusters tier. However, Shared Clusters do not support all functionality in MongoDB 4.0. See Atlas M0 (Free Tier), M2, and M5 Limitations for details.
Atlas supports shared cluster deployment in a subset of Cloud Providers and Regions. Atlas greys out any shared cluster tier not supported by the selected cloud service provider and region. For a complete list of regions that support shared cluster deployments, see:
- Dedicated Clusters (for development and low-traffic applications)
Cluster tiers that support development environments and low-traffic applications.
These cluster tiers support replica set deployments only, but otherwise provide full access to Atlas features and functionality.
- Dedicated Clusters (for production and high-traffic applications)
Clusters that support production environments with high traffic applications and large datasets.
These cluster tiers support replica set and sharded cluster deployments with full access to Atlas features and functionality.
Some cluster tiers have variants, denoted by the ❯ character. When you select these cluster tiers, Atlas lists the variants and tags each cluster tier to distinguish their key characteristics.
- NVMe Storage on AWS
For applications which require low-latency and high-throughput IO, Atlas offers storage options on AWS which leverage locally attached ephemeral NVMe SSDs. The following cluster tiers have an NVMe option, with the size fixed at the cluster:
M40
M50
M60
M80
M200
M400
Clusters with NVMe storage use Cloud Provider Snapshots for backup. Backup cannot be disabled for NVMe clusters.
NVMe clusters use a hidden secondary node consisting of a provisioned volume with high throughput and IOPS to facilitate backup.
The following table highlights key differences between an
M0 Free
Tier cluster, an
M2 or
M5 shared starter cluster, and an
M10+ dedicated cluster.
For a complete list of M0 (Free Tier), M2, and M5 limitations, see Atlas M0 (Free Tier), M2, and M5 Limitations.
From the Cluster Tier section, you can also Customize Your Storage.
Customize Your Storage¶
Each cluster tier comes with a default set of resources. Clusters of size M10 and larger provide the ability to customize your storage capacity.
Atlas provides the following storage configuration options, depending on the selected cloud provider and cluster tier.
Cluster Class (AWS only)
Clusters of size M40 and larger on AWS offer multiple options, including:
Low CPU
General
Local NVMe SSD
Locally attached ephemeral NVMe SSDs offer the highest level of speed and performance.
Select the Class box with your preferred speed. Changes to cluster class affect cost.
Storage Capacity
The size of the server data volume. To change this, either:
- Specify the exact disk size in the text box, or
- Move the slide bar until the text box displays your preferred disk size.
Changes to storage capacity affect cost.
Auto-Expand Storage: Available on clusters of size M10 and larger. When disk usage reaches 90%, automatically increase storage by an amount necessary to achieve 70% utilization. To enable this feature, check the box marked Auto-expand storage when disk usage reaches 90%.
Changes to storage capacity affect cost.
Contact Atlas support for guidance on oplog sizing for clusters with automatic storage expansion enabled. For details on how Atlas handles reaching database storage limits, refer to the FAQ page.
IOPS (configurable for AWS only)
Atlas clusters on AWS of size M30 and greater allow you to customize the maximum IOPS rate of your cluster. To provision the IOPS rate of your cluster, check the box marked Provision IOPS and either:
- Specify the exact IOPS rate in the text box, or
- Move the slide bar until the text box displays your preferred IOPS rate.
Note
The available IOPS range for a cluster is tied to disk storage capacity. If you modify your cluster’s storage capacity, the range of available IOPS values changes as well.
If you do not choose to provision IOPS, the default IOPS rate changes as the cluster’s storage capacity changes.
Changes to IOPS provisioning affect cost.
Important
Atlas enforces the following minimum ratios by cluster tier to facilitate consistent network performance with large datasets.
Disk Capacity to RAM:
- <
M40: 3:1
M40: 50:1
- =>
M50: 100 to 1
Example
A cluster with 50 GB storage requires a value for IOPS of at least 150. To support 3 TB of disk capacity, you must select a cluster tier with at least 32 GB of RAM (M50 or higher).
For example, a cluster with 50 GB storage requires a value for IOPS of at least 150. To support 3 TB of disk capacity, you must select a cluster tier with at least 32 GB of RAM (M50 or higher).
Atlas has a 4 TB disk capacity limit on all replica sets and shards, regardless of the cluster tier. To expand total cluster storage beyond 4 TB, enable sharding.
For clusters with Auto-Expand Storage enabled, Atlas respects the calculated maximum storage for the selected cluster tier. Users whose disk capacity reaches the allowable limit receive notification by email.
For more information on the default resources and available configuration options for each cloud service provider, see:
See also
Connection Limits and Cluster Tier
D. Select any Additional Settings¶
From the Additional Settings section, you can
Select the MongoDB Version of the Cluster¶
Select the new MongoDB version from the Select a version dropdown. Atlas always deploys the cluster with the latest stable release of the specified version.
Atlas supports creating
M10+
paid tier clusters with the following MongoDB versions:
- MongoDB 3.4
- MongoDB 3.6
- MongoDB 4.0
- MongoDB 4.2 (beta)
M0 Free Tier and
M2/M5 shared-tier clusters only support
MongoDB 4.0.
As new maintenance releases become available, Atlas automatically upgrades to these releases via a rolling process to maintain cluster availability.
You can upgrade an existing Atlas cluster to a newer major MongoDB version, if available, when you scale a cluster. However, you cannot downgrade a cluster’s MongoDB version.
Important
If your project contains a custom role that uses actions introduced in a specific MongoDB version, you cannot create a cluster with a MongoDB version less than that version unless you delete the custom role.
Enable.
Atlas provides the following backup options for
M10+
clusters:
Deploy a Sharded Cluster¶
Important
You cannot deploy a sharded cluster with MongoDB 4.2. Atlas only supports MongoDB 4.2 on replica sets. replia host machines in a cluster affect cost, see Number of Servers.
For more information on sharded clusters, see Sharding in the MongoDB manual.
Configure the Number of Shards¶
This field is visible only if the deployment is a sharded cluster.
You can set the number of shards to deploy with the sharded cluster. You can have no fewer than 2 shards and no more than 50 shards.
Enable BI Connector for Atlas¶
To enable BI Connector for Atlas for this cluster, toggle Enable Business Intelligence Connector (M10 and up) to Yes.
Note.
If enabled, select the node type from which BI Connector for Atlas should read.
The following table describes the available read preferences for BI Connector for Atlas and their corresponding readPreference and readPreferenceTag connection string options.
The
nodeType read preference tag dictates the type of node BI Connector for Atlas
connects to. The possible values for this option are as follows:
ELECTABLErestricts BI Connector to the primary and electable secondary nodes.
READ-ONLYrestricts BI Connector to connecting to non-electable secondary nodes.
ANALYTICSrestricts BI Connector to connecting to analytics nodes.
Tip
When using a
readPreferenceof
"analytics", Atlas places BI Connector for Atlas on the same hardware as the analytics nodes from which BI Connector for Atlas reads.
By isolating electable data-bearing nodes from the BI Connector for Atlas, electable nodes do not compete for resources with BI Connector for Atlas, thus improving cluster reliability and performance.
For high traffic production environments, connecting to the Secondary Node(s) or Analytics Node(s) may be preferable to connecting to the Primary Node.
For clusters with one or more analytics nodes, select Analytics Node to isolate BI Connector for Atlas queries from your operational workload and read from dedicated, read-only analytics nodes. With this option, electable nodes do not compete for resources with BI Connector for Atlas, thus improving cluster reliability and performance.
The BI Connector generates a relational schema by sampling data from MongoDB. The following sampling settings are configurable:
Enable Encryption at Rest¶
Configure Encryption at Rest using your Key Management for your Atlas Project
You must configure the Atlas project for Encryption at Rest using your Key Management before enabling the feature for your Atlas clusters. To learn more, see Encryption at Rest using Customer Key Management.
Atlas supports the following Encryption at Rest providers:
Important
If you want to switch from one Encryption at Rest provider on your cluster to another, you must first disable Encryption at Rest for your cluster, then re-enable it with your desired Encryption at Rest provider. See Encryption at Rest using Customer Key Management.
Atlas encrypts all cluster storage and snapshot volumes,
ensuring the security of all cluster data at rest
(Encryption at Rest). Atlas
Project Owners can configure
an additional layer of encryption on their data at rest using the
MongoDB
Encrypted Storage Engine
and their Atlas-compatible Encryption at Rest provider.
To enable Atlas Encryption at Rest for this cluster, toggle Encryption At Rest with WiredTiger Encrypted Storage Engine (M10 and up) to Yes.
Atlas Encryption at Rest using your Key Management supports
M10 or greater replica set clusters backed by
AWS or
Azure only. Support for clusters deployed
on Google Cloud Platform (GCP) is in development. Atlas Encryption
at Rest supports encrypting Cloud Provider Snapshots only.
You cannot enable Encryption at Rest on a cluster using
Continuous Backups.
Atlas clusters using Encryption at Rest using your Key Management incur an increase to their hourly run cost. For more information on Atlas billing for advanced security features, see Advanced Security.
Important
If Atlas cannot access the Atlas project key management provider or the encryption key used to encrypt a cluster, then that cluster becomes inaccessible and unrecoverable. Exercise extreme caution before modifying, deleting, or disabling an encryption key or key management provider credentials used by Atlas.
Configure Additional Configuration Options¶
You can configure the following
mongod runtime options
on
M10+ paid tier clusters:
- Set Oplog Size [1]
Modify the oplog size of the cluster. For sharded cluster deployments, this modifies the oplog size of each shard in the cluster. This option corresponds to modifying the
replication.oplogSizeMBconfiguration file option for each
mongodin the cluster.
To modify the oplog size:
- Log into Atlas.
- For your desired cluster, click Edit Configuration from the ellipsis h icon menu.
- Click Additional Settings.
- Set the oplog to the desired size.
- Click Apply Changes.
You can check the oplog size by connecting to your cluster via the
mongoshell and authenticating as a user with the
Atlas adminrole. Run the
rs.printReplicationInfo()method to view the current oplog size and time.
Warning
Reducing the size of the oplog requires removing data from the oplog. Atlas cannot access or restore any oplog entries removed as a result of oplog reduction. Consider the ramifications of this data loss before reducing the oplog.
- Enforce Index Key Limit
- Enable or disable enforcement of the 1024-byte index key limit. Documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. If disabled,
mongodwrites documents that breach the limit but does not index them. This option corresponds to modifying the
failIndexKeyTooLongparameter via the
setParametercommand for each
mongodin the cluster.
- Allow Server-Side JavaScript
- Enable or disable execution of operations that perform server-side execution of JavaScript. This option corresponds to modifying the
security.javascriptEnabledconfiguration file option for each
mongodin the cluster.
- Set Minimum TLS Protocol Version [1]
Sets the minimum TLS version the cluster accepts for incoming connections. This option corresponds to configuring the
net.ssl.disabledProtocolsconfiguration file option for each
mongodin the cluster.
TLS 1.0 Deprecation
For users considering this option as a method for enabling the deprecated Transport Layer Security (TLS) 1.0 protocol version, please read What versions of TLS does Atlas support? before proceeding. Atlas deprecation of TLS 1.0 improves your security of data-in-transit and aligns with industry best practices. Enabling TLS 1.0 for any Atlas cluster carries security risks. Consider enabling TLS 1.0 only for as long as required to update your application stack to support TLS 1.1 or later.
- Require Indexes for All Queries
- Enable or disable the execution of queries that require a collection scan to return results. This option corresponds to modifying the
notablescanparameter via the
setParametercommand for each
mongodin the cluster.
E. Enter the Cluster Name¶
F. Enter your Payment Information and Deploy your Cluster¶
Click Create Cluster below the form to enter payment
information. Atlas does not prompt you for payment information
if you have already provided payment information for the
organization within which you are deploying
the cluster. For Atlas
M0 Free Tier clusters, Atlas
does not require payment information to deploy the cluster.
See Billing Overview for more information on Atlas billing and payments.
G. (Optional) Create a MongoDB Administrative User¶
Atlas only allows client connections to the cluster from entries in the project IP whitelist. Clients must also authenticate as a MongoDB database user associated to the project.
If you have not configured the project IP whitelist or MongoDB users, navigate to the Database Access and Network Access pages to configure basic project security.
When creating your first MongoDB database user, select the
Atlas admin role from the user configuration
dialog to create an administrative user. See
MongoDB Database User Privileges for more information on
MongoDB user privileges.
If you need to modify your existing project security settings navigate to the guilabel:Database Access, Network Access, and Advanced pages under the Security section in the navigation. See Security Features and Setup for complete documentation on Atlas project security settings.
H. Connect to your cluster¶
Once Atlas deploys your cluster, click Connect on the cluster to open the Connect dialog. To learn how to connect to your cluster, see Connect to a Cluster.
Open Ports 27015 to 27017 to Access Atlas Databases
If you use a whitelist on your firewall for network ports, open ports 27015 to 27017 to TCP and UDP traffic on Atlas hosts. This grants your applications access to databases stored on Atlas.
To configure your application-side networks to accept Atlas
traffic, we recommend using the Atlas API
Get All Clusters endpoint to retrieve
mongoURI from the
response elements. You can also use
the Get All MongoDB Processes endpoint to
retrieve cluster hostnames
(mongo-shard-00-00.mongodb.net, mongo-shard-00-01.mongodb.net etc).
You can parse these hostname values and feed the IP addresses programatically into your application-tier orchestration automation to push firewall updates.
See also | https://docs.atlas.mongodb.com/create-new-cluster/ | 2020-01-18T01:46:50 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.atlas.mongodb.com |
- Security Features and Setup >
- Set up a Private Endpoint
Set up a Private Endpoint¶
On this page
Feature unavailable in Free and Shared-Tier Clusters
This feature is not available for
M0 (Free Tier),
M2, and
M5 clusters. To learn more about which features are unavailable,
see Atlas M0 (Free Tier), M2, and M5 Limitations.
MongoDB Atlas supports private endpoints on AWS using the AWS PrivateLink feature. When you enable this feature, Atlas creates its own VPC and places clusters within a region behind a network load balancer in the Atlas VPC. Then you create resources that establish a one-way connection from your VPC to the network load balancer in the Atlas VPC using a private endpoint.
Connections to Atlas clusters using private endpoints offer the following advantages over other network access management options:
- Connections using private endpoints are one-way. Atlas VPCs can’t initiate connections back to your VPCs. This ensures your network trust boundary is not extended.
- Connections to private endpoints within your VPC can be made transitively from:
- Another VPC peered to the private endpoint-connected VPC.
- An on-premises data center connected with DirectConnect to the private endpoint-connected VPC. This enables you to connect to Atlas directly from your on-premises data center without whitelisting public IP addresses.
Considerations¶
High Availability¶
To ensure AWS PrivateLink connections to Atlas can withstand an availability zone outage, you should deploy VPC subnets to multiple availability zones in a region.
Private Endpoint-Aware Connection Strings¶
When you enable AWS PrivateLink, Atlas generates an SRV record for your VPC. This SRV record resolves to the network load balancer provisioned in the Atlas VPC, and assigns a unique port to each Atlas cluster node in the region for which you enabled AWS PrivateLink.
Clients connecting to Atlas clusters using AWS PrivateLink use private endpoint-aware connection strings containing SRV records:
Note
Only clients that use your VPC’s DNS can retrieve an SRV record from a private endpoint-aware connection string.
The SRV record used in a private endpoint-aware connection string contains a configuration that maps a unique port for each member in a cluster’s replica set to that hostname. The ports listed correspond to ports on the load balancer provisioned in the Atlas VPC. When you connect using AWS PrivateLink, all nodes in an Atlas cluster are accessible via the same hostname, with the load balancer resolving individual nodes by their port.
The following example shows a DNS lookup of the SRV record for a
AWS PrivateLink-enabled single-region cluster, showing three unique ports defined for the
cluster0-pl-0-k45tj.mongodb-dev.net hostname:
The hostname the SRV record contains is a CNAME record that resolves to the endpoint-specific regional DNS name that AWS generates for the interface endpoint. An alias record exists for each subnet you deployed the interface endpoint to. Each alias record contains the private IP address of the interface endpoint ENI for that subnet.
The following example shows the DNS lookup for the hostname in the SRV record, including the endpoint-specific regional DNS name for the interface endpoint and its alias records:
When a client in your VPC connects to an Atlas cluster using a private endpoint-aware connection string, the client attempts to establish a connection to the load balancer in the Atlas VPC through one of the interface endpoint ENIs. Your client’s DNS resolution mechanism handles which of the interface endpoint ENIs the hostname resolves to. If one ENI is unavailable the next is used. This is opaque to the driver or other connection mechanism. The driver is only aware of the hostname in the SRV record, listening on one port for each node in the cluster’s replica set.
See Connect to Atlas using a Private Endpoint to learn how to connect to Atlas clusters using private endpoint-aware connection strings.
IP Whitelists and VPC Peering with Private Endpoints¶
When Private Endpoints are enabled, you can still enable access to your Atlas clusters using other methods, such as public IP whitelisting and VPC peering.
Clients connecting to Atlas clusters using other methods use standard connection strings. Your clients might have to identify when to use private endpoint-aware connection strings and standard connection strings.
Limitations¶
You can’t use AWS PrivateLink to connect to Atlas clusters running MongoDB version 3.4 or earlier.
AWS PrivateLink must be active in all regions into which you deploy a multi-region cluster. You receive an error if AWS PrivateLink is active in some, but not all, targeted regions.
If you create private endpoints in more than one region, you can’t create more than one private endpoint in each region. If you create more than one private endpoint in a single region, you can’t create private endpoints in other regions.
You can use AWS PrivateLink in Atlas projects with up to 100 addressable targets per region. Use additional projects or regions to connect to addressable targets beyond this limit in the same region.
Addressable targets include:
- Each node in a replica set, excluding nodes that comprise a shard in a sharded cluster.
- Each
mongosinstance for sharded clusters.
- Each BI Connector for Atlas instance across all dedicated clusters in the project.
Note
To request a one-time increase to use AWS PrivateLink with up to 500 addressable targets per Atlas project, contact MongoDB Support.
When you delete all AWS PrivateLink endpoints for a region in Atlas, you must manually delete the private endpoint. AWS lists the endpoint as
rejected. Atlas can’t delete this resource because it lacks the required permissions.
Prerequisites¶
To enable connections to Atlas using private endpoints, you must:
- Have either the
Project Owneror
Organization Ownerrole in Atlas.
- Have an AWS user account with an IAM user policy that grants permissions to create, modify, describe, and delete endpoints. For more information on controlling the use of VPC endpoints, see the AWS Documentation.
- (Recommended): Install the AWS CLI.
- If you have not already done so, create your VPC and EC2 instances in AWS. See the AWS documentation for guidance.
Procedures¶
Configure an Atlas Private Endpoint¶
Enable clients to connect to Atlas clusters using private endpoints with the following procedure:
Select the AWS region in which you want to create the Atlas VPC, then click Next.¶
You must select the same region in which your AWS VPC resides.
Note
If your organization has no payment information stored, Atlas prompts you to add it before continuing.
Atlas creates VPC resources in the region you selected. This might take several minutes to complete.
Create the Atlas VPC Endpoint and your VPC Endpoint Interface.¶
Enter the following details about your AWS VPC:
Copy the command the dialog displays and run it using the AWS CLI.
Note
You can’t copy the command until Atlas finishes creating VPC resources in the background.
See Creating an Interface Endpoint to perform this task using the AWS CLI.
AWS creates the private endpoint and connects it from your VPC to the Atlas VPC using the details you provided.
You may receive an error like the following when you create the private endpoint:
If you receive this error, Atlas has deployed VPC resources into different availability zones than the ones to which you deployed your VPC subnets. To resolve this error:
- Describe the service endpoint on the
service-namefrom the command displayed in the CLI command in the Atlas UI.
- Deploy a VPC subnet into at least one of the availability zones the service endpoint supports.
- Create the private endpoint again using the VPC subnet IDs you deployed into the supported availability zones. If you’re using the AWS CLI, Replace the Subnet IDs in the Atlas UI with the VPC subnets you deployed. Copy the displayed command again, then run it using the AWS CLI.
Click Next.
Configure your resources’ security groups to send traffic to and receive traffic from the VPC endpoint.¶
For each resource that needs to connect to your Atlas clusters using AWS PrivateLink, the resource’s security group must allow all outbound traffic on all ports to the VPC endpoint.
See Adding Rules to a Security Group for more information.
Create a security group for your VPC endpoint to allow resources to access it.¶
This security group must allow inbound traffic on all ports from each resource that needs to connect to your Atlas clusters using AWS PrivateLink:
- In the AWS console, navigate to the VPC Dashboard.
- Click Security Groups, then click Create security group.
- Use the wizard to create a security group. Make sure you select your VPC from the VPC list.
- Select the security group you just created, then click the Inbound Rules tab.
- Click Edit Rules.
- Add rules to allow all inbound traffic from each resource in your VPC that you want to connect to your Atlas cluster.
- Click Save Rules.
- Click Endpoints, then click the endpoint for your VPC.
- Click the Security Groups tab, then click Edit Security Groups.
- Add the security group you just created, then click Save.
To learn more about VPC security groups, see the AWS documentation.
Verify that the AWS PrivateLink private endpoint is available.¶
You can connect to an Atlas cluster using the AWS PrivateLink private endpoint when all of the resources are configured and the private endpoint becomes available.
To verify that the AWS PrivateLink private endpoint is available:
In the Security section of the left navigation, click Network Access.
In the Private Endpoint tab, verify the following statuses for the region that contains the cluster you want to connect to using AWS PrivateLink:
See Troubleshoot AWS PrivateLink Connection Issues for more information.
Connect to Atlas using a Private Endpoint¶
You use a private endpoint-aware connection string when you connect to Atlas clusters using a private endpoint:
Note
See Private Endpoint-Aware Connection Strings for important considerations about private endpoint-aware connection strings.
Use a private endpoint-aware connection string to connect to an Atlas cluster with the following procedure:.
Select your preferred connection method.¶
In the Choose a connection method step, Atlas provides instructions for each listed connection method. Click your preferred connection method and follow the instructions given.
For connecting via a command line tool such as
mongodump or
mongorestore,
use the Command Line Tools tab for an
auto-generated template for connecting to your Atlas cluster with
your preferred tool.
Troubleshoot AWS PrivateLink Connection Issues¶
Check the status of your AWS PrivateLink connections.¶
The Private Endpoint tab on the Network Access page lists each AWS PrivateLink connection you’ve created. The Atlas Endpoint Service Status and Interface Endpoint Status fields show the status of each AWS PrivateLink connection.
Refer to the following statuses to help you determine the state of your AWS PrivateLink connections:
Atlas Endpoint Service Status
Interface Endpoint Status
Make sure that your security groups are configured properly.¶
For each resource that needs to connect to your Atlas clusters using AWS PrivateLink, the resource’s security group must allow all outbound traffic on all ports to the VPC endpoint.
See Adding Rules to a Security Group for more information.
Your VPC endpoint security group must allow inbound traffic on all ports from each resource that needs to connect to your Atlas clusters using AWS PrivateLink.
Whitelist instance IP addresses or security groups to allow traffic from them to reach the VPC endpoint security group. | https://docs.atlas.mongodb.com/security-private-endpoint/ | 2020-01-18T01:55:52 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.atlas.mongodb.com |
- Security Features and Setup >
- Configure Federated Authentication >
- Manage Organization Mapping for Federated Authentication
Manage Organization Mapping for Federated Authentication¶
On this page
When you map organizations to your Identity Provider, Atlas grants users who authenticate through the IdP membership in the selected organizations. You can give these users a default role in the mapped organizations. Organization mapping lets you configure a single IdP to grant users access to multiple Atlas organizations.
You can apply the same IdP to multiple organizations. You can assign each organization a single IdP.
Prerequisites¶
To complete this tutorial, you must have already linked an IdP to Atlas and mapped one or more domains to that IdP. For instructions on these procedures, see:
Federation Management Access¶
You can manage federated authentication from the Federation
Management Console. You can access the console as long as you are an
Organization Owner in one or more organizations that are
delegating federation settings to the instance.
Map an Organization.
Connect an Organization to the Federation Application¶
Click View Organizations.
Atlas displays all organizations where you are an
Organization Owner.
Organizations which are not already connected to the Federation Application have Connect button in the Actions column.
Click the desired organization’s Connect button.
After you connect the organization to the Federation Application, apply an IdP to the organization.
Apply an Identity Provider to the Organization¶
From the Organizations screen in the management console:
Click the Name of the organization you want to map to an IdP.
On the Identity Provider screen, click Apply Identity Provider.
Atlas directs you to the Identity Providers screen which shows all IdPs you have linked to Atlas.
For the IdP you want to apply to the organization, click Modify.
At the bottom of the Edit Identity Provider form, select the organizations to which this IdP applies.
Click Next.
Click Finish.
Select a Default User Role for the Organization¶
You can have Atlas grant users who authenticate through the IdP a default role in a mapped organization. You can select different roles for different organizations.
Note
The selected role only applies to users who authenticate through the IdP if they do not already have a role in the organization.
Procedure¶
- In the Federation Management Console, click Organizations in the left navigation.
- Click the Name of the organization for which you want to assign default permissions.
- In the Default User Role dropdown, select the desired role. To remove a default user role, click the times circle icon next to the dropdown.
Change an Organization’s Mapped Identity Provider¶
Reconfigure your IdP to change the organizations to which it’s mapped.
Unmap the Current Identity Provider¶
- Click Organizations in the left navigation.
- Click the Identity Provider of the organization whose IdP you wish to change.
- Click Modify for the IdP which is currently mapped to the organization.
- At the bottom of the Edit Identity Provider form, deselect the organization.
- Click Next.
- Click Finish.
Disconnect an Organization from the Federation Application¶
When you disconnect an organization from the Federation Application, Atlas no longer grants membership or a default organization role to users who authenticate through the IdP.
From the Federation Management Console:
- Click View Organizations.
- Open the Actions dropdown for the organization you want to disconnect.
- Click Disconnect.
- Click Confirm. | https://docs.atlas.mongodb.com/security/manage-org-mapping/ | 2020-01-18T02:05:06 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.atlas.mongodb.com |
. The HA and WLB tabs are available only when a pool is selected and the Snapshots tab is only available when a VM is selected.
ConsoleConsole
On this tab, you can run a console session on a VM or a managed server.
See also Run a Remote Console Session to read about the different types of remote VM console supported in XenCenter.
Switch to Remote Desktop or Switch to Default Desktop
Switches between Windows remote console types
Switch to Graphical Console or Switch to Text Console
Switches between Linux remote console types. You might:
Ensure that the Linux guest agent is installed on the VM to launch the SSH console.
Send Ctrl+Alt+Del
Sends the Ctrl+Alt+Del key sequence to the remote console.
Most keyboard shortcuts are transmitted to the server or VM when you use a remote console. However, your local system always intercepts the Ctrl+Alt+Del key sequence and prevents it from being sent if you type it in directly at the remote console.
Undock (Alt+Shift+U)
Undocks the Console tab into a floating window.
To shut down or reboot a server, install Citrix VM Tools, shut down, reboot or suspend a virtual machine from within the floating console window, select the lifecycle icon in the top-left corner of the window and then click), but this behavior. The Connection bar shows the name of the VM or server you are working on and including two controls: a Pin button to allow you to turn the Connection bar on permanently, and a Restore down button that you can click to exit full-screen mode.
You can control various console settings in the Options dialog box. For example, the text clipboard on your local machine is shared with the remote console by default. Items you cut or copy are placed on the clipboard and made available for pasting on either your local computer or on the remote console. You can turn clipboard sharing off and change various other console settings from the XenCenter Options dialog box; see Changing XenCenter Options.
GeneralGeneral
View general properties of the selected container, virtual machine, server, resource pool, template, or storage repository on the General tab; click Properties to set or change properties.
Copy any of the values shown on this pane to the Windows clipboard by right-clicking on the value and clicking Copy on the shortcut menu.
GPUGPU
The GPU tab allows you to view or edit the GPU placement policy, view the available GPUs and virtual GPU types. The GPUs are grouped the following articles:
Note:
- GPU Pass-through and Graphics Virtualization are available for Citrix Hypervisor Premium Edition customers, or those customers who have access to Citrix Hypervisor.
USBUSB
The USB tab allows you to pass through individual physical USB devices to a VM so the VM’s OS can use it as a local USB device. You can enable or disable passthrough by clicking the Enable Passthrough or Disable Passthrough button on the USB tab. To attach a USB, perform the following steps:
- Shut down the VM.
- Right-click the VM and select Properties.
- On the left pane, click USB.
- Click Attach.
- In the Attach USB dialog box, click Attach.
- Start the VM. The USB is now attached to the VM.
- In the same way, click Detach to detach the USB from the VM.
USB pass-through is supported only on the following HVM guests:
Windows
- Windows 7 SP1
- Windows 8.1
- Windows 10
- Windows Server 2008 SP2
- Windows Server 2008 R2 SP1
- Windows Server 2012
- Windows Server 2012 R2
- Windows Server 2016
Linux
- RHEL 7
- Debian 8
Note:
- USB passthrough supports a maximum of 6 USBs to be passed through to a single VM.
- Snapshot/Suspend/ Pool Migrate/ Storage Migrate operations are not supported when USB is passed through to VM.
- USB passthrough feature is available for Citrix Hypervisor Premium Edition customers.
- Plugging in untrustworthy USB devices to your computer might put your computer at risk. Assign USB devices with modifiable behavior only to trustworthy guest VMs.
- Do not boot BIOS from USB devices.
- Ensure that the USB device to passthrough is trustworthy and can work stably in normal Linux environment (for example, CentOS 7).
- USB device passthrough is blocked in a VM if high availability is enabled on the pool and the VM has restart priority as Restart. The USB attach button is disabled and the following message is displayed: The virtual USB cannot be attached because the VM is protected by HA. When configuring high availability for a pool, if a VM is not agile, the Restart option is disabled with the following tooltip: The VM has one or more virtual USBs. Restart cannot be guaranteed.
High availabilityHigh availability
On the HA tab for a pool, you can:
- Enable high availability using the Configure HA button.
- Change the pool’s high availability configuration using the Configure HA button.
- Disable high availability.
When high availability has been enabled, you can see high availability status (failure capacity and server failure limit) and the status of the selected heartbeat storage repositories on the HA tab.
For more information, see the following articles:
HomeHome
The Home tab allows you to add a server to the list of managed servers or open a browser window to find out more about Citrix Hypervisor.
MemoryMemory
You can enable Dynamic Memory Control (DMC) and configure dynamic memory limits on the Memory tab. VMs can have a static memory allocation or can use DMC. DMC allows the amount of memory allocated to a VM to be adjusted on-the-fly as memory requirements on the server change without having to restart the VM. The Memory tab also lets you update the Control Domain (dom0) memory.
For more information, see the following articles:
NetworkingNetworking
The Networking tab displays a list of networks configured on the pool, server, or the VM you have selected. It provides a centralized location to access or modify your network settings.
For more information, see the following articles:
NICsNICs
View detailed information about the physical is automatically routed over the second NIC, ensuring server management connectivity. See Configuring NICs.
Note:
Use vSwitch as your network stack to bond four NICs. You can only bond two NICs when using Linux bridge.
PerformancePerformance
View performance data for your VMs and managed servers on the Performance tab. Full performance data is only available for VMs with Citrix VM Tools installed.
The tab provides real-time monitoring of performance statistics across resource pools, VM, or SR. For more information, see Configuring Performance Alerts.
SearchSearch
Select the top-level XenCenter item, pool, or server in the Resources pane and then click the Search tab to perform complex searches of your managed resources. You can construct queries based on object types, folders, and attributes such as name, description, tags, high availability status or restart priority, and power state.
For more information, see the following articles:
SnapshotsSnapshots
Create, delete and export VM snapshots, revert a VM to a selected snapshot, and use existing snapshots to create VMs and templates on the Snapshots tab.
See VM Snapshots.
StorageStorage
View the storage configuration of the selected virtual machine, server, resource pool, or storage repository on the Storage tab. The settings shown on this tab depend on the type of resource currently selected in the Resources pane.
UsersUsers
Configure role-based access to Citrix Hypervisor users and groups through AD user account provisioning and Role Based Access Control (RBAC) on the Users tab. In this tab you can do the following tasks:
- Join a pool or server to an Active Directory (AD) domain
- Add an AD user or group to a pool
- Assign roles to users and groups.
For more information, see Managing Users.
WLBWLB
Access key Workload Balancing features, including configuration, optimization recommendations, and status on the WLB tab.
Note:
WLB is available for Citrix Hypervisor Premium Edition customers, or those customers who have access to Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement. For more information about licensing, see About Citrix Hypervisor Licensing. | https://docs.citrix.com/en-us/xencenter/current-release/tabs.html | 2020-01-18T00:58:27 | CC-MAIN-2020-05 | 1579250591431.4 | [array(['/en-us/xencenter/media/001_LifeCycle_h32bit_24.png',
'Lifecycle icon. Three stacked circles: blue, green, red.'],
dtype=object) ] | docs.citrix.com |
Add new Payment Method¶
Warning: content with restricted access
All the information contained under the present documentation page is only relevant for Account Owners or Administrators, since only they have sufficient rights to view the content exposed herein and make the appropriate changes. We remind the reader that a user is always the Owner and full administrator of his own personal Account.
In order to add a new payment method, the user should first click on the "Create" button at the top-right corner of the "Payment Methods" tab under the "Billing" page, and should then fill in the form appearing underneath it which has the following aspect:
In the form shown above, the user should enter the appropriate Credit Card details on the panel to the left, and also insert the relevant information about the Billing Address associated with the same Credit Card on the right-hand panel.
Note: Credit Card data confidentiality
Any Credit Card information is never stored on our platform. All payments are in fact executed directly through the "stripe.com" service.
Finalize Addition of new Payment Method¶
To finalize the addition of the new Credit Card as a future payment option, the user should click on the bottom
Add Card button, or alternatively he/she should click a second time on the "Create" button (which will at this stage appear as a "minus" sign instead of the original "plus" sign) to negate all changes made to the Credit Card information being currently inserted.
Once a new Credit Card payment method has been added, it will be shown in the overall list under the "Payment Methods" tab for future reference, and it will be delegated as the default method of payment for all future transactions.
Animation¶
In the example animation below, we demonstrate how to add a new Credit Card payment method by filling the resulting forms with some arbitrary information. The card finally appears as a new entry in the list of Cards at the top of the page:
Set Default or Remove Payment Method¶
If multiple payment methods are stored under the same account, the default choice that will be used when crediting the account in all future transactions can be selected from the first column of the list under the "Payment Methods" tab page.
Any Credit Card payment method can furthermore be removed from the account's list by clicking on
Remove in the correct entry row.
The location of both the "Set Default" and "Remove" features in the list of payment methods is highlighted in the below image:
| https://docs.exabyte.io/accounts/accounting/payment-methods/ | 2020-01-17T23:48:43 | CC-MAIN-2020-05 | 1579250591431.4 | [array(['../../../images/accounts/add-new-payment.png',
'Add new Payment Method Add new Payment Method'], dtype=object)
array(['../../../images/accounts/remove-default-payment.png',
'Remove Default Payment Remove Default Payment'], dtype=object)] | docs.exabyte.io |
.
- Moving repositories: Moving all repositories managed by GitLab to another file system or another server.
- Sidekiq MemoryKiller: Configure Sidekiq MemoryKiller to restart Sidekiq.
- Extra Sidekiq operations: Configure an extra set of Sidekiq processes to ensure certain queues always have dedicated workers, no matter the amount of jobs that need to be processed.
- Unicorn: Understand Unicorn and unicorn-worker-killer.
- Speed up SSH operations by Authorizing SSH users via a fast, indexed lookup to the GitLab database, and/or by doing away with user SSH keys stored on GitLab entirely in favor of SSH certificates.
- Filesystem Performance Benchmarking: Filesystem performance can have a big impact on GitLab performance, especially for actions that read or write Git repositories. This information will help benchmark filesystem performance against known good and bad real-world systems.
- ChatOps Scripts: The GitLab.com Infrastructure team uses this repository to house common ChatOps scripts they use to troubleshoot and maintain the production instance of GitLab.com. These scripts are likely useful to administrators of GitLab instances of all sizes. | https://docs.gitlab.com/ee/administration/operations/ | 2020-01-18T00:56:54 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.gitlab.com |
All content with label amazon+archetype+batch+cacheloader+concurrency+datagrid+hot_rod+infinispan+jboss_cache+listener+maven+partitioning+release+scala+test.
Related Labels:
expiration, publish, coherence, interceptor, server, replication, recovery, transactionmanager, dist, query, deadlock, lock_striping, jbossas, nexus, guide, schema, cache, s3, grid,
jcache, api, xsd, documentation, write_behind, ec2, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, gridfs, out_of_memory, import, index, events, configuration, hash_function, buddy_replication, loader, xa, write_through, cloud, jsr352, mvcc, notification, tutorial, xml, read_committed, jbosscache3x, distribution, cachestore, data_grid, resteasy, integration, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, searchable, cache_server, installation, client, jberet, migration, non-blocking, jpa, filesystem, tx, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, webdav, hotrod, repeatable_read, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, locking, rest
more »
( - amazon, - archetype, - batch, - cacheloader, - concurrency, - datagrid, - hot_rod, - infinispan, - jboss_cache, - listener, - maven, - partitioning, - release, - scala, - test )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/amazon+archetype+batch+cacheloader+concurrency+datagrid+hot_rod+infinispan+jboss_cache+listener+maven+partitioning+release+scala+test | 2020-01-18T00:56:29 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.jboss.org |
Compares two specified string objects, ignoring or honoring their case, and using culture-specific information to influence the comparison, and returns an integer that indicates their relative position in the sort order.
- culture: An object that supplies culture-specific comparison information.
Compare the path name to "file" using an ordinal comparison. The correct code to do this is as follows:
code reference: System.String.Compare#15 | http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.String.Compare(System.String%2CSystem.String%2CSystem.Boolean%2CSystem.Globalization.CultureInfo) | 2020-01-18T00:39:17 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.go-mono.com |
BMC Impact Model Designer is installed with BMC ProactiveNet CMDB Extensions.
The table below lists the requirements to install and use BMC ProactiveNet CMDB Extensions and BMC Impact Model Designer.
Hardware requirements for the CMDB Extensions are identical to those of the AR System, as they are installed on the same AR server.
You may require additional memory to install BMC Atrium Core Console. For more information, see the BMC Action Request System Compatibility and Support Matrix available on the BMC Support site.
BMC Atrium Core Console is not supported with the JBoss server for BMC Remedy Mid Tier on IBM AIX.
For information about installing BMC Impact Model Designer, see Installing BMC ProactiveNet CMDB Extensions.
4 Comments
Roland Pocek
will bppm 96 support atrium cmdb v9? if so, is there a timeframe?
Pete Flores
Sanjay Prahlad
Roland Pocek
since cmdb 9.1 is stated in the supported CMDB Versions with Fixpack 2, is this valid for all patches and subversions of 9.1? | https://docs.bmc.com/docs/display/public/proactivenet96/BMC+ProactiveNet+CMDB+Extensions+and+BMC+Impact+Model+Designer+requirements?focusedCommentId=743235421 | 2020-01-18T01:56:08 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.bmc.com |
The recommended way to install Go on Mac is with brew
brew install go
If you prefer a regular package installer instead, those are available here. If you run into trouble, go over the official Go installation instructions.
Verify Go is installed correctly by running in terminal
go version
Any version above 1.11 should suffice.
Go creates a workspace on your machine where source files should be placed. This is a bit different from other programming languages which are less opinionated about the location of your source files.
Unless configured explicitly otherwise, your Go workspace is found at
~/go/src
The common convention is to place files in a directory structure that maps directly to GitHub. If your GitHub username is johnsnow and your repo name is mycontract, you should place your files at ~/go/src/github.com/johnsnow/mycontract.
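For example, using the placeholder username and repository above, the workspace folder can be created from a terminal:

mkdir -p ~/go/src/github.com/johnsnow/mycontract
cd ~/go/src/github.com/johnsnow/mycontract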
For more information about workspaces, consult the official Go documentation.
Working with an IDE that has good Go support is highly recommended for code completion, syntax highlighting and debugging.
Atom is an excellent free editor which has Go support through the go-plus plugin. You can install both by running in terminal
brew cask install atom
apm install go-plus
Another free alternative is VSCode, which supports Go out of the box. A commercial alternative is GoLand by JetBrains. For the complete list of editors consult the official Go documentation.
One of the main benefits of the Go programming language is its simplicity. It should not require more than a few days to gain a firm grasp of the syntax.
The official documentation contains a fantastic Tour of Go - an interactive tutorial that teaches the basics of the language in an hour or so. Another way to dive quickly into the language is using this cheat sheet which contains all syntax in one page.
If you don't feel like learning the language first that's also fine. Most of the contract examples are simple enough to understand without any prior knowledge. | https://docs.orbs.network/contract-sdk/getting-started/untitled | 2020-01-18T01:24:59 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.orbs.network |
Step By Step Publish To Azure
Introduction
Before reading this document, it's suggested to read Getting Started to run the application and explore the user interface. This will help you to have a better understanding of concepts defined here.
Create The Azure Website
It is possible to publish ASP.NET Zero's Angular client app and server side Web.Host API app together or separately. In this document, we will publish both apps separately.
So, go to your Azure Portal and create two websites, one for Web.Host project and other one for Angular application.
Creating an Azure Website for Host
We will be using "Web App + SQL" for Web.Host project but if you already have an SQL Database, you can just create Web App and use the connection string of your Azure SQL Database.
So, select "Web App + SQL" and click create:
And configure it according to your needs. A sample setting is shown below:
Creating an Azure Website for Angular
Select "Web App" and click create. Since we already created the database for Web.Host application, we don't need it here.
And configure it according to your needs. A sample setting is shown below:
Publish Host Application to The Azure
The details will be explained in the next lines. Here are the quick steps to publish the Host Application to the Azure.
- Run the migrations on the Azure
- Configure the .Web.Host/appsettings.production.json
- Publish the application to Azure
Run Migrations on The Azure
One of the best ways to run migrations against the Azure database is to run the update-database command in Visual Studio.

In order to do that, your public client IP address must have access to the Azure SQL server. Of course, this operation can also be done via the Azure Portal. Check here to learn how to configure the firewall for client access via the Azure Portal.
Apply Migrations
Open appsettings.json in .Web.Host project and change connection settings according to the Azure Database:
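The exact connection string is shown only in the screenshot, so treat the following as a hedged sketch; the server, database, user, and password values are placeholders to replace with your own Azure SQL details:

{
  "ConnectionStrings": {
    "Default": "Server=tcp:your-server.database.windows.net,1433;Database=YourDb;User ID=your-user;Password=your-password;Encrypt=True;"
  }
}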
Open the Package Manager Console in Visual Studio, set .EntityFrameworkCore as the Default Project, and run the update-database command as shown below:
Configure the appsettings.production.json
Azure is using appsettings.production.json, so this file should be configured like following:
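The original shows the file contents in a screenshot. As a rough, unverified sketch based on the usual ASP.NET Zero layout, the production settings point the server and client root addresses (and CORS origins) at the two Azure sites created earlier; the client site name below is an assumption:

{
  "App": {
    "ServerRootAddress": "https://azure-publish-demo-server.azurewebsites.net/",
    "ClientRootAddress": "https://azure-publish-demo-client.azurewebsites.net/",
    "CorsOrigins": "https://azure-publish-demo-client.azurewebsites.net/"
  }
}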
Publish
Right click the Web.Host project and select "Publish". Select "Microsoft Azure App Service" and check "Select Existing". Click "Create Profile" button.
Following screen will be shown:
Select "azure-publish-demo-server" and click "OK", then click "Publish" button. Host application is live now:
Publish Angular to The Azure
The details will be explained in the next lines. Here are the quick steps to publish the AngularUI to the Azure
- Run the yarn command to restore packages
- Run ng build --prod
- Copy the web.config file that is placed in angular folder to dist folder
- Configure the angular/dist/assets/appconfig.json
- Upload required files to the Azure
Prepare The Publish Folder
Run the yarn command to restore packages, then run ng build --prod to create the publish folder named dist.
Copy the web.config
Copy the web.config file that is placed in angular folder to angular/dist folder.
Copy the appconfig.json
Configure the angular/dist/assets/appconfig.production.json like following:
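Again, the exact contents are only in the screenshot; a hedged sketch with placeholder site names (verify the key names against your own template) would be roughly:

{
  "remoteServiceBaseUrl": "https://azure-publish-demo-server.azurewebsites.net",
  "appBaseUrl": "https://azure-publish-demo-client.azurewebsites.net"
}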
Upload Files to Azure
Files must be uploaded to Azure via FTP. Transfer the files from the dist folder to the www folder in Azure. The folder structure should look like this:

The Angular application is live now. Browse the published site and confirm that it works.
(Screenshots referenced above show the Azure website creation and configuration steps, the firewall/client IP setting, the connection string and update-database run, appsettings.production.json, the publish profile dialogs, the Swagger UI, the Angular build and appconfig.json, the FTP upload in FileZilla, and the running Angular UI.)

| https://docs.aspnetzero.com/en/aspnet-core-angular/latest/Deployment-Angular-Publish-Azure | 2020-01-17T23:50:13 | CC-MAIN-2020-05 | 1579250591431.4 | docs.aspnetzero.com |
Latches¶
Latches have an internal Boolean state that can be modified using Boolean inputs. In digital logic, there are many different types of latch that vary in the effects their inputs produce on the internal Boolean state variable. All latches provide the value of the internal state as their output, traditionally named Q.
In electronics terminology, there is a technical distinction between latches and flip-flops related to their interaction with clocks. However, this distinction is meaningless in CertSAFE, and this example project uses the terms interchangeably.
In the latches included in this example library, the starting value of Q before the first time the latch is executed is always false. However, it is easy to add an IC (initial condition) pin to the components if it is desired that the initial state be user-specifiable.
SR Latch (Set Priority), SR Latch (Reset Priority), E Latch, and JK Latch¶
These four components have two Boolean inputs, named S and R for set and reset. (The inputs to the JK Latch are instead named J and K respectively, to match the convention from traditional circuit diagrams.) Every frame, one of four actions is taken, depending on the values of the S and R inputs:
- If S is true and R is false, the internal state Q is set to true.
- If S is false and R is true, Q is set to false.
- If both S and R are false, Q retains its value from the previous frame.
- If both S and R are true, the behavior differs between the four components:
- The SR Latch (Set Priority) sets Q to true.
- The SR Latch (Reset Priority) sets Q to false.
- The E Latch retains the value of Q from the previous frame.
- The JK Latch toggles the value of Q relative to the previous frame.
All four latches are implemented internally by using Boolean logic to compute the new Q value for the current frame as a function of the Q value from the previous frame (obtained with a One Frame Delay primitive) and the current values of the S and R inputs.
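CertSAFE models are built graphically rather than in code, but the per-frame update rule described above (and the T Flip-Flop described below) can be sketched in ordinary code for clarity. This is an illustration only, not something generated by the tool:

enum LatchKind { SetPriority, ResetPriority, ELatch, JKLatch }

static class Latches
{
    // One frame of the latch update; prevQ plays the role of the One Frame Delay.
    public static bool NextQ(bool prevQ, bool s, bool r, LatchKind kind)
    {
        if (s && !r) return true;    // set
        if (!s && r) return false;   // reset
        if (!s && !r) return prevQ;  // hold
        switch (kind)                // both S and R true
        {
            case LatchKind.SetPriority:   return true;
            case LatchKind.ResetPriority: return false;
            case LatchKind.ELatch:        return prevQ;   // hold
            case LatchKind.JKLatch:       return !prevQ;  // toggle
            default:                      return prevQ;
        }
    }

    // T flip-flop: XOR of T with the previous Q toggles the state when T is true.
    public static bool NextQToggle(bool prevQ, bool t)
    {
        return prevQ ^ t;
    }
}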
T Flip-Flop¶
The T Flip-Flop has a single Boolean input labeled T. Every frame where the T input is true, the internal state toggles. The example implementation of a T Flip-Flop uses an XOR gate combining the T input with the previous value of Q. One way of thinking of an XOR gate is as a controlled NOT gate that flips the value of Q if T is true, but leaves Q at its previous value if T is false.



| https://docs.certsafe.com/example-components/latches.html | 2020-01-18T01:45:19 | CC-MAIN-2020-05 | 1579250591431.4 | docs.certsafe.com |
Access Control Lists
An access control list (ACL) is a set of conditions that you can apply to a network appliance to filter IP traffic and secure your appliance from unauthorized access.
You can configure an ACL on your Citrix ADC SDX Management Service GUI to limit and control access to the appliance.
Note
ACLs on SDX appliances are supported from release 12.0 57.19 onwards.
This topic includes the following sections:
- Usage Guidelines
- How to Configure ACLs
- Additional Actions for ACL Rules
- Troubleshooting
Usage Guidelines
Keep the following points in mind while creating ACLs on your appliance:
- When you upgrade the SDX appliance to release 12.0 57.19, the ACL feature is disabled by default.
- SDX administrators can control only inbound packets through ACL on the SDX appliance.
- If you use Citrix Application Delivery Management to manage your SDX appliance, you must create appropriate ACL rules to allow communication between MAS and SDX Management Service.
- For any other configurations on the SDX appliance such as provisioning or deleting VPXs, adding/deleting external servers, SNMP management, and so on, do not require any changes in the existing ACL configuration. Communication with these entities are taken care of by the Management Service.
How to Configure an ACL
Configuring an ACL involves the following steps:
- Enable the ACL feature
- Create an ACL rule
- Enable the ACL rule
Note
You can create ACL rules without enabling the ACL feature. However, if the feature is not enabled, you cannot enable an ACL rule after you’ve created it.
To enable the ACL feature
1. To enable the ACL feature, log on to the SDX Management Service GUI and navigate to Configuration > System > ACL.
2. By using the toggle button, turn on the ACL feature.
To create an ACL rule
1. On the ACL page, click Create Rule.
2. The Create Rule window opens. Add the details listed in the following table.
3. Click OK to create the rule.
Figure: An example of an ACL rule
After the rule is created, it is in disabled state. To make the rule effective, you must enable the rule.
Note
To enable a rule, the ACL feature should be enabled. If the feature is disabled, and you attempt to enable an ACL rule, a message “ACL is not running” appears.
To enable an ACL rule
1. Hover your mouse over the rule that you want to enable and click the circle with three dots.
2. From the menu, select Enable.
3. Alternatively, select the radio button for that rule and click the Enable tab.
4. At the prompt, click Yes to confirm.
Additional Actions for ACL Rules
You can apply the following actions to ACL rules:
1. Disable an ACL rule
2. Edit an ACL rule
3. Delete an ACL rule
4. Renumber the priority of ACL rules
To disable an ACL rule
1. Hover the mouse over the rule that you want to disable and select the circle with three dots.
2. Click Disable from the list.
3. Alternatively, select the radio button for that rule and click the Disable tab.
4. Click Yes to confirm.
Note
When you disable a rule, the rule no longer applies to incoming traffic; however, the rule configuration remains under ACL settings.
To edit an ACL rule
1. Hover the mouse over the rule that you want to edit and select the circle with three dots.
2. Click Edit Rule from the list. The Modify Rule window opens.
3. Alternatively, select the radio button for that rule and click the Edit Rule tab. The Modify Rule window opens
4. Make the edits and click OK.
Note
You can edit a rule in both enabled and disabled state. If you edit a rule that is already enabled, the edits get applied immediately. For a rule in disabled state, the edits get applied when you enable the rule.
To delete an ACL rule
1. Ensure that the rule is in disabled state.
2. Hover the mouse over the rule that you want to delete and select the circle with three dots. Click Delete Rule from the list.
3. Alternatively, select the radio button for that rule and click the Delete Rule tab.
4. Click Yes to confirm.
Note
You cannot delete a rule in enabled state.
To renumber priorities of ACL rules
1. Hover the mouse over the rule that you want to renumber the priorities for and select the circle with three dots. Click Renumber Priority(s) from the list.
2. Alternatively, select the radio button for that rule and click the Select Action tab.
3. Select Renumber Priority(s).
4. The SDX Management Service automatically assigns new priority numbers, which are multiples of 10, to all the existing rules.
5. Edit the rules to assign priority numbers according to your requirement. See the “To edit an ACL rule” section for more information about how to edit a rule.
Figure. An example of existing priority numbers
Figure. An example of priority numbers in multiples of 10, after priorities are renumbered
Troubleshooting
If ACL rules are improperly set up, all user accounts can be denied access. If you inadvertently lose all network access to the SDX Management Service because of improper ACL setup, follow these steps to gain access.
1. Log on to the XenServer management IP address by using SSH and your “root” account.
2. Log on to the console of the Management Service VM by using nsroot privileges.
3. Run the command "pfctl -d".
4. Log on to the Management Service through the GUI and reconfigure the ACL accordingly.





| https://docs.citrix.com/en-us/sdx/13/configuring-management-service/access-control-lists.html | 2020-01-18T00:22:37 | CC-MAIN-2020-05 | 1579250591431.4 | docs.citrix.com |
What does it cost to get a gram of mass to stationary Mars Orbit?
The question is:
What does it cost to get a gram of mass to stationary Mars Orbit?
Now, what are the shipping charges to Mars? Recently India put a Mars mission into transfer orbit for around $73,000,000. The launch costs were low due to the efficiencies India brings to every project. The ISRO Mars Orbiter, named Mangalyaan, set the standard for the minimum cost of injection into Mars orbit.
The Mangalyaan science package masses 15 kilograms, or 15,000 grams. Dividing the launch cost by the payload gives $73,000,000 / 15,000 g, which is about $4,867, so the cost per gram to stationary orbit around Mars for the Mangalyaan is roughly $4,900/gram. Yes, you read that correctly: $4,900/gram. Think about this: a Skittle candy weighs 1 gram on average, so getting a single Skittle there would cost about $4,900 US.
Getting your Skittle to the surface of Mars would be even more expensive: that would include the cost of atmospheric deceleration devices like parachutes or cushioning systems, plus the share of the total load that the Skittle takes up. The cost equation might look like:
(Mass of Skittle + Σ(mass of all other objects going to the surface of Mars)) / (Number of items)
The cost of getting from Mars orbit to the surface of Mars will likely be high. Unlike our Mars orbital computational and storage cloud, getting those sensors down to Mars is going to be much more expensive, but the floor is about $4,900 US/gram.
What does this mean?
Things will need to be produced on Mars using Mars resources, which would save money over shipping products from Earth. What would the early production systems that grind and shape the basalt rock on the planet Mars look like? What would such a robot look like? Send me any of your diagrams of these robots.
So if you have robots building construction or sensor technologies, they will need control from machine intelligence on orbit, with input from Earth. Much of the processing could be done on the robot, but any extra hardware used for processing also increases the costs, money that could instead go to improved frameworks, power systems and so forth.
Conclusion
The thought here is this: what does the Mars environment look like after 20 years, or 10 Hohmann transfer orbits? What would happen if parts of the systems could be built using on-Mars products? Sensors and wheels all have parts that could be constructed on Mars using remotely controlled robots that have autonomous features and also use the Azure Stack on orbit for control.
Predict Categorical Fields
The Predict Categorical Fields assistant displays a type of learning known as classification. A classification algorithm learns the tendency for data to belong to one category or another based on related data. The classification table below shows the actual state of the field versus predicted state of the field. The yellow bar highlights an incorrect prediction.
Algorithms
The Predict Categorical Fields assistant uses the following classification algorithms:
Fit a model to predict a categorical field
Prerequisites
- For information about preprocessing, see Preprocessing in the Splunk Machine Learning ML-SPL API Guide.
- If you are not sure which algorithm to choose, start with the default algorithm, Logistic Regression, or see Algorithms.
Steps
- Run a search.
- (Optional) Add preprocessing steps.
- Select the algorithm to use to predict field values.
- Select the categorical field you want to predict.
- Select a combination of fields you want to use to predict the categorical field.
- Specify how much of your data to use for training (fitting the data model) versus testing (validating the model afterwards).
- Fill out any additional fields required by the algorithm you selected.
- Enter a name in the Save the model as field. The model is saved when you click outside the field.
- Click Fit Model.
This list of fields is populated by the search you just ran.
This list contains all of the fields from your search except for the field you selected to predict.
The data is divided randomly into two groups. The default split is 50/50.
To get information about a field, hover over it to see a tooltip.
You must specify a name for the model in order to fit a model on a schedule or schedule an alert. You can find your model in the saved history.
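The assistant drives these steps through the UI, but behind the scenes it builds an ML-SPL search. A hand-written equivalent (the explanatory fields, the target field, and the model name below are placeholders) looks roughly like:

... | fit LogisticRegression vip_status from bytes_in bytes_out duration into vip_status_model
... | apply vip_status_model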
Interpret and validate
After you fit the model, review the prediction results and visualizations to see how well the model predicted the categorical field. In this analysis, metrics are related to misclassifying the field, and are based on false positives and negatives, and true positives and negatives.
Refine the model
After you validate the model, you can refine the model by adjusting which fields you use to predict the categorical field and fit the model again:
- Remove fields that might generate a distraction.
- Try adding more fields. In the Load Existing Settings tab, which displays a history of models you have fitted, sort by the statistics to see which combination of fields yielded the best results.
Deploy the model
After you validate and refine the model, deploy it.
- Click the icon to the right of Fit Model to schedule model training.
- (Optional) Put the training on a schedule, such as every week.
| https://docs.splunk.com/Documentation/MLApp/3.1.0/User/PredictCategoricalFields | 2020-01-18T00:46:32 | CC-MAIN-2020-05 | 1579250591431.4 | docs.splunk.com |
Note: The connector can be used in both outbound and regular mode simultaneously. Even if you enable outbound mode, you can still configure Kerberos authentication for internal users using authentication methods and policies.
Procedure
- In the VMware Identity Manager console, select the Identity & Access Management tab, then. | https://docs.vmware.com/en/VMware-Identity-Manager/services/com.vmware.vidm-cloud-deployment/GUID-C97A4D37-8F1F-4B24-9A97-1A25A0033999.html | 2020-01-18T01:10:43 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.vmware.com |
Mainly just keeps header.php cleaner.
Outputs a string of favicons that should cover most use-cases.
Includes icons for iOS & Android, as well as MS Tiles and a handful of generic icons.
Icons can be generated here.
Make sure to adjust the output string to match the icons that you generate.
Also adjust ./app/site.webmanifest to match your new icons.
/**
 * Add to header.php
 *
 * @param $title string The title for MS Application Tiles
 * @param $color string The background color for MS Application Tiles
 */
<?= (new WPDD\Favicon\Display())->favicons('WP DryDock', '#fff'); ?>
#include <resized_publisher.h>
Definition at line 4 of file resized_publisher.h.
Get a string identifier for the transport provided by this plugin.
Implements image_transport::PublisherPlugin.
Definition at line 7 of file resized_publisher.h (the class derives from image_transport::SimplePublisherPlugin< image_transport_tutorial::ResizedImage >).
Definition at line 5 of file resized_publisher.cpp. | http://docs.ros.org/hydro/api/image_transport/html/classResizedPublisher.html | 2020-01-18T00:33:24 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.ros.org |
cp_mgmt_network_facts – Get network objects facts on Check Point over Web Services API¶
New in version 2.9.
Synopsis¶
- Get network objects facts on Check Point devices.
- All operations are performed over Web Services API.
- This module handles both operations, getting a specific object and getting several objects. For getting a specific object, use the parameter 'name'.
Examples¶
- name: show-network
  cp_mgmt_network_facts:
    name: New Network 1

- name: show-networks
  cp_mgmt_network_facts:
    details_level: standard
    limit: 50
    offset: 0
Status¶
- This module is not guaranteed to have a backwards compatible interface. [preview]
- This module is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/latest/modules/cp_mgmt_network_facts_module.html | 2020-01-18T01:06:59 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.ansible.com |
fortios_endpoint_control_client – Configure endpoint control client lists in Fortinet’s FortiOS and FortiGate¶
New in version 2.8.
Synopsis¶
- This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the user to set and modify the endpoint_control feature and endpoint control client lists.

Examples¶

fortios_endpoint_control_client:
  host: "{{ host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  vdom: "{{ vdom }}"
  https: "False"
  state: "present"
  endpoint_control_client:
    ad_groups: "<your_own_value>"
    ftcl_uid: "<your_own_value>"
    id: "5"
    info: "<your_own_value>"
    src_ip: "<your_own_value>"
    src_mac: "<your_own_value>"
Return Values¶
Common return values are documented here, the following are the fields unique to this module:
Status¶
- This module is not guaranteed to have a backwards compatible interface. [preview]
- This module is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/latest/modules/fortios_endpoint_control_client_module.html | 2020-01-18T00:26:28 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.ansible.com |
Upload Files¶
Files originating from the user's local hard drive can be uploaded to Dropbox through the corresponding button, accessible from either the actions toolbar or the actions drop-down of Files Explorer.

Note: uploading of files is only supported in Dropbox.
The upload files action is not available for Jobs Viewer, since in this case the user is not supposed to change the files contained there.
Animation¶
Here, we demonstrate how to upload a "POSCAR" input file, containing the crystal structure data for performing a simulation using the VASP engine. | https://docs.exabyte.io/data-in-objectstorage/actions/upload/ | 2020-01-18T01:11:21 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.exabyte.io |
C# worker SDK
The C# worker SDK is one way to develop SpatialOS games with the worker SDK. It is for writing workers using C#.

| https://docs.improbable.io/reference/13.6/csharpsdk/introduction | 2020-01-18T00:42:06 | CC-MAIN-2020-05 | 1579250591431.4 | docs.improbable.io |
Reports the zero-based index position of the last occurrence of a specified string within this instance. The search starts at a specified character position and proceeds backward toward the beginning of the string for a specified number of character positions.
The search begins at the startIndex character position and proceeds backward until either value is found or count character positions have been examined. If value is an empty string, the method returns startIndex, the character position at which the search begins. In the example referenced by the original page, the String.LastIndexOf(string, int, int) method is used to find the position of a soft hyphen (U+00AD) within the two characters that precede the final "m".

| http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.String.LastIndexOf(System.String%2CSystem.Int32%2CSystem.Int32) | 2020-01-18T00:33:24 | CC-MAIN-2020-05 | 1579250591431.4 | docs.go-mono.com |
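The original example is not reproduced here; a small stand-in for the (value, startIndex, count) overload, with made-up data, is:

using System;

class LastIndexOfExample
{
    static void Main()
    {
        string s = "one two one two";
        // Search backward for "one", starting at index 10 and examining 11 positions (indices 0 through 10).
        int position = s.LastIndexOf("one", 10, 11);
        Console.WriteLine(position);  // 8, the start of the second "one"
    }
}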
Secured Admin Service - Could Not Establish Trust Relationship for the SSL/TLS Secure Channel with Authority Localhost
Symptoms
After Securing the Admin Service, changing the <AdminServiceUri> value in the Coveo.SearchProvider.config file to use https, and adding the username and password of the secured service, you get the error below in the Diagnostic Page or the Indexing Manager:
Could not establish trust relationship for the SSL/TLS secure channel with authority 'localhost'.
Cause
When creating the certificate, the signature will use the machine name, and will not accept localhost as a valid authority.
Environment
- All Coveo for Sitecore 3 minor versions
- Single server installation
- Admin Service
Resolution
To solve this issue:
- Open the Coveo.SearchProvider.config file and locate the <AdminServiceUri> node.
- Change localhost to the name of the machine used to create the certificate's .pfx file on step 6 of the Securing the Admin Service documentation (see the sketch below).
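For example, if the certificate was created on a machine named MYSERVER, the node changes roughly as follows; the URI path is left as a placeholder because it depends on your installation:

<!-- Before -->
<AdminServiceUri>https://localhost/...</AdminServiceUri>

<!-- After -->
<AdminServiceUri>https://MYSERVER/...</AdminServiceUri>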
The collectd input
The collectd input allows InfluxDB to accept data transmitted in collectd native format. This data is transmitted over UDP.
A note on UDP/IP buffer sizes
If you’re running Linux or FreeBSD, please adjust your operating system UDP buffer size limit, see here for more details.
Configuration
Each collectd input allows the binding address, target database, and target retention policy to be set. If the database does not exist, it will be created automatically when the input is initialized. If the retention policy is not configured, then the default retention policy for the database is used. However if the retention policy is set, the retention policy must be explicitly created. The input will not automatically create it.
Each collectd input also performs internal batching of the points it receives, as batched writes to the database are more efficient.
Multi-value plugins can be handled two ways. Setting parse-multivalue-plugin to “split” will parse and store the multi-value plugin data (e.g., df free:5000,used:1000) into separate measurements (e.g., (df_free, value=5000) (df_used, value=1000)), while “join” will parse and store the multi-value plugin as a single multi-value measurement (e.g., (df, free=5000,used=1000)). “split” is the default behavior for backward compatibility with previous versions of influxdb.
The path to the collectd types database file may also be set.
Large UDP packets
Please note that UDP packets larger than the standard size of 1452 are dropped at the time of ingestion. Be sure to set MaxPacketSize to 1452 in the collectd configuration.
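On the collectd side this is set in the network plugin block of collectd.conf. An illustrative snippet (the host name and port are placeholders for your InfluxDB endpoint) might look like:

<Plugin network>
  Server "influxdb.example.com" "25826"
  MaxPacketSize 1452
</Plugin>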
Config Example
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  typesdb = "/usr/share/collectd/types.db"
  security-level = "none" # "none", "sign", or "encrypt"
  auth-file = "/etc/collectd/auth_file"
  parse-multivalue-plugin = "split" # "split" or "join"
Content from README on GitHub. | https://docs.influxdata.com/influxdb/v1.7/supported_protocols/collectd/ | 2020-01-17T23:50:49 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.influxdata.com |
Reports the zero-based index position of the last occurrence in this instance of one or more characters specified in a Unicode array. The search starts at a specified character position and proceeds backward toward the beginning of the string for a specified number of character positions.
- anyOf: A Unicode character array containing one or more characters to seek.

| http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.String.LastIndexOfAny(System.Char%5B%5D%2CSystem.Int32%2CSystem.Int32) | 2020-01-18T00:58:19 | CC-MAIN-2020-05 | 1579250591431.4 | docs.go-mono.com |
Troubleshooting Dat
We've provided some troubleshooting tips based on issues users have seen. Please open an issue or ask us in our chat room if you need help troubleshooting and it is not covered here.
Check Your Version
Knowing the version is really helpful if you run into any bugs, and will help us troubleshoot your issue.
In the Command Line:
dat -v
You should see the Dat semantic version printed, e.g.
13.1.2.
Networking Issues
All Dat transfers happen directly between computers. Dat has various methods for connecting computers but because networking capabilities vary widely we may have issues connecting. Whenever you run a Dat there are several steps to share or download files with peers:
- Discovering other sources
- Connecting to sources
- Sending & Receiving Data
With successful use, Dat will show network counts after connection. If you never see a connection, your network may be restricting discovery or connection. Please try using the dat doctor (see below) between the two computers not connecting. This will help troubleshoot the networks.
Dat Doctor
We've included a tool to identify network issues with Dat, the Dat doctor. The Dat doctor will run two tests:
- Attempt to connect to a public server running Dat.
- Attempt a direct connection between two computers. You will need to run the command on both the computers you are trying to share data between.
In Dat Desktop:
Our desktop Dat doctor is still in progress, currently you can only test connections to our public server (#1).
- View > Toggle Developer Tools
- Help > Doctor
You should see the doctor information printed in the console.
In the Command Line:
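The command itself was trimmed from this copy of the page; in recent releases of the CLI the doctor is started as follows (the exact invocation may differ by version):

dat doctor
# For the peer-to-peer test, run the command the doctor prints on the second computer:
dat doctor <id>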
Known Networking Issues
- Dat may have issues connecting if you are using iptables.
Installation Troubleshooting
To use the Dat command line tool you will need to have node and npm installed. Make sure those are installed correctly before installing Dat. Dat only supports Node versions 4 and above.
node -v
Global Install
Command Line Debugging
If you are having trouble with a specific command, run with the debug environment variable set to dat (and optionally also dat-node).
This will help us debug any issues:
DEBUG=dat,dat-node dat clone dat://<readKey> dir | https://docs.dat.foundation/docs/troubleshooting | 2020-01-18T01:29:11 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.dat.foundation |
TFS MSSCCI Provider Update
- Visual Studio .NET 2003
- Visual C++ 6 SP6
- Visual Basic 6 SP6
- Visual FoxPro 9 SP1
- Microsoft Access 2003 SP2
- SQL Server Management Studio 2005
Many more IDEs than this support MSSCCI for integrated source control operations. Brian's asking users of those IDEs to check it out and let the team know whether your favourite IDE works.
Beyond features, we would like to have this MSSCCI provider work in as many hosts as possible. We don't have the resources to go test each of the many dozens of hosts that support MSSCCI. We're hoping that you, the community, will help us with this effort. We'd really appreciate it if you would try out the TFS MSSCCI provider in as many IDEs as you can and let us know what you find. Please report success or failure and any bugs that you find. Also let us know what version of the IDE you have tested it against. We're going to add a web page to our Developer Center that lists all of the IDEs that have been confirmed to work. Help the community out and report your experiences! You can do this by sending mail to mailto:[email protected] This email address should be active within 24 hours from the time of this posting.
I certainly think it's worth taking Brian up on his challenge. | https://docs.microsoft.com/en-us/archive/blogs/acoat/tfs-msscci-provider-update | 2020-01-18T02:19:00 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.microsoft.com |
Voice control
Users can control their app by voice.
Prerequisites
Voice control is available on Android devices. On the device, make sure to enable Voice Control and Text-to-Speech features. Voice control is available in English.
Enabling voice control
To enable voice control in Resco Mobile CRM app:
- Go to Setup.
- Enable Appearance > Voice control.
Using voice control
First, address the app by saying "Resco" (reskəʊ; reskoʊ). While the app is listening, you have a few seconds to say a command.
Your current element within the app is marked by a grey dotted line. Use the following commands for navigation within the app:
- go back
- go next
- read more
- read focus
- find
- open
- close form
Say "help" and "read commands" if you are looking for ways how to control the app. Say "stop" to interrupt reading the commands.
Say "start" to run a command. Say "input" to fill in the currently focused field.
Voice inspection
You can perform an inspection (fill in a questionnaire) using voice, along with a smartwatch.
Customizing in Woodford
You can use the Woodford tool to customize the available command words.
- Edit an app project.
- Select Voice Control from the Project menu.
- Modify the commands, then click Save.
See also
- Feature introduction and demo Webinar | https://docs.resco.net/wiki/Voice_control | 2020-01-17T23:58:23 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.resco.net |