Columns: content (string), url (string), timestamp (timestamp[ms]), dump (string), segment (string), image_urls (string), netloc (string)
When a Publisher and a Subscriber are connected and synchronization occurs, the Merge Agent detects whether there are any conflicts. If conflicts are detected, the Merge Agent uses a conflict resolver to determine which data will be accepted and propagated to other sites. Note: Although a Subscriber synchronizes with the Publisher, conflicts typically occur between updates made at different Subscribers rather than between updates made at a Subscriber and at the Publisher. See Also: Article Options for Merge Replication
https://docs.microsoft.com/en-us/sql/relational-databases/replication/merge/advanced-merge-replication-resolve-merge-replication-conflicts
2017-06-22T18:40:16
CC-MAIN-2017-26
1498128319688.9
[]
docs.microsoft.com
Using Amazon S3 Dual-Stack Endpoints Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. This section describes how to use dual-stack endpoints. Topics Amazon S3 Dual-Stack Endpoints When you make a request to a dual-stack endpoint, the bucket URL resolves to an IPv6 or an IPv4 address. For more information about accessing a bucket over IPv6, see Making Requests to Amazon S3 over IPv6. When using the REST API, you directly access an Amazon S3 endpoint by using the endpoint name (URI). You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a path-style endpoint name. Amazon S3 supports only regional dual-stack endpoint names, which means that you must specify the region as part of the name. Use the following naming conventions for the dual-stack virtual hosted-style and path-style endpoint names: Virtual hosted-style dual-stack endpoint: bucketname.s3.dualstack.aws-region.amazonaws.com Path-style dual-stack endpoint: s3.dualstack.aws-region.amazonaws.com/bucketname For more information about endpoint name style, see Accessing a Bucket. For a list of Amazon S3 endpoints, see Regions and Endpoints in the AWS General Reference. Important You can use transfer acceleration with dual-stack endpoints. For more information, see Getting Started with Amazon S3 Transfer Acceleration. The following sections describe how to use dual-stack endpoints from the AWS CLI and the AWS SDKs. Using Dual-Stack Endpoints from the AWS CLI This section provides examples of AWS CLI commands used to make requests to a dual-stack endpoint. For instructions on setting up the AWS CLI, see Setting Up the AWS CLI. You set the configuration value use_dualstack_endpoint to true in a profile in your AWS Config file to direct all Amazon S3 requests made by the s3 and s3api AWS CLI commands to the dual-stack endpoint for the specified region. You specify the region in the config file or in a command using the --region option. When using dual-stack endpoints with the AWS CLI, both path and virtual addressing styles are supported. The addressing style, set in the config file, controls whether the bucket name is in the hostname or part of the URL. By default, the CLI attempts to use the virtual style where possible, but falls back to the path style if necessary. For more information, see AWS CLI Amazon S3 Configuration. You can also make configuration changes by using a command, as shown in the following example, which sets use_dualstack_endpoint to true and addressing_style to virtual in the default profile.
$ aws configure set default.s3.use_dualstack_endpoint true
$ aws configure set default.s3.addressing_style virtual
If you want to use a dual-stack endpoint for specified AWS CLI commands only (not all commands), you can use either of the following methods: You can use the dual-stack endpoint per command by setting the --endpoint-url parameter to https://s3.dualstack.aws-region.amazonaws.com for any s3 or s3api command.
$ aws s3api list-objects --bucket bucketname --endpoint-url https://s3.dualstack.aws-region.amazonaws.com
You can set up separate profiles in your AWS Config file. For example, create one profile that sets use_dualstack_endpoint to true and a profile that does not set use_dualstack_endpoint. When you run a command, specify which profile you want to use, depending upon whether or not you want to use the dual-stack endpoint. Note When using the AWS CLI you currently cannot use transfer acceleration with dual-stack endpoints. However, support for the AWS CLI is coming soon. For more information, see Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI). Using Dual-Stack Endpoints from the AWS SDKs This section provides examples of how to access a dual-stack endpoint by using the AWS SDKs. AWS SDK for Java Dual-Stack Endpoint Example The following example shows how to enable dual-stack endpoints when creating an Amazon S3 client using the AWS SDK for Java. For instructions on creating and testing a working Java sample, see Testing the Amazon S3 Java Code Examples.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class DualStackEndpoints {
    public static void main(String[] args) {
        String clientRegion = "*** Client region ***";
        String bucketName = "*** Bucket name ***";

        try {
            // Create an Amazon S3 client with dual-stack endpoints enabled.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .withDualstackEnabled(true)
                    .build();

            s3Client.listObjects(bucketName);
        }
        catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process it,
            // so it returned an error response.
            e.printStackTrace();
        }
        catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
If you are using the AWS SDK for Java on Windows, you might have to set the following Java virtual machine (JVM) property: java.net.preferIPv6Addresses=true AWS .NET SDK Dual-Stack Endpoint Example When using the AWS SDK for .NET, you use the AmazonS3Config class to enable the use of a dual-stack endpoint, as shown in the following example.
var config = new AmazonS3Config
{
    UseDualstackEndpoint = true,
    RegionEndpoint = RegionEndpoint.USWest2
};
using (var s3Client = new AmazonS3Client(config))
{
    var request = new ListObjectsRequest { BucketName = "myBucket" };
    var response = await s3Client.ListObjectsAsync(request);
}
For a full .NET sample for listing objects, see Listing Keys Using the AWS SDK for .NET. For information about how to create and test a working .NET sample, see Running the Amazon S3 .NET Code Examples. Using Dual-Stack Endpoints from the REST API For information about making requests to dual-stack endpoints by using the REST API, see Making Requests to Dual-Stack Endpoints by Using the REST API.
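The page above shows the Java and .NET SDKs; as an additional illustration (not part of the original page), here is a minimal sketch of the same idea with the AWS SDK for Python (boto3), pointing the client at the regional dual-stack endpoint explicitly. The region and bucket name are placeholders, and credentials are assumed to be configured in the usual way.

import boto3

# Point the S3 client at the regional dual-stack endpoint explicitly.
# The region and bucket name below are placeholders.
s3 = boto3.client(
    "s3",
    region_name="us-west-2",
    endpoint_url="https://s3.dualstack.us-west-2.amazonaws.com",
)

response = s3.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])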
https://docs.aws.amazon.com/AmazonS3/latest/dev/dual-stack-endpoints.html
2018-06-18T04:00:58
CC-MAIN-2018-26
1529267860041.64
[]
docs.aws.amazon.com
The Blackfin ISA does not include any generally usable atomic instructions. While it does include a TESTSET instruction, there are some severe restrictions (and anomalies) that prevent it from being used in general code. As such, a software-only solution has been put together so that code running under Linux in user space can behave as if it has access to a variety of useful atomic functions. To be clear, this does not cover kernel code, or code running under other operating systems. The types of atomic functions provided are exchange (xchg), compare and swap (cas), simple math (add/subtract), and binary operations (and/or/xor). The atomic functions live in a specific memory address range (as documented in the ABI). When user space code needs to do an atomic operation, it calls the functions in this range. This code region is referred to as the fixed code region because the code lives at a fixed location. If the user code happens to be interrupted by the kernel in the middle of one of these tiny functions, the kernel will automatically complete the function for the user code (register moves, data loads/stores, etc.). This way, from the perspective of user space, the functions are always completed atomically. If the user code isn't interrupted by any asynchronous event, then the function completes atomically on its own. The atomic code is hand crafted to be as few instructions as possible so that the kernel rarely needs to do these modifications. However, when it does, it knows exactly which registers need to be changed, or which registers contain the memory addresses that need to be modified. This check is placed at the end of the common interrupt return path: we see if the user space PC is in the middle of this atomic range. If it isn't, we continue on quickly (as this is the most common code path). If it is in the range, we perform further checks as necessary for each function. The base address is chosen because only code that is atomic should be executing below it (see the NULL pointer section in the Linux ABI). Since these addresses are fixed in the Linux ABI, the toolchain uses them automatically when creating Linux applications. Relevant source files: arch/blackfin/mach-common/entry.S, arch/blackfin/kernel/fixed_code.S.
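As a conceptual illustration of the check described above (this is not the kernel code; it is Python pseudocode, the address constants are placeholders rather than the real ABI values, and the fixup helper is a stub), the interrupt-return test might be modeled like this:

# Conceptual sketch of the interrupt-return check described in the text.
# The addresses are placeholders, not the real Blackfin ABI values.
ATOMIC_CODE_START = 0x400
ATOMIC_CODE_END = 0x480

def complete_fixed_code_sequence(user_pc, regs):
    """Placeholder for the per-function fixup (register moves, loads/stores)."""
    pass

def on_interrupt_return(user_pc, regs):
    # Fast path: most interrupts do not land inside the fixed-code region.
    if not (ATOMIC_CODE_START <= user_pc < ATOMIC_CODE_END):
        return
    # Slow path: finish the interrupted atomic stub on behalf of user space,
    # so the operation still appears atomic from the user's perspective.
    complete_fixed_code_sequence(user_pc, regs)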
https://docs.blackfin.uclinux.org/doku.php?id=linux-kernel:fixed-code
2018-06-18T04:03:28
CC-MAIN-2018-26
1529267860041.64
[]
docs.blackfin.uclinux.org
Monitoring and collecting data from Sentry Sentry's real-time error tracking gives you insight into production deployments and information to reproduce and fix crashes. Installation This integration can be enabled on the Third party page under Datasources in the sidebar of your CoScale application. Please follow the instructions inside the application; if you require any assistance, please don't hesitate to contact us.
http://docs.coscale.com/agent/plugins/sentry/
2018-06-18T03:57:39
CC-MAIN-2018-26
1529267860041.64
[]
docs.coscale.com
Using Amazon Redshift Spectrum to Query External Data. Note: Amazon Redshift Spectrum is available only in certain AWS Regions, including: Asia Pacific (Mumbai) Region (ap-south-1), Asia Pacific (Seoul) Region (ap-northeast-2), Asia Pacific (Singapore) Region (ap-southeast-1), Asia Pacific (Sydney) Region (ap-southeast-2), Asia Pacific (Tokyo) Region (ap-northeast-1), Canada (Central) Region (ca-central-1), EU (Frankfurt) Region (eu-central-1), EU (Ireland) Region (eu-west-1), EU (London) Region (eu-west-2), South America (São Paulo) Region (sa-east-1). Amazon Redshift Spectrum Considerations Note the following considerations when you use Amazon Redshift Spectrum: The Amazon Redshift cluster and the Amazon S3 bucket must be in the same AWS Region. Your cluster can't have Enhanced VPC Routing enabled. External tables are read-only. You can't perform insert, update, or delete operations on external tables. You can't control user permissions on an external table. Instead, you can grant and revoke permissions on the external schema. Redshift Spectrum doesn't support nested data types, such as STRUCT, ARRAY, and MAP. When using the Athena or AWS Glue data catalog, the following limits apply: A maximum of 10,000 databases per account. A maximum of 100,000 tables per database. A maximum of 1,000,000 partitions per table. A maximum of 10,000,000 partitions per account. You can request a limit increase by contacting AWS Support. These limits don't apply to an Apache Hive metastore. For more information, see Creating External Schemas for Amazon Redshift Spectrum.
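Since external schemas and tables are set up with SQL, a short sketch may help. The following is illustrative only: the connection details, IAM role ARN, database, and table names are placeholders, it uses the psycopg2 driver rather than anything from the page itself, and the exact CREATE EXTERNAL SCHEMA syntax should be checked against Creating External Schemas for Amazon Redshift Spectrum.

import psycopg2

# Placeholder connection details for an Amazon Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123xyz.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",
)
conn.autocommit = True
cur = conn.cursor()

# Register an external schema backed by the AWS Glue / Athena data catalog.
cur.execute("""
    CREATE EXTERNAL SCHEMA spectrum_schema
    FROM DATA CATALOG
    DATABASE 'spectrum_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS
""")

# External tables are read-only: SELECT works, but INSERT/UPDATE/DELETE do not.
cur.execute("SELECT count(*) FROM spectrum_schema.sales")
print(cur.fetchone()[0])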
https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
2018-06-18T04:05:24
CC-MAIN-2018-26
1529267860041.64
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Interface for accessing CloudHSMV2 For more information about AWS CloudHSM, see AWS CloudHSM and the AWS CloudHSM User Guide. Namespace: Amazon.CloudHSMV2 Assembly: AWSSDK.CloudHSMV2.dll Version: 3.x.y.z The IAmazonCloudHSMV
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudHSMV2/TICloudHSMV2.html
2018-08-14T11:14:10
CC-MAIN-2018-34
1534221209021.21
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Returns the descriptions of existing applications. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DescribeApplicationsAsync. Namespace: Amazon.ElasticBeanstalk Assembly: AWSSDK.ElasticBeanstalk.dll Version: 3.x.y.z Container for the necessary parameters to execute the DescribeApplications service method. The following operation retrieves information about applications in the current region: var response = client.DescribeApplications(new DescribeApplicationsRequest { }); List<ApplicationDescription> applications = response.Applications; .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EB/MIEBDescribeApplicationsDescribeApplicationsRequest.html
2018-08-14T10:54:16
CC-MAIN-2018-34
1534221209021.21
[]
docs.aws.amazon.com
This chapter describes the process for deploying Enterprise JavaBeans in WebSphere Application Server 3.5. For the steps to create an application server and a container, refer to "Creating an Application Server." For local and native clients, the jar file is located in the directory specified in the Directory field of the EJB section of the Component Interface window. For remote clients, the jar file is located where you ran the makeejb utility on the server's application library. As part of its EJB generation, Panther creates a jar file containing the EJB's Java files, deployment descriptor, and environment settings needed by the bean. When you deploy and install the EJB in WebSphere Administrative Console, it creates a jar file in its Deployed EJBs directory (typically $WAS_HOME/deployedEJBs). This changes the name of the jar file, prepending Deployed to the jar file name.
http://docs.prolifics.com/panther/html/stu_html/ejbdpy35.htm
2018-08-14T11:22:58
CC-MAIN-2018-34
1534221209021.21
[]
docs.prolifics.com
Authority systems are those that have a single source. These are systems in a network that really need to exist in only one place. For example, if there were multiple asset management systems, or per-course asset management systems, it would become unmanageable. Also, if there were multiple authorities of section data, it would become difficult to maintain the integrity of sections across the rest of the network.
http://elmsln.readthedocs.io/en/latest/systems/authority/
2018-08-14T11:13:28
CC-MAIN-2018-34
1534221209021.21
[]
elmsln.readthedocs.io
@Generated(value="OracleSDKGenerator", comments="API Version: 20180115") public final class SteeringPolicySummary extends Object A DNS steering policy. Warning: Oracle recommends that you avoid using any confidential information when you supply string values using the API. Note: Objects should always be created or deserialized using the SteeringPolicySummary.Builder. This model distinguishes fields that are null because they are unset from fields that are explicitly set to null. This is done in the setter methods of the SteeringPolicySummary.Builder. @ConstructorProperties({"compartmentId","displayName","ttl","healthCheckMonitorId","template","freeformTags","definedTags","self","id","timeCreated","lifecycleState"}) @Deprecated public SteeringPolicySummary(String compartmentId, String displayName, Integer ttl, String healthCheckMonitorId, SteeringPolicySummary.Template template, Map<String,String> freeformTags, Map<String,Map<String,Object>> definedTags, String self, String id, Date timeCreated, SteeringPolicySummary.LifecycleState lifecycleState) public static SteeringPolicySummary.Builder builder() Create a new builder.
https://docs.cloud.oracle.com/iaas/tools/java/latest/com/oracle/bmc/dns/model/SteeringPolicySummary.html
2019-09-15T13:37:49
CC-MAIN-2019-39
1568514571360.41
[]
docs.cloud.oracle.com
Contents Samples for Detecting Language, Sentiment, and Actionability The installation of eServices Manager includes samples that detect: - Language - Sentiment - Actionability Screening Rules Genesys supplies sample screening rules that analyze interactions for sentiment and actionability. Models Language Detection Classification Server 8.5.3 Classification Server 8.5.2 and earlier Actionability To use the actionability sample, import the file Actionability.kme, which is located in the <eServicesManagerHome>\ActionabilityModel directory. This provides: - A model Actionability for analyzing actionability. - The training object Actionability that created that model. - A category tree Actionability that contains the categories to assign to interactions as a result of the analysis. Sentiment To deploy the sentiment sample, use the following procedure. - In Configuration Manager or Genesys Administrator, create a language called English_Sentiment. - With eServices Manager set to that language, import the file EnglishSentiment.kme, which is located in the <eServicesManagerHome>\SentimentModel directory. This provides: - A model SentimentSampleModel for analyzing sentiment. - The training object Sentiment that created that model. - A category tree SentimentDetection that contains the categories to assign to interactions as a result of the analysis.
https://docs.genesys.com/Documentation/ESDA/8.5.3/ContAn/detectLSA
2019-09-15T12:20:22
CC-MAIN-2019-39
1568514571360.41
[]
docs.genesys.com
Litium Studio 4.7.3 service release includes the following fixes: It also includes these new features: Apply the Service release by running the upgrade program or by performing a manual upgrade. All documentation for Litium Studio 4.7 also applies to its Service releases. Download Litium Studio 4.7.3 >>> Starting from Litium Studio 4.7, Hotfixes are renamed Service releases. If you have any questions, feel free to contact Litium Support.
https://docs.litium.com/news/new-service-release-litium-studio-4-7-3-(important)
2019-09-15T12:34:23
CC-MAIN-2019-39
1568514571360.41
[]
docs.litium.com
Sharing code between Sensors and Python Actions¶ You can create a python package called lib with a __init__.py file and place it in ${pack_dir}/ to share code between Sensors and Python actions. For example, the path /opt/stackstorm/packs/my_pack/lib/ can contain library code you want to share between Sensors and Actions in pack my_pack. Note: if you want to share common code across packs, the recommended approach is to pin the dependency in the packs' requirements.txt and push the dependency to PyPI so it is installable via pip. The lib feature is restricted to the scope of individual packs only. The lib folder can contain any number of python files. These files can in turn contain library code, common utility functions and the like. You can then use import statements in sensors and actions like from common_lib import base_function to import base_function from a file named common_lib.py inside the /opt/stackstorm/packs/examples/lib/ folder. You can call code from dependencies in the pack's requirements.txt from inside the files in the lib folder, just as you are able to call them inside sensors and actions. Due to how python module loading works, files inside the lib folder cannot have the same names as standard python module names. Actions may fail with strange errors if you name your files the same as standard python module names. Note that this pack lib folder is different from shell actions' lib folder, which is inside /opt/stackstorm/packs/some_pack/actions/lib/. The pack lib folder is never copied to a remote machine and is strictly for sharing code between sensors and actions. This feature is turned off by default to avoid potential issues that might arise due to existing pack structures and lib imports. You may need to refactor your pack if enabling this feature breaks your packs. To enable this feature, simply set the following config option in /etc/st2/st2.conf: [packs] enable_common_libs = True You have to restart st2 via st2ctl restart for the config change to be picked up.
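As a concrete illustration of this layout (a minimal sketch; the pack name, file name, and function mirror the hypothetical example used above):

# /opt/stackstorm/packs/examples/lib/common_lib.py
def base_function(value):
    """Shared helper available to both sensors and actions in this pack."""
    return "processed: %s" % value


# /opt/stackstorm/packs/examples/actions/my_action.py
from st2common.runners.base_action import Action
from common_lib import base_function  # resolved from the pack's lib/ folder


class MyAction(Action):
    def run(self, value):
        return base_function(value)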
https://bwc-docs.brocade.com/reference/sharing_code_sensors_actions.html
2019-09-15T12:24:02
CC-MAIN-2019-39
1568514571360.41
[]
bwc-docs.brocade.com
newrelic.agent.suppress_apdex_metric(flag=True) Description This call suppresses the generation of the Apdex metric for a web transaction. You might wish to do this when a long-running transaction is frequently affecting the average Apdex score for your application. To un-suppress a previously suppressed transaction, set the flag to False. You can also suppress the Apdex metric in the WSGI environ dictionary. To do so, set the newrelic.suppress_apdex_metric key for the specific request in the WSGI environ dictionary passed by the WSGI server into the WSGI application being monitored. Parameters Return value(s) None. Example(s) Turn off Apdex for specific transaction If you have frequent, long-running functions in your app that are causing your average Apdex to go down, you can call suppress_apdex_metric where the transaction is being generated: import newrelic.agent newrelic.agent.suppress_apdex_metric()
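For the WSGI-environ approach mentioned above, a minimal sketch might look like the following (the middleware wrapper and the path-based condition are hypothetical; only the newrelic.suppress_apdex_metric key comes from the documentation):

def suppress_apdex_for_slow_paths(app):
    """Wrap a WSGI app and flag selected requests so their Apdex metric is suppressed."""
    def middleware(environ, start_response):
        # Hypothetical condition: long-running report endpoints skew Apdex.
        if environ.get("PATH_INFO", "").startswith("/reports/"):
            environ["newrelic.suppress_apdex_metric"] = True
        return app(environ, start_response)
    return middleware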
https://docs.newrelic.com/docs/agents/python-agent/python-agent-api/suppress_apdex_metric
2019-09-15T12:33:12
CC-MAIN-2019-39
1568514571360.41
[]
docs.newrelic.com
Cisco Media Gateway This section describes how to integrate SIP Server with the Cisco Media Gateway Controller (MGC). It contains the following sections: Note: The instructions in this section assume that the Cisco Media Gateway is fully functional. This page was last modified on October 30, 2013, at 15:08.
https://docs.genesys.com/Documentation/SIPS/8.1.1/IntegrationReferenceManual/CiscoMG
2019-09-15T12:01:13
CC-MAIN-2019-39
1568514571360.41
[]
docs.genesys.com
CfgSwitchAccessCode Description CfgSwitchAccessCode contains a list of Access Codes that are used to place, route, or transfer calls from its Switch to other Switches in a multi-site installation. Depending on the structure of a numbering plan, you may or may not need access codes to reach DNs that belong to different Switches of a multi-site telephone network. You can modify (that is, create, change, or delete) the contents of the Access Codes for a particular Switch or for a set of Switches. Attributes - switchDBID — A unique identifier of the Switch to which this access code is assigned. Mandatory. If the value is set to 0, the accessCode value is used as the default access code to this switch when no other access code is specified on the source switch. - accessCode — A pointer to the access code. - targetType — Type of the target within the switch specified by switchDBID for which all the routing parameters below are specified. See CfgTargetType. - routeType — Type of routing for the target specified in targetType for this switch. See CfgRouteType. - dnSource — Source of information to specify parameter dn in function TRouteCall. See comments. - destinationSource — Source of information to specify parameter destination in function TRouteCall. See comments. - locationSource — Source of information to specify parameter location in function TRouteCall. - dnisSource — Source of information to specify parameter dnis in function TRouteCall. - reasonSource — Source of information to specify parameter reasons in function TRouteCall. - extensionSource — Source of information to specify parameter extensions in function TRouteCall. Uniqueness of a switch access code is defined by the combination of values of its first three properties, i.e., switchDBID, accessCode, and targetType. Thus, when a certain access code is to be deleted, it is necessary and sufficient to specify those three parameters in the corresponding item of the deletedSwitchAccessCodes list in CfgDeltaSwitch. Function TRouteCall is a function of the T-Library and is defined in the T-Library SDK C Developer's Guide. If targetType=CFGTargetISCC, the dnSource property is used for definition of ISCC protocol parameters and is presented in the GUI (Configuration Manager) with the caption "ISCC Protocol Parameters". If targetType=CFGTargetISCC, the destinationSource property is used for definition of ISCC call overflow parameters and is presented in the GUI (Configuration Manager) with the caption "ISCC Call Overflow Parameters".
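To make the uniqueness rule concrete, here is a small sketch (not Platform SDK code; the field values are made up) showing that an access code is identified by the (switchDBID, accessCode, targetType) triple:

from dataclasses import dataclass

@dataclass(frozen=True)
class SwitchAccessCodeKey:
    # The first three properties of CfgSwitchAccessCode define uniqueness.
    switch_dbid: int
    access_code: str
    target_type: str  # e.g. a CfgTargetType value name

# Two entries that differ only in routing parameters map to the same key,
# so specifying this triple is sufficient when deleting an access code.
key_a = SwitchAccessCodeKey(101, "9", "CFGTargetISCC")
key_b = SwitchAccessCodeKey(101, "9", "CFGTargetISCC")
assert key_a == key_b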
https://docs.genesys.com/Documentation/PSDK/8.5.x/ConfigLayerRef/CfgSwitchAccessCode
2019-09-15T12:04:56
CC-MAIN-2019-39
1568514571360.41
[]
docs.genesys.com
Constructor: messageActionChatDeleteUser Back to constructors index Message action chat delete user Attributes: Type: MessageAction Example: $messageActionChatDeleteUser = ['_' => 'messageActionChatDeleteUser', 'user_id' => int]; Or, if you're into Lua: messageActionChatDeleteUser={_='messageActionChatDeleteUser', user_id=int}
https://docs.madelineproto.xyz/API_docs/constructors/messageActionChatDeleteUser.html
2019-09-15T12:15:38
CC-MAIN-2019-39
1568514571360.41
[]
docs.madelineproto.xyz
With New Relic APM's Python agent, you can monitor applications that reside in the Google App Engine (GAE) flexible environment. - Follow standard procedures to install New Relic's Python agent, including your license key. - Set NEW_RELIC_CONFIG_FILE as an environment variable pointing to newrelic.ini. Once the agent and configuration file have been installed, New Relic's Python agent can automatically monitor applications that reside in the GAE flexible environment. Wait until the deployment completes, then view your GAE flex app data in the New Relic APM Overview page. Build a custom runtime using Docker See Google's documentation for building custom runtimes. This example describes how to add New Relic to your GAE flex app by building a custom runtime for Docker. For more information about deploying and configuring your Python app in the GAE flexible environment, see: - New Relic's GAE flex examples on Github for Python - Google App Engine's documentation for Python - Google App Engine's tutorials to deploy a Python app - 1. Set up the GAE project and install dependencies When building a custom runtime using Docker, set NEW_RELIC_CONFIG_FILE as an environment variable in the Dockerfile, pointing to your Python app's newrelic.ini. - Follow standard procedures to install New Relic's Python agent, including your license key. - Follow Google App Engine procedures for Python to create a Google Cloud Platform project, create an App Engine application, and complete other prerequisites for the Google Cloud SDK. The Google Cloud SDK also provides the gcloud command line tool to manage and deploy GAE apps. - 2. Configure your app.yaml The app.yaml configuration file is required for a GAE flexible environment app with a custom runtime. At a minimum, make sure it contains: env: flex runtime: custom - 3. Configure a Dockerfile The Dockerfile defines the Docker image to be built and is required for a GAE flexible environment app. The following Dockerfile example shows the Python agent installed for an application served with gunicorn. These procedures are similar to New Relic's Python quick start guide. The Dockerfile will contain customer-specific code, including the Python version, installation requirements, etc. # [START dockerfile] FROM gcr.io/google_appengine/python # Install the fortunes binary from the debian repositories. RUN apt-get update && apt-get install -y fortunes # Optional: Change the -p argument to use Python 2.7. RUN virtualenv /env -p python3.4 # Set virtualenv environment variables. This is equivalent to running # source /env/bin/activate. ENV VIRTUAL_ENV /env ENV PATH /env/bin:$PATH ADD requirements.txt /app/ RUN pip install -r requirements.txt ADD . /app/ CMD NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program gunicorn -b :$PORT main:app # [END dockerfile] - New Relic agent troubleshooting logs from GAE Use these resources to troubleshoot your GAE flex environment app: - To connect to the GAE instance and start a shell in the Docker container running your code, see Debugging an instance. To redirect New Relic Python agent logs to Stackdriver in the Cloud Platform Console, add the following statement to the newrelic.ini configuration: log_file = stderr - To view the logs, use the Cloud Platform Console's Log Viewer.
https://docs.newrelic.com/docs/agents/python-agent/hosting-services/install-new-relic-python-agent-gae-flexible-environment
2019-09-15T12:26:03
CC-MAIN-2019-39
1568514571360.41
[]
docs.newrelic.com
Displace Node¶ Displace Node. The Displace Node displaces the pixel position based on an input vector. This node can be used to model phenomena like hot air distortion, refraction through uneven glass, or surreal video effects. Inputs¶ - Image - Standard image input. - Vector - Input of the displacement map. If a color output is implicitly converted into the vector input, the first channel (red) value determines displacement along the X axis, and the second channel (green) the displacement along the Y axis. If the input is a grayscale image, where both channel values are equal, the input image will be displaced equally in both X and Y directions. - Scale X, Y - Separate scaling of the vector input in the X and Y directions. These act as multipliers, increasing or decreasing the strength of the displacement along their respective axes.
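A rough sketch of the underlying math (not Blender code; the array names, nearest-neighbor sampling, and the sign convention are simplifying assumptions):

import numpy as np

def displace(image, vector, scale_x, scale_y):
    """Sample each output pixel from the input position offset by the vector input."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Channel 0 (red) drives displacement along X, channel 1 (green) along Y.
    src_x = np.clip(xs + vector[..., 0] * scale_x, 0, w - 1).astype(int)
    src_y = np.clip(ys + vector[..., 1] * scale_y, 0, h - 1).astype(int)
    return image[src_y, src_x]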
https://docs.blender.org/manual/de/dev/compositing/types/distort/displace.html
2019-09-15T12:26:08
CC-MAIN-2019-39
1568514571360.41
[array(['../../../_images/compositing_node-types_CompositorNodeDisplace.png', '../../../_images/compositing_node-types_CompositorNodeDisplace.png'], dtype=object) ]
docs.blender.org
Emulated Ringing Microsoft Lync UCMA does not provide notification about the transition of an endpoint into the alerting state, so the straightforward UCMA approach does not allow notifying T-Client and providing corresponding attached data when a call is delivered to the agent endpoint. To address this issue, T-Server emulates EventRinging as soon as an invitation for a new conversation is sent to an agent. Emulation creates a race condition between the alerting phone and EventRinging on the desktop. To resolve the race condition, the desktop application implements the following logic: - If EventRinging arrives first at the desktop: - Stores the Conversation-ID value from the AttributeExtensions of EventRinging. - Waits for a new conversation with a Conversation-ID matching the Conversation-ID extension key in EventRinging. - Shows a toast (screen pop) with call attributes and attached user data when the call reaches the destination. - If a call arrives first at the desktop: - If this conversation should be handled (for example, as defined in "monitoring of direct calls"), the desktop stores the Conversation-ID in its memory. - Waits for EventRinging with a Conversation-ID in the AttributeExtensions matching the Conversation-ID of the new Lync conversation. - When it arrives, shows a toast (screen pop) with the attached data. This page was last modified on July 29, 2015, at 08:19.
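A simplified sketch of this correlation logic as a desktop application might implement it (purely illustrative; the object attributes and the show_toast helper are hypothetical and only mirror the description above):

def show_toast(conversation, user_data):
    """Placeholder: display the screen pop with call attributes and attached data."""
    print(conversation, user_data)

pending_ringing = {}        # Conversation-ID -> EventRinging (event arrived first)
pending_conversations = {}  # Conversation-ID -> Lync conversation (call arrived first)

def on_event_ringing(event):
    conv_id = event.extensions["Conversation-ID"]
    if conv_id in pending_conversations:
        show_toast(pending_conversations.pop(conv_id), event.user_data)
    else:
        pending_ringing[conv_id] = event  # wait for the matching conversation

def on_new_conversation(conversation):
    conv_id = conversation.id
    if conv_id in pending_ringing:
        show_toast(conversation, pending_ringing.pop(conv_id).user_data)
    else:
        pending_conversations[conv_id] = conversation  # wait for EventRinging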
https://docs.genesys.com/Documentation/Skype/8.5.0/Dep/EmulatedRinging
2019-09-15T12:35:30
CC-MAIN-2019-39
1568514571360.41
[]
docs.genesys.com
MultiScaleImage.OriginalPixelHeightProperty Field Microsoft Silverlight will reach end of support after October 2021. Learn more. Identifies the OriginalPixelHeight dependency property. Namespace: System.Windows.Controls Assembly: System.Windows (in System.Windows.dll) Syntax 'Declaration Public Shared ReadOnly OriginalPixelHeightProperty As DependencyProperty public static readonly DependencyProperty OriginalPixelHeightProperty Field Value Type: System.Windows.DependencyProperty The identifier for the OriginalPixelHeight dependency property. Version Information Silverlight Supported in: 5 Platforms For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers. See Also
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/hh358337%28v%3Dvs.95%29
2019-09-15T12:39:52
CC-MAIN-2019-39
1568514571360.41
[]
docs.microsoft.com
, ephemeral, and persistent volumes. - Pod-level health checks.
http://docs-staging.mesosphere.com/mesosphere/dcos/1.11/deploying-services/pods/
2019-09-15T13:52:28
CC-MAIN-2019-39
1568514571360.41
[]
docs-staging.mesosphere.com
Roadmap¶ EWC is still under active development. We welcome community feedback, and encourage contributions. Here are our plans for the next two releases. Note This is a roadmap. It represents our current product direction. All product releases will be on a when-and-if available basis. Actual feature development and timing of releases will be at the sole discretion of the development team. This roadmap does not create a commitment to deliver a specific feature. Contents are subject to change without notice. If there's something you really need, remember: this is Open Source. Write and contribute the feature. Pull Requests are open to anyone. 3.1¶ Ubuntu: GA Support Ubuntu 18.04, with Python 3.6 MongoDB: Support MongoDB 4.0 (required for Ubuntu 18.04). Ubuntu: Drop Ubuntu 14.04 support. ChatOps: Microsoft Teams GA. Core: Support latest pip and requests. 3.2¶ Orquesta: Workflow runtime graph. RHEL/CentOS: Support RHEL 8.x (assuming it has been released!) RHEL/CentOS: Drop support for RHEL/CentOS 6.x. WebUI: Datastore viewer/editor. ChatOps: RBAC. SAML: Support SAML authentication. Job Scheduling: Job scheduling for ad-hoc jobs. Monitor the master branch to see how we're progressing. Backlog¶ Here are some more things on our list that we haven't scheduled yet: Dry Run Workflows Simulate running Orquesta workflows without actually making changes. History and Audit service: History view with advanced search over years' worth of execution records, over multiple versions of continuously upgraded EWC. At-scale refinements: Ensure event handling reliability, and event storm resilience. Complete support for multi-node deployment of sensor containers and rules engines for resilience and throughput. DB/Filesystem Consistency: Provide better tooling for managing consistency between the database and the filesystem for rules, actions, sensors, etc. Configurable Sensors: Run multiple instances of the same sensor, with different configurations. Pack Dependency: Better automatic handling of pack dependencies. Pluggable Configuration: Support multiple configuration backends for better security. RBACv2: Filters: Tag and property based filters, more refined and convenient access control. Permissions: Permissions on key value objects, arbitrary triggers, support for a default role. Something else you'd like to see on the backlog? Submit an issue. Or want to see something implemented sooner? Submit a PR! Release History¶ Done in v3.0 Orquesta GA: GA release of "Orquesta" workflow engine. Workflow Designer v2: Complete overhaul of Workflow Designer for easier creation and editing of workflows via a Web UI. Includes Orquesta workflow editing and creation. ChatOps: Microsoft Teams Beta. Python3: All Exchange packs updated for Python3 CI/CD. Legacy Runners: Remove legacy CloudSlang and Winexe runners. Done in v2.10 Orquesta RC: Release Candidate of "Orquesta" workflow engine. Includes with-items, delay, scheduling, notifications, Unicode support. Begin Mistral deprecation. ChatOps: Update ChatOps components. HA: Simplify & streamline running EWC in HA mode. k8s: Reference configurations for running EWC Community and Enterprise in HA mode on k8s. Ubuntu 18.04: Beta support of Ubuntu 18.04, MongoDB 4.0, Python 3.6. Done in v2.9 Orquesta Second Beta: Second beta of new "Orquesta" workflow engine. 
WebUI: Real-time streaming output, and Inquiries support. Action Output Structure Definition: Enable optional definition of action payload, so that it can be inspected and used when passing data between actions in workflows. k8s: Beta reference configuration for running EWC Enterprise in HA mode on k8s. Windows Runners: Add pywinrm-based Windows runner. Done in v2.8 Orquesta Beta: Public beta of new "Orquesta" workflow engine (n.b. this was originally named "Orchestra"). WebUI: Update look & feel of Web UI, and add "Triggers" tab for troubleshooting rules. Python3 Actions: Support Python 3 actions on a per-pack basis. Metrics Framework: New framework for metrics collection for action results, time, etc. Done in v2.7 Action Versioning: Allow running specific action version - better management of rolling upgrades. Mistral Callbacks: Refactor Mistral to support callbacks instead of polling. UTF-8/Unicode: Allow UTF-8/Unicode characters in pack config files. Virtual Appliance: Vagrantbox/Virtual Appliance with ST2 already installed, for quicker testing. Done in v2.6 React Web UI: Rewrote st2web Web UI to use React framework. Streaming Output: Streaming output enabled by default. Pack Development: Shared lib directory for actions and sensors. st2client: Python 3 support for st2client. Done in v2.5 st2.ask: Support ability to request/provide permission to proceed with workflow. Streaming Output: Provide streaming output from long-running actions as it is received. Done in v2.4 Pack UI: Web interface for pack management. Pause and Resume: Pause and Resume Workflows and ActionChains. Done in v2.3 API Docs: Auto-generated REST API docs - see api.stackstorm.com. Monitoring Docs: Create EWC monitoring guidelines. Docker based installer: Complete the vision of OS independent, layered Docker-based installer, to increase reliability, modularity, and speed of deployment. Done in v2.2 Mistral Jinja support: Mistral workflows now support Jinja notation. Security improvements: Better default security posture for MongoDB, RabbitMQ, PostgreSQL. Done in v2.1 StackStorm Pack Exchange: Make integration and automation packs discoverable, continuously tested, and community rated. Solve the problem of packs spread all over GitHub. Ubuntu Xenial (16.04) support Done in v1.6 MongoDB: MongoDB 3.x support. Datastore: Access K/V datastore from the Mistral workflows. Done in v1.5 Pack configuration: Configuration separated from the pack code. Datastore: Key/value datastore secrets. Done in v1.4 Packaging: Deprecation of All-in-One Installer. Packaging: Native deb/rpm packages with bundled python dependencies. ChatOps: ChatOps API support for Slack/HipChat providers. Done in v1.3 Workflows: st2 re-run - resume failed workflows. Scale: Garbage collection service. Done in v1.2 Packs: Pack Testing support. ChatOps: Fully reworked ChatOps with Jinja templating. Policies: Timeout and retry policies. Done in v1.1 FLOW: Visual workflow representation and drag-and-drop workflow designer. RBAC: Role based access control for packs, actions, triggers and rules. Pluggable authentication backends including PAM, Keystone, Enterprise LDAP. All-in-one installer: production ready single-box reference deployment with graphical setup wizard. RHEL 6 and 7 support Trace-tags: ability to track a complete chain of triggers, rules, executions, related to a given triggering event. Native SSH: replace Fabric; Fabric based SSH still available and can be enabled via config. 
WebUI major face-lift Done in v0.11 ChatOps: two-way chat integration beyond imagination. More integration packs: Major integrations - Salt, Ansible, some significant others. Check the full list. Done in v0.9 Experimental windows support: windows runner, and windows commands. Web UI complete basics: rule create/edit/delete in UI. Done in v0.8 Web UI: refactor history view, create and edit rules and workflows, add graphical representations for workflow definitions and executions. Improved Mistral integration: simplified Mistral DSL for EWC actions, visibility of workflow executions, and reliable EWC-Mistral communication. Includes Mistral improvements, features, and fixes. Operational supportability: Better output formats, better visibility into ongoing actions, better logs, better debugging tools. Scale and reliability improvements: deployed and run at scale, shown some good numbers, and more work identified. Done in v0.6.0 YAML: complete move to YAML for defining rules, action and trigger metadata, configurations, etc. Plugin isolation and management: Improved management of sensors and action runners, and provide isolated environments. Reliability: improvements on sensor and action isolation and reliability. See Changelog for the full gory history of everything we've delivered so far. Questions? Problems? Suggestions? Engage! - Slack community channel: stackstorm-community.slack.com (Register here)
https://bwc-docs.brocade.com/roadmap.html
2019-09-15T12:24:09
CC-MAIN-2019-39
1568514571360.41
[]
bwc-docs.brocade.com
Sent to all GameObjects when the application pauses. OnApplicationPause. Note: MonoBehaviour.OnApplicationPause receives true or false. There is no way to call this message. Also the keyboard/mouse/etc. has no way to control MonoBehaviour.OnApplicationPause. Pause means the game is running normally or has been suspended.
#pragma strict
public class AppPaused extends MonoBehaviour {
    var isPaused: boolean = false;
    function OnGUI() {
        if (isPaused)
            GUI.Label(new Rect(100, 100, 50, 30), "Game paused");
    }
    function OnApplicationFocus(hasFocus: boolean) {
        isPaused = !hasFocus;
    }
    function OnApplicationPause(pauseStatus: boolean) {
        isPaused = pauseStatus;
    }
}
https://docs.unity3d.com/2017.4/Documentation/ScriptReference/MonoBehaviour.OnApplicationPause.html
2019-09-15T12:27:18
CC-MAIN-2019-39
1568514571360.41
[]
docs.unity3d.com
Returns the current selection filtered by type and mode. For a selected GameObject that has multiple Components of type, only the first one will be included in the results. If type is a subclass of Component or GameObject the full SelectionMode is supported. If type does not subclass from Component or GameObject (eg. Mesh or ScriptableObject) only SelectionMode.ExcludePrefab and SelectionMode.Editable are supported.
class ToggleActive extends ScriptableObject {
    @MenuItem ("Example/Toggle Active of Selected %i")
    static function DoToggle() {
        var activeGOs: Object[] = Selection.GetFiltered(
            GameObject, SelectionMode.Editable | SelectionMode.TopLevel);
        for (var obj in activeGOs) {
            var activeGO = obj as GameObject;
            activeGO.SetActive(!activeGO.activeSelf);
        }
    }
}
https://docs.unity3d.com/2018.1/Documentation/ScriptReference/Selection.GetFiltered.html
2019-09-15T12:20:57
CC-MAIN-2019-39
1568514571360.41
[]
docs.unity3d.com
This. Click My Subscriptions.
https://docs.wso2.com/display/AM1100/Enforcing+Throttling+to+an+API
2019-09-15T12:21:59
CC-MAIN-2019-39
1568514571360.41
[]
docs.wso2.com
routine produce Documentation for routine produce assembled from the following types:
class Supply (Supply) method produce
method produce(Supply:D: &with --> Supply)
Creates a "producing" supply with the same semantics as List.produce.
my $supply = Supply.from-list(1..5).produce({ $^a + $^b });
$supply.tap(-> $v { say $v }); # OUTPUT: «1␤3␤6␤10␤15␤»
class Any (Any) method produce
Defined as:
multi method produce(Any:U: &with --> Nil)
multi method produce(Any:D: &with)
multi sub produce (&with, +list)
This is similar to reduce, but returns a list with the accumulated values instead of a single result.
<10 5 3>.reduce( &[*] ).say ; # OUTPUT: «150␤»
<10 5 3>.produce( &[*] ).say; # OUTPUT: «(10 50 150)␤»
The last element of the produced list would be the output produced by the .reduce method. If it's called on a type object (a class), it will simply return Nil.
class List (List) routine produce
Defined as:
multi sub produce(&with, *@values)
multi method produce(List:D: &with)
Generates a list of all intermediate "combined" values along with the final result by iteratively applying a function which knows how to combine two values. If @values contains just a single element, a list containing that element is returned immediately. If it contains no elements, an exception is thrown, unless &with is an operator with a known identity value. If &with is the function object of an operator, its inherent identity value and associativity is respected - in other words, (VAL1, VAL2, VAL3).produce(&[OP]) is the same as VAL1 OP VAL2 OP VAL3 even for operators which aren't left-associative:
# Raise 2 to the 81st power, because 3 to the 4th power is 81
[2,3,4].produce(&[**]).say; # OUTPUT: «(4 81 2417851639229258349412352)␤»
say produce &[**], (2,3,4); # OUTPUT: «(4 81 2417851639229258349412352)␤»
say [\**] (2,3,4); # OUTPUT: «(4 81 2417851639229258349412352)␤»
# Subtract 4 from -1, because 2 minus 3 is -1
[2,3,4].produce(&[-]).say; # OUTPUT: «(2 -1 -5)␤»
say produce &[-], (2,3,4); # OUTPUT: «(2 -1 -5)␤»
say [\-] (2,3,4); # OUTPUT: «(2 -1 -5)␤»
A triangle metaoperator [\ ] provides a syntactic shortcut for producing with an infix operator:
# The following all do the same thing...
my @values = (1,2,3,4,5);
say produce { $^a + $^b }, @values;
say produce * + *, @values;
say produce &[+], @values; # operator does not need explicit identity
say [\+] @values; # most people write it this way
The visual picture of a triangle [\ is not accidental. To produce a triangular list of lists, you can use a "triangular comma":
[\,] 1..5;
# (
# (1)
# (1 2)
# (1 2 3)
# (1 2 3 4)
# (1 2 3 4 5)
# )
Since produce is an implicit loop, it responds to last and redo statements inside &with:
say (2,3,4,5).produce: { last if $^b == 5; $^a + $^b }; # OUTPUT: «(2 5 9)␤»
http://docs.perl6.org/routine/produce
2019-09-15T13:49:31
CC-MAIN-2019-39
1568514571360.41
[]
docs.perl6.org
Modifying Parameter Group Templates Changes that you make to a Parameter Group Template are not propagated to Parameter Groups that use that template. - In the header, go to Operations > Operational Parameters > Group Templates. - On the Parameter Group Template List panel, select the Parameter Group Template that you want to modify. - On the <Parameter Group Template name> panel that is displayed to the right of the Parameter Group Template List panel, modify the properties of the Parameter Group Template, as required. - When you are finished modifying the Parameter Group Template, click Save to save your changes, or click Cancel to cancel your changes and leave the Parameter Group Template unchanged. This page was last modified on July 10, 2013, at 08:54.
https://docs.genesys.com/Documentation/GA/8.1.3/user/ParameterGroupTemplatesModifying
2019-09-15T12:19:01
CC-MAIN-2019-39
1568514571360.41
[]
docs.genesys.com
Table of Contents - → (N) (Ctrl+N) Generate a new puzzle of the type currently selected. - → (Ctrl+O) Load a previously saved puzzle, with all of its dimensions, settings, current state of the cube and history of moves, using a file selection dialog box to locate the required file. - → (Shift+U) Undo all previous moves and start again. - → (Ctrl+S) Save the current puzzle, with all of its dimensions, settings, current state of the cube and history of moves, using a file selection dialog box to name a new file if the puzzle has not previously been saved and loaded. - → Save the current puzzle under a new file name, with all of its dimensions, settings, current state of the cube and history of moves, using a file selection dialog box. - → Choose a type of puzzle to play from a series of sub-menus graded by difficulty, based on cube dimensions and number of shuffling moves, or use sub-menu item Make your own... to create your own puzzle, using a dialog box. - → (Ctrl+Q) Quit Kubrick, automatically saving the current puzzle's dimensions, settings, state of the cube and history of moves. - → (Ctrl+Z) Undo a previous move (repeatedly if required). - → (Ctrl+Shift+Z) Redo a previously undone move (repeatedly if required). - → (Ctrl+D) Start/Stop demo of random puzzle solving on the start page of Kubrick. - → (S) Solve the cube. This shows all your moves being undone, then all the shuffling moves being undone and then the shuffling moves being re-done, leaving you set up to have another go at the puzzle. - → (Shift+U) Undo all previous moves and start again. - → (Shift+R) Redo all previously undone moves. Adjust the orientation of a rotated cube by the minimum amount needed to make the rotations a combination of 90 degree moves, thus setting the axes parallel to the XYZ axes. In addition, some whole-cube 90 degree moves are inserted in your list of moves to achieve the desired effect. This is to standardise the view's perspective so that the top, front and right sides are visible together and keyboard moves become properly meaningful. The inserted moves can be undone and redone, exactly as if you had made them directly yourself. For example, if you have used the right mouse-button to turn the cube upside-down, the top or Up (U) face is now what used to be the bottom or Down (D) face and what used to be the Y axis is pointing downwards. In this situation, → will redefine the faces and axes so that the new top face is known as Up (U) and the Y axis is again the one that points upwards. - → Show a view of the front of the cube. - → Show views of the front and back of the cube. Slice moves and rotations can be performed on either picture and the other will move simultaneously. - → Show a large view of the front of the cube and two smaller views of the front and back. Slice moves can be performed on any of the pictures and the others will move simultaneously, but only the large one can be rotated. - → (Ctrl+D) Run the Main Demo, in which a cube changes shape, shuffles and solves itself as it rotates at random. - → Show a sub-menu in which pretty patterns on the 3x3x3 cube can be selected and the moves to create them are demonstrated. There is also an Info item that tells you a little more about such patterns. - → Show a sub-menu in which sequences of moves used to solve the 3x3x3 cube can be selected and the sequences are demonstrated. There is also an Info item that tells you a little more about such solution moves. - → (W) Show animations of shuffling moves as they occur. 
This is an aid for beginners, but might be a form of cheating for experienced players. - → (O) Show animations of your own moves as they occur. This is an aid for beginners, because it slows down the animations. Experienced players can turn this option off and moves are then animated at high speed, taking about a tenth of a second to turn 90 degrees. - → Open a dialog where you can configure the toolbar actions for Kubrick. - → Open a game settings dialog. See Game Configuration section for more details. Additionally Kubrick has the common KDE and menu items, for more information read the sections about the Settings Menu and Help Menu of the KDE Fundamentals.
https://docs.kde.org/stable5/en/kdegames/kubrick/interface.html
2019-09-15T13:07:51
CC-MAIN-2019-39
1568514571360.41
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
Remarks. XAML Attribute Usage <object property="x,y,z"/> -or- <object property="x y z"/> XAML Values x The x-coordinate of this Point3D. y The y-coordinate of this Point3D. z The z-coordinate of this Point3D.
https://docs.microsoft.com/en-gb/dotnet/api/system.windows.media.media3d.point3d?view=netframework-4.7.2
2019-09-15T12:47:52
CC-MAIN-2019-39
1568514571360.41
[]
docs.microsoft.com
PoolMon Overview PoolMon displays the following data about memory allocations. The data is sorted by the allocations' pool tags. The number of allocation operations and free operations (and unfreed memory allocations). The change in the number of allocation operations and free operations between updates. The total size of memory in the paged and nonpaged pools. Using PoolMon, you can also: Sort and reconfigure the PoolMon display while it is running. Save configured data to a file. Generate a file of the tags used by drivers on the local system (32-bit Windows only).
https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/poolmon-overview
2019-09-15T13:12:58
CC-MAIN-2019-39
1568514571360.41
[]
docs.microsoft.com
So you want to create a site with Moss and check that it works, but you cannot update your DNS records at this moment... no problem! You can browse your website using a domain name of the form <whatever>.<server.ip.address>.getmoss.site. E.g. say the main domain name of your website is (or will be) site.com and your server's IP address is 10.0.0.1. The IP address-based domain name could be site.com.10.0.0.1.getmoss.site. The corresponding DNS query will resolve to IP address 10.0.0.1. You can either use such a domain name as the main domain or as a domain alias. Note that in the latter case, if you created a WordPress site with Moss using a different domain name, you must update your WordPress URI and Site URI accordingly (otherwise WordPress will redirect clients to the original domain name). Once you're done, you can browse your site by typing the IP address-based domain name in your browser. Later on you might rename your domain name or, in case you added an alias, you can update or delete it at will. In case you're curious about getmoss.site, you may take a look at our article on free wildcard DNS services. CAVEATS We don't recommend that you enable Let's Encrypt on one of these domains because certificate issuance is likely to fail, due to the limits Let's Encrypt enforces. More info in the article we link above.
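The naming rule is simple enough to express in a couple of lines (a sketch; the helper name is made up):

def getmoss_alias(domain, server_ip):
    """Build the IP-address-based domain name described above, e.g.
    getmoss_alias("site.com", "10.0.0.1") -> "site.com.10.0.0.1.getmoss.site"."""
    return "%s.%s.getmoss.site" % (domain, server_ip)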
https://docs.moss.sh/en/articles/2333466-browse-your-website-using-your-server-s-ip-address
2019-09-15T13:01:56
CC-MAIN-2019-39
1568514571360.41
[]
docs.moss.sh
method udp Documentation for method udp assembled from the following types: class IO::Socket::Async From IO::Socket::Async (IO::Socket::Async).
http://docs.perl6.org/routine/udp
2019-09-15T13:46:29
CC-MAIN-2019-39
1568514571360.41
[]
docs.perl6.org
How to integrate iContact TUTORIAL AIM: In this tutorial we will take a quick look at how to integrate iContact with OptimizeLeads. - To integrate with the iContact email system. Click on the 'Integrations' link on the main menu. Next click on the 'Connect' button next to the iContact section. - Once clicked, you will be prompted to enter your 'API Application ID', 'API Application Password' and 'Main iContact Username'. You might need to contact iContact support if you need help with creating your Application ID. You will need to check the checkbox to say that "I add a contact (one at a time)". - Once you've filled in your credentials, click on the 'Connect' button to confirm the integration. You are now ready to begin collecting leads using OptimizeLeads!
https://docs.optimizepress.com/article/2594-how-to-integrate-icontact
2021-10-16T08:05:27
CC-MAIN-2021-43
1634323584554.98
[array(['https://s3.amazonaws.com/op-leads/media/lJ6PzWbx5XRQ-201612011547/Screen Shot 2016-12-01 at 15.47.02.png', None], dtype=object) array(['https://s3.amazonaws.com/op-leads/media/QByYzRBz1Nbl-201612011548/Screen Shot 2016-12-01 at 15.48.38.png', None], dtype=object) ]
docs.optimizepress.com
Configuration What's configurable? Executing the -config command, you can change the following attributes: - Anti-Spam filter - Greeting Message - Goodbye Message - Invite Filter - IP Filter - Logs - Prefix - Auto-role - Set role For more info on each feature, refer to running the command in your own server. The Configuration Menu The way you configure Poni to your server's needs has changed a tad. The change itself makes it easier to configure certain attributes of the bot through reactions instead of typing -config and then the attribute you want to change and how you want to change it. This is depicted in the image below. Syntax The syntax for most of the attributes is as simple as reacting with the attribute's corresponding emoji and selecting whether you want it on or off by selecting the green check mark or the red X. However, for attributes such as Prefix, Set role, Greeting and Goodbye messages, you can reply with plaintext after selecting the attribute's emoji in the configuration menu. Anti-Spam Filter This module allows you to filter out spam in your server. Greeting Message This module allows you to customize the Greeting message when users join your server. This can always be disabled by typing -silence; doing so will also result in your Goodbye messages being disabled. Goodbye This module allows you to configure the Goodbye message when a user leaves your server. This will always be displayed when users leave unless you have the server silenced. Invite Filter This module allows you to filter out discord invites / advertisements that are sent by members. IP Filter This module allows you to filter out IP addresses in your server. This can help prevent malicious activity in your server. Logs This module allows you to configure whether deleted user messages are sent to your logs channel, post setup. Prefix This module allows you to configure the prefix for your server. Set role This module allows you to set role IDs for Poni to reference so you are not limited to Poni's premade roles. The role types you can configure are as follows: - Staff - Member - Trusted - Muted These roles are needed for Poni to complete actions like muting users or letting the agree module function.
https://docs.ponibot.com/config/
2021-10-16T08:24:39
CC-MAIN-2021-43
1634323584554.98
[array(['../configmenu.png', None], dtype=object)]
docs.ponibot.com
Extended quartile Computes the quartiles and the minimum and maximum of a data sample. Syntax extended_quartile(Statistics_data) Description extended_quartile(Statistics_data) Given a statistical data sample, returns a vector of 5 components: the first element is the minimum, the second, third and fourth elements correspond to the first, second and third quartiles, and the fifth element is the maximum. Related functions Quartile
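For intuition, the same five-number summary can be computed in Python with NumPy. This is only an illustrative sketch, not Wiris CalcMe syntax; the sample data is made up, and quartile conventions differ between tools, so exact Q1/Q3 values may vary.

```python
# Illustrative five-number summary: minimum, Q1, median, Q3, maximum.
import numpy as np

data = [7, 15, 36, 39, 40, 41]
summary = [float(x) for x in [np.min(data), *np.percentile(data, [25, 50, 75]), np.max(data)]]
print(summary)  # [7.0, 20.25, 37.5, 39.75, 41.0] with NumPy's default interpolation
```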
https://docs.wiris.com/en/calc/commands/statistics/extended_quartile
2021-10-16T08:27:05
CC-MAIN-2021-43
1634323584554.98
[]
docs.wiris.com
Disables/enables nodes. Supported types are shapes, lights, shaders, and operators.
Selection
An expression to select which nodes this operator will affect. The expression syntax is described in the selection expression documentation, with some examples. Note that if the operator is connected to a procedural, the selections are assumed to be relative to the procedural's namespace.
Mode
Whether the selected nodes are disabled or enabled (Disable or Enable).
Enable
Enables/disables the operator itself. Disabled operators are bypassed when the operator graph is evaluated.
Shapes Only
Ignores all shapes if set to false.
https://docs.arnoldrenderer.com/display/A5AFCUG/Disable
2021-10-16T09:12:15
CC-MAIN-2021-43
1634323584554.98
[]
docs.arnoldrenderer.com
Vouchers
Can't find the answer you're looking for? Ask your question in our community Discord.
What are crypto vouchers and how do they work? Swarm Markets crypto vouchers are similar to gift cards but are denominated in a cryptocurrency such as Wrapped Bitcoin (wBTC) or Ether (ETH). This means that the exchange rate compared to EUR, USD, GBP or other fiat currencies continuously fluctuates. Buying a crypto voucher is equivalent to buying and holding the underlying cryptocurrency.
What crypto assets can I purchase? Vouchers are currently available in Wrapped Bitcoin (WBTC) and Ether (ETH).
What is wrapped bitcoin? Wrapped bitcoin is a version of bitcoin that is compatible with DeFi platforms built on the Ethereum Network, like Swarm Markets.
Where can I buy a crypto voucher from? People will be able to purchase vouchers after using Yoti and its partner’s free app, EasyID, to verify their identity. The vouchers can then be used on Swarm Markets and redeemed via our platform.
What payment methods are accepted? Currently, crypto vouchers may be purchased using widely accepted credit cards, debit cards, and through bank accounts that support SEPA payments.
What do I get when I buy a crypto voucher? When you successfully complete a purchase, you will see your new voucher in the vouchers section of the Swarm Markets platform, including the value of the cryptocurrency it is denominated in. You will also receive a copy of your voucher by email. The funds represented in your voucher are held in custody by Swarm Markets until you request to redeem them to your own crypto wallet.
What fees are charged for buying crypto vouchers on Swarm Markets? Swarm Markets does not apply any fee for the purchase of crypto vouchers. Currently we partner with a payment provider called MoonPay. The processing fees are charged by MoonPay, and network fees for transacting on the Blockchain are applied to all crypto voucher purchases. The processing fee varies based on payment method. The network fee, also known as a gas fee, is dynamic based on network conditions and is set by transaction processors on the Ethereum Network at the time of your transaction.
MoonPay Fee Schedule
Payment method | Processing Fee | Network Fee
Card payments | 4.5%, min. €3.99/£3.99/$3.99 or currency equivalent | dynamic
Bank transfers | 1%, min. €3.99/£3.99/$3.99 or currency equivalent | dynamic
What is a network (gas) fee? All transactions made on the Ethereum Blockchain must be validated, which uses a lot of computational power. A fee is paid to those who process transactions to compensate them for this use of energy. The network fee on your voucher is calculated before you confirm your transaction and will be displayed on screen for you to check, before you make payment.
Are there any limitations I should be aware of? Voucher purchases are currently limited to a total of 500 EUR / 600 USD / 450 GBP in active balances at any time. A voucher has an active balance if it has not been redeemed. For example, if the total value of your active (unredeemed) vouchers is currently 400 EUR, you are able to purchase additional vouchers up to a value of 100 EUR.
MoonPay, our payment provider, imposes minimum purchase amounts, depending on the selected cryptocurrency: For ETH, the minimum purchase is 30 USD. For WBTC, the minimum purchase is 0.0042 WBTC.
What does it mean to ‘redeem’ a crypto voucher? When you redeem a voucher, Swarm Markets sends the cryptocurrency value of the voucher to your crypto wallet address.
How do I redeem a crypto voucher? To redeem your voucher, click the “Redeem” button on any of your active vouchers. After confirmation, we will receive your redemption request and transfer the funds to your connected wallet address. Note that if you have not previously onboarded with Swarm Markets, redemption requires connecting an Ethereum address and verifying your email. You will need to set up a crypto wallet to redeem the crypto assets before you go through the redemption process on Swarm Markets.
Can I redeem my voucher for bitcoin instead of wrapped bitcoin? Redemption is currently only available in the cryptocurrency of the voucher. Bitcoin is currently not supported.
Can I spend my voucher? Swarm Markets crypto vouchers cannot currently be used to spend in stores or online. At any time, the real crypto assets they represent can be redeemed to your connected crypto wallet address.
Is Swarm Markets regulated? Swarm Markets operates under regulatory license from the Federal Financial Supervisory Authority (BaFin) in Germany, provided to Swarm Capital GmbH Branch Office Berlin, and is supported by Swarm Markets GmbH (together “Swarm Markets”). Click here for more information. Each customer on Swarm Markets must undergo KYC (Know Your Customer) and AML (anti-money laundering) checks, using Yoti’s digital identification software, in order to be verified to use the decentralised finance platform. Voucher purchasers are individually responsible for compliance with local regulations applicable to the purchase of cryptocurrencies. All investments—including the purchase of cryptocurrencies—carry with them the risk of partial or total loss.
What is the Yoti app? Yoti is a digital identity app that gives individuals a safe way to prove their identity and age online and in person with thousands of UK businesses. People can download the app for Apple or Android phones, add their ID document and facial biometrics, and Yoti verifies the identity data. Any details added are encrypted into unreadable data. Only the owner has the key to unlock the encrypted details, which is stored safely in their phone and ready to share.
Who can use the Yoti app? For consumers: anyone with a smartphone can download the app for free. The app accepts ID documents, including passports, driving licenses and national ID cards from over 195 countries.
Can't find the answer you're looking for? If you’re having problems, contact for support. For general information, you can always visit our community Discord.
https://docs.swarm.markets/core-concepts/vouchers
2021-10-16T09:30:38
CC-MAIN-2021-43
1634323584554.98
[]
docs.swarm.markets
Tips to Consider When Selecting the Surpassing Online Pharmacy Once you have been prescribed medication, you have to make sure that you take the entire dose for your health to improve. Considering that the cost of medication is high nowadays, then people are using different ways for this cost to be reduced. This shows that their medications have to be bought from online pharmacies. Since you are looking forward to selecting the best online pharmacy, then it is ideal to read more on this page to find an affordable one too. First, for you to identify the best online pharmacy you should consider using the internet as well as the social media accounts. In your social media groups, you can find plenty of people who have been getting their prescriptions through online pharmacies. Thus, if you use referrals, then you should find plenty of pharmacies from which you can source your drugs from. Again, you can use the internet to search for the online Canadian pharmacy. Thus, you can find plenty of online pharmacies through the use of both referrals as well as social media accounts. This shows that you should contemplate finding the reviews for these pharmacies for you to choose the best among them. You should choose the drug store which has positive reviews to ensure you get quality medicines. When you are choosing an online pharmacy for your medications, it is ideal to consider the license as well as the certification for the sale of drugs. You need to be provided with quality medications for your health to improve successfully. Therefore, a licensed and certified online pharmacy should be chosen for your needs. The license shows that the pharmacy has been selling the medicines legally. Still, you would find a pharmacy that has been delivering the prescriptions through the orders of their patients without any issues if it has the certification. Hence, with the certification you would identify the pharmacy which has a clean track record for past sales. When you are choosing the right pharmacy, it is ideal to contemplate how much the drugs cost. You are looking for online pharmacies because you want to reduce the cost of medications. Hence, before you select the pharmacy, you would need to consider finding more about the prices of the drugs from several pharmacies. The online pharmacy whose prices are reasonable ought to be chosen for all your medication needs. This company would be ideal because it has drugs which can be purchased affordably. Thus, as you are choosing the best online Canadian pharmacy it is ideal to select the store through use of referrals while considering the reviews, ensure that both the license and certification are available and make sure that its drugs are sold at an affordable rate. Source: more info here
http://docs-prints.com/2020/12/24/case-study-my-experience-with-23/
2021-10-16T08:52:36
CC-MAIN-2021-43
1634323584554.98
[]
docs-prints.com
Welcome to WHMCS KeysProvider Module
What can WHMCS KeysProvider Module do? Start selling CD keys, serial keys, or logins with your WHMCS natively. The module is crafted by Codebox.ca
https://docs.codebox.ca/pages/viewpage.action?pageId=2064499
2021-10-16T08:43:47
CC-MAIN-2021-43
1634323584554.98
[]
docs.codebox.ca
pdoTools Last updated May 8th, 2019 | Page history | Improve this page | Report an issue This class handles chunks and contains various service methods. $pdo = $modx->getService('pdoTools'); $chunk = $pdo->getChunk('chunkName', array('with', 'values')); It can load chunks by various methods: - Default method - as chunk from database. Just specify its name. @INLINEchunk that will be generated on the fly: @FILEchunk, that will be loaded from file. Due to security reasons you can use only files of types tpland html. Files are loaded from directory specified in system setting pdotools_elements_path. [[!pdoResources? &elementsPath=`/core/elements/` &tpl=`@FILE chunks/file.tpl` ]] @TEMPLATE- chunk will be generated on the fly from template of resource. So, this one only for rows with filled field template. It is kind of replacement for snippet renderResources. Every pdoTools based snippet could load chunks this ways. pdoResources, getTickets, msProducts and so on. The only thing you must remember - is to be careful with @INLINE because if you will specify placeolders directly on page - they can be processed before snippet run. That is why pdoTools supports different tags for placeholders: [[!pdoResources? &parenets=`0` &tpl=`@INLINE <p>{{+id}} - {{+pagetitle}}</p>` ]] This placeholders will pass to snippet unprocessed and than pdoTools will replace {{}} to [[]] with no harm to logic. Remember to use this syntax for all @INLINE chunks on MODX pages. When placeholders are passed into pdoTools it tries to parse it yourself. It can parse simple tags like [[+tag]] [[%lexicon]] [[~id_for_link]] [[~[[+id]]]] But it will load MODX parser to process any nested snippets, chunks or output filters. So, any chunk with output filter will be slower. But how we can modify our data before processing? It is a simple - we need to use &prepareSnippet! [[!pdoResources? &parents=`0` &tpl=`@INLINE <p>{{+id}} - {{+pagetitle}}</p>` &prepareSnippet=`cookMyData` ]] Snippet cookMyData will receive $row variable with all selected fields of one row and must return string with it (because MODX snippets can`t return array). Let`s we just add some random string to every pagetitle of resource: <?php $row['pagetitle'] .= rand(); return json_encode($row); *you can use json_encode() or serialize() to return data Now you know how we able to throw away all output filters and nested snippets from your chunks to make them faster. Of course, it is much faster to do some work in one snippet instead of parsing multiple snippets in chunks. Also you can use objects $modx and $pdoTools in prepareSnippet to cache data you need to work. pdoTools has methods setStore() and getStore(). For example, I want to highlight users of some groups in my comments (yes, it is the real task). So I call snippet with my prepareSnippet [[!TicketComments? 
&prepareSnippet=`prepareComments` ]] And there is my prepareComments snippet: <?php if (empty($row['createdby'])) {return json_encode($row);} // If we do not have cached groups if (!$groups = $pdoTools->getStore('groups')) { $tstart = microtime(true); $q = $modx->newQuery('modUserGroupMember'); $q->innerJoin('modUserGroup', 'modUserGroup', 'modUserGroupMember.user_group = modUserGroup.id'); $q->select('modUserGroup.name, modUserGroupMember.member'); $q->where(array('modUserGroup.name:!=' => 'Users')); if ($q->prepare() && $q->stmt->execute()) { $modx->queryTime += microtime(true) - $tstart; $modx->executedQueries++; $groups = array(); while ($tmp = $q->stmt->fetch(PDO::FETCH_ASSOC)) { $name = strtolower($tmp['name']); if (!isset($groups[$name])) { $groups[$name] = array($tmp['member']); } else { $groups[$name][] = $tmp['member']; } } } foreach ($groups as & $v) { $v = array_flip($v); } // Save groups to cache $pdoTools->setStore('groups', $groups); } $class = ''; if (!empty($row['blocked'])) { $class = 'blocked'; } elseif (isset($groups['administrator'][$row['createdby']])) { $class = 'administrator'; } $row['class'] = $class; return json_encode($row); And now I can use [[+class]] in my chunk to highlight admins and blocked users. Using of "Store" methods of pdoTools allows me to cache the data only at run time without save to hdd. It is very fast and handy. It total: - You can load chunks by various ways. - They will be processed so fast, how they simple are. - It is much better to put all your template logic to &prepareSnippetinstead of additional nested snippets or output filters calls in chunks. Remember, every nested call in chunk costs you seconds of total time of page load. Logic must be in PHP, not in MODX tags.
https://docs.modx.com/3.x/en/extras/pdoTools/Classes/pdoTools
2021-10-16T09:31:28
CC-MAIN-2021-43
1634323584554.98
[]
docs.modx.com
Administrative Processes¶ It's easy to get started integrating Dash, but you will need to make some decisions about whether you plan to convert your income earned in Dash into your local fiat currency, or if you prefer to hold some or all of it in Dash. Most payment processors offer a range of fiat conversion options, although various fees and limits may be applicable. Onboarding Process¶ New merchants typically go through the following steps when joining the Dash ecosystem: - Set up a Dash wallet - Identify an appropriate payment processor - Decide on how and when to convert funds - Implementation and testing - Release and marketing - Integration on DiscoverDash Promoting Dash¶ A wide range of ready-to-go visual products are available to help you promote Dash as a payment method to your customers. This includes promotional graphics and stickers, fonts for consistent visual design and guidelines on how to use the Dash visual identity. See the Marketing section for more information. The reduced fees may also offer an additional incentive for your customers to pay with Dash, particularly in businesses with high cash handling fees or where it is necessary to add a fee to process credit card transactions. Currency Conversion¶ Cryptocurrency is a relatively recent development, and rapid development in the ecosystem coupled with various barriers to access and heavy trading mean that fiat-denominated value is subject to considerable fluctuation. As a merchant, you will need to make decisions about how much of your income taken in cryptocurrency should actually be held in cryptocurrency, and how much should be converted back to a fiat currency (such as USD) directly. Different payment processors offer different solutions to this problem. Services such as GoCoin are able to convert a specified percentage of received payments into a range of fiat currencies for withdrawal. Others such as CoinPayments offer the ability to diversify your payments into a range of different cryptocurrencies, but require you to set up automatic withdrawals to an exchange for conversion to fiat currency. Finally, services such as Uphold allow you to convert your Dash payments between various currencies and commodities very easily, and even offer automated investment services. Note that these listing are not endorsements, and you must complete your own due diligence and/or seek advice from a tax and investment specialist before investing.
https://dash-filipino.readthedocs.io/tl/latest/merchants/administrative.html
2021-10-16T08:27:24
CC-MAIN-2021-43
1634323584554.98
[]
dash-filipino.readthedocs.io
One option for your SMS text message flows is to present your contact with a set of options and let them choose which is the best for them. This is done using the SMS Menu Applet. To reach the SMS flow editor and the menu applet, go to your Admin Dashboard > Call Flow tab and then select Create SMS Flow (if never created one) or click on the envelope icon to modify the one you created previously. SMS Menu Applet The Menu applet works as an interactive menu, so all information is automated and sent to the contact as they request with a keypress. It's a hands-off way to get information to your audience without having to sift through messages in your inbox. Example use cases include: Sending business hours Sending location information Sending event information, such as open house location and hours Automated qualification of a lead (e.g. - text back "sell" if you'd like to sell the property) Accessing the latest promotion/promo code Allowing opt-out of receiving text messages from you Configuring your SMS menu Once you drag the Menu applet into the flow editor, you will need to set it up. See below for the instructions. ❗ Keep in mind that this applet only works if it is the first/only applet in the flow. For more advanced flows please consider using the SMS API. You can customize the Menu Prompt, deciding what your contact should receive based on pre-set messages. Menu Prompt - type in your prompt, with all the options and hints for the expected message (for example, Text back "sell" if you intend to sell the house. Text back "buy" if you intend to buy a house, and so on.) This is the exact language your contact will see when reaching out to you. Keyword - the keypress that will trigger the next received message by your lead. This can be numbers or letters - just make sure it is clear in your menu prompt. Reply - the message that is triggered by the lead's selection. You can add multiple options to it in order to ease your communication, by filtering it. The incoming messages will not go into an inbox because the reply is automated based on your setup. You can always check the answers received in your Communications App from your Podio Organization, the Workspace linked to your smrtPhone account. ❗❗❗ Please, be aware of texting policies and regulations!
https://docs.smrtphone.io/en/articles/5577992-creating-a-texting-menu-menu-applet-sms-flow
2021-10-16T08:19:45
CC-MAIN-2021-43
1634323584554.98
[array(['https://downloads.intercomcdn.com/i/o/390456130/cd96b25b1dde371125046b90/sms+menu.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/390474572/5dcbe11bc19b325b6eabe4f8/sms+menu+desf.png', None], dtype=object) ]
docs.smrtphone.io
Deploying¶ Deploying your JupyterLite requires: an actual HTTP server (doesn’t presently work with) Warning Serving some of the kernels requires that your web server supports serving application/wasm files with the correct headers Hint An HTTPS-capable server is recommended for all but the simplest localhost cases. Get an Empty JupyterLite Site¶ The minimum deployable site contains enough to run JupyterLab and RetroLab, but no content. Hint Use of the CLI is optional, but recommended. It offers substantially better integration with other Jupyter tools. To get the Python CLI and API from PyPI: python -m pip install --pre jupyterlite # TODO: mamba install jupyterlite To build an empty site (just the JupyterLite static assets): jupyter lite init Static Site: The Hard Way¶ download a release archive from GitHub Releases download nightly/work-in-progress builds from GitHub actions clone/fork the [repository] and do a development build TBD: use cookiecutter-jupyterlite TBD: yarn add @jupyterlite/builderfrom npmjs.com Hint It is recommended to put these files under revision control. See Configuring for what you can configure in your JupyterLite. Build Tools¶ While the JupyterLite CLI will create the correct assets for JupyterLite, it might not be enough to deploy along with the rest of your content. WebPack¶ TBD sphinx¶ Sphinx is the workhorse of documentation of not only the scientific Python documentation community, but also the broader Python ecosystem, and many languages beyond it. It is well adapted to building sites of any size, and tools like myst-nb enable make it very palletable to include executable, and even interactive, content. JupyterLite assets can be copied to the default static directory in conf.py, e.g. docs/_static with html_static_path, or replace the entire site with html_extra_path html_static_path¶ This search path can be merged several layers deep, such that your theme assets, the “gold master” JupyterLite assets, and any customizations you wish to make are combined. html_static_path = [ "_static", "../upstream-jupyterlite", "../my-jupyterlite" # <- these "win" ] The composite directory will end up in docs/_build/_static. html_extra_path¶ A slightly more aggressive approach is to use html_extra_path to simply dump the assets directly into the doc folder. This approach can be used to deploy a site that launches directly into your JupyterLite. Adapting the example above: html_extra_path = ["../upstream-jupyterlite", "../my-jupyterlite"] Again, the last-written index.html will “win” and be shown to vistors to /, which will immediately redirect to appUrl as defined in the schema. Standalone Servers¶ Local¶ Suitable for local development, many languages provide easy-to-use servers that can serve your JupyterLite locally while you get it working the way you want. Jupyter¶ If you’re already running a [Jupyter Server]-powered app, such as JupyterLab, your files will be served correctly on e.g.. Python¶ http.server¶ The http module in the Python standard library is a suitably-effective server for local purposes. python -m http.server -b 127.0.0.1 If you are using a recently-released Python 3.7+, this will correctly serve application/wasm files for pyodide. sphinx-autobuild¶ If using Sphinx, sphinx-autobuild provides a convenient way to manage both static content and rich interactive HTML like your JupyterLite. sphinx-autobuild docs docs/_build This will regenerate your docs site and automatically refresh any browsers you have open. 
As your JupyterLite is mostly comprised of static assets, changes will not trigger a refresh by default. Enabling the -a flag will allow reloading when static assets change, but at the price rebuild the whole site when any file changes… this can be improved with the -j<N> flag, but is not compatible with all sphinx extensions. sphinx-autobuild docs docs/_build -aj8 NodeJS¶ Most nodejs-based servers will be able to host JupyterLite without any problems. Note, however, that http-server does not support the application/wasm MIME type. On-Premises¶ nginx¶ TBD httpd¶ TBD IIS¶ TBD Hosted¶ Binder¶ A JupyterLite can be deployed behind jupyter-server-proxy using any local server method. This is a good way to preview deployment interactively of a e.g. Lab extension that can work in both the “full” binder experience, and as a static preview. Hint See the JupyterLite binder configuration for an example. ReadTheDocs¶ The Sphinx deployment approach will work almost transparently with ReadTheDocs, for the small price of a .readthedocs.yml file in the root of your repository. Hint See the JupyterLite .readthedocs.yml for an example. Hint You might also want to enable the Autobuild Documentation for Pull Requests feature of Read The Docs to automatically get a preview link when opening a new pull request: Netlify¶ Netlify makes it easy and convenient to host static websites from existing git repositories, and make them widely available via their CDN. To deploy your own JupyterLite on Netlify, you can start from the JupyterLite Demo by generating a new repository from the template. Then add a runtime.txt file with 3.7 as the content to specify Python 3.7 as dependency. Finally specify jupyter lite build --output-dir dist as the “Build Command”, and dist as “Published Directory”: You might also want to specify the --debug flag to get extra log messages: Vercel¶ Just like Netlify, Vercel can connect to an existing git repository and seamlessly deploy static files on push and PR events (previews). Unfortunately, their build image only includes Python 3.6 and JupyterLite requires Python 3.7+. Fortunately it is possible to run arbitrary bash scripts, which provides a convenient escape hatch. Specify the Python packages in a requirements-deploy.txt file with additional dependencies if needed: jupyterlab~=3.1.0 jupyterlite Then create a new deploy.sh file with the following content: #!/bin/bash yum install wget wget -qO- | tar -xvj bin/micromamba ./bin/micromamba shell init -s bash -p ~/micromamba source ~/.bashrc # activate the environment and install a new version of Python micromamba activate micromamba install python=3.9 -c conda-forge -y # install the dependencies python -m pip install -r requirements-deploy.txt # build the JupyterLite site jupyter lite --version jupyter lite build --output-dir dist Micromamba creates a new self-contained environment, which makes it very convenient to install any required package without being limited by the build image. Then configure the build command and output directory on Vercel: You might also want to specify the --debug flag to get extra log messages: jupyter lite build --debug GitHub Pages¶ JupyterLite can easily be deployed on GitHub Pages, using the jupyterlite CLI to add content and extensions. Hint See the JupyterLite Demo for an example. That repository is a GitHub template repository which makes it convenient to generate a new JupyterLite site with a single click. 
GitLab Pages¶ JupyterLite can easily be deployed on GitLab Pages, using the jupyterlite CLI and setting the output_path to the public folder in your .gitlab-ci.yml file. Suppose that your notebooks are stored in the content folder; and you don’t require any additional python dependencies and configuration overrides, the .gitlab-ci.yml could look like. image: python pages: stage: deploy before_script: - python -m pip install jupyterlite script: - jupyter lite build --contents content --output-dir public artifacts: paths: - public # mandatory, other folder won't work only: - main # the branch you want to publish Hint See the gitlab pages template for a more involved example. Heroku¶ TBD
https://jupyterlite.readthedocs.io/en/latest/deploying.html
2021-10-16T08:30:28
CC-MAIN-2021-43
1634323584554.98
[array(['https://user-images.githubusercontent.com/591645/119787419-78db1c80-bed1-11eb-9a60-5808fea59614.png', 'rtd-pr-preview'], dtype=object) array(['https://user-images.githubusercontent.com/591645/124728917-4846c380-df10-11eb-8256-65e60dd3f258.png', 'netlify-build'], dtype=object) array(['https://user-images.githubusercontent.com/591645/124779931-79d88280-df42-11eb-8f94-93d5715c18bc.png', 'deploy-logs'], dtype=object) array(['https://user-images.githubusercontent.com/591645/135726080-93ca6930-19de-4371-ad13-78f5716b7299.png', 'image'], dtype=object) ]
jupyterlite.readthedocs.io
Patricia Clark Boston Air Route Traffic Control Center. Temporary Debt Limit Extension Act...
http://docs.house.gov/billsthisweek/20140210/CPRT-113-HPRT-RU00-S540.xml
2016-02-06T00:05:50
CC-MAIN-2016-07
1454701145578.23
[]
docs.house.gov
Smart Search on large sites From Joomla! Documentation Read this page if your Joomla site has a large number of pages and/or some of your pages are particularly large. Smart Search is suitable for the majority of Joomla sites. However, search presents particular challenges for large sites and both the old and new search methods are likely to present difficulties; just in different ways. It should be remembered that Smart Search is a pure PHP implementation of a search engine and particularly large sites may be better off using a standalone search engine such as Solr. To use Smart Search on a large site you will probably need to adjust some of the configuration settings. What follows is some general advice on what to look out for and what to try tweaking. There are a number of known outstanding issues with regard to running Smart Search on large sites which will hopefully be addressed in future versions and these are also described here. Smart Search works by creating and maintaining an independent index of search terms in a number of database tables. The problem for large sites is that the indexing process can be quite heavy in terms of CPU usage, memory usage and disk usage. Even after the initial construction of the index is complete, incremental updates can also be quite heavy. The good news is that querying the index is a relatively quick and lightweight operation. Contents Always use the CLI indexer Because the initial indexing process can take a long time it is best to run the indexer from the command line so as to avoid any issues from browser sessions timing out. The CLI indexer will not timeout regardless of how long it takes t complete and it can be easily aborted if problems are encountered. Furthermore, error messages are easily visible with the CLI indexer, whereas they are hidden when running from the Administrator. For instructions on using the CLI indexer see Setting up automatic Smart Search indexing Batching The indexer breaks the indexing job into batches of content items. By default the batch size is set at 30 meaning that up to 30 content items will be indexed per batch. Increasing the batch size will potentially make the indexing process faster, but it will use more memory and possibly more temporary disk space. Out of memory issues If the indexer is running out of memory then try making the following adjustments one at a time until the problem is resolved. - Decrease the batch size. If you have particularly large content items the indexer can run out of memory on even a single content item, so try dropping it to 5 initially and if you still run out of memory, drop it to 1. - If you are able to allocate more memory to the indexer then do so. You can increase the memory allocated to the command-line indexer using an extra parameter on the command-line. For example, to increase the memory limit to 256Mb use the following command, replacing the 256M with as much memory as you can safely allocate to a process on your system. - php -d memory_limit=256M finder_indexer.php - Reduce the memory table limit. The default is 30000 terms which means that as soon as the temporary in-memory jos_finder_tokens table reaches this number of rows the indexer will switch to using a disk table instead of a memory table. It may be that you don't have enough memory to handle a full or nearly full memory table so reducing the limit will tell the indexer to switch to disk sooner and so use less memory. Try 10000 or even much smaller numbers. 
- Change the database engine used for the jos_finder_tokens and jos_finder_tokens_aggregate tables from MEMORY to MYISAM or INNODB. This could seriously impact performance as more of the indexing process will use the disk instead of memory, but it might allow the indexer to finish without running out of memory. Expect the indexing process to run for much longer. This will not affect search performance however. - Try to identify which content items are causing the indexer to run out of memory. If it's not obvious then you might try disabling all the Smart Search plugins except one. Running the indexer with only one plugin enabled at a time should reveal which content type(s) are causing the issue. As a last resort you might consider breaking a few exceptionally large content items into separate items. If the problem is with a custom content type then look at the plugin code and consider indexing less of the available content per item. Out of disk space issues The Smart Search index tables can get very big very quickly! The jos_finder_links_termsX tables (where X is a single hexadecimal character) contain one row per term/phrase per content item and a single Joomla article containing 1000 words will typically result in approximately 3000 rows being added to these tables. A second article of a similar size will add a similar number of rows even if both articles contain the same words. A site with tens of thousands of articles, some of which may contain thousands of words, is very likely to end up with these mapping tables containing millions of rows. It is not unusual for the index tables to occupy several gigabytes of disk space in such circumstances. With the present version of Smart Search there isn't much you can do about this. However, it is hoped that in the next release you will be able to adjust the number of words per phrase that get indexed. At present this is hard-wired at 3, meaning that every word that gets indexed is also indexed as part of a pair of adjacent words and as part of a triplet of adjacent words. This is useful for the auto-completion feature and generally improves the quality of search results. On sites where disk space is an issue it would be good to reduce this to 2 or even 1, so that the mapping tables would be correspondingly smaller. Notes - There is currently no concurrency locking to prevent more than one process running the indexer at the same time. This will almost certainly result in a corrupt index. Even someone saving changes to a content item while a full index is being carried out could potentially damage the index.
https://docs.joomla.org/Smart_Search_on_large_sites
2016-02-06T00:44:07
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Revision history of "Subpackage Updater" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 07:15, 20 June 2013 Wilsonge (Talk | contribs) deleted page Subpackage Updater (content was: "{{Description:Subpackage Updater}} This subpackage is available in the following Joomla versions:- <splist showpath=notparent /> <noinclude>Category:Subpackage..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=Subpackage_Updater&action=history
2016-02-06T01:49:26
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Seam code completion and validation now also supports the Seam 2 notion of imports. This allows the wizard to correctly identify where the needed datasource and driver libraries need to go. Hibernate support is now enabled by default on war, ejb and test projects generated by the Seam Web Project wizard. This enables all the HQL code completion and validation inside Java files. Code completion just requires the Hibernate Console configuration to be opened/created; validation requires the Session Factory to be opened/created. When the Seam wizards (New Entity, Action, etc.) complete, they now (if necessary) automatically touch the right descriptors to get the new artifacts redeployed on the server. EL code completion now supports more of the enhancements to EL available in Seam; e.g. size, values, keySet and more are now available in code completion for collections and will not be flagged during validation.
http://docs.jboss.org/tools/whatsnew/seam/seam-news-1.0.0.cr1.html
2016-02-06T00:37:55
CC-MAIN-2016-07
1454701145578.23
[]
docs.jboss.org
JDatabaseQueryMySQLi (cleaning up content namespace and removing duplicated API references) - 12:35, 29 August 2012 JoomlaWikiBot (Talk | contribs) automatically marked revision 72307 of page JDatabaseQueryMySQLi patrolled - 12:35, 29 August 2012 MediaWiki default (Talk | contribs) allowed - 17:13, 27 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 55297 of page JDatabaseQueryMySQLi patrolled
https://docs.joomla.org/index.php?title=Special:Log&page=JDatabaseQueryMySQLi
2016-02-06T00:51:47
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Difference between revisions of "Beginner profile" From Joomla! Documentation Revision as of 11:57, 15 November 2008 As someone who has not used Joomla before, it is likely you have no clue how to get started using it. Maybe you have a friend or a neighbour read the
https://docs.joomla.org/index.php?title=Chunk:Beginner_profile&diff=11744&oldid=11743
2016-02-06T00:43:05
CC-MAIN-2016-07
1454701145578.23
[]
docs.joomla.org
Modal Popups
A Modal Popup window is a child window that requires users to interact with it before they can return to operating the parent application. Modal windows often have a different appearance than normal windows and are typically without navigation buttons and menu headings.
Modal Popups are detected automatically by Test Studio, like HTML Popups; however, a separate Recording Toolbar is not attached. Notice the Connect to modal pop-up window and Close modal pop-up window steps. There are additional attributes in the Properties pane under the Modal Popup heading.
- IsModalPopup identifies whether the popup window is modal.
- ModalPopupPartialCaption locates the desired modal popup based on its partial caption.
In Firefox, Safari, and Chrome, Test Studio cannot connect to Modal Popups that do not have a title. Ensure all Modal Popups are titled if you are testing in browsers other than Internet Explorer.
http://docs.telerik.com/teststudio/features/dialogs-and-popups/modal-popups
2016-02-06T00:16:19
CC-MAIN-2016-07
1454701145578.23
[array(['/teststudio/img/features/dialogs-and-popups/modal-popups/fig1.png', 'Modal Window'], dtype=object) array(['/teststudio/img/features/dialogs-and-popups/modal-popups/fig2.png', 'Modal Window Test'], dtype=object) ]
docs.telerik.com
Transactions. It offers a two-phase commit protocol which allows multiple database backends, even non-ZODB databases, to participate in a transaction and commit their changes only if all of them can successfully do so. It also offers support for savepoints, so that part of a transaction can be rolled back without having to abort it completely. The best part is that this transaction mechanism is not tied to the ZODB and can be used in Python applications as a general transaction support library. Because of this and also because understanding the transaction package is important to use the ZODB correctly, this chapter describes the package in detail and shows how to use it outside the ZODB. Getting the transaction package¶ To install the transaction package you can use easy_install: $ easy_install transaction After this, the package can be imported in your Python code, but there are a few things that we need to explain before doing that. Things you need to know about the transaction machinery¶ Transactions¶ A transaction consists of one or more operations that we want to perform as a single action. It’s an all or nothing proposition: either all the operations that are part of the transaction are completed successfully or none of them have any effect. In the transaction package, a transaction object represents a running transaction that can be committed or aborted in the end. Transaction managers¶ Applications interact with a transaction using a transaction manager, which is responsible for establishing the transaction boundaries. Basically this means that it creates the transactions and keeps track of the current one. Whenever an application wants to use the transaction machinery, it gets the current transaction from the transaction manager before starting any operations The default transaction manager for the transaction package is thread aware. Each thread is associated with a unique transaction. Application developers will most likely never need to create their own transaction managers. Data Managers¶ A data manager handles the interaction between the transaction manager and the data storage mechanism used by the application, which can be an object storage like the ZODB, a relational database, a file or any other storage mechanism that the application needs to control. The data manager provides a common interface for the transaction manager to use while a transaction is running. To be part of a specific transaction, a data manager has to ‘join’ it. Any number of data managers can join a transaction, which means that you could for example perform writing operations on a ZODB storage and a relational database as part of the same transaction. The transaction manager will make sure that both data managers can commit the transaction or none of them does. An application developer will need to write a data manager for each different type of storage that the application uses. There are also third party data managers that can be used instead. The two phase commit protocol¶ The transaction machinery uses a two phase commit protocol for coordinating all participating data managers in a transaction. The two phases work like follows: - The commit process is started. - Each associated data manager prepares the changes to be persistent. - Each data manager verifies that no errors or other exceptional conditions occurred during the attempt to persist the changes. If that happens, an exception should be raised. This is called ‘voting’. 
A data manager votes ‘no’ by raising an exception if something goes wrong; otherwise, its vote is counted as a ‘yes’. - If any of the associated data managers votes ‘no’, the transaction is aborted; otherwise, the changes are made permanent. The two phase commit sequence requires that all the storages being used are capable of rolling back or aborting changes. Savepoints¶ A savepoint allows a data manager to save work to its storage without committing the full transaction. In other words, the transaction will go on, but if a rollback is needed we can get back to this point instead of starting all over. Savepoints are also useful to free memory that would otherwise be used to keep the whole state of the transaction. This can be very important when a transaction attempts a large number of changes. Using transactions¶ Now that we got the terminology out of the way, let’s show how to use this package in a Python application. One of the most popular ways of using the transaction package is to combine transactions from the ZODB with a relational database backend. Likewise, one of the most popular ways of communicating with a relational database in Python is to use the SQLAlchemy Object-Relational Mapper. Let’s forget about the ZODB for the moment and show how one could use the transaction module in a Python application that needs to talk to a relational database. Installing SQLAlchemy¶ Installing SQLAlchemy is as easy as installing any Python package available on PyPi: $ easy_install sqlalchemy This will install the package in your Python environment. You’ll need to set up a relational database that you can use to work out the examples in the following sections. SQLAlchemy supports most relational backends that you may have heard of, but the simplest thing to do is to use SQLite, since it doesn’t require a separate Python driver. You’ll have to make sure that the operating system packages required for using SQLite are present, though. If you want to use another database, make sure you install the required system packages and drivers in addition to the database. For information about which databases are supported and where you can find the drivers, consult. Choosing a data manager¶ Hopefully, at this point SQLAlchemy and SQLite (or other database if you are feeling adventurous) are installed. To use this combination with the transaction package, we need a data manager that knows how to talk to SQLAlchemy so that the appropriate SQL commands are sent to SQLite whenever an event in the transaction life-cycle occurs. Fortunately for us, there is already a package that does this on PyPI, so it’s just a matter of installing it on our system. The package is called zope.sqlalchemy, but despite its name it doesn’t depend on any zope packages other than zope.interface. By now you already know how to install it: $ easy_install zope.sqlalchemy You can now create Python applications that use the transaction module to control any SQLAlchemy-supported relational backend. A simple demonstration¶ It’s time to show how to use SQLAlchemy together with the transaction package. To avoid lengthy digressions, knowledge of how SQLAlchemy works is assumed. If you are not familiar with that, reading the tutorial at will give you a good enough background to understand what follows. After installing the required packages, you may wish to follow along the examples using the Python interpreter where you installed them. The first step is to create an engine: This will connect us to the database. 
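For example, the engine might be created like this (a minimal sketch assuming an in-memory SQLite database; a file-based or server-based connection string works the same way):

```python
# Create an SQLAlchemy engine; the connection string selects the backend.
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:')
```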
The connection string shown here is for SQLite, if you set up a different database you will need to look up the correct connection string syntax for it. The next step is to define a class that will be mapped to a table in the relational database. SQLAlchemy’s declarative syntax allows us to do that easily: The User class is now mapped to the table named ‘users’. The create_all method in line 12 creates the table in case it doesn’t exist already. We can now create a session and integrate the zope.sqlalchemy data manager with it so that we can use the transaction machinery. This is done by passing a Session Extension when creating the SQLAlchemy session: In line 3, we create a session class that is bound to the engine that we set up earlier. Notice how we pass the ZopeTransactionExtension using the extension parameter. This extension connects the SQLAlchemy session with the data manager provided by zope.sqlalchemy. In line 4 we create a session. Under the hood, the ZopeTransactionExtension makes sure that the current transaction is joined by the zope.sqlalchemy data manager, so it’s not necessary to explicitly join the transaction in our code. Finally, we are able to put some data inside our new table and commit the transaction: Since the transaction was already joined by the zope.sqlalchemy data manager, we can just call commit and the transaction is correctly committed. As you can see, the integration between SQLAlchemy and the transaction machinery is pretty transparent. Aborting transactions¶ Of course, when using the transaction machinery you can also abort or rollback a transaction. An example follows: We need a new transaction for this example, so a new session is created. Since the old transaction had ended with the commit, creating a new session joins it to the current transaction, which will be a new one as well. We make a query just to show that our user’s fullname is ‘John Smith’, then we change that to ‘John Q. Public’. When the transaction is aborted in line 8, the name is reverted to the old value. If we create a new session and query the table for our old friend John, we’ll see that the old value was indeed preserved because of the abort: Savepoints¶ A nice feature offered by many transactional backends is the existence of savepoints. These allow in effect to save the changes that we have made at the current point in a transaction, but without committing the transaction. If eventually we need to rollback a future operation, we can use the savepoint to return to the “safe” state that we had saved. Unfortunately not every database supports savepoints and SQLite is precisely one of those that doesn’t, which means that in order to be able to test this functionality you will have to install another database, like PostgreSQL. Of course, you can also just take our word that it really works, so suit yourself. Let’s see how a savepoint would work using PostgreSQL. First we’ll import everything and setup the same table we used in our SQLite examples: We are now ready to create and use a savepoint: Everything should look familiar until line 4, where we create a savepoint and assign it to the sp variable. If we never need to rollback, this will not be used, but if course we have to hold on to it in case we do. Now, we’ll add a second user: The new user has been added. We have not committed or aborted yet, but suppose we encounter an error condition that requires us to get rid of the new user, but not the one we added first. 
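Putting those steps together, a sketch of the savepoint workflow might look like the following. It assumes a session bound to a savepoint-capable backend such as PostgreSQL, configured with the ZopeTransactionExtension as before, and a hypothetical User model with name, fullname and password columns.

```python
# Sketch: create a savepoint, add more work, then roll back to the savepoint if needed.
import transaction

session = Session()  # Session class configured with the ZopeTransactionExtension
session.add(User(id=1, name='John', fullname='John Smith', password='123'))

sp = transaction.savepoint()   # remember this point in the running transaction

session.add(User(id=2, name='Jane', fullname='Jane Doe', password='456'))
session.flush()                # the second user is pending, nothing is committed yet

sp.rollback()                  # discard everything done after the savepoint
# The transaction can now be committed; only the first user will be saved.
```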
This is where the savepoint comes handy: As you can see, we just call the rollback method and we are back to where we wanted. The transaction can then be committed and the data that we decided to keep will be saved. Managing more than one backend¶ Going through the previous section’s examples, experienced users of any powerful enough relational backend might have been thinking, “wait, my database already can do that by itself. I can always commit or rollback when I want to, so what’s the advantage of using this machinery?” The answer is that if you are using a single backend and it already supports savepoints, you really don’t need a transaction manager. The transaction machinery can still be useful with a single backend if it doesn’t support transactions. A data manager can be written to add this support. There are existent packages that do this for files stored in a file system or for email sending, just to name a few examples. However, the real power of the transaction manager is the ability to combine two or more of these data managers in a single transaction. Say you need to capture data from a form into a relational database and send email only on transaction commit, that’s a good use case for the transaction package. We will illustrate this by showing an example of coordinating transactions to a relational database and a ZODB client. The first thing to do is set up the relational database, using the code that we’ve seen before: Now, let’s set up a ZODB connection, like we learned in the previous chapters: We’re ready for adding a user to the relational database table. Right after that, we add some data to the ZODB using the user name as key: Since both the ZopeTransactionExtension and the ZODB connection join the transaction automatically, we can just make the changes we want and be ready to commit the transaction immediately. >>> transaction.commit() Again, both the SQLAlchemy and the ZODB data managers joined the transaction, so that we can commit the transaction and both backends save the data. If there’s a problem with one of the backends, the transaction is aborted in both regardless of the state of the other. It’s also possible to abort the transaction manually, of course, causing a rollback on both backends as well. The two-phase commit protocol in practice¶ Now that we have seen how transactions work in practice, let’s take a deeper look at the two-phase commit protocol that we described briefly at the start of this chapter. The last few examples have used the ZopeTransactionExtension from the zope.sqlalchemy package, so we’ll look at parts of its code to illustrate the protocol steps. The complete code can be found at. The ZopeTransactionExtension uses SQLAlchemy’s SessionExtension mechanism to make sure that after a session has begun an instance of the zope.sqlalchemy data manager joins the current transaction. Once this is accomplished, the SQLAlchemy session can be made to behave according to the two-phase commit protocol. That is, a call to transaction.commit() will make sure to call the zope.sqlalchemy data manager in addition to any other data managers that have joined the transaction. To be part of the two-phase commit, a data manager needs to implement some specific methods. Some people call this a contract, others call it an interface. The important part is that the transaction manager expects to be able to call the methods, so every data manager should have them. if it intends to participate in the two-phase commit. 
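As a concrete illustration of the shape of that contract, a bare-bones (and purely hypothetical) data manager skeleton might look like this; the real zope.sqlalchemy data manager discussed next does considerably more work in each method:

```python
# Hypothetical skeleton of a data manager taking part in the two-phase commit.
class SketchDataManager(object):

    def __init__(self, transaction_manager):
        # Data managers keep a reference to the transaction manager they work with.
        self.transaction_manager = transaction_manager

    def abort(self, transaction):
        """Called outside the two-phase commit: forget all changes and detach."""

    def tpc_begin(self, transaction):
        """Start of the two-phase commit: get pending work ready to be saved."""

    def commit(self, transaction):
        """Stage the changes; nothing may become permanent yet."""

    def tpc_vote(self, transaction):
        """Vote: raise an exception here to vote 'no' and force an abort."""

    def tpc_finish(self, transaction):
        """Every manager voted 'yes': make the changes permanent. Must not fail."""

    def tpc_abort(self, transaction):
        """Some manager voted 'no': abandon the changes."""

    def sortKey(self):
        # Managers are sorted by this key so commits happen in a stable order.
        return 'sketchdatamanager' + str(id(self))
```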
The contract or interface that the zope.sqlalchemy implements is named IDataManager (I stands for Interface, of course). We’ll now go through each step of the two-phase commit methods in order, as declared by the IDataManager interface. Once the commit begins, the methods are called in the order that they are listed, except for tpc_finish and tpc_abort, which are only called if the transaction succeeds (tpc_finish) or fails (tpc_abort). abort¶ Outside of the two-phase commit proper, a transaction can be aborted before the commit is even attempted, in case we come across some error condition that makes it impossible to commit. The abort method is used for aborting a transaction and forgetting all changes, as well as end the participation of a data manager in the current transaction. The zope.sqlalchemy data manager uses it for closing the SQLAlchemy session too: The _finish method called on line 3 is responsible for closing the session and is only called if there’s an actual transaction associated with this data manager: As we’ll see, the cleanup work done by the _finish method is also used by other two-phase commit steps. tpc_begin¶ The two-phase commit is initiated when the commit method is called on the transaction, like we did in many examples above. The tpc_begin method is called at the start of the commit to perform any necessary steps for saving the data. In the case of SQLAlchemy the very first thing that is needed is to flush the session, so that all work performed is ready to be committed: commit¶ This is the step where data managers need to prepare to save the changes and make sure that any conflicts or errors that could occur during the save operation are handled. Changes should be ready but not made permanent, because the transaction could still be aborted if other transaction managers are not able to commit. The zope.sqlalchemy data manager here just makes sure that some work has been actually performed and if not goes ahead and calls _finish to end the transaction: tpc_vote¶ The last chance for a data manager to make sure that the data can be saved is the vote. The way to vote ‘no’ is to raise an exception here. The zope.sqlalchemy data manager simply calls prepare on the SQLAlchemy transaction here, which will itself raise an exception if there are any problems: tpc_finish¶ This method is only called if the manager voted ‘yes’ (no exceptions raised) during the voting step. This makes the changes permanent and should never fail. Any errors here could leave the database in an inconsistent state. In other words, only do things here that are guaranteed to work or you may have a serious error in your hands. The zope.sqlalchemy data manager calls the SQLAlchemy transaction commit and then calls _finish to perform some cleanup: tpc_abort¶ This method is only called if the manager voted ‘no’ by raising an exception during the voting step. It abandons all changes and ends the transaction. Just like with the tpc_finish step, an error here is a serious condition. The zope.sqlalchemy data manager calls the SQLAlchemy transaction rollback here, then performs the usual cleanup: More features and things to keep in mind about transactions¶ We now know the basics about how to use the transaction package to control any number of backends using available data managers. There are some other features that we haven’t mentioned and some things to be aware of when using this package. We’ll cover a few of them in this section. 
Joining a transaction¶ Both the zope.sqlalchemy and the ZODB packages make their data managers join the current transaction automatically, but this doesn’t have to be always the case. If you are writing your own package that uses transaction you will need to explicitly make your data managers join the current transaction. This can be done using the transaction machinery: To join the current transaction, you use transaction.get() to get it and then call the join method, passing an instance of your data manager that will be joining that transaction from then on. Before-commit hooks¶ In some cases, it may be desirable to execute some code right before a transaction is committed. For example, if an operation needs to be performed on all objects changed during a transaction, it might be better to call it once at commit time instead of every time an object is changed, which could slow things down. A pre-commit hook on the transaction is available for this: In this example the hook some_operation will be registered and later called when the commit process is started. You can pass to the hook function any number of positional arguments as a tuple and also key/value pairs as a dictionary. It’s possible to register any number of hooks for a given transaction. They will be called in the order that they were registered. It’s also possible to register a new hook from within the hook function itself, but care must be taken not to create an infinite loop doing this. Note that a registered hook is only active for the transaction in question. If you want a later transaction to use the same hook, it has to be registered again. The getBeforeCommitHooks method of a transaction will return a tuple for each hook, with the registered hook, args and kws in the order in which they would be invoked at commit time. After-commit hooks¶ After-commit hooks work in the same way as before-commit hooks, except that they are called after the transaction commit succeeds or fails. The hook function is passed a boolean argument with the result of the commit, with True signifying a successful transaction and False an aborted one. The getAfterCommitHooks method of a transaction will return a tuple for each hook, with the registered hook, args and kws in the order in which they would be invoked after commit time. Commit hooks are never called for doomed or explicitly aborted transactions. Synchronizers¶ A synchronizer is an object that must implement beforeCompletion and afterCompletion methods. It’s registered with the transaction manager, which calls beforeCompletion when it starts a top-level two-phase commit and afterCompletion when the transaction is committed or aborted. Synchronizers have the advantage that they have to be registered only once to participate in all transactions managed by the transaction manager with which they are registered. However, the only argument that is passed to them is the transaction itself. Dooming a transaction¶ There are cases where we encounter a problem that requires aborting a transaction, but we still need to run some code after that regardless of the transaction result. For example, in a web application it might be necessary to finish validating all the fields of a form even if the first one does not pass, to get all possible errors for showing to the user at the end of the request. This is why the transaction package allows us to doom a transaction. 
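Before looking at doomed transactions in more detail, here is roughly what registering the hooks described above could look like; the hook functions and their arguments are invented for this example, while addBeforeCommitHook and addAfterCommitHook are the methods the text refers to:

import transaction

def refresh_cache(which, region='default'):
    # Illustrative before-commit hook: runs once, just before the commit starts.
    print('refreshing', which, 'cache in', region)

def log_result(success, label=''):
    # After-commit hooks receive the commit status as their first argument.
    print(label, 'committed' if success else 'aborted')

txn = transaction.get()
txn.addBeforeCommitHook(refresh_cache, args=('users',), kws={'region': 'eu'})
txn.addAfterCommitHook(log_result, args=('user import',))
transaction.commit()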
A doomed transaction behaves the same way as an active transaction but if an attempt to commit it is made, it raises an error and thus forces an abort. To doom a transaction we simply call doom on it: The isDoomed method can be used to find out if a transaction is already doomed: Context manager support¶ Instead of calling commit or abort explicitly to define transaction boundaries, it’s possible to use the context manager protocol and define the boundaries using the with statement. For example, in our SQLAlchemy examples above, we could have used this code after setting up our session: We can have as many statements as we like inside the with block. If an exception occurs, the transaction will be aborted at the end. Otherwise, it will be committed. Note that if you doom the transaction inside the context, it will still try to commit which will result in a DoomedTransaction exception. Take advantage of the notes feature¶ A transaction has a description that can be set using its note method. This is very useful for logging information about a transaction, which can then be analyzed for errors or to collect statistics about usage. It is considered a good practice to make use of this feature. The transaction notes have to be handled and saved by the storage in use or they can be logged. If the storage doesn’t handle them and they are needed, the application must provide a way to do it. This example is very simple and will log the transaction even if it fails, but the intention was to give an idea of how transaction notes work and how they could be used. Application developers must handle concurrency¶ Reading through this chapter, the question might have occurred to you about how the transaction package handles concurrent edits to the same information. The answer is it doesn’t, the application developer has to take care of that. The most common type of concurrency problem, is when a transaction can’t be committed because another transaction has a lock on the resources to be modified. This and other similar errors are called transient errors and they are the easiest to handle. Simply retrying the transaction one or more times is usually enough to get it committed in this case. This is so common that the default transaction manager will try to find a method named should_retry on each data manager whenever an error occurs during transaction processing. This method gets the error instance as a parameter and must return True if the transaction should be retried and False otherwise. For example, here’s how the zope.sqlalchemy data manager defines this method: First, the method checks if the error is an instance of the SQLAlchemy ConcurrentModificationError. If this is the case, odds are that retrying the transaction has a good chance of succeeding, so True is returned. After that, if the error is some kind of DBAPIError, again as defined by SQLAlchemy, the data manager checks the error against its own list of retryable exceptions. If there’s a match, there are two possibilities: if a test function was not defined for the error in question, True is immediately returned. However, if there’s a test function defined, the error is passed to it to verify whether it’s really retryable or not. Again, if it is, True is returned. This strategy should be enough to handle a good number of transient errors and can be tailored to whatever backend you are using if you are willing to create your own data manager. 
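As a sketch of what that could look like for a hypothetical backend of our own, the data manager would only need something along these lines; the error class here is invented for the example:

from transaction.interfaces import TransientError

class BusyBackendError(TransientError):
    """Hypothetical error raised when our backend is temporarily unavailable."""

class OurDataManager(object):
    # ...two-phase commit methods would go here...

    def should_retry(self, error):
        # Tell the transaction manager that this kind of failure is worth retrying.
        return isinstance(error, BusyBackendError)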
There are other kinds of conflicts that can occur during a transaction that must be caught and handled by the application, but these are usually application-specific and must be planned for and solved by the developer.

Retrying transactions¶

Since retrying a transaction is the usual solution for transient errors, applications that use the transaction package have to be prepared to do that easily. A simple for loop with a try: except clause could be enough, but that can get very ugly very quickly. Fortunately, transaction managers provide a helper for this case. Here's an example, which assumes that we have performed the same SQLAlchemy setup that we have used in previous examples: The attempts method of the transaction manager returns an iterator, which by default will try the transaction three times. It's possible to pass a different number to the attempts call to change that. If a transient error is raised while processing the transaction, it is retried up to the specified number of tries. The data manager is responsible for raising the correct kind of exception here, which should be a subclass of transaction.interfaces.TransientError.

Avoid long running transactions¶

We have seen that transient errors are often the result of locked resources or busy backends. One important lesson to take from this is that avoiding long transactions is a very good idea, because the quicker a transaction is finished, the quicker another one can start, which minimizes retries and reduces the load on the backend. Uncommitted transactions in many backends are stored in memory, so a large number of changes in a single transaction can eat away system resources very fast. The developer should look for ways of getting the required work done as fast as possible. For example, if a lot of changes are required at once, the application could use batching to avoid committing the whole bunch in one go.

Writing our own data manager¶

By now we have enough knowledge about how the transaction package implements transactions to create our first data manager. Let's create a simple manager that uses the Python pickle module for storing pickled data. We will use a very simple design: the data manager will behave like a dictionary. We will be able to perform basic dictionary operations, like setting the value of a new key or changing an existing one. When we commit the transaction, the dictionary items will be stored in a pickle on the filesystem.

The PickleDataManager¶

Let's open a new file and name it pickledm.py. The first thing to do is to import a few modules: Nothing surprising here, just what we need to be able to create our class: We define a class, which we'll call PickleDataManager, and assign the default transaction manager as its transaction manager. Now for the longest method of our data manager, which turns out to be __init__: The initialization method accepts an optional pickle_path parameter, which is the path on the filesystem where the pickle file will be stored. For this example we are not going to worry a lot about this. The important thing is that once we have the path, we try to open an existing pickle file in lines 3-6. If it doesn't exist, we just assign None. We will use a dictionary named 'uncommitted' as a work area for our data manager. If no data file existed, it will be an empty dictionary. If there is a data file, we try to open it and assign its value to our work area (lines 8-12). Any changes that we do to our data will be made on the uncommitted dictionary.
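A rough sketch of the imports and the __init__ method just described might look like this; the default pickle path is an assumption, and the line numbers mentioned in the text refer to the book's own listing rather than to this sketch:

import pickle

import transaction

class PickleDataManager(object):

    transaction_manager = transaction.manager

    def __init__(self, pickle_path='Data.pkl'):
        self.pickle_path = pickle_path
        try:
            data_file = open(self.pickle_path, 'rb')
        except IOError:
            data_file = None
        uncommitted = {}
        if data_file is not None:
            try:
                uncommitted = pickle.load(data_file)
            except EOFError:
                pass
            data_file.close()
        self.uncommitted = uncommitted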
Additionally, we'll need another dictionary to keep a copy of the data as it was at the start of the transaction. For this, we copy the uncommitted dictionary into another dictionary, which we'll name 'committed'. Using copy is important to avoid altering the committed values unintentionally.

We want our data manager to function as a dictionary, so we need to implement at least the basic methods of a dictionary to get it working. The trick is to actually make those methods act on the uncommitted dictionary, so that all the operations that we perform are stored there. These are fairly simple methods. Basically, for each method we call the corresponding one on the uncommitted dictionary. Remember, this acts as a sort of work area and nothing will be stored until we commit.

Now we are ready for the transaction protocol methods. For starters, if we decide to abort the transaction before initiating commit, we need to go back to the original dictionary values: This is very easy to do, since we have a copy of the dictionary as it was at the start of the transaction, so we just copy it over. For the next couple of methods of the two-phase commit protocol, we don't have to do anything for our simple data manager: The tpc_begin method can be used to get the data about to be committed out of any buffers or queues in preparation for the commit, but here we are only using a dictionary, so it's ready to go. The commit method is used to prepare the data for the commit, but there's also nothing we have to do here.

Now comes the time for voting. We want to make sure that the pickle can be created and raise any exceptions here, because the final step of the two-phase commit can't fail. We are going to try to dump the pickle to make sure that it will work. We don't care about the result now, just whether it can be dumped, so we use devnull for the dump. For simplicity, we just check for pickling errors here. Other error conditions are possible, like a full drive or other disk errors. Remember, all that the voting method has to do is raise an error if there is any problem, and the transaction will be aborted in that case. If this happens, all we have to do is copy the committed value into the work area, so we go back to the starting value. If there were no problems, we can now perform the real pickle dump. At this point the data in our work area is officially committed, so we can copy it to the committed dictionary.

That's really all there is to it for a basic data manager. Let's add a bit of an advanced feature, though: a savepoint. To add savepoint functionality, a data manager needs to have a savepoint method that returns a savepoint object. The savepoint object needs to be able to roll back to the saved state: In the savepoint initialization, we keep a reference to the data manager instance that called the savepoint. We also copy the uncommitted dictionary to another dictionary stored on the savepoint. If the rollback method is ever called, we'll copy this value again directly into the data manager's work area, so that it goes back to the state it was in before the savepoint.

One final method that we'll implement here is sortKey. This method needs to return a string value that is used for setting the order of operations when more than one data manager participates in a transaction. The keys are sorted alphabetically and the different data managers' two-phase commit methods are called in the resulting order.
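A sketch of the savepoint support and the sortKey method just described might look like the following; the savepoint class name is our own invention and the real listing may differ:

class PickleSavepoint(object):

    def __init__(self, data_manager):
        # Keep a reference to the data manager and a copy of its work area.
        self.data_manager = data_manager
        self.saved = data_manager.uncommitted.copy()

    def rollback(self):
        # Put the saved copy back into the data manager's work area.
        self.data_manager.uncommitted = self.saved.copy()

class PickleDataManager(object):
    # ...__init__, dictionary methods and two-phase commit methods as described above...

    def savepoint(self):
        return PickleSavepoint(self)

    def sortKey(self):
        return 'pickledm'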
In this case we just return a string with the ‘pickledm’ identifier, since it’s not important in what order our data manager is called. There are cases when this feature can be very useful. For example, a data manager that does not support rollbacks can try to return a key that is sorted last, so that it commits during tpc_vote only if the other backends in the same transaction that do support rollback have not rolled back at that point. For easy reference, here’s the full source of our data manager: Using transactions in web applications¶ Nowadays many development projects happen on the web and many web applications require integration of multiple systems or platforms. While the majority of applications may still be 100% based on relational database backends, there are more and more cases where it becomes necessary to combine traditional backends with other types of systems. The transaction package can be very useful in some of these projects. In fact, the Zope web application server, where the ZODB was born, has been doing combined transaction processing of this kind for more than a decade now. Developers who use applications like the Plone Content Management System still take advantage of this functionality today. For many years, the transaction support in Zope was tightly integrated with the ZODB, so it has seen very little use outside of Zope. The ongoing evolution of the Python packaging tools and in particular the existence of the Python Package Index have influenced many members of the Zope community and this has led to a renewed interest in making useful Zope tools available for the benefit of the lager Python community. One project which has been fairly successful in promoting the use of important Zope technologies is the Repoze project (). The main objective of this project is to bridge Zope technologies and WSGI, the Python web server gateway standard. Under this banner, several packages have been released to date that allow using some Zope technologies independently of the Zope framework itself. Some of these packages can be used with the ZODB, so we’ll have occasion to work with them later, but the one that we will discuss now will allow us to work with transactions using WSGI. Repoze.tm2: transaction aware middleware for WSGI applications¶ WSGI is the dominant way to serve Python web applications these days. WSGI allows connecting applications together using pipelines and this has spawned the development of many middleware packages that wrap an application and perform some service at the beginning and ending of a web request. One of these packages is repoze.tm2, a middleware from the Repoze project which uses the transaction package to start a new transaction on every request and commit or abort it after the wrapped application finishes its work, depending on if there were any errors or not. It’s not necessary to call commit or abort manually in application code. All that’s needed is that there is a data manager associated with every backend that will participate in the transaction and that this data manager joins the transaction explicitly. To use repoze.tm2, you first need to add it to your WSGI pipeline. If you are using PasteDeploy for deploying your applications, that means that the repoze.tm2 egg needs to be added to your main pipeline in your .ini configuration file: [pipeline:main] pipeline = egg:repoze.tm2#tm myapp In this example, we have an app named ‘myapp’, which is the main application. 
By adding the repoze.tm2 egg before it, we are assured that a transaction will be started before calling the main app. The same thing can be accomplished in Python easily: Once repoze.tm2 is in the pipeline, all that’s needed is to join each data manager that we want to use into the transaction: That’s basically all that there’s to it. Any exception raised after this will cause the transaction to abort at the end. Otherwise, the transaction will be committed. Of course, in a web application there may be some conditions which do not result on an exception, yet are bad enough to warrant aborting the transaction. For example, all 404 or 500 responses from the server indicate errors, even if an exception was never raised. To handle this situation, repoze.tm2 uses the concept of a commit veto. To use it you need to define a callback in your application that returns True if the transaction should be aborted. In that callback you can analyze the environ and request headers and decide if there is information there that makes aborting necessary. To illustrate, let’s take a look at the default commit veto callback included with repoze.tm2: As you can see, this commit veto looks for a header named x-tm and returns True if the header’s value is not commit; it also returns True if there is a 40x or 50x response from the server. When the commit veto returns True, the transaction is aborted. To use your own commit veto you need to configure it into the middleware. On PasteDeploy configurations: [filter:tm] commit_veto = my.package:commit_veto The same registration using Python: To use the default commit veto, simply substitute the mypackage commit_veto with the one from repoze.tm2: from repoze.tm import default_commit_veto Finally, if some code needs to be run at the end of a transaction, there is an after-end registry that lets you register callbacks to be used after the transaction ends. This can be very useful if you need to perform some cleanup at the end, like closing a connection or logging the result of the transaction. The after-end callback is registered like this: A to-do application using repoze.tm2¶ We’ll finish up this long introduction to the transaction package with a simple web application to manage a to-do list. We’ll use the pickle data manager that we developed earlier in this chapter along with the repoze.tm2 middleware that we just discussed. We will use the Pyramid web application framework (). Pyramid is a very flexible framework and it’s very easy to get started with it. It also allows us to create “single file” applications, which is very useful in this case, to avoid lengthy setup instructions or configuration. To use Pyramid, we recommend creating a virtualenv and installing the Pyramid and repoze.tm2 packages there: $ virtualenv --no-site-packages todoapp $ cd todoapp $ bin/easy_install pyramid repoze.tm2 The transaction package is a dependency as well, but will be pulled automatically by repoze.tm2. We want to use our pickle data manager too, so copy the pickledm.py file we created earlier to the virtualenv root. Now we are ready to write our application. Start a file named todo.py. Make sure it’s on the virtualenv root too. Add the following imports there: You will see some old friends here, like transaction and our pickledm module. On line 5 we import the serve method from paste.httpserver, which we will use to serve our application. Lines 7 and 8 import the view configuration machinery of the Pyramid framework and a Configurator object to configure our application. 
Finally, lines 10 and 11 import the TM wrapper and the commit veto function that we discussed in the previous section. Since we have no package to hold our application’s files, we have to make sure that we can find the page template that we’ll use for rendering our app, so we set that up next: In Pyramid, you can define a root object, very similar to what you get when you connect to a ZODB database. The root object points to the root of the web site: The root object idea is part of a way of defining the structure of a site called traversal. Using traversal, instead of configuring application URLs using regular expressions, like many web frameworks, we define a resource tree which starts at this root object and could potentially contain thousands of other branches. In this case, however, one root object is all that we need for our application. Pyramid allows us to define views as any callable object. In this case, we’ll use a class to define our views, because this enables us to use the class’ __init__ method as a common setup area for the collection of individual views that we will define. See how we instantiate our pickle data manager and make it join the current transaction. All the views defined in this class will have access to our data manager. Pyramid allows the use of decorators to configure application views. There are several predicates that we can use inside a view configuration. For our simple to-do application we’ll define five views: one for the initial page that will be shown when accessing the site and one each for adding, closing, reopening and deleting tasks. Remember the Root object that we defined above? This is where we finally use it. We are going to define the application’s main view and the Root object will be the context of that view. Context basically means the last object in the URL that represents a path to the resource from the root of the resource tree. The context object of a view is available at rendering time and can be used to get resource specific information. In this case, the main view will show all the items that we have stored in our pickle data manager. In Pyramid, a view must return a Response object, but since it’s a very common thing in web development to use the view to pass some values to a template for rendering, there is a renderer predicate in view configuration that lets us give a template path so that Pyramid takes care of the rendering. In that case, returning a dictionary with the values that the template will use is enough for the view. If you take a look at line 1 above, you’ll see that we used as a renderer the template that we defined before the class. As we explained above, the context parameter there means the object in the site structure that the view will be applied to. In this case it’s the root of the site, though the specific Root object is not actually used in the view code. The view configuration mechanism in Pyramid is very powerful and makes it easy to assign views which are used or not depending on things like request headers or parameter values. In this case, we use the request method, so that this view will only be called if the method used is GET. Notice how on line 3 we use the data manager to get all the stored to-do items for showing on the task list. The next view finally does something transactional. When the request contains the parameter ‘add’ this view will be called and a new to-do item will be added to the task list. The renderer is the same template that displays the full task list. 
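A sketch of what this add view might look like is shown below; the class name, attribute names, renderer path and imports follow the description in the text but are assumptions, since the original listing is not part of this excerpt:

import time

import transaction
from pyramid.view import view_config

from pickledm import PickleDataManager

class Root(object):
    """Stand-in for the root object defined earlier."""

class TodoViews(object):
    """Stand-in for the views class described above."""

    def __init__(self, request):
        self.request = request
        self.manager = PickleDataManager()
        # Make our data manager join the current transaction, as described above.
        transaction.get().join(self.manager)

    @view_config(context=Root, renderer='todo.pt', request_param='add')
    def add_view(self):
        text = self.request.params['text']
        # Use the current time as the key; a task is a (description, completed) tuple.
        self.manager[str(time.time())] = (text, False)
        tasks = sorted(self.manager.items())
        return {'tasks': tasks}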
Since this view will only be called when the add button is pressed on the form, we know that there is a parameter on the request with the name 'text'. This is the item that will be added to the task list. In this example application we don't expect any user other than ourselves, so we can safely use the time as a key for the new item value. We assign that key to the data manager, get the updated list of items for sorting, and the view is done. Notice that we didn't have to call commit even though there was a change, because repoze.tm2 will do that for us after the request is completed.

The next few views are almost identical to the add view. In the done view we get a list of task ids and mark all of those tasks as completed: The next view does exactly the reverse, marking the list of tasks as not completed: Finally, the delete view removes the task with the passed id from our data manager. As with all the other views, there's no need to call commit.

That's really the whole application; all we need now is a way to configure it and start a server process. We'll set this up so that running todo.py with the Python interpreter starts the application: Pyramid uses a Configurator object to handle application configuration and view registration. On line 2 we create a configurator and then on line 3 we call its scan method to perform the view registration. Be aware that using the decorators to define the views in the code above is not enough for registering them. The scan step is required for doing that. On line 4 we use the configurator to create a WSGI app and then we wrap that with the repoze.tm2 middleware, to get our automatic transaction commits at the end of each request. We pass in the default_commit_veto as well, so that in the event of a 4xx or 5xx response, the transaction is aborted. Finally, on line 6, we use serve to start serving our application with paste's http server. We are done; this is the complete source of the application:

Our application is almost ready to try; we only need to add a todo.pt template in the same directory as the todo.py file, with the following contents: Pyramid has bindings for various template languages, but comes with chameleon and mako "out of the box". In this case, we used chameleon, but as you can see it's a pretty simple form anyway. The most important part of the template is the loop that starts on line 14. The tal:repeat attribute on the <tr> tag means that for every task in the tasks variable, the contents of the tag should be repeated. The tasks list comes from the dictionary that was returned by the view, as you may remember. The task list comes from the data manager items, and thus each of its elements contains a tuple of id (key) and task. Each task is itself a tuple of description and status. These values are used to populate the form with the task list.

You can now run the application and try it out in the browser. From the root of the virtualenv type:

$ bin/python todo.py
serving on 0.0.0.0:8080 view at

You can add, remove and complete tasks, and if you restart the application you will find the task list is preserved. Try removing the TM wrapper and see what happens then.
http://zodb.readthedocs.org/en/latest/transactions.html
Information for "Release procedure and checklist"

Basic information
- Display title: Release procedure and checklist
- Redirects to: Joomla:Release procedure and checklist (info)
- Default sort key: Release procedure and checklist
- Page length (in bytes): 52
- Page ID: 49
- Date of page creation: 14:12, 23 January 2015
- Latest editor: Tom Hutchison
- Date of latest edit: 14:12, 23 January 2015
- Total number of edits: 1
- Total number of distinct authors: 1
- Recent number of edits (within past 30 days): 0
- Recent number of distinct authors: 0
https://docs.joomla.org/index.php?title=Release_procedure_and_checklist&action=info
User Interface Displayable name of the workflow process - workflow-process-name:STATE - workflow-process-name “Currently with” caption in action panel - status-ui-currently-with:STATE - status-ui-currently-with Annotations to “Currently with” As the “Currently with” often shows the name of a specific person, you can add an annotation to display their role. - status-ui-currently-with-annotation:ACTIONABLE-BY:STATE - status-ui-currently-with-annotation:ACTIONABLE-BY - status-ui-currently-with-annotation Actionable by display name Use the following to replace the actionable by display name under the “Currently with” header. - status-ui-actionable-by:ACTIONABLE-BY:STATE - status-ui-actionable-by:ACTIONABLE-BY - status-ui-actionable-by
https://docs.haplo.org/standard/workflow/definition/text/other-ui
Monitoring provides 360-degree visibility for applications, servers, virtualization, containers, synthetics, storage, and network devices. A monitor is a mechanism that periodically checks a device for its behavior and performance. Using monitoring, you can gather information and keep track of the performance of your target resources. As part of the onboarding process:
- All resources in your network are discovered and managed.
- Monitoring templates are assigned to monitor resources according to the configured metrics.
- Timely alerts are raised for quick action.

Monitoring relies on protocols such as:
- PING: Used to detect connection failures. PING measures the packet loss and round trip time using ICMP.
- SNMP: Simple Network Management Protocol (SNMP) is a well-known and popular protocol for network management, used for collecting information and configuring network devices such as servers, printers, hubs, switches, and routers on an Internet Protocol (IP) network.
- WMI: Windows Management Instrumentation (WMI) describes the processes and utilities required to scan systems remotely for early warning signs of potential failure.

How is space handled in custom monitor script parameters? The parameters are enclosed in double quotes (") when passed to the script. Enclose any user-specific arguments that contain spaces in single quotes (').

What is the waiting time for the monitor scripts to get updated? The standard waiting time is 12 hours for the scripts to get updated unless the agent gets restarted. The agent only checks for the latest RBA details every 12 hours. If updates need to be reflected immediately, re-apply the monitoring template.

When does the updated script configuration get pushed to the agent? After the monitoring template gets re-applied, the script configuration is pushed to the agent.

How does the agent proceed after receiving the updated information for a script? The latest information is downloaded and saved at the agent, but the latest script is downloaded only at the time of execution.

Do agent custom monitors work during a maintenance window time frame? Yes, monitoring is not impacted during a maintenance window.

Are the subject and description fields required in the alert XML? Can they be null? Yes, both the Subject and Description fields are required in the alert XML. Only then does the agent send the alert to the Alert browser.

What are the DOs and DON'Ts in custom script monitor execution?
- DOs: De-attach and re-attach templates when there are any changes.
- DON'Ts: Do not update the number of parameters of a custom script that has already been defined as a monitor in a template and is applied to devices. Do not update scripts frequently.

Can I move a script from one category to another category? No, this functionality does not exist currently.

Can I apply a custom script to multiple devices? Yes. Global custom scripts are applied across all VARs and all clients. Client-specific scripts are applied per client only.

Do all the alerts posted from the custom scripts appear in the Alerts tab? Yes, all the alerts appear under the Alerts tab.

Does the agent handle the previous states and post alerts only during the transition state? No, the agent does not handle the previous states of the monitor. The end user is responsible for handling previous states and posting alerts only during the transition state.
Scenarios

Monitor Windows environment
Scenario: An organization wants to monitor a Windows environment to track Windows event logs, applications, and services, and at the same time be notified when monitoring conditions are met and set thresholds are exceeded.
Solution: Configure event log, application, and service monitors from the available native monitors. An alert is generated based on the frequency and thresholds set while configuring the monitors.

Customize disk space monitor
Scenario: An organization wants to monitor disk space for Linux and Windows devices using a Perl script.
Solution: Using the agent custom monitors, provide the script details and select Perl as the execution type. You can write the script in the Script text field and enter the metrics for warnings, such as Default Warning Threshold and Critical Warning Threshold, to generate alerts.
https://docs.opsramp.com/solutions/monitoring/
The API request lifecycle

Flexible and Annual Redis Enterprise Cloud subscriptions can leverage a RESTful API that permits operations against a variety of resources, including servers, services, and related infrastructure. Once it's enabled, you can use the REST API to create, update, and delete subscriptions, databases, and other entities.

API operations run asynchronously, which means that provisioning occurs in the background. When you submit a request, a background process starts working on it. The response object includes an ID that lets you determine the status of the background process as it performs its work. For operations that do not create or modify resources (such as most GET operations), the API is synchronous; that is, the response object reports the results of the request.

Asynchronous operations have two main phases: processing and provisioning. A resource is not available until both phases are complete.

Task processing
During this phase, the request is received, evaluated, planned, and executed.

Use tasks to track requests
Many operations are asynchronous, including CREATE, UPDATE, and DELETE operations. The response objects for such operations provide a taskId identifier that lets you track the progress of the underlying operation. You can query the taskId to track the state of a specific task:
GET "https://[host]/v1/tasks/<taskId>"
You can also query the state of all active tasks or recently completed tasks in your account:
GET "https://[host]/v1/tasks"

Task process states
During the processing of a request, the task moves through these states:
- received - Request is received and awaits processing.
- processing-in-progress - A dedicated worker is processing the request.
- processing-completed - Request processing succeeded and the request is being provisioned (or de-provisioned, depending on the specific request). A response segment is included with the task status JSON response. The response includes a resourceId for each resource that the request creates, such as a Subscription or Database ID.
- processing-error - Request processing failed. A detailed cause or reason is included in the task status JSON response.

A task in the received state cannot be cancelled; it will await completion (i.e., processing and provisioning). If you wish to undo an operation that was performed by a task, perform a compensating action (for example, delete a subscription that was created unintentionally).

Task provisioning phase
When the processing phase succeeds and the task is in the processing-completed state, the provisioning phase starts. During the provisioning phase, the API orchestrates all of the infrastructure, resources, and dependencies required by the request. The provisioning phase may require several minutes to complete. You can query the resource identifier to track the progress of the provisioning phase. For example, when you provision a new subscription, use this API call to query the status of the subscription:
GET "https://[host]/v1/subscriptions/<subscription-id>"
Where <subscription-id> is the resource ID that you receive when the task is in the processing-completed state.

Provisioning state values
During the provisioning of a resource (such as a subscription, database, or cloud account) the resource transitions through these states:
- pending - Provisioning is in progress.
- active - Provisioning completed successfully.
- deleting - De-provisioning and deletion is in progress.
- error - An error occurred during the provisioning phase, including the details of the error.
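Putting the two phases together, a client can poll the task endpoint described above until processing completes, then track the resource itself. The sketch below assumes the requests library and API key headers; adjust the host and header names to your own account setup:

import time
import requests

API_HOST = "https://[host]/v1"          # replace with the API host for your account
HEADERS = {
    "x-api-key": "<account key>",        # assumed header names; check your API settings
    "x-api-secret-key": "<secret key>",
}

def wait_for_task(task_id, delay=5):
    """Poll a task until its processing phase finishes and return its response segment."""
    while True:
        task = requests.get(f"{API_HOST}/tasks/{task_id}", headers=HEADERS).json()
        status = task.get("status")
        if status == "processing-completed":
            return task.get("response")
        if status == "processing-error":
            raise RuntimeError(task)
        time.sleep(delay)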
Process limitations

The following limitations apply to asynchronous operations:
- For each account, only one operation is processed concurrently. When multiple tasks are sent for the same account, they will be received and processed one after the other.
- The provisioning phase can be performed in parallel, except for:
  - Subscription creation, update, and deletion: You cannot change (make non-active) more than three subscriptions at the same time.
  - Database creation in an existing subscription: This can cause the subscription state to change from active to pending during database provisioning, in cases such as database sizing that requires cluster resizing or updating cluster metadata.

For example:
- Concurrently sending multiple "create database" tasks will cause each task to be in the received state, awaiting processing.
- When the first task starts processing it will be moved to the processing-in-progress state.
- When that first task is completed (either processing-completed or processing-error), the second task will start processing, and so on.
- Typically, the processing phase is much faster than the provisioning phase, and multiple tasks will be in the provisioning phase concurrently.
- If the creation of a database requires an update to the subscription, the subscription state is set to pending.

When you create multiple databases one after the other, we recommend that you check the subscription state after the processing phase of each database create request. If the subscription is in the pending state, you must wait for the subscription changes to complete and the subscription state to return to active.
https://docs.redis.com/latest/rc/api/get-started/process-lifecycle/
Step 0: Creating The Folders¶
Before getting started, you will need to create the folders needed for this application.
https://azcv.readthedocs.io/en/stable/tutorial/folders.html
Installation
To install Compare Products:
- Download the WooCommerce Compare Products extension (PRO Version or Lite Version).

Create Custom Categories & Features
Compare Products gives you the total flexibility to create your own custom Compare categories, features, and feature values, completely independent of your WooCommerce Product Categories and Attributes and the Attribute Terms. See the image below.

Compare Express Products
After a product is added to the Compare basket:
- A 'Clear All' link shows at the bottom left.
https://docs.a3rev.com/woocommerce/compare-products/
Create 'Deploy to Cyclic' button Follow these steps to create a button that will allow users to fork your repo and deploy to Cyclic in one action. Easy Just copy this markdown directly into your README.md file inside your repo on Github. The target uses http referrer header to determine the source repo to use in targeting the app.cyclic.sh deploy path. []() Renders as: HTML If you would like to embed HTML directly into a site with a configured target you can set the app.cyclic.sh path yourself. Replace GH_LOGIN and GH_REPO with the values for your Github user name and repository name. For example if you wanted to create a fork and deploy button for: The values would be: GH_LOGIN=seekayel GH_REPO=express-hello-world <a href=""> <img src="" /> </a> Renders as:
https://docs.cyclic.sh/how-to/create-deploy-to-cyclic-button
New Package Process for New Contributors This is a long version of the New Package Process, containing more details so new contributors can follow it more easily. Also the mandatory sponsoring step is included. Install Packager Tools Follow Installing Packager Tools. Check if the package already exists If some useful software is not included in Fedora already, you can submit it as a new package. The package you are submitting can be of any Free and Open Source project. Before creating your package, make sure that the software is not already in the Fedora repository: Check if the package already exists by searching in Fedora Packages. Search in the Review Tracker for packages under review. Check the orphaned or retired packages that need new maintainers. Be aware of forbidden items. Make a Package If you don’t know how to create an RPM package, see the How to create an RPM package. Make sure that your package meets the Packaging Guidelines and Package Naming Guidelines. Be aware of the Package Review Guidelines (they will be used during the package review). Make sure your package builds. This is surprisingly important because a significant number of submissions don’t. Upload Your Package want to make ad-hoc builds available for users while you are getting the package into the official repositories, consider using Copr. It is a lightweight automated build system that can create repositories using the SRPM you upload. You can use this Copr space to point reviewers to your src.rpm and spec. Create Your Review Request Fill out Bugzilla Fedora review form. Before submitting your request, be sure there’s not a previous request for the same package. There is a convenient search box on the package review status page. Make sure that you put the name of the package (excluding version and release numbers) in the Review Summaryfield, along with a very brief summary of what the package is. Put a description of your package (usually, this can be the same thing as what you put in the spec %description) in the Review Descriptionfield.. Inform Upstream The Fedora Project prefers Staying Close to Upstream Projects. Sponsored When the package is APPROVED by the reviewer, you must separately obtain member sponsorship in order to check in and build your package. Sponsorship is not automatic and may require that you further participate in other ways in order to demonstrate your understanding of the packaging guidelines. The an email confirmation of your sponsorship. Add Package to Source Code Management (SCM) system and Set Owner Before proceeding, please sync your account by login on Fedora Package Sources using your FAS credentials. If you are becoming a maintainer for a new package, instead of being a co-maintainer, use fedpkg to request a new git repository for your package. The sub-command is fedpkg request-repo which includes help text for setting up the Pagure API token the command requires. When creating your API-key choose toggle-all for the ACLs. You must specify the repository name and review bug number. For example: fedpkg request-repo python-prometheus_client 1590452 The request will be reviewed and processed by an admin, usually within 24 hours. Once the ticket is processed, you will have access to commit and build the package. fedpkg request-repo only creates a branch for Rawhide. To request branches for other Fedora releases, use fedpkg request-branch: fedpkg request-branch --repo python-prometheus_client f36 You will need to run this for each non-rawhide branch. 
If you wish, you can also use the --all-releases flag to request branches for all current Fedora releases. You could check out your distgit repository now, but before doing that, consider doing mkdir ~/fedora-scm ; cd ~/fedora-scm — that way, all your files are inside a single directory. Also, run ssh-add, so that you won’t have to keep typing in your key password. Now you are ready to checkout your distgit repository from the SCM: fedpkg clone your-package Test Your Package Refer to Using Mock to test package builds and Koji Scratch Builds for more information on testing your package. Mock uses your local system while Koji command line tool uses the Fedora build system server. Import, commit, and build your package Now that you’ve checked out your (empty) distgit repository with fedpkg, cd into the repository’s main branch: cd <packagename> Run fedpkg to import the contents of the SRPM into the SCM: fedpkg import PATH_TO_SRPM # Review Changes, press 'q' to stop; Revert with: git reset --hard HEAD git commit -m "Initial import (fedora#XXXXXX)." git push fedpkg build Obviously, replace PATH_TO_SRPM with the full path (not URL) to your approved SRPM, and XXXXXX with the package review bug number. If your package is using autochangelog, writing the bug number as specified will make the Fedora update system automatically close the bug when your package is submitted to Rawhide stable repository. If the. For more information on using the Fedora package maintenance system, see the Package maintenance guide. Update Your Branches (if desired) Branches are f# (formerly F- and before that FC-), main, etc. So f is the branch for Fedora. To switch to a branch first: fedpkg switch-branch BRANCH (e.g. f36) Merge the initial commit from main (Rawhide), creating an identical commit in the branch: git merge rawhide. You do not need to submit updates for Rawhide (main) manually because these are automatically created for you when the build completes. For all other branches, you must manually push updates for all builds that you would like to make available to users. You can push an update using Bodhi via the command line using this in each branch: fedpkg update It is often easier to complete builds for all your branches and then push a single update using the Bodhi web interface. Bodhi is smart enough to split your update into individual updates, one for each Fedora release branch. You can also select multiple builds from different packages to include in a single update using the web interface. This is useful when you would like to push linked builds, for example: an application package and its dependencies that are necessary for it to run correctly. Please see the Package Update Guide for more details. Make the package available in "comps" files If appropriate for the package, make it available in "comps" files so that it can be selected during installation and included in dnf package group operations. See How to use and edit comps.xml for package groups for more info. Watch for updates Fedora has the infrastructure available for monitoring new upstream releases of the software you are packaging. Refer to Upstream Release Monitoring for more details.
https://docs.fedoraproject.org/bn/package-maintainers/New_Package_Process_for_New_Contributors/
Introduction
IBM Informix supports a comprehensive set of high availability options, high levels of performance, data replication capabilities, scalability, and minimal administrative overhead for both simple and complex IT infrastructures.

Prerequisites
- Remote machine user credentials should have administrator/root level access.
- The server path in the configuration is a JSON field; the payload must be provided in the format shown below under Server Path.

Install the integration
- From All Clients, select a client.
- Go to Setup > Integrations > Integrations.
- From Available Integrations, select Adapter > IBM Informix Database. The install window for the IBM Informix Database integration is displayed.
- Server Path: Provide the server path.
  Default: { "informixServers": [ { "serverName": "", "serverPath": "" } ] }
  Example (Linux): { "informixServers": [ { "serverName": "ol_informix1410", "serverPath": "/opt/IBM/Informix_Software_Bundle/ol_informix1410.ksh" } ] }
- Database Host Name/IP Address: Enter the database host name/IP address.
- OS Platform: Select Windows or Linux.
- Notification Alerts: Select TRUE or FALSE.

Under Server, the discovered Informix Database resources are displayed. The Informix DB Server (Native Resource Type) is displayed under Components:

View resource metrics
To confirm Informix monitoring, review the following:
- Metric graphs: A graph is plotted for each metric that is enabled in the configuration.
- Alerts: Alerts are generated for metrics that are configured as defined for the integration.

Supported Metrics

Risks, Limitations & Assumptions
- OpsRamp provides discovery and monitoring support only for Windows and Linux operating systems.
- The application can handle Critical/Recovery failure alert notifications for the two cases below when the user enables Notification Alerts in the configuration:
  - SQLException
  - SQLInvalidAuthorizationSpecException
- Monitoring data regarding metrics is pulled from the SYSMASTER database.
- No database-level monitoring is provided.
- Privileges required for the monitoring user:
  - CONNECT access to the sysmaster database
  - CONNECT access to the sysadmin database
https://docs.opsramp.com/integrations/storage/ibm/ibm-informix-adapter/
A key insight into defining indexes is determining which of the filters in a query can be “covered” by a given index. Filters and combinations of filters qualify for coverage based on different criteria. Each "scan" in a query, that is, each argument to a FROM clause that is not a subquery, can use up to one index defined on its table. When a table defines multiple indexes on the same table, these indexes compete in the query planner for the mission of controlling each scan in each query that uses the table. The query planner uses several criteria to evaluate which one of the table's indexes that cover one or more filters in the query is the most likely to be the most efficient. When indexing a single column, as in "CREATE INDEX INDEX_OF_X_A ON X(A);", a covered filter can be any of the following: "A <op> <constant>", where <op> can be any of "=, <, >, <=, or >=" "A BETWEEN <constant1> AND <constant2>" "A IN <constant-list>" A special case of "A LIKE <string-pattern>" where <string-pattern> contains a fixed prefix followed by a wild-card character Here, <constant>, <constant1>, and <constant2> can be actual literal constants like 1.0 or 'ABC' or they can be placeholders (?) that resolve to constants at runtime. <constant-list> can be a list of literals or literals and parameters like ('ABC', 'BAC', 'BCA', 'ACB', 'CBA', 'BAC') or (1, 2, 3, ?) or (?, ?, ?, ?, ?) or a single vector-valued placeholder. Each of these "constants" can also be an expression of constants, such as ((1024*1024)-1). Depending on the order in which tables are scanned in a query, called the join order, a covered filter can also be "A <op> <column>" where <column> is a column from another table in the query or any expression of a column or columns from another table and possibly constants, like B or (B || C) or SUBSTR( B||C , 1 , 4 ). The join order dependency works like this: if you had two tables indexed on column A and your query is as follows, only one table could be indexed: SELECT * FROM X, Y WHERE X.A = Y.A and X.B = ?; The first one to be scanned would have to use a sequential table scan. If you also had an index on X.B, X could be index-scanned on B and Y could then be index-scanned on A, so a table scan would be avoided. The availability of indexes that cover the scans of a query have a direct effect on the planners selection of the join order for a query. In this case, the planner would reject the option of scanning Y first, since that would mean one more sequential scan and one fewer index scan, and the planner prefers more index scans whenever possible on the assumption that index scans are more efficient. When creating an index containing multiple columns, as in "CREATE INDEX INDEX_OF_X_A_B ON X(A, B);", a covered filter can be any of the forms listed above for coverage by a simpler index “ON X(A)”, regardless of the presence of a filter on B — this is used to advantage when columns are added to an index to lower its cardinality, as discussed below. A multi-column index “ON X(A, B) can be used more effectively in queries with a combination of filters that includes a filter on A and a filter on B. To enable the more effective filtering, the first filter or prefix filter on A must specifically have the form of "A = ..." or "A IN ..." — possibly involving column(s) of other tables, depending on join order — while the filter on B can be any form from the longer list of covered filters, above. A specific exception to this rule is that a filter of the form "B IN ..." 
does not improve the effectiveness of a filter of the form "A IN ...", but that same filter "B IN ..." can be used with a filter of the specific form "A = ...". In short, each index is restricted to applying to only one “IN” filter per query. So, when the index is covering “A IN …”, it will refuse to cover the “B IN …” filter. This extends to indexes on greater numbers of columns, so an index "ON X(A, B, C)" can generally be used for all of the filters and filter combinations described above using A or using A and B. It can be used still more effectively on a combination of prefix filters like "A = ... " ( or "A IN ..." ) AND "B = ..." ( or "B IN ..." ) with an additional filter on C — but again, only the first "IN" filter improves the index effectiveness, and other “IN” filters are not covered. When determining whether a filter can be covered as the first or prefix filter of an index (first or second filter of an index on three or more columns, etc.), the ordering of the filters always follows the ordering of the columns in the index definition. So, “CREATE INDEX INDEX_ON_X_A_B ON X(A, B)” is significantly different from “CREATE INDEX INDEX_ON_X_B_A ON X(B, A)”. In contrast, the orientation of the filters as expressed in each query does not matter at all, so "A = 1 and B > 10" has the same effect on indexing as "10 < B and A = 1" etc. The filter “A = 1” is considered the “first” filter in both cases when the index is “ON (A, B)” because A is first. Also, other arbitrary filters can be combined in a query with “AND” without disqualifying the covered filters; these additional filters simply add (reduced) sequential filtering cost to the index scan. But a top-level OR condition like "A = 0 OR A > 100" will disqualify all filters and will not use any index. A general pre-condition of a query's filters eligible for coverage by a multi-column index is that the first key in the index must be filtered. So, if a query had no filter at all on A, it could not use any of the above indexes, regardless of the filters on B and/or on C. This is the condition that can cause table scans if there are not enough indexes, or if the indexes or queries are not carefully matched. This implies that carelessly adding columns to the start of an already useful index's list can make it less useful and applicable to fewer queries. Conversely, adding columns to the end of an already useful index (rather than to the beginning) is more likely to make the index just as applicable but more effective in eliminating sequential filtering. Adding to the middle of the list can cause an index to become either more or less effective for the queries to which it applies. Any such change should be tested by reviewing the schema report and/or by benchmarking the affected queries. Optimal index use and query performance may be achieved either with the original definition of the index, with the changed definition, or by defining two indexes.
https://docs.voltdb.com/v7docs/PerfGuide/IndexWork.php
2022-09-24T19:48:39
CC-MAIN-2022-40
1664030333455.97
[]
docs.voltdb.com
This topic includes information on how to configure federated authenticators in WSO2 Identity Server. Before you begin: For more information on what federated authenticators are, see Outbound/federated authenticators in the Identity Server architecture.
https://docs.wso2.com/display/IS560/Configuring+Federated+Authentication
2022-09-24T20:30:03
CC-MAIN-2022-40
1664030333455.97
[]
docs.wso2.com
Swap Clouds What happens when your cloud provider goes offline? You can deploy your project to a different cloud provider in minutes. Using the AWS or GCP guides, make sure you've connected Zeet with your alternate cloud. Then, create a new project using the same repo. In step 8, instead of deploying to your existing cloud, deploy your project to your alternate cloud. Resources - Discord: Join Now - GitHub: - Express:
https://docs.zeet.co/serverless/swap-clouds/
2022-09-24T19:13:03
CC-MAIN-2022-40
1664030333455.97
[]
docs.zeet.co
Providing access to AWS accounts owned by third parties. To learn whether principals in accounts outside of your zone of trust (trusted organization or account) have access to assume your roles, see What is IAM Access Analyzer?. Third parties must provide you with the following information for you to create a role that they can assume: The third party's AWS account ID. You specify their AWS account ID as the principal when you define the trust policy for the role. An external ID to uniquely associate with the role. The external ID can be any secret identifier that is known by you and the third party. For example, you can use an invoice ID between you and the third party, but do not use something that can be guessed, like the name or phone number of the third party. You must specify this ID when you define the trust policy for the role. The third party must provide this ID when they assume the role. For more information about the external ID, see How to use an external ID when granting access to your AWS resources to a third party. The permissions that the third party requires to work with your AWS resources. You must specify these permissions when defining the role's permission policy. This policy defines what actions they can take and what resources they can access. After you create the role, you must provide the role's Amazon Resource Name (ARN) to the third party. They require your role's ARN in order to assume the role. For details about creating a role to delegate access to a third party, see How to use an external ID when granting access to your AWS resources to a third party. When you grant third parties access to your AWS resources, they can access any resource that you specify in the policy. Their use of your resources is billed to you. Ensure that you limit their use of your resources appropriately.
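As a rough sketch of what this role setup can look like in code, the following uses boto3 to create a role whose trust policy names the third party's account and the agreed external ID. The account ID, external ID, and role name here are placeholders, not values from this page.

import json
import boto3

iam = boto3.client("iam")

# Hypothetical values -- substitute the third party's real account ID
# and the external ID you agreed on with them.
THIRD_PARTY_ACCOUNT_ID = "111122223333"
EXTERNAL_ID = "example-invoice-12345"

# Trust policy: only the third party's account can assume the role,
# and only when it supplies the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{THIRD_PARTY_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

response = iam.create_role(
    RoleName="ThirdPartyAccessRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Role assumed by a third party using an external ID",
)

# Give this ARN to the third party; they need it to assume the role.
print(response["Role"]["Arn"])

You would still attach a permissions policy to the role (for example with iam.attach_role_policy) scoped to only the actions and resources the third party needs.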
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html
2022-09-24T20:04:49
CC-MAIN-2022-40
1664030333455.97
[]
docs.aws.amazon.com
Verifying metadata collection After enabling Atlas metadata collection, newly submitted Flink jobs on the cluster also submit their metadata to Atlas. You can verify metadata collection by checking the command-line log output for messages from the Atlas hook. To verify the metadata collection, you can run the "Streaming WordCount" example from Running a Flink Job. In the log, the following new lines appear: ... 20/05/13 06:28:12 INFO hook.FlinkAtlasHook: Collecting metadata for a new Flink Application: Streaming WordCount ... 20/05/13 06:30:35 INFO hook.AtlasHook: <== Shutdown of Atlas Hook Flink communicates with Atlas through a Kafka topic, by default the one named ATLAS_HOOK.
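If you want to look one step further down the pipeline, you can also peek at the ATLAS_HOOK topic itself. This is not part of the documented verification steps, just a quick sanity check; it assumes the kafka-python package and a placeholder broker address, and on a secured cluster you would additionally need the appropriate security settings.

from kafka import KafkaConsumer

# Placeholder broker address -- replace with a broker from your cluster.
consumer = KafkaConsumer(
    "ATLAS_HOOK",                      # topic the Flink Atlas hook writes to
    bootstrap_servers="broker-1:9092",
    auto_offset_reset="earliest",      # read messages already on the topic
    consumer_timeout_ms=10000,         # stop iterating after 10s of no data
)

# Print the raw notification payloads produced by hook.FlinkAtlasHook.
for message in consumer:
    print(message.value.decode("utf-8", errors="replace"))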
https://docs.cloudera.com/csa/1.7.0/governance/topics/csa-atlas-verify.html
2022-09-24T20:20:49
CC-MAIN-2022-40
1664030333455.97
[]
docs.cloudera.com
This topic explains the purpose of using custom datasources and how you can define custom datasource implementations using the management console. Alternatively, you can simply create datasources using the default RDBMS configuration provided in WSO2 products. Note that when you define a data service, you have the option of using the common datasource types, such as EXCEL, CSV etc. in addition to custom datasources. See the topic on creating datasources for a data service. About custom datasources Custom datasources allows you to define your own datasource implementation. There are two options for writing a custom datasource, and these two options cover most of the common business use cases as follows: - Custom tabular datasources: Used to represent data in tables, where a set of named tables contain data rows that can be queried later. A tabular datasource is typically associated with an SQL data services query. This is done by internally using our own SQL parser to execute SQL against the custom datasource. You can use the org.wso2.carbon.dataservices.core.custom.datasource.TabularDataBasedDS interface to implement tabular datasources. For a sample implementation of a tabular custom datasource, see org.wso2.carbon.dataservices.core.custom.datasource.InMemoryDataSource . Also, this is supported in Carbon datasources with the following datasource reader implementation: org.wso2.carbon.dataservices.core.custom.datasource.CustomTabularDataSourceReader. - Custom query datasources: Used when the datasource has some form of query expression support. Custom query datasources are implemented using the org.wso2.carbon.dataservices.core.custom.datasource.CustomQueryBasedDS interface. You can create any non-tabular datasource using the query-based approach. Even if the target datasource does not have a query expression format, you can create your own. For example, you can support any NoSQL type datasource this way. For a sample implementation of a query-based custom datasource, see org.wso2.carbon.dataservices.core.custom.datasource.EchoDataSource. This is supported in Carbon datasources with the following datasource reader implementation: org.wso2.carbon.dataservices.core.custom.datasource.CustomQueryDataSourceReader. Samples - InMemoryDSSample is a sample data service (shipped with DSS by default), which contains both datasource implementations (InMemoryDataSource and EchoDataSource) explained above. See a demonstration of this sample here. - Also, you can find a sample configuration file containing the InMemoryDSSample in the <PRODUCT_HOME>\repository\conf\datasources\custom-datasources.xmlfile. Creating custom datasources You can create custom data sources as shown below. - Go to the Configure tab on the management console and click Data Sources to open the Data Sources screen. - Then click Add Data Source. The following screen will open: - Enter "Custom" as the datasource type. - In the Custom Data Source Type field, enter "DS_CUSTOM_TABULAR" (to store data in tables) or "DS_CUSTOM_QUERY" (to store non-tabular data accessed through a query). - In the Name and Description fields, enter a unique name for the datasource. - In the Configuration section, specify the xml configuration of the datasource. See the examples given below. 
XML configuration for a custom tabular datasource (DS_CUSTOM_TABULAR type): <configuration> <customDataSourceClass>org.wso2.carbon.dataservices.core.custom.datasource.InMemoryDataSource</customDataSourceClass> <customDataSourceProps> <property name="inmemory_datasource_schema">{Vehicles:[ID,Model,Classification,Year]}</property> <property name="inmemory_datasource_records"> {Vehicles:[["S10_1678","Harley Davidson Ultimate Chopper","Motorcycles","1969"], ["S10_1949","Alpine Renault 1300","Classic Cars","1952"], ["S10_2016","Moto Guzzi 1100i","Motorcycles","1996"], ["S10_4698","Harley-Davidson Eagle Drag Bike","Motorcycles","2003"], ["S10_4757","Alfa Romeo GTA","Classic Cars","1972"], ["S10_4962","LanciaA Delta 16V","Classic Cars","1962"], ["S12_1099","Ford Mustang","Classic Cars","1968"], ["S12_1108","Ferrari Enzo","Classic Cars","2001"]]} </property> </customDataSourceProps> </configuration> XML configuration for a custom query datasource (DS_CUSTOM_QUERY): <configuration> <customDataSourceClass>org.wso2.carbon.dataservices.core.custom.datasource.EchoDataSource</customDataSourceClass> <customDataSourceProps> <property name="p1">val1</property> <property name="p2">val2</property> </customDataSourceProps> </configuration>. After creating datasources, they appear on the Data Sources page. You can edit and delete them as needed by clicking the Edit or Delete links.
https://docs.wso2.com/display/AS521/Configuring+a+Custom+Datasource
2022-09-24T20:11:28
CC-MAIN-2022-40
1664030333455.97
[]
docs.wso2.com
Native AnalyticsNative Analytics Altis Analytics is only available to Accelerate and Enterprise tier customers.. DashboardsDashboards By default Altis Analytics replaces the default WordPress dashboard with an overview of your best performing content. If needed you can switch this off via the dashboard config option like so: { "extra": { "altis": { "modules": { "analytics": { "dashboard": false } } } } } Altis also provides a more detailed analytics view and insights page under the main Dashboard menu item in the admin. Data Structure and APIsData Structure and APIs The highly flexible analytics data set is the engine behind audiences, and the Altis Optimization Framework. Learn about the available APIs and the data structure in detail here, and learn about the different ways you can integrate analytics data with external services here. Altis also includes native integration for Segment.com, which when activated pushes analytics data to Segment for further tracking and analysis. Learn about the Segment integration for Altis Analytics here. Optimization FrameworkOptimization Framework Altis Optimization Framework is a flexible and extensive framework that enables Content Optimization through Personalization and A/B tests, the features that power Altis Experience Blocks..
https://docs.altis-dxp.com/v12/analytics/native/
2022-09-24T18:47:52
CC-MAIN-2022-40
1664030333455.97
[]
docs.altis-dxp.com
Type Parameters: T - The Java type this codec serializes from and deserializes to.

public abstract class ParsingCodec<T> extends TypeCodec<T>

A TypeCodec that stores Java objects as serialized strings. This can serve as a base for codecs dealing with XML or JSON formats. This codec can be seen as a convenience base class to help implement Java-to-XML or Java-to-JSON mappings, but it comes with a performance penalty: each Java object is serialized in two steps, first to a String and then to a ByteBuffer, which means that each serialization actually incurs two potentially expensive operations. If you are using an XML or JSON library that supports writing Java …

Constructors:
public ParsingCodec(Class<T> javaType)
public ParsingCodec(TypeToken<T> javaType)
public ParsingCodec(TypeCodec<String> innerCodec, Class<T> javaType)
public ParsingCodec(TypeCodec<String> innerCodec, TypeToken<T> javaType)

public ByteBuffer serialize(T value, ProtocolVersion protocolVersion) throws InvalidTypeException
… null input as the equivalent of an empty collection.
serialize in class TypeCodec<T>
Parameters: value - An instance of T; may be null.
Throws: InvalidTypeException - if the given value does not have the expected type

public T parse(String value) throws InvalidTypeException
Parameters: value - The CQL string to parse; may be null or empty.
Returns: null on a null input.
Throws: InvalidTypeException - if the given value cannot be parsed into the expected type

protected abstract String toString(T value)
Parameters: value - the value to convert into a string

protected abstract T fromString(String value)
Parameters: value - the string to parse
https://docs.datastax.com/en/drivers/java-dse/1.2/com/datastax/driver/extras/codecs/ParsingCodec.html
2022-09-24T18:52:26
CC-MAIN-2022-40
1664030333455.97
[]
docs.datastax.com
#Flip challenge We consider AI an important part of the Idena project to improve the flip challenge and announce a contest for AI researchers and practitioners with a $55,000 reward cascade to develop an open AI instrument. We welcome AI researchers and practitioners to develop an open source AI instrument for solving flips. Idena will award the following prizes (paid in iDNA, the Idena blockchain coin) to the first individual or team to reach the respective accuracy in solving flips, with a verifiable proof: #Flip Challenge Rules An applicant who can show consistent accuracy (averaged over 3 epochs) will receive the corresponding prize cascade. For example, if the average accuracy reached is 72.5%, the prize cascade of $1,000 + $2,000 = $3,000 (equivalent amount in iDNA) will be paid. If 2 or more algorithms apply at the same testing time, the prize amounts will be paid on a first-come, first-served basis according to the accuracy reached. For example, if the first participant reached 72.5% and the second reached 74%, then the prize cascade of $3,000 will be paid to the first participant and $3,000 + $4,000 = $7,000 will be paid to the second participant. Eligible AI algorithms must provide a friendly API, be open source and cross-platform, and work without an internet connection. The AI instrument will be integrated into the Idena app for flip pattern detection. The AI should be trained on the dataset of flips that is currently available in the Idena blockchain explorer. The Idena team will use a limited number of invites to collect out-of-sample flips for contestants' AI testing. Flip challenge committee: The contest is designed and administered by the Idena team. Protocol: to be specified. The Idena team reserves the right to cancel or amend the flip challenge and these rules and conditions.
https://docs.idena.io/docs/wp/flip-challenge
2022-09-24T19:07:40
CC-MAIN-2022-40
1664030333455.97
[]
docs.idena.io
Showcase: Projects Your theme comes with a Project post type for showcasing showreels, design, art, photos or any body of work containing single videos, images or a gallery of images. Project posts can be grouped into categories that help viewers filter results in Project archives or widgets. Each Project post is intended to showcase a single project or body of work. For example, a graphic designer may have one post for an advertising project which displays a featured image of the finished product, and several additional images of the work as used on posters, magazine ads, or photos showing the process of making the art. For photographers, your project posts may only have one image per post, which is then assigned to a common category such as Landscapes, Fashion, etc. Project Categories - Go to→ - Enter the category name and click Add Category - Repeat to add more categories. Showcase is designed to handle one level of categories and will not display or breadcrumb subcategories. Use tags to link projects together that have more refined similarities, explained more later. Project Posts - Go to→ - Add a Title for your project. - Optionally add some content to the post editor. - For best results, keep this brief unless using the Content Bottom layout. - Do not insert images into the editor - Fill out your Details under the Project Options area below the editor. These fields are optional and can be left blank. - Client: Type any text you wish. - Project Date: Type any text you wish. - Project URL: Must be a URL with https:// or http:// prefix. This is converted to a “Visit Site” link labelled “Website” in your post. - Project Role: Were you the developer, designer, creative director or something else? Type any text you wish. - All labels on the post can be changed with a translation plugin to serve whatever purpose you like, or hidden. - Click the Layout tab in the PROJECT OPTIONS area below the editor - Layout determines the placement of the content and the gallery/image of this single post - Gallery type determines the type of gallery to use: - List will display images full-width in the column, one after another - Grid will create a masonry grid of thumbnails and allow you to choose how many columns you want - Slideshow will turn them into a full-width slider with thumbnail navigation below it. We recommend choosing 3 or more columns for the thumbnails. - Optional: Click the Video tab in the Project Options if you would like to add a video. - The Video Url should be the share URL for your video on any service that supports oembed such as YouTube or Vimeo. See How to Embed Audio & Video with Layers for detailed help. - The Video Thumbnail is optional and is displayed in place of the Video player when the Grid or Slideshow layout is used. - Set your Project Excerpt. - Below the Options panel you should see an Excerpt box. If not, click Screen Options at the top-right of the screen and click to enable it. While optional, excerpts are important if your Projects contain more than a few sentences, or where you want to display different or shorter introductory text in your archives/widget. - Optional: Over on the right you will see the Tags field to add some Tags (above the featured image box on the right). - Tags help you connect projects together that live in different categories. For example, you may have photography categories such as Landscapes, Fashion and Editorial with a mix of color, black and white, digital and film.
Tags allow you to add the type (such as “Digital”) to projects in different categories. While visitors can’t filter tags in the main portfolio header, clicking a tag link under your projects will show them an archive of all portfolios with that tag. Neat! - Optional: Just above Tags is an Order field. Enter a number to “weigh” the post if you plan to manually order - Select a Category - Click save Featured Image At a minimum, all projects need a Featured Image. This image represents the post in archives, widgets and shares to social media, regardless of whether the post has a video URL or gallery. Click Set Featured Image on the right sidebar and upload or browse to the image to use to represent this project. - For best results, images should be at least 1000px wide - Make sure you have configured WordPress Media options! A note on video posts: The Video Thumbnail only enables a video lightbox on the post when the grid or slideshow is used. Even if your post only contains a Video URL, you need a featured image to prevent the video player from loading into your archives. In most cases, you want to encourage your visitors to click through to the post view for video content, rather than have multiple video players load on an archive page. Project Gallery If you want to add multiple images to a project, you can do so easily in the Project Gallery located below the Project Options: - Click Add project gallery images - Click a Media Library thumbnail to select it, or click Upload Files at the top left to upload more. - Hold down Ctrl (PC) or Cmd (Mac) when clicking the thumbnails to select multiple images at once. - When you’re ready, click Add To Gallery Rearrange your images by dragging and dropping them. Delete images from the gallery by hovering over them and clicking the X icon. Add More images by repeating steps 1-3. How do I add more videos? Showcase is not really a video gallery plugin, so it is limited in its handling of video. Filmmaking or motion graphics portfolios work best with one video per project post where the category represents the body of work or thing you are showcasing. Single video is otherwise useful for tutorials, courses, short videos of your work in the real world (such as demonstrating a physical product, or showing off your work hung in a gallery, etc) where it is followed by images or photos. You can add additional (small file size) videos by uploading them to the media gallery. Videos must be mp4 files to work on most browsers. We don’t really recommend doing it this way though – instead, embed your videos the WordPress way by pasting the url or shortcode (if using a self-hosted video plugin) into the post editor, then use the Content Top or Content Bottom layout in combination with the List gallery type for a more fluid presentation. Customizing Projects Aside from the Layout and Gallery type, you can customize your Project posts using the Showcase options in the Customizer. The following steps will walk you through the options twice, once while viewing a single project post, and again while viewing a category to customize the archive view too. Post Elements & Lightbox - View one of your Project posts. - Click Customize in the Admin toolbar at the top of the page. - Click on Showcase to expand the options panel - Click on Single Page - Under Display, uncheck a box to hide the Breadcrumbs, Title, Content or Comments. - To turn off comments for individual Projects, use the Quick Edit screen under Projects in the admin instead.
- Uncheck Pagination to hide the Next and Previous navigation. - Check Popup Thumbnails To Lightbox to enable the lightbox for images. Showcase uses a built-in lightbox (PrettyPhoto) which will open images on your project posts in a nice overlay when this setting is on. Videos ignore this setting. To open Videos in a lightbox from your Project post, you must add a Video Thumbnail. Colors Color options for elements in your single Project posts such as the titles, description, project details, breadcrumbs and pagination are found under the Single Page options under the Styling section. - From Single Page, scroll down to Styling - Choose your desired colors Custom CSS In the spirit of keeping things simple and easy to use, we can’t cover every aspect of customization using controls here, but you can hide, color and reposition content to a degree using Custom CSS. See our tutorial to get started. Linking Projects Projects are linked automatically from the Project widget, Project page, or Project category view. You can also add individual Project posts to a menu: - Go to Appearance > Menus - Continue with Project Pages to learn about archives, the Project page template, and building your own with the Project widget →
https://docs.layerswp.com/doc/layers-showcase-project-posts/
2022-09-24T19:40:17
CC-MAIN-2022-40
1664030333455.97
[array(['https://refer.wordpress.com/wp-content/uploads/2018/02/leaderboard-light.png', 'Jetpack Jetpack'], dtype=object) ]
docs.layerswp.com
Notes

Flexible capturing of attributes: The Java agent offers the ability to fine-tune the attributes being sent to New Relic. Please see our docs site for more information on configuring attributes.

Agent-Side High Security Configuration: If your account is set to high security in the New Relic UI, you must add the following to your local newrelic.yml configuration file:

high_security: true

Without this property, the agent will stop collecting data when high security is enabled in the New Relic UI.

Discovery of hostname reported to New Relic: If New Relic reports an IP address for your hostname, you can now control whether the host name is an IP version 4 or 6 address by setting the following property in your newrelic.yml configuration file:

process_host:
  ipv_preference: {4 or 6}

Improved JMX metric naming: You can now set the metric name when configuring JMX metrics through a custom yaml file using the property "root_metric_name". Note, all of the JMX metrics will still be prefixed with "JMX" and end with the name of the attribute.

Fix: Naming of CGLib classes: CGLib auto-generated classes with Spring resulted in poor metric names. The agent now excludes the random part from the name.

Fix: JMS transaction naming: JMS onMessage instrumentation now uses a lower priority for naming transactions and honors the enable_auto_transaction_naming config.

Improved Jetty coverage: Jetty versions 9.04 through 9.06 were not instrumented. This has been fixed.

Fix: Potential memory leak from database calls: In some cases when database work is performed outside of a New Relic transaction, a memory leak could occur. This bug has been present in the agent since 3.5.0.

Fix: VerifyError can occur when using Nevado JMS.
https://docs.newrelic.com/docs/release-notes/agent-release-notes/java-release-notes/java-agent-370
2022-09-24T20:34:05
CC-MAIN-2022-40
1664030333455.97
[]
docs.newrelic.com
SAML Authentication SAML Authentication enables the integration of OnApp as a Service Provider into third-party systems via Single Sign-On possibility, so the users of third-party systems can use their credentials to access OnApp services, without the need to be previously registered in OnApp Cloud. This Authentication is enabled by adding an Identity Provider (IdP) instance, which is used to direct OnApp login requests to the server configured with SAML. - It must be configured properly to be able to store OnApp mapping attributes (user role, time zone, etc.). - It requires that only HTTPS protocol is used. Selecting a SAML IdP on OnApp login screen or from the drop-down menu, a user will be redirected to the login screen of that identity provider. Upon logging in there with their email and password (or if they are already logged in), they will be redirected back to OnApp Control Panel. This final redirect will contain an email attribute of that user which is used for their recognition in OnApp system – if such a user already exists, he or she is recognized and authorized, if not - a new OnApp user will be automatically created. The attributes of the third party system users will be synchronized during every login, depending on the available keys for attributes mapping. This will enable a third-party system administrator to preset the main OnApp user properties (user role, time zone, group) without the necessity to enter OnApp and make the required configurations manually. Users created without these attributes can be located and managed at Users > Users with Config Problems on your OnApp Control Panel. To do so, disable the switch Local Login for SAML Users at Control Panel > Admin > Settings > Configuration > System tab. See also:
https://docs.onapp.com/adminguide/6.6/cloud-configuration/control-panel-configuration/authentication/saml-authentication
2022-09-24T19:21:01
CC-MAIN-2022-40
1664030333455.97
[]
docs.onapp.com
. Slow WebSphere performance on Mac platforms Valid from Pega Version 7.1.2 Use the following JVM setting to improve WebSphere performance on Mac platforms: -Djava.net.preferIPv4Stack=true PATCH. Elasticsearch reports support string comparison operators Valid from Pega Version 8.2 To improve performance, reports that use string comparison operators in filters can now run queries against Elasticsearch instead of querying the database. The following operators are now supported for Elasticsearch queries. - Starts with, Ends with, Does not start with, and Does not end with - Contains and Does not contain - Greater than, Less than, Greater than or equal, and Less than or equal In cases where a query cannot be run against Elasticsearch, the query is run against the database, for example, if the query includes a join. To determine if a query was run against Elasticsearch, use the Tracer and enable the Query resolution event type. For more information, see Tracer event types to trace. Predictive models monitoring Valid from Pega Version 8.2 In Prediction Studio, you can now monitor the predictive performance of your models to validate that they make accurate predictions. Based on that information, you can re-create or adjust the models to provide better business results, such as higher accept rates or decreased customer churn. For more information, see Monitoring predictive models. Kafka custom serializer Valid from Pega Version 8.2 In Kafka data sets, you can now create and receive messages in your custom formats, as well as in the default JSON format. To use custom logic and formats for serializing and deserializing ClipboardPage objects, create and implement a Java class. When you create a Kafka data set, you can choose to apply JSON or your custom format that uses a PegaSerde implementation. For more information, see Creating a Kafka data set and Kafka custom serializer/deserialized implementation. Additional configuration options for File data sets Valid from Pega Version 8.2 You can now create File data sets for more advanced scenarios by adding custom Java classes for data encryption and decryption, and by defining a file set in a manifest file. Additionally, you can improve data management by viewing detailed information in the dedicated meta file for every file that is saved, or by automatically extending the filenames with the creation date and time. For more information, see Creating a File data set for files on repositories and Requirements for custom stream processing in File data sets. Simplified testing of event strategies Valid from Pega Version 8.2 Evaluate event strategies by creating test runs. During each run, you can enter a number of sample events with simulated property values, such as the event time, the event key, and so on. By testing a strategy against sample data, you can understand the strategy configuration better and troubleshoot potential issues. For more information, see Evaluate event strategies through test runs. Data flow life cycle monitoring Valid from Pega Version 8.2 You can now generate a report from the Run details section of a Data Flow rule that provides information about run events. The report includes reasons for specific events which you can analyze to troubleshoot and debug issues more quickly. You can export the report and share it with others, such as Global Customer Support. For more information about accessing event details, see Creating a real-time run for data flows and Creating a batch run for data flows.
https://docs.pega.com/platform/release-notes-archive?f%5B0%5D=releases_capability%3A9046&f%5B1%5D=releases_capability%3A9071&f%5B2%5D=releases_capability%3A9076&f%5B3%5D=releases_note_type%3A983&f%5B4%5D=releases_note_type%3A985&f%5B5%5D=releases_version%3A7146&f%5B6%5D=releases_version%3A27976
2022-09-24T20:34:06
CC-MAIN-2022-40
1664030333455.97
[]
docs.pega.com
1.4. Patient/subject identification in CamCOPS¶ - Patient identification fields Configuring the meaning of the ID number fields Uploading and finalizing policies - Minimum details required by the tablet software 1.4.1. Overview¶ CamCOPS is intended for use in a variety of situations, ranging from anonymous or pseudonymous use (in which subjects are identified only by a code) through to full identifiable clinical data (in which subjects must typically be identified by multiple identifiers for safety). We can use the phrase “identification policy” to describe what set of information is required in a particular scenario. A single instance of CamCOPS supports multiple groups, and each group can have its own identification policies. Thus, for example, a pseudonymous research study can co-exist with identifiable records. See Groups for more detail. 1.4.2. Patient identification fields¶ CamCOPS includes the following patient identification/information fields. Not all need be used. Forename Surname Sex (one of: M, F, X) Date of birth ID numbers(s), which are flexibly defined (see below) Address (free text) General practitioner’s (GP’s) details (free text) Other details (free text) All may be selected as part of the minimum patient identification details. What counts as the minimum is configurable. Furthermore, the meaning of the ID numbers is entirely configurable. Below we explain the purposes of this system. When writing an ID policy, use the following terms: New in version 2.2.8: otheridnum, address, gp, and otherdetails were added in CamCOPS v2.2.8. 1.4.3. Configuring the meaning of the ID number fields¶ Your institution will use one or more ID number fields. For example, in the UK NHS, every patient should have a unique nationwide NHS number. Most NHS institutions use their own ID as well, and some specialities (such as liaison psychiatry) operate in multiple hospitals. Research studies may use a local, idiosyncratic numbering system. Configure the meanings of up to 8 numbering systems (see server configuration.) The first ID number is special in only one way: the web viewer’s drop-down ID selector will default to it. So, pick your institution’s main ID number for this slot; that will save your users some effort. Todo Have the default ID number type configurable per group? 1.4.4. Uploading and finalizing policies¶ The server supports two ID policies: an upload policy – the minimum set of identifying information required to upload information to the server – and a finalizing policy – the minimum set of identifying information required for a tablet to “sign off” and transfer all its information to the server (after which the tablet app can’t edit that information). The policies you require depend on your institution. Some examples are given below. You can configure the policies using brackets ( ), AND, OR, NOT, and any of the fields listed above. Some examples are shown below. Configure the policies using the Group management option on the server main menu. New in version 2.2.8: NOT was added in CamCOPS v2.2.8. 1.4.5. Examples¶ 1.4.5.1. Example 1: clinical, multi-site¶ Suppose we have a mental health NHS Trust – call it CPFT – with its own hospitals that provides liaison psychiatry services in four other hospitals. 
We might use the following IDs: and these policies: Upload policy forename AND surname AND dob AND sex AND anyidnum Finalize policy forename AND surname AND dob AND sex AND idnum1 This would allow users to enter information while sitting in Addenbrooke’s Hospital and in possession of the forename, surname, DOB, sex, and Addenbrooke’s hospital number. Equally, the same would be true at any other of the hospitals; or the NHS number could be used. The user could then print out the information (from the CamCOPS webview PDFs) for the Addenbrooke’s records, or store an electronic copy. Once back at a CPFT office, the CPFT number(s) could be looked up, or created, and entered into the CamCOPS tablet application (by editing that patient’s details). Only once this is done will the CamCOPS software allow a “final” upload (an upload that moves rather than copies). “Final” records would then conform to a hypothetical CPFT policy of requiring a CPFT RiO number for each record, as well as basic information (forename, surname, DOB, sex). An alternative organization might standardize upon NHS numbers instead, and edit its finalizing policy accordingly. 1.4.5.2. Example 2: research¶ Suppose we’re operating in a very simple research context. We don’t want patient-identifiable data on our computers; we’ll operate with pseudonyms (codes for each subject). We might have a separate secure database to look up individuals from our pseudonyms, but that is outside CamCOPS. We might have the following identifiers: Upload policy sex AND idnum1 Finalize policy sex AND idnum1 This requires users to enter the subject’s sex and research ID only. 1.4.5.3. Example 3: research hosted by a clinical institution¶ Suppose you’re a research group operating within a clinical institution, but collecting data (under appropriate ethics approval) for research purposes. You may want to use patient-identifiable data or pseudonyms. You will want full read access to your data (likely at the SQL level), but you shouldn’t have full read access to all patients at that institution. There are at least three possible approaches. You could set up a new server, or you could add a second CamCOPS database to your existing server, or you can simply add a new group to your CamCOPS server. The last is likely to be quickest and best. 1.4.5.4. Example 4: research where personal identifying data (PID) is prohibited¶ Compare example 2, but now you want to try to enforce a “no PID” rule. This is not completely enforceable by a computer, because some CamCOPS tasks allow free text, and wherever there is free text, somebody could type in sensitive information. However, the following method can certainly help: sex AND idnum1 AND NOT (otheridnum OR forename OR surname OR dob OR address OR gp OR otherdetails) This will stop users uploading information with any PID in the Patient table, if idnum1 is a non-identifying pseudonym for the study. New in version 2.2.8: NOT and some other tokens were added in CamCOPS v2.2.8; see above. 1.4.6. Minimum details required by the tablet software¶ The tablet’s internal minimum identification policy, which is fixed, is: sex AND ((forename AND surname AND dob) OR anyidnum) This allows either a named (forename, surname, DOB, sex) or an anonymous/pseudonym-based system for research (sex plus one ID number), or any other sensible mixture as above.
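Since the policy syntax is just a boolean expression over which identifying fields are present, one way to see how a policy behaves is to evaluate it against a candidate patient record. The snippet below is purely illustrative (it is not CamCOPS code, and the simple token substitution it uses assumes well-formed policies like the examples above):

# Illustrative only: evaluate a CamCOPS-style ID policy against a record
# that says which identifying fields are present (True) or absent (False).

def policy_satisfied(policy: str, present: dict) -> bool:
    # Convert the policy's tokens into a Python boolean expression.
    expression = policy
    for token, value in present.items():
        expression = expression.replace(token, str(value))
    expression = (expression
                  .replace("AND", "and")
                  .replace("OR", "or")
                  .replace("NOT", "not"))
    return eval(expression)  # fine for a toy example; not for untrusted input

# A pseudonymous research subject: sex and idnum1 only.
record = {
    "forename": False, "surname": False, "dob": False, "sex": True,
    "anyidnum": True, "otheridnum": False, "idnum1": True,
    "address": False, "gp": False, "otherdetails": False,
}

upload_policy = "sex AND idnum1"
no_pid_policy = ("sex AND idnum1 AND NOT (otheridnum OR forename OR surname "
                 "OR dob OR address OR gp OR otherdetails)")

print(policy_satisfied(upload_policy, record))   # True
print(policy_satisfied(no_pid_policy, record))   # True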
https://camcops.readthedocs.io/en/stable/introduction/patient_identification.html
2022-09-24T19:32:09
CC-MAIN-2022-40
1664030333455.97
[]
camcops.readthedocs.io
Log event notification listener Description The LOGEVENT notification listener is similar to the TRACK notification listener, but provides more granularity. It lets you know what data was sent to Optimizely in a given event batch, and when the batch was sent. For more information about event batching, see the corresponding topic for your SDK language. You can use this notification listener to inspect and audit what data you're sending to Optimizely. Parameters The following tables show the information provided to the listener when it is triggered: Log Event The Log Event object is created using EventFactory. It represents the batch of impression and conversion events that have been passed to the Event Dispatcher to be sent to the Optimizely backend. Examples For example code, see the notification listener topic in your SDK language.
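As a rough illustration only (the SDK-specific docs remain the authoritative reference), here is what registering a LogEvent listener can look like in the Python SDK; the datafile path and the exact fields available on the log_event object are assumptions and may differ between SDK versions.

from optimizely import optimizely
from optimizely.helpers import enums

# Placeholder: load the datafile however you normally obtain it.
datafile = open("datafile.json").read()
optimizely_client = optimizely.Optimizely(datafile)

def on_log_event(log_event):
    # Fired when a batch of impression/conversion events is handed to the
    # event dispatcher; inspect or audit the outgoing payload here.
    print("Batch endpoint:", log_event.url)
    print("Batch payload:", log_event.params)

optimizely_client.notification_center.add_notification_listener(
    enums.NotificationTypes.LOG_EVENT,
    on_log_event,
)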
https://docs.developers.optimizely.com/full-stack/docs/log-event-notification-listener
2022-09-24T19:05:11
CC-MAIN-2022-40
1664030333455.97
[]
docs.developers.optimizely.com
Should LITA donate $100 to help sponsor the Freedom to Read Foundation's Saturday night reception at the 2019 ALA Annual Conference? Proposal and discussion (private Board discussion in ALA Connect) Moved by Emily Morton-Owens Seconded by Bohyun Kim Vote open Friday, May 10 - Friday, May 17, 2019 - Yes: 9 votes - No: 0 votes - I abstain: 0 votes
https://docs.lita.org/2019/05/board-vote-to-donate-to-2019-ftrf-reception/
2022-09-24T18:50:40
CC-MAIN-2022-40
1664030333455.97
[]
docs.lita.org
TWCloud adds OSLC support and now exposes model element data following OSLC Architecture Managemenet (AM) () vocabulary. Along with core OSLC services provided in TWCloud, this enables smooth integration with other OSLC-compatible tools by linking resources in Linked Data fashion. Here are the key points behind current TWCloud OSLC provider implementation: - OSLC root services document URI can be found using the following pattern - http(s)://TWC_IP:PORT/oslc/rootservices. - Each of the model elements is exposed using the following URI pattern - http(s)://TWC_IP:PORT/oslc/am/{projectID}/{elementID}. *Note: the elementID here is an Element Server ID. - We expose the following model element properties: Sample RDF/XML representation of element data - rdf:type () - represents architecture resource type, which according to OSLC AM vocabulary is always a Resource (). - dcterms:modified () - stands for the last element modification date. - dcterms:identifier () - the Element Server ID as used in element's URI pattern. - dcterms:title () - the name of the element <?xml version="1.0" encoding="UTF-8"?> <rdf:RDF xmlns: <rdf:Description rdf: <rdf:type rdf: <oslc:serviceProvider rdf: <dcterms:modified rdf:May 16, 2018 2:08:52 PM</dcterms:modified> <dcterms:identifier rdf:50775427-bce2-4de4-b747-8ab82d294237</dcterms:identifier> <dcterms:title rdf:Engine</dcterms:title> </rdf:Description> </rdf:RDF> - Currently we have neither OSLC delegated dialog nor querying services exposed. - We offer OSLC UI previews through integration with CC4TWC. For more information, check Publishing an OSLC resource. OAuth 1.0a authentication In order to ensure secure access to server resources via OSLC, OAuth 1.0a authentication protocol is used. OAuth 1.0a requires consumer key and secret to be known before starting the authentication process flow: - Consumer key - currently it needs to be generated manually via service exposed in the root services document. The following HTTP POST request should be made to a consumer key generation service (jfs:oauthRequestConsumerKeyUrl): { "name": "consumerNameGoesHere", "secret": "validOAuthSecretGoesHere" } { "key": "generatedConsumerKeyShouldBeHere" } - Consumer secret - valid OAuth consumer secrets are specified in TWCloud's Authentication server properties file.
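For illustration, the consumer key request described above could be issued with any HTTP client; the sketch below uses Python's requests library, and the service URL is a placeholder for the jfs:oauthRequestConsumerKeyUrl value found in your root services document.

import requests

# Placeholder: take the real URL from the jfs:oauthRequestConsumerKeyUrl
# entry of http(s)://TWC_IP:PORT/oslc/rootservices
consumer_key_url = "https://twc.example.com:8111/oslc/oauth/requestConsumerKey"

payload = {
    "name": "consumerNameGoesHere",
    # Must match a secret configured in TWCloud's Authentication server
    # properties file.
    "secret": "validOAuthSecretGoesHere",
}

response = requests.post(consumer_key_url, json=payload, verify=True)
response.raise_for_status()

# The service answers with the generated consumer key.
print(response.json()["key"])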
https://docs.nomagic.com/display/TWCloud190SP1/OSLC+API
2022-09-24T20:32:42
CC-MAIN-2022-40
1664030333455.97
[]
docs.nomagic.com
13. Solvers A constraint-based reconstruction and analysis model for biological systems is actually just an application of a class of discrete optimization problems typically solved with linear, mixed integer or quadratic programming techniques. Cobrapy does not implement any algorithm to find solutions to such problems but rather delegates them to dedicated solver software through the optlang package. [1]: from cobra.io import load_model model = load_model('textbook') [2]: model.solver = 'glpk' # or if you have cplex installed model.solver = 'cplex' For information on how to configure and tune the solver, please see the documentation for the optlang project and note that model.solver is simply an optlang object of class Model. [3]: type(model.solver) [3]: optlang.cplex_interface.Model 13.1. Internal solver interfaces Cobrapy also contains its own solver interfaces but these are now deprecated and will be removed completely in the near future. For documentation of how to use these, please refer to older documentation.
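To make the solver switch concrete, here is a short end-to-end sketch using the same bundled 'textbook' model; it assumes GLPK is installed, and the options exposed on the optlang configuration object can vary by solver backend.

from cobra.io import load_model

model = load_model('textbook')

# Pick the backend; any solver supported by optlang and installed locally works.
model.solver = 'glpk'

# model.solver is an optlang Model, so backend options are set through its
# configuration object (attribute availability can differ per backend).
model.solver.configuration.verbosity = 0

solution = model.optimize()
print(solution.status, solution.objective_value)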
https://cobrapy.readthedocs.io/en/latest/solvers.html
2022-09-24T19:52:54
CC-MAIN-2022-40
1664030333455.97
[]
cobrapy.readthedocs.io
Live Forms v8.1 is no longer supported. Please visit Live Forms Latest for our current Cloud Release. Earlier documentation is available too. A Live Forms user is anyone who is authenticated to Live Forms for the purpose of: These are users that don't need an account in Live Forms and also aren't involved in any of the items described above. Anonymous users do NOT count as Live Forms users and therefore are not limited by the license. Common examples are filling out a survey or a contact form. Concurrent users are Live Forms users that are logged in to Live Forms. Live Forms also offers Unlimited user licenses.
https://docs.frevvo.com/d/display/frevvo81/Concurrent+Users
2020-02-17T00:40:40
CC-MAIN-2020-10
1581875141460.64
[array(['/d/images/icons/linkext7.gif', None], dtype=object)]
docs.frevvo.com
Gets the reason that a specified health check failed most recently. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. get-health-check-last-failure-reason --health-check-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --health-check-id (string) The ID for the health check for which you want the last failure reason. When you created the health check, CreateHealthCheck returned the ID in the response, in the HealthCheckId element. Note If you want to get the last failure reason for a calculated health check, you must use the Amazon Route 53 console or the CloudWatch console. You can't use GetHealthCheckLastFailureReason for a calculated health check. HealthCheckObservations -> (list) A list that contains one Observation element for each Amazon Route 53 health checker that is reporting a last failure reason.
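The same operation is available through the SDKs; a small sketch with boto3, using a made-up health check ID:

import boto3

route53 = boto3.client("route53")

# Placeholder ID -- use the HealthCheckId returned by CreateHealthCheck.
response = route53.get_health_check_last_failure_reason(
    HealthCheckId="abcdef11-2222-3333-4444-555555fedcba"
)

# One observation per Route 53 health checker that reported a failure reason.
for observation in response["HealthCheckObservations"]:
    print(observation.get("Region"), observation.get("StatusReport"))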
https://docs.aws.amazon.com/ja_jp/cli/latest/reference/route53/get-health-check-last-failure-reason.html
2020-02-17T01:21:23
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
Thanks for using Live Forms. We recommend the following Tutorials to help you get started quickly. If you'd rather jump directly into more detailed documentation describing how to design, deploy and use Live Forms™, use the navigation links to the left. To log in to Live Forms: This method can be used to reset the password for tenant administrators and for the superuser (admin@d) for in-house installations. The Forgot Password feature is not supported for users in a SAML tenant. If SAML tenant users browse the URL frevvo/web/login, enter their login id then click Forgot Password, they will see the following error message:
https://docs.frevvo.com/d/pages/diffpagesbyversion.action?pageId=21537754&selectedPageVersions=2&selectedPageVersions=3
2020-02-17T00:38:39
CC-MAIN-2020-10
1581875141460.64
[array(['/d/images/icons/linkext7.gif', None], dtype=object)]
docs.frevvo.com
Configure email forwarding for a mailbox Email forwarding lets you to set up a mailbox to forward email messages sent to that mailbox to another user's mailbox in or outside of your organization. Important If you're using Office 365 for business, you should configure email forwarding in the Microsoft 365 admin center: Configure email forwarding in Office 365 If your organization uses an on-premises Exchange or hybrid Exchange environment, you should use the on-premises Exchange admin center (EAC) to create and manage shared mailboxes. Use the Exchange admin center to configure email forwarding You can use the Exchange admin center (EAC) set up email forwarding to a single internal recipient, a single external recipient (using a mail contact), or multiple recipients (using a distribution group). You need to be assigned permissions before you can perform this procedure or procedures. To see what permissions you need, see the "Recipient Provisioning Permissions" entry in the Recipients Permissions topic. In the EAC, navigate to Recipients > Mailboxes. In the list of user mailboxes, click or tap the mailbox that you want to configure. For on-premises Exchange organizations, the recipient limit is unlimited. For Exchange Online organizations, the limit is 500 recipients.. What if you want to forward mail to an address outside your organization? Or forward mail to multiple recipients? You can do that, too! External addresses: Create a mail contact and then, in the steps above, select the mail contact on the Select Recipient page. Need to know how to create a mail contact? Check out Manage mail contacts. Multiple recipients: Create a distribution group, add recipients to it, and then in the steps above, select the mail contact on the Select Recipient page. Need to know how to create a mail contact? Check out Create and manage distribution groups. How do you know this worked? To make sure that you've successfully configured email forwarding, do one of the following: In the EAC,. Additional information This topic is for admins. If you want to forward your own email to another recipient, check out the following topics: For information about keyboard shortcuts that may apply to the procedures in this topic, see Keyboard shortcuts for the Exchange admin center. Having problems? Ask for help in the Exchange forums. Visit the forums at Exchange Online or Exchange Online Protection. Feedback
https://docs.microsoft.com/en-us/exchange/recipients-in-exchange-online/manage-user-mailboxes/configure-email-forwarding?redirectedfrom=MSDN
2020-02-17T02:33:07
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Quality of Service (QoS) Policy Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016 You can use QoS Policy as a central point of network bandwidth management across your entire Active Directory infrastructure by creating QoS profiles, whose settings are distributed with Group Policy. Note In addition to this topic, the following QoS Policy documentation is available. QoS policies are applied to a user login session or a computer as part of a Group Policy object (GPO) that you have linked to an Active Directory container, such as a domain, site, or organizational unit (OU). QoS traffic management occurs below the application layer, which means that your existing applications do not need to be modified to benefit from the advantages that are provided by QoS policies. Operating Systems that Support QoS Policy You can use QoS policy to manage bandwidth for computers or users with the following Microsoft operating systems. - Windows Server 2016 - Windows 10 - Windows Server 2012 R2 - Windows 8.1 - Windows Server 2012 - Windows 8 - Windows Server 2008 R2 - Windows 7 - Windows Server 2008 - Windows Vista Location of QoS Policy in Group Policy In Windows Server 2016 Group Policy Management Editor, the path to QoS Policy for Computer Configuration is the following. Default Domain Policy | Computer Configuration | Policies | Windows Settings | Policy-based QoS This path is illustrated in the following image. In Windows Server 2016 Group Policy Management Editor, the path to QoS Policy for User Configuration is the following. Default Domain Policy | User Configuration | Policies | Windows Settings | Policy-based QoS By default no QoS policies are configured. Why Use QoS Policy? As traffic increases on your network, it is increasingly important for you to balance network performance with the cost of service - but network traffic is not normally easy to prioritize and manage. On your network, mission-critical and latency-sensitive applications must compete for network bandwidth against lower priority traffic. At the same time, some users and computers with specific network performance requirements might require differentiated service levels. The challenges of providing cost-effective, predictable network performance levels often first appear over wide area network (WAN) connections or with latency-sensitive applications, like voice over IP (VoIP) and video streaming. However, the end-goal of providing predictable network service levels applies to any network environment (for example, an Enterprises' local area network), and to more than VoIP applications, such as your company's custom line-of-business applications. Policy-based QoS is the network bandwidth management tool that provides you with network control - based on applications, users, and computers. When you use QoS Policy, your applications do not need to be written for specific application programming interfaces (APIs). This gives you the ability to use QoS with existing applications. Additionally, Policy-based QoS takes advantage of your existing management infrastructure, because Policy-based QoS is built into Group Policy. Define QoS Priority Through a Differentiated Services Code Point (DSCP) You can create QoS policies that define network traffic priority with a Differentiated Services Code Point (DSCP) value that you assign to different types of network traffic. The DSCP allows you to apply a value (0–63) within the Type of Service (TOS) field in an IPv4 packet's header, and within the Traffic Class field in IPv6. 
The DSCP value provides network traffic classification at the Internet Protocol (IP) level, which routers use to decide traffic queuing behavior. For example, you can configure routers to place packets with specific DSCP values into one of three queues: high priority, best effort, or lower than best effort. Mission-critical network traffic, which is in the high priority queue, has preference over other traffic. Limit Network Bandwidth Use Per Application with Throttle Rate You can also limit an application's outbound network traffic by specifying a throttle rate in QoS Policy. A QoS policy that defines throttling limits determines the rate of outbound network traffic. For example, to manage WAN costs, an IT department might implement a service level agreement that specifies that a file server can never provide downloads beyond a specific rate. Use QoS Policy to Apply DSCP Values and Throttle Rates You can also use QoS Policy to apply DSCP values and throttle rates for outbound network traffic to the following: Sending application and directory path Source and destination IPv4 or IPv6 addresses or address prefixes Protocol - Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) Source and destination ports and port ranges (TCP or UDP) Specific groups of users or computers through deployment in Group Policy By using these controls, you can specify a QoS policy with a DSCP value of 46 for a VoIP application, enabling routers to place VoIP packets in a low-latency queue, or you can use a QoS policy to throttle a set of servers' outbound traffic to 512 kilobytes per second (KBps) when sending from TCP port 443. You can also apply QoS policy to a particular application that has special bandwidth requirements. For more information, see QoS Policy Scenarios. Advantages of QoS Policy With QoS Policy, you can configure and enforce QoS policies that cannot be configured on routers and switches. QoS Policy provides the following advantages., QoS Policy makes it easier to configure a user-level QoS policy on a domain controller and propagate the policy to the user's computer. Flexibility. Regardless of where or how a computer connects to the network, QoS policy is applied - the computer can connect using WiFi or Ethernet from any location. For user-level QoS policies, the QoS policy is applied on any compatible device at any location where the user logs on. Security: If your IT department encrypts users' traffic from end to end by using Internet Protocol security (IPsec), you cannot classify the traffic on routers based on any information above the IP layer in the packet (for example, a TCP port). However, by using QoS Policy, you can classify packets at the end device to indicate the priority of the packets in the IP header before the IP payloads are encrypted and the packets are sent. Performance: Some QoS functions, such as throttling, are better performed when they are closer to the source. QoS Policy moves such QoS functions closest to the source. Manageability: QoS Policy enhances network manageability in two ways: a. Because it is based on Group Policy, you can use QoS Policy to configure and manage a set of user/computer QoS policies whenever necessary, and on one central domain-controller computer. b. QoS Policy facilitates user/computer configuration by providing a mechanism to specify policies by Uniform Resource Locator (URL) instead of specifying policies based on the IP addresses of each of the servers where QoS policies need to be applied. 
For example, assume your network has a cluster of servers that share a common URL. By using QoS Policy, you can create one policy based on the common URL, instead of creating one policy for each server in the cluster, with each policy based on the IP address of each server. For the next topic in this guide, see Getting Started with QoS Policy.
https://docs.microsoft.com/en-us/windows-server/networking/technologies/qos/qos-policy-top
2020-02-17T00:46:07
CC-MAIN-2020-10
1581875141460.64
[array(['../../media/qos/qos-gp.jpg', 'Location of QoS Policy in Group Policy'], dtype=object)]
docs.microsoft.com
Crate bitpacking Fast Bitpacking algorithms This crate is a Rust port of Daniel Lemire's simdcomp C library. It contains different flavors of integer compression via bitpacking: BitPacker1x, BitPacker4x, and BitPacker8x. Each produces a different format; the formats are incompatible with one another, and each requires integers to be encoded in blocks of a different size. BitPacker4x and BitPacker8x are designed specifically to leverage SSE3 and AVX2 instructions respectively. The library will fall back to a scalar implementation if these instruction sets are not available. For instance: - because your compilation target architecture is not x86_64 - because the CPU you use is from an older generation I recommend using BitPacker4x if you are in doubt. See the BitPacker trait for example usage.
https://docs.rs/bitpacking/0.8.2/bitpacking/
2020-02-17T01:07:24
CC-MAIN-2020-10
1581875141460.64
[]
docs.rs
When you enter a TQL statement, the system warns you of possible dependency consequences with a prompt asking if you'd like to proceed. This should make you feel safe issuing TQL commands, even commands like dropping a table. To run a TQL script non-interactively, pass the --allow_unsafe flag, for example: cat safest_script_ever.sql | tql --allow_unsafe If you do not run the script using the flag, it will fail if any of its commands might cause problems with dependent objects.
https://docs.thoughtspot.com/5.0/admin/loading/check-dependencies-tql.html
2020-02-17T00:49:14
CC-MAIN-2020-10
1581875141460.64
[]
docs.thoughtspot.com
Tools Many tools have been written for working with PyMongo. ORM-like Layers - uMongo - uMongo is a Python MongoDB ODM. Its inception comes from two needs: the lack of an async ODM and the difficulty of doing document (un)serialization with existing ODMs. Works with multiple drivers: PyMongo, TxMongo, motor_asyncio, and mongomock. The source is available on GitHub No longer maintained - MongoKit - The MongoKit framework is an ORM-like layer on top of PyMongo. There is also a MongoKit google group. - MongoAlchemy - MongoAlchemy is another ORM-like layer on top of PyMongo. Its API is inspired by SQLAlchemy. Framework Tools This section lists tools and adapters that have been designed to work with various Python frameworks and libraries. - Djongo is a connector for using Django with MongoDB as the database backend. Use the Django Admin GUI to add and modify documents in MongoDB. The Djongo Source Code is hosted on GitHub and the Djongo package is on pypi. Alternative Drivers These are alternatives to PyMongo.
https://pymongo.readthedocs.io/en/stable/tools.html
2020-02-17T00:18:45
CC-MAIN-2020-10
1581875141460.64
[]
pymongo.readthedocs.io
' Return Product objects with the specified ID.
Dim query As ObjectQuery(Of Product) = context.Products.Where("it.ProductID = @product", New ObjectParameter("product", productId))

// Return Product objects with the specified ID.
ObjectQuery<Product> query = context.Products
    .Where("it.ProductID = @product", new ObjectParameter("product", productId));

' Define a query that returns a nested
' DbDataRecord for the projection.
Dim query As ObjectQuery(Of DbDataRecord) = context.Contacts.Select("it.FirstName, it.LastName, it.SalesOrderHeaders") _
    .Where("it.LastName = @ln", New ObjectParameter("ln", lastName))

// Define a query that returns a nested
// DbDataRecord for the projection.
ObjectQuery<DbDataRecord> query = context.Contacts.Select("it.FirstName, " + "it.LastName, it.SalesOrderHeaders")
    .Where("it.LastName = @ln", new ObjectParameter("ln", lastName));

' Define the query with a GROUP BY clause that returns
' a set of nested LastName records grouped by first letter.
Dim query As ObjectQuery(Of DbDataRecord) = _

// Define the query with a GROUP BY clause that returns
// a set of nested LastName records grouped by first letter.
ObjectQuery<DbDataRecord> query =

Note: Use the ToTraceString method to see the data source command that will be generated by an ObjectQuery. For more information, see Object Queries.

Aliases

Query builder methods are applied sequentially to construct a cumulative query command. This means that the current ObjectQuery command is treated like a sub-query to which the current method is applied.

Note: The CommandText property returns the command for the ObjectQuery instance.

In a query builder method, you refer to the current ObjectQuery command by using an alias. By default, the string "it" is the alias that represents the current command, as in the following example:

' Return Product objects with a standard cost
' above 10 dollars.
Dim cost = 10
Dim productQuery As ObjectQuery(Of Product) = context.Products.Where("it.StandardCost > @cost")
productQuery.Parameters.Add(New ObjectParameter("cost", cost))

int cost = 10;
// Return Product objects with a standard cost
// above 10 dollars.
ObjectQuery<Product> productQuery = context.Products
    .Where("it.StandardCost > @cost", new ObjectParameter("cost", cost));

When you set the Name property of an ObjectQuery, that value becomes the alias in subsequent methods. The following example extends the previous one by setting the Name property of the ObjectQuery to "product" and then using this alias in the subsequent OrderBy method:

' Return Product objects with a standard cost
' above 10 dollars.
Dim cost = 10
Dim productQuery As ObjectQuery(Of Product) = context.Products.Where("it.StandardCost > @cost")
productQuery.Parameters.Add(New ObjectParameter("cost", cost))

' Set the Name property for the query and then
' use that name as the alias in the subsequent
' OrderBy method.
productQuery.Name = "product"
Dim filteredProduct As ObjectQuery(Of Product) = productQuery.OrderBy("product.ProductID")

' Get the contacts with the specified name.
Dim contactQuery As ObjectQuery(Of Contact) = context.Contacts.Where("it.LastName = @ln AND it.FirstName = @fn", _
    New ObjectParameter("ln", lastName), New ObjectParameter("fn", firstName))

// Get the contacts with the specified name.
ObjectQuery<Contact> contactQuery = context.Contacts
    .Where("it.LastName = @ln AND it.FirstName = @fn",
        new ObjectParameter("ln", lastName),
        new ObjectParameter("fn", firstName));

See Also

Concepts
Querying a Conceptual Model
https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/bb896238(v=vs.100)?redirectedfrom=MSDN
2020-02-17T02:07:09
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Figure: Basic workflow using Dynamic Analyzer

Using the various profiling features of the Dynamic Analyzer, you can determine where your application can be optimized further.
https://docs.tizen.org/application/tizen-studio/common-tools/dynamic-analyzer/overview
2020-02-17T00:52:36
CC-MAIN-2020-10
1581875141460.64
[]
docs.tizen.org
9. Help

The SIMP team is here to help! Please see the following for a list of resources.

- 9.1. Frequently Asked Questions
  - 9.1.1. SIMP Version Guide
  - 9.1.2. What is the Password Complexity for SIMP?
  - 9.1.3. How can the root user login
  - 9.1.4. Meltdown and Spectre
  - 9.1.5. Why aren’t audit logs being forwarded to syslog?
  - 9.1.6. Puppet-Related Issues
  - 9.1.7. Why does SIMP use rsync?
  - 9.1.8. How to recover from SELINUX policy failure
  - 9.1.9. YUM Repo Issues
- 9.2. Public Resources
- 9.3. Commercial Resources
https://simp.readthedocs.io/en/master/help/
2020-02-17T02:36:15
CC-MAIN-2020-10
1581875141460.64
[]
simp.readthedocs.io
How to configure SSO for Druva inSync Cloud using the IDP Azure AD?

Overview

This article describes the steps to configure SSO for Druva inSync Cloud using the IDP Azure AD. The SSO is configured in the following order:

Configure SSO for Druva inSync Cloud using the IDP Azure AD

Configure a custom App for Druva inSync on Azure Portal

- Log on to the Azure portal (URL: portal.azure.com)
- Log on using Azure Administrator account.
- Navigate to Azure Active Directory > Enterprise Applications.
- On the Enterprise applications page, click New application.
- Click All > Non-gallery Application.
- Enter Druva inSync as the display name of the application and then click Add. Druva inSync will be added as an application.
- Navigate to Azure Active Directory > Enterprise Applications > All Applications and configure the Application Settings.
- Click Druva inSync Application. The application configuration page opens.
- Go to Manage > Properties and configure the settings as shown in the image below.
- Upload a Druva inSync Logo to identify the application easily.
- Click Save.

Configure Azure AD single sign-on

To configure Azure AD single sign-on with Druva, perform the following steps:

- On the Druva inSync application integration page of the Azure portal, click Single sign-on.
- On the Single sign-on window, select Mode as SAML-based Sign-on to enable single sign-on.
- Under the Druva Domain and URLs section, enter the following values.
- Identifier: druva-cloud
- Reply URL:
- Under User Attributes:
- Set User Identifier to user.mail.
- Select View and edit all other user attributes.
- Under SAML Token Attributes, delete all the attributes that are added by default.
- Add the attributes in the order and case specified in the table below, and ensure that the order of the attributes and the case of the Attribute Name is preserved.

Follow these steps to add the above attributes:

Click Add attribute to open the Add Attribute window.
Enter the attribute name as shown for that row.
Enter the respective attribute value from the Value column. The generated token value is explained later in the tutorial.
Click OK.

For information on generating SSO token, see Generate SSO token.

On the SAML Signing Certificate section, click Certificate (Base64) and save the certificate file on your system.

On the Druva Configuration section, click Configure Druva to open the Configure sign-on window.

Copy the SAML Single Sign-On Service URL from the Quick Reference section.

Configure Druva inSync Cloud to use Azure AD login

- On a different web browser window, log on to inSync Management Console as an administrator.
- Go to > Settings.
- Open the Single Sign-On tab, click Edit.
- On the Single Sign-On Settings window, add the following details:
- ID Provider Login URL: Enter the SAML Single Sign-On Service URL copied earlier.
- ID Provider Certificate: Open your base-64 encoded certificate in notepad and copy the content to this field.
- Clear AuthnRequests Signed and Want Assertions Encrypted.
- Click Save.

Assigning Users/Groups in Azure AD to use Druva inSync app

- On the Azure portal, open the applications view.
- Open the directory view and navigate to Enterprise applications > All applications.
- Select Druva from the applications list.
- In the menu on the left, click Users and groups.
- Click Add and select Users and groups on the Add Assignment window.
- On the Users and groups dialog, select the Users or Group that you want to assign the Druva App, in the Users list.
- Since Auto-provisioning the users using Azure AD is not configured, ensure that the User or Admin account selected has a corresponding account created in inSync.
- Click Select on the Users and groups window.
- Click Assign on the Add Assignment window.

Enabling single sign-on in inSync for Users and Administrators

Refer to the following articles from the inSync documentation:
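As an optional sanity check before pasting the downloaded Certificate (Base64) content into inSync, you can inspect the certificate locally. This sketch uses the third-party Python cryptography package; the file name is only an example.

from cryptography import x509

# Load the Base64 (PEM) certificate saved from the Azure portal.
with open("druva-insync-base64.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Issuer:", cert.issuer.rfc4514_string())
print("Expires:", cert.not_valid_after)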
https://docs.druva.com/Knowledge_Base/inSync/How_To/How_to_configure_SSO_for_Druva_inSync_Cloud_using_the_IDP_Azure_AD%3F
2017-10-17T04:06:19
CC-MAIN-2017-43
1508187820700.4
[]
docs.druva.com
Securing Server-Agent Communications Typically, the RHQ Server and RHQ Agents talk to each other in the clear, meaning all communications traffic is unencrypted and no authentication is performed on either end. Many times, the environment in which you install your RHQ Servers and RHQ Agents does not warrant the extra setup time and runtime performance degradation you incur when enabling security on RHQ communications traffic (for example, if you already have a VPN and/or firewall protections in place that guard your RHQ Servers and RHQ Agents against intrusion). However, there are those that need or simply want the peace of mind of knowing their RHQ traffic is fully encrypted and authenticated. This section will describe the steps that you need to perform in order to fully secure the communications traffic between RHQ Servers and RHQ Agents. - Implications Of Not Using Secure Communications - RHQ and SSL - Setting Up Secure RHQ Communications - Step 1 - Use SSL Transport To Enable Encryption - Step 2 - Prepare Your SSL Certificates - Step 3 - Distribute Your Keystores and Truststores - Step 4 - Tell The RHQ Servers and Agents About Their Keystores/Truststores - Step 5 - Test Your Setup - Setting Up Server-Side sslsocket Transport - Troubleshooting Secure Communications Setup Implications Of Not Using Secure Communications RHQ does not secure the communications between the RHQ Server and RHQ Agent by default (out-of-box). Some issues that are of concern when running RHQ without secure communications are outlined below. You need to be aware of these issues before deciding to run RHQ with an unsecured communications channel between server and agent: - It is possible for an unauthorized person to install a rogue RHQ Agent and have that agent register with the RHQ Server. A rogue agent is one in which the RHQ administrator did not install or give permission to register into the RHQ system. - It is possible for an intruder to silently sniff the communications between the RHQ Agent and RHQ Server, possibly obtaining very sensitive data about the machines they are running on. - It is possible for an intruder to capture and manipulate the communications traffic between the RHQ Agent and RHQ Server as part of a man-in-the-middle attack, possibly being able to do very damaging things to the machines they are running on. Running RHQ without securing the communications should only be done under the following circumstances, and only when you understand the full implications of doing so (as explained above): - If you are installing the RHQ Server and all RHQ Agents on a fully secured network, with firewalls and/or a VPN limiting access to your entire network to only authorized and trusted personnel. - If you are running a demo of RHQ. When running RHQ as a demo, you may typically want to get the system installed and running as quickly and easily as possible. You normally would not want to concern yourself with securing the communications which involves manual, time-consuming steps. RHQ and SSL RHQ utilizes SSL technology to perform both encryption and authentication. You can enable encryption (scrambling the data between server and agent to avoid someone eavesdropping on the traffic) and optionally enable authentication (which prohibits an intruder from attempting to spoof either a RHQ Server or RHQ Agent). It is recommended that you understand the basics of SSL and certificate-based security before attempting to secure your RHQ communications. 
Encryption By simply using a transport that uses SSL, you automatically get encryption. This means that when you configure your RHQ Server and RHQ Agent's communications layer, use an SSL-enabled transport to encrypt the traffic. SSL-enabled transports includes sslservlet and sslsocket. You do not have to worry about setting up certificates if you want to use SSL-enabled transports just for encryption. The RHQ Server ships with a certificate and the RHQ Agent will create a self-signed certificate if it needs one. Note that it is possible to just use SSL encryption without authentication. Some people may just wish to encrypt their RHQ traffic, without requiring the RHQ Servers and RHQ Agents authenticating each other. This setup, while less secure, is much easier to setup because it does not require creating and distributing trusted certificates. Authentication Authentication via SSL requires that you distribute trusted certificates to your RHQ Servers and RHQ Agents. You must also configure those RHQ Servers and RHQ Agents to reject any messages coming from remote clients that do not match any of those trusted certificates. In order to support authentication, you must use SSL-enabled transports (which include sslservlet and sslsocket). You must also obtain trusted certificates for all your servers and agents and package those certificates in a set of keystore and truststore files. You can configure the RHQ Server to authenticate RHQ Agents, RHQ Agents to authenticate the RHQ Server or both. RHQ provides this flexibility in case you only want to authenticate in one direction but not another. However, when people feel the need for authentication, they will usually enable it in both directions. Authentication requires a bit more work to setup. Because true authentication requires a high degree of trust, you have to manually create and sign your certificates, create keystores and truststores that contain those certificates, then distribute your keystores and truststores in a highly secure manner to all your RHQ Servers and RHQ Agents. This may mean going as far as physically hand-delivering your trusted certificates via memory stick and copying the certificates from the memory stick to all the computers hosting the RHQ Servers and RHQ Agents. Many times it is not as highly paranoid as that - however, at some point along the way, you have to place trust in whatever certificates you are using and distributing. Setting Up Secure RHQ Communications The following are the steps necessary to set up secure communications in RHQ. Remember that you have the option for encryption-only or encryption-with-authentication. If you choose the former, you do not need to perform any of the steps dealing with the creation, packaging and distribution of certificates and keystores/truststores. If you wish to have full security with encryption-with-authentication, follow all the steps listed below. If you are setting up with encryption-with-authentication, these steps assume you want authentication in both directions (that is, the agents will need to authenticate with the server and the server will need to authenticate with the agents). The RHQ low-level communications layer is based on JBoss/Remoting and uses what is known as a transport to ship messages back and forth between agents and servers. A transport can be either unencrypted (like the raw socket transport or servlet) or it can be encrypted (like sslsocket or sslservlet). 
The servlet-based transports, servlet and sslservlet, are HTTP and HTTPS transports respectively. They are "servlet"-based because the HTTP/HTTPS traffic is routed through a servlet running in the Tomcat server hosted within the RHQ Server. We use these servlet based transports because they leverage the highly performant Tomcat connector infrastructure with no need for additional thread pooling to accept incoming agent requests since Tomcat will handle the requests. Using these servlet-based transports means that the configuration you set up here for agent communications are the same settings that take effect for user requests coming into the GUI via a browser. For the following steps, we are going to configure the server-to-agent channel to use the sslsocket transport and the agent-to-server channel to use the sslservlet transport. Step 1 - Use SSL Transport To Enable Encryption The first thing you need to do is tell the RHQ Servers and RHQ Agents to use SSL when talking to each other. RHQ Server Instructions - Shutdown the RHQ Server - Go to the /bin directory under the location where your RHQ Server is installed - In that directory, find the file rhq-server.properties and load it into your favourite text editor - Find the following configuration preferences used to set the server's communications connector and set them appropriately: rhq.communications.connector.transport=sslservlet rhq.communications.connector.bind-address= rhq.communications.connector.bind-port= rhq.communications.connector.transport-params=/jboss-remoting-servlet-invoker/ServerInvokerServlet where transport is the SSL transport you wish to use - sslservlet in this case. Because we are using sslservlet transport, the other settings can typically be left as-is - empty value defaults for bind-address and bind-port and the servlet path for transport-params. - If you only want SSL encryption, ensure that certificate based authentication is disabled by having the following properties set as below: rhq.server.tomcat.security.client-auth-mode=false rhq.server.client.security.server-auth-mode-enabled=false - (Optional) You may wish to explicitly define the secure socket protocol used by this connector, although the default (TLS) is usually good enough. If the default protocol of TLS is not what you want, find the following configuration preferences and set them appropriately: rhq.server.tomcat.security.secure-socket-protocol=TLS rhq.server.client.security.secure-socket-protocol=TLS - Save the configuration file - If you do not plan on proceeding to setup SSL authentication at this stage, you can restart the RHQ Server. RHQ Agent Instructions - Answer all prompts until you get to the prompt asking for the Agent Transport Protocol and enter sslsocket. The name for this configuration preference is rhq.communications.connector.transport. - At the prompt asking for the RHQ Server Hostname or IP Address, enter the public endpoint address of the RHQ Server. This is typically just the hostname where your server is running. If you are not sure of this value, you can easily determine it by going to the server GUI's Administration > Topology > Servers page - the public endpoint is listed there. Note that this hostname or IP address must be routable by the agent, otherwise, the agent will get connection failures when it tries to talk to the server. The name for this configuration preference is rhq.agent.server.bind-address. 
- At the prompt asking for the RHQ Server Port, enter the port that the RHQ Server will be listening to for agent requests. Because we are using the sslservlet transport, this will be the Tomcat secure port, whose default is 7443. The name for this configuration preference is rhq.agent.server.bind-port. - At the prompt asking for the RHQ Server Transport Protocol, enter sslservlet which is the RHQ Server's new transport that we set in the previous section. The name for this configuration preference is rhq.agent.server.transport. - At the prompt asking for the RHQ Server Transport Parameters, enter the RHQ Server's new transport parameters as defined by its rhq.communications.connector.transport-params setting which, for sslservlet transport, must be "/jboss-remoting-servlet-invoker/ServerInvokerServlet". The name for this configuration preference is rhq.agent.server.transport-params. - If you only want SSL encryption, ensure that certificate based authentication is disabled by having the following properties set: - Client Authentication Mode : none - Server Authentication Mode Enabled? : false - (Optional) You may wish to explicitly define the agent connector's transport parameters. You may also wish to explicitly set the secure socket protocols used by the agent (however, the default protocol of TLS is usually sufficient). - Agent Transport Parameters are the optional JBoss/Remoting transport parameters that are used by the agent and server to interact with the agent's connector. See the JBoss/Remoting documentation for specifics. The name for this configuration preference is rhq.communications.connector.transport-params. - Incoming Secure Socket Protocol is the protocol used when accepting incoming messages from the RHQ Server (make sure this matches the RHQ Server's protocol setting rhq.server.client.security.secure-socket-protocol). The name for this configuration preference is rhq.communications.connector.security.secure-socket-protocol. - Outgoing Secure Socket Protocol is the protocol used when sending outgoing messages to the RHQ Server (make sure this matches the RHQ Server's protocol setting rhq.communications.connector.security.secure-socket-protocol). The name for this configuration preference is rhq.agent.client.security.secure-socket-protocol. - Exit the agent (effectively shutting it down) and then restart it. At this point, you have now configured your RHQ Servers and RHQ Agents to encrypt their messages to each other via SSL. You can be assured that no one can effectively eavesdrop on the communications between them. However, if you wish to strengthen the security of the network traffic even further, continue on to the next series of steps which will enable certificate-based authentication between your RHQ Servers and RHQ Agents. Step 2 - Prepare Your SSL Certificates If you wish to have your RHQ Servers and RHQ Agents authenticate one another, you need something that "identifies" each one of them. SSL requires digital certificates for this purpose. If your company or organization can request and receive officially signed certificates from a trusted CA, you will need to obtain one certificate for each of your RHQ Servers and RHQ Agents that you plan on deploying in your RHQ environment. If you already have the certificates given to you from your CA, you must place each of them in their own keystore file and combine all of them into a single truststore file. Otherwise, please follow the instructions to create your own certificates. 
The purpose of these instructions is to generate a keystore file and a truststore file for each RHQ Server and RHQ Agent. Each keystore file will contain a single self-signed certificate that belongs to one of the RHQ Server or RHQ Agent entities. Each truststore file contains all the certificates belonging to every RHQ Server and RHQ Agent.

- For each RHQ Server or RHQ Agent, generate its keystore file with a keytool command along these lines:

> keytool -genkey -alias myhost -keystore myhost.keystore -storetype JKS -keyalg DSA -validity 3650 -storepass jonpassword -keypass jonpassword -dname "CN=myhost.mycorp.com"

In this example, assume one of my RHQ Agents will be installed on a machine whose hostname is "myhost.mycorp.com". This command creates and self-signs a certificate and stores it under the alias "myhost" in the file "myhost.keystore". The certificate is good for 10 years (3650 days). The certificate keys were generated using the DSA algorithm and were stored in the keystore using the JKS format. The key itself and the keystore file have been password protected with the password "jonpassword". It is recommended that you at least choose your own passwords when generating your keystores. The important part is to make sure you set the Common Name (CN) of the Distinguished Name (the -dname option) to the correct address where this keystore is to be installed. That is because as part of the SSL handshake, a remote client will attempt to verify that the issuer of the certificate (as listed in the CN) is the same name as where the certificate actually came from. Now that we have generated a self-signed certificate for this RHQ Agent and stored it in a file named "myhost.keystore", we can store that away for now and continue generating certificates/keystores for the rest of the machines until we have one keystore file for each and every machine that will host a RHQ Server or RHQ Agent. It is best to name the keystores so you remember which keystore file belongs to which machine (hence why, in the example above, the hostname was part of the filename). Same holds true with the alias names.

- Put each self-signed certificate you generated in the previous step in a single truststore file. You do this by exporting each certificate from each keystore and importing them all into a single truststore file.

- For each keystore file, export the self-signed certificate with a command like:

> keytool -export -alias myhost -keystore myhost.keystore -storetype JKS -storepass jonpassword -file myhost.cer

This extracts the self-signed certificate from the previously created myhost.keystore file and stores the certificate in the file "myhost.cer".

- For each exported self-signed certificate, import them into a single truststore file with a command like:

> keytool -import -alias myhost -file myhost.cer -keystore all.truststore -storetype JKS -storepass jonpassword

This command is similar to the -genkey command used to create the original keystore certificate. However, rather than asking the keytool to generate a new certificate, we are giving it an existing certificate and asking keytool to place it in the truststore file. The -keystore option defines the name of our truststore file. -alias is the name that we assign this certificate within the truststore file (note that for convenience, we give it the same alias under which it was found in its keystore file). Once a certificate is placed in the truststore, you will no longer need the certificate file ("myhost.cer" in the above example).

- Repeat these steps for each keystore file you created. You want to import every certificate into the same truststore - eventually having a single truststore file that contains all of your certificates. For example, if I had a total of 5 RHQ Servers and RHQ Agents in my RHQ environment, I would have 5 separate keystore files but a single all.truststore file that contains all 5 certificates.
You can use keytool to list the certificates in your truststore file, to make sure you did them all: > keytool -list -keystore all.truststore -storepass jonpassword -storetype JKS Keystore type: JKS Keystore provider: SUN Your keystore contains 2 entries anotherhost, Feb 25, 2007, trustedCertEntry, Certificate fingerprint (MD5): 24:D9:8A:50:BA:1B:26:08:DC:44:A8:2A:9E:8A:43:D9 myhost, Feb 25, 2007, trustedCertEntry, Certificate fingerprint (MD5): 91:F8:78:15:21:E8:0C:73:EC:B6:3B:1D:5A:EC:2B:01 When you are all done, you will no longer need the certificate files (e.g. myhost.cer). Step 3 - Distribute Your Keystores and Truststores At this point, you have created a set of keystore files (one for each RHQ Server and RHQ Agent in your RHQ environment) and a single truststore file (a duplicate copy is to be given to each RHQ Server and RHQ Agent). You must now distribute those files to all the machines where your RHQ Servers and RHQ Agents live. You must do so in a secure fashion and ensure that no one can steal, intercept or otherwise manipulate your keystore/truststore files. You must also make sure that you distribute the keystore files to the host machines that match the certificates' CN host addresses. If you mix them up and, for example, put the "myhost" keystore file on the "anotherhost.mycorp.com" machine, the SSL communications will fail for the RHQ Server or RHQ Agent running on "anotherhost". RHQ Server Instructions Each RHQ Server distribution is running inside a JBossAS application server instance. That JBossAS has a standalone/configuration subdirectory under its installation directory ($JBOSS_HOME) that you can use for the location to store the server's keystore/truststore files. Technically, you can put them anywhere that the JBossAS server can access them, but the rest of these instructions will assume you will place them in the $JBOSS_HOME/standalone/configuration directory. If you installed RHQ to use the JBossAS that ships with RHQ (which is the typical install scenario), $JBOSS_HOME is $RHQ_SERVER_HOME/jbossas where $RHQ_SERVER_HOME is the directory where you unzipped and installed RHQ itself. For each RHQ Server, take its keystore file (make sure the keystore file has the appropriate CN value that matches the RHQ Server's hostname) and store it in $JBOSS_HOME/standalone/configuration under the name server.keystore. Make a copy of your truststore file and place it in that same directory under the name all.truststore. RHQ Agent Instructions Each RHQ Agent distribution has a /conf directory. It is the logical choice to store the agent's keystore/truststore files. (note: putting them here makes them safe when performing agent auto-updates - agents will retain all keystore/truststore files that are found in the /conf and /data directory). For each RHQ Agent, take its keystore file (make sure the keystore file has the appropriate CN value that matches the RHQ Agent's hostname) and store it in the agent's /conf directory. Make a copy of your truststore file and place it in the agent's /conf directory as well. Step 4 - Tell The RHQ Servers and Agents About Their Keystores/Truststores Now you have to tell your RHQ Servers and RHQ Agents where your keystore and truststore files are in addition to providing other information about those files so they can be read properly. After completing this step, your RHQ Servers and RHQ Agents will be able to successfully authenticate themselves to each other. RHQ Server Instructions - Shutdown the RHQ Server. 
- Open the /bin directory in the RHQ Server installation directory.
- In that directory, find the file rhq-server.properties and load it into your favorite text editor.
- Find the following configuration preferences used to set the server's security using information about your new keystore and truststore files:

rhq.server.tomcat.security.client-auth-mode=true
rhq.server.tomcat.security.secure-socket-protocol=TLS
rhq.server.tomcat.security.algorithm=SunX509
rhq.server.tomcat.security.keystore.alias=myhost
rhq.server.tomcat.security.keystore.file=${jboss.server.config.dir}/server.keystore
rhq.server.tomcat.security.keystore.password=jonpassword
rhq.server.tomcat.security.keystore.type=JKS
rhq.server.tomcat.security.truststore.file=${jboss.server.config.dir}/all.truststore
rhq.server.tomcat.security.truststore.password=jonpassword
rhq.server.tomcat.security.truststore.type=JKS
...
# Client-side SSL Security Configuration (for outgoing messages to agents)
rhq.server.client.security.secure-socket-protocol=TLS
rhq.server.client.security.keystore.file=${jboss.server.config.dir}/server.keystore
rhq.server.client.security.keystore.algorithm=SunX509
rhq.server.client.security.keystore.type=JKS
rhq.server.client.security.keystore.password=jonpassword
rhq.server.client.security.keystore.key-password=jonpassword
rhq.server.client.security.keystore.alias=myhost
rhq.server.client.security.truststore.file=${jboss.server.config.dir}/all.truststore
rhq.server.client.security.truststore.algorithm=SunX509
rhq.server.client.security.truststore.type=JKS
rhq.server.client.security.truststore.password=jonpassword
rhq.server.client.security.server-auth-mode-enabled=true

What is being configured here is the server-side Tomcat SSL security settings (rhq.server.tomcat.security...) to handle incoming messages from RHQ Agents via the sslservlet transport and the client-side SSL security settings (rhq.server.client.security...) to handle outgoing messages to RHQ Agents via the sslsocket transport. Because we are sharing the keystore and truststore for both directions, a lot of these values are the same. Since we want to enable SSL authentication on both the server side and the client side, we set rhq.server.tomcat.security.client-auth-mode to "true" (which tells Tomcat to only process an incoming request if it has a valid SSL certificate) and we set rhq.server.client.security.server-auth-mode-enabled to "true" (meaning any outgoing messages sent to a RHQ Agent will only be sent if that RHQ Agent has a valid SSL certificate). The rest of the settings and their values should be fairly self-evident. You are simply telling the RHQ Server where it can find its keystore and truststore files, the passwords to access data in those files, the alias of the RHQ Server's own certificate as found in the keystore, etc.

- Save the configuration file
- Restart the RHQ Server

RHQ Agent Instructions

- Answer all prompts until you get to the prompts asking about security. Because we are sharing the keystore and truststore for both directions, a lot of these values are the same.
The prompts to look for and their new values are: - Client Authentication Mode: need - Server Authentication Mode Enabled?: true - Incoming Secure Socket Protocol: TLS - Server-side Keystore File: conf/myhost.keystore - Server-side Keystore Algorithm: SunX509 - Server-side Keystore Type: JKS - Server-side Keystore Password: jonpassword - Server-side Keystore Key Password: jonpassword - Server-side Keystore Key Alias: myhost - Server-side Truststore File: conf/all.truststore - Server-side Truststore Algorithm: SunX509 - Server-side Truststore Type: JKS - Server-side Truststore Password: jonpassword - Outgoing Secure Socket Protocol: TLS - Client-side Keystore File: conf/myhost.keystore - Client-side Keystore Algorithm: SunX509 - Client-side Keystore Type: JKS - Client-side Keystore Password: jonpassword - Client-side Keystore Key Password: jonpassword - Client-side Keystore Key Alias: myhost - Client-side Truststore File: conf/all.truststore - Client-side Truststore Algorithm: SunX509 - Client-side Truststore Type: JKS - Client-side Truststore Password: jonpassword - Exit the agent (effectively shutting it down) and then restart it At this point, you have now configured your RHQ Servers and RHQ Agents to both encrypt their messages to each other and to authenticate each other via SSL. You now can be assured that no one can effectively eavesdrop on your RHQ communications nor can an infiltrator attempt to spoof itself as a bogus RHQ Server or RHQ Agent. Step 5 - Test Your Setup Once you are done with the preceding steps, you can finally restart your RHQ Servers and RHQ Agents. They should begin to talk to each other normally - if you have done everything correctly, you will not notice anything different! If you want to confirm that they really are using SSL authentication, simply remove a keystore or truststore file from either a RHQ Server or RHQ Agent and you should begin to notice errors appear in their log files. Removing a keystore file or truststore file from a RHQ Agent will prohibit that agent from being able to send and receive messages to/from the RHQ Server - which you can confirm by looking at the agent log file and looking for error messages. After you've finished testing, make sure you remember to restore the keystore/truststore files. You can test the SSL authentication by creating another keystore for one of your RHQ Agents, replace that keystore with the original keystore and try to see if that RHQ Agent can talk to the RHQ Server. Because this new keystore has a certificate that does not exist in the RHQ Server's truststore, the RHQ Server will no longer trust that agent and will reject its messages. In effect, you simulated an infiltrator trying to spoof a RHQ Agent and the RHQ Server detected this security breach. Setting Up Server-Side sslsocket Transport As mentioned previously, if you use the sslservlet transport, messages from the agent to the server are routed over secure HTTP through the JBossAS app server's Tomcat web container, the same as when your users make browser requests to the server GUI. If: - you wish to allow your users to access the server GUI over the secure https: protocol... - and you use sslservlet transport... 
- and you require agents to authenticate themselves to the server via SSL certificates

then you will also require your users' browsers to have certificates installed and those user certificates must be placed in your server-side truststore file; otherwise, your users that try to access the server GUI over https: will find that Tomcat will reject their requests due to missing certificates, even if the user can authenticate themselves using their RHQ username and password.

In highly secure environments, this may not be a problem because users might already have certificates assigned to them and installed in their browser and you might be able to get or build a truststore that contains all of your users' certificates. However, under most situations, this is not desirable. Users normally do not have their own certificates installed in their browsers and even if they did, you probably cannot obtain their certificates ahead of time to place them in the server's truststore. The users will authenticate themselves to the server GUI via their own RHQ username and password.

So the question becomes, how can you require agents to authenticate themselves with certificates without requiring users to do so?

The first way you can do this is (as mentioned above) to have the agents talk to the server over one secure web HTTP port (for example, 7443) and have the user browsers talk to the server over another secure web HTTP port (for example, 9443). This solution is described above and will not be repeated here.

The second way you can do this lies in the ability of RHQ to use a different transport. Rather than have the agents talk to the server over secure HTTP through Tomcat using the sslservlet transport, we can tell the servers and agents to use the non-HTTP based sslsocket transport instead (this is the same transport that is used when the server sends messages to the agent - now we are simply saying the messages flowing in the opposite direction (agent-to-server) should use that same transport). This will enable the RHQ Server to open a special server-side socket that you designate that will be used to accept agent requests. This special socket will circumvent Tomcat and the Tomcat configuration - it will have its own SSL security configuration, allowing you to configure Tomcat to be less strict in the requests it accepts.

All of the steps covered in previous sections are still valid. The only difference in being able to use sslsocket versus sslservlet for agent-to-server communication is a few configuration setting changes. These differences will be explained below.

sslsocket RHQ Server Settings

In general, the rhq-server.properties security settings with the names that start with rhq.server.tomcat.security do not need to be changed since we aren't going to ask the agents to go through Tomcat. So those should be left as they were when the server was initially installed; instead, you will now want to use the rhq.communications.connector.security settings. In rhq-server.properties:

rhq.communications.connector.transport=sslsocket
rhq.communications.connector.bind-address=
rhq.communications.connector.bind-port=55555
rhq.communications.connector.transport-params=
...
# Server-side SSL Security Configuration (for incoming messages from agents) # These are used when secure transports other than sslservlet are used rhq.communications.connector.security.secure-socket-protocol=TLS rhq.communications.connector.security.keystore.file=${jboss.server.config.dir}/server.keystore rhq.communications.connector.security.keystore.algorithm=SunX509 rhq.communications.connector.security.keystore.type=JKS rhq.communications.connector.security.keystore.password=jonpassword rhq.communications.connector.security.keystore.key-password=jonpassword rhq.communications.connector.security.keystore.alias=myhost rhq.communications.connector.security.truststore.file=${jboss.server.config.dir}/all.truststore rhq.communications.connector.security.truststore.algorithm=SunX509 rhq.communications.connector.security.truststore.type=JKS rhq.communications.connector.security.truststore.password=jonpassword rhq.communications.connector.security.client-auth-mode=need Here the transport is now sslsocket. Because we are using sslsocket transport, you need to indicate which port to bind to (this is the port the server will listen to when receiving agent requests - in this example we used port 55555 but you can use any free port). The bind-address can still be left as-is, it will pick up a default for the server. You can specify a bind-address if you want to explicitly tell the server what to bind to. The transport-params can be any valid JBoss/Remoting transport parameters - this can be left blank. See Communications Configuration or the JBoss/Remoting documentation for more information on what you can do with these transport parameters. For the security settings, all have the same values as before except their names start with rhq.communications.connector.security, not rhq.server.tomcat.security. sslsocket RHQ Agent Settings Very little has to change from the previous instructions to get the agent to talk to the server over the sslsocket protocol. The only difference is that you have to tell the agent to use the sslsocket transport, what the new port is and what the server's transport parameters are: - RHQ Server Port: enter the new port that the RHQ Server will be listening to for agent requests. The name for this configuration preference is rhq.agent.server.bind-port. - RHQ Server Transport Protocol, enter sslsocket. The name for this configuration preference is rhq.agent.server.transport. - RHQ Server Transport Parameters, enter the RHQ Server's new transport parameters as defined by its rhq.communications.connector.transport-params setting. The name for this configuration preference is rhq.agent.server.transport-params. Troubleshooting Secure Communications Setup Correct configuration for secure communications can be difficult at times because problems in any of the following areas can halt successful communications between server and agent and frustrate the process: - certificate creation and deployment - groups of server and agent configuration entries that must be set correctly for good communication between both sides - the decision to use socket or servlet transports has performance and client side ramifications - understanding when the RHQ Agent and RHQ Server are acting as clients to each other (i.e. the sender) and when they are the "server" (i.e. 
the receiver) and which settings modify the sender or receiver

If you've already gone through the detailed instructions above for setting up secure RHQ communications but are still having difficulties it is helpful to double check with the following steps. For simplicity we'll assume that you have addressed the following fundamental setup instructions:

- Verify with keytool such that:
- all the certificates for client authorization are correctly configured with proper CN designations for each RHQ Server/Agent and all are in a keystore with a unique alias.
- the shared truststore file correctly includes all exported certificates with aliases.
- there are no issues resolving hostnames defined in all certificate CN designations. This means all CN hostnames must resolve between RHQ Server and RHQ Agent from either direction.
- passwords for keystores and truststores are correctly set.
- Assume that you should use the default secure socket protocol of TLS.

Example of relevant RHQ Server default settings (security turned off) in <rhq-server-install-dir>/bin/rhq-server.properties

# RHQ Server's remote endpoint for agents to talk to
# bind-address and bind-port are derived from the HA server definition,
# if you set the address/port here, they will override the HA server
# definition found in the database
rhq.communications.connector.transport=servlet
rhq.communications.connector.bind-address=
rhq.communications.connector.bind-port=
rhq.communications.connector.transport-params=/jboss-remoting-servlet-invoker/ServerInvokerServlet
...
rhq.server.tomcat.security.client-auth-mode=false
rhq.server.tomcat.security.secure-socket-protocol=TLS
rhq.server.tomcat.security.algorithm=SunX509
rhq.server.tomcat.security.keystore.alias=RHQ
rhq.server.tomcat.security.keystore.file=${jboss.server.config.dir}/rhq.keystore
rhq.server.tomcat.security.keystore.password=RHQManagement
rhq.server.tomcat.security.keystore.type=JKS
rhq.server.tomcat.security.truststore.file=${jboss.server.config.dir}/rhq.truststore
rhq.server.tomcat.security.truststore.password=RHQManagement
rhq.server.tomcat.security.truststore.type=JKS
# Server-side SSL Security Configuration (for incoming messages from agents)
# These are used when secure transports other than sslservlet are used
rhq.communications.connector.security.secure-socket-protocol=TLS
rhq.communications.connector.security.keystore.file=${jboss.server.config.dir}/rhq.keystore
rhq.communications.connector.security.keystore.algorithm=SunX509
rhq.communications.connector.security.keystore.type=JKS
rhq.communications.connector.security.keystore.password=RHQManagement
rhq.communications.connector.security.keystore.key-password=RHQManagement
rhq.communications.connector.security.keystore.alias=RHQ
rhq.communications.connector.security.truststore.file=${jboss.server.config.dir}/rhq.truststore
rhq.communications.connector.security.truststore.algorithm=SunX509
rhq.communications.connector.security.truststore.type=JKS
rhq.communications.connector.security.truststore.password=RHQManagement
rhq.communications.connector.security.client-auth-mode=none
# Client-side SSL Security Configuration (for outgoing messages to agents)
rhq.server.client.security.secure-socket-protocol=TLS
rhq.server.client.security.keystore.file=${jboss.server.config.dir}/rhq.keystore
rhq.server.client.security.keystore.algorithm=SunX509
rhq.server.client.security.keystore.type=JKS
rhq.server.client.security.keystore.password=RHQManagement
rhq.server.client.security.keystore.key-password=RHQManagement
rhq.server.client.security.keystore.alias=RHQ
rhq.server.client.security.truststore.file=${jboss.server.config.dir}/rhq.truststore
rhq.server.client.security.truststore.algorithm=SunX509
rhq.server.client.security.truststore.type=JKS
rhq.server.client.security.truststore.password=RHQManagement
rhq.server.client.security.server-auth-mode-enabled=false

Examples of relevant RHQ Agent default settings (security turned off) in <rhq-agent-install-dir>/conf/agent-configuration.xml

<entry key="rhq.agent.server.transport" value="servlet" />
<entry key="rhq.agent.server.bind-port" value="7080" />
<!-- <entry key="rhq.agent.server.bind-address" value="127.0.0.1" /> -->
<entry key="rhq.agent.server.transport-params" value="/jboss-remoting-servlet-invoker/ServerInvokerServlet" />
...
<entry key="rhq.communications.connector.transport" value="socket" />
<entry key="rhq.communications.connector.bind-port" value="16163" />
<!-- <entry key="rhq.communications.connector.bind-address" value="127.0.0.1" />
<entry key="rhq.communications.connector.transport-params" value="serverBindAddress=127.0.0.1& \
serverBindPort=16163&numAcceptThreads=3&maxPoolSize=303&clientMaxPoolSize=304& \
socketTimeout=60000&enableTcpNoDelay=true&backlog=200" />
<entry key="rhq.communications.connector.lease-period" value="5000" /> -->
...
<entry key="rhq.communications.connector.security.client-auth-mode" value="none" />
...
<entry key="rhq.agent.client.security.server-auth-mode-enabled" value="false" />

Unencrypted/unsecured Communication Setup

- RHQ Server (servlet) <-> RHQ Agent (socket)
- Default settings. No change.
- RHQ Server (socket) <-> RHQ Agent (socket)
- Does not use the performant Tomcat connector.
- Server:
rhq.communications.connector.transport=socket
rhq.communications.connector.bind-address=
rhq.communications.connector.bind-port=7800
rhq.communications.connector.transport-params=
- Agent:
<entry key="rhq.agent.server.transport" value="socket" />
<entry key="rhq.agent.server.bind-port" value="7800" />
<entry key="rhq.agent.server.transport-params" value="" />

SSL Communication (Encryption only) Setup

- RHQ Server (sslservlet) <-> RHQ Agent (sslsocket)
- Server:
rhq.communications.connector.transport=sslservlet
rhq.server.tomcat.security.client-auth-mode=false
rhq.server.client.security.server-auth-mode-enabled=false
- Agent:
<entry key="rhq.communications.connector.transport" value="sslsocket" />
<entry key="rhq.agent.server.transport" value="sslservlet" />
<entry key="rhq.agent.server.bind-port" value="7443" />
- RHQ Server (sslsocket) <-> RHQ Agent (sslsocket)
- This setup does not use the performant Tomcat connector.
- Server:
rhq.communications.connector.transport=sslsocket
rhq.communications.connector.bind-address=
rhq.communications.connector.bind-port=7800
rhq.communications.connector.transport-params=
rhq.server.tomcat.security.client-auth-mode=false
rhq.server.client.security.server-auth-mode-enabled=false
- Agent:
<entry key="rhq.communications.connector.transport" value="sslsocket" />
<entry key="rhq.agent.server.transport" value="sslsocket" />
<entry key="rhq.agent.server.bind-port" value="7800" />
<entry key="rhq.agent.server.transport-params" value="" />

SSL Communication (Encryption with Client Cert Authentication) Setup

- RHQ Server (sslservlet) <-> RHQ Agent (sslsocket)
- Additionally you will need to uncomment/update keystore, alias, and truststore values to match your certificate details in both server and agent configuration files.
- Will enable encryption with certificate authentication, but for browser clients as well. Use the following "RHQ Server (sslsocket) <-> RHQ Agent (sslsocket)" setup option below to avoid this restriction.
- Server:
rhq.communications.connector.transport=sslservlet
rhq.server.tomcat.security.client-auth-mode=true
rhq.server.client.security.server-auth-mode-enabled=true
- Agent:
<entry key="rhq.communications.connector.transport" value="sslsocket" />
<entry key="rhq.agent.server.transport" value="sslservlet" />
<entry key="rhq.agent.server.bind-port" value="7443" />
<entry key="rhq.communications.connector.security.client-auth-mode" value="need" />
<entry key="rhq.agent.client.security.server-auth-mode-enabled" value="true" />
- RHQ Server (sslsocket) <-> RHQ Agent (sslsocket)
- Additionally you will need to uncomment/update keystore, alias, and truststore values to match your certificate details in both server and agent configuration files.
- Server:
rhq.communications.connector.transport=sslsocket
rhq.communications.connector.bind-address=
rhq.communications.connector.bind-port=55555
rhq.communications.connector.transport-params=
rhq.communications.connector.security.client-auth-mode=need
rhq.server.client.security.server-auth-mode-enabled=true
- Agent:
<entry key="rhq.communications.connector.transport" value="sslsocket" />
<entry key="rhq.agent.server.transport" value="sslsocket" />
<entry key="rhq.agent.server.bind-port" value="55555" />
<entry key="rhq.agent.server.transport-params" value="" />
<entry key="rhq.communications.connector.security.client-auth-mode" value="need" />
<entry key="rhq.agent.client.security.server-auth-mode-enabled" value="true" />

Still need more SSL debugging information

If you have double checked all of your settings, certificates, and keystore passwords and you do not see any helpful log messages within the server or agent logs then you can additionally enable verbose SSL communication messaging within the agent to attempt to get more helpful communication information:

- Add or modify the RHQ_AGENT_ADDITIONAL_JAVA_OPTS environment variable in your <rhq-agent-install-dir>/bin/rhq-agent-env.[sh,bat] file and then restart the agent. This will turn on more java communication debugging, for example:

RHQ_AGENT_ADDITIONAL_JAVA_OPTS="-Djavax.net.debug=ssl"
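Beyond the JVM-side debugging above, a quick external probe can confirm that the endpoint the agent points at really answers TLS. The following Python sketch uses the example hostname and sslservlet port from this page; it only inspects the negotiated protocol and cipher, and it does not validate anything against your JKS truststore.

import socket
import ssl

# Probe the RHQ Server endpoint (example values: myhost.mycorp.com, port 7443).
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # we only want to see that TLS is spoken

with socket.create_connection(("myhost.mycorp.com", 7443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname="myhost.mycorp.com") as tls:
        print("Negotiated protocol:", tls.version())
        print("Cipher suite:", tls.cipher())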
https://docs.jboss.org/author/display/RHQ/Securing+Communications
2017-10-17T04:03:31
CC-MAIN-2017-43
1508187820700.4
[]
docs.jboss.org
Morphing Rules

Animate Pro follows a set of rules as it evaluates the shapes. Familiarize yourself with these basic morphing rules before you start morphing.

Closest Similar Shape

Pencil Line to Pencil Line

When two pencil lines cross one another, they are considered to be two lines and not four lines anymore (as it was in previous versions of the application). In this case, you must have two pencil lines in your destination drawing for your morphing to work correctly.

Fill Shape to Fill Shape

If you have a brush line or a colour fill zone, which are contour vectors, make sure that you morph it with another fill or brush line. Contour vectors will not morph with pencil lines (central vectors). A brush line can morph into a colour fill zone and vice versa.

Colour Swatch to Same Colour Swatch

Animate Pro does not morph colours. If you want to perform a colour transition, you have to create the effect at the compositing level. A colour palette is composed of colour swatches. Each colour swatch has its own unique identification number; even if two colour pots are the same shade of red, they are identified independently. A colour zone or shape will morph with another one painted with the same colour swatch.

Vanishing and Appearing

If a colour zone does not find a match in the first or the second drawing, it will progressively appear or disappear.

Colour Art to Colour Art and Line Art to Line Art
https://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/008_Morphing/004_H1_Morphing_Rules.html
2017-10-17T03:45:43
CC-MAIN-2017-43
1508187820700.4
[array(['../../../Resources/Images/HAR/Stage/Morphing/an_closestzone.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/an_penciltopencil.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/har9_pencilthick.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/har9_texturenotsupported.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/an_centreline.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/an_colorzonetocolourzone.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/an_colorpots.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Morphing/an_vanishrule.png', None], dtype=object) ]
docs.toonboom.com
The Javascript Client

The Pico Javascript client makes it simple to call your API from a web application. The client automatically generates proxy module objects and functions for your API so you can call your API functions just like any other library functions. All serialisation of arguments and deserialisation of responses is taken care of by the client so you can focus on your application logic. The only thing you need to be aware of is that all function calls are asynchronous, returning promises.

Basic Structure

The basic structure of a web app using the Pico Javascript client is as follows:

- Line 6: We include Pico's pico.js library inside the head of the document.
- Line 8: We load our example module definition as JavaScript in the head.
- Line 13: In a script element in the body we import our example module, assigning it to the example variable.
- Line 15: We use our hello function.
- Line 16-18: We assign a callback to the promise.

The order and position of these elements within the document is important. pico.js must always be loaded before the module is loaded and these both must be in the head of the document to ensure they have completed by the time they are used later.

Promises

The proxy functions generated by the client are asynchronous, meaning that they will not wait for the result before returning. This is due to the nature of how HTTP requests work in the browser. Instead they immediately return a promise which later resolves and calls a callback with the result as a parameter. The promise object has two methods of interest: then and catch.

var p = example.hello('world')
p.then(function(result){
    console.log(result)
})
p.catch(function(err){
    console.error(err)
})

The callback function passed to .then is called when the promise resolves successfully. If an error occurs then the function passed to .catch is called. If you don’t set a catch callback any errors are ignored. The error object passed to the catch callback contains a .message and a .code property which describe the exception that occurred on the Python side and the relevant HTTP status code.

API

Asynchronous functions

Each of these functions is asynchronous, so they return a Promise.

pico.loadAsync(module)

Load the Python module named module. The module proxy will be passed to the promise callback. Submodules may be loaded by using dotted notation e.g. module.sub_module.

Synchronous functions

pico.importModule(module)

Note the module definition must have been previously loaded using pico.loadAsync or by loading /<module_name>.js in a script tag in the head of the document.

pico.loadModuleDefinition(module_definition)

This function creates a proxy module from the given definition and stores it in the internal module registry for later import with pico.importModule. It also returns the proxy module directly. This function is called internally by the /<module_name>.js loading mechanism.
http://pico.readthedocs.io/en/latest/guide/clientjs.html
2017-10-17T04:07:58
CC-MAIN-2017-43
1508187820700.4
[]
pico.readthedocs.io
Post-upgrade considerations for inSync Private Cloud

Overview

This page provides information about inSync-related configuration and functionality that you must consider after upgrading to inSync Private Cloud (5.9.6) Elite or Enterprise edition.

inSync Ports

The following table lists each port that you use with inSync, its purpose, the default ports used in inSync 5.4/5.4.1, and the updated ports used in inSync 5.9.6.

Note:
- Existing inSync deployments that use port 6061 continue to work. You do not need to update your port configuration to port 443. If you want to use port 443 for your existing inSync deployments, contact Druva Support.
- If you customized any port in inSync 5.4/5.4.1, inSync 5.9.6 continues to use the customized ports.

Start the Edge Server on port 443

You must start the Edge Server on port 443 instead of port 6061.

To register the Edge Server on port 443:
- Log on to the inSync Management Console.
- Click > Settings.
- Under the Edge Servers tab, click Edit.
- On the Edit Settings window, in the Backup and sync port box, type 443.
- Click Save.

inSync URLs

inSync Master Management Console URL

The following table lists the inSync Master Management Console 5.9.6 URL mapping based on different customization scenarios.

inSync Web URL

The following table lists the inSync Web 5.9.6 URL mapping based on different customization scenarios.

inSync logs and configuration files (Windows only)

inSync now stores all log files and configuration files at the following location: C:\ProgramData\Druva\inSyncCloud

MySQL

inSync 5.9.6 uses your existing MySQL installation. You do not need to reinstall or configure MySQL after upgrading to inSync 5.9.6.

inSync Web user interface

inSync 5.9.6 comes with a redesigned inSync Web user interface. When users log on to inSync Web, the inSync Share page appears by default.

inSync Share folder location

With inSync 5.9.6, users can now select the location of their inSync Share folder. To use this feature, you must upgrade all installations of inSync Client to inSync Client 5.9.6.

inSyncConfig.ini for Mac

While installing inSync 5.9.6 clients on Mac laptops through IMD, you must specify the parameter values in the inSyncConfig.ini file in single quotes (') instead of double quotes ("); see the sketch at the end of this page. For more information, see Install inSync on Mac laptops.

inSync administrator roles

In inSync 5.9.6, we have introduced a granular set of administrative rights for inSync administrators. When you upgrade from your previous inSync version to inSync 5.9.6, profile administrators map to the new Profile Admin role. Additionally, the predefined rights assigned to the new Profile Admin role automatically apply to the upgraded profile administrator.

- For more information on the inSync administrator roles, see Predefined administrator roles.
- For more information on the predefined roles and rights, see Predefined roles and rights.
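As a hypothetical illustration of the single-quote rule for IMD on Mac described above, the snippet below uses placeholder parameter names and values that are not taken from the inSync documentation; only the quoting convention is the point.

; Hypothetical inSyncConfig.ini entries for Mac IMD -- parameter names and values are placeholders
SERVER_NAME = 'insync.example.com:443'
USER_EMAIL = 'jane.doe@example.com'
; Note the single quotes; double-quoted values, as in "insync.example.com:443", must not be used on Mac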
https://docs.druva.com/010_002_inSync_On-premise/inSync_On-Premise_5.9.6/020_Install_and_Upgrade/060_Upgrade_to_version_5.9.6/Post-upgrade_considerations_for_inSync_Private_Cloud
2017-10-17T04:03:53
CC-MAIN-2017-43
1508187820700.4
[array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) ]
docs.druva.com
All devices that you manage have what we call a lifecycle. Intune can help you manage this lifecycle, from enrollment, through configuration and protection, to retiring the device when it's no longer required:

Enroll

Today's mobile device management (MDM) strategies deal with a variety of phones, tablets, and PCs (iOS, Android, Windows, and Mac OS X). If you need to be able to manage the device, which is commonly the case for corporate-owned devices, the first step is to set up device enrollment (Classic portal). You can also manage Windows PCs by enrolling them with Intune (MDM) or by installing the Intune client software.

Configure

Getting your devices enrolled is just the first step. To take advantage of all that Intune offers and to ensure that your devices are secure and compliant with company standards, you can choose from a wide range of policies. These let you configure almost every aspect of how managed devices operate. For example, should users have a password on devices that have company data? You can require one. Do you have corporate Wi-Fi? You can automatically configure it. Here are the types of configuration options that are available:

- Device configuration (Classic portal). These policies let you configure the features and capabilities of the devices that you manage. For example, you could require the use of a password on Windows phones or disable the use of the camera on iPhones.
- Company resource access (Classic portal). Letting users access their work on their personal devices can present challenges. For example, how do you ensure that all devices that need to access company email are configured correctly? How can you ensure that users can access the company network with a VPN connection without having to know complex settings? Intune can help to reduce this burden by automatically configuring the devices that you manage to access common company resources.
- Windows PC management policies (with the Intune client software). While enrolling Windows PCs with Intune gives you the most device management capabilities, Intune continues to support managing Windows PCs with the Intune client software. If you need information about some of the tasks that you can perform with PCs, start here.

Protect

In the modern IT world, protecting devices from unauthorized access is one of the most important tasks that you'll perform. In addition to the items in the Configure step of the device lifecycle, Intune provides these capabilities that help protect the devices you manage from unauthorized access or malicious attacks:

- Multi-factor authentication. Adding an extra layer of authentication to user sign-ins can help make devices even more secure. Many devices support multi-factor authentication that requires a second level of authentication, such as a phone call or text message, before users can gain access.
- Windows Hello for Business settings (Classic portal). Windows Hello for Business is an alternative sign-in method that lets users sign in with a gesture, such as a fingerprint or facial recognition, instead of a password.
- Policies to protect Windows PCs (with the Intune client software). When you manage Windows PCs by using the Intune client software, policies are available that let you control settings for Endpoint Protection, software updates, and Windows Firewall on PCs that you manage.
Retire

When a device gets lost or stolen, when it needs to be replaced, or when users move to another position, it's usually time to retire or wipe (Classic portal) the device. There are a number of ways you can do this, including resetting the device, removing it from management, and wiping the corporate data on it.
https://docs.microsoft.com/en-us/intune/device-lifecycle
2017-10-17T04:14:10
CC-MAIN-2017-43
1508187820700.4
[array(['media/device-lifecycle.png', 'the Intune device lifecycle The device lifecycle'], dtype=object)]
docs.microsoft.com
Through the Transformer page, you can change the source that is used for your dataset. For example, when a new week of data arrives, you can:

- Open your wrangled dataset and recipe,
- Change the source to the new file, and
- Execute a job immediately to process the new week of data.

NOTE: A dataset source can be an imported dataset or a wrangled dataset. Subsequent changes to the source data affect your dataset in development. If you select a wrangled dataset, your dataset in development also receives any subsequent changes based on modifications of the recipe in the source dataset. For more information, see Imported vs Wrangled Dataset.

Notes and Limitations:

- If there are differences between the schemas of the original source and the new source, your recipe is likely to break when the new source is selected.
- You can swap your original source dataset with an imported dataset or a wrangled dataset. If needed, you can swap back to the original source at any time.

To change the source for a wrangled dataset:

- Click the dataset drop-down in the Transformer page:

Figure: Dataset Selector

- Click the bi-directional link between the dataset name and its source.
- Select the new source dataset:

NOTE: You can select imported datasets or wrangled datasets from any flow to which you have access. Changes to the source data or (if applicable) recipe are inherited. For more information, see Imported vs Wrangled Dataset.

Figure: Change Dataset Dialog

- Click Select.
- Your dataset is now using the selected dataset as its source, and the current recipe in the Transformer page is applied to the new source.
https://docs.trifacta.com/display/PE/Dataset+Browser
2017-10-17T03:50:32
CC-MAIN-2017-43
1508187820700.4
[array(['/download/resources/com.adaptavist.confluence.rate:rate/resources/themes/v2/gfx/loading_mini.gif', None], dtype=object) array(['/download/resources/com.adaptavist.confluence.rate:rate/resources/themes/v2/gfx/rater.gif', None], dtype=object) ]
docs.trifacta.com
Master User Account Privileges

When you create a new DB instance, the default master user that you use gets certain privileges for that DB instance. The following table shows the privileges that the master user gets for each of the database engines.

Note

If you accidentally delete the permissions for the master user, you can restore them by resetting the password for the account, for example with the AWS CLI as shown below.
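A minimal sketch of that recovery step with the AWS CLI follows; the DB instance identifier and the new password are placeholders that you would replace with your own values.

# Restore the master user's privileges by resetting the master password (placeholder values)
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --master-user-password 'MyNewStr0ngPassword' \
    --apply-immediately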
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.MasterAccounts.html
2017-10-17T04:16:37
CC-MAIN-2017-43
1508187820700.4
[]
docs.aws.amazon.com
Verbatims

The table below lists the raw comments from participants at the Berlin (Feb. 8 & 9) and San Francisco (Feb. 23) doc sprints, and tracks the bugs logged against these issues. Gathering this information, along with the survey, was the first effort toward gaining insight into the usability of the wiki for contributors. The findings are summarized here with a bias toward raising issues related to editing and contributing.

Summary

The verbatim information below is summarized here by category. Not all categories are summarized.

Design/Layout
- Broken, unimplemented features (Comments, Q&A) are confusing users.
- Users are unable to log in because the log-in elements do not appear on the home page.

Editing tasks
- Editing in MediaWiki and with SMW forms is not easy (this runs contrary to one of our basic assumptions).
- Session bugs are a serious impediment to productivity.
- Need a guide to creating examples.
- Need a process (and a flag) for reviewing contributions.
- Topics and topic clusters are confusing and rife with abuse.
- How-to documentation (WPD:*) is hard to find and difficult to follow.
- Markdown is preferred over MediaWiki markup.
- Flag design is scaring some users away from working on pages.

Forms
- Syntax section is too restrictive and not applicable to all cases.
- (Many other issues; see below.)

Getting Started
- Need to identify tasks according to role, domain expertise, and skill.
- Need tasks for designers, and others.
- Supplement the Getting Started pages with video.
- Need a more obvious path to getting started from the home pages.

Search
- Duplicate pages in search results, one for each instance of the search term (this is low-hanging; why is it still on the tree?)
- Can't search internal help pages.

Next steps

To help resolve these problems, cite the relevant bug from our project management system in the list below. If an issue requires a new bug, please create it.
https://docs.webplatform.org/wiki/WPD:Community/Survey/Verbatims
2015-06-30T05:16:50
CC-MAIN-2015-27
1435375091751.85
[]
docs.webplatform.org
Let's get started with Amazon Elastic Compute Cloud (Amazon EC2) by launching, connecting to, and using a Linux instance. We'll use the AWS Management Console, a point-and-click web-based interface, to launch and connect to a Linux instance.

Important

Before you begin, be sure that you've completed the steps in Setting Up with Amazon EC2.

The instance is an Amazon EBS-backed instance (meaning that the root volume is an Amazon EBS volume). We'll also create and attach an additional Amazon EBS volume. You can either specify the Availability Zone in which to launch your instance, or let us select an Availability Zone for you. When you launch your instance, you secure it by specifying a key pair and security group. (You created these when getting set up.) When you connect to your instance, you must specify the private key of the key pair that you specified when launching your instance.

To complete this exercise, perform the following tasks:

- Launch an Amazon EC2 Instance
- (Optional) Add a Volume to Your Instance
- Clean Up Your Instance and Volume

Related Topics

- If you'd prefer to launch a Windows instance, see this tutorial in the Amazon EC2 User Guide for Microsoft Windows Instances: Getting Started with Amazon EC2 Windows Instances.
- If you'd prefer to use the AWS CLI, see this tutorial in the AWS Command Line Interface User Guide: Using Amazon EC2 through the AWS CLI.
- If you'd prefer to use the Amazon EC2 CLI, see this tutorial in the Amazon EC2 Command Line Reference: Launching an Instance Using the Amazon EC2 CLI.
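The Related Topics above point to the full AWS CLI tutorial; as a quick taste of the same exercise from the command line, a minimal sketch is shown below. The AMI ID, key pair name, and security group ID are placeholders you would replace with your own values, and the SSH user name (ec2-user here) depends on the AMI you choose.

# Launch one Amazon EBS-backed Linux instance (all IDs and names are placeholders)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1

# After the instance is running, connect using the private key of the key pair you specified
ssh -i MyKeyPair.pem ec2-user@<public-DNS-of-your-instance>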
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html?r=8152
2015-06-30T05:15:23
CC-MAIN-2015-27
1435375091751.85
[]
docs.aws.amazon.com